This is a great introduction to the mess that is traffic signal controllers!
The reality is perhaps even worse than the article suggests. The majority of signal controllers support the NTCIP "standard" MIBs in addition to the "proprietary" MIBs that are provided through FreeTheMIBs. These "standard" MIBs are defined in standards like NTCIP 1202[1], which are freely available online through the NTCIP group.
These standard MIBs let you set/get all kinds of fun settings... put the lights into flash, change timing settings, set "preempts" to give yourself a green light, and more.
The standard also strongly suggests that all vendors use a default SNMP community name of "public". That means, for any traffic controller you happen to find on a network, you can almost certainly change tons of scary settings without even needing to _exploit_ anything!
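To make that concrete, here is a minimal sketch of what a single SNMPv1 read with the default "public" community could look like, using the pysnmp library (the exact import path varies a bit between pysnmp versions). The IP address is a placeholder and the OID is the generic sysDescr object, not an NTCIP 1202 signal setting; real get/set operations against a controller would use OIDs taken from the 1202 MIB.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    # SNMPv1 (mpModel=0) with the default community string "public".
    # 192.0.2.10 is a placeholder address; 1.3.6.1.2.1.1.1.0 is the
    # standard sysDescr.0 object, not an NTCIP-specific one.
    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=0),
        UdpTransportTarget(('192.0.2.10', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),
    ))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), '=', value.prettyPrint())

The point being: the community string is essentially the entire "authentication" story here, and if the write community is also left at a default, a set request is just as simple as this get.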
I've been working in the industry for quite some time, and it's genuinely scary how poorly secured some of this infrastructure is and how slowly things move when issues are found.
(Disclaimer: I work in the industry, not for any of the companies discussed in the article, and all these views are my own and not those of my employer)
[1]: https://www.ntcip.org/file/2019/07/NTCIP-1202v0328A.pdf
I'd be pretty interested in working on this kind of critical infrastructure. Any tips or pointers for an experienced SE/SWE on getting into your world?
I sort of accidentally stumbled into it when I joined an (at the time) startup as they were just getting into the market. So I don't know that I have anything specific to offer :)
I don't want to name names for companies in the industry, but you can find them in industry publications like Traffic Technology Today, or often as contributors to the standards documents like NTCIP 1202, ITE ATC 5301, etc.
I will say that there are a number of long-standing (40+ years) companies in the industry that seem to still operate the "legacy" way - slow iterations, very small software team, seemingly not much desire for large change. Basically, a hardware company that also happens to sell software.
There are also newer entrants to the market in the past ~decade or so that operate a lot closer to a modern software company - lots of new features coming out, fast-moving software teams, etc.
You sound like me. Stumbled into the industry at a startup (different than the one you're at -- you could probably guess which one) and have been around a while now. The condition of our traffic infrastructure is terrifying, frankly.
I was shocked when I learned that NTCIP was built on top of SNMPv1. To make matters worse, there are actually people in the industry against the adoption of SNMPv3, which would at least add a modicum of security via authentication and encryption. I'd prefer we build around another protocol entirely.
Imagine if folks at IBM knew we still used SDLC as the backbone of our communication in the cabinets...
> for any traffic controller you happen to find on a network
But how would one get on such "a network" in the first place? I assume it would involve physically opening a (hopefully locked) cabinet in public near the road? So just a bit of cutting/picking reveals an ethernet port, you drop in a wireless bridge, close it back up, and then hack from a parked car?
Well, the "locked" cabinet generally uses the same key everywhere in North America, which isn't a great start :)
A number of agencies put these controllers directly on the Internet (a search on Shodan for some telltale strings produces concerning numbers of hits).
Others will use one giant flat network across their entire city - so if you get access at one location, you have access to the entire network. This could mean accessing a "rural" or quiet location, but then actually attacking a much busier one.
Every “Genie” lift has the same key.
Most “Skyjack” lifts have the same key; there are maybe 3 iterations.
Tractors have a lot of similar, if not the same, keys.
RV handle locks (not padlocks) have about 8 different combinations - they are color coded. E.g., your RV has the purple or green key. Deadbolts are unique.
Every single RV storage lock is the same; if you have an RV, look at the storage lock and if it says “CH751”, well, now you know :)
I am aware of a municipality local to me that, as part of a franchise agreement for a new ISP entering the community, had the ISP run fiber to every traffic cabinet. They're connected back to the city network in a VLAN that's "behind the firewall". >sigh<
It was only 100Mbps service, per the agreement, but yeah... >smile<
They do have cameras at each intersection, as well as networked audio at many (for all the speakers hanging from light poles that blare annoying instrumental covers of old popular songs).
The issue is that legacy copper plant has a finite lifetime. Paper insulated lines are already mostly useless today. If you have to replace infrastructure you may as well select a more robust modern alternative.
Cameras are cheap these days, and with a decent fiber link, just install one for each crossing, feed the live streams back to the pig sty and whoops you suddenly have all you need for a comprehensive monitoring solution to track people. No matter if they're suspects or not.
The shit you saw on NCIS a decade ago and dismissed as "science fiction" is getting ever closer to reality.
"LAN" doesn't imply the same use of VLAN trunking or flat network architecture.
Traffic infra being on a VLAN behind the firewall implies a lot of trust in the traffic infra physical plant. You can harden against layer 2 vulnerabilities, but they're a whole 'nother can of worms and possible failure point.
It also implies the possibility of VLAN trunking being used inappropriately.
All the CCIEs I've learned from and trusted were very suspicious about extending the size and scope of LANs offsite through VLANs.
If we’re as serious about cybersecurity as all the noise that gets made about it indicates, we really need legal immunity for unsolicited responsible disclosure. You shouldn’t have any ability to beat someone with the CFAA who is trying to help you.
We do have that now, as of 2022! The Justice Department's new policy instructs prosecutors not to prosecute security researchers who acted in good faith for the public benefit and who avoided any harm to individuals or the public.
https://www.justice.gov/opa/pr/department-justice-announces-...
That checks off federal cases, but there are still options for a corporation under what are normally much stricter state laws in the USA. For instance, in Illinois you can get up to 5 years in prison for violating the ToS of a web site.
As well I think that would still leave you exposed for a civil suit, which even if you win, can be financially devastating. What would be needed is an anti-SLAPP type legislation at the state and federal level to mitigate it.
Anyone has the “ability” and freedom to make threats under the CFAA. Because there are no consequences for doing so. This particular company wouldn’t get the feds to prosecute this case.
Another annoying problem is that this company seems to think that their “policy” overrides first sale doctrine wrt their products: ‘we don’t know where or how you got that device, therefore CFAA violation threat.’
You may not get the feds to prosecute the case, but it's very possible for the feds to investigate you with varying levels of fervor.
If you're a well lawyered security researcher this is probably fine.
If you're some IT related person that does something else as your primary job this may or may not be fine if the FBI shows up and starts asking lots of questions about all kinds of things.
> If you're some IT related person that does something else as your primary job this may or may not be fine if the FBI shows up and starts asking lots of questions about all kinds of things.
This is exactly what I tell my coworkers who are getting into security. Keep your mouth shut about anything you find unless you have a reporting channel that leads to a "well lawyered" security company.
I've found vulnerabilities that I would have loved to disclose, but being a lowly IT generalist, I'm not going to stick my neck out. I can't imagine my employer would like the press.
I use one-off email addresses at my personal domain and historically warned companies that I was seeing spam to one-off addresses as possible indications of a data breach. By and large I was ignored, but occasionally I received a word of thanks. Even more occasionally I received notes of thanks that, in fact, I had uncovered a data breach.
Once, however, I received a nasty response insinuating that I'd breached their systems. The person I contacted didn't, apparently, understand what I was saying. They were confused that their company name was to the left of the "@" in my email address.
That was enough for me. I decided I was done reporting those events. Too much risk.
I sign up for a service using "123abc-theirdomain.com@mydomain.com" as the email address. Messages to that address come to my "Inbox". I don't use the address for anything else. I never send a message with that address.
Years pass.
I start receiving email solicitations for erectile dysfunction remedies and, oddly, woodworking plans (what is it with the spam for shed plans?) to that address.
Either my address was sold or a data breach occurred.
(It could have been my own data breached, but it seems unlikely, if that did happen, that the result would be me receiving spam only to that one specific address.)
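For anyone wanting to replicate the scheme described above, here's a minimal sketch of the tagging idea, assuming a catch-all mailbox on your own domain (the domain name and tag format are purely illustrative):

    import re
    import secrets
    from typing import Optional

    MY_DOMAIN = "mydomain.com"  # illustrative personal domain with a catch-all mailbox

    def address_for(service_domain: str) -> str:
        """Mint a one-off address like '1a2b3c-theirdomain.com@mydomain.com'."""
        tag = secrets.token_hex(3)
        return f"{tag}-{service_domain}@{MY_DOMAIN}"

    def leaked_by(recipient: str) -> Optional[str]:
        """Given the To: address of an incoming message, recover which service it was issued to."""
        m = re.fullmatch(rf"[0-9a-f]+-(.+)@{re.escape(MY_DOMAIN)}", recipient)
        return m.group(1) if m else None

    addr = address_for("example-shop.com")
    print(addr)              # e.g. 9f03ab-example-shop.com@mydomain.com
    print(leaked_by(addr))   # example-shop.com -> spam arriving here implicates that service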
Yup. I submit that this sort of threat poisons the well and makes security worse for everyone, including those making the threat (it's a bit of a commons problem: the worsening security is industry-wide, while for the threat-maker it seems to improve their own situation).
If I were a person interested in how these systems work, and maybe in making some money off my work in the area, this sort of threat - both its severity (potentially years of costly litigation and/or jail) and how frequently it happens (we seem to read of such incidents many times per year, which is only the tip of the iceberg) - would make me seriously question the straight white-hat path. Why not find the exploits and sell them on the dark web? One might even justify it with "they wouldn't listen anyway and it's their fault for releasing a system with such stupid vulns" and/or "they'll fix it only when they see real-world consequences, and if they don't, it doesn't matter". One's moral compass need not be very compromised to lean on such excuses.
There REALLY needs to be a Safe Harbor law with basic requirements: the work is documented, it is first revealed to the company's security department (perhaps citing the Safe Harbor law?), the researcher takes no action to disclose or release it publicly for 90 days, and perhaps a few other reasonable safeguards.
> Anyone has the “ability” and freedom to make threats under the CFAA.
Certainly. What I’m saying is that it should be cheap or free to neutralize their threat. There should be a lawyer-free portal where you can upload their threat letter and your responsible disclosure letter, and get some kind of legal order blessing your work that you can throw back at them.
>There should be a lawyer-free portal where you can upload their threat letter and your responsible disclosure letter, and get some kind of legal order blessing your work that you can throw back at them.
Who's going to check it to make sure that "your responsible disclosure letter" actually is a responsible disclosure letter and not just nonsense?
Sounds like the company realised they can't solve the issue in 90 days. I'm betting it's a combination of infrastructure-scale problems, terrible tech, old solutions they no longer build, no maintenance fees built in, and contractors who hate them. So they pulled the only lever they had left, which was the lawyers.
At the same time, RedThreat's email was kinda (maybe rightly) hostile. Read from the other side it's basically "You have 90 days to work with (/maybe pay) me before you start hearing your name on TikTok under the label 'wanna hack the city?'".
"Work with your team" leaves a ton of negotiating opportunity for a company that obviously does this for a living and expects to make money somewhere.
The 90 day window is an industry standard for zero-days, how the author worded it is neither here nor there. 90 days is ample time for even a half-functioning organization to address the issue in some way. I agree with Red Threat’s decision to not show their hand in the first email. The altruistic take on this is that they do not want the email to fall upon deaf ears (or even a bad actor within a company) and would prefer to have a channel of communication open with the security team before outlining the details.
The biggest problem when faced with a zero-day is that it's unknown who else knows about it. This helps the company's security team justify the work, due to the fire lit under the company to take action - especially if their corporate structure does not allow for more "elective" fixes.
90 days isn't really an industry standard. It's what Google unilaterally decided upon when they chose to staff up Project Zero and reflects their assumptions being, as they are, a company that grew up on the web. One of those assumptions is that you have fully automated test processes and the ability to remotely update any/all installs of your software within a week or so, which in turn implies that users aren't involved in the decision about whether to upgrade or not. It also assumes you can do this as often as you want. This is a strong set of assumptions that happens to be true for Google but isn't true of SCADA shops.
Even if they make a patched firmware, actually rolling it out would require a lot of work by their customers and of course maybe the same guy finds another security bug after 80 days and the whole thing starts again.
Given the unclear threat model here (how does one get access to the networks that these are attached to? could you just hotwire the lights themselves and bypass the controller?) it's also not really obvious how you'd classify reports. If there's a bypassable HTTP login page that's clearly an exploit but customers may not care if they trust the underlying VPNs/firewalls/air gaps. If there's unauthenticated SNMP access by design then is it even intended to be secure against malicious network access at all?
In many cases these devices will be reachable from the public internet, and in some of those cases it will be intentional. But is the security bug there on the controller or in the network setup that allows that access? It's probably easier to properly firewall off the controllers than continuously patch all the controller firmwares themselves, especially as the latter done wrong could easily enable hackers to perform a worse-than-CrowdStrike level takedown of all controllers simultaneously.
It's really not clear that the model that works well for web browsers will ever work well for infrastructure. We just saw an awesome demo of what can go wrong when rapidly hot-patching security updates into critical infrastructure computers goes wrong.
> could you just hotwire the lights themselves and bypass the controller?
Assume you're a bank robber... place a bug (e.g. a raspi with an lte dongle) in a cabinet somewhere, now you control all traffic lights in the city. Then when you do the heist, progressively turn each intersection you pass all-sides green, and dumbass drivers will do the rest and prevent the police from catching up with you.
At least here in Germany, IIRC there used to be a mandate for a "detect conflicting greens" hardware interlock - pretty simple, wire the green light powers to AND gates, and if they trigger, shut down the cabinet hard. Same for a red light burning out - measure the current on each red light power line, if it drops below a threshold, shut down everything else to avoid a driver not seeing any light and t-boning someone who legitimately has green.
But a system without such a hardware interlock will just happily do what is asked of it.
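As a rough illustration of what that interlock buys you, here's a toy software model of the two checks described above (conflicting greens and a dark red). The phase names and conflict table are made up, and a real interlock does this in hardware, independently of whatever the controller is commanded to do:

    # Toy model of the hardware interlock described above. A real unit does this
    # with discrete electronics; this only shows the logic.
    CONFLICTS = {("north_south", "east_west")}  # illustrative conflicting phase pairs

    def must_shut_down(greens, red_lamp_current):
        # Conflicting greens: two phases that must never be green together both are.
        for a, b in CONFLICTS:
            if a in greens and b in greens:
                return True
        # Dark red: a phase that should be showing red draws (almost) no current,
        # i.e. the red lamp is burned out. (Simplified: ignores yellow.)
        for phase, amps in red_lamp_current.items():
            if phase not in greens and amps < 0.1:
                return True
        return False

    # An attacker commanding all-green trips the interlock:
    print(must_shut_down({"north_south", "east_west"},
                         {"north_south": 0.0, "east_west": 0.0}))  # True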
> 90 days isn't really an industry standard. It's what Google unilaterally decided upon when they chose to staff up Project Zero
Every published vulnerability disclosure policy I've found in a quick look around has a 90 or 120 day timer involved somewhere. There may be some variation in exact details but there's not significant disagreement in the industry aside from those who don't want any firm time limits at all.
Also to be specific Google's policy is actually 90+30, you have 90 days to release a patch and as long as you do that details will be withheld an additional 30 days from the release of the patch. There is also an option for a 14 day grace period on the patch release if a vendor has been working in good faith and Google has reason to believe they will actually get it done in that time.
> and reflects their assumptions being, as they are, a company that grew up on the web. One of those assumptions is that you have fully automated test processes and the ability to remotely update any/all installs of your software within a week or so,
If you are competent software developers in 2024, you have fully automated test processes, and if your things are expected to be connected to the internet, you should have the ability to remotely update them. These are not things that anyone has any excuse not to understand. I am well aware that OT equipment vendors are often absolute horror shows from a software development standpoint; that's not an excuse for anything. If they can't do things right they deserve everything that happens to them, and their customers should hold them accountable for the inevitable result.
> which in turn implies that users aren't involved in the decision about whether to upgrade or not. It also assumes you can do this as often as you want. This is a strong set of assumptions that happens to be true for Google but isn't true of SCADA shops.
It's an assumption that has to be true of anything connected to the open internet. If a bad actor discovers the exploit and starts using it against exposed systems you won't have a choice but to patch it yesterday, at which point a 90+30 deadline will feel like all the time in the world.
The simple answer of course is if for whatever reason you can not patch the thing in such a timeframe it should never be connected to the internet and connections between internet-connected systems and the private network should be severely restricted and heavily monitored for unusual activity.
At that point you don't care about whether exploit details are public because you know every single person who could potentially implement it.
> Given the unclear threat model here (how does one get access to the networks that these are attached to?
The article also mentions this, but you answer your own question a paragraph later.
> In many cases these devices will be reachable from the public internet, and in some of those cases it will be intentional.
And again the fact is if it is on the internet it must be rapidly patchable. If short-notice patching is not viable for your use case then don't put it on the internet. Very simple, no exceptions.
> But is the security bug there on the controller or in the network setup that allows that access?
Yes. If the intentional access controls can be bypassed that's a bug in the controller, but if the OT device is accessible to the general internet in the first place that's a bug in the network setup.
> It's probably easier to properly firewall off the controllers than continuously patch all the controller firmwares themselves, especially as the latter done wrong could easily enable hackers to perform a worse-than-Crowdstrike level takedown of all controllers simultaneously.
Also yes.
> It's really not clear that the model that works well for web browsers will ever work well for infrastructure.
The model is "if it is exposed to the internet and there is a remotely exploitable vulnerability known it must be either patched or not exposed to the internet anymore". It doesn't matter what the thing is, be it browsers, infrastructure, medical, etc. Either take it off the internet or be ready and willing to patch it on short notice.
> We just saw an awesome demo of what can go wrong when rapidly hot-patching security updates into critical infrastructure computers goes wrong.
I'd argue that was more of a demo of what will go wrong when you don't have automated testing and why you should always have staged deployment when doing things at scale.
That said, I'll return to the same point, how much infrastructure that wasn't connected to the internet was affected? Every system that was affected was allowed to download software from the internet controlled by a third party.
This discussion reminds me of the recent discussion around Entrust where a lot of the excuses were around certs being used in places where they could not be easily rotated, which led to the obvious question of "what would these users have done in the event of a key compromise?" having no good answers. When you're using internet infrastructure you need to be able to move quickly from time to time.
Now that digital web properties are included in the ADA, there are law firms out there essentially doing the same thing. They are actively scanning the internet to find companies that have accessibility issues - primarily widgets or overlays. Then they're emailing them the issues and "allowing" them 90 days to correct the issues, otherwise they will be sued.
There's been like a 400% increase in these suits over the last two years because you can go after a large company and even if they fix one issue, you can find another issue and sue or threaten them on that one as well.
My co-workers think it's great because of the pressure to solve these issues that really do need to be fixed. But like in this instance, it's a fine line between doing something positive and extorting money for yourself or, in our case, a law firm.
Just what we need to tie up the cops so we can rollerblade into Grand Central Station and hack the Gibson to get the garbage file that will exonerate Joey!
The Part 2 article goes into a bit more detail, but the funniest thing is that they requested access to the SNMP MIBs of the controller and never got them.
> I requested MIBs from Q-Free but didn’t receive any follow-up after the request and I never received access to the MIBS, so it was back to square one.
SNMP stands for Simple Network Management Protocol, and it is a way to directly address not just individual hardware elements but specific functions or methods within a device, via a "simple" addressing scheme. A MIB file describes the various endpoints available on a device, much like a WSDL file would describe a SOAP endpoint.
So you might have an SNMP address (OID) like 2.1.4.3.0.1, which the MIB file would translate to "the current temp for CPU1".
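As a small, hedged example of that translation using pysnmp (which ships with compiled standard MIBs): walk the standard "system" subtree by symbolic name instead of raw numbers. A controller's vendor or NTCIP MIB files would be compiled and loaded the same way to get readable names for its own objects. The address below is a placeholder:

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, nextCmd,
    )

    # Walk the standard "system" subtree; the bundled SNMPv2-MIB lets pysnmp
    # turn symbolic names into numeric OIDs and pretty-print the replies.
    for error_indication, error_status, error_index, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData('public', mpModel=0),
            UdpTransportTarget(('192.0.2.10', 161)),
            ContextData(),
            ObjectType(ObjectIdentity('SNMPv2-MIB', 'system')),
            lexicographicMode=False):
        if error_indication:
            print(error_indication)
            break
        for var_bind in var_binds:
            print(' = '.join(x.prettyPrint() for x in var_bind))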
Don't know their specific process, but this sounds like "we got a bunch of submissions and yours didn't make the cut."
Honestly, rather than this being a nefarious "too dangerous even for defcon" like your wording suggests, I think the author knows why it didn't make the cut and snarkily addressed it:
> I’d love to write a long detailed blog about getting a root shell via UART or extracting the firmware via JTAG and then reversing it, but the honest truth is I found a vulnerability in the webapp in the first 15 minutes of having the unit online and it was the first thing I tried.
So my guess is it was so basic it just wasn't interesting enough. /shrug
I'm mostly familiar with North American traffic signal control, and in those traffic cabinets there is a device known as an "MMU" (Malfunction Management Unit) which acts as a safety monitor for the rest of the traffic cabinet.
That device will catch so-called "conflicts" (two conflicting directions green at the same time) and put the intersection into a fail-safe state (usually flashing red/yellow lights).
There are of course some edge cases where this is technically possible (as long as the cabinet door is open in CalTrans TEES cabinets, you can actually remove the MMU entirely and do whatever you want), and I'm not familiar with safety mechanisms used in other localities.
(Note: I work in the industry, not for any of the companies in this article, and my views are my own).
In the old timer-and-cam based systems I also believe this was electrically impossible. IIRC the green light in one direction was grounded through the green light in the crossing direction. So it was impossible for both of them to be on at the same time.
Fiction... indeed, science fiction. There was a short story (I believe in Analog) in the 1960s about this, later amplified by the author into a book.
But getting to your real point, about the use of an MMU safety monitor: I'm sure this works. But I confess, the first thing I thought about when I read that was CrowdStrike's explanation of their pre-release testing mechanism: running "validation checks" on the content, rather than running the actual software. Had they actually run their release, they would surely have detected the bug, since it apparently bricked every single Windows machine that downloaded it.
> I'm mostly familiar with North American traffic signal control, and in those traffic cabinets there is a device known as an "MMU" (Malfunction Management Unit) which acts as a safety monitor for the rest of the traffic cabinet.
Presumably the logic for this MMU could be implemented in strictly electrical components (relays or such). That would give me the most comfort, since its functionality would be literally hard-wired.
I worry that some enterprising manufacturer, out to save a few bucks, would implement this functionality in a microcontroller with firmware that could be updated remotely.
Does the standard specify the functionality of the MMU must be hard-wired, or at the very least not able to be changed without physical access?
The majority of MMUs on the market that I have had a close look at implement safety-critical functionality on a microcontroller with updatable firmware. Some can even be updated over IP. I haven't had the opportunity to dig into if those firmware upgrades are signed or otherwise integrity-protected.
The standard unfortunately does not specify a functional safety standard or other measures to ensure absolute safety.
In theory it would be possible to implement it in discrete logic (or an FPGA or other formally-verifiable process), but as far as I know no manufacturer has done so (I'd love to be wrong!)
Now you start to get into the differences between the various standards :)
In NEMA TS2 (and the more modern ITE ATC), the MMU does enforce a yellow clearance time - you need the light to turn yellow for a period of time before a conflicting phase goes green. Usually this is a few seconds. Changing phases rapidly would likely confuse drivers, but in _theory_ shouldn't cause a collision if people respect yellows.
(believe it or not, in some localities a "red clearance" time - all red - is not required and lights will go from yellow in one direction to green in another.)
In CalTrans TEES, I do not believe the standard calls for the MMU to enforce clearance times - the attack you describe would potentially be possible.
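As a toy illustration of the clearance-time idea above (all numbers are made up, not taken from NEMA TS2 or any agency's timing sheet), the monitor's rule is roughly:

    # Toy version of the yellow/red clearance rule described above: a conflicting
    # phase may only go green after the ending phase has shown yellow for at least
    # MIN_YELLOW seconds, plus any configured all-red interval.
    MIN_YELLOW = 3.0   # illustrative seconds of yellow clearance
    ALL_RED = 1.0      # illustrative; some agencies configure 0 (no red clearance)

    def conflicting_green_allowed(yellow_elapsed, all_red_elapsed):
        return yellow_elapsed >= MIN_YELLOW and all_red_elapsed >= ALL_RED

    print(conflicting_green_allowed(3.5, 1.0))  # True: clearance satisfied
    print(conflicting_green_allowed(0.5, 0.0))  # False: the MMU would flag this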
> (believe it or not, in some localities a "red clearance" time - all red - is not required and lights will go from yellow in one direction to green in another.)
This was definitely true in the past. I feel like the concept of a 'red clearance time' is something that only became common within the last 5-10 years. Do you think it has become (with rare exceptions) ubiquitous at this point?
I'd like to think it's become ubiquitous - it has been a while since I've seen a signal without a red clearance configured.
However, the Federal Highway Administration in the US (which sets guidelines, but most states define actual rules at the state level) still says in their Signal Timing Manual [1]
> The use of a red clearance interval is optional, and there is no consensus on its application or duration. [...] there may not be safety benefits associated with increased red clearance intervals.
and goes on to describe how it has negative traffic flow implications.
so I suspect at least some agencies out there still are not using them.
I feel like my area (where I've lived my whole life) does not have a red clearance interval. It's not something I've paid attention to before.
I'm sure it's proper driving technique but I feel it's ingrained in my head to give a couple seconds when a red turns into a green for all cross traffic to finish / anyone who runs the light. It's a common thing around me and I don't think it would be happening as much if an all-red period was implemented.
I moved up to Oregon for a few years during/after the pandemic. I can say from experience that the entire state consistently did not have red clearance times at least up to when I left in late 2023.
No. There is a thing called a "conflict monitor." It uses a hard-wired control board. This board has solderable links on it. When you connect a link you're describing two phases which CANNOT be green at the same time. You then slide the board into the monitor.
If the monitor sees current on the green lights of two opposing phases at the same time, or any of several other fault conditions, it will trip and "throw the intersection into flash" as a roadway protection mechanism. At this point none of the controllers or other logic is capable of controlling the intersection, and it stays in flash until someone comes and manually resets it.
Here's an example of what one of those cards looks like. Wonderfully old school and absolutely nothing active about it.
I remember back in the day some guy was selling the strobe light devices that emergency vehicles use on some nefarious site, and it came with a warning that it's a felony to even possess one of these and you should only use it for "research" purposes, winkwink.
The funny part of his ad was him saying it's still dangerous AF to just switch a light to green so quickly - it has the very real potential to cause accidents, which, back in the early aughts when this was online, I could totally believe.
You don't need a light to change to green in all directions: using this, thinking you're good to go, and blowing through the intersection while someone is trying to beat a yellow light in the opposite direction could be disastrous.
I would hope they still use physical interlocks instead of implementing it in software, even though it’s semiconductors instead of relays controlling stoplights these days.
The legal threat letter from the vendor is among the most insane examples I've read. They only consider something a valid vulnerability if the reporter can demonstrate they obtained the equipment through a legitimate recorded sale? What on earth does that have to do with the existence of a vulnerability?
This blog post is dated 5 days prior to the Intelligent Transportation Society of America publishing its Cybersecurity and Transportation Safety Issue Brief.
The original author of the blog post was invited to speak at an ITSA.ORG conference and to present through the eyes of an attacker - hence the perspective he takes.
There is nothing untoward in his observations but I can see why DefCon might hold off on letting him present his findings.
The ITSA is based in Washington D.C. and has a fairly large membership consisting of state DOTs (primarily western U.S.), tech companies, car companies, engineering design companies, consulting firms, etc.
Their vision is a better future transformed by transportation technology and innovation. Safer. Greener. Smarter. For all.
A lot of automation is factored into that vision including the use of autonomous vehicles, high-speed inter-connected systems, their attendant technologies, and of course, cyber-security.
Personally, I'm dismayed the U.S. is only now awarding grants for these studies. Maybe the whole thing got sidetracked when our focus shifted to COVID, I don't know. But it does seem as though we're behind the private and governmental initiatives going on in Asia.
It has been years since I have seen a fire truck get a green light through preemption. It probably varies by jurisdiction, but around here the preemption just causes the entire intersection to go red. Then the emergency vehicle can navigate across carefully.
In addition to being basically useless as a trick these days, it is also trivially easy to detect an IR strobe being used, and the penalty can be hefty.
Fun fact: In some places, there are strobes on the signal that activate when it has been preempted, to let everyone know an emergency vehicle is coming through.
Yeah near me (mid-sized metro area) they just hit the intersection with lights and sirens and creep forward until they’re sure they won’t get t-boned. Light sequence never changes.
Security people act like it's their duty to expose every vulnerability, and that companies are negligent if they don't harden themselves against all attack vectors, while they themselves are responsible for a good part of the danger.
Out in meatspace, I don't wander around picking random people's locks, making smug posts about how vulnerable their houses are (along with their address). Nobody would be happy about that, no matter what color hat I have on.
Security is a twin-engine racket, based on the pillars of
Could you help me understand what you are suggesting is done instead?
To me, it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly that I must be missing some alternative that you're thinking about.
> it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage
That's how security mostly works in meatspace, yes.
In the specific case of internet connected software the industry has a lot of experience saying that if something is exploitable then someone will come along and exploit, so we don't normally need to see an example of it happening in the real world first. It's sufficient to assume that if you get popular enough, a professional blackhat will find your bugs and exploit them. It's also reasonable to assume that the cost of a fix is low and the cost of change in the field is also low.
Outside that context the threat models are usually unclear and refined through experience. If you notice someone cut through a wire fence to steal some equipment from a cell tower maybe you build a wall around it instead. But if nobody is stealing anything there's no point in pre-emptively trying to guess that it might happen and building lots of walls because that might just be a waste of resources (perhaps there's no market for stolen tower equipment, so protecting it better would be a waste of resources).
> vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly
Well, that's exactly how it tends to work for housing, so I think GP's point is that if it works there it should work here. However, I disagree because the stakes are so different (harming a single family who are free to harden however they like, versus harming the general public who are at the mercy of whatever hardening is done for them).
You're presenting this as if it's a new idea, but the security industry tried the above (for the majority of the time that "computer security" has been a thing) and... it didn't work! That's the whole reason public disclosure came about in the first place -- there's quite a rich history there if you're interested.
Some other thoughts:
>You let the manufacturer know, and you let them decide for the next steps.
Which, as history has proven, generally means sweeping it under the rug, where it's forgotten until it's exploited by a bad actor.
>it's not your business
But, what about when it is? On-topic: I drive a car, so I care about vulnerabilities in traffic lights, as they may directly affect me. It's also my business if my personal data is stolen, or my identity, or corporate data, etc.
>You helped: no lawyers, no problems.
No problems... Until the vulnerability is exploited and it causes me a problem.
> No ultimatum to threaten to disclose to the public or to ruin their reputation, it's not your business.
I found an authentication bypass in a door card access controller. Per the installer I was working with the units are regularly exposed directly to the Internet. (Heck, the installer was trying to cajole my Customer into doing it for "remote support" reasons.)
Given that there's an impact to the public-- albeit not necessarily directly safety-related-- I think this kind of vulnerability is still "my business".
If I owned one of these controllers and it was "protecting" my property I'd want to know.
(Fun aside: The installer went so far as to suggest that because their other Customers expose these units to the Internet-- particularly a small bank who is "audited" for "security"-- it would be okay if my Customer did it. Needless to say, my Customer did not. I let my Customer know about the auth. bypass and we kept the unit locked down in a VLAN w/ a restrictive ACL, but I never publicly disclosed... too afraid of hostile response from the vendor. Eventually a researcher did find it and disclose it publicly, at least...)
I think a better analogy is wandering around noting what locks random people's houses use, buying your own, breaking your own, informing the manufacturer of the flaw, and then informing the owners of those houses.
AKA what criminals already do, except the criminals actually break into the random people's homes and steal their stuff.
> Out in meatspace, I don't wander around picking random people's locks
Sure, because the chances of getting punched out, shot, or reported to the cops is significantly higher. Given how trivial it is to quietly attack across the network, I think the analogy with meatspace makes little sense.
Also, breaking into a single person's house... maybe they don't want to lock their doors. It affects nobody but them.
These systems affect lots of people, it's a public safety issue, and there is a company being paid money by the public to ensure that their systems are safe and secure. They should be tested by everybody and anybody who wants to test them. Especially if they are running on a publicly accessible IP address.
Also if you want to test the locks on someone's house, you don't go to their house. You buy the locks that they are using, and test them quietly in your own location.
> Also if you want to test the locks on someone's house, you don't go to their house. You buy the locks that they are using, and test them quietly in your own location.
This is a pretty great point, because it's exactly what the guy in the article did to find the vulnerability in the traffic controllers.
I assume that any piece of technologically backed infrastructure is a potential target for state-level actors. If rando security researcher finds the vuln in 15 minutes, I guarantee China already has it.
Anyone operating infrastructure hardware is negligent if they won't take basic measures to harden it against disclosed threats.
I’m not worried about malfeasant citizens mucking with the traffic lights, there are simpler ways to make mayhem. But in the event of a war, you can bet every unpatched vulnerability in your infrastructure will be used against your country.
When you work for a state-actor it's no different than anywhere else; you don't have exploits coming out of the sky.
You either research these vulnerabilities (and you have limited capacity and knowledge) or you purchase vulnerabilities from vendors, exchanges and "research companies".
In such a case, it was an unnecessary free gift to an enemy state or a malicious actor.
The same with NSA, they do not know all the vulns of the universe (due to budget, resources, or simply focus).
You may actually have some vulns they are interested in, but unless someone points those vulns out to them, they will not be aware that they exist.
Company makes HW that can potentially harm people, if someone logs in to it remotely and turns all lights green at once. It's possible to find the vulnerability in 15 minutes of getting remote access to the device without any prior knowledge, that gives admin access to the HW. Company rejects the report based on flmisy reasons via a lawyer, threatening with a felony prosecution.
But the person finding the vulnerability and notifying the company is the smug one. :)
There are electrical junction boxes all over my neighborhood, that direct power to the stoplights and residential buildings (?). They have a simple padlock, and could be opened in 30 seconds with a lockpick or bolt cutters.
Nobody tries! Not even to test it out! The question isn't "How easy is it to break in?", but rather "Should I be tampering with this?"
Your analogy breaks down immediately because the Internet isn't your neighborhood, it's effectively everyone's neighborhood, including the state-level bad actors mentioned above.
If someone could access those electrical junction boxes from China or North Korea, I'd want the locals finding the vulnerabilities first.
Your analogy is flawed. This isn't testing someone's house. This is buying the locks that are on people's houses and testing them in your own location. Which people should absolutely be doing.
> Out in meatspace, I don't wander around picking random people's locks, making smug posts about how vulnerable their houses are (along with their address). Nobody would be happy about that, no matter what color hat I have on.
Yeah, but in YouTube space, there are, e.g., lawyers who are into lockpicking who post smug videos showing how vulnerable various manufacturers' locks really are to common lockpicking techniques. Apparently 4.5M subscribers are quite happy being informed what the state of lock security is out there.
This is a great introduction to the mess that is traffic signal controllers!
The reality is perhaps even worse than the article suggests. The majority of signal controllers support the NTCIP "standard" MIBs in addition to the "proprietary" MIBs that are provided through FreeTheMIBs. These "standard" MIBs are defined in standards like NTCIP 1202[1], which are freely available online through the NTCIP group.
These standard MIBs let you set/get all kinds of fun settings... put the lights into flash, change timing settings, set "preempts" to give yourself a green light, and more.
The standard also strongly suggests that all vendors use a default SNMP community name of "public". That means, for any traffic controller you happen to find on a network, you can almost certainly change tons of scary settings without even needing to _exploit_ anything!
I've been working in the industry for quite some time, and it's genuinely scary how poorly secured some of this infrastructure is and how slowly things move when issues are found.
(Disclaimer: I work in the industry, not for any of the companies discussed in the article, and all these views are my own and not those of my employer)
[1]: https://www.ntcip.org/file/2019/07/NTCIP-1202v0328A.pdf
I'd be pretty interested in working on this kind of critical infrastructure. Any tips or pointers for an experienced SE/SWE on getting into your world?
I sort of accidentally stumbled into it when I joined an (at the time) startup as they were just getting into the market. So I don't know that I have anything specific to offer :)
I don't want to name names for companies in the industry, but you can find them in industry publications like Traffic Technology Today, or often as contributors to the standards documents like NTCIP 1202, ITE ATC 5301, etc.
I will say that there are a number of long-standing (40+ years) companies in the industry that seem to still operate the "legacy" way - slow iterations, very small software team, seemingly not much desire for large change. Basically, a hardware company that also happens to sell software.
There are also newer entrants to the market in the past ~decade or so that operate a lot closer to a modern software company - lots of new features coming out, fast-moving software teams, etc.
(again, all opinions are my own here.)
You sound like me. Stumbled into the industry at a startup (different than the one you're at -- you could probably guess which one) and have been around a while now. The condition of our traffic infrastructure is terrifying, frankly.
I was shocked when I learned that NTCIP was built on top of SNMPv1. To make matters worse, there are actually people in the industry against the adoption SNMPv3. That would at least adds a modicum of security via authentication and encryption. I'd prefer we build around another protocol entirely.
Imagine if folks at IBM knew we still used SDLC as the backbone of our communication in the cabinets...
> for any traffic controller you happen to find on a network
But how would one get on such "a network" in the first place? I assume it would involve physically opening a (hopefully locked) cabinet in public near the road? So just a bit of cutting/picking reveals an ethernet port, you drop in a wireless bridge, close it back up, and then hack from a parked car?
Well, the "locked" cabinet generally uses the same key everywhere in North America, which isn't a great start :)
A number of agencies put these controllers directly on the Internet (a search on Shodan for some telltale strings produces concerning numbers of hits).
Others will use one giant flat network across their entire city - so if you get access at once location, you have access to the entire network. This could mean accessing a "rural" or quiet location, but then actually attacking a much busier one.
Every “genie” lift has the same key Most “skyjacks” have the same key, there are maybe 3 iterations. Tractors have a lot of similar if not the same keys RV handle locks (not padlocks) have about 8 different combinations - they are color coded. Eg your RV has the purple or green key. Dead bolts are unique Every single RV storage lock is the same, if you have an RV look at the storage lock and if it says “ CH751 “ , well now you know :)
I am aware of a municipality local to me that, as part of a franchise agreement for a new ISP entering the community, had the ISP run fiber to every traffic cabinet. They're connected back to the city network in a VLAN that's "behind the firewall". >sigh<
Because of course a controller for a traffic light needs gigabit fiber internet connectivity....
That’s not the scary thing here. Better to future-proof it.
Running presumably unencrypted SNMP over shared lines is sketchy.
Well to be fair a number of traffic lights now have cameras to monitor the intersection as well. Didn't consider that.
It was only 100Mbps service, per the agreement, but yeah... >smile<
They do have cameras at each intersection, as well as networked audio at many (for all the speakers hanging from light poles that blare annoying instrumental covers of old popular songs).
The issue is that legacy copper plant has a finite lifetime. Paper insulated lines are already mostly useless today. If you have to replace infrastructure you may as well select a more robust modern alternative.
Cameras are cheap these days, and with a decent fiber link, just install one for each crossing, feed the live streams back to the pig sty and whoops you suddenly have all you need for a comprehensive monitoring solution to track people. No matter if they're suspects or not.
The shit you saw on NCIS a decade ago and dismissed as "science fiction" is getting ever more to reality.
Interesting, but I think the VLAN in your explanation is equivalent to the "network" I'm asking about. The V is mostly immaterial, I think.
The VLAN part is important.
"LAN" doesn't imply the same use of VLAN trunking or flat network architecture.
Traffic infra being on a VLAN behind the firewall implies a lot of trust in the traffic infra physical plant. You can harden against layer 2 vulnerabilities, but they're a whole 'nother can of worms and possible failure point.
It also implies the possibility of VLAN trunking being used inappropriately.
All the CCIEs I've learned from and trusted were very suspicious about extending the size and scope of LANs offsite through VLANs.
If we’re as serious about cybersecurity as all the noise that gets made about it indicates, we really need legal immunity for unsolicited responsible disclosure. You shouldn’t have any ability to beat someone with the CFAA who is trying to help you.
We do have that now, as of 2022! The new Justice Department policy now instructs prosecutors not to prosecute security researchers who acted in good faith for the public benefit and who avoided any harm to individuals or the public.
https://www.justice.gov/opa/pr/department-justice-announces-...
That checks off federal cases, but there are still options for a corporation under what are normally much stricter state laws in the USA. For instance, in Illinois you can get up to 5 years in prison for violating the ToS of a web site.
As well I think that would still leave you exposed for a civil suit, which even if you win, can be financially devastating. What would be needed is an anti-SLAPP type legislation at the state and federal level to mitigate it.
Policy can change, retrospectively even
Anyone has the “ability” and freedom to make threats under the CFAA. Because there are no consequences for doing so. This particular company wouldn’t get the feds to prosecute this case.
Another annoying problem is that this company seems to think that their “policy” overrides first sale doctrine wrt their products: ‘we don’t know where or how you got that device, therefore CFAA violation threat.’
You may not get the feds to prosecute the case, but it's very possible for the feds to investigate you with varying levels of fervor.
If you're a well lawyered security researcher this is probably fine.
If you're some IT related person that does something else as your primary job this may or may not be fine if the FBI shows up and starts asking lots of questions about all kinds of things.
> If you're some IT related person that does something else as your primary job this may or may not be fine if the FBI shows up and starts asking lots of questions about all kinds of things.
This is exactly what I tell my coworkers who are getting into security. Keep your mouth shut about anything you find unless you have a reporting channel that leads to a "well lawyered" security company.
I've found vulnerabilities that I would have loved to disclose, but being a lowly IT generalist, I'm not going to stick my neck out. I can't imagine my employer would like the press.
I use one-off email addresses at my personal domain and historically warned companies that I was seeing spam to one-off addresses as possible indications of a data breach. By and large I was ignored, but occasionally I received a word of thanks. Even more occasionally I received notes of thanks that, in fact, I had uncovered a data breach.
Once, however, I received a nasty response insinuating that I'd breached their systems. The person I contacted didn't, apparently, understand what I was saying. They were confused that their company name was to the left of the "@" in my email address.
That was enough for me. I decided I was done reporting those events. Too much risk.
> I was seeing spam to one-off addresses as possible indications of a data breach
For the uninformed, how does spam to an address on your domain indicate a breach of someone else's system?
I sign-up for a service using "123abc-theirdomain.com@mydomain.com" as the email address. Messages to that address come to my "Inbox". I don't use the address for anything else. I never send a message with that address.
Years pass.
I start receiving email solicitations for erectile dysfunction remedies and, oddly, woodworking plans (what is it with the spam for shed plans?) to that address.
Either my address was sold or a data breach occurred.
(It could have been my own data breached, but it seems unlikely, if that did happen, that the result would be me receiving spam only to that one specific address.)
Yup. I submit that this sort of threat poisons the well and makes security worse for everyone, including those making the threat (but it's a bit of a commons problem because the worsening security is industry-wide, but for the threat-maker it seems to improve their situation).
If I am a person interested in how these systems work, and maybe making some money off my work in the area, this sort of threat, both its severity (potentially years of costly litigation and or/jail) and how frequently it happens (seems we read of such incidents many times per year, which is only the tip of the iceberg) would make someone seriously question the straight white-hat path. Why not find the exploits and sell them on the dark web? One might even justify it with "they wouldn't listen anyway and it's their fault for releasing a system with such stupid vulns." and/or "they'll fix it only when they see real-world consequences and if they don't it doesn't matter". One's moral compass need not be very compromised to lean on such excuses.
There REALLY needs to be a Safe Harbor law with basic requirements that the work is documented, first revealed to the company security dept (perhaps citing the Safe Harbor law?), no action taken by the researcher to allow it to be disclosed or released publicly for 90 days, and perhaps a few other reasonable safeguards.
and how often it happens
> Anyone has the “ability” and freedom to make threats under the CFAA.
Certainly. What I’m saying is that it should be cheap or free to neutralize their threat. There should be a lawyer-free portal where you can upload their threat letter and your responsible disclosure letter, and get some kind of legal order blessing your work that you can throw back at them.
>There should be a lawyer-free portal where you can upload their threat letter and your responsible disclosure letter, and get some kind of legal order blessing your work that you can throw back at them.
Who's going to check it to make sure that "your responsible disclosure letter" actually is a responsible disclosure letter and not just nonsense?
The firm funded by fines for making unfounded threats with scary lawyer letters.
Sounds like the company realised they can't solve the issue in 90 days. Betting a combination of infrastructure scale problems, terrible tech, no-longer-building old solutions, no maintenance fee's built in and contractors who hate them. So they pulled the only lever they had left, which was the lawyers.
Same time, RedThreat's email was kinda (maybe rightly) hostile. Read from the other side it's basically "You have 90 days to work (/maybe pay) me before you start hearing your name on TickTok under the label 'wanna hack the city?'".
"Work with your team" leaves a ton of negotiating opportunity for a company that obviously does this for a living and expects to make money somewhere.
The 90 day window is an industry standard for zero-days, how the author worded it is neither here nor there. 90 days is ample time for even a half-functioning organization to address the issue in some way. I agree with Red Threat’s decision to not show their hand in the first email. The altruistic take on this is that they do not want the email to fall upon deaf ears (or even a bad actor within a company) and would prefer to have a channel of communication open with the security team before outlining the details.
The biggest problem when faced with a zero-day is that it’s unknown who else knows about it. This helps the the company’s security team justify the work due to the fire lit under the company to take action - especially if their corporate structure does not allow for more “elective” fixes.
90 days isn't really an industry standard. It's what Google unilaterally decided upon when they chose to staff up Project Zero and reflects their assumptions being, as they are, a company that grew up on the web. One of those assumptions is that you have fully automated test processes and the ability to remotely update any/all installs of your software within a week or so, which in turn implies that users aren't involved in the decision about whether to upgrade or not. It also assumes you can do this as often as you want. This is a strong set of assumptions that happens to be true for Google but isn't true of SCADA shops.
Even if they make a patched firmware, actually rolling it out would require a lot of work by their customers and of course maybe the same guy finds another security bug after 80 days and the whole thing starts again.
Given the unclear threat model here (how does one get access to the networks that these are attached to? could you just hotwire the lights themselves and bypass the controller?) it's also not really obvious how you'd classify reports. If there's a bypassable HTTP login page that's clearly an exploit but customers may not care if they trust the underlying VPNs/firewalls/air gaps. If there's unauthenticated SNMP access by design then is it even intended to be secure against malicious network access at all?
In many cases these devices will be reachable from the public internet, and in some of those cases it will be intentional. But is the security bug there on the controller or in the network setup that allows that access? It's probably easier to properly firewall off the controllers than continuously patch all the controller firmwares themselves, especially as the latter done wrong could easily enable hackers to perform a worse-than-Crowdstrike level takedown of all controllers simultaneously.
It's really not clear that the model that works well for web browsers will ever work well for infrastructure. We just saw an awesome demo of what happens when rapidly hot-patching security updates into critical infrastructure computers goes wrong.
> could you just hotwire the lights themselves and bypass the controller?
Assume you're a bank robber... place a bug (e.g. a raspi with an LTE dongle) in a cabinet somewhere, and now you control all traffic lights in the city. Then when you do the heist, progressively turn each intersection you pass all-sides green, and dumbass drivers will do the rest and prevent the police from catching up with you.
At least here in Germany, IIRC there used to be a mandate for a "detect conflicting greens" hardware interlock - pretty simple: wire the green light powers to AND gates, and if one triggers, shut down the cabinet hard. Same for a red light burning out - measure the current on each red light power line, and if it drops below a threshold, shut down everything else to avoid a driver not seeing any light and t-boning someone who legitimately has green.
But a system without such a hardware interlock will just happily do what is asked of it.
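For anyone curious what that interlock logic boils down to, here's a rough sketch in Python, purely as an illustration - the approach names, the single conflict pair, and the current threshold are all made up, not taken from any real MMU or German interlock spec:

    # Purely illustrative model of the interlock described above; the approach
    # names, conflict pair, and current threshold are made-up examples.

    CONFLICTING = [("NS", "EW")]   # approaches whose greens must never be on together
    RED_CURRENT_MIN_A = 0.5        # below this on a red that should be lit, assume the lamp is out

    def interlock_tripped(green_on, red_current_a):
        # "AND gate" check: two conflicting greens lit at once -> trip
        for a, b in CONFLICTING:
            if green_on[a] and green_on[b]:
                return True
        # Red supervision: any approach being held (not green here; yellow ignored
        # for simplicity) must actually be drawing current through its red lamp.
        for approach, lit in green_on.items():
            if not lit and red_current_a[approach] < RED_CURRENT_MIN_A:
                return True
        return False

    # Example: NS is green, but the EW red lamp has burned out -> trip into flash.
    print(interlock_tripped({"NS": True, "EW": False}, {"NS": 0.0, "EW": 0.0}))  # True

The key design point is that the trip decision looks only at what the lamps are actually doing, not at what the controller thinks it commanded.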
> 90 days isn't really an industry standard. It's what Google unilaterally decided upon when they chose to staff up Project Zero
Every published vulnerability disclosure policy I've found in a quick look around has a 90 or 120 day timer involved somewhere. There may be some variation in exact details but there's not significant disagreement in the industry aside from those who don't want any firm time limits at all.
Also to be specific Google's policy is actually 90+30, you have 90 days to release a patch and as long as you do that details will be withheld an additional 30 days from the release of the patch. There is also an option for a 14 day grace period on the patch release if a vendor has been working in good faith and Google has reason to believe they will actually get it done in that time.
> and reflects their assumptions being, as they are, a company that grew up on the web. One of those assumptions is that you have fully automated test processes and the ability to remotely update any/all installs of your software within a week or so,
If you are competent software developers in 2024 you have fully automated test processes and if your things are expected to be connected to the internet you should have the ability to remotely update them. These are not things that anyone has any excuse to not understand. I am well aware that OT equipment vendors are often absolute horror shows from a software development standpoint, that's not an excuse for anything. If they can't do things right they deserve everything that happens to them and their customers should hold them accountable for the inevitable result.
> which in turn implies that users aren't involved in the decision about whether to upgrade or not. It also assumes you can do this as often as you want. This is a strong set of assumptions that happens to be true for Google but isn't true of SCADA shops.
It's an assumption that has to be true of anything connected to the open internet. If a bad actor discovers the exploit and starts using it against exposed systems you won't have a choice but to patch it yesterday, at which point a 90+30 deadline will feel like all the time in the world.
The simple answer, of course, is that if for whatever reason you cannot patch the thing in such a timeframe, it should never be connected to the internet, and connections between internet-connected systems and the private network should be severely restricted and heavily monitored for unusual activity.
At that point you don't care about whether exploit details are public because you know every single person who could potentially implement it.
> Given the unclear threat model here (how does one get access to the networks that these are attached to?
The article also mentions this, but you answer your own question a paragraph later.
> In many cases these devices will be reachable from the public internet, and in some of those cases it will be intentional.
And again the fact is if it is on the internet it must be rapidly patchable. If short-notice patching is not viable for your use case then don't put it on the internet. Very simple, no exceptions.
> But is the security bug there on the controller or in the network setup that allows that access?
Yes. If the intentional access controls can be bypassed that's a bug in the controller, but if the OT device is accessible to the general internet in the first place that's a bug in the network setup.
> It's probably easier to properly firewall off the controllers than continuously patch all the controller firmwares themselves, especially as the latter done wrong could easily enable hackers to perform a worse-than-Crowdstrike level takedown of all controllers simultaneously.
Also yes.
> It's really not clear that the model that works well for web browsers will ever work well for infrastructure.
The model is "if it is exposed to the internet and there is a remotely exploitable vulnerability known it must be either patched or not exposed to the internet anymore". It doesn't matter what the thing is, be it browsers, infrastructure, medical, etc. Either take it off the internet or be ready and willing to patch it on short notice.
> We just saw an awesome demo of what can go wrong when rapidly hot-patching security updates into critical infrastructure computers goes wrong.
I'd argue that was more of a demo of what will go wrong when you don't have automated testing and why you should always have staged deployment when doing things at scale.
That said, I'll return to the same point, how much infrastructure that wasn't connected to the internet was affected? Every system that was affected was allowed to download software from the internet controlled by a third party.
This discussion reminds me of the recent discussion around Entrust where a lot of the excuses were around certs being used in places where they could not be easily rotated, which led to the obvious question of "what would these users have done in the event of a key compromise?" having no good answers. When you're using internet infrastructure you need to be able to move quickly from time to time.
I work in accessibility as an engineer.
Now that digital web properties are covered under the ADA, there are law firms out there essentially doing the same thing. They are actively scanning the internet to find companies who have accessibility issues (primarily around widgets or overlays), then emailing them the issues and "allowing" them 90 days to correct the issues, otherwise they will be sued.
There's been like a 400% increase in these suits over the last two years because you can go after a large company and even if they fix one issue, you can find another issue and sue or threaten them on that one as well.
My co-workers think it's great because of the pressure to solve these issues that really do need to be fixed. But like in this instance, it's a fine line between doing something positive and extorting money for yourself or, in our case, a law firm.
Just what we need to tie up the cops so we can rollerblade into Grand Central Station and hack the Gibson to get the garbage file that will exonerate Joey!
https://m.youtube.com/watch?v=yhVDhcuRY1I
That's an embarrassing letter from the company.
The part 2 article goes into a bit more detail, but the funniest thing is that they requested access to the SNMP MIBs of the controller and never got them.
> I requested MIBs from Q-Free but didn’t receive any follow-up after the request and I never received access to the MIBS, so it was back to square one.
Then you go look at https://www.freethemibs.org/advocates and... there they are, "advocates" for free MIB access. What clowns.
What are SNMP and MIBs?
SNMP stands for Simple Network Management Protocol, and is a way to directly address not just individual hardware devices, but specific functions or values within a device, via a "simple" addressing scheme. A MIB file describes the various endpoints available on a device, much like a WSDL file would describe a SOAP endpoint.
So you might have an SNMP address (OID) like 2.1.4.3.0.1, which the MIB file would translate to "the current temp for CPU1".
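To make that concrete, here's a minimal, hypothetical Python sketch using pysnmp's classic synchronous API. It reads the standard sysDescr object from a placeholder address using the common default "public" community; a controller-specific value would use an OID defined in the NTCIP or vendor MIB files instead.

    # Hedged example: queries sysDescr.0 (a standard MIB-II object) from a
    # placeholder address. Controller-specific objects would use OIDs from
    # the NTCIP/vendor MIB files instead.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=0),              # SNMPv1, default community
        UdpTransportTarget(('192.0.2.10', 161)),         # placeholder device IP
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))  # sysDescr.0
    ))

    if errorIndication or errorStatus:
        print(errorIndication or errorStatus.prettyPrint())
    else:
        for varBind in varBinds:
            # With the MIB loaded, prettyPrint() shows names instead of raw numbers
            print(' = '.join(x.prettyPrint() for x in varBind))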
Simple Network Management Protocol. Management Information Base.
https://en.m.wikipedia.org/wiki/Management_information_base
Why wouldn't defcon allow this to be presented?
> my CFP wasn’t accepted
Don't know their specific process, but this sounds like "we got a bunch of submissions and yours didn't make the cut."
Honestly, rather than this being a nefarious "too dangerous even for defcon" like your wording suggests, I think the author knows why it didn't make the cut and snarkily addressed it:
> I’d love to write a long detailed blog about getting a root shell via UART or extracting the firmware via JTAG and then reversing it, but the honest truth is I found a vulnerability in the webapp in the first 15 minutes of having the unit online and it was the first thing I tried.
So my guess is it was so basic it just wasn't interesting enough. /shrug
Sounds like amazing material for a CCC talk.
Can you turn all the lights at a given intersection green at the same time?
Generally no, this is something from fiction.
I'm mostly familiar with North American traffic signal control, and in those traffic cabinets there is a device known as an "MMU" (Malfunction Management Unit) which acts as a safety monitor for the rest of the traffic cabinet.
That device will catch so-called "conflicts" (two conflicting directions green at the same time) and put the intersection into a fail-safe state (usually flashing red/yellow lights).
There are of course some edge cases where this is technically possible (as long as the cabinet door is open in CalTrans TEES cabinets, you can actually remove the MMU entirely and do whatever you want), and I'm not familiar with safety mechanisms used in other localities.
(Note: I work in the industry, not for any of the companies in this article, and my views are my own).
In the old timer-and-cam based systems I also believe this was electrically impossible. IIRC the green light in one direction was grounded through the green light in the crossing direction. So it was impossible for both of them to be on at the same time.
Fiction... indeed, science fiction. There was a short story (I believe in Analog) in the 1960s about this, later amplified by the author into a book.
But getting to your real point, about the use of an MMU safety monitor: I'm sure this works. But I confess, the first thing I thought about when I read that was CrowdStrike's explanation of their pre-release testing mechanism: running "validation checks" on the content, rather than running the actual software. Had they actually run their release, they would surely have detected the bug, since it apparently bricked every single Windows machine that downloaded it.
> I'm mostly familiar with North American traffic signal control, and in those traffic cabinets there is a device known as an "MMU" (Malfuction Management Unit) which acts as a safety monitor for the rest of the traffic cabinet.
Presumably the logic for this MMU could be implemented in strictly electrical components (relays or such). That would give me the most comfort (since its functionality would be, literally, hard-wired).
I worry that some enterprising manufacturer, out to save a few bucks, would implement this functionality in a microcontroller with firmware that could be updated remotely.
Does the standard specify the functionality of the MMU must be hard-wired, or at the very least not able to be changed without physical access?
Unfortunately those fears are well-founded.
The majority of MMUs on the market that I have had a close look at implement safety-critical functionality on a microcontroller with updatable firmware. Some can even be updated over IP. I haven't had the opportunity to dig into if those firmware upgrades are signed or otherwise integrity-protected.
The standard unfortunately does not specify a functional safety standard or other measures to ensure absolute safety.
In theory it would be possible to implement it in discrete logic (or an FPGA or other formally-verifiable process), but as far as I know no manufacturer has done so (I'd love to be wrong!)
How about switching lights in quick succession, enough to cause real-world issues, but avoiding a direct conflict?
Now you start to get into the differences between the various standards :)
In NEMA TS2 (and the more modern ITE ATC), the MMU does enforce a yellow clearance time - you need the light to turn yellow for a period of time before a conflicting phase goes green. Usually this is a few seconds. Changing phases rapidly would likely confuse drivers, but in _theory_ shouldn't cause a collision if people respect yellows.
(believe it or not, in some localities a "red clearance" time - all red - is not required and lights will go from yellow in one direction to green in another.)
In CalTrans TEES, I do not believe the standard calls for the MMU to enforce clearance times - the attack you describe would potentially be possible.
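Just to make the clearance logic concrete, here's a toy model in Python; the 3-second yellow and 1-second all-red figures are placeholders, not values from NEMA TS2, ITE ATC, or any agency's timing sheet:

    # Toy model only: checks whether a phase may go green given the recent
    # history of its conflicting phases. Timing values are placeholders.
    MIN_YELLOW_S = 3.0      # conflicting phase must have shown yellow at least this long
    RED_CLEARANCE_S = 1.0   # optional all-red interval (0 where an agency doesn't use one)

    def may_go_green(now, conflicting_histories):
        # Each history records when a conflicting phase entered yellow and then red
        # (entered_red is None if it is still green or yellow).
        for hist in conflicting_histories:
            if hist["entered_red"] is None:
                return False  # conflicting phase hasn't cleared yet
            if hist["entered_red"] - hist["entered_yellow"] < MIN_YELLOW_S:
                return False  # yellow interval was too short
            if now - hist["entered_red"] < RED_CLEARANCE_S:
                return False  # all-red interval not yet satisfied
        return True

    # Example: the cross street went yellow at t=5.0 and red at t=8.5; at t=10.0
    # both the yellow (3.5s) and all-red (1.5s) requirements are met.
    print(may_go_green(10.0, [{"entered_yellow": 5.0, "entered_red": 8.5}]))  # True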
> (believe it or not, in some localities a "red clearance" time - all red - is not required and lights will go from yellow in one direction to green in another.)
This was definitely true in the past, I feel like the concept of a 'red clearance time' is something that only became common within the last 5-10 years. Do you think it has become (with rare exceptions) ubiquitous at this point?
I'd like to think it's become ubiquitous - it has been a while since I've seen a signal without a red clearance configured.
However, the Federal Highway Administration in the US (which sets guidelines, but most states define actual rules at the state level) still says in their Signal Timing Manual [1]
> The use of a red clearance interval is optional, and there is no consensus on its application or duration. [...] there may not be safety benefits associated with increased red clearance intervals.
and goes on to describe how it has negative traffic flow implications.
so I suspect at least some agencies out there still are not using them.
[1]: https://ops.fhwa.dot.gov/publications/fhwahop08024/chapter5....
I feel like my area (where I've lived my whole life) does not have a red clearance interval. It's not something I've paid attention to before.
I'm sure it's proper driving technique, but it's ingrained in my head to give a couple of seconds when a red turns green, for all cross traffic to finish / anyone who runs the light. It's a common thing around me and I don't think it would be happening as much if an all-red period were implemented.
I moved up to Oregon for a few years during/after the pandemic. I can say from experience that the entire state consistently did not have red clearance times at least up to when I left in late 2023.
"...if people respect yellows." Of course they respect yellows! Yellow means "go faster so you can make it through the intersection."
No. There is a thing called a "conflict monitor." It uses a hard wired control board. This board has solderable links on it. When you connect a link you're describing two phases which CANNOT be green at the same time. You then slide the board into the monitor.
If the monitor sees current on the green lights of two opposing phases at the same time, or any of several other fault conditions, it will trip, then "throw the intersection into flash" as a roadway protection mechanism. At this point none of the controllers or other logic is capable of controlling the intersection, and it stays in flash until someone comes and manually resets it.
Here's an example of what one of those cards looks like. Wonderfully old school and absolutely nothing active about it.
https://i.ebayimg.com/images/g/9roAAOSwKOFfh3gM/s-l1600.jpg
I remember back in the day some guy was selling the strobe light devices that emergency vehicles use on some nefarious site, and it came with a warning that it's a felony to even possess one of these and you should only use it for "research" purposes, wink wink.
The funny part of his ad was him saying it's still dangerous AF to just switch a light to green so quickly; it has the very real potential to cause accidents, which, back in the early aughts when this was online, I could totally believe.
You don't need a light to change to green in all directions; using this, thinking you're good to go, and blowing through the intersection while someone is trying to beat a yellow light in the opposite direction could be disastrous.
I would hope they still use physical interlocks instead of implementing it in software, even though it’s semiconductors instead of relays controlling stoplights these days.
As an embedded engineer I would design that as an interlock in the circuit and not rely on firmware
The legal threat letter from the vendor is among the most insane examples I've read. They only consider something a valid vulnerability if the reporter can demonstrate they obtained the equipment through a legitimate recorded sale? What on earth does that have to do with the existence of a vulnerability?
This blog post is dated 5 days prior to The Intelligent Transportation Society of America's publishing of its Cybersecurity and Transportation Safety Issue Brief.
The original author of the blog post was invited to speak at an ITSA.ORG conference and to present as if through the eyes of an attacker - thus the perspective he posits.
There is nothing untoward in his observations but I can see why DefCon might hold off on letting him present his findings.
The ITSA is based in Washington D.C. and has a fairly large membership consisting of states' DOTs (primarily western U.S.), tech companies, car companies, engineering design companies, consulting firms, etc.
Their vision is a better future transformed by transportation technology and innovation. Safer. Greener. Smarter. For all.
A lot of automation is factored into that vision including the use of autonomous vehicles, high-speed inter-connected systems, their attendant technologies, and of course, cyber-security.
Personally, I'm dismayed the U.S. is only now awarding grants for these studies. Maybe the whole thing got sidetracked when our focus shifted to COVID, I don't know. But it does seem as though we're behind the private and governmental initiatives going on in Asia.
1: https://itsa.org/
2: https://itsa.org/wp-content/uploads/2024/07/Cybersecurity-an...
3: https://itsa.org/wp-content/uploads/2023/01/2026-ITS-America...
If you just want a green light an easier way to get one is to flash the infrared strobe pattern that gives fire trucks green lights. Seems simpler.
It has been years since I have seen a fire truck get a green light through preemption. It probably varies by jurisdiction, but around here the preemption just causes the entire intersection to go red. Then the emergency vehicle can navigate across carefully.
In addition to being basically useless as a trick these days, it is also trivially easy to detect an IR strobe being used, and the penalty can be hefty.
Fun fact: In some places, there are strobes on the signal that activate when it has been preempted, to let everyone know an emergency vehicle is coming through.
Yeah near me (mid-sized metro area) they just hit the intersection with lights and sirens and creep forward until they’re sure they won’t get t-boned. Light sequence never changes.
I've never seen that in the real world. In my area, the lights go RED in all directions. Much safer option.
Using one of these will get you 6 months in federal prison btw.
Doing any of the things suggested by this headline will send you to jail.
This has been bothering me for a while.
Security people act like it's their duty to expose every vulnerability, and that companies are negligent if they don't harden themselves against all attack vectors, while they themselves are responsible for a good part of the danger.
Out in meatspace, I don't wander around picking random people's locks, making smug posts about how vulnerable their houses are (along with their address). Nobody would be happy about that, no matter what color hat I have on.
Security is a twin-engine racket, based on the pillars of
- assumed intellectual superiority
- the actual protection racket part
Could you help me understand what you are suggesting is done instead?
To me, it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly that I must be missing some alternative that you're thinking about.
> it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage
That's how security mostly works in meatspace, yes.
In the specific case of internet-connected software the industry has a lot of experience saying that if something is exploitable then someone will come along and exploit it, so we don't normally need to see an example of it happening in the real world first. It's sufficient to assume that if you get popular enough, a professional blackhat will find your bugs and exploit them. It's also reasonable to assume that the cost of a fix is low and the cost of change in the field is also low.
Outside that context the threat models are usually unclear and refined through experience. If you notice someone cut through a wire fence to steal some equipment from a cell tower, maybe you build a wall around it instead. But if nobody is stealing anything there's no point in pre-emptively trying to guess that it might happen and building lots of walls, because that might just be a waste of resources (perhaps there's no market for stolen tower equipment in the first place).
> vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly
Well, that's exactly how it tends to work for housing, so I think GP's point is that if it works there it should work here. However, I disagree because the stakes are so different (harming a single family who are free to harden however they like, versus harming the general public who are at the mercy of whatever hardening is done for them).
You've noticed an issue.
You let the manufacturer know, and you let them decide for the next steps.
No ultimatum to threaten to disclose to the public or to ruin their reputation, it's not your business.
In the meantime, you keep it for yourself.
You helped: no lawyers, no problems.
If really there is a safety issue, after a reasonable period of time you can inform the regulators, as it is their job to assess safety.
This is responsible disclosure, not TMZ-style public-shaming.
You're presenting this as if it's a new idea, but the security industry tried the above (for the majority of the time that "computer security" has been a thing) and... it didn't work! That's the whole reason public disclosure came about in the first place -- there's quite a rich history there if you're interested.
Some other thoughts:
>You let the manufacturer know, and you let them decide for the next steps.
Which, as history has proven, generally means the "next steps" are to sweep it under the rug, where it's forgotten about until it's exploited by a bad actor.
>it's not your business
But, what about when it is? On-topic: I drive a car, so I care about vulnerabilities in traffic lights, as they may directly affect me. It's also my business if my personal data is stolen, or my identity, or corporate data, etc.
>You helped: no lawyers, no problems.
No problems... Until the vulnerability is exploited and it causes me a problem.
> No ultimatum to threaten to disclose to the public or to ruin their reputation, it's not your business.
I found an authentication bypass in a door card access controller. Per the installer I was working with the units are regularly exposed directly to the Internet. (Heck, the installer was trying to cajole my Customer into doing it for "remote support" reasons.)
Given that there's an impact to the public-- albeit not necessarily directly safety-related-- I think this kind of vulnerability is still "my business".
If I owned one of these controllers and it was "protecting" my property I'd want to know.
(Fun aside: The installer went so far as to suggest that because their other Customers expose these units to the Internet-- particularly a small bank who is "audited" for "security"-- it would be okay if my Customer did it. Needless to say, my Customer did not. I let my Customer know about the auth. bypass and we kept the unit locked down in a VLAN w/ a restrictive ACL, but I never publicly disclosed... too afraid of hostile response from the vendor. Eventually a researcher did find it and disclose it publicly, at least...)
I think a better analogy is wandering around noting what locks random people's houses use, buying your own, breaking your own, informing the manufacturer of the flaw, and then informing the owners of the random people's houses.
AKA what criminals already do, except the criminals actually break into the random people's homes and steal their stuff.
> Out in meatspace, I don't wander around picking random people's locks
Sure, because the chances of getting punched out, shot, or reported to the cops is significantly higher. Given how trivial it is to quietly attack across the network, I think the analogy with meatspace makes little sense.
Also, breaking into a single person's house... maybe they don't want to lock their doors. It affects nobody but them.
These systems affect lots of people, it's a public safety issue, and there is a company being paid money by the public to ensure that their systems are safe and secure. They should be tested by everybody and anybody who wants to test them. Especially if they are running on a publicly accessible IP address.
Also if you want to test the locks on someone's house, you don't go to their house. You buy the locks that they are using, and test them quietly in your own location.
> Also if you want to test the locks on someone's house, you don't go to their house. You buy the locks that they are using, and test them quietly in your own location.
This is a pretty great point, because it's exactly what the guy in the article did to find the vulnerability in the traffic controllers.
I assume that any piece of technologically backed infrastructure is a potential target for state-level actors. If rando security researcher finds the vuln in 15 minutes, I guarantee China already has it.
Anyone operating infrastructure hardware is negligent if they won't take basic measures to harden it against disclosed threats.
I’m not worried about malfeasant citizens mucking with the traffic lights, there are simpler ways to make mayhem. But in the event of a war, you can bet every unpatched vulnerability in your infrastructure will be used against your country.
When you work for a state-actor it's no different than anywhere else; you don't have exploits coming out of the sky.
You either research these vulnerabilities (and you have limited capacity and knowledge) or you purchase vulnerabilities from vendors, exchanges and "research companies".
In such a case, it was an unnecessary free gift to an enemy state or a malicious actor.
The same with NSA, they do not know all the vulns of the universe (due to budget, resources, or simply focus).
You may actually have some vulns they are interested in, but unless someone points these vulns out to them, they will not be aware that they exist.
Company makes HW that can potentially harm people if someone logs in to it remotely and turns all lights green at once. It's possible, with no prior knowledge, to find a vulnerability that gives admin access to the HW within 15 minutes of getting remote access to the device. Company rejects the report based on flimsy reasons via a lawyer, threatening felony prosecution.
But the person finding the vulnerability and notifying the company is the smug one. :)
There are electrical junction boxes all over my neighborhood that direct power to the stoplights and residential buildings (?). They have a simple padlock and could be opened in 30 seconds with a lockpick or bolt cutters.
Nobody tries! Not even to test it out! The question isn't "How easy is it to break in?", but rather "Should I be tampering with this?"
Your analogy breaks down immediately because the Internet isn't your neighborhood, it's effectively everyone's neighborhood, including the state-level bad actors mentioned above.
If someone could access those electrical junction boxes from China or North Korea, I'd want the locals finding the vulnerabilities first.
Hypothetical answer from a state actor:
"We will push an update to Flipper Zero for this, at the right moment.
Thanks to the Flipper Zero, we have millions of devices in the wild that we can remotely control and send signals from. Just wait."
Your analogy is flawed. This isn't testing someone's house. This is buying the locks that are on people's houses and testing them in your own location. Which people should absolutely be doing.
> Out in meatspace, I don't wander around picking random people's locks, making smug posts about how vulnerable their houses are (along with their address). Nobody would be happy about that, no matter what color hat I have on.
Yeah, but in youtube space, there are, eg, lawyers who are into lockpicking who post smug videos showing how vulnerable various manufacturers' locks really are to common lockpicking techniques. Apparently 4.5M subscribers are quite happy being informed what the state of lock security is out there.
https://www.youtube.com/channel/UCm9K6rby98W8JigLoZOh6FQ