I was just reading about the Stuxnet worm at Motley Moose (one of the reader blogs I added to my RSS last week).
One of the most sophisticated pieces of malware ever detected was probably targeting “high value” infrastructure in Iran, experts have told the BBC.
Nearly 60% of the reports of the virus have indeed come from Iran. Is it really possible that the US or Israel or whoever was able to create malware that specifically targeted Iran? I know nothing about computer viruses.
Is it really possible that the US or Israel or whoever was able to create malware that specifically targeted Iran?
Yes. Malware can be made to target IP addresses, even specific ones.
And Iran's public internet access is limited to a specific range of IP addresses.
I can see how that could be done. If a virus can be programmed to recognize a particular language such as Farsi then bam there you go. I would think that would be easy enough for an enterprising Israeli hacker.
From what I've heard, the malware was targeting a specific bit of control software used by the Iranians.
DougJ is the business and economics editor for Balloon Juice.
Apparently what it does is turn your blog and twitter feed green.
If Iran were targeting our nuclear plant computers, would we consider that an act of war? And what happens if the malware works and the Iranian reactors get shut down? Will they be shut down properly?
Why are we and the Israelis the boss of everyone?
That’s interesting. Wasn’t the GOP whining that if a country is the victim of a cyber-attack, they should be allowed to retaliate with nuclear weapons?
This particular piece of code was designed to attack a certain configuration of automated control and reporting equipment according to an article I read. Find the facility with that particular setup of that particular equipment and you find the target. Supposedly the code attacks a certain configuration of equipment with certain types of Siemens ACRE.
I’ll note that just because it appears to have been found primarily in Iran does NOT mean that Iran, or a facility therein, is necessarily the target. The Iranians aren’t stupid, you know. They have their own aerospace industry and their own heavy and medium industrial economic sectors with all that implies. The code could have originated there for attacking something else, or it could have originated there for attacking something there, or it could have originated elsewhere for attacking something there.
licensed to kill time
That’s a fascinating link at Motley Moose with some well informed comments (not that I necessarily understand them all). When I first read about the Stuxnet worm I figured Israel or the US, for sure. Sneaky backdoor bomb! It all reads like a spy novel.
@DougJ is the business and economics editor for Balloon Juice.: Awesome.
And am I mis-reading the Motley Moose post, or is the author positively giddy at the thought of major cyberwarfare?
@Yutsano: Thought of? We’re in it every day.
@DougJ is the business and economics editor for Balloon Juice.: OMG, It’s Andrew Sullivan, secret spy.
@Yutsano: If he is, then he’s a decade too late.
DougJ is the business and economics editor for Balloon Juice.
I didn’t detect that. Personally, I think it sure beats bombing.
That was Michelle Bachmann, though in her defense, she is insane.
Yes, it is entirely possible.
A piece of malware (virus, worm etc…) can be written to look around itself for preset conditions. In a targeted attack, the author(s) will build in as much information as possible to identify the target site (i.e. “this sort of host running this sort of stuff, connected to this other thing configured thus and such a way…”). The virus itself will not specifically “move towards” the target, but as it spreads it will become more active as it encounters conditions that match more of its presets.
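A minimal Python sketch of that "preset conditions" idea — the host attributes and markers here are hypothetical stand-ins, not Stuxnet's actual checks:

```python
# Hypothetical sketch: a payload that stays dormant unless the host
# matches enough fingerprint "presets" of the intended target.
def count_matches(env):
    """Count how many preset conditions this host satisfies."""
    presets = [
        env.get("os") == "windows",
        env.get("engineering_software") == "Step7",  # assumed marker
        env.get("plc_model") == "S7-315",            # assumed marker
    ]
    return sum(presets)

def should_activate(env, threshold=3):
    # Require every preset to match before doing anything at all.
    return count_matches(env) >= threshold

print(should_activate({"os": "windows",
                       "engineering_software": "Step7",
                       "plc_model": "S7-315"}))  # True
print(should_activate({"os": "linux"}))          # False
```

The "become more active as more presets match" behavior falls out naturally: lower the threshold and you get the staged activation described above.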
Not only is it possible, it’s easy.
Countries are assigned particular blocks of IP addresses. All you need to do is program your worm or virus to attack, preferentially or exclusively, the addresses of the target country by including its IP address ranges in your code.
It might be a bit tedious for a large country, which could have a large number of discrete blocks of address ranges, but it’s certainly not difficult.
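As a rough illustration using Python's standard `ipaddress` module — with documentation-reserved example blocks standing in for any country's real allocations — the check is just a range membership test:

```python
import ipaddress

# Stand-in CIDR blocks (RFC 5737 documentation ranges), NOT real
# country allocations; a real worm would embed the target's ranges.
TARGET_BLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def in_target_ranges(addr: str) -> bool:
    """Return True if the address falls in any targeted block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in TARGET_BLOCKS)

print(in_target_ranges("192.0.2.17"))   # True
print(in_target_ranges("203.0.113.5"))  # False
```

For a large country you'd just have a longer `TARGET_BLOCKS` list — tedious, as the comment says, but not difficult.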
It’s also possible to excerpt a country from the list of targets. Some Russian malware is famously alleged to prevent itself from attacking Russian targets.
@licensed to kill time:
Thanks, we try to be moderately intelligent at the Moose… ;~)
@soonergrunt: Yeah, that's the thing. It looks for a very specific machine (perhaps down to the actual serial numbers of specific components) before activating. We don't know if it actually found its target and executed, and it's not like the Iranians are going to tell us. But the level of sophistication, and the intelligence/spycraft implied by the specificity, point directly to a state actor here.
The virus was found dormant on other machines in multiple countries, which is how the rest of the world found out about it – but it was programmed to not activate unless VERY specific parameters were found.
NY Times has a bit more on the impact on Iran.
I wonder what the Feds could do if a similar effort was directed at the US? How do you secure all of these industrial facilities all run by different companies?
I doubt Shaun Appleby is ‘giddy’, he’s just enthusiastic. :~) His research and commentary are as valid as the hardcore cybersecurity lists I follow.
We should be the boss of everyone who sucks worse than us, and I would put Iran in that category. Everyone who sucks less than us (western Europe, Australia, Canada, etc.) should be the boss of us, though.
@JGabriel: Few of these are internet connected. It appears that this is mostly being transmitted physically – perhaps infect an internet-connected computer and then have it copy itself onto flash drives that then get plugged into off-grid systems. It appears to be looking for specific hardware and software systems to identify its target.
The key to being able to do this is having a significant amount of intel about what components went into the system you want to target. These are the national security benefits of not outsourcing.
@JGabriel: I would assume that 99% of the Siemens control equipment in the world is running on a 10.0.0.* network. Somewhere, yeah, there's a public-facing gateway that will tip off where they are, but only an idiot would give a real public IP to a control valve.
With any halfway sensible topology, Stuxnet is the payload of an attack that's already penetrated a few layers to get near the target.
Of course, since one of the vectors it uses is sneakernet via USB drives, there's a good chance there's what we might call social engineering and intelligence services call field craft.
There’s a lot of interesting stuff out there on all the sploits this one uses. It was certainly written by somebody who stays on top of zero day holes and can also get the private encryption key of a Taiwanese chip maker.
A computer virus is a program, so certainly it can be programmed to look for things. Some have mentioned looking for certain IP addresses, but there are other things too. For example, it could be programmed to check default language, access building maps and look for specific configurations, etc.
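For instance, a default-language check could be as simple as reading the host's locale setting — a sketch only, with "fa" (the ISO 639-1 code for Farsi) as the assumed trigger, not how Stuxnet actually worked:

```python
import os

# Sketch of a default-language check on a Unix-like host. Illustrative
# only; this is not Stuxnet's actual targeting logic.
def is_farsi_host(lang=None):
    if lang is None:
        # Fall back to the host's own locale environment variable.
        lang = os.environ.get("LANG", "")  # e.g. "fa_IR.UTF-8"
    return lang.lower().startswith("fa")

print(is_farsi_host("fa_IR.UTF-8"))  # True
print(is_farsi_host("en_US.UTF-8"))  # False
```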
Also interesting is that it used a number of zero-day (unpublicized) security holes. This implies that the developers of this virus discovered the holes on their own, which implies significant effort.
@DougJ is the business and economics editor for Balloon Juice.:
seems better than a cruise missile to me. Now if someone could only find a way to target the RNC and/or Fox ‘News”* with a worm….
** as if there were a difference….
I’m really skeptical about targeting high-security installations like nuclear plants – they tend to be heavily firewalled or even disconnected from the Internet entirely.
But as far as targeting a country – hell, yeah, that’s possible.
This attack used USB, apparently.
Villago Delenda Est
The only secure computer is one that is in the original unopened shipping box.
@Anon: That makes sense. Time for the Iranians to squirt some glue into those USB ports, I guess.
Okay, I’ll give you the rundown on what I know — and what I can share — in prep for a similar presentation I have to give to my management.
1) Stuxnet is malware that was discovered by a security company in Belarus.
2) Stuxnet takes advantage of four zero-day attacks — two of which Microsoft, at the time of this writing, doesn't have patches for.
3) The IT community wrote off Stux as just another worm, but a gentleman by the name of Ralph Langner and his team (www.Langner.com) figured out that it attacks a variety of non-Windows things, and is able to inject code into certain models of PLCs when the engineer is updating the ladder logic code. Lots of techspeak here, but basically when the engineer updates the code on the controller device, it injects code to make the device behave unpredictably.
4) The code was initially propagated by USB and, in order to fool the Siemens software, used a stolen digital signature belonging to Realtek. I'm unclear at this time whether that was a flaw of Siemens (just validating that the code had a signature, rather than validating the signature itself).
5) It's pretty well determined that the writer of the worm would need access to the process control systems and the procedures used by the control system. They would need resources to get (steal) the digital signature, and probably a lab to test out the whole thing. Generally, the idea is that only a nation state could pull this off, but cost-wise, a medium- to well-funded NGO could also do it.
The politics of the issue is actually pretty irrelevant, and to borrow from someone else, "Stuxnet will be dissected to the last byte." Which means we will be seeing "kits" like those for creating malware, only now made to target specific control systems. So no longer will the source be a nation state or an NGO. Now it will be your organized crime and other malware producers.
So control systems touch every part of our lives. Most Industrial Control Systems felt they were immune to the struggles in the IT world, but now this kind of attack bridges the two worlds pretty effectively. Many IT shops are not ready to handle ICS (Industrial control systems) — they don’t understand the special needs of ICS, and neither does the typical IT security organization.
From what I read, it came in on a USB stick. And it was not targeted at a particular system or country, per se, but a specific process equipment configuration. The Siemens system specified is used for DCS (distributed control systems) or SCADA and is not usually connected to an external network. This would be more in the line of reprogramming a high speed trip of some equipment and hoping for a wreck. To change the ladder logic and do anything useful you’d pretty much have to be familiar with the process you’re attacking.
And which country has the most Windows-based DCS systems? Not Iran, that’s for sure. If the U.S. thought this up we’re going to very shortly regret it big time.
Therein lies the rub. The program couldn't have been looking for a specific IP address range, or at least not Iran's IP range, because the target probably doesn't use a public IP address. The developer was looking for a program, and a very particular one at that. Whoever wrote this undoubtedly knew that this program was available for exploitation.
The four zero-day exploits are very interesting. If a state actor found any of them, it's reasonable to assume they would be concerned about attacks against themselves, and so would have notified MS. So either these zero-days were so sophisticated that the developer knew others could not find them, or they simply thought the danger was worth the risk – namely, they knew their own systems were invulnerable or protected in some other way. (The link provided by Doug talks about the Madrid Metro.)
The forged digital signatures indicate a truly high level of sophistication, much higher than the zero-days. That is industrial espionage at its peak, which points back to state actor.
As Chris on Motley Moose said, when the discussion in Vancouver starts (a security conference) it will be most interesting.
I’m not disagreeing with this, but I’m looking at it from a bit of a higher level, and this might get a bit confusing.
While windows was problematic, the specific challenges are around managing the code base, which is what Stux and others can attack with relative ease.
I’ve done code reviews for companies before, and asked. “Your code looks good. What guarantees do you have this is the code running on your system?” Usually, this is greeted with blank stares. Occasionally, someone will mutter about code signing.
Then the question becomes. “Great. How do you control your code signing certificates?” The other is “How do you protect your build process?”
The general game in security is when a control protects the prize, attack the control and see what protects it.
The current thinking from NIST and friends is changing from "Make an impenetrable perimeter" to "Agilely respond to an environment full of threats." While that sounds kinda hollow, the idea that your perimeter will be penetrated and you need corrective and detective controls so you can respond effectively is probably one of the most ignored parts of security.
The discussion already started.
@mistermix: Well, you always need a means to connect to the system. Having a meatware connection gives you far greater ability to secure the systems, assuming you actually exercise that, which few organizations ever bother to do to the degree that they secure their electronic ones. That’s always been the best vector to get into a system if your goal is to do real harm.
@DougJ is the business and economics editor for Balloon Juice.:
Economic warfare causes just as much unjust harm to civilian populations as bombing does.
If the code used just any old signature, then it would not need to be stolen, since signatures by themselves are public. (Just like normal signatures.) Rather, it seems to have been the private key used to create signatures that was stolen.
Quick primer in digital signatures. First, public-key cryptography: PKC has two keys. One is used to decrypt, and is public. The other is used to encrypt, and is private. Furthermore, you can’t figure out one from the other.
To sign a digital document, the document is “hashed” to create a short “fingerprint” of the document (called a message digest). The digest is then encrypted with the private key. The result is then appended to the document.
To verify the digital signature, the receiver does the same hashing to get the message digest. She then decrypts the encrypted signature using the public key. If it matches the message digest, then the document must have come from someone with access to the private key.
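The flow above can be shown end to end with a toy textbook-RSA key — tiny and insecure on purpose; real code-signing uses 2048-bit-plus keys and padding schemes such as PSS:

```python
import hashlib

# Toy RSA parameters: p=61, q=53, so n=3233; e=17 public, d=2753 private.
n, e, d = 3233, 17, 2753

def digest(msg: bytes) -> int:
    # Hash the document, then reduce mod n only because n is tiny here.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # "Encrypt" the digest with the private key.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # "Decrypt" the signature with the public key and compare digests.
    return pow(sig, e, n) == digest(msg)

doc = b"driver package v1.0"
sig = sign(doc)
print(verify(doc, sig))  # True
print(verify(b"tampered driver", sig))
```

Stealing the private exponent `d` (the Realtek analogue here) lets an attacker sign anything they like; the public key alone only lets you verify.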
I could be wrong, but I'm somewhat skeptical that any Joe Schmoe could do this. They can't change the payload without access to the private key, for one.
Edit is really lame.
As noted before, the idea that the lab was reproduced to create the attack has been floated around. I'd like to meet the development team. They indicate a higher level of discipline than a lot of IT shops.
IP has very little to do with the Stux attack, and in many cases, it's a very traditional worm. What it does when it finds the Siemens development .DLL and/or finds a PLC connected is entirely different.
Now what's really cool is a lot of ICSs, like the Siemens one under attack, have their own IP stack and behave like PCs in that they can spread worms. So while Stux didn't make it so that infected PLCs propagated the worm, they could have.
That’s where Siemens gets really tight-lipped. If there was a flaw in the verification, then theft might become easier.
As far as ease of production goes, with Stux demonstrating the heavy lifting, it won't take the level of sophistication of Stux to create additional attacks.
Key management is one of the areas I find the most flaws in systems.
Also, how many digital signature verification processes really check ACLs?
Generally it's n-1.
Well, that assumes that whoever wrote this knew how to get it onto the contractor USB drives in the first place. That it's spread as far as it has suggests that the originators weren't confident they could do that as a directed effort. I mean, why infect systems that don't match the target profile? That only draws attention to your virus.
It sounds like they didn’t have access to the contractors and instead decided to infect computers that the contractors were likely to come into contact with, get the virus onto their USB drives that way, and then hope that the contractors carry it to the right system. If the virus does no harm to the carrier systems, a defense to it might be slow to develop, and certainly if the virus is contained in a place like Iran where, let’s be honest here, groups like CERT wouldn’t give a fuck. Even if the Iranians themselves could work out a defense to the virus, they may not have any way to get it distributed nationally given that they have zero influence with any of the standard anti-virus vendors.
Basically the originators of this could rely on Iran’s political and economic isolation (which the west has helped impose) to ensure that the virus could run relatively wild as a way to get around the lack of physical access (the byproduct of that political and economic isolation) to reach the contractors they hoped to reach. So long as Iranian security was sufficiently lax to not be able to scan for and remove such a virus from every piece of equipment (including such innocuous things as flash drives – something that is reasonable to assume the Iranians would screw up, given that we screw it up too) the strategy would probably be fairly reliable.
The 0-day exploits were just step one, though. So, a state actor could reasonably believe that the other challenges were enough to protect their own systems.
Yes, it’s true that if there is a flaw in the verification, then it will be easier. I’m assuming that we will know this soon, though, because anyone who has analyzed the worm/virus/whatever will be able to verify whether or not the signature is actually a correct one, simply by using RealTek’s public-key, in its cert.
This is all a part of Steve Jobs’ plan to get Iran to buy Macs.
You forgot to read the fine print: only if it's the US or Israel. Then again, since we're the only two God-ordained countries in existence, all the others are just "fake." Probably the work of Satan, or someone who dabbled in witchcraft.
@LurkerAbove: As someone who was recently laid off from the quality department of the IT division in the above mentioned* company, I’m beginning to wonder if the lay off was a blessing in disguise. Nah, I’d rather have a job.
One of the articles I read stated that said company told their clients not to change the default passwords on the affected PLC systems, as it could harm plant operations. What!?
This type of exploit has some scary ramifications for all SCADA and PLC. Tell the gas turbine that there’s no rpm limit, stop the lubrication for a piece of equipment, etc. Said company’s equipment processes most, if not all of the mail in the US.
*A severance agreement contract insists that I not bad mouth said company. Since I don’t care to find out what that involves, they will remain said company.
This is one of those areas where the IT world and ICS world collide. In the IT world, having things with default passwords that cannot be changed would be ludicrous. Even so, I do run into the occasional IT system like that.
Many of these ICS systems are well over 10 years old. If you remember where IT thinking was around 2000, it's no surprise that ICS system designers were not thinking about this, as most ICS systems didn't even *speak* TCP/IP, and instead used proprietary protocols.
Now, ICS systems can speak TCP/IP and some companies even bridge their Corporate Network with their ICS network. Luckily, I am not in such a situation.
In the ICS world, Availability is king. The idea that passwords were confidential was a bit foreign to them, as they were working more on the availability part of the Availability, Integrity, Confidentiality triad.
One of the more interesting things is that many ICS folks will say their network is isolated, completely forgetting about the modem that the vendor uses to connect to the devices for maintenance.
Well that is what the bobble-head Virgin Mary sitting in the control room is fer, id’nit?
Why do we need private passwords? Did you see how many times I needed to show my ID badge just to get in here!?
Exactly. Joe Weiss has a great book about industrial accidents and cyber security. Making this mess secure is going to be very tough and despite our best efforts, things are going to go wrong.
I propose F007 as a new tag.
You will thank me later, you just don't know it yet.
@jayboat: Isn’t that F00F? http://en.wikipedia.org/wiki/F00f
No. I should have been more clear, because my post looks kinda dumb unless you follow the link to Ralph Langner’s website from the Motley Moose site that DougJ posted at the start of the thread.
It refers to code that Langner has unwrapped from Stuxnet that reads DEADF007.
My apologies, imagination was running wild.
When that happens I tend to type in incomplete sentences.
Still rooting for a new tag. 8-]
@LurkerAbove: True, but it still boggles my mind. But then I remember the last company I worked for 15 years ago, and I could dial in to Boeing’s inventory control system from home to repair after hours database glitches.
More likely, tell the centrifuges in a uranium enrichment plant to run way outside normal parameters until they break.
My money’s on Israel as the source.
Oh, I’d put money on Stuxnet targeting that process. I’m just thinking about what could happen next, as the malware is reverse engineered and adapted for other purposes.
Why blame the United States or Israel? There was, after all, an extraordinarily sophisticated multi-step attack launched by the People’s Republic of China last year.
if it’s Iran, definitely Israel or the US. who did the Chinese go after?
@DougJ is the business and economics editor for Balloon Juice.: No, I would take bombing. Cyberwar could look a lot like biological warfare in the sense that (1) an attack can have an open ended number of casualties, and (2) an attack can easily get away from its original target and go on doing damage into perpetuity. Whoever made Stuxnet took exquisite care to limit its effects. You can hardly expect North Korea or some anarchist teenager copy-pasting its code to do the same.
Richard Clarke at least is freaked as hell about our awful electronic infrastructure.
More than ip addresses, the Israeli worm targeted the customized software of the Iranian nuclear program.
@Bernard: China went after a great many *multinationals*, including Siemens. It turned out that Google noticed the attack first, but that was only after the attack had been going on for many months.
Sy Hersh has been reporting clandestine operations against Iran since 2008.
“Is it really possible that the US or Israel or whoever was able to create malware that specifically targeted Iran?”
Sort of. There would be spill-over, or perhaps even cyber-blowback, as the malware infested some systems not in Iran. But it does seem likely to be the work of a government or a large well-funded criminal organization. I think some good old-fashioned employee blackmail could get it placed.
Damn, can’t eat radioactive food.
You can target any country; all it takes is proper knowledge of IP services, subnets, port scans, etc. Why do you think most DoS attacks that hit the US come from foreign countries?
betcha they don’t use Macs in Iran.