RT @shapeshed: Given Spectre and Plundervolt I wouldn't go anywhere near SGX for DLT privacy solutions and I feel sorry for vendors that have built their privacy house on it. Full crypto key extraction 🤦‍♂️ https://t.co/QabqRlBuas
>The security aspect with SGX is nice
Except SGX gets broken every 6 months, then Intel releases a fix that a subset of people never apply, then it gets broken in a new way 6 months later. SGX is nowhere near the panacea for "trusted execution" that Intel sells it as, and putting it as the root of your trust model for your system is a *baaad* idea.
It's a standard feature in a lot of microcontrollers but I suspect it is much more complicated in "big" CPUs due to all of the throttling and power management going on.
Like for all we know CPU might tolerate 200mV drop just fine (and benefit from power savings), but only below some clock speed
This is a form of privilege escalation that requires you to start out as Root/Administrator, then escalates you into "Security Enclave" level by messing with voltages. So much for integrated TPMs being "good enough".
Yes. The key for me is you need to already start out with admin privileges, meaning the box is already compromised.
Using root commands to further compromise an already compromised box isn't that impressive to me as an exploit. The attacker already has full control.
You have root on your computer. I have root on my computer. SGX exists to keep data out of the actual computer user's hands, no matter what. The user is *supposed* to have root and SGX is still supposed to stop them getting the data.
Note: System Management Mode is also a level more privileged than the OS kernel, but that's so the OS can't accidentally interfere with it, more than it's a security measure.
>you need to already start out with admin privileges, meaning the box is already compromised.
Right. And? There's likely a plethora of exploits to get that. Don't dismiss the fact that they can get even *deeper* than root.
>Using root commands to further compromise an already compromised box isn't that impressive to me as an exploit. The attacker already has full control.
No, they had high level control. Now they're in deeper still.
They could probably also do it by cutting traces or removing components on the motherboard and soldering extra bits on. After all, voltage comes from *outside* the processor.
Or making their own completely custom motherboard which ignores what voltage the processor asks for. I'm sure some research teams have the funding.
Even if you need to use the BIOS or even need physical access, this is still a vulnerability. Secure enclaves are supposed to stop anyone accessing the code *even if you have full control over the machine*. The most obvious use is DRM but they mention another one:
> attackers with physical access would also be in the threat model of SGX (e.g. to protect against malicious cloud providers).
> Intel Xtreme Tuning Utility
Protip: This program can be used to undervolt laptop CPUs, which means less heat production. That is great for certain shitty laptop designs (I'm looking at you, Asus Zenbooks from 2016).
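For the curious, software undervolting generally means writing Intel's undocumented MSR 0x150. The bit layout below follows what the open-source `undervolt` tool and the Plundervolt paper reverse-engineered, so treat the encoding and the plane numbering as assumptions, not an official interface:

```python
# Sketch of the voltage-offset encoding for Intel's undocumented MSR 0x150,
# as reverse-engineered by open-source undervolting tools and the Plundervolt
# paper (assumption: this layout is not officially documented by Intel).

def encode_offset_mv(mv: float) -> int:
    """Pack a voltage offset in millivolts (negative = undervolt) into the
    11-bit two's-complement field at bits 21-31, in units of 1/1024 V."""
    units = int(round(mv * 1.024))
    return 0xFFE00000 & ((units & 0xFFF) << 21)

def msr150_write_value(plane: int, mv: float) -> int:
    """Build the 64-bit value written to MSR 0x150 for a voltage plane
    (0 = CPU core, 2 = cache, per the reverse-engineered numbering)."""
    return 0x8000001100000000 | (plane << 40) | encode_offset_mv(mv)

# A -50 mV undervolt on the core plane:
value = msr150_write_value(0, -50)
```

The actual write goes through the `msr` kernel module (`/dev/cpu/*/msr`) as root - which is precisely why this whole attack starts from ring 0.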
> which allows you to do all this from userspace (albeit as root).
> The whole point of SGX was for applications to be able to hide stuff in memory, even from root
Yeah, but if you've already rooted the box, all bets are off anyway.
When the instructions are *start with a rooted box, and as the sysadmin run the protected commands...*, the exploit isn't that impressive.
But, see, this is the crux.
We've reached a time when it's not realistic to assume that your environment, especially a shared environment, is guaranteed root-proof. There'll always be a vulnerability (if not now, then after an update).
SGX was supposed to give users a measure of confidence in this sort of environment: you encrypt the memory the same way you would encrypt the contents of disk (for sensitive data). But it's important to not have the key in a place where root can read it, so you have these enclaves such that only the process that created it can read it.
It's all "defense in depth" (i.e. each is individually not a guaranteed protection, but layering everything makes it exponentially more difficult to break through).
But this vulnerability means that you bypass all this protection.
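To make concrete why one induced fault is catastrophic: Plundervolt extracts RSA keys from a faulted enclave signature via a classic Bellcore-style attack on RSA-CRT. Here's a toy sketch with tiny, well-known primes (the bit-flip is simulated; real keys are 2048+ bits):

```python
import math

# Toy RSA-CRT setup with tiny, well-known primes (real keys are 2048+ bits).
p, q = 104729, 1299709
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def sign_crt(m: int, fault: bool = False) -> int:
    """RSA signing via the Chinese Remainder Theorem. `fault=True` flips one
    bit in the mod-p half, simulating a voltage-induced computation error."""
    sp = pow(m, dp, p)
    sq = pow(m, dq, q)
    if fault:
        sp ^= 1 << 7            # one wrong bit is all it takes
    h = (q_inv * (sp - sq)) % p
    return sq + q * h

m = 123456789
good = sign_crt(m)
assert pow(good, e, n) == m      # the correct signature verifies

# Bellcore attack: a faulty signature is still correct mod q but wrong mod p,
# so s^e - m shares exactly the factor q with n. One fault leaks the key.
bad = sign_crt(m, fault=True)
recovered = math.gcd(pow(bad, e, n) - m, n)
assert recovered == q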
SGX isn't to keep your data safe from privilege escalation exploits. SGX is for media companies to keep their data safe from you, even when your computer is processing it. Maybe someone will use it to help you store your private keys securely one day, but I don't think that's the design goal.
Secure enclaves are *supposed* to be an answer to that. You're right in most cases but the point of SGX is that root is no longer an omnipotent authority. This is more akin to breaking a security model where memory is readable only by root, but you can't actually become root.
I'm not an expert and I feel like I'm missing something. How does the code get into the secure enclave in the first place? Surely the code itself is available to the user, no? And if so, couldn't I just "emulate" the code outside of an enclave to determine what it's doing inside?
The DRM server talks to secret hardware inside the CPU that Intel put there. The secret hardware has a private key for a certificate signed by Intel. It sends the DRM server the certificate, and the DRM server sends back encrypted code and encrypted data, which the CPU won't decrypt unless you set it to SGX mode, and the CPU encrypts/decrypts memory accesses while it's running in SGX mode.
You can't emulate it because you can't get a certificate signed by Intel. At best, you can pass the code and data through to an actual Intel CPU that has one, and the code will run there, but you can't see what it's doing.
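A toy simulation of that flow, with HMAC standing in for the real Intel-signed certificate chain and XOR standing in for real authenticated encryption (every name here is invented; actual SGX remote attestation uses EPID/DCAP asymmetric signatures):

```python
import hmac, hashlib, secrets

# HMAC with a key known only to "Intel" (and fused into the CPU) stands in
# for the real certificate signature chain; XOR stands in for real
# authenticated encryption. Every name here is invented for illustration.
INTEL_KEY = secrets.token_bytes(32)

def cpu_quote(measurement: bytes) -> bytes:
    """The CPU attests: 'a genuine Intel part is running this exact enclave'."""
    return hmac.new(INTEL_KEY, measurement, hashlib.sha256).digest()

def drm_server(measurement: bytes, quote: bytes, secret: bytes) -> bytes:
    """The server releases the secret only to an attested enclave."""
    expected = hmac.new(INTEL_KEY, measurement, hashlib.sha256).digest()
    if not hmac.compare_digest(quote, expected):
        raise PermissionError("attestation failed: not a genuine enclave")
    key = hashlib.sha256(INTEL_KEY + measurement).digest()
    return bytes(a ^ b for a, b in zip(secret, key))   # "encrypt" for the CPU

def enclave_decrypt(measurement: bytes, ciphertext: bytes) -> bytes:
    """Only the real CPU can derive the same key: only it holds INTEL_KEY."""
    key = hashlib.sha256(INTEL_KEY + measurement).digest()
    return bytes(a ^ b for a, b in zip(ciphertext, key))

measurement = hashlib.sha256(b"enclave code pages").digest()
ciphertext = drm_server(measurement, cpu_quote(measurement), b"content key 123!")
# An emulator without Intel's key cannot forge a quote the server accepts:
fake_quote = hmac.new(b"guess", measurement, hashlib.sha256).digest()
```

This is why emulation fails: without the fused key the emulator can run the code, but the server never hands it the encrypted payload.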
>The DRM server talks to secret hardware inside the CPU that Intel put there. The secret hardware has a private key for a certificate signed by Intel. It sends the DRM server the certificate, and the DRM server sends back encrypted code
Sorry if I'm asking for a semester of courses in comp sci, but doesn't this mean that a hacker need only crack the hardware once and then have access to the key, and thus DRM data, forever?
Most of the Intel exploits I have heard of make use of features designed for data centers or enterprises. I don’t know much about modern hardware, but it makes me wonder: Do AMD chips have similar features that are potentially just as exploitable? Maybe engineering for the data center/enterprise first ironically makes the hardware less secure.
IDK about datacenters but for the sort of ETL/modeling/ML we do, AMD chips are significantly worse off. I can't actually tell you how much worse because we haven't bothered trying an AMD chip in so long.
I bet that they also have vulnerabilities, but they are certainly harder to exploit, because AMD chips were largely resistant to the bigger attacks that hit Intel CPUs. You also have to take into account that more research seems to be focused on Intel (maybe because they currently have a higher market share in both the enterprise and consumer domains).
It's also because AMD's equivalent of SGX, called SEV, has been totally broken already. SGX on the other hand is a worthy opponent, especially because so many surprising issues seem to be patchable in the field with microcode changes.
Interesting. Having played around with stuff like ryzen master I thought it was interesting they were allowing this kind of access from outside the uefi firmware. I wonder how long until malware starts overvolting to damage chips, or undervolting to the point of crashes to get a ransom payout.
You'd need to give them root/admin access.
I know that the average user is as smart as a rock from the stone age, but there are much, much easier attack vectors if your requirement is that they execute your program as administrator.
If malware wanted to wreck your computer it could have reflashed the firmware or reformatted the disks or turned down the fans years ago. They want to steal or ransom the data instead. A dead machine can't be ransomed.
A substantial part of x86 CPU maker revenue comes from the computer modding community, who want to be able to fiddle with these voltage settings.
> A substantial part of x86 CPU maker revenue comes from the computer modding community
The number of people that actually do this is minuscule on the scale of Intel's entire operation - pretty much all 'enthusiast' Intel CPUs are basically just Xeons without ECC memory support. The datacentre is where Intel makes its money; the Xeon rejects are then just pawned off to the enthusiast consumers. They don't actually care about them as a segment very much beyond flaunting who has the fastest CPU around.
[These are Intel's sales figures over the last few years](https://www.statista.com/statistics/495928/net-revenue-of-intel-by-segment/) - the 'client computing group' is where their consumer processors lie. But this also includes their 'notebooks' category, [which accounts for 2/3 of their consumer PC sales](https://www.fool.com/investing/2018/09/13/this-is-the-workhorse-of-intels-technology-empire.aspx) (likely even more given AMD is making gains in the desktop space). This shows just how small a share of their profits comes from desktop computers these days. Then take into account that the majority of those desktops will be NUCs or the low-powered kind of thing with a Pentium or an i3 that you'll find in your local BestBuy... I would be surprised if the Gamer/Enthusiast '*I'm going to build my own pc and tinker with it*' segment made up more than 1-2% of their overall profits.
> I would be surprised if the Gamer/Enthusiast 'I'm going to build my own pc and tinker with it' segment made up more than 1-2% of their overall profits.
I really, really, wish game companies would understand this. Unless you’re making an FPS, requiring any enthusiast gfx card for decent graphic quality will cut off most of your potential customers.
Overvolting to damage chips makes no sense - malware isn't made to kill PCs, it's made to profit from them.
In the same way that a human virus doesn't go out of its way to kill you, that's counterproductive, it does so by accident.
There are many examples of pre-WWW era malware designed to break hardware. Setting monitor frequency out of band could cause damage, as could overclocking. And of course BIOS flashing (before there were failsafes), and even trying to stress the harddrive. Over time we developed better hardware that would not self-destruct so readily, and they pretty much went away.
These were mostly made by people wanting to watch the world burn just to prove they could do it, unlike today, when financial gain is a huge motivation. Spreading too far and wide would only make it likely they'd get caught.
Giving a threat just gives people time to safely remove the malware before it can do anything. A scammer isn't going to make much money with a tactic like that.
Besides, in the grand scheme of things, losing a CPU isn't that big of a deal - many people's CPUs are going to be worth a lot less than the ransom (WannaCry was $300-$600 in BTC for reference), and how many people will actually understand the threat in the first place?
You're better off threatening something harder/impossible to get back - their personal files. Hence encryption malware is so effective.
Nobody is going to believe anyone trying to ransom by saying "I'll destroy your CPU if you don't give me money." Unless they do, then you don't have a PC and it doesn't matter.
Makes more sense to lock the PC and give a number to call, like the Microsoft support scams.
Edit: It seems people are skeptical of my assumptions. I urge you to imagine me calling you right now and saying "I know your IP, I have your number, if you don't send 10 Bitcoin to this address in an hour then I will destroy your computer by sending too much voltage to it." Does that really sound plausible to you?
Most people seem to just assume that the computer is already locked, which is not what I am saying. I differentiate that it is certainly more effective to simply lock the user out for the ransom than to destroy their property. If you claim that 'the user will have no choice' you are already assuming the computer is locked, which is what I said is more effective.
This method on its own, without locking the computer, sounds rather baseless and crazy. If you truly think there are people that would just accept that phone call at face value, then you need to help educate them. It's as easy as unplugging an ethernet cable.
I mean, fair, but that's more of a 'revenge tactic' than an incentive to pay up. Again, you're not going to believe somebody online is capable of frying your PC until it happens, and at that point, would you feel inclined to give them money? Maybe if they were threatening enough, but I would assume emotions would prevail and you'd get angry/upset/panicked and call the authorities or something to that extent.
A big reason why Ransomware works is because it has a proven track record of unlocking your computer after you pay up. They know people would stop paying otherwise. The same logic applies here; if it became common for computers to get fried if you don't pay, people would start paying.
Well, my idea is that you turn off your computer until your IT friend arrives. Hardware-damaging ransomware has no leverage because the potential damage is just that - a potential future event that you can prevent by simply turning off the device and letting a professional handle it.
You're referencing the situation with the knowledge/assumption that the hackers/ransomers are good people and will keep their word. When you're in the moment with somebody who obviously has malicious intent, does it really seem rational to think to yourself, "Gee, this random person gained access to my computer and is holding it hostage; they say give them x amount of dollars to go away, so of course I'll pay it"? People aren't googling "what is the rate of ransoms that are successful and result in return of ownership of property?" while they are being held at ransom. I understand your point, but I feel like that's not a rational line of thinking while being held hostage/ransom.
Well, it probably wouldn't work the first time, but once the word spreads around, people might get afraid and pay up. But even then, why bother? Since all this is CPU-specific, it would be a ton of work to implement in order to hit enough victims. A crypto ransom trojan can be hacked together in 2 hrs and will work on most OSes and hardware.
I feel like people aren't going to buy "I've locked your computer and will destroy it unless you pay x$" from some random person/message on their computer. Unless as you say, it becomes very prevalent (at which point I believe intel would be called upon to take action) people aren't going to believe that because it sounds fake. Most people also don't own crypto and unless it's malware targeted at crypto-hodlers (again a small population in the larger scheme) it would require people to sign up for Coinbase or something at which point the victim would probably realize that they have time to ask for help from authorities or something to that effect.
It sounds crazy and impossible, especially since the layman doesn't know about these Intel chip exploits. We're arguing semantics at this point. I could just as easily say "I know a lot of people that wouldn't believe that.", but if you weren't aware of this exploit, and somebody *randomly* called you and said "Give me $10,000 or I'll send a virus that'll destroy your computer!" you wouldn't be at least *a little suspicious?* Really?
I can't really see how it would be useful for that. Ransomware works because it threatens companies (entities with massive amounts of money) with a loss of data (an irreplaceable asset worth massive amounts of money). Replacing even the most expensive CPUs is cheap compared to losing data.
That said, I'm not even sure how this would work. I don't think software can alter the emergency shutdown point for the CPU, and damaging a CPU with high-but-acceptable temperatures takes days/weeks. Ransoming requires that you inform your victim that you've compromised their computer, and "You haven't given us money; please don't shut down this computer for a few weeks" isn't really a valid strategy.
But I don't think undervolting can cause permanent damage, and the victim can just take the computer offline and restart from an uncompromised image. If hackers have such complete control over your system that this is impossible, you have much bigger problems than undervolting.
In the case of undervolting and extortion no more than any other malware (though most people still don't know how to refresh/reset windows).
The overvolting scenario would be to specifically attempt to permanently damage the CPU. Crank up the operating voltage to dangerous levels and hope no one notices. All you have to do is break a couple of transistors out of billions and you've made the chip permanently unreliable. Not all malware is written to make money.
You're right, but that really only applies to half of what I was saying. Ransomware works because you can do the damage (removing access to data) before you notify the victim of what you have done, then you can get the victim to pay you to undo the damage. With thermal damage to the CPU, it takes a long time to do damage\* which is then permanent; if you notify the datacenter before you do the damage, then they just have a small downtime while they go offline, get rid of the malware, and patch the vulnerability.
What *is* reasonable is doing this as cyber warfare, cyber terrorism, or for the lulz.
\* I'm assuming that the thermal trip threshold is not something which can be modified by software, and the temperatures reachable by software meddling can only do major damage if sustained for at least a day. However, I'm no master overclocker, so maybe I'm wrong.
I'm not sure thermal damage is the only way to go.
With the Intel management engine, one might be able to effectively brick a CPU. But I don't know if it's possible to persist malicious code in the IME like that.
Glitching attacks have been around for a very long time - I think they were used to attack satellite TV encryption cards decades ago. As far as I know there's no good way to stop them.
Another interesting variant is that the power usage of chips leaks information.
> stupendously simple and clever hack like this
This is why I enjoy reading about computer security. The attacks are often not that difficult to understand, yet I'd probably never think of them in a hundred years. It's like watching someone solve a puzzle game that I'd given up on.
I love low-level programming, but my god, are certain classes of security vulnerabilities so easy to accidentally create. Heartbleed for example was nothing but a simple buffer overrun. The C language is fantastic in some ways, but modern languages provide so much more protection in that sense. I want to read more about Rust, as I've heard a lot of praise heaped its direction in terms of memory safety while preserving performance.
Don't beat yourself about it.
It's like a senior oncologist saying "when I start to think I know something about the human body...", after reading a neurology article.
Except that they know medicine is unfathomably complex, and we generally don't, when talking about computers.
> endlessly complex in the details.
The devil is in that shit. It's all easy to explain but each time you increase the granularity you exponentially increase the intricacies. Even something as simple as `fork` isn't as simple when you get into the details.
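For instance, here's one of `fork`'s less obvious details: the child inherits a copy of the parent's unflushed stdio buffer, so the same bytes can come out twice (POSIX-only sketch, run under CPython):

```python
import subprocess, sys

# One of fork()'s fine-print details: the child gets a *copy* of the
# parent's unflushed stdio buffer, so the same bytes come out twice.
child_src = r"""
import os, sys
sys.stdout.write("A")   # buffered, not yet written (stdout is a pipe)
os.fork()               # both processes now hold a copy of that buffer
sys.stdout.flush()      # parent and child each flush -> "A" appears twice
"""
out = subprocess.run([sys.executable, "-c", child_src],
                     capture_output=True, text=True).stdout
assert out == "AA"
```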
This kind of glitching attack is becoming more and more common in devices these days, except usually you need physical access to the board in order to execute it (but that's for when you want to dump memory/firmware/keys). It's quite amazing to see it done in software, where this functionality is actually a built in "feature".
>This kind of glitching attack is becoming more and more common in devices these days, except usually you need physical access to the board in order to execute it
Well, I'd phrase that as "as chip makers add more security features intended to stop attacks based on physical access, attackers are putting more effort into breaking those features".
SGX is precisely intended to give me a trusted environment in a machine you control. This is one of several attack vectors that break that trust.
>Well, I'd phrase that as "as chip makers add more security features intended to stop attacks based on physical access, attackers are putting more effort into breaking those features".
In theory, in practice features like that are mostly used to push DRM...
It provides security — I didn't say for whom 😁
Also, I have worked on a since-abandoned project where providing third parties a trusted environment was a requirement and we were considering implementing that functionality in SGX. It's rare, but not *entirely* unheard of.
Yeah, we were looking into implementing a part of our product using SGX since we had some clients interested in that functionality. The technology has its limitations (especially for certain compute or memory intensive workloads) and SGX in particular enforces a lot of restrictions in order to "guarantee" security, but it's definitely got some very interesting applications.
Now if Intel could actually make it secure...
> Now if Intel could actually make it secure...
If you can hold it, you can pwn it. This will always be true. They can make it harder and harder to pwn but they can't make it impossible. Ever.
Maybe the sooner your clients realise this the sooner we can get over DRM.
I used to do tech support for McAfee and pretty much everyone I worked with would sit around during break saying "Why doesn't anyone make a virus that _____? It would be SO MUCH BETTER!"
Nobody admitted to writing them in their spare time, but there were one or two guys who I'm pretty sure contributed.
~~That doesn't really explain it - the ring, iGPU and IMC, etc. should all be in different voltage domains/planes. Nowadays overclockers don't need to touch VCCIN because the FIVRs let us tweak individual domain voltages, and adjusting V^core shouldn't impact any other domain (directly, anyway).~~
~~Perhaps I am misunderstanding which voltages these guys are tweaking, or the domain layout of Intel's chips (in fact, I almost certainly am).~~
Never mind, I've done some extra reading on the SGX and it's clear I misunderstood how it works. I was under the impression it utilised another entity in the system/chip to store keys like a TPM - it isn't.
No, it can be executed remotely
My understanding is yes, but I could be wrong. There's this [article](https://www.bleepingcomputer.com/news/security/clkscrew-attack-can-hack-modern-chipsets-via-their-power-management-features/) that describes how some people performed the attack on ARM chips.
No, it seems you just need root access and a remote code execution exploit. So they found an escalation of privilege bug in XTU, then tried to flip bits.
>Does anyone know what non-Intel architectures are similarly affected?
Considering they're just flipping voltages until there's an error, could be anything.
If intel could have it their way, they would label all vulnerabilities for the last two years under a single name with a boring acronym, like MDS for example. One boring acronym is a lot less scary and makes fewer headlines than a hundred names like spectre, meltdown, rowhammer, etc.
You want proof you say? Check out this page from [intel.com](https://www.intel.com/content/www/us/en/architecture-and-technology/facts-about-side-channel-analysis-and-intel-products.html) that is describing the Spectre, Meltdown, and MDS vulnerabilities without ever using the words 'spectre or meltdown'.
Unless they are hammering DRAM on the CPU, it couldn't be. I skimmed the paper and I can't seem to find where they actually explain the attack; they just say they need control of the voltage about a billion times.
They're hammering the CPU until they cause a fault.
I think the part Intel actually cares about is the escalation of privilege found in their overclocking interface.
The article only mentions that CVE.
> Expect BIOS/microcode updates to allow disabling of undervolting.
"If you do not use SGX, you do not need to do anything. If you do use SGX: Intel has released a microcode update that - together with a BIOS update - allows disabling of the undervolting interface."
That sounds more like the undervolt is only disabled while using SGX, can anyone verify?