The numerunique blog

Meltdown and Spectre: Should We Really Panic?
07/01/2018

Clever names, logos, timing (right after the end-of-year holidays), a dedicated website, media blitz, GIF showing a keylogger demo: the marketing treatment of the latest computer security "vulnerabilities" is truly spectacular!

What’s far less spectacular is what emerges from the analysis of the real risk of these threats, based on currently published information.

Indeed, Intel confirms the "risk" only as a potential one: "software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data." At its core, the risk stems from a processor performance optimization that exploits memory access latency by speculatively executing probable code, without applying memory access protection to that speculative execution.

However, the two reference articles describing Meltdown and Spectre clearly state that these vulnerabilities require the execution of a program on the target machine.

And there’s nothing new here; it’s long been known that the battle is nearly lost against a hacker once they can execute their own programs on the machine being defended. The primary concern of a system administrator is the trustworthiness of their machine’s users.

It’s therefore logical that hosting providers like OVH or Online put so much effort into applying patches, even while noting that they have no evidence these attacks could be carried out in real-world conditions: "To date, OVH has not received any information demonstrating that the concerned vulnerabilities have been exploited outside of a research laboratory setting." Hosting providers know that among the hundreds of thousands of clients with access to their machines, there are certainly more than a few bad actors.

In the "academic" paper describing Spectre, to which no fewer than 10 authors contributed, the combination of techniques used to exploit the hardware vulnerability is impressive, to the point where one might wonder whether the sum of conditions required makes the exploit highly unlikely. At least one critical aspect is unclear: does the technique’s success depend on specific instructions being present in the target program whose in-memory data is to be stolen? The paper provides an example of exploitable code:

if (x < array1_size) y = array2[array1[x] * 256];

It’s good to check that the index x is within the bounds of array1, but why not also verify that the index (array1[x]*256) is within the bounds of array2? If this code is written by the hacker, they can do what they want, but they will likely only have access to their own program’s context, as the article specifies: "The completed attack allows the reading of memory from the victim process." If this code must already exist in another program running on the machine, the odds of finding such code seem extremely low.

Many other conditions or reservations are mentioned or expressed in the article, such as "Kernel mode testing has not been performed, but the combination of address truncation/hashing in the history matching and trainability via jumps to illegal destinations suggest that attacks against kernel mode may be possible."

All this completely changes the reality of the threat.

Unfortunately, this does nothing to mitigate the already irreversible consequences: a tsunami of updates, a greatly enhanced version of planned obsolescence :-(

