First of all, best wishes for a happy 2018 from the SWITCHengines team!
If you haven’t been living under a rock, you have heard about security vulnerabilities affecting most Intel (and, in part, most other modern) CPUs in the world. They are called Spectre aka CVE-2017-5715/CVE-2017-5753 and Meltdown aka CVE-2017-5754. The SWITCHengines infrastructure contains such CPUs, so it is affected as well.
We are doing everything in our power to mitigate the consequences of these vulnerabilities for our users (you). Some countermeasures will have negative impact on performance, the extent of which is hard to predict—and will vary over time.
For technical details about the vulnerabilities, please refer to the excellent sources of information that are already out there. An in-depth description can be found in Reading privileged memory with a side-channel by Jann Horn from Google’s Project Zero, one of several security researchers who independently discovered these issues.
Of the vulnerabilities, Meltdown is the more imminent threat: There are impressive proof-of-concept (PoC) exploits, and it is only a question of time until real attackers start exploiting it. Fortunately there are effective workarounds (“mitigations”) available. Spectre affects more hardware and is harder to mitigate. But there are no realistic exploits out there—at least not yet.
Why we should care
Whether a particular vulnerability is relevant depends on your attack model.
Inter-VM and “hypervisor-escape” attacks
For “Infrastructure as a Service” providers such as SWITCHengines (but also Amazon AWS/EC2, Microsoft Azure, Google Compute Engine etc.), the most relevant risk is that a malicious or compromised user VM (virtual machine) could attack other VMs running on the same system, or obtain elevated privileges on our infrastructure itself, which would again put other users at risk.
For you as a user of such an IaaS system, this means that your VMs are at risk of being attacked by other users. In the case of SWITCHengines, we can hope that our users are relatively friendly and refrain from attacking each other. But unfortunately, user VMs do get compromised from time to time, so your VMs are also at risk of being attacked by evil hackers who get into your (otherwise friendly) neighbor’s VMs.
It is actually not clear whether such inter-VM or hypervisor-escape attacks are feasible in our setup with QEMU/KVM hypervisors—QEMU developers suggest they aren’t. But for now we assume that there is a risk.
In principle, the vulnerabilities also apply within your SWITCHengines VMs. If you are the sole user and/or can control which code is run on your machine, you may not care much. But if you share your machine with others whom you don’t trust, or you yourself run code outside your control (e.g. software that others have written and that you haven’t personally reviewed in depth), then you should also worry about such “intra-machine” attacks. In this case, you will want to apply the mitigations outlined below to your own VMs as well.
Status of Mitigation
The root of these vulnerabilities is in the microarchitecture of the affected CPUs, so it will be quite difficult to fix them “at the source”. Instead, suppliers of operating systems and other system-level software (such as virtual machine hypervisors) have started to modify their code to “mitigate” them, i.e. to make them harder to exploit in relevant settings.
As of today [2018-01-05], none of these mitigations have been applied on SWITCHengines; we will update this page as we deploy them.
KPTI (aka KAISER) to mitigate against Meltdown
An important mitigation approach is to better isolate virtual memory mappings between privileged and non-privileged processes, i.e. kernel and user-space. The Linux version of this is called KPTI for Kernel Page-Table Isolation. It has also been referred to as KAISER. This has been mostly integrated into the Linux kernel source (as of 4.15-rc6, with backports to 4.14.11, 4.9.75, and 4.4.110). So kernels with a good workaround against Meltdown should become available for various system distributions in the coming days and weeks.
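Kernels that carry these fixes also report their mitigation status via sysfs. As a minimal sketch (assuming a Linux system; the `vulnerabilities` directory simply does not exist on kernels that predate the patches, which the code treats as “unknown”):

```python
from pathlib import Path

# Patched kernels report mitigation status under this sysfs directory;
# on older kernels the files do not exist at all.
SYSFS = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(name: str) -> str:
    """Return the kernel's reported status for a vulnerability entry
    (e.g. 'meltdown'), or 'unknown' when the kernel does not expose
    the sysfs interface."""
    try:
        return (SYSFS / name).read_text().strip()
    except OSError:
        return "unknown"

def meltdown_mitigated(status: str) -> bool:
    """Interpret a status line: active KPTI shows up as 'Mitigation: PTI',
    an unprotected kernel reports 'Vulnerable'."""
    return status.startswith("Mitigation:")

if __name__ == "__main__":
    for vuln in ("meltdown", "spectre_v1", "spectre_v2"):
        print(vuln, "->", mitigation_status(vuln))
```

On a VM with a patched kernel you would expect `meltdown -> Mitigation: PTI`; an empty or missing directory just means the kernel is too old to know about the issue.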
We’ll install fixed kernels—with due testing—on all relevant systems soon. To activate them, we will have to reboot those servers. Your VMs should keep running despite this, because we will move VMs “around” the reboots using live migration.
For our official images, we will closely monitor our “upstream” distributions and build new images as patched kernels appear. Existing VMs should also eventually pick up patched kernels via the default auto-update mechanism. If you have turned this off, or worry about timely protection, you will have to check for updates yourself, and reboot your system once you have a patched kernel. Note that this paragraph only addresses the “intra-VM” attack case.
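To check whether a VM’s kernel already includes the KPTI backport, one rough heuristic is to compare the running kernel version against the first patched release of its stable series (the version table below comes from the releases mentioned above). Note the caveat: distributions often backport fixes without bumping the upstream version number, so a negative result here really means “check your distribution’s security advisories”.

```python
import re

# First KPTI-patched release per stable series, per the kernel
# versions mentioned above; 4.15 and later are patched from the start.
PATCHED = {
    (4, 14): (4, 14, 11),
    (4, 9): (4, 9, 75),
    (4, 4): (4, 4, 110),
}

def parse_release(release: str) -> tuple:
    """Extract the numeric x.y[.z] prefix from a kernel release string
    such as '4.14.11-generic'."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unrecognized kernel release: {release!r}")
    return (int(m.group(1)), int(m.group(2)), int(m.group(3) or 0))

def has_kpti_backport(release: str) -> bool:
    """True if the release is at least the first KPTI-patched version
    of its stable series; False (assume unpatched) otherwise."""
    version = parse_release(release)
    if version >= (4, 15, 0):
        return True
    minimum = PATCHED.get(version[:2])
    return minimum is not None and version >= minimum
```

Typical usage would be `has_kpti_backport(platform.release())` inside the VM you want to check.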
Mitigations against Spectre
As mentioned before, protection against Spectre is more complicated than for Meltdown.
Most likely, a combination of measures at the level of CPU microcode, operating systems, and possibly compilers will be necessary. Intel has started to publish microcode fixes. Other developers are working on mechanisms such as “retpolines” to defeat abuse of branch prediction. Again, we will apply these patches on our infrastructure as soon as possible. In general, mitigations against Spectre are not as far along as those against Meltdown. This probably means that we will have to update our infrastructure more than once.
The KPTI mitigation is said to be effective against Meltdown, but has a performance cost, in particular for system-call-intensive workloads. This includes most applications doing I/O transactions at a high rate. On the other hand, pure “number crunching” workloads won’t be affected. People are busy running benchmarks of various workloads to estimate the performance loss. Slowdowns in the range of 5%–30% were observed in benchmarks, with outliers in the 50% range.
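To get a feel for how exposed your own workload is, you can measure raw system-call throughput before and after the patched kernel is in place. A minimal sketch (using `os.getppid` because it is one of the cheapest system calls; the absolute number is hardware-dependent and meaningless on its own, only the before/after ratio matters):

```python
import os
import time

def syscalls_per_second(duration: float = 1.0) -> float:
    """Issue a cheap system call in a tight loop for roughly `duration`
    seconds and report the achieved rate. KPTI adds overhead to every
    kernel entry/exit, so this rate drops once it is enabled."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        os.getppid()  # getppid(2): minimal work inside the kernel
        count += 1
    return count / duration

if __name__ == "__main__":
    print(f"{syscalls_per_second():,.0f} syscalls/s")
```

A compute-bound workload will show essentially no change in such a comparison, which matches the observation above that pure number crunching is unaffected.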
The first proposed mitigations against Spectre (IBRS, IBPB, retpolines…) also have potentially large performance costs. We’ll see.
In general, we can expect the performance overhead to decrease over time, as mitigation techniques get optimized and/or supported by low-level features in CPUs, microcode etc.
When deploying mitigations, we will carefully monitor our infrastructure and take appropriate measures to maintain overall performance levels.