The goal of MIKELANGELO is to develop an approach and the accompanying software that will disrupt the traditional HPC and Private Cloud fields.
Currently, so-called hypervisors are used to divide a physical node into one or more virtual nodes (virtual machines, VMs), which allows resources to be shared for an efficient overall usage of the physical infrastructure. However, this mechanism introduces a management layer, which transfers data between the virtual machines and the physical infrastructure and maps the requests and events of the different virtual systems. This complex management layer enables the sharing of physical resources among virtual guest systems but, due to its management overhead, lacks high-performance access to the physical subsystems. In particular, one limiting factor is currently the support for sharing specialized network interfaces such as Infiniband, with its remote direct memory access (RDMA) protocol, among several virtualized guest systems. Although hypervisors such as commercial versions of Xen already support this kind of functionality, integration into the open-source hypervisor KVM is desired, as KVM implements virtualisation through kernel modules in state-of-the-art operating systems, providing more flexibility in terms of usability and system updates.
Full virtualisation of operating systems results in lower overall performance due to the duplication (or more, depending on the number of virtual resources per node) of operating system components. A lightweight guest operating system is required to improve the communication layer between the virtualized operating system and the bare-metal hardware. As a result, shorter boot times and less management overhead enable higher performance.
The global architecture of MIKELANGELO is intended to be as modular as possible and easily expandable. In addition to the modular approach, MIKELANGELO will focus on cross-level optimizations to be as flexible as possible with respect to application execution. The actual HPC hardware environment becomes irrelevant to this flexibility, as it is abstracted by the hypervisor. Further, to improve the overall performance of applications running inside VMs, corresponding modifications will be applied to both sKVM and OSv. Both technologies will remain interchangeable with their counterparts: sKVM with KVM, and OSv with any Linux guest. OSv will thus remain fully compatible with KVM, and sKVM will run any other guest operating system that KVM supports. However, only when both are combined will they provide the desired benefits to their full extent: less CPU overhead, improved I/O and stronger security. These cross-layer optimizations will also serve as a demonstrator for other users of how dedicated solutions may be used to achieve the highest possible improvements.
Guest Operating System:
MIKELANGELO runs an application on many virtual machines (VMs), also known as “guests” of the hypervisor. Each VM needs an operating system to run the application. VMs on the cloud traditionally run the same operating systems that were used on physical machines. But the features that made these operating systems desirable on physical machines are losing their relevance: examples include a familiar single-machine administration interface, support for multiple users and applications, and support for a large selection of hardware.
OSv is a new operating system designed specifically for running a single application on a single VM. It is limited to a single application because the hypervisor already provides isolation between VMs, so we believe an additional layer of isolation inside a VM is redundant and hurts performance. As a result, OSv does not support processes with separate address spaces, but does fully support multi-threaded applications on multi-core VMs.
On the other hand, different features are important for MIKELANGELO: the VM’s operating system needs to be fast, small, and easy to administer at large scale.
The sKVM architecture is aimed at enabling HPC (High-Performance Computing) and big data providers to virtualize their workloads. This abstraction of the actual hardware provides the benefit of a highly flexible design in terms of “compile once, run everywhere”. To accomplish this challenging goal we are developing an optimized KVM-based hypervisor, sKVM, with several improvements to both I/O performance and security.
The role of SCAM (Side-Channel Attack Monitoring/Mitigation) is to provide monitoring, profiling, and mitigation capabilities at varied granularity, in order to identify VMs that are attempting to extract information from co-located VMs via cache side-channels.
The goal of the monitoring module is to collect data on the cache accesses of the virtual machines (VMs) running on the host. Since SCAM has no prior information as to the identity of a potential attacking VM, this module observes the cache activity of all VMs on the host, attempting to extract traces of VM cache activity that can later be profiled. The information gathered by the monitoring module is passed on to the profiling module, which classifies the observed activity as benign or hostile.
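As an illustrative sketch only (the cache geometry, the sampling mechanism and all names here are assumptions, not SCAM's actual implementation), the monitoring step can be pictured as mapping sampled memory accesses of each VM to last-level cache sets and accumulating a per-VM access histogram:

```python
from collections import Counter, defaultdict

# Assumed cache geometry: 64-byte lines, 2048 last-level cache sets.
LINE_SIZE = 64
N_SETS = 2048

def cache_set(phys_addr):
    """Map a physical address to its last-level cache set index."""
    return (phys_addr // LINE_SIZE) % N_SETS

def monitor(samples):
    """Accumulate per-VM histograms of cache-set accesses.

    `samples` is an iterable of (vm_id, phys_addr) pairs, assumed here
    to come from sampling memory accesses with hardware performance
    counters; the real monitoring module may obtain them differently.
    """
    histograms = defaultdict(Counter)
    for vm_id, addr in samples:
        histograms[vm_id][cache_set(addr)] += 1
    return histograms

# Example: VM 0 hammers three cache sets, VM 1 touches memory uniformly.
samples = [(0, s * LINE_SIZE) for s in (3, 5, 7) * 100]
samples += [(1, a * LINE_SIZE) for a in range(300)]
hists = monitor(samples)
```

Such histograms are exactly the per-VM traces handed to the profiling module: a VM that concentrates its accesses on a few sets stands out against a VM with a broad, uniform footprint.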
The role of the profiling module is to analyse the pattern of cache accesses of each VM and assign a score that represents the risk that a VM is conducting a cache-based side-channel attack. The input of the profiling module is the data that the monitoring module collects on each VM. The profiling module may trigger the operation of the mitigation module. The basis for profiling VMs is a common characteristic of all currently known cache-based side-channel attacks, namely priming and probing specific cache sets persistently. The profiling module characterizes the risk posed by a VM by the degree of similarity between the cache accesses of the VM and those of a generic attack.
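One simple way to quantify that similarity, shown here purely as a hedged sketch (this scoring rule is our assumption, not the project's actual profiling metric), is an entropy-based concentration score: a prime-and-probe attacker hammers a few cache sets persistently, so its access histogram has low entropy, while a benign workload tends to spread accesses over many sets:

```python
import math
from collections import Counter

def risk_score(hist, n_sets=2048):
    """Return a score in [0, 1] for how strongly a VM's cache-set access
    histogram resembles a generic prime-and-probe attack.

    A concentrated (low-entropy) histogram scores high; a uniform,
    benign-looking one scores low. Illustrative assumption only.
    """
    total = sum(hist.values())
    if total == 0:
        return 0.0
    entropy = -sum((c / total) * math.log2(c / total) for c in hist.values())
    return 1.0 - entropy / math.log2(n_sets)

attacker = Counter({3: 400, 5: 400, 7: 400})   # three sets, hammered
benign = Counter({s: 1 for s in range(1200)})  # spread over many sets
attacker_score = risk_score(attacker)
benign_score = risk_score(benign)
```

A threshold on this score is then one plausible way for the profiling module to decide when to trigger the mitigation module.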
The objective of the mitigation module is to reduce the effectiveness of cache-based side-channel attacks and prevent them completely where possible. The module takes action based on input from three possible sources. The profiling module may initiate mitigation action against a VM based on the risk score that is assigned to that VM. In addition, user applications may request protection for specific pages in memory even without any indication that there are malicious VMs running on the same hardware platform. This second option is a form of cross-layer interaction that significantly reduces the overhead incurred compared to mitigating side-channel attacks aimed at data extraction from arbitrary memory locations. Finally, the mitigation module may be configured to perform some mitigation operations on the whole system regardless of the presence of malicious VMs.
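The three trigger sources can be sketched as a small dispatcher. All names, the threshold value and the noise-injection response are hypothetical illustrations of the mechanism described above, not the module's actual design:

```python
RISK_THRESHOLD = 0.8  # assumed policy knob for the profiling trigger

class Mitigator:
    """Illustrative sketch of the mitigation module's three inputs:
    profiling-triggered action, application page-protection requests,
    and a system-wide always-on policy. Names are assumptions."""

    def __init__(self, always_on=False):
        self.always_on = always_on   # system-wide mitigation, trigger 3
        self.protected_sets = set()  # cache sets backing protected pages

    def protect_pages(self, cache_sets):
        """Trigger 2: a cross-layer request from a user application to
        protect only the cache sets backing specific memory pages,
        which keeps the mitigation overhead low."""
        self.protected_sets |= set(cache_sets)

    def on_profile(self, vm_id, score, actions):
        """Trigger 1: the profiling module reports a risk score."""
        if self.always_on or score >= RISK_THRESHOLD:
            # e.g. inject noise accesses into the protected sets so a
            # prime-and-probe attacker observes no reliable timing signal
            actions.append(("inject_noise", vm_id,
                            frozenset(self.protected_sets)))

actions = []
m = Mitigator()
m.protect_pages([3, 5, 7])
m.on_profile(vm_id=0, score=0.91, actions=actions)  # above threshold
m.on_profile(vm_id=1, score=0.05, actions=actions)  # ignored
```

The point of the sketch is the asymmetry between the triggers: the application-driven path narrows mitigation to a few cache sets, whereas the system-wide policy applies regardless of any risk score.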
The integration of the MIKELANGELO stack into infrastructures is split into Cloud and HPC computing for testing. These two types of infrastructure are the main targets of MIKELANGELO as they can benefit most from improved I/O performance and security of virtual machines.
The Cloud architecture is based on a state-of-the-art, scalable, high-availability deployment of OpenStack. Two types of deployments are connected and offered. The first is a full cloud deployment, which will be used to test MIKELANGELO in a production setting. The second is a test-bed with fewer nodes, used to run integration tests via continuous integration with the help of Jenkins.
For the HPC integration, a small test-bed cluster mirroring a production environment has been set up. The test-bed cluster consists of a dedicated front-end, a storage server and compute nodes. Infiniband connectivity is available for fast data exchange between compute nodes, as well as a shared file system (NFS), as is present in common HPC clusters. The software used in production environments to schedule batch jobs is also used on this test system. All important aspects of HPC production environments are mirrored, enabling us to validate the new concepts for this field of computing.