Virtual machines (VMs) have revolutionized the IT world. According to a 2016 Gartner report, the impact of server virtualization has “matured rapidly over the last few years, with many organizations having server virtualization rates that exceed 75 percent, illustrating the high level of penetration.” In some organizations, no one even remembers the days when a single application ran on a single physical server. Today, a physical server runs a multitude of applications, each encapsulated in its own virtual machine.
This flexibility, which results from decoupling the operating system running inside the virtual machine from the bare-metal server, has led to tremendous cost savings. Each physical server can run any application, regardless of the OS kernel it requires or other security and compatibility constraints, enabling optimized utilization of the physical hardware. VMs with different operating systems can run on the same physical server – a Windows VM can sit alongside a Linux-based VM, and so on.
This step decoupled the physical from the virtual, providing the ‘ultimate freedom’ – the ability to run a Windows OS on a Linux-based physical server. Today’s data center virtualization management software offers a pool of physical resources that can be shared by dozens of virtual machines. Virtual machines can be live-migrated between physical servers, almost eliminating hardware maintenance windows, and can even be restarted automatically on another host should a hardware failure occur, eliminating the need for a hot-standby server. Among other benefits, this offers unprecedented efficiencies. But were we satisfied with this technological advancement?
Although it is possible to squeeze a large number of VMs into a single physical server, each VM has a relatively large footprint. A VM includes an entire virtual hardware stack, from virtualized BIOS to virtualized network adapters, storage and CPU. And although a VM normally boots much faster than physical equipment, the process can still take seconds or minutes, depending on the operating system.
One of the most serious drawbacks of VMs is hypervisor lock-in, which ultimately leads to vendor lock-in. All the flexibility described above holds only as long as you use the same hypervisor managed by the same management software. As soon as you need to port your application elsewhere, for example to a cloud, the process is usually slow and painful.
Containers come to the rescue
Containers offer an ideal solution to the efficiency and lock-in challenges mentioned above. A container consists of an entire runtime environment: an application plus all its dependencies, libraries and other binaries, and the configuration files needed to run it – bundled into one package.
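As a rough sketch of what that bundle looks like in practice, here is a hypothetical Dockerfile (the base image, file names, and application are illustrative assumptions, not taken from the article):

```dockerfile
# Hypothetical example: everything the app needs travels with it.
FROM python:3.12-slim              # base layer: OS userland plus language runtime
WORKDIR /app
COPY requirements.txt .            # declare the app's library dependencies
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .                      # the application itself
CMD ["python", "app.py"]           # the single process the container runs
```

Building this (e.g. with `docker build -t myapp .`) produces one portable image containing the application, its dependencies, and its configuration – the “one package” described above.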
Containers give each application running on a server its own isolated environment. Since containers don’t have to load an entire operating system, they can be created in a matter of seconds, as opposed to the minutes a virtual machine may take. This speed is extremely important for data centers that need to respond immediately to demand spikes, and for modern applications that use a microservices architecture and scale out by rapidly spawning more instances of the application.
Containers are also highly portable. They run reliably when moved from one computing environment to another, creating consistent operating environments for development, testing and deployment. All they need is a kernel version that supports the runtime environment they hold.
Does this mean the end for VMs?
Some say the advantages mentioned above create an uncertain future for virtual machines. But many experts agree that container technology has not yet reached the stage where it can completely replace VMs.
Security is one of the main concerns with containers. Virtual machines offer the security of a dedicated operating system and harder isolation boundaries, closer to hardware separation. Because each VM runs its own kernel on abstracted physical hardware, the shared attack surface is limited to the hypervisor. In theory, vulnerabilities in a particular operating system version can’t be leveraged to compromise other VMs running on the same physical host. In contrast, since containers share the same kernel, admins and software vendors need to take special care to prevent security issues spreading from adjacent containers.
Take Docker, for example, which uses libcontainer as its container technology. Libcontainer works with five Linux namespaces – Process (PID), Network, Mount, Hostname (UTS), and Shared Memory (IPC) – to isolate containers. That’s great as far as it goes, but there are a lot of important Linux kernel subsystems outside the container. This means that if a user or application gains super-user privileges within the container, the underlying operating system could, in theory, be cracked.
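A quick way to see this on a Linux host is to inspect the namespace entries the kernel exposes for a process – a minimal sketch, assuming a Linux machine with `/proc` mounted:

```shell
# Every process's namespace memberships appear as symlinks under /proc/<pid>/ns.
# A containerized process gets private entries for the namespaced subsystems
# (mnt, net, pid, uts, ipc, ...); everything not listed here is host kernel
# state that the container shares with every other container on the machine.
ls -l /proc/self/ns
```

Running the same command inside a container and on the host shows which subsystems are isolated (the namespace inode numbers differ) and, by omission, how much kernel surface remains shared.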
Compatibility is also an issue. As mentioned above, VMs are completely isolated from the host OS: you can run any guest OS on top of any host OS and it all works perfectly well. In contrast, containers abstract only the application environment, which means that all the containers on a host must run on the same operating system, and sometimes even on a specific kernel version. You can’t, for example, run a Windows-based container on a Linux OS.
The result is that you may need multiple types of bare-metal operating systems for different groups of containers. If you create all your applications from scratch this may be manageable, but legacy applications bring you back to silos of bare-metal servers – an operational challenge that was solved long ago by, not surprisingly, VMs.
Although containers have substantial advantages, some workloads and use cases will still require virtual machines. Sure, you can run containers inside VMs – this is basically what happens when you run your containers in the cloud – but to obtain the greatest benefits discussed above, you need bare-metal containers, meaning containers running directly on top of the host OS.
via Technology & Innovation Articles on Business 2 Community http://ift.tt/2jEMYKu