This feature first appeared in the Summer 2016 issue of Certification Magazine.
In the beginning, there was hardware. Big, chunky, heavy computing hardware that filled entire buildings built with specially reinforced floors to bear the immense weight. These early machines were essentially as smart as the programmers who "taught" them to perform complex calculations, through the use of coded instructions which evolved into the first software programs.
Software caught up to computing hardware, and then became the spur that drove the creation of faster, more powerful machines. One type of software in particular — simulation programs — needed serious hardware to run effectively. Airlines used simulation software to create virtual jumbo jets for pilots to train on; NASA astronauts-in-training spent hundreds of hours in virtual space shuttles powered by simulation software.
As simulation software became more sophisticated, and computer hardware grew more powerful, the inevitable question was asked: Could a physical computer be used to simulate one or more virtual computer systems?
While the earliest work on virtualization goes back several decades, it wasn't until the late 1990s that it made a significant impact on the mainstream enterprise. Virtualization gave IT departments the ability to create virtual machines (VMs), letting them squeeze far more useful work out of each physical server they purchased.
What is a VM?
A virtual machine consists of an actual operating system installed on simulated hardware components. To the virtual machine and the OS powering it, all of the perceived hardware (including CPU, RAM, and hard disk) belongs to it. In reality, the host computer shares its physical memory, CPU, and other components with the VM through the virtualization software.
The VM setup offers a huge advantage, in that the VM's hard disk actually exists as a single file on the host machine. Because of this, a VM can be saved, copied, moved, and restored just like a typical file can.
This save-and-restore feature of VMs was a revolutionary improvement for many different environments. Software testers could use VMs to run prototype programs, and if a VM blew up, it could quickly be restored to its original state, rather than requiring the re-imaging of an entire hard disk.
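As a rough sketch of what that workflow can look like in practice, the following Python snippet (an illustration, not something prescribed by any particular product) drives VirtualBox's VBoxManage command-line tool to take and restore a snapshot. The VM name "test-lab" and the snapshot name are hypothetical, and the commands assume VirtualBox is installed and the VM already exists.

```python
import subprocess

VM_NAME = "test-lab"  # hypothetical VM name; substitute one registered on your host

def vbox(*args):
    """Run a VBoxManage command and raise an error if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Take a snapshot of the VM in its known-good state.
vbox("snapshot", VM_NAME, "take", "clean-baseline")

# ... run risky tests inside the VM ...

# If the VM "blows up", power it off and roll back to the snapshot
# instead of re-imaging an entire physical disk.
vbox("controlvm", VM_NAME, "poweroff")
vbox("snapshot", VM_NAME, "restore", "clean-baseline")
```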
Schools also benefited greatly from virtualization, as VMs enabled the quick and inexpensive setup of classrooms and computer labs.
The VM in your house
It wasn't long before virtualization trickled down to the home computing market. Two popular software programs, VMware Fusion and Parallels Desktop, let Mac owners create virtual Windows PCs on their Apple systems.
Oracle VM VirtualBox, a powerful open source program that runs on Windows, OS X, Linux, and Solaris, can create VMs capable of running versions of Windows, Linux, Solaris, OpenBSD, and OS/2.
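To make the idea of "perceived hardware" concrete, here is a minimal sketch, assuming VirtualBox and Python are installed, that defines a new VM with its own virtual CPUs, RAM, and disk file by calling VBoxManage; the names demo-vm and demo.vdi are hypothetical placeholders.

```python
import subprocess

def vbox(*args):
    """Invoke VBoxManage and stop if the command fails."""
    subprocess.run(["VBoxManage", *args], check=True)

VM = "demo-vm"  # hypothetical VM name

# Register a new VM and give it 2 virtual CPUs and 2 GB of virtual RAM.
vbox("createvm", "--name", VM, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", VM, "--cpus", "2", "--memory", "2048")

# The VM's "hard disk" is just a file on the host (here, a 20 GB demo.vdi).
vbox("createmedium", "disk", "--filename", "demo.vdi", "--size", "20000")
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM, "--storagectl", "SATA",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", "demo.vdi")
```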
Impressed by the potential of virtualization, hardware vendors began to support it in their products. Industry leader Intel created a CPU feature called VT-x which provides tailored hardware assistance to virtualization software, making VMs work more efficiently and reducing resource overhead on the host machine. Chip maker AMD added a similar feature, AMD-V, to its processors.
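On a Linux host, one quick way to see whether the CPU advertises these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo; a small Python check along those lines might look like this.

```python
from pathlib import Path

def hardware_virtualization():
    """Return the virtualization extension the host CPU advertises, if any."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

support = hardware_virtualization()
print(f"Hardware virtualization: {support or 'not detected'}")
```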
Beyond simulated computers
As virtualization has evolved, it has moved beyond the mere simulation of workstations and servers to include the following:
Virtual applications that can run as though they are installed on the client PC.
Storage virtualization, which provides an abstraction layer that separates physical storage devices from the way storage is presented and accessed.
Memory virtualization, which turns multiple networked RAM resources into a single shared memory pool.
Today, virtualization extends to nearly anything of which a virtual version can be made. Virtual servers can host virtual networks built from virtual routers and switches. This may seem like virtual overkill, but it is a viable example of how virtualization is being used to replace traditional networking hardware devices.
Network virtualization enables combining a number of physical networks into one logical network. Alternatively, you can take a single physical network and split it into a number of logical networks. You can even create a virtual network between two virtual machines which exist on the same physical server.
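As a sketch of that last scenario, assuming VirtualBox is the hypervisor, the snippet below attaches two hypothetical VMs named vm-a and vm-b to the same host-internal network so they can communicate without touching any physical switch.

```python
import subprocess

INTERNAL_NET = "lab-net"   # hypothetical internal network name
VMS = ["vm-a", "vm-b"]     # hypothetical VMs already registered on this host

for vm in VMS:
    # Attach each VM's first NIC to the same host-internal network, so the
    # two VMs can talk to each other entirely inside the physical server.
    subprocess.run(
        ["VBoxManage", "modifyvm", vm, "--nic1", "intnet", "--intnet1", INTERNAL_NET],
        check=True,
    )
```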
Put in the simplest terms, virtualization is excellent at taking one physical resource (a server, for example) and carving it into several virtual resources, which saves on the number of physical machines required. Alternatively, it also excels at taking several physical resources (like a large number of networked RAM chips) and making them appear as a single resource.
Virtualization has also revolutionized the economy of network computing resources for small businesses and individual entrepreneurs. Virtual private servers can be leased from service providers for as little as $20/month, with full support and maintenance included.
Virtualization limitation
That said, virtualization is more complicated than just throwing a ton of CPU cores, RAM, and hard disks into a server, and spinning up an army of virtual machines. One of the current challenges of virtualization is application management on VMs. In a perfect world, applications would run as smoothly and consistently on VMs as they do on physical computers.
As it turns out, applications are often finicky. While the operating system loaded on a VM may be quite happily convinced it is running on its own physical computer, applications can often make arcane resource demands that cause VMs to pitch fits. This is particularly true for web applications, which have grown in complexity during the cloud computing revolution.
Software installed on a VM can cause problems such as incompatibilities with the VM's virtual hardware, or unbalanced application workloads that create bottlenecks in the host machine's CPU, RAM, or network bandwidth.
Containing the problems
Much of the discussion concerning the future of virtualization (and virtual machines in particular) centers on the use of containers. A container bundles a software application with all of the code, libraries, and tools necessary for the application to run. This bundle runs consistently in any environment with a compatible container runtime, and it doesn't require the full system emulation of a virtual machine.
In essence, a container makes an application platform-agnostic, taking away the requirement for a virtual version of the app's native OS.
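As a small illustration, assuming Docker is installed along with its Python SDK (the docker package), the snippet below launches an application inside a container; the image name is just an example.

```python
import docker  # the Docker SDK for Python ("pip install docker")

client = docker.from_env()

# The image bundles the application plus every library it needs; the same
# bundle runs unchanged on any host with a container runtime, with no
# guest operating system to boot.
output = client.containers.run(
    "python:3.11-slim",   # example image; any self-contained app image works
    ["python", "-c", "print('hello from a container')"],
    remove=True,          # clean up the container when it exits
)
print(output.decode().strip())
```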
A recent example of the use of containers is Google's plan to make its popular Chromebook computers (which run the very limited Chrome OS operating system) compatible with the full catalog of existing Android apps. This will be done by running the Android apps in a container holding the full Android Framework. The container will let Android apps run on Chrome OS without any virtualization required.
Containers have a key advantage over virtual machines — an app running in a container can still communicate with the OS the container is running on. In Google's case, an Android app in a container can still communicate with Chrome OS to get access to the onboard hardware. This means the Chromebook doesn't have to provide any heavy virtual hardware emulation.
The virtual tomorrow
Perhaps the most interesting item being discussed about the future of virtualization is the possible resurgence of the "big, chunky, heavy computing hardware" that we mentioned in the opening paragraph of this article. Yes, the mainframe computer is trying to come back in style!
Aren't mainframes dead? Not according to some experts, who assert that using mainframes for virtual machine infrastructure provides greater security than using commodity servers. Given the potential costs associated with security breaches, these proponents believe the higher price tag of mainframes is worth it.
To be fair, mainframes have continued to evolve along with the rest of the computing technology industry. In particular, mainframe computing has become more powerful and less expensive, while also becoming easier to administer and maintain.
It is unlikely, however, that mainframes represent the future of virtualization, except perhaps for organizations with very ambitious requirements. The relatively low cost of commodity hardware servers is a powerful incentive for businesses and public service groups to stay with traditional virtualization solutions. After all, virtualization's primary advantage is the reduction of costs associated with purchasing physical servers.
Here is one safe prediction: The future of virtualization is directly related to the future of cloud computing. Virtualization and the cloud go hand in hand, and as technologists come up with bigger and better ways to implement the cloud in our daily lives, virtualization will be called upon to efficiently and affordably enable these new ideas.