Virtualisation is the way forward; there's little doubt about that. It began as a way of saving money, and the planet, by running two or more operating systems, each with its own specific tasks to perform, on the same server hardware. It's not a new concept either: the now legendary VMware was founded in 1998, an age ago in computing terms and long before most people considered computers powerful enough to run multiple virtualised environments on a single hardware layer.
Virtualisation then found its way onto the mainstream PC, notably with Microsoft's Virtual PC (re-released in 2009 as Windows Virtual PC alongside Windows 7). These early Type 2 VMs (Virtual Machines) were limited in functionality, though. Everything was virtualised, including the hardware: the VM core was essentially an emulator for earlier processors and other devices. Anything that ran in a VM therefore couldn't access the full power of the hardware in your computer. This made them slow, poor at accessing peripherals such as printers and USB drives, and not tremendously useful for the majority of tasks.
The other problem with Type 2 VMs was that software often simply wouldn't run in them. Many packages required better hardware than the VM could emulate, or access to peripherals, to work properly. Finally, you always had two operating systems running concurrently even when you were only working in one of them, which pushed up electricity consumption and made running VMs an expensive habit.
This also presented all manner of problems accessing server-side and cloud-ready resources: the VM was almost always cut off from the outside world unless careful and technical configuration took place. Clearly a better solution had to be found.
Eventually new types of VM appeared, including the Type 1. A Type 1 hypervisor runs directly on the computer's hardware rather than on top of a host OS, so guest systems get much closer to the real power of your Intel or AMD chip, even though each still runs in a virtualised environment. Windows 7 went a step further and was the first major version of Windows to include native 'boot from VHD': with a bit of tinkering you can boot your computer directly from a virtual hard disk file, with only the disk being virtual and the OS itself running on the bare metal.
This bootable system has full access to all your hardware and peripherals, and you'd never know a virtual disk was involved. The problem with Windows 7's option, though, is that it's fiddly to configure and of limited usefulness, because native VHD boot is only supported in the Ultimate and Enterprise editions of Windows 7. Apple's Boot Camp is sometimes mentioned in the same breath, but it's really dual-booting rather than virtualisation; its trick is emulating the standard PC BIOS so that Windows can boot on Apple's EFI-based hardware.
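The tinkering in question comes down to adding a boot entry to Windows' Boot Configuration Data store with the bcdedit tool. A rough sketch of that configuration, run from an elevated command prompt on Windows 7 Ultimate or Enterprise; the VHD path and entry name are examples, and bcdedit prints the real {guid} to substitute into the later commands:

```shell
:: Clone the current boot entry; bcdedit prints the new entry's {guid}.
bcdedit /copy {current} /d "Windows 7 (VHD)"

:: Point the new entry at the virtual hard disk file (example path).
bcdedit /set {guid} device vhd=[C:]\vhd\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\vhd\win7.vhd

:: Let Windows redetect the hardware abstraction layer on first boot.
bcdedit /set {guid} detecthal on
```

On the next restart the boot menu offers the new "Windows 7 (VHD)" entry alongside your normal installation.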
But why would you want to do this anyway? From my own experience in support it can be an enormous time-saver. Images can be built on central servers and rolled out quickly and quietly to PCs across a business network with almost no productivity downtime. There's no upgrade process involved; it's simply a case of copying a new file containing the virtualised image over to the PC. This also makes restores very fast in the event that Windows, or other software inside the VM, breaks.
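As a toy illustration of why those restores are so fast (the paths and file contents here are made up), rolling a broken VM back amounts to overwriting its image file with a known-good golden copy:

```shell
# Keep a known-good "golden" image alongside the deployed one.
mkdir -p /tmp/vm-demo
printf 'clean image'     > /tmp/vm-demo/golden.vhd
printf 'corrupted image' > /tmp/vm-demo/pc42.vhd    # the broken machine

# Restoring the PC is a single file copy -- no reinstall, no upgrade process.
cp /tmp/vm-demo/golden.vhd /tmp/vm-demo/pc42.vhd
cat /tmp/vm-demo/pc42.vhd    # prints "clean image"
```

A real image is tens of gigabytes rather than a few bytes, but the operation is the same single copy.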
But you were still stuck with performance issues. Now that dual- and quad-core processors are the norm, however, virtualisation has been able to take the next leap, and new Type 0 hypervisors are appearing.
The main difference between a Type 2 and a Type 0 is that with the latter everything, including the main OS, runs on your core hardware, so you'd never notice the difference between a virtualised and a 'real' OS. It's clever, too, as it can share the hardware in such a way as to maintain several of these Type 0 operating systems at the same time (depending on the virtualisation solution you buy into).
The main benefits of the new Type 0 virtualisation, though, come from the way these machines can be used within their own ecosystem. You can create whole virtualised server systems to work with them (though the Type 0 machines also run perfectly well offline). On the face of it this means nothing you couldn't ordinarily do with a standard installation of your OS. But...
This system does so much more. By allowing full access to your hardware it avoids the costly upgrades that older VM solutions demanded, and the code runs natively on the local machine. There's no need for the VM to live on an expensive server-side monstrosity, and the VM keeps running all the time, even on a laptop out in the field.
I believe the biggest advantage, however, is permitting multiple roles for each PC or laptop. Say your organisation buys two thousand laptops of a certain type for its mobile workers: among them are sales staff, HR professionals, managers, executives and more, and each role requires its own build. In a traditional Windows scenario, where installations take time to build and even longer to deploy (assuming a stable connection to the central server can even be maintained), this is a pain and can be hugely expensive.
In a Type 0 virtualisation environment you can deploy multiple images simultaneously (and securely, away from the prying eyes of the worker) or quietly in the background as they work. The user need never know it's happening, need never lose a minute to downtime, and switching the computer between roles is a simple matter of changing a single setting in a configuration file.
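To make that concrete, here's a hypothetical sketch (the file names, roles and image paths are invented for illustration, not taken from any real product) of how a single setting in a config file could select which image a machine boots into:

```shell
# A per-machine config file records the machine's current role.
mkdir -p /tmp/role-demo
echo 'role=sales' > /tmp/role-demo/machine.conf

# The boot logic simply maps the role to a VM image.
role=$(cut -d= -f2 /tmp/role-demo/machine.conf)
echo "booting images/${role}.vhd"    # prints "booting images/sales.vhd"

# Switching the laptop from sales to HR is a one-line change.
echo 'role=hr' > /tmp/role-demo/machine.conf
role=$(cut -d= -f2 /tmp/role-demo/machine.conf)
echo "booting images/${role}.vhd"    # prints "booting images/hr.vhd"
```

The images themselves never change; only the pointer does, which is what makes the switch instant.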
Type 0 hypervisors are certainly the way forward. Microsoft knows this, and it will finally give the company an excuse to drop legacy code support in Windows 8. That will make the switch to VMs even more important for both businesses and consumers who have older software and hardware that they're either very fond of or have come to rely on.
While true holistic Type 0 solutions are currently thin on the ground, notable exceptions being AppSense and zInstall's forthcoming Zirtu product, we'll see much more of them in the next couple of years. Before too long everything will be virtualised for the benefits of stability, security and dependability, and that will be a future worth embracing.