Virtualization provides a solid core of benefits — cost savings, system consolidation, better use of resources, and improved administrative capabilities — but it’s important to remember that supporting the goals of the business is the reason IT departments exist in the first place. Virtualizing everything as far as the eye can see without analyzing the consequences is like what comedian Chris Rock said about driving a car with your feet: You can do it, but that doesn’t make it a good idea.
The first step in any virtualization strategy should be envisioning disaster recovery once you’ve put all your eggs in one proverbial basket. Picture how you would need to proceed if your entire environment were down — network devices, Active Directory domain controllers, email servers, etc. What if you’ve set up circular dependencies that will lock you out of your own systems? For instance, if you configure VMware’s vCenter management server to depend on Active Directory for authentication, it will work fine as long as a domain controller is available. But if your virtualized domain controller is powered off, that could be a problem. Of course, you can set up a local logon account for vCenter or split your domain controllers between virtual and physical systems, but the above situation is a good example of how you can paint yourself into a corner.
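Traps like that are easier to spot if you write the dependencies down in machine-readable form and check them mechanically. Here’s a minimal Python sketch of that idea; the component names and the dependency map are hypothetical stand-ins, not a real inventory:

```python
# A simplified model of the scenario above: map each component to what it
# needs in order to come up. All names here are illustrative placeholders.
DEPENDENCIES = {
    "vcenter": ["active_directory"],        # vCenter authenticates against AD
    "active_directory": ["esxi_host_1"],    # the DC is a VM on this host
    "esxi_host_1": ["vcenter"],             # the host is administered via vCenter
    "san_storage": [],
}

def find_cycle(graph):
    """Depth-first search; returns the first dependency cycle found, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:                          # back-edge: a loop
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for start in graph:
        if start not in visited:
            cycle = dfs(start, [])
            if cycle:
                return cycle
    return None

cycle = find_cycle(DEPENDENCIES)
print("Circular dependency:", " -> ".join(cycle) if cycle else "none found")
```

Run against this sample map, it reports the vcenter -> active_directory -> esxi_host_1 -> vcenter loop, which is exactly the lockout scenario described above.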
In my experience, some things just aren’t a good fit for a virtual environment. Here is my list of 10 things that should remain physical entities.
1: Anything with a dongle/required physical hardware
This one is a no-brainer, and it’s been repeated countless times elsewhere, but — like fire safety tips — just because it’s a well-known mantra doesn’t make it less significant. Believe it or not, some programs out there still require an attached piece of hardware, such as a dongle, to work. The licensing scheme requires that hardware to be physically present (to prevent piracy, for instance).
Case in point: An HVAC system for a client of mine ran on a creaking old desktop. The heating-and-cooling program required a serial-attached dongle to administer the temperature, fans, etc. We tried valiantly to virtualize this system in a VMware ESXi 4.0 environment, using serial-port pass-through and even a USB adapter, but no luck. (I have heard this function may work in ESXi 5.) Ironically, VMware Workstation, unlike the ESX environment, did allow the pass-through functionality, but there was little point in hosting a VM on a PC, so we rebuilt the physical system and moved on.
This rule also applies to network devices like firewalls that use ASICs (application-specific integrated circuits) and switches that use GBICs (gigabit interface converters). I have found no reliable way to convert these to a virtual environment. Even if you think you might cobble something together to get it to work, is it really worth the risk of downtime and the administrative headache of a one-off setup like that?
2: Systems that require extreme performance
A computer or application that gobbles up RAM, disk I/O, and CPU cycles (or requires multiple CPUs) may not be a good candidate for virtualization. Examples include video streaming, backup, database, and transaction processing systems. These are all physical boxes at my day job for this reason. Because a virtual machine runs in a “layer” on its host system, there will always be some performance sacrificed to the overhead involved, and that sacrifice likely tips the balance in favor of keeping such systems physical.
You might mitigate the issue by dedicating a host to just the one program or server, but that defeats a core advantage of virtualization: running many images on a single physical server.
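If you want to quantify that overhead rather than guess at it, one quick signal on a Linux guest is CPU “steal” time: cycles the hypervisor handed to other VMs while this guest had runnable work. A minimal Python sketch, assuming a Linux guest with a kernel new enough to report the steal field:

```python
# Minimal sketch: sample CPU "steal" time from /proc/stat on a Linux guest.
# Steal time is CPU the hypervisor gave to other VMs while this guest had
# work to do -- a rough indicator of contention on a shared host.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()       # aggregate "cpu" line
    # Order: user nice system idle iowait irq softirq steal ...
    values = [int(v) for v in fields[1:]]
    return sum(values[:8]), values[7]       # (total jiffies, steal jiffies)

t1, s1 = cpu_times()
time.sleep(5)
t2, s2 = cpu_times()

steal_pct = 100.0 * (s2 - s1) / max(t2 - t1, 1)
print(f"CPU steal over 5s sample: {steal_pct:.2f}%")
```

Sustained steal percentages in the double digits are a strong hint that the workload is fighting its neighbors for CPU and may belong on its own hardware.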
3: Applications/operating systems with license/support agreements that don’t permit virtualization
This one is fairly self-explanatory. Check the license and support contract for anything before you virtualize it. You may find that you can’t do that per the agreement, or if you proceed you’ll be out of luck when it comes time to call support.
If it’s a minor program that just prints out cubicle nameplates and the support agreement doesn’t cover (or mention) virtualized versions, you might weigh the risk and proceed. If it’s something mission critical, however, pay heed and leave it physical.
Which brings me to my next item…
4: Anything mission critical that hasn’t been tested
You probably wouldn’t take your mortgage payment to Las Vegas, put it down at the roulette table, and bet on black. For that matter, you definitely wouldn’t gamble it all on number 7. The same goes for systems or services your company needs to stay afloat that you haven’t tested on a virtual platform. Test first, even if it takes time. Get a copy of the source (use Symantec Ghost or Acronis True Image to clone it if you can). Then develop a testing plan and ensure that all aspects of the program or server work as expected. Do this during off-hours if needed. Believe me, finding problems at 11 PM on a Wednesday night is far preferable to 9 AM Thursday. Always leave the original source as is (merely shut it off, but don’t disconnect/remove/uninstall it) until you’re sure the new destination works as you and your company anticipate. There’s never a hurry when it comes to tying up loose ends.
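A testing plan doesn’t have to be elaborate to be useful. Even a scripted sweep of the network ports an application depends on will catch the most embarrassing failures before your users do. Here’s a minimal Python sketch; the hostname, ports, and descriptions are placeholders for your own checklist:

```python
# Post-conversion smoke test: confirm the cloned VM answers on the ports its
# application actually uses before retiring the physical source. The host
# and port list below are placeholders -- substitute your own checklist.
import socket

CHECKS = [
    ("app-server-v.example.com", 443,  "web front end"),
    ("app-server-v.example.com", 1433, "SQL Server listener"),
    ("app-server-v.example.com", 25,   "SMTP relay"),
]

def port_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = [(h, p, what) for h, p, what in CHECKS if not port_open(h, p)]
for host, port, what in failures:
    print(f"FAIL: {what} ({host}:{port}) is not answering")
print("All checks passed" if not failures else f"{len(failures)} check(s) failed")
```

Run it against the converted VM, and again after its first reboot, while the original box is still sitting there untouched.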
5: Anything on which your physical environment depends
There are two points of failure for any virtual machine — itself and its host. If you have software running on a VM that unlocks your office door when employees swipe their badges against a reader, that’s going to allow them in only if both the VM and its parent system are healthy.
Picture arriving to work at 8 AM Monday to find a cluster of people outside the office door. “The badge reader isn’t accepting our IDs!” they tell you. You deduce a system somewhere in the chain is down. Now what? Hope your master key isn’t stored in a lockbox inside the data center or you’ll have to call your security software vendor. Meanwhile, as employees depart for Dunkin’ Donuts to let you sort out the mess, that lost labor will quickly pile up.
It may not be just security software and devices at stake here. I have a client with a highly evolved VMware environment utilizing clustering and SAN storage. And yet if they clone four virtual machines simultaneously, their virtualized Exchange 2010 Client Access Server starts jittering, even though it runs on another server with a separate disk (datastore). That server is being converted to a physical system to resolve the issue. Yes, further tweaking and analysis could probably fix this, but in my client’s view, solid Exchange connectivity is too valuable to experiment with behind the scenes and hope for the best.
6: Anything on which your virtual environment depends
As I mentioned in the introduction, a circular dependency (such as a virtual domain controller being required to log into the virtual environment) puts you at great risk once the inevitable downtime arrives — and yes, even in clustered, redundant environments that day will come. Power is the big wildcard here, and if you live in the Northeast like me, I bet you’ve seen more than your share of power outages over the past five years.
I grouped this separately from the previous item because it requires a different way of thinking. Whereas item 5 means mapping your physical environment to keep things like badge readers and video cameras up and running, this one means mapping the virtual environment itself: the host systems, virtual images, authentication, network, storage, and even electrical connectivity. Take each item out of the mix and figure out what the impact would be. Set up physically redundant systems (another domain controller, for instance) to cover your bases.
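That thought experiment is easy to mechanize with the same kind of dependency map used in the introduction’s sketch. A small Python example, again with made-up component names, that lists everything knocked out when one piece fails:

```python
# Sketch: given a dependency map, list everything that stops working when
# one component fails. All component names are illustrative placeholders.
DEPENDENCIES = {
    "badge_system":     ["vm_host_a", "active_directory"],
    "exchange_cas":     ["vm_host_a", "san_storage"],
    "active_directory": ["vm_host_b"],
    "vm_host_a":        ["san_storage", "ups_circuit_1"],
    "vm_host_b":        ["ups_circuit_1"],
    "san_storage":      ["ups_circuit_1"],
    "ups_circuit_1":    [],
}

def impact_of(failed, graph):
    """Return every component that transitively depends on `failed`."""
    down = {failed}
    changed = True
    while changed:                  # fixed point: keep adding casualties
        changed = False
        for item, deps in graph.items():
            if item not in down and any(d in down for d in deps):
                down.add(item)
                changed = True
    down.discard(failed)
    return sorted(down)

for component in ("ups_circuit_1", "san_storage", "active_directory"):
    print(f"{component} fails -> also down: {impact_of(component, DEPENDENCIES)}")
```

If a single entry (a UPS circuit, a SAN) takes down nearly the whole list, that’s the component that deserves physical redundancy first.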
7: Anything that must be secured
This is slightly different from rule #5. Any system containing secure information that you don’t want other staff to access may be a security risk if virtualized. You can set permissions on virtual machines to restrict who can control them, but if those staff members can control the host systems, your controls might be circumvented. They might still be able to copy the VM’s files elsewhere, shut down the server, and so on.
The point of this is not to say you should be suspicious of your IT staff, but there may be compliance guidelines or regulations that prohibit anyone other than your group from maintaining control of the programs/data/operating system involved.
8: Anything on which time sync is critical
Time synchronization works in a virtual environment — for instance, VMware can sync time on a virtual machine with the host ESX server via the VMware Tools application, and of course the operating systems themselves can be configured for time sync. But what if those settings are lost, or the host ESX server’s own clock is wrong? I observed the latter issue just a few weeks back. A set of virtual images had to run on GMT for their processing software to work, but the ESX host time was incorrect, leading to a frustrating ordeal trying to figure out why the time on the virtual systems wouldn’t stick.
This problem can be reined in by ensuring all physical hosts use NTP to standardize their clocks, but mistakes can still occur, and settings can be lost or forgotten after a reboot. I’ve seen this happen on several other occasions in the VMware ESX realm, such as after patching. If the system absolutely has to have the correct time, it may be better to keep it off the virtual stage.
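A periodic drift check takes some of the “lost or forgotten” risk off the table. Here’s a minimal Python sketch using the third-party ntplib package (pip install ntplib); it assumes each host answers NTP queries, and the hostnames and the one-second tolerance are placeholders for your own values:

```python
# Quick drift check: query each host's NTP service and report the measured
# clock offset. Assumes the hosts answer NTP; hostnames and the tolerance
# below are placeholders, not a real environment.
import ntplib

HOSTS_TO_CHECK = ["esx-host-1.example.com", "esx-host-2.example.com"]
TOLERANCE_SECONDS = 1.0

client = ntplib.NTPClient()
for host in HOSTS_TO_CHECK:
    try:
        response = client.request(host, version=3, timeout=5)
        status = "OK" if abs(response.offset) <= TOLERANCE_SECONDS else "DRIFTING"
        print(f"{host}: offset {response.offset:+.3f}s [{status}]")
    except Exception as exc:
        print(f"{host}: no NTP response ({exc}) -- check the service")
```

Wire something like this into a scheduled task and drift gets flagged before the processing software notices.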
9: Desktops that are running just fine
In the push for VDI (virtual desktop infrastructure), some companies may get a bit overzealous, defining “what should be virtualized” as “anything that CAN be virtualized.” If you’ve got a fleet of PCs two or three years old, don’t waste time converting them into VDI systems and replacing them with thin clients. There’s no benefit or cost savings in that plan; in fact, it’s a misuse of the benefits of virtualization.
It’s a different story with older PCs that are sputtering along, or systems that are maxed out and need more juice under the hood. But otherwise, if it ain’t broke, don’t fix it.
10: Anything that is already a mess… or something sentimental
On more than one occasion I’ve seen a physical box transformed into a virtual machine so it can be duplicated and preserved. In some situations this has been helpful, but in others it has kept an old, cluttered operating system around far longer than it should have been. For example, a Windows XP machine already several years old was turned into a virtual image. As is, it had been through numerous software updates, removals, re-additions, etc. Fast-forward a few more years (and MORE OS changes) and it’s no surprise that this XP system is now experiencing strange CPU overload issues and horrible response time. A new one is being built from scratch to replace it entirely. The better bet would have been to create a brand-new image from the start and install the necessary software in an orderly fashion, rather than bringing that banged-up OS online as a virtual system with all its warts and blemishes.
The same goes for what I call “sentimental” systems. That label printing software that sits on an NT server and has been in your company for 15 years? Put it on an ice floe and wave good-bye. Don’t be tempted to turn it into a virtual machine to keep it around just in case (I’ve found “just in case” can be the three most helpful and most detrimental words in IT) unless there is absolutely 0% chance of replacing it. However, if this is the case, don’t forget to check rule #3!
Bonus: The physical machines hosting the virtual systems
I added this one in tongue-in-cheek fashion, of course. It’s intended as a reminder that you must still plan to buy physical hardware and know your server specs, performance and storage needs, network connectivity, and other details to keep the servers — and subsequently the virtual systems — in tiptop shape. Make sure you’re aware of the ramifications and differences between what the hosts need and what the images need, and keep researching and reviewing the latest updates from your virtualization providers.
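Even the back-of-the-envelope version of that planning is worth writing down. A toy Python example, with entirely hypothetical figures, of the kind of sanity check to run before ordering hardware:

```python
# Back-of-the-envelope host sizing: do the planned VMs fit on this host with
# headroom for the hypervisor itself? All figures here are hypothetical.
HOST_RAM_GB = 256
HYPERVISOR_OVERHEAD_GB = 8    # reserve for the hypervisor and management agents
TARGET_UTILIZATION = 0.80     # leave 20% headroom for spikes and failover

planned_vms = {               # VM name -> allocated RAM in GB
    "file-server": 16,
    "print-server": 8,
    "app-server-1": 32,
    "app-server-2": 32,
    "test-lab": 24,
}

usable = (HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB) * TARGET_UTILIZATION
allocated = sum(planned_vms.values())
print(f"Usable RAM: {usable:.0f} GB, allocated: {allocated} GB")
print("Fits with headroom" if allocated <= usable
      else "Over budget -- rethink placement")
```

The same arithmetic applies to CPU cores and datastore capacity; RAM is simply the budget that tends to run out first.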
Conclusion
As times change, these rules might change as well. Good documentation, training, and an in-depth understanding of your environment are crucial to planning the best balance of physical and virtual computing. Virtualization is a thing of beauty. But if a physical host goes down, the impact can be harsh — and might even make you long for the days of “one physical server per function.” As is always the case with any shiny new technology (cloud computing, for instance), figure out what makes sense for your company and its users and decide how you can best approach problems that can and will crop up.
By Scott Matteson in 10 Things, August 8, 2013