Upgrading VMware in the Enterprise (5.0 to 5.1) End Game
Let me bottom-line this (for all three readers who have been tracking this upgrade): until HP releases Update 1 on their ESXi 5.1 installation disc, I'm stuck manually upgrading the hosts with a combo plate of fresh installations and host profiles.
The Virtual Infrastructure Challenge
Here’s the thing about blade servers and the enclosures in which said blades reside: you’re a slave to the manufacturer for firmware and driver updates and to what they think should be in the enclosures. For example, our HP Blade enclosures (c7000 series) are happier when all the blades are from the same generation. Back in the day (like, 5 years ago- that’s what “back in the day” means to IT nerds), you might have been able to get CAPEX for blade replacement. Hardware was improving at a much faster rate than virtualization platforms were improving at resource management and optimization, so a business ROI was in the “possible” category. Now, it’s more about OPEX, which helps on the software side but slows the hardware side.
So, what to do with “older” hardware infrastructures that are risky to update because there are multi-generational blades in one enclosure? It’s a tough proposition. The hardware and virtualization vendors have to coordinate to make sure their APIs, firmware, and drivers don’t overlap or otherwise beat the crap out of each other. When they don’t, the infrastructure ends up falling apart, and you have a multi-vendor conference call going on. HP and VMware are locked in pretty good, but timing is usually the issue. In my case, I have to wait for HP to catch up.
What we’re getting closer to is generic, stateless hardware on the resource side (CPU, RAM, NICs), software-based networking, and extremely fast SANs. The infrastructure components are being peeled off one at a time and managed at the software level.
The enclosure software can control the blades on a new virtualization level, apart from the base hypervisor installed on the server. Enclosure software uses CNAs (Converged Network Adapters) to tell the server what kind of network adapters it thinks it has (1Gb Ethernet, 10Gb Ethernet, FC, FCoE, iSCSI) and how many, where to boot from (local disk or SAN), what the BIOS settings are, and what kinds of disks it has (if any). The enclosure uses software-based networking to throttle bandwidth between itself and the physical LAN and SAN, all the while using algorithms to dynamically give and take bandwidth among its own virtualized networks, which improves overall throughput. The enclosure can build profiles, so when a new blade is attached it is built automatically with all the software and virtual hardware it needs to function. It’s a completely new level of virtualized hardware and network on top of the base hypervisor.
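To make the profile idea concrete, here's a minimal sketch of the concept in Python. This is my own illustrative model, not HP Virtual Connect's actual API: the class names, fields, and the `on_blade_inserted` hook are all hypothetical. The key behavior it shows is that the profile belongs to the bay, not the physical blade, so a replacement blade comes up configured automatically.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BladeProfile:
    """Hypothetical server profile, loosely modeling what enclosure
    software manages per bay: virtual NICs, boot target, BIOS settings."""
    name: str
    nic_type: str = "10GbE"      # e.g. 1GbE, 10GbE, FC, FCoE, iSCSI
    nic_count: int = 2
    boot_target: str = "SAN"     # "local" or "SAN"
    bios_settings: dict = field(default_factory=dict)

@dataclass
class Blade:
    bay: int
    profile: Optional[BladeProfile] = None

class Enclosure:
    """Assigns the bay's profile to whatever blade lands in that bay."""
    def __init__(self) -> None:
        self.bay_profiles: dict = {}

    def assign_profile(self, bay: int, profile: BladeProfile) -> None:
        self.bay_profiles[bay] = profile

    def on_blade_inserted(self, blade: Blade) -> Blade:
        # The profile follows the bay, not the physical hardware.
        blade.profile = self.bay_profiles.get(blade.bay)
        return blade

enclosure = Enclosure()
enclosure.assign_profile(3, BladeProfile(name="esxi-host", boot_target="SAN"))
new_blade = enclosure.on_blade_inserted(Blade(bay=3))
print(new_blade.profile.name, new_blade.profile.boot_target)  # esxi-host SAN
```

The real products carry far more state (WWNs, MAC addresses, firmware baselines), but the bay-owns-the-profile design is what makes blade swaps close to stateless.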
This can make your infrastructure very complex and confusing. The enclosure software has to work with VMware or Hyper-V through APIs so admins can see what the hell is going on. It’s not inconceivable to have just two enclosures with 4 blades each, two switches, and a SAN housing an extremely complex infrastructure with hundreds of virtual machines and data flowing from one virtualized space to another and then finally out to physical switches.
The newer enclosures have some of this functionality. If you choose to go with enclosures, make sure the administration tools meet most of your needs; it’s tough following those packets around once they enter the virtualized ether. Make sure the enclosure and the blades can be upgraded together and stay in lockstep. Find out if you can mix and match blade sizes and models in the same enclosure to give yourself some flexibility.
All this and you still have to make sure that your Enclosure and your Hypervisor play nice when it comes to software updates.
There’s no turning back. Learn your virtualization and prepare for administration options that back in the day seemed impossible.