
VMware ESXi at Home

What started out as a simple experiment to help me learn more about VMware ESX has turned into a full-blown setup running my existing Windows Home Server alongside two Linux servers, one hosting an online FEAR Combat game server (for the kids of course) and the other a private WordPress website development environment, all on a single Dell T110 server.

I can now create and tear down “server sandboxes” right next to my “leave alone” servers (i.e. Windows Home Server), given the relative ease with which I can spin up new virtual machines. High definition video streaming still struggles and needs more experimentation, but standard resolution video and audio work fine so far. Virtual servers certainly seem to be well within the reach of the tekinerd and the small home office setup.

In addition to the Dell server, I built a low-cost iSCSI box using the free Openfiler software and the shoebox-sized VIA Artigo A2000 computer to expand the storage available to ESX, primarily for the virtual Windows Home Server, which needed extra capacity to handle backups of every PC in the house. Details of the final setup are included below, with a more detailed writeup on the Tekinerd Server Pages at http://tekinerd.com/server-pages/at-home-with-vmware-esxi/.

Hardware:

  • Dell T110 server (2x 160GB drives in my particular setup), 2GB DRAM (~$399 special at Dell)
  • VIA Artigo A2000 for the iSCSI storage box with 2x WD 500GB drives (~$350 all in)

Software:

  • Dell: VMware ESXi v4 (free download from VMware)
  • Dell: Client OS#1: Microsoft Windows Home Server ($99)
  • Dell: Client OS#2: OpenSUSE 11.2 ($0) set up as a FEAR Combat Server ($0)
  • Dell: Client OS#3: OpenSUSE 11.2 ($0) set up as a WordPress Development Server ($0)
  • VIA Artigo A2000: Openfiler v2.3 ($0) configured with 2 iSCSI targets (317GB + 465GB available); see the attach sketch below this list
  • Laptop: VMware vSphere Client software ($0)
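
For the curious, here is a minimal sketch of how one of the Openfiler iSCSI targets gets attached to the ESXi host using the ESX(i) 4-era console tools. The target IP address and vmhba adapter name below are placeholders, so substitute your own values rather than treating this as gospel:

    # Sketch only: 192.168.1.50 and vmhba33 stand in for the Openfiler
    # box's address and the host's software iSCSI adapter.

    # 1. Enable the ESX software iSCSI initiator.
    esxcfg-swiscsi -e

    # 2. Add the Openfiler box as a SendTargets discovery address.
    vmkiscsi-tool -D -a 192.168.1.50 vmhba33

    # 3. Rescan the adapter so the new iSCSI LUNs show up.
    esxcfg-rescan vmhba33

Once the LUNs are visible, they can be formatted as VMFS datastores from the vSphere Client and handed to the virtual Windows Home Server. (The same steps can also be done point-and-click from the vSphere Client’s storage adapter settings.)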

Hardware RAID Adapters Making A Comeback?

We constantly read about how the growing number of cores offered by Intel and AMD will eventually make hardware acceleration devices in servers mostly obsolete. But hang on: I recall hearing something similar back in the 90s, when MIPS (“meaningless indication of performance provided by sales people”) was the “in” metric. Having just been involved in a key OEM software RAID project, and having watched server-based RAID adapter unit shipments trend downward as more CPU cycles came online, I was surprised to see recent market numbers from Gartner showing the reverse. Hardware RAID adapter cards, which were supposed to be dying a slow death, are seeing a resurgence in servers.

So why is this?

Not surprisingly, one of the key contributing suspects is increased virtual server adoption, in particular VMware ESX. Unlike a conventional operating system, a dedicated hypervisor environment like ESX doesn’t offer the same “luxurious” facilities for developing a broad range of device drivers, nor the spare RAM to load up complex ones. The whole point of virtual servers is to reduce the hardware to a minimal number of absolutely necessary interface types, so no surprise that the pickings for storage adapters in the standard ESX distribution are rather slim. This is especially so with the new breed of skinny hypervisors that run in minimal ROM and RAM footprints (e.g. inside the system BIOS itself) and require skinny device drivers, not something like a fat, fully featured software RAID driver that can demand as much as 1GB of RAM in a single-OS setup.

Then there is the user side: loading custom drivers via the vSphere command line when VMware doesn’t bundle the one you need with the standard distribution. I’m still climbing the ESX learning curve, and it is a definite and significant one, as I explore the capabilities of my newfound experimental VMware ESXi system carefully enough to get it right (i.e. not kill it), versus the Microsoft easy-to-install approach for standard apps and drivers we’ve all become used to.
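
To make the driver point concrete, here is a hedged sketch of what installing a vendor storage driver on an ESXi 4 host looks like from the vSphere CLI. The host name, credentials and bundle path are made-up placeholders (your vendor’s offline bundle will have its own name), so treat this as illustrative rather than authoritative:

    # Illustrative only: host, user and bundle path below are placeholders.
    # Install a vendor storage-driver offline bundle on an ESXi 4 host,
    # run from a management workstation with the vSphere CLI installed.
    vihostupdate --server esxi-host.local --username root \
                 --install --bundle /tmp/vendor-raid-driver-offline-bundle.zip

    # Query installed bulletins to confirm the driver bundle was applied.
    vihostupdate --server esxi-host.local --username root --query

Hardly rocket science once you know the incantation, but a far cry from the double-click driver installers most of us are used to.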

In the case of hardware RAID versus software RAID, the initial problem is that conventional software RAID just doesn’t fit well into a VMware ESX hypervisor environment, because most of today’s solutions were built with a single OS in mind. Apart from the fact that you can’t get hold of a software RAID stack for ESX at all, even if you could, it would likely consume a significant, possibly excessive, share of system RAM relative to the overall hypervisor functions. Re-enter hardware RAID, which doesn’t really care what OS you are running above it: it needs only a simple, skinny host driver, barely touches system resources, and ports over to the ESX environment more easily because the RAID work runs on a dedicated hardware-accelerated engine or storage processor.

Another contributing factor to hardware RAID’s revival could be the increased usage of, and focus on, external SAS disk arrays. DAS is certainly making a comeback, especially given that the latest generation of SAS disk arrays can operate at up to 24Gbps over a single cable to the host (four SAS channels at 6Gbps each). Having a hardware RAID adapter as the primary connection to a SAS JBOD ensures there is always a consistent virtual interface to the hypervisor layers and that CPU and RAM resources are not being overtaxed. Sure, a software RAID stack can do the same given enough system resources and CPU cycles, but its RAM footprint grows as drives are added over time, and that scalability concern makes software RAID harder to manage in an ESX environment. Again, not a problem with hardware RAID: the same resources are consistently presented to the hypervisor layers, with no significant CPU or RAM impact as drive changes and capacity increases are made.

So for hardware RAID, and for many other traditional functions that looked to be on the track to oblivion if you read the multi-core CPU tea leaves of late, maybe things aren’t so bad after all, for storage in particular. VMware has created a new lease of life for performance-critical I/O functions that really need to run at maximum speed without stealing CPU cycles or RAM from the hypervisor and its applications.

If hardware RAID vendors can keep adding enhanced functions such as SSD caching algorithms and other storage virtualization features behind external switched SAS storage setups, then there is definitely some life left in ye olde RAID engine, at least in the opinion of this blogger.

While software RAID has a solid place in the future, dare I say “long live hardware RAID” (again)?