Hardware RAID Adapters Making A Comeback?
We are constantly reading about how the growing core counts from Intel and AMD will eventually make hardware acceleration devices in a server mostly obsolete. But hang on: I recall hearing something similar back in the 90s, when MIPS – meaningless indication of performance provided by sales people – was the metric of the day. Having just been involved in a key OEM software RAID project, and having watched server-based RAID adapter unit shipments drift downward as more CPU cycles came online, I was surprised to see recent market numbers from Gartner showing the reverse trend. Hardware RAID adapter cards, which were supposed to be slowly dying, are starting to see a resurgence in servers.
So why is this?
Not surprisingly, one of the key suspects looks to be increased virtual server adoption, in particular VMware ESX. Unlike a conventional operating system environment, a dedicated hypervisor environment like ESX doesn’t offer the same “luxurious” facilities for developing a broad range of device drivers for starters, let alone the RAM space to load up complex ones. The whole point of virtual servers is to drive the hardware through a minimal number of absolutely necessary interface types, so no surprise that storage adapter support in the standard ESX distribution is rather slim pickings. This is especially so with the new breed of skinny hypervisors built for minimal ROM and RAM footprints (e.g. running inside the system BIOS itself), which require skinny device drivers, quite unlike a fat, fully featured software RAID driver that could require as much as 1Gbyte of RAM in a single-OS setup. Then there is the user aspect of loading custom drivers via the vSphere command line if VMware doesn’t bundle the one you need with the standard distribution. I’m still coming up my learning curve on ESX, and it is a definite and significant one as I explore the capabilities of my newfound experimental VMware ESXi system carefully (i.e. trying not to kill it), a far cry from the easy-to-install Microsoft approach to standard apps and drivers we’ve all become used to.
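To give a flavour of that command-line step, here is a minimal sketch, assuming a classic ESX/ESXi 4.x host, the vSphere CLI installed on a management machine, and a hypothetical vendor driver bundle named raid-driver-bundle.zip:

    # Query which driver/patch bundles are already installed on the host
    vihostupdate --server esx-host.example.com --username root --query

    # Install the vendor's offline driver bundle (hypothetical file name);
    # the host generally needs maintenance mode and a reboot afterwards
    vihostupdate --server esx-host.example.com --username root \
        --install --bundle /tmp/raid-driver-bundle.zip

Hardly rocket science once you know it, but a far cry from a double-click installer, and one more thing to get wrong on a box you are trying not to kill.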
In the case of hardware RAID versus software RAID, the initial problem is that conventional software RAID just doesn’t fit well into a VMware ESX hypervisor environment, because most of today’s solutions were built with a single OS in mind. Apart from the fact that you can’t get hold of a software RAID stack for ESX, even if you could, it would likely consume significant – possibly excessive – system RAM relative to the hypervisor’s own footprint. Re-enter hardware RAID, which doesn’t really care what OS you are running above it. It can use a simpler, skinny host driver without impacting system resources, and it ports over to the ESX environment more easily because the RAID work runs down on a dedicated hardware-accelerated engine or storage processor.
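To make the contrast concrete, here is a rough sketch using Linux’s md stack as the single-OS software RAID example; device names and hosts are illustrative:

    # Software RAID: the md driver lives in the host OS kernel, so parity
    # and rebuild work consume that kernel's CPU and RAM
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
    cat /proc/mdstat            # RAID state is an OS-specific construct

    # Hardware RAID: the same RAID 5 set built in the adapter firmware just
    # surfaces as one plain SCSI LUN that any OS or hypervisor can consume
    esxcfg-scsidevs -c          # on classic ESX, the volume lists as a single disk

The point being that the md array only exists where an md driver exists, while the hardware RAID volume is indistinguishable from an ordinary disk.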
Another contributing factor to hardware RAID’s rise could be the increased usage of, and focus on, external SAS disk arrays. DAS is certainly making a comeback, especially given that the latest generation of SAS disk arrays can operate at up to 24Gbps over a single cable to the host (four SAS lanes running at 6Gbps each). Having a hardware RAID adapter as the primary connection to a SAS JBOD ensures there is always a consistent virtual interface to the hypervisor layers, and that CPU and RAM resources are not being overtaxed. Sure, a software RAID stack can do the same given enough system resources and CPU cycles, but scalability as you add drives over time makes software RAID harder to manage in an ESX environment; RAM usage grows with every drive added, for starters. Again, not a problem with hardware RAID: the same resources are consistently presented to the hypervisor layers, without significant CPU or RAM impact as drive changes and capacity increases are made.
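For those who like to check the maths, the raw figure and the more sobering usable figure work out roughly as follows (6Gbps SAS uses 8b/10b encoding, so only about 80% of the line rate carries data):

    # Raw bandwidth of a four-lane SAS wide port at 6Gbps per lane
    echo "4 * 6" | bc                  # 24 Gbps raw
    # Apply the 8b/10b overhead and convert bits to bytes
    echo "scale=1; 24 * 0.8 / 8" | bc  # ~2.4 GB/s usable per wide port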
So for hardware RAID, and many other traditional functions that seemed on track for oblivion if you read the multi-core CPU tea leaves of late, maybe things aren’t so bad after all, for storage in particular. VMware gives a new lease of life to performance-critical IO functions that need to run at full speed without stealing CPU cycles or RAM from the hypervisor and/or applications.
If hardware RAID vendors can continue to add enhanced functions such as SSD caching algorithms and other storage virtualization features behind an external switched SAS storage setup, then there is definitely some life left in ye olde RAID engines, at least in the opinion of this blogger.
While software RAID has a solid place in the future, dare I say “long live hardware RAID” (again)?