
Kingston SSDNow V Series and RAID

I’ve been watching the price of SSDs fall for a little while now and was intrigued by the sub-$100 pricing being offered by Kingston on their MLC-based 30GB SSDNow V Series, a 3Gbps SATA II SSD. Once they reached $80 at NewEgg.com, I dove straight in. Actually, I dove right in at the deep end and bought 6 of them, as I wanted to see just how well they performed with an old 3-port NetCell SATA I (1.5Gbps) RAID card I had (some of you may remember NetCell), along with a more recent 3ware (pre-LSI) 8-port hardware SATA II (3Gbps) RAID card, to see just how fast they were capable of going.

I definitely saw some interesting results. I should say upfront that this was a quick and dirty test to see whether you can use these low cost devices in any RAID configuration at all, which is not what they were designed for, so please take that into consideration as you read on; this was more an academic exercise to prove a point about mixing old with new.

Inside the Kingston 30GB SSDs

For fun, I disassembled one of the SSDs at the end of my tests (don’t worry Kingston; I’m not going to try and return it) and was surprised to see lots of fresh air inside. These things are tiny (duh! I hear a few folks saying). Kingston kept the cost low by trimming away much of the length of the circuit board, leaving the Toshiba T6UG1XBG controller and a Micron 9TB17-D9JVD DRAM chip on one side, and four 8GB Toshiba flash devices on the underside. According to the published specs, this combination should deliver streaming speeds of up to 180MB/s read and 50MB/s write, so definitely more on the entry level side versus what you’d see in some of the more expensive models.

Testing with Legacy RAID Cards

I picked two legacy cards to work with: a 3-port PNY NetCell SATA RAID card and a 3ware 9650SE 8-port SATA II RAID card. I was really pleased that the SSD worked just fine with the old PNY NetCell card in one of my older PCI test rigs. The Kingston looked just like a regular SATA drive, as advertised, and the NetCell card auto-configured it nicely into a single drive before booting into Windows. No surprises there.

However, given that my primary performance test system was a SuperMicro X8DT6 PCIe setup with no PCI slots, I quickly moved on to testing with the PCIe x4 Gen1 3ware 9650SE SATA II RAID adapter, as I wanted to see just how well traditional RAID adapters work with SSDs. Having just attended the Flash Summit, where I was told a hundred times over that most of today’s controllers and applications don’t take advantage of SSDs the way they should, I really had to see it for myself. I was also curious why most published benchmarks use host-based software RAID, and after this exercise I am beginning to see why.

Quick Test RAID Performance

Low cost SSDs don’t really work that well in a traditional hardware RAID setup. On the positive side, using the 3ware RAID adapter, a single drive beat its published spec of 180MB/s, coming in at 200MB/s on streaming reads. The performance results were gathered with IOmeter 2006 running under Windows XP, all against raw volumes with a queue depth of 1 and only two block sizes: 1MB for streaming reads and writes, and 512 bytes for random I/Os. The drives were set up in simple RAID0 configurations with the write caches left on.
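To put the numbers that follow into perspective, remember that throughput is simply IOPS multiplied by block size, so the streaming and random results measure very different things. The little Python sketch below is my own illustration of that arithmetic for the two block sizes used here; it is not part of the IOmeter setup.

```python
# Back-of-the-envelope conversions between IOPS and throughput for the two
# IOmeter block sizes used in this test (1MB streaming, 512-byte random).
# Purely illustrative; not part of the original test harness.

def iops_to_mbps(iops, block_bytes):
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_bytes / 1_000_000

def mbps_to_iops(mbps, block_bytes):
    """IOPS required to sustain a given MB/s at a given block size."""
    return mbps * 1_000_000 / block_bytes

print(mbps_to_iops(200, 1_000_000))  # 200MB/s of 1MB streaming reads is only ~200 IOPS
print(iops_to_mbps(4500, 512))       # 4,500 random 512-byte IOPS is a mere ~2.3MB/s
```

In other words, a drive that looks great on 1MB streaming reads is only servicing a couple of hundred operations per second, while thousands of 512-byte IOPS barely register on an MB/s chart.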

While the Kingston SSDs I used were not really designed for RAID applications, I saw no reason why I shouldn’t at least see linear performance increases as I added drives to the RAID controller. The quick results published here in the two charts convinced me, as a novice SSD user, that throwing these into a system without consideration of all the various moving parts doesn’t automatically make for a high performance system. In fact, it could even slow it down.

On streaming reads, 1-2 drives in a RAID0 configuration did reasonably well, coming in significantly higher than ye old hard drives, but performance quickly hit a ceiling at around 550MB/s, with not much benefit past 4 SSDs. Not bad for a 3-4 year old RAID card that was never optimized for low cost flash memory. The writes, however, were a different story, barely making it to 50MB/s (with some very high maximum latencies, over 4 seconds in some cases). Random read IOPS were a real mixed bag, with decent results at around 4,500 IOPS but no real increase as more SSDs were added. Random write IOPS were again a mixed bag and not really related to the number of drives at all, with performance actually falling off beyond 3 SSDs. Nothing conclusive here of course, but it definitely illustrates that the caching algorithms of these older RAID cards were not designed for a 4,000+ IOPS device hanging off each port!
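For what it’s worth, the read scaling behaviour can be modelled crudely as ideal linear RAID0 scaling capped by whatever the controller can move. The Python sketch below is exactly that naive model, with the 200MB/s per drive and roughly 550MB/s ceiling taken from my own observations above rather than from any 3ware specification; it flattens out after a few drives, much like the charts do.

```python
# Naive RAID0 streaming-read scaling model: ideal linear scaling per drive,
# capped by a fixed controller ceiling. Both figures are observations from
# this quick test, not values derived from the 9650SE data sheet.

PER_DRIVE_MBPS = 200           # single Kingston drive, streaming reads (observed)
CONTROLLER_CEILING_MBPS = 550  # where the 3ware 9650SE topped out (observed)

def raid0_streaming_read_mbps(num_drives):
    """Expected aggregate streaming-read MB/s for a simple RAID0 set."""
    return min(num_drives * PER_DRIVE_MBPS, CONTROLLER_CEILING_MBPS)

for n in range(1, 7):
    print(f"{n} drive(s): ~{raid0_streaming_read_mbps(n)} MB/s")
```

As an aside, a PCIe x4 Gen1 slot is nominally good for about 1GB/s in each direction, so a plateau at roughly 550MB/s suggests the bottleneck sits in the RAID processor and its firmware rather than on the bus.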

Conclusion

For me, this exercise with legacy RAID components confirms what we heard at the Flash Summit: even with all the excellent work going on in the industry to make SSDs perform to their true potential, it will be a while before the whole PC eco-system can take full advantage of them. While these devices work well as low cost SSDs for entry level single drive systems, it’s unlikely they can offer much when used with legacy RAID adapter cards. A lot of effort has gone into making a flash drive look just like a regular hard disk drive, which has fooled a few of us end users into thinking it’s just a faster hard drive, whereas it’s a very different beast. The lesson learned here is that applications, software drivers and of course RAID firmware all need to be rewritten to become SSD literate. Bottom line: if you want to RAID SSDs, use a modern RAID card that already is. LSI and Adaptec are moving in this direction with their built-in SSD caching technology, but the general consensus is that hardware RAID engines in all types of equipment are going to need an overhaul, given the quantum leap in performance of SSDs over traditional hard drives.

Hardware RAID Adapters Making A Comeback?

We are constantly reading about how the number of cores being offered by Intel and AMD will eventually make hardware acceleration devices in a server mostly obsolete. But hang on; I recall hearing something similar back in the 90s when MIPS (meaningless indication of performance provided by sales people) was the in metric. Having just been involved in a key OEM software RAID project, and having watched server RAID adapter unit shipments trend steadily downward as more CPU cycles come online, I recently saw market numbers from Gartner that showed the reverse. Hardware RAID adapter cards, which were supposed to be slowly dying off, are starting to see a resurgence in servers.

So why is this?

Not surprisingly, one of the key suspects looks to be increased virtual server adoption, VMware ESX in particular. Unlike a conventional operating system, a dedicated hypervisor environment like ESX doesn’t have the same “luxurious” facilities for supporting a broad range of device drivers, for starters, nor the spare RAM to load up complex ones. The whole point of virtual servers is to reduce the hardware to a minimal set of absolutely necessary interface types, so it’s no surprise that the pickings are rather slim when it comes to storage adapters in the standard ESX distribution. This is especially so with the new breed of skinny hypervisors capable of operating in minimal ROM and RAM footprints (e.g. inside the system BIOS itself), which call for skinny device drivers rather than something like a fat, fully featured software RAID driver that could require up to 1GB of RAM in a single-OS setup. Then there is the user side of loading custom drivers via the vSphere command line if VMware doesn’t bundle the one you need with the standard distribution. I’m still coming up the ESX learning curve, and it is a definite and significant one as I explore the capabilities of my newfound experimental VMware ESXi system and try to get things right (i.e. not kill it), versus the easy-to-install Microsoft approach for standard apps and drivers we’ve all become used to.

In the case of hardware RAID versus software RAID, the initial problem is that conventional software RAID just doesn’t fit well into a VMware ESX hypervisor environment, because most of today’s solutions were built with a single OS in mind. Apart from the fact that you can’t get hold of a software RAID stack for ESX, even if you could, it would likely take up significant (possibly excessive) system RAM relative to the overall hypervisor footprint. Re-enter hardware RAID, which doesn’t really care what OS you are running above it. It can use a simpler, skinny host driver without impacting system resources, and it ports over to the ESX environment more easily because the RAID work runs on a dedicated hardware accelerated engine or storage processor.

Another contributing factor to hardware RAID’s resurgence could be the increased usage of, and focus on, external SAS disk arrays. DAS is certainly making a comeback, especially given that the latest generation of SAS disk arrays can operate at up to 24Gbps over a single cable to the host (four SAS channels running at 6Gbps each). Having a hardware RAID adapter as the primary connection to a SAS JBOD ensures that there is always a consistent virtual interface to the hypervisor layers and that CPU and RAM resources are not being overtaxed. Sure, a software RAID stack can do the same given enough system resources and CPU cycles, but scalability is the concern: in an ESX environment, software RAID’s RAM usage grows as you add drives over time, which makes it harder to manage. Again, not a problem with hardware RAID: the same resources are consistently presented to the hypervisor layers, without significant CPU or RAM impact, as drives are changed and capacity is added.
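As a quick sanity check on that 24Gbps headline figure, the arithmetic below works out what a four-lane 6Gbps SAS wide port delivers in usable bandwidth, assuming the standard 8b/10b line encoding and ignoring SAS protocol overhead.

```python
# Rough usable-bandwidth estimate for a 4-lane (wide port) 6Gbps SAS link,
# assuming 8b/10b line encoding and ignoring SAS protocol overhead.

LANES = 4
LINE_RATE_GBPS = 6.0          # per-lane signalling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 10 line bits carry 8 data bits

raw_gbps = LANES * LINE_RATE_GBPS
payload_mbps = raw_gbps * 1e9 * ENCODING_EFFICIENCY / 8 / 1e6

print(f"Raw wide-port rate: {raw_gbps:.0f} Gbps")   # 24 Gbps
print(f"Usable payload: ~{payload_mbps:.0f} MB/s")  # ~2400 MB/s
```

Call it roughly 2.4GB/s of payload over a single cable to the JBOD, which is plenty of headroom for a shelf of spinning disks.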

So for hardware RAID, and many other traditional functions that looked to be on the track to oblivion if you read the multi-core CPU tea leaves of late, maybe things aren’t so bad after all, for storage in particular. VMware creates a new lease of life for I/O functions that really need to run at maximum performance without stealing CPU cycles or RAM from the hypervisor and/or applications.

If hardware RAID vendors can continue to add enhanced functions such as the SSD caching algorithms and other storage virtualization functions behind an external SAS switched storage setup, then there is definitely some life left in ye old RAID engines, at least in the opinion of this blogger.

While software RAID has a solid place in the future, dare I say “long live hardware RAID” (again)?