Kingston SSDNow V Series and RAID

I’ve been watching the price of SSDs fall for a little while now and was intrigued by the sub-$100 pricing being offered by Kingston on their MLC-based 30GB SSDNow V Series, a 3Gbps SATA II SSD. Once they reached $80, I dove straight in at the deep end and bought six of them, as I wanted to see just how fast they could go with an old 3-port NetCell SATA I (1.5Gbps) RAID card I had (some of you may remember them), along with a more recent 3ware (pre-LSI) 8-port SATA II (3Gbps) hardware RAID card.

I definitely saw some interesting results. I should say upfront that this was a quick and dirty test to see whether you can use these low cost devices in a RAID configuration, something they were not really designed for. Please take that into consideration as you read on; this was more an academic exercise to prove a point about mixing old with new.

Inside the Kingston 30GB SSDs

For fun, I disassembled one of the SSDs at the end of my tests (don’t worry Kingston; I’m not going to try and return it) and was surprised to see lots of fresh air inside. These things are tiny (duh! I hear a few folks saying). Kingston kept the cost low by removing much of the length of the circuit card, which carries the Toshiba T6UG1XBG controller and a Micron 9TB17-D9JVD DRAM device on one side, with four 8GB Toshiba flash devices on the underside. According to the published specs, this combination should deliver streaming speeds of up to 180MB/s read and 50MB/s write, so definitely more on the entry level side versus what you’d see in the more expensive versions.

Testing with Legacy RAID Cards

I picked two legacy cards to work with: a 3-port PNY NetCell SATA RAID card and a 3ware 9650SE 8-port SATA II RAID card. I was really pleased that the SSD worked just fine with the old PNY NetCell card in one of my older PCI test rigs. The Kingston looked just like a regular SATA drive, as advertised, and the NetCell card auto-configured it nicely into a single drive before booting into Windows. No surprises there.

However, given that my primary performance test system was a SuperMicro X8DT6 PCIe setup with no PCI slots, I quickly moved on to the PCIe x4 Gen1 3ware 9650SE SATA II RAID adapter, as I wanted to see just how well traditional RAID adapters work with SSDs. Having just attended the Flash Memory Summit and been told a hundred times over that most of today’s controllers and applications don’t take advantage of SSDs the way they should, I really had to see it for myself. I was also curious why most benchmarks used host-based software RAID, and after this exercise I am beginning to see why.

Quick Test RAID Performance

Low cost SSDs don’t really work that well in a traditional hardware RAID setup. On the positive side, using the 3ware RAID adapter, a single drive outperformed its published spec of 180MB/s, coming in at 200MB/s on streaming reads. The performance tests were run with IOmeter 2006 under Windows XP, all against raw volumes with a queue depth of 1 and only two block sizes: 1MB for streaming reads and writes, and 512 bytes for random I/O. The drives were set up in simple RAID0 configurations with the write caches left on.

While the Kingston SSDs I used were not really designed for RAID applications, I saw no reason why I shouldn’t at least see linear performance increases as I added drives to the RAID controller. The quick results in the two charts here convinced me, as a novice SSD user, that throwing these into a system without considering all the various moving parts doesn’t automatically make for a high performance system. In fact, it can even slow it down. While 1-2 drives in a RAID0 configuration did reasonably well on streaming reads, coming in significantly higher than ye old hard drives, performance quickly hit a ceiling at around 550MB/s, with little benefit past 4 SSDs. Not bad for a 3-4 year old RAID card that wasn’t optimized for low cost flash memory. The writes, however, were a different story, barely making it to 50MB/s (with some very high maximum latencies, >4 seconds in some cases). Random-read IOPS were a real mixed bag, with decent results at around 4500 IOPS but never really increasing as more SSDs were added. Random writes were again mixed and not really related to the number of drives at all, with performance actually falling off beyond 3 SSDs. Nothing conclusive here of course, but it definitely illustrates that the caching algorithms of the older RAID cards were not designed for a 4000+ IOPS device hanging off each port!
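For what it’s worth, the streaming-read plateau can be captured with a trivial model. This is my own sketch, not anything from the charts themselves; the 200MB/s and 550MB/s figures are the measurements described above:

```python
# Toy model of the streaming-read scaling seen on the 3ware card:
# ideal RAID0 throughput is linear in drive count, but the measured
# numbers flatten out at roughly 550MB/s. All figures in MB/s.
SINGLE_DRIVE_READ = 200   # measured single-drive streaming read
ADAPTER_CEILING = 550     # approximate plateau observed with 4+ SSDs

def ideal_raid0_read(drives: int) -> int:
    """What linear RAID0 scaling would predict."""
    return drives * SINGLE_DRIVE_READ

def modeled_read(drives: int) -> int:
    """Linear scaling capped by the adapter/bus ceiling."""
    return min(ideal_raid0_read(drives), ADAPTER_CEILING)

for n in range(1, 7):
    print(f"{n} drives: ideal {ideal_raid0_read(n)}, modeled {modeled_read(n)}")
```

The gap between the two functions at 6 drives (1200MB/s ideal versus ~550MB/s actual) is exactly the money left on the table by the older controller.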


This exercise with legacy RAID components confirms for me what we heard at the Flash Memory Summit: even with all the excellent work going on in the industry to make SSDs perform to their true potential, it will be a while before the whole PC ecosystem can take full advantage of them. While these devices work well as low cost SSDs for entry level single drive systems, it’s unlikely they can offer much when used with legacy RAID adapter cards. A lot of effort has gone into making a flash drive look just like a regular hard disk drive, which has fooled a few of us end users into thinking it’s just a faster hard drive, whereas it’s a very different beast. The lesson learned here is that applications, software drivers and of course RAID firmware all need to be rewritten to ensure they are SSD literate; the bottom line is you need a modern RAID card that is. LSI and Adaptec are moving in this direction with their built-in SSD caching technology, but the general consensus is that hardware RAID engines in all types of equipment are going to need an overhaul given the quantum leap in performance of SSDs over traditional hard drives.

Flash Memory Summit 2010

The Flash Memory Summit was well attended this year, with attendance up over 50%: around 2000 people showed up versus the planned 1200, according to the show organizers.

It was my first time, but I definitely thought the show was well worth attending (note: I’m an independent and not associated with the show in any way). Not only did it bring folks like me up to speed quickly on what’s happening, it also provided a forum for discussing where the industry is heading and, more importantly, what the various vendors and researchers are up to in this space. We even had Steve Wozniak (now at FusionIO) as a keynote speaker, entertaining us with his various brushes with memory, from the first DRAM chips of his computer club days to the hilarious pranks he used to play on folks, including Steve Jobs, in the early years.

In summary… things are definitely looking up for the flash industry.

Technology tidbits of interest:

  • PCM (Phase Change Memory) as a future technology looked very interesting, especially given its significantly higher write endurance (1M+ cycles), though it is not meeting density expectations yet
  • MLC at 3 bits per cell on a 25nm semiconductor process was announced by Intel and Micron; time for higher densities (albeit at worse write-cycle levels)
  • Most SSD manufacturers appear close to having a PCIe Flash add-in card solution
  • Automatic tiered storage and auto load balancing are on the rise and badly needed wherever flash shows up in general computing
  • The term “short-stroking SSDs” came up, which was really another way of saying: use only 75% of the capacity of an MLC SSD in order to get optimum performance
  • The concept of SCM (storage class memory), due from 2013 onwards, also came up on a number of occasions as a new class of memory-mapped flash versus the storage I/O model commonly used today (i.e. flash becomes an extension of DRAM space)
  • FusionIO were giving away nice looking capes, monkeys and T-shirts. They also had a very cool looking display with a huge number of video streams.
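The “short-stroking” item above is really over-provisioning by another name. A minimal sketch of the arithmetic (the 75% figure is the one quoted at the summit; the helper name is mine):

```python
# Over-provisioning sketch: fill only ~75% of an MLC SSD so the
# controller retains spare area for garbage collection / wear leveling.
def usable_capacity_gb(raw_gb: float, fill_fraction: float = 0.75) -> float:
    """Capacity you would actually partition; the rest stays spare."""
    return raw_gb * fill_fraction

# Applied to the 30GB Kingston drives from the tests above:
print(usable_capacity_gb(30))  # 22.5GB usable, 7.5GB left as spare area
```

The simplest way to apply this in practice is just to partition the drive smaller than its raw capacity and never touch the remainder.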

One point of interest on the SSD adoption side: Intel claims almost all datacenters want MLC, as the TCO of SLC solutions doesn’t compute for most IT managers (lots of discussion around that one). It was pretty clear throughout the show that an enterprise class MLC solution is badly needed, tied of course to MLC’s basic cost advantage over SLC.

Other new products of interest introduced at the show included FusionIO’s hybrid flash card with external InfiniBand (IB) I/O (40Gbps) connectivity, the ioSAN, which is essentially an InfiniBand PCIe controller integrated with a Fusion Duo card via a PCIe switch. While there is no onboard bridging, the system software driver does allow the card to transfer data to and from the flash memory as a target IB device, offering a very fast way to move data onto or off the card, and potentially supporting virtual machine migration down the road. Future versions will also offer 10G iSCSI support.

Another cool little device was the SanDisk® 64GB iSSD, a SATA flash module in a silicon BGA package for embedded applications, no bigger than a US postage stamp. Given the increasing number of SATA ports appearing in embedded chipsets, this makes for a very nice component for many applications beyond just the mobile space.

The one concern that came up consistently is write endurance, along with how end users and applications still need to understand the differences between SSDs and traditional hard drives to get the most out of them. As noted in one of the keynotes, it has taken almost 30 years to go from a few tens of IOPS on a HDD to approximately 280+ IOPS. Applications and operating systems have conditioned themselves around this level of performance, so it’s not surprising that a sudden 10-20x jump is not being seen in anything other than raw benchmarks. RethinkDB™ was one company that had gone to the trouble of rewriting a SQL database engine to take advantage of SSDs, and managed to push operations per second from 200 to over 1200 on the same hardware.

There is still a serviceability issue, however, for enterprise and data center customers whose IT managers have been well trained by the hard drive industry in ways not always well suited to SSDs. While there are tremendous gains to be had from technologies such as PCIe flash cards from the likes of FusionIO and OCZ, there is also the question of how to replace them when they go bad without, for example, turning off the server and migrating data. This will continue to drive new ideas and solutions in the enterprise space for a while.

All in all, a great show and well worth attending.

Hardware RAID Adapters Making A Comeback?

We are constantly reading about how the number of cores being offered by Intel and AMD will eventually make hardware acceleration devices in a server mostly obsolete. But hang on: I recall hearing something similar back in the 90s, when MIPS (meaningless indication of performance provided by sales people) was the metric in vogue. Having just been involved in a key OEM software RAID project and watched server-based RAID adapter unit shipments start to shift downward as more CPU cycles come online, I recently saw some market numbers from Gartner showing the reverse trend. Hardware RAID adapter cards, which were supposed to be dying slowly, are starting to see a resurgence in servers.

So why is this?

Not surprisingly, one of the key contributing suspects looks to be increased virtual server adoption, in particular VMware ESX. Unlike a conventional operating system environment, a dedicated hypervisor environment like ESX doesn’t have the same “luxurious” facilities for developing a broad range of device drivers for starters, let alone the RAM space to load up complex ones. The whole point of virtual servers is to drive the hardware down to a minimal number of absolutely necessary interface types, so no surprise that there are rather slim pickings of storage adapters in the standard distribution of ESX. This is especially so with the new breed of skinny hypervisors capable of operating in minimal ROM and RAM footprints (e.g. inside the system BIOS itself), which require skinny device drivers, versus something like a fat, fully featured software RAID driver that can require up to 1GB of RAM in a single-OS setup. Then there is the user aspect of loading custom drivers via the vSphere command line if VMware doesn’t bundle the one you need with their standard distribution. I’m still climbing the learning curve on ESX, and it is a definite and significant one as I explore the capabilities of my newfound experimental VMware ESXi system and try to get it right (i.e. not kill it), versus the easy-to-install Microsoft approach for standard apps and drivers we’ve all become used to.

In the case of hardware RAID versus software RAID, the initial problem is that conventional software RAID just doesn’t fit well into a VMware ESX hypervisor environment, because most of today’s solutions were built with a single OS in mind. Apart from the fact that you can’t get hold of a software RAID stack for ESX, even if you could, it would likely take up significant, possibly excessive, system RAM as a percentage of the overall hypervisor footprint. Re-enter hardware RAID, which doesn’t really care what OS you are running above it. It can use a simpler, skinny host driver without impacting system resources, and it ports over more easily to the ESX environment because the RAID processing runs on a dedicated hardware-accelerated engine or storage processor.

The other contributing factor to hardware RAID’s resurgence could be the increased usage of, and focus on, external SAS disk arrays. DAS is certainly making a comeback, especially given that the latest generation of SAS disk arrays can operate at up to 24Gbps over a single cable to the host (four SAS lanes running at 6Gbps each). Having a hardware RAID adapter as the primary connection to a SAS JBOD ensures that there is always a consistent virtual interface to the hypervisor layers and that CPU and RAM resources are not being overtaxed. Sure, a software RAID stack can do the same given enough system resources and CPU cycles, but scalability as you add drives over time makes software RAID more difficult to manage in an ESX environment, since RAM usage grows with the number of drives for starters. Again, not a problem with hardware RAID: the same resources are consistently presented to the hypervisor layers without significant CPU or RAM impact as drive changes and capacity increases are made.
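That 24Gbps figure is simple lane arithmetic. A quick sketch of my own, using the standard 8b/10b line encoding of 6Gbps SAS (10 bits on the wire per payload byte):

```python
# Where the "24Gbps over a single cable" figure comes from: a wide SAS
# cable carries four 6Gbps lanes. With 8b/10b encoding, 10 wire bits
# carry one payload byte, so each lane moves about 600MB/s of data.
LANES = 4
GBPS_PER_LANE = 6

raw_gbps = LANES * GBPS_PER_LANE            # 24Gbps raw on the wire
mb_per_lane = GBPS_PER_LANE * 1000 // 10    # 8b/10b -> 600MB/s per lane
aggregate_mb = LANES * mb_per_lane          # ~2400MB/s across the cable

print(raw_gbps, mb_per_lane, aggregate_mb)
```

Roughly 2.4GB/s of usable bandwidth down one cable goes a long way toward explaining the renewed interest in DAS.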

So for hardware RAID, and many other traditional functions that looked to be on the track to oblivion if you read the multi-core CPU tea leaves of late, maybe things aren’t so bad after all, for storage in particular. VMware creates a new lease of life for performance and I/O functions that really need to operate at maximum speed without stealing CPU cycles or RAM from the hypervisor and applications.

If hardware RAID vendors can continue to add enhanced functions such as the SSD caching algorithms and other storage virtualization functions behind an external SAS switched storage setup, then there is definitely some life left in ye old RAID engines, at least in the opinion of this blogger.

While software RAID has a solid place in the future, dare I say “long live hardware RAID” (again)?