I’ve been watching the price of SSDs fall for a little while now and was intrigued by the sub-$100 pricing Kingston is offering on their MLC-based 30GB SSDNow V Series, a 3Gbps SATA II SSD. Once they reached $80 at Newegg.com, I dove straight in. Actually, I dove right in at the deep end and bought 6 of them, as I wanted to see just how well they performed with an old 3-port NetCell SATA I (1.5Gbps) RAID card I had (some of you may remember them), along with a more recent 3ware (pre-LSI) 8-port hardware SATA II (3Gbps) RAID card, to see just how fast they were capable of going.
I definitely saw some interesting results. I should say upfront that this was a quick and dirty test to see whether you can use these low-cost devices in a RAID configuration they were never really designed for, so please take that into consideration as you read on; this was more an academic exercise in mixing old with new.
Inside the Kingston 30G SSDs
For fun, I disassembled one of the SSDs at the end of my tests (don’t worry Kingston; I’m not going to try and return it) and was surprised to see lots of fresh air inside. These things are tiny (duh! I hear a few folks saying). Kingston kept the cost low by removing much of the length of the circuit board, which carries the Toshiba T6UG1XBG controller and a Micron 9TB17-D9JVD DRAM chip on one side, and four 8GB Toshiba flash devices on the underside. According to the published specs, this combination should deliver streaming speeds of up to 180MB/s read and 50MB/s write, so definitely more on the entry-level side versus what you’d see in the more expensive versions.
Testing with Legacy RAID Cards
I picked two legacy cards to work with: a 3-port PNY NetCell SATA RAID card and a 3ware 9650SE 8-port SATA II RAID card. I was really pleased that the SSD worked just fine with the old PNY NetCell card in one of my older PCI test rigs. The Kingston looked just like a regular SATA drive, as advertised, and the NetCell card auto-configured it nicely as a single drive before booting into Windows. No surprises there.
However, given that my primary performance test system was a SuperMicro X8DT6 PCIe setup with no PCI slots, I quickly moved on to testing with the PCIe x4 Gen1 3ware 9650SE SATA II RAID adapter, as I wanted to see just how well traditional RAID adapters work with SSDs. Of course, having just attended the Flash Summit, where I was told 100 times over that most of today’s controllers and applications don’t take advantage of SSDs the way they should, I really had to see it for myself. I was also curious why most benchmarks used host-based software RAID, and after this exercise I’m beginning to see why.
Quick Test RAID Performance
Low-cost SSDs don’t really work that well in a traditional hardware RAID setup. On the positive side, using the 3ware RAID adapter, a single drive outperformed its published spec of 180MB/s, coming in at 200MB/s on streaming reads. The performance results were gathered with IOmeter 2006 running under Windows XP, all against raw volumes, with a queue depth of 1 and only two block sizes: 1MB for streaming reads and writes, and 512 bytes for random I/Os. The drives were set up in simple RAID0 configurations with the write caches left on.
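The streaming-read side of that workload is easy to approximate outside IOmeter. Here is a minimal Python sketch of the idea (the function name and sizes are mine, not the original test harness): one outstanding request at a time (queue depth 1), a fixed 1MB block size, sequential access.

```python
import os
import time

def stream_read_mbps(path, block_size=1 << 20, total_bytes=64 << 20):
    """Time sequential reads at queue depth 1: one outstanding request,
    fixed block size, roughly like an IOmeter streaming-read workload."""
    bytes_read = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while bytes_read < total_bytes:
            chunk = os.read(fd, block_size)
            if not chunk:
                # Hit end of device/file; wrap around and keep reading.
                os.lseek(fd, 0, os.SEEK_SET)
                continue
            bytes_read += len(chunk)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return bytes_read / (1 << 20) / elapsed  # MB/s
```

Note that against a regular file the OS page cache will inflate the numbers; to approximate the raw-volume tests you would point this at the device itself (with administrator rights) and bypass caching, which IOmeter does for you.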
While the Kingston SSDs I used were not really designed for RAID applications, I saw no reason why I shouldn’t at least see linear performance increases as I added drives to the RAID controller. The quick results published here in the two charts convinced me, as a novice SSD user, that throwing these into a system without considering all the various moving parts doesn’t automatically make for a high-performance system. In fact, it can even slow it down. While 1-2 drives in a RAID0 configuration did reasonably well on streaming reads, coming in significantly higher than ye olde hard drives, this quickly hit a ceiling at around 550MB/s, with not much performance benefit past 4 SSDs. Not bad for a 3-4 year old RAID card that wasn’t really optimized for low-cost flash memory. The writes were a different story, however, barely making it to 50MB/s (with some very high max latencies, >4 seconds in some cases). Random read IOPS were a real mixed bag, with decent results at around 4500 IOPS but no real increase as more SSDs were added. Random write IOPS were again mixed and not really related to the number of drives at all, with performance actually falling off as drives were added beyond 3 SSDs. Nothing conclusive here of course, but it definitely illustrates that the caching algorithms of these older RAID cards were not optimized for a 4000+ IOPS device hanging off each port!
For me, this exercise with legacy RAID components confirms what we heard at the Flash Summit: even with all the excellent work going on in the industry to make SSDs perform to their true potential, it will be a while before the whole PC ecosystem can take full advantage of them. While these devices work well as low-cost SSDs for entry-level single-drive systems, it’s unlikely they can offer much when used with legacy RAID adapter cards. A lot of effort has gone into making a flash drive look just like a regular hard disk drive, which has fooled a few of us end users into thinking it’s just a faster hard drive, whereas it’s a very different beast. The lesson learned here is that applications, software drivers, and of course RAID firmware all need to be rewritten to ensure they are SSD literate. Bottom line: you need a modern RAID card that is SSD literate. LSI and Adaptec are moving in this direction with their built-in SSD caching technology, but the general consensus is that hardware RAID engines in all types of equipment are going to need an overhaul, given the quantum leap in performance of SSDs over traditional hard drives.