
Will Traditional Disk Array Vendors Survive?

The more you look at how SAN disk array systems are evolving, the more you start to wonder how traditional disk array vendors can survive long term without a significant change in their business model. What has been a business primarily focused on building solid, high-availability RAID arrays is now facing a serious commoditization challenge as standard Intel-compatible server hardware is adapted to storage applications. Furthermore, the recent or pending acquisitions of 3PAR, Isilon and now Compellent put significant pressure on the last few independent disk array vendors to start beefing up their virtualization or unified storage play.

There are a few industry trends that should raise concerns for traditional players:

  • There are fewer margin dollars available for incremental R&D on pure RAID disk arrays, not helped by the continuing drop in disk drive prices
  • Standard Intel-compatible, off-the-shelf CPUs, memory, chipsets and RAID components developed for higher-volume servers and workstations are now being used in storage platforms running standard operating systems or open, Linux- or ZFS-based storage software (e.g. OpenFiler)
  • We continue to see new hybrid storage-server hardware offerings from white-box equipment builders, including dual-controller SBB solutions, that make disk array and storage virtualization integration easier
  • SSD performance gains are leading to newer storage system approaches that may not lend themselves to traditional disk array controllers and slower SANs
  • Storage management features such as thin provisioning are making their way into server hypervisors, in many cases as "free" standard components
  • Large-scale, data-intensive environments, such as those used by search engines, are favoring non-RAID, non-SAN architectures (e.g. Hadoop) built on low-cost networked PC technology and Linux (JBOS, just a bunch of servers)

Does this spell the end for the traditional RAID array guys? If you read much of the press around RAID today, the outlook does not look good. However, in the short term there is still business to be had. The good news is that the storage equipment market, along with infrastructure upgrades in traditional corporate markets, takes many years to transition to new technologies. You just cannot switch out storage when something new comes along. It's like trying to upgrade the train carriages while the train is moving: not practical, dangerous, and pretty much guaranteed to upset customers if you try it. Any move to a new storage system has to be non-disruptive to avoid significant service or application downtime. But this grace period will not last forever.

They literally have to start thinking outside the box. What is clear is that these traditional disk array vendors really need to take note of the highly successful Dell EqualLogic effect, i.e. storage that is scalable at the network system level, simple, and cost-effective from an operating cost standpoint, as opposed to the device-centric model they currently use. The market has certainly rewarded this virtualized, scalable, networked approach in spades, given Dell's reported revenue successes with this product family over the past few years, even though it is more costly on a dollar-per-gigabyte basis.

Some pure-play disk array vendors or divisions such as LSI, Dot Hill and Xyratex (who OEM their products to larger companies such as Dell, HP and IBM) are already migrating upward by acquiring or building storage virtualization appliance businesses, but it will take time for them to adjust to the new world, as developing network-level, virtualized storage takes a significantly different kind of engineer. Then there is the issue of doubling investment for a while as you sustain one approach and develop another. Selling styles also change, as system-level selling in many cases needs to target a different type of buyer. In their favor, they have the robustness, service and know-how advantage, i.e. they know how to make, sell and support storage extremely well.

It wouldn't be the first time an industry had to reinvent itself in order to survive, and it will be interesting to see how it pans out over the next several years and, most importantly, who survives until the next round of commoditization.

SBB needs a new ecosystem

I have to confess that in true tekinerd style, I actually have an SBB storage chassis in my garage; hence the reason I'm writing about it, I guess. Thanks to my good friends at AIC, I've had one of their first prototype SBB chassis for nearly two years now (still here if you need it back, guys). Why? Back in the 2007-2008 timeframe, working with some very talented engineers in Minneapolis, we built and demo'd a 4G and 8G Fibre Channel SBB-based storage solution based on an AMD Opteron running Linux with a RAIDcore host-based RAID stack.

I knew I liked SBB when our first proto fired up and we were able to demonstrate pretty quickly, without building a chassis of our own, that our controllers worked. We had in fact designed a specialty enclosure for a military application, but the chassis AIC provided us at the time was not only ideal as a bring-up chassis, it also demonstrated to ourselves that we could offer a commercial version of the product using an off-the-shelf chassis. That was the cool part: develop for one chassis, bring up in another, commercially available one. Even cooler, however, was that this was a full Linux server as well. We could turn on different host interfaces and switch between basic SAN and NAS functionality (or run both) pretty easily.

Storage Bridge Bay, or SBB, is now at revision 2.0. It all started in 2006 when Dell, EMC, LSI and Intel banded together to create the SBB Working Group. Many other vendors have since joined in. To date, however, only a handful of vendors have adopted it with the original intent of an open platform in mind, with the majority of the market staying on the disk array side with proprietary, or what I'd call non-open, SBB solutions. Examples of proprietary solutions are HP MSA and EVA, EMC, NetApp and so on, with only Dell, Xyratex, Xiotech and a few white-box manufacturers such as AIC and SuperMicro adopting SBB, and in most cases as one more line item they offer rather than whole-hearted adoption.

So if it’s so cool, why hasn’t it proliferated?

The answer lies in the motives behind SBB and the end-user value equation. As far as motives are concerned, several large OEMs like SBB because it lets them drive increased competition amongst the storage array vendor community. On the end-user side, SBB has demonstrated no real benefits, with the versions of products available today, beyond what you can get with the proprietary alternatives. Let's face it: to an end user there is no benefit in an SBB approach if there are no well-developed, open-market controllers to plug, play and match with.

Several other factors to consider:

  1. No one is offering truly unbundled SBB products, i.e. with the storage controller sold separately from the chassis. They are still selling it like traditional storage arrays.
  2. There is a 99% chance that different vendors' controllers and chassis have not been tested with each other, i.e. it's up to the end user to check it all out. End users don't have time for this.
  3. There has been no demonstrated cost benefit of SBB to end users yet; at this point it appears to help the OEMs in their bargaining power with the storage providers more than it helps end users.
  4. The storage vendors themselves are operating on such slim margins that their incentive to move to SBB is low.

What does this mean for SBB?

The strength of SBB is not in its ability to replace existing entry-level or mid-range disk arrays. Because it is an open standard, it fosters more innovation by allowing many smaller-volume vendors and developers to create a value-add controller without having to go through the rigmarole of developing a whole chassis solution. Further, the fact that both Intel and AMD provide development kits to select partners to put a whole PC or server architecture on an SBB blade allows a whole lot more than just plain RAID-level storage functions.

This is where the "storage" in SBB becomes a little cloudier, as there is no reason why an SBB controller blade could not in fact be a complete NAS or a virtual server head running several end-user processes as well. It could also be a virtualized storage controller with complete cluster capabilities, built on off-the-shelf operating systems rather than the closed software of today's SBB controllers, using identical hardware and just a software change.
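To make the "just a software change" point concrete, here is a minimal sketch of how the same Linux-based SBB blade could be presented as either a NAS head or a SAN target purely in software. It assumes a Linux install with ZFS-on-Linux and the LIO targetcli utility available and root privileges; the pool name, dataset names and iSCSI IQN are hypothetical placeholders, not taken from any real SBB product.

#!/usr/bin/env python3
# Minimal sketch: present the same Linux-based SBB blade as either a NAS head
# (NFS export) or a SAN target (iSCSI LUN) with nothing but a software change.
# Assumes ZFS-on-Linux and the LIO "targetcli" utility are installed and that
# this runs as root; pool, dataset and IQN names below are hypothetical.

import subprocess
import sys

POOL = "tank"                                  # hypothetical ZFS pool on the blade's drives
IQN = "iqn.2010-12.org.example:sbb-blade0"     # hypothetical iSCSI target name

def run(args):
    """Run one command, echoing it so each configuration step is visible."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def present_as_nas():
    """NAS personality: carve out a filesystem and export it over NFS."""
    run(["zfs", "create", f"{POOL}/home"])
    run(["zfs", "set", "sharenfs=rw", f"{POOL}/home"])

def present_as_san():
    """SAN personality: carve out a block volume and expose it as an iSCSI LUN."""
    run(["zfs", "create", "-V", "500G", f"{POOL}/lun0"])
    run(["targetcli", "/backstores/block", "create",
         "name=lun0", f"dev=/dev/zvol/{POOL}/lun0"])
    run(["targetcli", "/iscsi", "create", IQN])
    run(["targetcli", f"/iscsi/{IQN}/tpg1/luns", "create", "/backstores/block/lun0"])

if __name__ == "__main__":
    # Same hardware, same OS image; the "personality" is just which function runs.
    present_as_san() if "san" in sys.argv else present_as_nas()

Whether one blade should do all of that is a separate argument; the point is that on an open x86 platform the choice of personality becomes a deployment decision rather than a hardware one.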

The industry needs to wake up to the possibilities of SBB, not as a replacement for what it already has, but as a platform for the next evolution of intelligent storage devices.

A few suggestions to the SBB community from someone who’s tried to make it fly:

  • Increase the focus on interoperability: create a lab like we did in the networking community and offer a certification sticker for interoperable SBB components.
  • Create a software developer ecosystem around the Intel and AMD reference designs to drive more innovation and differentiation from today's RAID-only solutions.
  • Consider creating a server splinter group that develops virtual-machine-aware derivatives of SBB aimed more at server-storage solutions: a Modular Server Rack (MSR) type of solution.

Overall, SBB is a great start to breaking the proprietary stranglehold of the traditional storage vendors and driving storage to the next level, but it takes more than just a hardware standard to do it… it requires an ecosystem of next-generation thinkers.

SBB – a viable server platform?

SBB, or Storage Bridge Bay, was originally intended as a standard, open disk array platform that allowed different controller and chassis vendors to plug and play.

Then a few years ago, Intel and AMD started providing several OEMs with reference designs that put close to their latest and greatest CPU and core-logic chipset technology, as used in regular server environments, into the SBB form factor. The result is not as powerful in raw CPU terms as a typical server, but it is still a viable x86 server (two in fact, one per SBB slot) that can run Linux, NAS or SAN target software, tiered storage, thin provisioning, host failover and more in a single box.

So why hasn't it caught on? Is the CPU too underpowered? Are the OEMs not getting behind it?

All good questions. The opinion here is that there is great potential in the SBB platform, but not for the original design goals as stated. For one, the name (SBB = Storage Bridge Bay) implies this platform can only do storage. Secondly, the focus has been on replacing low-cost disk array controller technology with a more expensive CPU-plus-software approach to RAID and other storage functions, instead of on the far greater potential of what else that CPU can do, given a far more open software environment.

Bottom line, a few suggestions for the powers that be working on SBB:

  1. Come up with another acronym for servers based on SBB, e.g. Modular Server Bay or something (OK, maybe not MSB, but you get the idea)
  2. Focus the attention on developing a software ecosystem – the hardware was the easier (relatively speaking) part
  3. A multi-vendor interoperability lab and certification program is needed to expand SBB to its full potential as a more open hardware platform
  4. You can't rely on the large OEMs to drive multi-vendor interoperability; it has to be the little guys, with help from Intel and AMD, who innovate and drive a new set of applications