Storage Bridge Bay (SBB)

Introduction

Storage Bridge Bay, or SBB, is an open standard that allows storage disk array and computer manufacturers to create controllers and chassis built from multi-vendor, interchangeable components. The SBB working group was formed in 2006 by Dell, EMC, LSI and Intel, and the standards are openly published on the SBB website. Visit the site to learn more about the background and current member companies.

SBB-based solutions have been shipping for some time now but have not really been marketed to end users for their multi-vendor capabilities. The primary benefits so far have been either a reduction in test and development costs for the vendors themselves or, from a tier 1 OEM perspective (i.e. the IBMs, Dells or HPs of the world), the ability to maintain a standard chassis (including the drives, disk trays and the internal mid-plane that connects the drives to the controllers) while inviting several vendors to bid for the controller portion. All in all, this is a solid attempt to replicate the PC whitebox model by standardizing the various components inside a RAID disk array product and enabling multiple vendors to make plug-and-play components.

Primary Features of SBB

The SBB specification, amongst other things, standardizes the interchangeability of the primary pluggable components or FRUs (field replaceable units), i.e. the controllers and power supplies. It does not, however, dictate disk drive tray styles, overall chassis size or rack mounting features, allowing vendors to continue to differentiate on look and feel.

In summary, the v2.0 standard addresses:

  • Controller form factor, internal connections from controller to drives, and the power supply component, to facilitate mixing and matching controllers from different vendors in the same chassis
  • Controller hookup connections for up to 48 drives (3Gbps), power supplies and dual-controller interconnects
  • Power requirements of up to 200W per canister
  • Minimum management requirements that controllers must meet to conform to the standard

It does not, however, dictate which host interfaces, drives or other auxiliary connections are required on the controllers themselves – this is up to the individual vendors to specify.

End User Benefits

Long term, end users stand to benefit from SBB as it accomplishes two things: firstly, greater competitiveness in what has been a closed, proprietary market for many years, and secondly, the introduction of more feature-rich controllers based on alternative architectures.

Both Intel and AMD have provided SBB reference designs to the developer community that are essentially x86-class servers in an SBB form factor. While these are generally more expensive than the solutions provided by established vendors such as LSI Logic, Infortrend or DotHill, they give open storage vendors greater flexibility to add value beyond basic RAID and traditional disk array management functions such as snapshot or replication, for example integrated storage virtualization or unified (SAN and NAS) functionality, with relative ease.

5 thoughts on “Storage Bridge Bay (SBB)”

  1. Will an SBB chassis like the ones Supermicro offers work in a Windows 2008 cluster for HA storage?

    This would be a game changer for me and reduce costs. If I could offer one box that is HA and only requires Windows, that would be ideal.

    1. In theory yes, though I’ve only played with dual Linux implementations. Each half looks just like an x86 server motherboard, with each having access to the disks in the front section if SAS-enabled. Now that we have 10G LAN/iSCSI/FCoE controllers as the primary host-side connection, dual blade servers in SBB certainly make your 2008 cluster far more feasible.

      The limitation is that you have to live with whatever RAID or SAS I/O controller they’ve integrated, as you usually have no PCIe slots to play with. I’ve seen LSI’s 6G SAS IOC on some boards, as well as PMC’s RAID controller in recent designs, which are fine for JBOD server implementations.

  2. How about the use of non-volatile DIMMs (or hybrid DIMMs) and supercapacitors to provide cache backup in Storage Bridge Bays? Is the only alternative a battery-backed DIMM with 72+ hr backup and a change of batteries every year? Or are there other approaches? How widely are hybrid DIMMs used today?

    1. I haven’t played with hybrid DIMMs directly but am very much aware of them, given their relevance to storage and past requests to support them.

      I’m personally a fan of supercaps combined with flash memory over battery backup approaches, as it removes the annual maintenance problem and, though such events are rare, avoids messy battery leaks which can destroy equipment. Companies such as Viking make such devices in DDR3-compatible form. I’ve not seen market penetration data yet given their specialist nature, though I haven’t looked too deeply. I’ll post if I find anything useful.

      As always, the value of these devices is largely dependent on the ability to completely and reliably recover the data and/or system following a power loss, and on every element in the I/O chain being able to handle the loss of power (i.e. the storage device itself, which may have a large RAM cache, I/O controllers with large buffers and write-caching ability, etc.). Furthermore, the OS or application needs to handle the situation specifically during power-up, before the system can fully operate, and (a) correctly detect that a loss of power occurred, (b) determine whether any critical unfinished or partial operations were in progress when the outage hit, and (c) correct the situation if at all possible by either completing the transaction or invalidating it as required. Otherwise, the system will simply overwrite the data that was still in the NVM-DDR3 hybrid device before the critical post-crash data can be extracted. A minimal sketch of these checks follows below.
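To make steps (a) to (c) concrete, here is a minimal Python sketch of the kind of journal scan a system might run at power-up. The journal layout, the dirty flag and the recover_after_power_loss routine are illustrative assumptions of mine, not part of the SBB specification or any particular NVDIMM product.

```python
import struct

# Hypothetical journal layout, for illustration only: one "dirty" byte that is
# set while writes are in flight, followed by fixed-size record headers of
# (sequence number, committed flag, payload length) and a variable payload.
DIRTY_FLAG_OFFSET = 0
RECORD_HEADER = struct.Struct("<QBI")  # sequence, committed, payload length

def recover_after_power_loss(nvm: bytes):
    """Sketch of the post-power-up checks described above:
    (a) detect that power was lost mid-operation,
    (b) find any unfinished or partial records,
    (c) replay the complete ones and treat the rest as invalid."""
    # (a) If the dirty flag is clear, the previous shutdown was orderly.
    if nvm[DIRTY_FLAG_OFFSET] == 0:
        return []

    replayed = []
    offset = 1
    while offset + RECORD_HEADER.size <= len(nvm):
        seq, committed, length = RECORD_HEADER.unpack_from(nvm, offset)
        payload_start = offset + RECORD_HEADER.size
        payload_end = payload_start + length
        # (b) A torn or uncommitted record marks where the outage hit.
        if length == 0 or payload_end > len(nvm) or not committed:
            break  # (c) everything from here on is invalidated, not replayed
        replayed.append((seq, nvm[payload_start:payload_end]))
        offset = payload_end
    return replayed  # records that are safe to re-apply to stable storage
```

Only after a pass like this has re-applied or discarded the outstanding records should the system resume normal writes; doing so earlier is exactly the overwrite hazard described above.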
