Rollback using Windows Home Server

I’ve been running a Windows Home Server for about two years now on a low-cost, VIA-based, home-built system, and I’m very happy with it. Just a few examples of how useful it’s been:

  • I’ve used it to rescue my laptop twice from a corrupt C: drive
  • Temporarily rolled back my laptop to a point nine months earlier to find a lost email
  • Restored the family gaming machine more than once due to an excessive build-up of internet “plug-ins”
  • Created a set of bootable images for my various test servers, so I don’t have to update to the latest OS from my aging master CD/DVDs

Though the backup capability is nice (I have a lot of active PCs in my house), the real value of WHS came home to me when I needed to find some old emails from nine months back that I thought I’d kept when upgrading my laptop from Windows XP to Vista. My home server had been configured to keep the old XP backups I’d been making regularly up until Nov 2009, and it was also keeping a new set (under the same PC name) for the newer Vista operating system I’d installed.

Here are the steps I went through to get my now-Windows-Vista laptop temporarily reverted back to its Nov 2009 Windows XP state:

  1. Make a manual/instant backup of my current laptop
  2. Insert the WHS-provided Home Restore CD and reboot my laptop
  3. Let the restore program boot up (which can take up to five minutes)
  4. When asked to select a machine, choose the version of my laptop that contained the Nov 2009 Windows XP backup
  5. Click restore and let it run through its 1.5-hour process of copying everything back to my hard drive (blowing the existing Vista copy away)
  6. Reboot the laptop into its Nov 2009 state and copy the needed files onto one of the WHS network shares
  7. Reboot again with the Home Restore CD and restore the Vista backup I’d made earlier

A long-winded process that took several hours (I actually left the last restore step running overnight), but it did the trick and worked pretty nicely. Of course, it’s not as slick and instant as Apple’s Time Machine (which wouldn’t have been able to go back to a prior OS install), but it achieved the same result.

SBB needs a new ecosystem

I have to confess that in true tekinerd style, I actually have an SBB storage chassis in my garage; hence the reason I’m writing about it, I guess. Thanks to my good friends at AIC, I’ve had one of their first prototype SBB chassis for nearly two years now (it’s still here if you need it back, guys). Why? Back in the 2007-2008 timeframe, working with some very talented engineers in Minneapolis, we built and demoed a 4G and 8G Fibre Channel SBB-based storage solution built around an AMD Opteron running Linux with a RAIDcore host-based RAID stack.

I knew I liked SBB when our first proto fired up and we were able to demonstrate pretty quickly, without building a chassis of our own, that our controllers worked. We had in fact designed a specialty enclosure for a military application, but the chassis AIC provided us at the time was ideal not only for bring-up, but also as proof to ourselves that we could offer a commercial version of the product using an off-the-shelf chassis. That was the cool part: develop for one chassis, and bring up in another, commercially available one. Even cooler, this was a full Linux server as well. We could swap in different host interfaces and switch between basic SAN and NAS functionality (or both) pretty easily.
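That SAN/NAS flexibility is really just standard Linux administration once the blade runs a full distribution. As a purely illustrative sketch (the target name, device path, and subnet below are made up, and the commands assume root on a stock Linux box with the tgt iSCSI target daemon and an NFS server installed), the same blade could expose a RAID volume either as a block-level iSCSI target or as a file-level NFS share:

```shell
# SAN-style: expose the RAID volume /dev/md0 as an iSCSI target
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2008-01.com.example:sbb-demo
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/md0
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

# ...or NAS-style: put a filesystem on the same volume and export it over NFS
mkfs.ext3 /dev/md0
mount /dev/md0 /export/data
exportfs -o rw,sync 192.168.1.0/24:/export/data
```

A Fibre Channel target (as in our demo) would use a different target driver, but the point stands: SAN versus NAS on an SBB blade is a software configuration choice, not a hardware one.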

Storage Bridge Bay, or SBB, is now at revision 2.0. It all started in 2006, when Dell, EMC, LSI and Intel banded together to create the SBB Working Group. Many other vendors have since joined in. To date, however, only a handful of vendors have adopted SBB with the original intent of an open platform in mind; the majority of the disk array market has stayed with proprietary, or what I’d call non-open, solutions. Examples of proprietary solutions are HP MSA and EVA, EMC, NetApp and so on, with only Dell, Xyratex, Xiotech and a few whitebox manufacturers such as AIC and SuperMicro adopting SBB, and in most cases as one more line item they offer rather than wholehearted adoption.

So if it’s so cool, why hasn’t it proliferated?

The answer lies in the motives behind SBB and the end-user value equation. As far as motives are concerned, several large OEMs like SBB because it lets them drive increased competition amongst the storage array vendor community. On the end-user side, the SBB products available so far have demonstrated no real benefits beyond what you can get today with the proprietary versions. Let’s face it: to an end user, there is no benefit in an SBB approach if there are no well-developed open-market controllers to plug in and mix and match.

Several other factors to consider:

  1. No one is offering truly unbundled SBB products, i.e. the storage controller sold separately from the chassis. They are still selling it like traditional storage arrays.
  2. There is a 99% chance that different vendors’ controllers and chassis have not been tested with each other, i.e. it’s up to the end user to check it all out. End users don’t have time for this.
  3. There has been no demonstrated cost benefit of SBB to end users yet – at this point it appears to help the OEMs’ bargaining power with the storage providers more than it helps end users.
  4. The storage vendors themselves are operating on such slim margins that their incentive to move to SBB is low.

What does this mean for SBB?

The strength of SBB is not in its ability to replace existing entry-level or mid-range disk arrays. As an open standard, it fosters innovation by allowing many more smaller-volume vendors and developers to create a value-add controller without having to go through the rigmarole of developing a whole chassis solution. Further, the fact that both Intel and AMD provide development kits to select partners, making it possible to put a whole PC or server architecture on an SBB blade, allows a whole lot more than just plain RAID-level storage functions.

This is where the “storage” in SBB becomes a little cloudier, as there is no reason why an SBB controller blade couldn’t in fact be a complete NAS, or a virtual server head running several end-user processes as well. It could also be a virtualized storage controller with complete cluster capabilities built on off-the-shelf operating systems rather than the closed software of today’s SBB controllers: identical hardware, just a software change.

The industry needs to wake up to the possibilities of SBB not as a replacement for what they have but as a platform for the next evolution of intelligent storage devices.

A few suggestions to the SBB community from someone who’s tried to make it fly:

  • Increase the focus on interoperability – create a lab like we did in the networking community and offer a certification sticker for interoperable SBB components.
  • Create a software developer ecosystem around the Intel and AMD reference designs to drive more innovation and differentiation than today’s RAID-only solutions.
  • Consider creating a server splinter group that focuses on developing virtual-machine-aware derivatives of SBB aimed at combined server-storage solutions: a Modular Server Rack (MSR) type of solution.

Overall, SBB is a great start at breaking the proprietary stranglehold of the traditional storage vendors and driving storage to the next level, but it takes more than just a hardware standard to do it… it requires an ecosystem of next-generation thinkers.

SBB – a viable server platform?

SBB, or Storage Bridge Bay, was originally intended as a standard, open disk array platform that allowed different controller and chassis vendors to plug and play.

Then, a few years ago, Intel and AMD started providing reference designs to several OEMs, bringing their almost-latest-and-greatest CPU and core logic chipset technology from regular server environments into the SBB form factor. Not as powerful as the raw CPU horsepower you find in typical servers, but still a viable x86 server (two, in fact – one per SBB slot) that can run Linux, NAS or SAN target software, tiered storage, thin provisioning, host failover in a single box, and so on.

So why hasn’t it caught on? Is the CPU too underpowered? Are the OEMs not getting behind it?

All good questions. The opinion here is that there is great potential in the SBB platform, but not for the originally stated design goals. For one, the name itself (SBB = storage bridge bay) implies this platform can only do storage. Secondly, the focus has been on replacing low-cost disk array controller technology with a more expensive CPU-plus-software technology for RAID and other storage functions, instead of focusing on the far greater potential of what else that CPU could do given a far more open software environment.

Bottom line, a few suggestions for the powers that be working on SBB:

  1. Come up with another acronym for servers based on SBB, e.g. Modular Server Bay or something (ok, maybe not MSB, but you get the idea)
  2. Focus the attention on developing a software ecosystem – the hardware was the easier (relatively speaking) part
  3. A multi-vendor interoperability lab and certification program is needed to expand SBB to its full potential as a more open hardware platform
  4. You can’t rely on the large OEMs to drive multi-vendor interoperability. It has to be the little guys, with help from Intel and AMD, who innovate and drive a new set of applications