Enmotus FuzeDrive Has Arrived!

It’s been a long while with lots of heads-down time, but we finally launched FuzeDrive™ Server for Windows and Linux at Enmotus in the last week or so. We can finally come up for air!

Thanks so much to the team that worked so hard to get it out there, and to the early-adopter customers who helped get it to production level. Going back to hands-on software development for the past 2-3 years has left me with a whole new level of respect for the folks who labor every day in front of a computer screen, and we’re all reminded @enmotus just how much effort goes into getting a great product from inception to shipping.

I try not to promote any particular product at Tekinerd, but given this one is near and dear, I’ll do my best to discuss it objectively – just be aware that I’m biased :). That said, I do use FuzeDrive as my primary boot volume (Pro version – still in beta) on my primary desktop (Samsung Pro SSD and Toshiba HDD), my laptop (WD Black 2 drive #WDBlack2) and my Windows home server, so it’s definitely part of my enthusiast collection here at home and has been for some time.

With that, let’s take a look at FuzeDrive. Here’s a summary of some of its features:

  • A fully automated, mapping-based tiering software solution for Windows Server and Linux (i.e. not a cache)
  • Operates at the full performance of the solid state disk (SSD) for both reads and writes
  • Easily combines fast SSDs (SATA or PCIe) with hard drives (HDDs) to create a virtual disk drive that operates with the equivalent performance of a pure SSD
  • SSD capacity is additive – now you can use that 512GB SSD without wasting any capacity or carving it up into pieces
  • File pinning integrated into Windows Explorer, with a simple right-click to lock files to either SSD or HDD (or via the command line)
  • Visual mapping and activity monitor to see at a glance if your SSD is actually in the right place!

For the true techies out there, FuzeDrive is implemented as a kernel-mode virtual storport driver on Windows and a loadable block driver module on Linux, plus a set of user-space apps for management, file pinning and the visual maps.
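
To make the mapping idea a little more concrete, here’s a minimal sketch in Python (purely illustrative, not Enmotus code; the 4MB extent size and the data structures are my own assumptions) of how a tiering map can translate a virtual block address into a physical device and offset with a simple table lookup:

```python
# Illustrative sketch only -- not the actual FuzeDrive implementation.
# A mapping-based tier keeps a table of virtual extents, each pointing
# at a physical extent on either the SSD or the HDD.

from dataclasses import dataclass

EXTENT_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB mapping granularity


@dataclass
class Extent:
    device: str       # "ssd" or "hdd"
    phys_offset: int  # byte offset of this extent on the physical device


class TierMap:
    def __init__(self, extents):
        self.extents = extents  # index = virtual extent number

    def translate(self, virtual_offset):
        """Turn a virtual byte offset into (device, physical byte offset)."""
        idx = virtual_offset // EXTENT_SIZE
        ext = self.extents[idx]
        return ext.device, ext.phys_offset + (virtual_offset % EXTENT_SIZE)


# A tiny three-extent volume: the first two extents live on the SSD,
# the third on the HDD.
vmap = TierMap([
    Extent("ssd", 0),
    Extent("ssd", EXTENT_SIZE),
    Extent("hdd", 0),
])

print(vmap.translate(5 * 1024 * 1024))   # ('ssd', 5242880)
print(vmap.translate(9 * 1024 * 1024))   # ('hdd', 1048576)
```

The point is that the translation is a direct indexed lookup rather than a hit/miss search, which is part of why the overhead can stay so low compared with a cache.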

Let’s Start with Visibility

One of the first things we tout about FuzeDrive is the visibility that mapping, rather than caching, gives the end user. You can see exactly where the SSD is being applied versus the storage activity using a simple at-a-glance tool we call eLiveMonitor. As we talked to end users out there, the first reaction was “at last – I have a way to see how effectively my SSD is working for me”. An example for the PC I’m typing from right now is shown here.

The eLive tool is a great way to show how the software keeps track of which files (or more correctly, which of the files’ blocks) I’m using on my FuzeDrive virtual block device and makes sure the most active of these automatically live on the SSD. You can see instantly if activity and SSD mapping align. This all happens automatically, and the software continuously adapts to new or different activity profiles, i.e. if a bunch of new files are copied to the FuzeDrive virtual disk and are now accessed more than the ones already on there, it simply swaps the now-inactive files or file chunks on the SSD with the active ones from the HDD, in whole or in part.
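
As a rough illustration of that swapping behavior (a simplified sketch, not the actual FuzeDrive algorithm; the activity counters and slot counts below are invented), a periodic rebalancing pass might look something like this:

```python
# Illustrative sketch only -- a simplified promote/demote pass, not the
# actual FuzeDrive algorithm. Assume each extent carries an activity
# counter that the I/O path updates.

def rebalance(extents, ssd_slots):
    """Keep the most active extents on the SSD tier.

    extents   : list of (extent_id, activity_count, current_tier) tuples
    ssd_slots : how many extents fit on the SSD
    Returns a list of (extent_id, target_tier) relocations to perform.
    """
    # Rank every extent by how busy it has been recently.
    ranked = sorted(extents, key=lambda e: e[1], reverse=True)
    hot = {e[0] for e in ranked[:ssd_slots]}

    moves = []
    for ext_id, _count, tier in extents:
        target = "ssd" if ext_id in hot else "hdd"
        if target != tier:
            moves.append((ext_id, target))  # relocate, don't copy
    return moves


# Example: the SSD holds two extents, and extent 3 just became the busiest.
extents = [(1, 120, "ssd"), (2, 15, "ssd"), (3, 400, "hdd"), (4, 2, "hdd")]
print(rebalance(extents, ssd_slots=2))
# -> [(2, 'hdd'), (3, 'ssd')]  the idle SSD extent swaps with the hot HDD one
```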

Versus Caching

So why is this any different from your run-of-the-mill Intel SRT or SSD caching tools, or even a hybrid drive, you may ask?

  • Performance is significantly higher. Mapping is much simpler and provides direct access to your SSD with minimal overhead. It takes at most a few microseconds to turn virtual requests into physical disk I/O requests, and it doesn’t require a cache hit-miss search algorithm to work out where data lives on the SSD (or HDD).
  • Capacity handling is significantly better for larger drives. Caching consumes capacity and adds CPU overhead to manage itself, and this combination limits the practical size and hit rate of a cache approach. FuzeDrive takes less than 1% of the CPU, even with a 10TB fast tier! Furthermore, it doesn’t require SSD capacity to manage itself, only a small sliver (a few hundred MB to 1-2GB for an average configuration). Hence, if you take a 1TB SSD and combine it with a 6TB HDD, you get the full 7TB to use for applications.
  • Programmable options allow the behavior to be altered from its defaults, e.g. you can decide whether relocation between tiers is based on the size of requests (MBytes) or the number of requests (IOPS), on read traffic or read+write traffic, and so on (see the sketch after this list). FuzeDrive gathers a lot of statistics it can use to make more intelligent relocation decisions, and while it’s best left in automated mode, having that additional tweak capability often helps tune it for specific applications that need it.
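
To illustrate the kind of knob this refers to (a hypothetical sketch; the counter names and options here are my own, not FuzeDrive’s actual settings), choosing between request-count and data-volume based scoring could look like this:

```python
# Illustrative sketch only -- the metric names and counters are mine,
# not FuzeDrive's actual configuration options.

def activity_score(stats, metric="iops", include_writes=True):
    """Score an extent's recent activity under a chosen relocation policy.

    stats          : per-extent counters, e.g. read/write ops and bytes
    metric         : "iops" counts requests, "bytes" counts data moved
    include_writes : whether write traffic contributes to the score
    """
    if metric == "iops":
        return stats["read_ops"] + (stats["write_ops"] if include_writes else 0)
    return stats["read_bytes"] + (stats["write_bytes"] if include_writes else 0)


stats = {"read_ops": 900, "write_ops": 100,
         "read_bytes": 3_600_000, "write_bytes": 400_000}
print(activity_score(stats, metric="iops"))                         # 1000
print(activity_score(stats, metric="bytes", include_writes=False))  # 3600000
```

An IOPS-based score favours files seeing lots of small random accesses, while a bytes-based score favours large sequential transfers, which is exactly the kind of trade-off you might want to tune per application.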

[Image: micro-tiering illustration]

As a word of caution, and as covered in my other blogs on tiering vs. caching, FuzeDrive relies on a reliable SSD device because data is relocated, not copied: the only copy of an active block lives on the SSD. Today’s SSDs are much more reliable than they used to be, so this is no worse than a desktop machine or laptop which doesn’t have redundancy, but for servers this does matter. Hence, a RAID configuration for both the SSD and HDD tiers is definitely recommended (and supported) for those that require device-level redundancy. For more budget-conscious readers, you can also use the mirror capability of Microsoft Storage Spaces or Linux MD RAID.

Now for Performance

For a basic test, I used CrystalDiskMark 3.0.1 x64 on my Intel Z97 system to see how my Samsung 840 Pro “fuzed” with a Toshiba 3TB DT01ACA300 drive fared. I ran the test three times on a volume that had been installed and running as a bootable FuzeDrive for 3-4 weeks. Note, this is the first time I’ve run the benchmark on this system. I’m not a huge fan of benchmarks as they are all biased toward a single media type and don’t always handle a mixed-media virtual disk like a FuzeDrive well, since its performance varies depending on where you hit the volume. I was pleasantly surprised, however.

Here are the results.

Run 1

[Image: CrystalDiskMark run 1 results]

Run 2

[Image: CrystalDiskMark run 2 results]

Run 3

[Image: CrystalDiskMark run 3 results]

We see from the first run that I’m not quite getting the maximum rate out of my SSD, and I did notice the eLive monitor flash its SSD bars to indicate it was re-balancing the drive. By the second run, however, the software had figured out where the benchmark was hitting, and the result started to reflect something closer to the SSD’s performance.

Where it really shines, however, is writes, which explains why the overall system performance in Windows (which is always doing background writes) is so good. This also helps tremendously with virtual machines, which can end up being much more write-intensive due to their large RAM cache flushes back to disk.

Where You Can Use FuzeDrive

I personally have FuzeDrive running on several home machines:

  • Dell Inspiron N4110 laptop with the WD Black 2 dual drive (2.5” 120GB SSD + 1TB HDD, #WDBlack2), using the beta FuzeDrive Pro client version
  • Intel Z97 gaming/music station desktop using a Samsung 840 Pro and a Toshiba 3TB HDD, again with the beta FuzeDrive Pro client version
  • Windows Storage Server 2012 R2 box with the released FuzeDrive Server for Windows, which I’m experimenting with as a replacement for my existing Windows Home Server

Other test configurations running in the Tekinerd lab include an AMD-based Ubuntu 14.04 setup. I’m also experimenting with Windows Home Server 2010.