LSI MegaRAID CacheCade Pro 2.0 Review – Total Storage Acceleration Realized

Today we are evaluating the newest version of CacheCade, a storage caching system developed by LSI for use with their MegaRAID controllers. The system aims to fuse HDD capacity with SSD speed, accelerating both new deployments and storage subsystems that are already in place.

Our previous review of CacheCade v1.0 illustrated some of the dramatic performance gains to be had from selective, intelligent caching of data in storage subsystems.

That version, while extremely successful and a boon for users on a global scale, had one major limitation: it cached only read data.

LSI intended from the very outset to add write caching to CacheCade as the product matured. Today that goal has been realized: both read and write data can now be cached. This is a remarkable achievement, as competing products (e.g., Adaptec's MaxIQ) are limited to caching read data only. This puts LSI in the enviable position of offering something that other companies simply cannot.

SSDs have exploded into the data center realm as an extremely disruptive technology, redefining storage performance and possibilities not only for users, but also for the manufacturers of RAID controllers and storage products.

The ever-increasing demand for data in today’s burgeoning data centers leaves administrators with few options: either continue down the safe, well-beaten path of ‘inexpensive’ exponential HDD array growth, or delve into the super-accelerated but ‘expensive’ SSD realm. This leaves many enterprise users with large databases caught in the middle, running large HDD arrays yet needing more performance from their existing infrastructure. Cost and space constraints are always a concern, and that is the void CacheCade aims to fill.

The inclusion of both read and write caching pushes this technology further into must-have territory for users looking to reduce their costs while boosting performance. Many still consider SSDs far too expensive for integration into the data center, and that is one of the very perceptions CacheCade Pro is looking to address with this technology.

CACHECADE PRO

CacheCade Pro falls into the category of “Tiered Storage”: it allows another ‘tier’ of cache to be built on top of a current HDD subsystem. In a typical large enterprise server today, RAID controllers connect to massive HDD arrays that serve data out to ‘customers’. There is a level of cache built into the controller itself, but it is very small, usually around 1 GB of RAM and rarely above 4 GB, and it is used mainly to absorb heavy write loads. On server arrays where the data can run to hundreds of terabytes, 1-4 GB of cache is rather inconsequential.
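
To put that cache-to-capacity ratio in perspective, here is a quick back-of-the-envelope calculation. The 100 TB array size is simply an illustrative assumption, not a measured configuration:

```python
# Back-of-the-envelope illustration: how much of a large HDD array a
# typical on-controller DRAM cache can actually cover.
# The 100 TB array size is an assumption for illustration only.
controller_cache_gb = 1                 # typical on-board cache, per the text
array_capacity_gb = 100 * 1024          # hypothetical 100 TB array

coverage = controller_cache_gb / array_capacity_gb
print(f"Controller cache covers {coverage:.6%} of the array")
# -> Controller cache covers 0.000977% of the array
```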

Enter the CacheCade approach, which creates a supplemental layer of cache between the controller and the HDD array. This cache can be far larger because it is built from SSDs, which enjoy much lower access times and much higher random I/O speeds than HDDs can ever hope to attain. One of the main weaknesses of an HDD-based array is small random writes, and the write caching included in this version offloads a tremendous amount of the most strenuous activity the drives are asked to perform. In the typical server usage model, mixed read/write access is by far the most demanding workload, so any alleviation of the write burden will yield outsized results.
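
For readers who like to think in code, the sketch below captures the general behavior just described: reads that hit the SSD tier are served at SSD latency, random writes land on the SSD tier first, and dirty blocks are destaged to the HDD array later. This is a purely conceptual illustration with made-up class and method names, not LSI's actual implementation:

```python
# Conceptual sketch of an SSD write-back caching tier in front of an HDD
# array. NOT LSI's CacheCade implementation; all names are hypothetical.
from collections import OrderedDict

class SsdWriteBackCache:
    def __init__(self, hdd_array, capacity_blocks):
        self.hdd = hdd_array              # slow backing store (dict-like)
        self.capacity = capacity_blocks   # SSD tier size, in blocks
        self.blocks = OrderedDict()       # LRU order: block_id -> (data, dirty)

    def read(self, block_id):
        if block_id in self.blocks:       # cache hit: served at SSD latency
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id][0]
        data = self.hdd[block_id]         # cache miss: fall through to the HDDs
        self._insert(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        # A random write lands on the SSD tier; the HDDs are updated lazily.
        self._insert(block_id, data, dirty=True)

    def _insert(self, block_id, data, dirty):
        self.blocks[block_id] = (data, dirty)
        self.blocks.move_to_end(block_id)
        while len(self.blocks) > self.capacity:
            evicted_id, (evicted_data, was_dirty) = self.blocks.popitem(last=False)
            if was_dirty:                 # write-back on eviction
                self.hdd[evicted_id] = evicted_data

    def flush(self):
        # Destage all dirty blocks to the HDD array.
        for block_id, (data, dirty) in self.blocks.items():
            if dirty:
                self.hdd[block_id] = data
                self.blocks[block_id] = (data, False)
```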

Let’s start with an understanding of the advantages and disadvantages of each solution:

HDD ARRAYS

Pros

  • Low cost: when it comes to cost per GB, HDD reigns
  • Usually involves existing infrastructure
  • High storage capacity

Cons

  • High power consumption
  • High Latency (Slow)
  • Large footprint
  • Produce heat/need cooling

SSD ARRAYS

Pros

  • Extreme speed compared to HDD
  • Low latency
  • Low power requirements
  • Generate low amount of heat (negligible)
  • Small footprint

Cons

  • Price per GB is very high, especially for SLC drives
  • Low capacity
  • Endurance concerns

 

In the end, there is a complex mixture of factors to consider. If a customer already has a large fleet of HDD-equipped servers, their infrastructure is already in place. Power, cooling, and space are all major concerns in these scenarios, and if you want to speed up your systems to handle larger loads, you will need more of all three.

Switching to an all-SSD or all-SAS-HDD setup can be cost prohibitive and very complex, especially in scenarios where downtime is absolutely not tolerated. SSDs are much easier in terms of power, cooling, and space, not to mention that one SSD can do the work of several HDDs.

The key is to accelerate the performance of your new or existing HDD arrays without incurring the massive costs and investments that a total switch to SSD would require.

That is exactly the concept behind CacheCade Pro.

 NEXT: Basic Concepts and Application


3 comments

  1.

    Wow. Using CacheCade 2.0 is a risky thing. I tried to get it working on ESXi 5.0 with a SAS 9260-8i controller. My initial problem was that I had freezes of my datastore during operation. This was solved by updating the MegaRAID driver for ESXi 5.0.

    After 1 week of operation the SSD has now totally failed. Currently I am rebuilding the RAID 5.
    Now I hear “never use an SSD alone, always RAID 1 in case of failure”.

    I had only 1 SSD (now broken), configured for read/write caching.
    Cross your fingers that I get it back online, or a minimum of 2 weeks of data is totally lost!

    •

      So, you used a new feature from your RAID controller vendor without updating your drivers or anything else, and then experienced issues? Best to call Sherlock Holmes for this one. Before doing anything of this nature, create a backup to a separate RAID controller or, ideally, a separate machine, so that if something does go wrong you still have all your data. It’s the difference between being set back 6 hours and being set back 2 weeks plus the time to recover data. Honestly, this is computer maintenance 101. I shudder to think that you might be a sysadmin, as this would be the most basic part of your core duties: data integrity.

    •

      Related note: I will be trying this on 2008 R2 (physical, not virtual) and look forward to the results. It should be very interesting, considering the success I have had deploying accelerated volumes on my customers’ client machines. The performance from just a consumer-level read cache has been noticeable to my customers, to the point where they can identify their hot data based on the access time improvements from the caching algorithm. Regardless of how long NAND-based SSDs continue to be relevant (see ‘6.5nm lithography and above’), they are, in theory and in practice, the simplest way to alleviate the bottlenecks present in day-to-day storage applications for home users and sysadmins alike. Big fan +1
