TEST BENCH AND PROTOCOL
As with all enterprise products we test, our evaluation of the Adaptec ASR-72405 focuses on long-term stability. To that end, we stress products not only to their maximum rates, but also with workloads suited to enterprise environments.
We use many off-the-shelf tests to determine performance, but we also have specialized tests to explore specific behaviors we encounter. With enterprise drives, you will see that we do not focus on many consumer-level use cases.
Our hope is to present tangible results that are relevant to the buying public.
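As a rough illustration of the sustained, enterprise-style workloads described above, the sketch below drives a long steady-state run with fio from Python. This is an example of the approach rather than our actual test suite; the target device path, block sizes, and runtimes are placeholders.

# Hypothetical sketch of a sustained, enterprise-style run driven with fio.
# The device path and runtimes are illustrative placeholders, not the exact
# parameters used for this review.
import subprocess

TARGET = "/dev/sdb"  # assumed block device under test

def run_fio(rw, block_size, iodepth, runtime_s):
    """Run a single time-based fio job against the raw device."""
    cmd = [
        "fio",
        "--name=enterprise_steady_state",
        f"--filename={TARGET}",
        "--ioengine=libaio",
        "--direct=1",            # bypass the OS page cache
        f"--rw={rw}",
        f"--bs={block_size}",
        f"--iodepth={iodepth}",
        "--numjobs=1",
        "--time_based",
        f"--runtime={runtime_s}",
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

# Long 4K random-write pass to push the array toward steady state,
# followed by a mixed read/write pass closer to a typical server workload.
run_fio("randwrite", "4k", iodepth=32, runtime_s=3600)
run_fio("randrw", "8k", iodepth=32, runtime_s=1800)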
SOFTWARE
You can manage the Series 7 adapters in a number of ways. The maxView Storage Manager is a browser-based application that exposes all of the major functions of the adapter. It is clean, simple, and easy to use.
Adaptec also includes a command line interface (CLI) that can control most functions of the adapter. In fact, we were able to script many of our tests using the CLI.
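As a rough sketch of what that scripting can look like, the example below calls Adaptec's ARCCONF command line utility from Python to snapshot the adapter and logical-drive state around a benchmark run. The controller number, output file, and sequence are placeholders for illustration, not our exact scripts.

# Illustrative sketch: capture controller state with ARCCONF before and after
# a benchmark run. Controller number 1 and the log path are assumptions.
import subprocess
import datetime

def arcconf(*args):
    """Invoke ARCCONF and return its text output."""
    result = subprocess.run(["arcconf", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

def snapshot_controller(log_path="controller_state.log"):
    """Record adapter and logical-drive status for later comparison."""
    stamp = datetime.datetime.now().isoformat()
    adapter = arcconf("getconfig", "1", "ad")   # adapter information
    volumes = arcconf("getconfig", "1", "ld")   # logical-drive information
    with open(log_path, "a") as log:
        log.write(f"--- {stamp} ---\n{adapter}\n{volumes}\n")

snapshot_controller()
# ... run a benchmark here, then call snapshot_controller() again ...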
If all else fails, you can always configure the adapters via the option ROM during POST.
CONFIGURATION
The Adaptec ASR-72405, like all RAID controllers, can be configured in countless ways. Unfortunately, it would take us months to test them all, so we are using the most common modes. As recommended by the maxView Storage Manager, write caching was disabled for all tests. All other options on the controller and the logical drive were left in their default states.
The Series 7 adapters also support multiple modes of operation: Physical Drive, Simple Volume and RAID. Physical drive mode operates the adapter in HBA mode, where all caching is disabled. Simple volume mode adds metadata to each drive so that you can use write caching. RAID mode opens up all of the features of the adapter.
All of our tests were performed in RAID mode.
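For reference, the snippet below shows one way the write-through (write caching disabled) setting could be applied and verified with ARCCONF. The SETCACHE syntax follows Adaptec's CLI documentation for this generation of cards, and the controller and logical-drive numbers are placeholders rather than our exact configuration.

# Hedged example: force write-through on a logical drive and dump its
# configuration for verification. Controller 1 and logical drive 0 are
# placeholders.
import subprocess

CONTROLLER = "1"
LOGICAL_DRIVE = "0"

# Put the logical drive into write-through mode (write caching disabled).
subprocess.run(["arcconf", "setcache", CONTROLLER,
                "logicaldrive", LOGICAL_DRIVE, "wt", "noprompt"], check=True)

# Print the logical-drive configuration so the cache state can be confirmed.
subprocess.run(["arcconf", "getconfig", CONTROLLER, "ld"], check=True)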
COMMENTS
In many published reports, a single Optimus cannot provide latency performance within that tight of a range. These are obviously system cache results.
All caching was disabled, except for any write coalescing that the ROC was doing behind the scenes. You have to remember that the SSDs were not the bottleneck on the latency measurements; in fact, they were only running at 40% of their specified rates. Also, every test we have performed, along with those from other sites, shows the Optimus to be a very stable SSD. So, to your point, there is some amount of caching happening outside of the DRAM, but it is very limited.
Any chance of reviewing the 71605Q, to see how it stands up with 1 or 2TB worth of SSD cache and a much larger spinning array? Since it comes with the ZMCP (Adaptec’s version of a BBU), you can even try it with write caching on.
I’d especially love to see maxCache 3.0 go head to head with LSI CacheCade Pro 2.0
Considering I already own the 71605Q, bought practically “sight unseen” since there are still no reviews available, it is nice to see that the numbers on the other cards in the line are living up to their claims.
Like I said in the review, I wish we had the time and resources to test every combination, but we can’t get to them all. I have both the 8- and 24-port versions and, yes, they always hit or exceed their published specifications. I agree, that would be a great head-to-head matchup. We have a lot of great RAID stories coming up, so maybe we can fit it in. Thanks for the feedback!
Yeah, after I posted that I started brainstorming all the possible valid combinations you could test with those two cards, and there are quite a few permutations… Also, it might not be too fair to the older LSI solution, but it’s what they have available, and I don’t know of any release schedule for CacheCade 3.0 or next-gen cards, so it might not hurt to wait for those.
I guess the best case to test would be best-case cache with worst-case spinners, so RAID-10/1E SSDs with RAID-6 HDDs. See how the two solutions do at overcoming some of the RAID-6 drawbacks, especially the write penalty.
I’m guessing the results would probably be fairly similar to the LSI Nytro review, but it would still be interesting to see how up to 2TB of SMART Optimus would do with a 20TB array.
Nice! How did you manage to connect the two cards (X2)?
“..We were able to procure a second ASR-72405 and split the drives evenly across the two…”
See my comment
I’m assuming they made a stripe on each RAID card and then did a software RAID 0 stripe of the arrays into one volume. That would be why the processors showed 50% utilization under load.
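For example, on Linux that could look something like the mdadm sketch below, where each controller exports its own hardware array and the two block devices are striped into a single RAID-0 volume. The device names are only placeholders for illustration.

# Hypothetical illustration of the guessed setup: stripe two hardware arrays
# (one per ASR-72405) into a single software RAID-0 volume with mdadm.
# The /dev/sdX names are assumptions.
import subprocess

HW_ARRAYS = ["/dev/sdb", "/dev/sdc"]  # one logical drive per controller

subprocess.run(["mdadm", "--create", "/dev/md0",
                "--level=0",
                f"--raid-devices={len(HW_ARRAYS)}",
                *HW_ARRAYS], check=True)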
Great review, though that SSD has me worried about sustained enterprise usage.