SCALABILITY
Before we started in on our normal tests, we took a look at how performance scaled from 1 to 24 drives. This should show us whether the drives or the RAID adapter is limiting performance. This is where the fun starts. No matter how fast an individual component is, there is always a bottleneck somewhere in the system. Whether it is the storage devices, the PCIe bus interface or the ROC (RAID-on-Chip), the bottleneck in these large systems is always a moving target. This test should show us where we are limited.
We measured sequential and random performance, starting with a single drive and adding drives until we reached 24. We limited each test to 5 minutes and restricted the LBA range to 50% of the total capacity.
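A run like this can be expressed as an fio job file. The fragment below is a hypothetical sketch of the random-read case only; the device path, block size, queue depth and job count are assumptions for illustration, not the review's actual settings:

```ini
; Hypothetical fio job approximating the review's random-read runs:
; 5-minute time-based test, LBA span capped at 50% of device capacity.
[global]
ioengine=libaio
direct=1
time_based
runtime=300          ; 5 minutes per test
group_reporting

[randread-50pct]
filename=/dev/sdb    ; placeholder device path
rw=randread
bs=4k                ; assumed block size
iodepth=32           ; assumed queue depth
numjobs=8
size=50%             ; restrict the tested LBAs to half the drive
```

Raising `size=50%` to `size=100%` would reproduce the full-LBA condition discussed later in the review.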
For all operations, the system scaled almost linearly until we reached the maximum performance of the ASR-72405. Random read operations required only 8 drives to reach over 550,000 IOPS, while write operations required nearly 21 drives to reach the same levels.
Sequential performance was nearly identical for both reads and writes. We saturated the system performance after 13 drives.
After we finished the drive scaling tests, we started to notice something interesting. When tested across all LBAs, we observed significantly lower performance in our random tests. There was no difference with sequential tests, which was expected.
Once we passed the 50% mark for LBAs tested, performance dropped to roughly 370,000 IOPS for both read and write operations. Since we normally test 100% of the LBAs, the tests throughout the rest of the review reflect that.
From here, we can draw a couple of conclusions. The first is that even with 24 native ports, the Adaptec ASR-72405 is clearly outmatched by the 24 SMART Optimus SSDs. When you take into account that the ASR-72405 is limited to 370,000 IOPS in full-LBA testing, the best option for balancing performance and cost is the ASR-71605, the 16-port version.
Now that we know the ASR-72405 is the bottleneck, we wondered what would happen if we removed it. We were able to procure a second ASR-72405 and split the drives evenly across the two. You will notice that many of the subsequent performance tests include results from the dual-ASR-72405 configuration. Those results are based on full-LBA testing, but we also wanted to see what we could hit with both cards using 50% of the LBAs. Here's what we saw.
1 million input/output operations per second…
That really wasn't that hard. All you need is two RAID adapters and 24 SSDs, which may sound like a lot, but compared to existing solutions that claim 1M IOPS, it is actually a reasonable proposition. Now that we have that out of the way, let's move on to some more meaningful tests.
In many published reports, a single Optimus cannot provide latency performance within that tight a range. These are obviously system cache results.
All caching was disabled, except for any write coalescing that the ROC was doing behind the scenes. Keep in mind that the SSDs were not the bottleneck in the latency measurements; in fact, they were only running at 40% of their specified rates. Also, every test we have performed, along with those from other sites, shows the Optimus to be a very stable SSD. So, to your point, there is some amount of caching happening outside of the DRAM, but it is very limited.
Any chance of reviewing the 71605Q, see how it stands up with 1 or 2TB worth of SSD cache and a much larger spinning array? Since it comes with the ZMCP (Adaptec’s version of BBU) you can even try it with write caching on.
I’d especially love to see maxCache 3.0 go head to head with LSI CacheCade Pro 2.0
Considering I actually already own the 71605Q, and bought it practically "sight unseen" since there are still no reviews available, it is nice to see that the numbers on the other cards in the line are living up to their claims.
Like I said in the review, I wish we had the time and resources to test every combination, but we can't get to them all. I have both the 8 and 24-port versions and, yes, they always hit or exceed their published specifications. I agree, that would be a great head-to-head matchup. We have a lot of great RAID stories coming up, so maybe we can fit it in. Thanks for the feedback!
Yeah, after I posted that I started brainstorming all the possible valid combinations you could test with those two cards, and there are quite a few permutations… It also might not be too fair to the older LSI solution, but it's what they have available, and I don't know of any release schedule for CC 3.0 or next-gen cards, so it might not hurt to wait for those.
I guess the best case to test would be best-case cache with worst-case spinners: RAID-10/1E SSDs with RAID-6 HDDs. See how the two solutions do at overcoming some of the RAID-6 drawbacks, especially the write penalty.
I’m guessing the results would probably be fairly similar to the LSI Nytro review but still would be interesting to see how up to 2TB of SMART Optimus would do with a 20TB array.
Nice! How did you manage to connect the two cards (X2)?
“..We were able to procure a second ASR-72405 and split the drives evenly across the two…”
See my comment
I'm assuming they made a stripe on each RAID card and then did a software RAID 0 stripe of the two arrays into one volume. That would explain why the processors showed 50% utilization until put under load.
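If that guess is right, the layering on Linux could look something like the sketch below. This is purely hypothetical: the review never states how the two cards were combined, and the device names are placeholders for whatever block devices the two hardware arrays expose.

```shell
# Each ASR-72405 exports its 12-drive hardware stripe as one block device
# (placeholder names /dev/sda and /dev/sdb). A software RAID 0 via mdadm
# then joins the two hardware arrays into a single volume.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# Inspect the resulting stripe before creating a filesystem on it.
cat /proc/mdstat
```

With this layout, I/O issued against /dev/md0 is split across both adapters, which would match each card sitting near 50% until the volume is pushed hard.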
Great review, though that SSD has me worried about sustained enterprise usage.