TEST SETUP AND METHODOLOGY
In testing enterprise drives, we focus on long-term stability. We stress products not only to their maximum rates, but also with workloads suited to enterprise environments. We use many off-the-shelf tests to determine performance, but we also have specialized tests to explore specific behaviors we encounter. With enterprise drives, you will see that we do not focus on many consumer-level use cases.
When testing SSDs, the drive is purged and then preconditioned into a steady state before we capture performance results. We also use a dedicated HBA and disable all write caching on the DUT; this ensures consistent results that are compliant with SNIA standards. Our hope is that we present tangible results that provide relevant information to the buying public.
SYSTEM COMPONENTS
| COMPONENT | MODEL |
| --- | --- |
| PC CHASSIS | Thermaltake Urban T81 |
| MOTHERBOARD | ASRock X99 WS-E |
| CPU | Intel Xeon E5-2690 v3 |
| CPU COOLER | Thermaltake Water 3.0 Ultimate |
| POWER SUPPLY | Thermaltake Toughpower 1500W Gold |
| GRAPHICS | MSI GT 720 |
| SYSTEM COOLING | be quiet! Silent Wings 2 |
| MEMORY | Crucial Ballistix Sport DDR4 2400MHz |
| STORAGE | Crucial MX200 500GB |
| RAID HBA | Adaptec 8805 |
| HBA DRIVER | |
| OS | Windows Server 2012 R2 |
| HGST ULTRASTAR HE8 FIRMWARE | A4GNT514 |
This Test Bench build was the result of some great relationships and purchases; our appreciation goes to those who jumped in specifically to help the cause. Key contributors to this build are our friends at ASRock for the motherboard and CPU, be quiet! for the cooling fans, and Thermaltake for the case. We have detailed all components in the table above, and each is linked should you wish to duplicate our system, as so many seem to do, or check the price of any single component. As always, we appreciate your support in any purchase through our links!
Also, a big thank you to Adaptec for supplying us with one of their 8805 RAID cards to test these drives in RAID 0, 5, 6, 10, 50, and 60 configurations. Because there are so many options to configure when setting up an array, depending on the end use, we decided to stick with mostly default settings across all RAID arrays. The performance mode was set to Dynamic, write-back and read caching were enabled for each array, power saving was disabled, and the stripe size was set to 256KB.
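That 256KB stripe size determines the full-stripe write size at each RAID level, which in turn affects how well large sequential writes avoid read-modify-write parity penalties. As a rough illustration, the sketch below walks through the standard RAID capacity and stripe-width arithmetic; the 8-drive array count, the helper function, and the two-set span for RAID 50/60 are our own illustrative assumptions, not settings pulled from our arrays.

```python
# Back-of-the-envelope RAID geometry math. Hypothetical helper for
# illustration only -- the 256KB stripe size is the value we actually
# configured; drive count and span are assumed.

STRIPE_KB = 256  # stripe (chunk) size per drive, as set on the Adaptec 8805

def raid_geometry(level: str, n: int, span: int = 2):
    """Return (usable-capacity fraction, full-stripe write size in KB)."""
    if level == "0":
        return 1.0, n * STRIPE_KB
    if level == "5":                  # one parity drive per stripe
        return (n - 1) / n, (n - 1) * STRIPE_KB
    if level == "6":                  # two parity drives per stripe
        return (n - 2) / n, (n - 2) * STRIPE_KB
    if level == "10":                 # mirrored pairs, striped
        return 0.5, (n // 2) * STRIPE_KB
    if level == "50":                 # RAID 5 sets, striped across `span` sets
        per_set = n // span
        return (per_set - 1) / per_set, span * (per_set - 1) * STRIPE_KB
    if level == "60":                 # RAID 6 sets, striped across `span` sets
        per_set = n // span
        return (per_set - 2) / per_set, span * (per_set - 2) * STRIPE_KB
    raise ValueError(level)

for lvl in ("0", "5", "6", "10", "50", "60"):
    frac, fsw = raid_geometry(lvl, n=8)
    print(f"RAID {lvl:>2} on 8 drives: {frac:.0%} usable, {fsw}KB full stripe")
```

On the parity levels, writes smaller than that full-stripe size force the controller to read and recompute parity, which is part of why stripe size is worth noting alongside the cache settings.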
SNIA IOPS TESTING
The Storage Networking Industry Association maintains an industry-accepted performance test specification for solid state storage devices. Some of the tests are complicated to perform, but they allow us to look at some important performance metrics in a standard, objective way.
SNIA’s Performance Test Specification (PTS) includes IOPS testing, but it is much more comprehensive than just running 4KB writes with Iometer. SNIA testing is more like a marathon than a sprint. In total, there are 25 rounds of tests, each lasting 56 minutes. Each round consists of 8 different block sizes (512 bytes through 1MB) and 7 different access patterns (100% reads to 100% writes). After 25 rounds are finished (just a bit longer than 23 hours), we record the average performance of 4 rounds after we enter steady state.
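"Steady state" has a precise meaning in the PTS: the tracked variable (4K random-write IOPS, for example) must hold inside defined excursion bands across a measurement window of rounds. Here is a minimal sketch of that check in Python, assuming the common PTS defaults of a five-round window with 20% data-excursion and 10% slope-excursion limits; treat it as an illustration, not our exact harness.

```python
# Minimal sketch of a SNIA PTS steady-state check. Steady state is
# declared when, over a measurement window of rounds, the tracked
# variable stays within a 20% data-excursion band and the drift of its
# least-squares fit stays within 10% of the window average.

def steady_state(window: list[float]) -> bool:
    n = len(window)
    avg = sum(window) / n
    # Data excursion: max-min spread within 20% of the window average.
    if max(window) - min(window) > 0.20 * avg:
        return False
    # Least-squares slope of the window, in units per round.
    xs = range(n)
    x_avg = sum(xs) / n
    slope = sum((x - x_avg) * (y - avg) for x, y in zip(xs, window)) \
            / sum((x - x_avg) ** 2 for x in xs)
    # Slope excursion: total drift across the window within 10% of average.
    return abs(slope) * (n - 1) <= 0.10 * avg

iops_per_round = [4100, 4020, 3990, 3975, 3968]  # made-up example numbers
print(steady_state(iops_per_round))  # True for this gentle settling curve
```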
- Purge: Secure Erase, Format Unit, or vendor-specific method
- Preconditioning: 2x capacity fill with 128K sequential writes
- Each round is composed of 0.5K, 4K, 8K, 16K, 32K, 64K, 128K, and 1MB accesses
- Each access size is run at 100%, 95%, 65%, 50%, 35%, 5%, and 0% Read/Write Mixes, each for one minute.
- The test is composed of 25 rounds (one round takes 56 minutes, so 25 rounds = 1,400 minutes); the loop structure is sketched below
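To make that schedule concrete, here is a minimal sketch of the round structure in Python. `run_workload` is a hypothetical stand-in for the actual I/O generator; only the loop nesting and the timing arithmetic mirror the list above.

```python
# Sketch of the SNIA PTS IOPS schedule described above.

BLOCK_SIZES = ["0.5K", "4K", "8K", "16K", "32K", "64K", "128K", "1M"]
READ_MIXES  = [100, 95, 65, 50, 35, 5, 0]   # percent reads
MINUTES_PER_POINT = 1
ROUNDS = 25

def run_workload(block_size: str, read_pct: int, minutes: int,
                 qd: int = 32) -> float:
    """Hypothetical stand-in: drive the DUT with this access pattern
    for `minutes` at queue depth `qd` and return the measured IOPS."""
    return 0.0  # placeholder -- a real harness would issue I/O here

results = []
for rnd in range(ROUNDS):
    round_data = {}
    for bs in BLOCK_SIZES:
        for mix in READ_MIXES:
            round_data[(bs, mix)] = run_workload(bs, mix, MINUTES_PER_POINT)
    results.append(round_data)

# 8 block sizes x 7 mixes x 1 minute = 56 minutes per round;
# 25 rounds = 1,400 minutes, a bit over 23 hours.
print(len(BLOCK_SIZES) * len(READ_MIXES) * MINUTES_PER_POINT)  # 56
```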
Unlike our other performance tests, each SNIA test point lasts only a relatively short time (1 minute), but the suite covers many more access patterns and transfer sizes. All tests were done at a QD of 32. While this sort of testing is typical of our SSD reviews, it also serves as a great illustration of the performance differentiation between SSDs and HDDs. Based upon the results, we can see that this HDD's performance is heavily write-biased, unlike the SSDs we are used to reviewing, which are typically read-biased. Also, total IOPS performance is much lower due to higher latency, as seen below.
LATENCY
To specifically measure latency, we use a series of 512B, 4K, and 8K measurements. At each block size, latency is measured for 100% read, 65% read/35% write, and 100% write mixes.
Here we examine the latency of a single HDD. Overall latency at a QD of 32 averaged around 200ms on the 100% read side. As we move toward writes, we can see that this native 4KB-sector HDD excels at 4KB and 8KB accesses, averaging 97ms for full writes. However, 512B performance lags behind.
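These latency figures are consistent with the IOPS results via Little's Law: at a fixed queue depth, average latency is roughly queue depth divided by IOPS. The implied IOPS below are back-calculated from the averages above, not separate measurements.

```python
# Little's Law sanity check: at a fixed queue depth,
#   average latency ~= queue_depth / IOPS
# The IOPS values below are back-calculated from the latencies in the
# text, not independently measured.

QD = 32

def implied_iops(avg_latency_ms: float) -> float:
    return QD / (avg_latency_ms / 1000.0)

print(implied_iops(200))  # ~160 IOPS at 200ms average read latency
print(implied_iops(97))   # ~330 IOPS at 97ms average write latency
```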
Looking at maximum latency, you can start to see why HDDs are increasingly relegated to secondary storage these days. We see a trend similar to the average latency results: 4KB and 8KB maximum latency improves as we move to full writes, while 512B performance isn't as good.