SSD performance: Benchmarks that matter

For SSDs, sequential read/write performance does not tell the whole story

There is a growing performance-scaling discrepancy between CPUs and hard-disk drives (HDDs). While CPU performance has improved by 175 times in the last 13 years, HDD performance has improved by only 1.3 times. These mechanical, rotating drives represent a significant performance bottleneck in computing platforms.

Solid-state drives (SSDs) can significantly increase the performance of computer storage subsystems. However, most SSD manufacturers use access time and sequential read/write performance, or “throughput,” to describe the performance of their devices. Though these are useful measures, they do not tell the complete story.

A drive’s performance can vary with data transfer size and with different combinations of sequential and random reads and writes. For example, about 70% of the applications run by most end users on a daily basis use file transfer sizes of less than 16 Kbytes. Here, random read and write speeds are a more important indicator of performance than throughput.

Benchmarks

While different industry benchmarking suites provide standardized methods to measure computing performance, not all benchmarking suites are equal. Some capture overall system performance very well but cannot adequately measure the storage device’s performance. Ideally, storage device manufacturers should provide performance measures that demonstrate how their products operate under real-world workloads. In conjunction with system-level benchmarking scores, these would give design engineers a better understanding of drive dynamics.

HDD and SSD specifications

HDD performance is determined mostly by drive latency and seek time, and is relatively easy to communicate. Latency can be computed directly from the RPM specification: average rotational latency, the time for half a revolution, equals 30 divided by the spindle speed in RPM (in seconds). Data transfer time, the time it takes to move the data to or from the HDD, is usually less than 0.1 ms and is insignificant compared to drive latency and seek time. Engineers can look at RPM and seek-time specifications and differentiate one HDD from another.
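
To make that formula concrete, here is a minimal Python sketch that computes average rotational latency for a few common spindle speeds; the RPM values are just typical examples.

# Average rotational latency is the time for half a revolution:
# latency (seconds) = 30 / RPM, or 30,000 / RPM in milliseconds.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in milliseconds for a given spindle speed."""
    return 30_000.0 / rpm

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")

# Prints 5.56 ms for 5400 RPM, 4.17 ms for 7200 RPM, 3.00 ms for 10,000 RPM,
# and 2.00 ms for 15,000 RPM.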

SSD performance, on the other hand, is more complex. Most SSD vendors provide sequential read and write performance specs. Though these numbers show how efficiently a drive can use the SATA or PATA bus during a read or write operation, they do not necessarily show how the SSD will perform under real-world workloads, since that is highly dependent on random reads and writes.

Adding to the complexity is SSD write performance. Although an SSD’s write performance is faster than an HDD’s, it is slower than the SSD’s read performance because the underlying NAND write operation is slower than the read operation. It takes approximately 25 μs to read a page of data from the NAND nonvolatile memory area to the internal buffer, but it can take 800 μs or more to write a page of data from the internal buffer back to the nonvolatile memory area. An SSD write may also require a NAND erase operation and the movement of data from one NAND block to another. These involve multiple page writes and block-erase operations, which slow down the SSD write operation even more. This is why SSD vendors cannot rely on a single metric like RPM; they need to communicate read and write performance separately.
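
To see why erase and data-movement overhead matters, the rough model below estimates the average cost of a page write. The 25-μs read and 800-μs program times come from the paragraph above; the block-erase time and the overhead parameters are illustrative assumptions only, not figures for any particular device.

# Back-of-the-envelope model of effective NAND write cost per page.
# Page read (~25 us) and page program (~800 us) are from the article;
# the erase time and overhead parameters below are illustrative assumptions.

PAGE_READ_US = 25       # read a page into the internal buffer
PAGE_PROGRAM_US = 800   # program a page from the buffer into NAND
BLOCK_ERASE_US = 2_000  # assumed block-erase time

def effective_write_us(erase_fraction: float, moved_pages: float) -> float:
    """Average time to service one page write.

    erase_fraction: fraction of writes that trigger a block erase (assumption)
    moved_pages: valid pages copied (read + program) per write, on average (assumption)
    """
    move_cost = moved_pages * (PAGE_READ_US + PAGE_PROGRAM_US)
    erase_cost = erase_fraction * BLOCK_ERASE_US
    return PAGE_PROGRAM_US + move_cost + erase_cost

print(effective_write_us(erase_fraction=0.0, moved_pages=0))  # 800 us: clean page available
print(effective_write_us(erase_fraction=0.1, moved_pages=4))  # 4300 us: erase/move overhead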

Further adding to the complexity is an SSD’s random-I/O performance. An SSD with good sequential read and write performance does not necessarily have good random read and write performance. Real-world workloads show that more than 50% of I/O transfers are small (<16 Kbytes) and random. Communicating only sequential performance or large-transfer specifications does not adequately represent typical computing workloads. A more comprehensive SSD performance profile would include a combination of sequential and random reads and writes at different transfer sizes.
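
The quick sketch below shows why a sequential spec alone can mislead: it converts small random-transfer IOPS into throughput and compares that with a sequential rating. All of the drive numbers are hypothetical and exist only to illustrate the gap.

# Convert random IOPS at a small transfer size into throughput and compare
# with the sequential spec. All drive figures here are hypothetical.

def random_throughput_mbps(iops: float, transfer_kb: float) -> float:
    """Throughput (Mbytes/s) delivered by random I/O at a given IOPS and transfer size."""
    return iops * transfer_kb / 1024.0

seq_read_mbps = 250.0        # hypothetical sequential read rating
rand_4k_read_iops = 35_000   # hypothetical 4-Kbyte random read IOPS
rand_4k_write_iops = 3_500   # hypothetical 4-Kbyte random write IOPS

print(f"Sequential read : {seq_read_mbps:.0f} Mbytes/s")
print(f"4K random read  : {random_throughput_mbps(rand_4k_read_iops, 4):.0f} Mbytes/s")
print(f"4K random write : {random_throughput_mbps(rand_4k_write_iops, 4):.0f} Mbytes/s")
# The small random transfers that dominate real workloads run at a fraction
# of the headline sequential bandwidth.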

Computing equipment manufacturers use popular industry benchmarks like PCMark and MobileMark to measure product responsiveness and system performance by performing standardized tests on real-world workloads. Some benchmarks are designed to test specific segments, such as gaming. Other benchmarks are more focused on overall system responsiveness and not just the responsiveness of the storage device. All benchmarks have underlying assumptions and require measurements to follow specific methods.

[Figure: distribution of file transfers by size and random vs. sequential access under Vista MobileMark ’07]

Using Vista MobileMark ’07, the chart in the figure shows a real-world usage model and the importance of measuring random reads and writes. Under normal usage, the majority of file transfers are small and random: about 75% of writes and 50% of reads are small random operations. Measuring only sequential performance therefore captures just a small part of the actual workload.

System performance

A performance profile or a dedicated SSD benchmarking report will be helpful for engineers, but it would overwhelm the average customer with technical detail. SSD customers would benefit most from a performance metric that differentiates one SSD from another in terms of true system performance.


System-performance benchmarks such as PCMark can take into consideration read/write ratio, I/O transfer sizes, random/sequential transfer ratio, sequential read/write performance, and random read/write performance to provide a thorough picture of drive performance. These metrics provide an apples-to-apples comparison of a storage solution’s impact on system performance.
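
As a sketch of how such a weighted view can be built, the snippet below combines per-category throughput with a workload mix, weighting each category by the time it consumes. Both the workload mix and the drive profile are hypothetical numbers chosen only to illustrate the calculation.

# Workload-weighted effective throughput: weight each I/O category by the
# fraction of data it moves, so slow categories are not hidden by fast ones.
# Workload mix and drive profile below are hypothetical.

workload_mix = {      # fraction of bytes moved in each category
    "seq_read": 0.20,
    "seq_write": 0.10,
    "rand_read": 0.40,
    "rand_write": 0.30,
}

drive_profile = {     # effective throughput per category, Mbytes/s
    "seq_read": 250.0,
    "seq_write": 180.0,
    "rand_read": 130.0,
    "rand_write": 14.0,
}

def weighted_throughput(mix: dict, profile: dict) -> float:
    """Harmonic (time-weighted) mean: total data divided by total time spent."""
    total_time = sum(mix[k] / profile[k] for k in mix)
    return 1.0 / total_time

print(f"{weighted_throughput(workload_mix, drive_profile):.1f} Mbytes/s effective")
# Despite strong sequential numbers, slow small random writes drag the
# workload-weighted figure down to roughly 39 Mbytes/s in this example.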

As noted, all benchmarks have underlying assumptions and require measurements to follow specific methods. To give an example, using a PCMark Vantage overall system score, one would find an Intel X25-M SATA SSD to have a 57% higher score than a 5400-RPM HDD (running on a system with a Core 2 Duo processor and 2 Gbytes of DRAM). This means you will, on average, experience greater than 50% improved responsiveness. Because tasks differ in file sizes and in their mix of random versus sustained operations, some application benchmarks will show more than 50% improvement and others a bit less. For example, the same SSD has been benchmarked to be 40% faster running a spyware scan with Microsoft Windows Defender and 100% faster exporting e-mail in Microsoft Outlook. This illustrates why a sequential-only benchmark will skew results for everyday tasks, and why system-level benchmarks are the best measure of a storage solution’s contribution to overall system performance. ■
