The Storage Networking Industry Association (SNIA) today announced the release of a specification that can be used to test the performance of solid-state drives, regardless of the vendor.
SNIA, an industry trade group of vendors and universities that develops and promotes standards for storage systems, said its Solid State Storage Initiative (SSSI) came up with the SSD Performance Test Specification to level the playing field in benchmark testing.
The SSSI is releasing two versions of the test specification: one this week for enterprise SSDs, and another for client-side SSDs, which will be released in the third quarter of this year.
The Enterprise Performance Test Specification defines a set of device-level tests and methodologies intended to enable comparative testing of SSDs in enterprise systems, such as storage arrays. Until now, there has been no widely accepted methodology for measuring SSD device performance; each manufacturer used its own measurement methods to derive the performance specs for its products.
"You couldn't compare one data sheet to another data sheet and expect to understand if one drive was faster than another because the manufacturers used different metrics," said Paul Wassenberg, chairman of the SSSI Governing Board. "Today, the SSD market is where the HDD market was in the 1970s. There are a lot of different suppliers offering products with a lot of different abilities, and there's a lot of variability."
More than 40 companies spent two years developing the Performance Test Specification (PTS), Wassenberg said. Among those companies were all of the major SSD and storage system manufacturers, including Samsung, Intel, Marvell, Toshiba, IBM, Seagate, Dell, EMC, Hitachi Data Systems, and Western Digital.
Jim Handy, an analyst with the market research firm Objective Analysis who was on the specification's technical working group, said, "The SNIA test specification is not an end-all, but it is certainly a big step ahead of the specifications that are commonly used by SSD makers."
Handy said one of the most important aspects of the specification is that it's careful to ensure that SSDs are first "pre-conditioned" prior to testing, meaning data is first written to them and then erased to break the drives in.
All SSDs slow down after initial use: once a sufficient amount of data has been written to them, the drive's controller begins to move data around in a process known as the read-modify-erase-write cycle. Simply put, when an SSD is new, data can be written to it without interference from management software. But once the drive has had a certain amount of data written to it, the NAND flash memory used to make SSDs requires that old data first be marked for deletion before new data can be written in its place. Then, once the new data is written, the blocks marked for deletion are actually erased in a process known as "garbage collection."
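The mechanism can be illustrated with a toy model (a sketch only: `ToySSD`, its page and block counts, and the erase counter are invented for illustration and are not part of the PTS):

```python
class ToySSD:
    """Toy flash model: pages are written once; making room for overwritten
    data requires erasing, which is the work garbage collection performs."""
    PAGES_PER_BLOCK = 4

    def __init__(self, blocks=8):
        self.free_pages = blocks * self.PAGES_PER_BLOCK
        self.stale_pages = 0   # old copies merely *marked* for deletion
        self.erase_ops = 0     # proxy for garbage-collection overhead

    def write_page(self, overwrite=False):
        if overwrite:
            self.stale_pages += 1      # old copy is marked, not yet erased
        if self.free_pages == 0:
            self._garbage_collect()
        self.free_pages -= 1

    def _garbage_collect(self):
        # erase stale pages to reclaim space, one block's worth at a time
        reclaimed = min(self.PAGES_PER_BLOCK, self.stale_pages)
        self.stale_pages -= reclaimed
        self.free_pages += reclaimed
        self.erase_ops += 1

ssd = ToySSD()
for _ in range(32):               # fresh-out-of-box fill: writes land on free pages
    ssd.write_page()
fob_erases = ssd.erase_ops        # no erases while the drive is fresh
for _ in range(10):               # once full, every overwrite forces an erase first
    ssd.write_page(overwrite=True)
print(fob_erases, ssd.erase_ops)  # 0 erases while fresh, 10 once full
```

The fresh drive absorbs every write without an erase; the full drive pays an erase for each overwrite, which is why performance falls off after initial use.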
SNIA has also created a set of terms to describe the lifecycle of an SSD. A new SSD is called FOB, for "fresh out of the box." After its initial use, an SSD settles into a stage that SNIA terms the Steady State, which is when performance levels out and can be accurately measured. "In terms of performance, reads are fastest, writes [are] slower and erases are the slowest yet," Wassenberg said.
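Steady state is, in essence, a convergence test: the tracked variable (IOPS, say) must stop drifting across a window of test rounds. The sketch below paraphrases the PTS-style criteria (a spread within 20% of the window average and a best-fit drift within 10% of it); the thresholds and function are illustrative, not the normative spec text:

```python
def is_steady_state(rounds, window=5):
    """Rough PTS-style convergence check (paraphrased, not normative):
    over the last `window` rounds, the spread of the tracked variable
    must stay within 20% of its average, and the drift implied by a
    least-squares linear fit must stay within 10% of that average."""
    if len(rounds) < window:
        return False
    y = rounds[-window:]
    avg = sum(y) / window
    if max(y) - min(y) > 0.20 * avg:
        return False                      # too much round-to-round spread
    x_mean = (window - 1) / 2
    slope = sum((i - x_mean) * (v - avg) for i, v in enumerate(y)) \
            / sum((i - x_mean) ** 2 for i in range(window))
    return abs(slope * (window - 1)) <= 0.10 * avg   # drift across the window

# Write IOPS by test round: fast out of the box, then leveling off.
iops_by_round = [9000, 7000, 5200, 4100, 4050, 4000, 3980, 3960, 4010]
print(is_steady_state(iops_by_round[:5]))  # False: still declining
print(is_steady_state(iops_by_round))      # True: last five rounds are flat
```

A benchmark run keeps looping until this kind of check passes, and only then records results, so that FOB performance never contaminates the numbers on the data sheet.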
Handy and Tom Coughlin, founder of consultancy Coughlin Associates, teamed up with Calypso to compile a study on SSD performance that involved 18 different drives. "We found that there was no performance consistency between any two SSDs. They vary all over the map," Handy said, adding that some single-level cell (SLC) SSDs perform worse than less expensive multi-level cell (MLC) SSDs.
"And some MLC-based SSDs are slower than enterprise hard-disk drives once they have entered their steady state," Handy said. Of the 18 drives tested under the PTS, no two performed alike.
How long it takes an SSD to reach a steady state that is reliable for testing varies from product to product, but the new spec requires that an SSD go through five separate performance tests before it is benchmarked.
"The key thing with the PTS spec is it tells you what to do and how to prepare the drive. Is this the only way to test performance? No. But, over time, we found it to be very efficient and the most dependable way. You can run this test multiple times and get the same result," Wassenberg said.
The PTS test sequence runs each drive through the same fixed steps: purge the device, pre-condition it with writes, then run the workload repeatedly until the drive reaches steady state, at which point performance is recorded.
The PTS describes a reference test hardware and software platform used to validate the specification itself. The reference test platform was developed by SSSI member Calypso Systems. Calypso built a hardware platform that has multiple bays to test drives in parallel and developed the software that adheres to the specification.
"You pretty much plug in a drive and it does the test," Wassenberg said. "If you want to test a drive, they'll test it for a fee. This reference test platform is the gold standard."
But Wassenberg said users can set up their own test bed using the spec and other open source benchmarking tools such as DBench or Iometer. "You just need to ensure that you use a hardware platform that doesn't bottleneck the SSDs. We recommend a server motherboard," he said. "You must also be knowledgeable enough to write [a] script for it."
The SNIA is currently also working on application-specific specifications that will allow SSDs to be tested under loads for specific tasks. For example, SSDs could be tested for their performance in PC environments running Windows 7 or in server environments running Oracle software. "But, that's a ways off. The important thing for us was to get something out there to test drives with and compare performance," Wassenberg said.
[Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld.]
This story, "Enterprise SSD testing spec released" was originally published by Computerworld.