Choosing the right SSDs for your cloud: Datacenter SSDs vs. Consumer SSDs

Datacenter SSDs are not just faster, more expensive versions of desktop-grade devices. They have unique features that are critical for storage systems. Still, a number of companies use consumer-grade SSDs to power their business. Usually the story goes along the lines of “consumer SSDs are just as good as Datacenter SSDs, and they are cheaper”, or “we opened both SSDs, and the flash chips are the same”, or “we’ve been using consumer-grade SSDs for X years now, and nothing has happened.”

SSDs account for the majority of the hardware cost of a new storage solution, so it comes as no surprise that some companies try to cut costs here. That is fine for consumer use, but a wrong move for your business: the devices that store one of your most precious resources – your data (or your customers’ data) – are the wrong place to go “cheaper”.

Picking the right Datacenter/Enterprise SSDs for your use case is a science in itself, as their technical characteristics and prices also vary significantly. But you should definitely avoid storing business data on desktop SSDs – unless you do not need that data all that much, or losing your business to data loss is a gamble you fancy.


The differences between Datacenter SSDs and Consumer SSDs

Desktop/consumer SSDs differ from Datacenter/Enterprise SSDs in several ways.  To name the usual ones: endurance, throttling, latency, power-loss protection and price (duh, you get what you pay for!).

SSD endurance

SSD endurance is a relatively well-covered topic. The NAND flash used in SSDs sustains a finite number of writes before it wears out. Desktop-grade SSDs are designed for the light loads of typical desktop applications, and with such loads they can operate for many years. Storage systems, however, concentrate the load of many applications, so the load on each device is much higher. Under such load a desktop-grade SSD will fail in several months instead of years.
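The "years vs. months" claim above is easy to check with a back-of-envelope calculation from a drive's rated endurance (TBW). A minimal sketch, with all numbers being illustrative assumptions rather than the specs of any particular drive:

```python
# Rough SSD lifetime estimate from the drive's rated endurance (TBW)
# and the sustained write rate it will see. All numbers are
# illustrative assumptions, not specs of any particular drive.

def lifetime_years(rated_tbw, writes_mb_per_sec, write_amplification=2.0):
    """Years until rated endurance is exhausted at a constant write rate."""
    tb_written_per_year = writes_mb_per_sec * 86400 * 365 / 1e6  # MB/s -> TB/year
    return rated_tbw / (tb_written_per_year * write_amplification)

# A hypothetical 1 TB consumer drive rated for 600 TBW:
desktop_load = lifetime_years(600, writes_mb_per_sec=1)    # light desktop use
storage_load = lifetime_years(600, writes_mb_per_sec=50)   # busy storage node

print(f"desktop-style load: {desktop_load:.1f} years")
print(f"storage-node load:  {storage_load:.1f} years")
```

At 1 MB/s the drive lasts nearly a decade; at a modest 50 MB/s of concentrated storage-system load the same drive burns through its rated endurance in a couple of months.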

SSD throttling

SSD throttling is a rather unpleasant behavior, but not an availability/reliability issue. To optimize cost, desktop SSDs are designed with certain constraints. At low loads they have very good throughput, but this throughput cannot be maintained for prolonged periods of time. After a few minutes of continuous load, the throughput of a desktop SSD drops drastically. This rarely happens in normal desktop use, but in highly utilized storage systems some desktop-grade SSDs start performing even worse than spinning HDDs.
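Throttling is straightforward to demonstrate: write continuously and log per-interval throughput. A minimal sketch of such a test (the path and sizes are placeholders; a realistic run writes far more data for far longer than the defaults here):

```python
# Minimal sketch of a sustained-write test: write continuously and log
# per-interval throughput. On a throttling desktop SSD the later
# intervals drop sharply; a datacenter SSD stays roughly flat.
# Path and sizes are placeholders - point it at the device under test.
import os
import time

def sustained_write_profile(path, chunk_mb=4, interval_s=1.0, duration_s=10.0):
    """Return a list of per-interval write throughputs in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    profile = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            start, written_mb = time.monotonic(), 0
            while time.monotonic() - start < interval_s:
                os.write(fd, chunk)
                os.fsync(fd)            # force data out of the page cache
                written_mb += chunk_mb
            profile.append(written_mb / (time.monotonic() - start))  # MB/s
    finally:
        os.close(fd)
        os.unlink(path)
    return profile

# Example: profile = sustained_write_profile("/mnt/testdisk/burnfile")
```

In practice a tool such as fio is used for this kind of test; the sketch only shows the measurement idea.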

SSD latency

Low latency is among the most important determinants of application performance, yet it is better to have slow applications than to lose data. Which leads us to the one feature of datacenter SSDs that is not well understood, yet critical: power-loss protection.
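The two topics are linked: a PLP drive can acknowledge a flushed write as soon as it lands in the capacitor-backed DRAM buffer, while a drive without PLP must push it all the way to NAND. A minimal sketch of how to measure this per-write fsync latency (path is a placeholder):

```python
# Sketch: measure per-write fsync latency on a device under test.
# A PLP drive can acknowledge the flush from its capacitor-backed DRAM
# buffer; without PLP the write must reach NAND, which is much slower.
import os
import time
import statistics

def fsync_latencies(path, n=200, size=4096):
    """Return (median, worst) fsync latency in microseconds over n writes."""
    data = os.urandom(size)
    lat = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        for _ in range(n):
            t0 = time.perf_counter()
            os.write(fd, data)
            os.fsync(fd)
            lat.append((time.perf_counter() - t0) * 1e6)  # microseconds
    finally:
        os.close(fd)
        os.unlink(path)
    return statistics.median(lat), max(lat)

# Example: median_us, worst_us = fsync_latencies("/mnt/testdisk/latfile")
```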


History of power-loss protected write-back cache/buffering

Standard HDDs don’t have a power-loss protected (PLP) write buffer, so RAID controllers with a Battery Backup Unit (BBU) or supercapacitor are commonly used to get low write latency on HDDs. With such a controller, a write operation completes as soon as it is stored in the DRAM buffer on the controller. The performance difference between a bare HDD and the same HDD behind a RAID controller is substantial, mainly due to the low write latency in the RAID controller case.

SAN and NAS appliances all have a power-loss protected buffer, usually in a specialized hardware component (NVRAM), and sometimes at system level (UPS in the rack with tight integration with the storage controllers).

Later, in hyperscale designs, Google used an integrated on-board battery in each server. This is the same concept as the system-level power-loss protection used by SANs, but integrated at the level of each individual server. This architecture – essentially a UPS per server – is not popular outside of very specialized hyperscale designs.

At present, flash SSDs come in two main flavors:

  1. “Consumer”, “Desktop” or “Client”-grade SSDs, which don’t have a power-loss protected write buffer
  2. “Datacenter”, “Server” or “Enterprise”-grade SSDs, which have a power-loss protected write buffer


There is a marketing distinction between “datacenter” (aka server) grade and “enterprise” grade SSDs, which is not relevant for the power-loss protection discussion.

The presence of power-loss protection in an SSD is completely independent of the media type (MLC, 3D TLC, QLC, etc.) – there are QLC drives with power-loss protection. It is also independent of the SSD’s interface – SATA, SAS or NVMe; there are NVMe drives without power-loss protection.

A desktop SSD and a datacenter SSD are built from different components. The datacenter SSD has an SSD controller chip with power-loss protection (PLP) functionality, a DRAM buffer (usually a separate chip) and a large capacitor bank. The capacitor bank is required so that on a power failure all data in flight in the DRAM buffer can make its way onto stable (NAND) media.
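The size of that capacitor bank follows directly from the buffer it has to drain. A back-of-envelope sizing, where every figure is an illustrative assumption rather than any vendor's actual design:

```python
# Back-of-envelope sizing of a PLP capacitor bank: it must hold enough
# energy to flush the DRAM write buffer to NAND after power is cut.
# All figures are illustrative assumptions, not any vendor's design.

buffer_mb      = 512        # DRAM write buffer to drain
flush_mb_per_s = 1000       # sustained NAND program bandwidth
power_w        = 8          # drive power draw while flushing

flush_time_s = buffer_mb / flush_mb_per_s          # 0.512 s
energy_j     = power_w * flush_time_s              # ~4.1 J

# Usable energy of a capacitor bank discharged from v_hi down to v_lo:
#   E = 1/2 * C * (v_hi^2 - v_lo^2)
v_hi, v_lo = 35.0, 10.0
cap_f = 2 * energy_j / (v_hi ** 2 - v_lo ** 2)     # ~7.3 mF

print(f"flush time: {flush_time_s * 1000:.0f} ms, "
      f"energy: {energy_j:.1f} J, capacitance: {cap_f * 1000:.1f} mF")
```

A few millifarads of holdup capacitance is physically large, which is one reason this hardware only appears on drives designed and priced for the datacenter.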


Requirements of various systems for power-loss protection in SSDs

The discussion on whether saving cost on SSDs is worth it is mostly meaningful for software-defined stacks: software-defined datacenter (SDDC), software-defined storage (SDS), converged and hyper-converged systems/infrastructure (HCI). This is so because, as mentioned above, traditional SAN/NAS disk arrays have power-loss protection. As the IT industry moves towards newer, software-defined storage technologies, it is the software that manages standard, off-the-shelf components, including SSDs. So it is becoming increasingly important that the software uses the right hardware platform for the given use case.

Unsurprisingly, most products that deal with mission-critical data understand this and require PLP. Here are several examples, from databases and file systems to storage software solutions.

Databases + SSD

An example with PostgreSQL:

> One aspect of reliable operation is that all data recorded by a committed transaction should be stored in a nonvolatile area that is safe from power loss, operating system failure, and hardware failure (except failure of the nonvolatile area itself, of course).

> Consumer-grade IDE and SATA drives are particularly likely to have write-back caches that will not survive a power failure. Many solid-state drives (SSD) also have volatile write-back caches.

Filesystems + SSD

Widely used journaling file systems such as ext4 and XFS also require persistence of writes to the journal. If a committed write to the journal does not survive a power failure, they will lose or corrupt data.


ZFS SLOG (ZIL device) requires every single committed write to be persisted. If it isn’t, ZFS will lose data.


VMware vSAN + SSD

> Storage Device Requirements
> All capacity devices, drivers, and firmware versions in your vSAN configuration must be certified and listed in the vSAN section of the VMware Compatibility Guide.
> Compatibility. The model of the PCIe or SSD devices must be listed in the vSAN section of the VMware Compatibility Guide.
> Flash device model that is listed in the VMware Compatibility Guide.

To get an SSD on VMware’s list of validated models, it needs to have the following mandatory features: “Drive Performance, Drive Reliability, Queue Depth, SAS Log Pages, Surprise Power Removal Protection, Write Cache, Write Failure Notification.”

Ceph + SSD

> Lack of proper power-loss protection will either result in extremely poor performance or will not ensure proper data consistency. Ceph assumes persistent writes on the journal device in both FileStore and BlueStore. If committed writes are lost, this may lead to a cluster-wide data loss event – and judging by blog posts, it has happened to a number of people.

StorPool + SSD

StorPool requires SSDs to have full power-loss protection and to pass a validation test, consisting of a long duration sustained write without significant performance degradation. Desktop SSDs usually fail both points.

The following are supported and typical SSDs in new StorPool systems:

– Samsung PM883 SATA – ~$120 per raw TB

– Samsung PM983 NVMe – ~$120 per raw TB

– Intel S4510 SATA – ~$160 per raw TB

– Intel P4510 NVMe – ~$180 per raw TB

– Micron 5200 PRO SATA  – ~$140 per raw TB

– Micron 9300 PRO NVMe – ~$166 per raw TB

(price references are as of August 2019)



Power-loss protection functionality in SSDs is required for the correct operation of many storage systems. If you don’t have it, many systems will silently lose data, because they assume you have it. Alternatively, some of these systems can be configured to work safely on desktop SSDs, but they will do so very slowly and will also wear the SSD heavily.

Obviously there is a performance vs. cost vs. reliability trade-off. Some niche workloads may be fine on desktop SSDs. For example, a very light workload may be perfectly OK with waiting tens of milliseconds for writes to reach the NAND media. Or a dataset may not care about availability or integrity at all, so writes are never flushed to the NAND media. But a mixed workload from a virtualized environment must use the appropriate datacenter/enterprise-grade SSDs.

The best-in-class software-defined storage (SDS) solutions can leverage the most cost-effective datacenter-grade SATA SSDs with endurance of about 1 DWPD (Drive Writes Per Day). Less capable SDS solutions require SAS SSDs with 3+ DWPD. In either case, consult your storage (software) vendor on what type of hardware fits your needs and will deliver the expected results for your business case.
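The DWPD a cluster actually needs can be estimated from its write rate, raw capacity and the SDS layer's write amplification. A minimal sketch with illustrative numbers (check your real write rates and your vendor's write-amplification figures):

```python
# Sketch: estimate which endurance class (DWPD) a cluster workload
# needs. All numbers are illustrative assumptions.

def required_dwpd(cluster_writes_mb_s, raw_capacity_tb, write_amplification):
    """Drive Writes Per Day needed across the raw capacity of the pool."""
    written_tb_per_day = cluster_writes_mb_s * 86400 / 1e6 * write_amplification
    return written_tb_per_day / raw_capacity_tb

# 200 MB/s of client writes over 40 TB of raw flash:
efficient_sds = required_dwpd(200, 40, write_amplification=2)   # ~0.86 DWPD
chatty_sds    = required_dwpd(200, 40, write_amplification=6)   # ~2.6 DWPD
print(f"{efficient_sds:.2f} vs {chatty_sds:.2f} DWPD")
```

The same workload fits on 1 DWPD drives under an efficient SDS layer, but pushes into 3 DWPD territory under one with high write amplification, which is where the cost difference between drive classes comes from.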

If you have any comments or questions – ping us at

