The global data storage market is expected to reach USD 144.76 billion by 2022, growing at a CAGR of 16.76% between 2016 and 2022, making it one of the fastest-growing technology segments worldwide. Most of this spend will go to cloud and storage software solutions.
As it does every year, StorPool reviews what happened throughout the year and releases its annual data storage market and infrastructure predictions. Here we summarize the most important trends and changes we saw in the data storage market in 2018 and list our predictions for 2019.
The widespread adoption of “the Cloud” is obvious. There is a lot of noise around “100% cloud” strategies, yet enterprise IT spending on cloud was just 19% in 2018 and is forecast to grow to 28% in 2022, according to Gartner. This figure includes SaaS offerings, so even in three to four years, less than a third of enterprise IT spending will go to the cloud.
That being said, hybrid cloud architectures, together with hybrid cloud storage, will pick up the pace in 2019. AWS and Azure have strong hybrid strategies, and there are a number of third-party vendors providing solutions to manage multi-cloud and hybrid cloud infrastructure and to provide underlying services such as storage and networking for the hybrid cloud.
With Amazon’s release of Outposts (now in preview), IT organizations, system integrators, and solution providers will be forced to reconsider their public and hybrid cloud strategies.
It has to be noted, though, that the data moving to the cloud is predominantly unstructured: videos, photos, and data from new-age cloud-native applications and SaaS. The majority of core data storage services are still kept locally, with burst capacity or backup/disaster recovery being the usual offload to the cloud.
For more demanding workloads and sensitive data, on-premises is still king. In other words, the future is hybrid: on-premises takes the lead for traditional workloads, with cloud storage as the backup option; for new-age workloads, the cloud is the natural first choice, and on-prem capacity is added when performance, scale, or regulatory demands kick in.
From legacy SANs to best-of-breed software-defined data storage
Five years ago, the adoption of modern “software-defined storage” (SDS) solutions capable of replacing a high-end SAN or all-flash array was in its early days. Now the SDS ecosystem has grown and matured.
2018 was the year when we saw mainstream enterprises finally initiate projects to replace traditional SAN solutions. The most common drivers are the need for agility, cost optimization, and performance improvements to meet changing and increasing business demands. We expect SDS adoption to have a spillover effect and gain majority market share over the next 3 to 5 years.
Infrastructure refresh cycles and performance complaints from customers/users are the top two triggers of this process. Investments in new-generation infrastructure software solutions are aimed at reducing vendor lock-in, achieving significant cost optimizations and accelerating application performance.
Use of SDS, especially at the high end of the performance spectrum (NVMe, NVMe-oF) and when it comes to automation through APIs and integrations, is the only meaningful way to differentiate public and private cloud services, especially at scale.
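To make the automation point concrete, here is a minimal sketch of what API-driven storage provisioning looks like. The endpoint, field names, and helper function are hypothetical, invented for illustration, and not StorPool's (or any vendor's) actual API:

```python
import json

# Hypothetical SDS management API. The endpoint and field names below
# are illustrative only, not any vendor's real interface.
API_URL = "https://sds.example.com/api/v1/volumes"

def make_volume_request(name, size_gb, replicas=3):
    """Build the JSON body for a hypothetical 'create volume' call."""
    return json.dumps({
        "name": name,
        "sizeGB": size_gb,
        "replicas": replicas,   # synchronous copies kept on distinct servers
        "tier": "nvme",         # placement hint: NVMe-backed pool
    })

# In a real deployment this body would be POSTed over HTTPS and the
# resulting volume attached to a hypervisor host, with no storage
# admin UI involved.
body = make_volume_request("vm-042-root", 100)
```

The point of such an API is that a cloud orchestrator can create, resize, and attach volumes programmatically, which is exactly what manual SAN zoning and LUN masking cannot offer at scale.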
Fibre Channel (FC) is dead
At this point, Fibre Channel (FC) is becoming an obsolete technology, and we see neither a financial nor a performance justification to deploy FC in your IT infrastructure stack. FC also adds complexity to an already complex environment, being a separate storage-only network.
In 2019, it makes sense to deploy a parallel 25G standard Ethernet network, instead of upgrading an existing Fibre Channel network. At scale, the cost of the Ethernet network is 3-5% of the whole project and a fraction of the cost of a Fibre Channel alternative.
100G is becoming the typical network connectivity for demanding environments.
NVMe-oF and NVMe/TCP will see gradually increasing adoption. At the low-latency end of the spectrum, they will still be considered the second-best option, after proprietary access protocols (with a storage driver in the initiator host).
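For illustration, attaching an NVMe/TCP volume on Linux needs only standard tooling. The commands below are a sketch using the nvme-cli package; the addresses and the subsystem NQN are example values, and they assume a target is already exported on the network:

```shell
# Discover subsystems offered by a TCP portal (example address/port)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to one of the discovered subsystems by its NQN (example NQN)
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2019-01.example:subsys1

# The namespace now appears as a regular local block device
nvme list
```

Compare this with standing up a Fibre Channel fabric for the same result: NVMe/TCP runs over the Ethernet network you already have.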
Next-gen storage media
Persistent memory in the form of DRAM-based NVDIMMs finally became widely available on the market in 2018. We expect next-gen storage media to gain wider adoption in 2019. Its primary use-case will still be as a cache in software-defined storage systems and database servers.
On a parallel track, Intel will release large capacity Optane-based NVDIMM devices, which they are promoting as a way to extend RAM to huge capacities, at low cost, through a process similar to swapping. The software stack to take full advantage of this new hardware capability will slowly come together in 2019.
There will be a small amount of genuine niche usage of persistent memory, where it is used as more than a very fast SSD.
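That "more than a very fast SSD" usage means load/store access to durable data, i.e. memory-mapping the device and storing bytes directly. A minimal Python sketch of the programming model, using an ordinary file as a stand-in for a real pmem device (which would be mapped from a DAX-capable device or filesystem):

```python
import mmap
import os
import tempfile

# A regular file stands in for a persistent-memory device; the
# application-level programming model (map, store, flush) is the same.
path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.truncate(4096)                # one page of simulated pmem

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[0:5] = b"hello"              # a plain store, no write() syscall
    pm.flush()                      # on real pmem: cache flush + fence
    pm.close()

with open(path, "rb") as f:
    assert f.read(5) == b"hello"    # the data is durable
```

On real hardware the flush step is what distinguishes persistent memory from DRAM: without it, stores may still sit in CPU caches when power is lost.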
Just as happened with SSDs and then flash-based NVMe drives, storage solutions will struggle to expose the high throughput and low latency of the new persistent-memory media to applications. So, as usual, be wary of marketing-defined storage slideware that stresses hyped buzzwords void of reasonable application.
Arm in the datacenter
Arm is (finally) being recognized as a serious potential alternative to the x86 server architecture for particular workloads. The main drivers here are cost optimization and breaking vendor lock-in.
Arm is still not fast enough to compete for general-purpose workloads, but in 2018 we saw the first Arm CPUs fast enough to be serious contenders for a solid piece of the server CPU market. The recently announced AWS instances, powered by Amazon’s custom Arm-based CPU and claiming up to 45% cost savings, will definitely pave the way to wider Arm adoption.
The prime use case for Arm servers in 2018 was “Arm test/dev”, which is self-explanatory. In 2019 we’ll see rising demand for Arm; however, adoption will still pick up slowly, as it requires a broader ecosystem to develop.
Throughput-driven, batch-processing workloads in the datacenter and small compute clusters on “the edge” are the two prime use-cases for Arm-based servers in 2019.
We’ve recently written a dedicated piece on Arm, which you can explore: Is it the right time for Arm in your Software-Defined Data Center?
The multi-core race
Intel and AMD are in a race to provide high core-count CPUs for servers in the datacenter and in HPC. AMD announced its 64-core EPYC 2 CPU with an overhauled architecture (9 dies per socket vs. EPYC’s 4 dies per socket). At the same time, Intel announced its Cascade Lake AP CPUs, which are essentially two Xeon Scalable dies on a single (rather large) chip, scaling up to 48 cores per socket. Both products represent a new level of per-socket compute density and will hit the market in 2019.
While good for the user, this is “business as usual” and not that exciting.
Global IT, data storage market and infrastructure changes
In last year’s predictions, we wrote that we expected a wave of consolidations in 2018. While consolidation is a natural process in the world of business, it also reflects the tectonic shifts happening in IT infrastructure.
There were several high-profile acquisitions. Most of them were not directly storage-related, yet they signaled the massive transformation of the IT infrastructure landscape. The more notable acquisitions were:
- Microsoft acquiring GitHub, interesting because of Microsoft’s increasing involvement with developers and the open-source and Linux communities.
- IBM buying Red Hat for $34 billion, which was a bit of a surprise. It will be interesting to see how the largest open-source company in the world “behaves” as it changes the color of its hat to blue.
Microsoft’s rumored bid for Mellanox is definitely worth mentioning here, as Azure is the only real contender to the undisputed cloud leader, Amazon AWS. A good, concise analysis is available here.
On the storage market, Tintri filed for bankruptcy in the US under Chapter 11 in the summer and was then acquired by DataDirect Networks (DDN).
Somewhat overlooked, because less shiny, was the myriad of regional deals between second- and third-tier cloud providers. Data centers and cloud providers shifted workloads to newly opened or acquired local facilities in bigger markets, so they can store and serve data locally.
As the year draws to an end, we take a look back and summarize what we achieved. StorPool is more than happy to have had a great year as a leading vendor in the software-defined storage market. We have set our targets for the upcoming months and are starting the new year with more passion and readiness for new challenges.
So what will 2019 bring us? How will the data storage market and infrastructure change? Stay tuned to find out.