StorPool Now Supports Microsoft Cluster Shared Volumes

StorPool is now expanding its support for Windows Server users by launching a new feature: Cluster Shared Volumes (CSV) support. Windows Server has a significant foothold across many enterprises, cloud providers, and private cloud builders. As many StorPool customers use Hyper-V, we are expanding our capabilities to better serve this ecosystem.

The new features, tailored to Microsoft users, enable them to use:

  • Cluster Shared Volumes (CSV) for Hyper-V
  • Windows Server Failover Cluster for Microsoft SQL Server
  • Scale-Out File Server
  • Multi-stack deployments (mixing Microsoft and other hypervisors, like VMware and KVM)

Cluster Shared Volumes for Hyper-V

Hyper-V uses shared volumes to store virtual machine images, and multiple Hyper-V hosts can access the same shared volumes. Because all the hypervisors see the same shared storage, you can easily migrate virtual machines between hosts. Shared storage is also the foundation of common failover scenarios, in which a virtual machine is restarted on another host when its current host fails.

One of the considerations when deploying a Hyper-V server farm is choosing a suitable storage solution. Capacity is only part of the decision: the storage system also needs to deliver sufficient I/O and I/O density per TB to meet the needs of many virtual machines. Furthermore, the storage system should provide redundancy, space-saving features, data integrity features, and more.

With StorPool’s Cluster Shared Volumes support, you can migrate virtual machines between servers easily and without the risk of downtime or data loss. Take advantage of the Windows Failover Clustering feature in combination with StorPool’s leading storage performance. In this way, you can accelerate your applications and achieve high availability while simplifying Hyper-V management.

If you are using a Windows Server Hyper-V cluster with failover, you can now use StorPool as primary block storage and get all the benefits of a fast, scalable, and highly available software-defined storage solution.
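
For readers who script their clusters, the CSV step itself is small. The sketch below is only an illustration, not StorPool documentation: it assumes a working Windows Server Failover Cluster whose nodes already see a StorPool-backed volume, and it drives the standard FailoverClusters PowerShell cmdlets from Python; the resource name "Cluster Disk 1" is a placeholder.

    # Minimal sketch: promote a clustered disk to a Cluster Shared Volume.
    # Assumes it runs on a cluster node where the StorPool volume is already
    # presented to all nodes; the disk resource name is a placeholder.
    import subprocess

    def run_ps(command: str) -> str:
        """Run a PowerShell command and return its output."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # 1. List disks visible to the cluster but not yet added as resources.
    print(run_ps("Get-ClusterAvailableDisk | Format-Table -AutoSize"))

    # 2. Add the available disks as clustered disk resources.
    run_ps("Get-ClusterAvailableDisk | Add-ClusterDisk")

    # 3. Promote the new cluster disk to a Cluster Shared Volume so every
    #    Hyper-V host in the cluster can place virtual machines on it.
    run_ps('Add-ClusterSharedVolume -Name "Cluster Disk 1"')

    # 4. Confirm the CSV and its mount point under C:\ClusterStorage.
    print(run_ps("Get-ClusterSharedVolume | Format-Table -AutoSize"))

Once the volume appears under C:\ClusterStorage, every host in the cluster can store virtual machine files on it.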

Windows Server Failover Cluster and Failover Cluster Instances

StorPool now also supports Windows Server Failover Cluster. A Windows Server Failover Cluster (WSFC) is a group of independent servers that work together to increase the availability of the services and applications hosted on the cluster. With the new feature, you can use StorPool as storage for Failover Cluster Instances (FCI), such as highly available deployments of Microsoft SQL Server, file servers, and other services.
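
To make the "group of independent servers" idea concrete, the rough sketch below forms a two-node cluster with the standard Test-Cluster and New-Cluster cmdlets, driven from Python; the node names, cluster name, and address are placeholders, and the FCI itself (for example, SQL Server) is installed on top of the resulting cluster afterwards.

    # Minimal sketch: validate two nodes and form a Windows Server Failover
    # Cluster from them. Node names, cluster name, and IP are placeholders.
    import subprocess

    def run_ps(command: str) -> None:
        """Run a PowerShell command and raise an error if it fails."""
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

    # 1. Run the built-in validation tests against the prospective nodes.
    run_ps('Test-Cluster -Node "node1","node2"')

    # 2. Form the cluster; clustered roles such as a SQL Server FCI are then
    #    installed on top of it, using StorPool-backed clustered disks.
    run_ps('New-Cluster -Name "FCI-CL" -Node "node1","node2" -StaticAddress "10.0.0.50"')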

Scale-Out File Server

A Scale-Out File Server makes the same file share simultaneously accessible through all nodes in the cluster. It increases bandwidth, provides transparent failover, eliminates downtime, and load-balances clients across all nodes. Scale-Out File Servers are often used to store data for server applications that require high bandwidth, such as Internet Information Services (IIS), Hyper-V, and Microsoft SQL Server.
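
As a rough illustration with placeholder names and paths, the sketch below shows how such a share is commonly published on top of a Cluster Shared Volume: it creates the Scale-Out File Server role and a continuously available SMB share with the standard cmdlets, again driven from Python.

    # Minimal sketch: expose a folder on a Cluster Shared Volume as a
    # continuously available SMB share behind a Scale-Out File Server role.
    # The role name, share name, path, and account below are placeholders.
    import subprocess

    def run_ps(command: str) -> None:
        """Run a PowerShell command on a cluster node, failing loudly on errors."""
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

    # 1. Create the Scale-Out File Server role on the existing cluster.
    run_ps('Add-ClusterScaleOutFileServerRole -Name "SOFS"')

    # 2. Create a folder on a Cluster Shared Volume to hold the share.
    run_ps(r'New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VMS" -Force')

    # 3. Publish it as a continuously available share so clients keep their
    #    handles open across node failovers.
    run_ps(r'New-SmbShare -Name "VMS" -Path "C:\ClusterStorage\Volume1\VMS" '
           r'-FullAccess "CONTOSO\Hyper-V Admins" -ContinuouslyAvailable $true')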

The extreme performance and distributed architecture of StorPool meet the high requirements of large Scale-Out File Server deployments, eliminating the storage-layer bottleneck usually associated with traditional centralized SAN storage.

Multi-stack Deployments

Last but not least, StorPool has multi-stack support, which allows one storage system to provide shared storage to multiple stacks – Windows, VMware, KVM, Kubernetes (K8S), bare metal, and more, without the need to partition the storage capacity. This allows StorPool’s customers to shift workloads between several IT stacks easily and to achieve unprecedented levels of agility and freedom.

StorPool storage is a perfect fit for your Hyper-V virtualized environment and for building Microsoft Server Failover Clusters, as it provides excellent reliability, high availability, and unmatched performance.

Request a Solution

Building a cloud with Hyper-V? Get in touch with us and request a solution based on StorPool storage and Hyper-V.
