For StorPool, it’s a never-ending mission to provide the best software-defined block storage on the market. We are really excited to be featured on Architecting IT. In this series of posts, you’ll learn more about StorPool’s technology, with a hands-on and in-depth look at how the distributed storage architecture works, how it performs, and how it integrates into an on-premises public cloud strategy.
In this post, we will look at Kubernetes and provisioning storage to containerised applications.
Containerisation gained significant popularity with the development of Docker and the framework that was built up around the creation and deployment of application containers. Kubernetes has now surpassed Docker and has become the de facto standard for container-based application deployments.
As with many new and emerging technologies, storage tends to lag behind in development. We saw this problem with both Docker and OpenStack, where applications initially started out stateless and transitioned to stateful workloads as the platforms matured. Kubernetes is no different in that the storage components have taken time to evolve to a useful degree of maturity. However, modern containerised applications demand persistent storage and can’t offer enterprise-class capabilities without it.
Container Storage Interface
The Kubernetes community has addressed the management of persistent storage through the Container Storage Interface, or CSI. CSI provides a pluggable framework that lets vendors integrate their storage solutions without having to update the base Kubernetes platform. It also allows multiple vendors to be supported side by side, and features and functionality to be added as new versions of Kubernetes are released.
CSI reached general availability with Kubernetes v1.13, with the GA announcement published on 15th January 2019. The current release of the CSI specification is v1.5.0. More details can be found here.
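To make the pluggable model concrete, here is a minimal sketch of how a CSI driver typically surfaces to users: a StorageClass names the vendor’s provisioner, and applications request storage through an ordinary PersistentVolumeClaim that references that class. The provisioner and class names below are placeholders, not the identifiers of any specific driver.

```yaml
# Minimal illustration of the CSI pluggable model (names are placeholders).
# The StorageClass points Kubernetes at a vendor-supplied CSI provisioner;
# no change to the base platform is needed to add or swap drivers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi-fast
provisioner: example.csi.vendor.com   # hypothetical CSI driver name
parameters:
  tier: fast                          # driver-specific options go here
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
# Applications request storage through a normal PersistentVolumeClaim that
# references the class; the CSI driver provisions the volume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-csi-fast
  resources:
    requests:
      storage: 10Gi
```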
CSI and StorPool
StorPool Storage provides block storage to Kubernetes clusters. As we examined in previous posts, a StorPool cluster can run in HCI mode, where each node both contributes and consumes storage, or in a client-only mode, where a node consumes storage but doesn’t contribute any.
StorPool Review – Part 1 – Installation & Configuration
StorPool Review – Part 2 – Performance, Quality of Service (QoS) and Monitoring
Both client and storage nodes can form part of a Kubernetes cluster, enabling, for example, a 1:1 relationship between StorPool storage cluster nodes and Kubernetes nodes, or a Kubernetes cluster with only client nodes and storage delivered from elsewhere. A StorPool cluster can also serve multiple Kubernetes clusters, so any combination of client and storage node configurations is supported, subject to the requirements (listed here) that describe the specific installation process and pre-requisites.
StorPool requires an additional management process to run on each Kubernetes node running the kubelet, as well as the deployment of the CSI plugins. The process is straightforward, so it is not documented here.
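Once the management process and CSI plugins are in place, consuming StorPool volumes looks like any other dynamically provisioned Kubernetes storage. The sketch below assumes a StorageClass called storpool-standard has been created for the StorPool CSI provisioner (the class name is illustrative, not StorPool’s documented default).

```yaml
# Sketch of a stateful workload consuming a StorPool-backed volume.
# Assumes a StorageClass "storpool-standard" exists for the StorPool CSI
# driver; the class name is illustrative, not StorPool's documented default.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storpool-standard
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pg
spec:
  containers:
    - name: postgres
      image: postgres:14
      env:
        - name: POSTGRES_PASSWORD
          value: example          # illustration only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pg-data        # the claim provisioned above
```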
StorPool, CSI and iSCSI
StorPool supports a second deployment model using iSCSI. As detailed in post #3, StorPool enables access to LUNs for non-Linux hosts via iSCSI. This mechanism also provides the ability for Kubernetes clusters not running StorPool data services to access a StorPool cluster.
The use of the iSCSI model is helpful when, for example, scaling workloads past the capability of a single cluster or deploying StorPool as a dedicated storage solution.
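For orientation only, the sketch below shows what statically provisioned iSCSI storage can look like using Kubernetes’ built-in iSCSI volume type. The target portal, IQN and LUN are placeholders, and a production StorPool deployment would normally rely on its CSI driver rather than hand-written PersistentVolume objects.

```yaml
# Purely illustrative: a statically provisioned PersistentVolume using
# Kubernetes' built-in iSCSI support. The portal, IQN and LUN are
# placeholders; a StorPool deployment would typically let the CSI driver
# manage this mapping instead of hand-written PV objects.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: 192.0.2.10:3260          # example address only
    iqn: iqn.2023-01.com.example:target0   # placeholder IQN
    lun: 0
    fsType: ext4
    readOnly: false
```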
Learn more about testing StorPool & CSI, creating volumes and Kubernetes performance:
Read the full article on the Architecting IT blog!
White Paper: Persistent Storage for scalable bare metal Kubernetes
What are the benefits of running Kubernetes on a bare-metal cloud, and how do you ensure reliable persistent storage for your bare-metal Kubernetes clusters? Running cloud-native applications on bare-metal clouds eliminates the virtualization layer. The applications are deployed in containers and run directly on the bare metal, storing data on the persistent storage layer. The result is radical simplification and improved data reliability.
In this White Paper, you will find a step-by-step guide on how to run persistent storage with scalable bare-metal Kubernetes.