For StorPool, it’s a never-ending mission to provide the best software-defined block storage on the market. We are really excited to be featured on Architecting IT. In this series of posts, you’ll learn more about StorPool’s technology, with a hands-on and in-depth look at how the distributed storage architecture works, how it performs, and how it integrates into an on-premises public cloud strategy.
In this post you’ll learn more about StorPool’s performance, quality of service (QoS) and monitoring.
The StorPool platform can be deployed in a dedicated storage configuration, or in a hyper-converged model that runs storage services and applications on the same infrastructure. To ensure storage performance is delivered predictably, StorPool uses Linux control groups (cgroups) to ring-fence CPU and memory resources for the StorPool processes. The main services started on a StorPool deployment include:
- beacon – this service advertises the availability of the node on which the service runs, while validating the availability of other visible nodes in a cluster.
- block – provides block initiator services to the local host.
- bridge – this service co-ordinates snapshots between multiple clusters.
- controller – provides statistics collection for the API.
- iscsi – manages devices exported to clients as iSCSI volumes.
- mgmt – this service manages requests from the CLI and API.
- nvmed – runs on all nodes with NVMe devices to co-ordinate device management.
- server – this service runs on all nodes that contribute storage to a cluster. On nodes with multiple disk devices, up to four server instances can run.
- stat – this service collects metrics on running systems, including CPU utilisation, memory utilisation, network and I/O statistics.
The screenshot in figure 1 shows the output for services running on one cluster node (in this case, PS09). In this instance, only one service is not running (reaffirm), which is deprecated in the version of StorPool being tested.
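The cgroup ring-fencing described above can also be verified from Linux itself, since every process records its cgroup membership in `/proc/<pid>/cgroup`. The sketch below parses that file format; the `storpool.slice` name is an illustrative assumption, not StorPool's actual cgroup naming.

```python
# Sketch: determine which cgroup a process belongs to by parsing its
# /proc/<pid>/cgroup content. The "storpool.slice" path below is an
# illustrative assumption, not StorPool's actual cgroup layout.

def parse_cgroup_file(text: str) -> dict:
    """Parse /proc/<pid>/cgroup content into {controller: path}.

    Each line has the form "hierarchy-id:controllers:path"; cgroup v2
    lines use an empty controller field ("0::/path").
    """
    memberships = {}
    for line in text.strip().splitlines():
        _, controllers, path = line.split(":", 2)
        key = controllers if controllers else "unified"  # cgroup v2
        memberships[key] = path
    return memberships

# Example content as it might appear for a ring-fenced service:
sample = "0::/storpool.slice/server\n"
print(parse_cgroup_file(sample))  # {'unified': '/storpool.slice/server'}
```

On a live node the same check is simply reading `/proc/<pid>/cgroup` for the service's PID and confirming the path sits under the dedicated slice.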
StorPool helpfully provides a tool to display the cgroup configuration that applies to the StorPool services. On this machine, each of the two CPU sockets has eight cores (16 threads), with services pinned to specific processors and memory reserved for the StorPool common services and the alloc services (mgmt, iscsi and bridge).
We can display the cgroup assignments for the StorPool services without having to look at the control group configuration. The output of the storpool_cg command is shown in figure 2.
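CPU assignments like these are expressed in the kernel's cpuset list format (for example `0-3,16-19`, as found in a cgroup's `cpuset.cpus` file). A minimal sketch of expanding such a string into individual CPU IDs, with illustrative values:

```python
# Sketch: expand a Linux cpuset list string (the format used in
# cpuset.cpus files and by CPU-pinning tools) into individual CPU IDs.
# The example values are illustrative, not taken from a real node.

def expand_cpuset(spec: str) -> list:
    """Expand "0-3,8,10-11" into [0, 1, 2, 3, 8, 10, 11]."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

# A dual-socket machine with eight cores (16 threads) per socket might
# dedicate one thread of each core on the first socket to storage:
print(expand_cpuset("0-3,16-19"))  # [0, 1, 2, 3, 16, 17, 18, 19]
```

Pinning threads this way keeps the storage services' cache and scheduler behaviour isolated from guest workloads, which is what makes hyper-converged deployments predictable.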
Read the full article on the Architecting IT blog!
Learn more about Architecting IT’s review of StorPool’s installation and configuration: