StorPool Storage v19.2 – A Shift to Rapid Continuous Improvement

StorPool becomes the first storage vendor in the world to deliver storage software through a CI/CD (Continuous Integration / Continuous Delivery & Deployment) process.

With our latest release, we introduce a Continuous Improvement process that allows us to build, test, release, and deploy new versions of the software at short intervals – an approach also known as CI/CD. We take the latest builds of the software through a round of automated and manual testing, first on the dev/test clusters maintained by StorPool and then on the dev/test clusters operated by our customers. After ensuring that the software works as expected, we push each update to the production clusters of our customers (the sketch below illustrates this staged gating).
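
To make the staged gating concrete, here is a minimal Python sketch of a rollout gate; the stage names, cluster names, and test hook are hypothetical illustrations, not StorPool's actual pipeline tooling.

```python
# Minimal sketch of a staged rollout gate (illustrative only; the stage
# and cluster names and the test hook are hypothetical, not StorPool tooling).

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Stage:
    name: str
    clusters: List[str]
    passed: Callable[[str], bool]  # test hook, run against each cluster


def roll_out(build: str, stages: List[Stage]) -> bool:
    """Promote a build stage by stage; stop at the first failure."""
    for stage in stages:
        for cluster in stage.clusters:
            if not stage.passed(cluster):
                print(f"{build} failed on {stage.name}/{cluster}; halting rollout")
                return False
    print(f"{build} cleared all stages; pushing to production clusters")
    return True


# Vendor dev/test clusters are gated first, then customer dev/test clusters.
stages = [
    Stage("vendor-devtest", ["dev-1", "dev-2"], passed=lambda c: True),
    Stage("customer-devtest", ["cust-a", "cust-b"], passed=lambda c: True),
]
roll_out("storpool-19.2-build-123", stages)
```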

This approach enabled us to push more than 30 updates of StorPool Storage to production during the past year – an achievement typically associated with leading IaaS, PaaS, and SaaS vendors. As a result, the primary storage systems of our customers keep improving. Many customers have also added secondary storage clusters or sites that enable backup and disaster recovery scenarios. We respond to bugs and threats to their environments faster than anyone else on the market.

You can review all the improvements implemented during the past year here – https://kb.storpool.slm.dev/storpool_changelogs/storpool_changelog_18.02-19.01.html#

Read on to get an overview of the new capabilities added to StorPool Storage in 2020.

New Capabilities

Reliability 

  • Maintenance mode – Automates the checks needed to stop or start storage services without impacting user workloads. Administrators can enable maintenance mode on a per-node basis, optionally for a set period. A node transitions to maintenance mode only when the cluster has sufficient redundancy, the node has no running server instances, and the storage system is not rebuilding or rebalancing (see the sketch after this list). Administrators can also trigger maintenance mode on a per-cluster basis to sync context between the customer and the StorPool support team.
  • In-Server Disk Tester – Automatically runs a set of reliability checks on any drive that fails an Input/Output operation, as long as the drive is still visible to the operating system. If the drive passes all tests, StorPool returns it to the cluster. Administrators can also trigger the disk tester manually to test drive reliability proactively (a sketch of this flow follows the list).
  • Data-at-Rest Encryption – On nodes that use only self-encrypting drives, encryption of data at rest can be enabled during deployment.
  • Public Knowledge Base – We introduced a public knowledge base, available at https://kb.storpool.slm.dev.
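
As a rough illustration of the maintenance-mode preconditions described above, here is a minimal Python sketch; the NodeState and ClusterState fields are assumptions for the example, not StorPool's actual API.

```python
# Sketch of the maintenance-mode preconditions (illustrative; the NodeState
# and ClusterState fields are assumptions, not the actual StorPool API).

from dataclasses import dataclass


@dataclass
class NodeState:
    running_server_instances: int


@dataclass
class ClusterState:
    redundancy_ok: bool   # enough replicas remain if this node goes down
    rebuilding: bool
    rebalancing: bool


def can_enter_maintenance(node: NodeState, cluster: ClusterState) -> bool:
    """All checks must pass before a node transitions to maintenance mode."""
    return (
        cluster.redundancy_ok
        and node.running_server_instances == 0
        and not cluster.rebuilding
        and not cluster.rebalancing
    )


print(can_enter_maintenance(NodeState(0), ClusterState(True, False, False)))  # True
```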
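
Similarly, here is a minimal sketch of the disk-tester decision flow; the function and type names are hypothetical stand-ins, not StorPool's implementation.

```python
# Sketch of the disk-tester decision flow (illustrative; the names are
# hypothetical stand-ins, not StorPool's implementation).

def handle_io_failure(drive: str, visible_to_os: bool, passes_tests) -> str:
    """On an I/O failure, test the drive and decide its fate."""
    if not visible_to_os:
        return "ejected"              # drive gone from the OS; nothing to test
    if passes_tests(drive):
        return "returned-to-cluster"  # all reliability checks passed
    return "kept-out"                 # failed checks; flag for replacement


print(handle_io_failure("/dev/sdb", True, passes_tests=lambda d: True))
```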

Business Continuity

  • StorPool VolumeCare – Builds on the snapshot and clone features and on the many-to-one and one-to-many asynchronous replication introduced in 2016. A few notes about this new capability:
    • Administrators can now create and manage consistent, atomic snapshots of volumes by defining retention policies, and can store the snapshots both in their primary StorPool Storage systems and in remote clusters.
    • Detects when multiple volumes belong to the same virtual machine (based on tags added by the Cloud Management Platform) and creates crash-consistent snapshots of the whole virtual machine (see the grouping sketch after this list).
    • Enables administrators to revert a virtual machine to a previous state by virtual machine ID instead of volume by volume.
    • Supports multi-cluster and multi-site deployments – administrators can use backup clusters to store snapshots instead of or in addition to storing them in the primary cluster. In this configuration, the service runs in each of the clusters.
    • Enables backing up multiple primary StorPool Storage systems to a single backup cluster, provided the sites have aligned retention policies. The StorPool support team proactively ensures that the retention policies do not contradict each other.
    • After the initial transfer of a given volume or virtual machine snapshot to a backup cluster, subsequent replications are incremental – StorPool sends only new or changed data to the backup cluster (a sketch of this delta logic follows the list).
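
To make the VM-level grouping concrete, here is a minimal Python sketch; the "vm-id" tag key and the volume structure are assumptions for the example, not the actual tag format used by any particular Cloud Management Platform.

```python
# Sketch of grouping volumes into VM-level snapshot sets by CMP tag
# (illustrative; the "vm-id" tag key is an assumption, not a real format).

from collections import defaultdict

volumes = [
    {"name": "vm101-root", "tags": {"vm-id": "vm101"}},
    {"name": "vm101-data", "tags": {"vm-id": "vm101"}},
    {"name": "vm202-root", "tags": {"vm-id": "vm202"}},
]

by_vm = defaultdict(list)
for vol in volumes:
    by_vm[vol["tags"]["vm-id"]].append(vol["name"])

# Each group would be snapshotted atomically so the VM stays crash-consistent.
for vm_id, vol_names in by_vm.items():
    print(f"atomic snapshot group for {vm_id}: {vol_names}")
```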
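
And a minimal sketch of the incremental-transfer idea: after the initial full copy, only blocks that differ from the last replicated snapshot are sent. The per-block hashing here is a generic illustration, not StorPool's actual replication protocol.

```python
# Generic illustration of incremental replication: compare per-block
# fingerprints against the last replicated snapshot and send only the
# changed blocks. (A stand-in, not StorPool's actual wire protocol.)

import hashlib
from typing import List


def block_hashes(blocks: List[bytes]) -> List[str]:
    return [hashlib.sha256(b).hexdigest() for b in blocks]


def changed_blocks(prev: List[bytes], curr: List[bytes]) -> List[int]:
    """Indices of blocks that must be sent to the backup cluster."""
    prev_h, curr_h = block_hashes(prev), block_hashes(curr)
    return [i for i, h in enumerate(curr_h) if i >= len(prev_h) or h != prev_h[i]]


prev_snapshot = [b"A" * 4096, b"B" * 4096]
curr_snapshot = [b"A" * 4096, b"X" * 4096, b"C" * 4096]
print(changed_blocks(prev_snapshot, curr_snapshot))  # -> [1, 2]
```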

Management

  • Major Installation Package System Changes – Most core installation tools have been reimplemented, and service files are now delivered as rpm/deb packages.
  • Implemented Infrastructure that Provides Access to StorPool Packages – Customers can now complete deployments or StorPool Storage updates independently. We recommend engaging StorPool support in these activities to ensure successful completion and optimal performance of the storage system.
  • Monitoring Support for IPv6-only Environments – Nodes with IPv6-only Internet connectivity can now connect to the StorPool Hosted Monitoring System and send the comprehensive list of metrics typically collected from StorPool clusters. Metrics include cluster status (disks, services, running rebuild/rebalancing tasks, iSCSI configuration, etc.), host status (control groups, kernels, connectivity, etc.), and performance metrics (latency, IOPS, and throughput per disk/volume/node, CPU usage, etc.). For more details about the metrics we collect to deliver StorPool as a fully hands-off solution, see the knowledge base article on data collected by the StorPool monitoring system.
  • Auto-interface Configurator – Extended to also cover iSCSI configuration. The configurator automatically detects the operating system and the type of interface configuration used for the storage system and iSCSI and, depending on the configuration type (exclusive interfaces or a bond), prints the interface configuration to the console. It automatically creates missing interface configurations and does not replace existing ones unless explicitly instructed to do so (see the sketch after this list).
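
A minimal sketch of that idempotent behavior, assuming interface configs live as plain files; the path and file contents are illustrative assumptions, not the real tool's layout.

```python
# Sketch of the configurator's idempotent behavior: create missing interface
# configs, never overwrite existing ones unless forced. (The path and file
# contents are illustrative assumptions, not the actual tool's layout.)

from pathlib import Path


def write_iface_config(path: Path, contents: str, force: bool = False) -> str:
    if path.exists() and not force:
        return f"kept existing {path}"
    path.write_text(contents)
    return f"wrote {path}"


cfg = Path("/tmp/ifcfg-storpool0")  # hypothetical location for this sketch
print(write_iface_config(cfg, "BOOTPROTO=none\nMTU=9000\n"))
print(write_iface_config(cfg, "BOOTPROTO=none\nMTU=9000\n"))  # kept existing
```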

Hardware Compatibility

Software Compatibility

  • Added support for RHEL/CentOS 8.

Conclusion

In addition to the new capabilities added during 2020, we have updated many of the existing capabilities in the product. We will be sharing more about these improvements in the coming months.
