StorPool Storage announces a new version of its software – StorPool v19.4. We continue to deliver next-generation, customer-centric support, and the latest release offers improved management and monitoring capabilities for the mission-critical StorPool Storage software. Learn more below about the latest updates and new features.
In 2021, we delivered 39 updates in total, focused on enhanced support and on new management and monitoring features and capabilities designed for cloud infrastructure running diverse, mission-critical workloads.
StorPool v19.4 Overview – changes and updates
In the second half of 2021, the StorPool team delivered 15 updates to StorPool Storage that help our customers manage their infrastructure and monitor its performance more effectively. The new version, StorPool v19.4, offers better agility and reliability, updated hardware and software compatibility, management and monitoring changes, and improvements in the business continuity features. We’ve also updated our OpenNebula add-on and added support for OpenNebula version 6.2. StorPool has deep integrations with various technologies and cloud management systems in the Linux stack – OpenNebula, OpenStack, CloudStack, OnApp, Kubernetes, etc. We develop, build, test, release, and deploy new versions of StorPool Storage at short intervals in order to provide enhanced storage services to our customers.
“For us, 2021 was a challenging year. Our customers’ needs are changing, and we are changing our product based on what they are trying to achieve. That is what distinguishes us from all other storage systems. In 2021 we’ve made 39 updates to our platform, focused entirely on improving the user experience and building the storage solution of the future…”, says Alex Ivanov, Product Lead at StorPool Storage.
Read on to get an overview of the more exciting changes to the StorPool Storage software. You can find a complete list of all the improvements added during the second half of 2021 here.
- Support for Up to 16 Server Instances per Node – In high-performance use cases, this feature increases the storage system’s linear scalability, ensuring more I/O requests can be processed while maintaining extremely low latency. In addition, it enables greater per-node density for next-generation all-NVMe deployments based on the PCIe Gen 4 specifications.
- Metadata Handling V2 – updates the internal metadata structure for more efficient metadata operations. The upgrade provides significantly improved flexibility for clusters with thousands of volumes/snapshots and chains of volumes or snapshots with sizes larger than 1 TiB. In addition, it enhances relocator performance, ensures more efficient object use for large snapshots, and improves snapshot delete times. Learn more here.
- Reliability and Stability Improvements – multiple fixes around storpool_iscsi, storpool_bridge, and storpool_mgmt services.
- Improved handling of TRIM commands for Windows-based iSCSI initiators
- Fixes the Balancer to Prevent Aborts – avoids a race condition during rebalancing operations and system volume aggregation with Metadata Handling V2 enabled.
- Fixes the Balancer to Correctly Handle Volume Overrides – the balancer now correctly allocates objects when changing overrides during each rebalancing process. Learn more about volume overrides here.
- StorPool Bridge Performance Optimization – now uses BIC-TCP for high-latency links to achieve higher throughput for snapshot replication from primary storage systems to secondary storage systems.
- StorPool VolumeCare
- Policy-level Backup Locations – Administrators can now set remote locations per policy in the primary cluster. This feature is helpful for cases where a primary cluster needs to replicate snapshots to different StorPool secondary storage systems for different sets of customer workloads.
- Minutes-Hours-Days-Months Mode – This mode is available only in primary, backup, or primary_backup modes. It has four parameters – ‘minute_interval’, ‘minute_count’, ‘days’, and ‘months’. In this mode, StorPool creates snapshots at every ‘minute_interval’ minutes. The service will keep a number of these snapshots specified using the ‘minute_count’ parameter. Snapshots from the last 24 hours are reduced to one per hour in the primary cluster. Backup clusters keep daily snapshots for ‘days’ days. Older snapshots are reduced to one per month. Snapshots older than ‘months’ months are deleted. This setting enables maintaining both short-term snapshots with high granularity on-premises and long-term snapshots off-site, using a single policy.
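The minutes-hours-days-months retention described above can be sketched as a thinning function over snapshot timestamps. The following is an illustrative approximation in Python, not StorPool VolumeCare's implementation: the function name, the merged single-cluster view of the primary/backup tiers, and the coarse 30-day month cutoff are all assumptions for the sake of the sketch.

```python
from datetime import datetime, timedelta

def thin_snapshots(snapshots, now, minute_count, days, months):
    """Decide which snapshot timestamps to keep under a
    minutes-hours-days-months retention policy (illustrative only)."""
    keep = set()
    snaps = sorted(snapshots, reverse=True)  # newest first
    # Tier 1: keep the most recent `minute_count` fine-grained snapshots.
    keep.update(snaps[:minute_count])
    # Tier 2: within the last 24 hours, reduce to one snapshot per hour.
    seen_hours = set()
    for s in snaps:
        if now - s <= timedelta(hours=24):
            hour = s.replace(minute=0, second=0, microsecond=0)
            if hour not in seen_hours:
                seen_hours.add(hour)
                keep.add(s)
    # Tier 3: one snapshot per day for the last `days` days.
    seen_days = set()
    for s in snaps:
        if now - s <= timedelta(days=days):
            if s.date() not in seen_days:
                seen_days.add(s.date())
                keep.add(s)
    # Tier 4: one snapshot per month up to `months` months back
    # (coarse 30-day months); anything older is dropped entirely.
    seen_months = set()
    for s in snaps:
        if now - s <= timedelta(days=30 * months):
            if (s.year, s.month) not in seen_months:
                seen_months.add((s.year, s.month))
                keep.add(s)
    return keep
```

For example, feeding in a snapshot every six hours for roughly 100 days with `minute_count=4, days=7, months=2` keeps the newest snapshots at full granularity while thinning and eventually dropping the oldest ones.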
Management and Monitoring
- Optimized Deployment Process with Drive Setup Tool – the disk_init_helper (more here) assists with the initialization of NVMes, SSDs, PMEMs, and HDDs as StorPool data drives. The main goal of this tool is to provide consistent defaults for known configurations and idempotency when it is used from the storpool-ansible playbook. The tool discovers, pre-configures, and initializes the set of drives in a given StorPool node.
- Volumes or Snapshots with Violated Constraints Can Now Be Visualized – the output of ‘storpool volume status’ now displays when volume or snapshot constraints are violated. This feature is helpful for tracking which volumes were created while the storage system was in a critical state (e.g., one node in a three-node cluster had failed), and for knowing when it is time to rebalance the cluster after the missing node returns.
- storpool_abrtsync Now Uses HTTP/HTTPS for Transfers – the change enables receiving error messages from complex networks, or nodes behind proxies. With this change, all the tools used to send monitoring and billing metrics support proxies and the previously used rsync/SSH connections will be deprecated in time.
- AttachmentsList API call and ‘storpool attach list’ CLI Command now show the global ID of the attached volume/snapshot – This change helps with the management of multi-cluster deployments where many StorPool sub-clusters behave as a single large-scale primary storage system with a unified global namespace from the cloud platform point of view.
- Change in the Way Volume Overrides are Loaded – Volume overrides are now visible with the ‘storpool balancer disks’ command, and enabling them requires a ‘storpool balancer commit’ to be applied manually. This allows administrators to ensure overrides are pre-loaded properly before the balancer begins executing the rebalancing operation. The volume overrides feature is useful when many volumes are created from the same parent snapshot, as it ensures that writes are evenly distributed across storage devices in the storage system. To learn more about volume overrides in StorPool, follow this link.
- Speed up for VolumesGetStatus in clusters with thousands of volumes and snapshots.
- Fix for storpool_abrtsync Service – The service now starts only after the vmcore-dmesg file permissions in /var/crash are updated. This fix ensures that, after a machine crashes, it sends the correct detailed vmcore-dmesg log with an accurate timestamp.
- Additional Metrics Collected to Improve Alerting – To enable providing more comprehensive monitoring alerts, the storpool_stat service now collects the following additional info: tainting modules from /sys/module/; free space on all filesystems; lshw json output; and lspci with kernel modules output.
- Changes in the Tool for Node-wide Actions for StorPool Services – storpool_ctl now just warns when the storpool_bd kernel module cannot be reloaded, and its status shows all alerts for kernel module versions.
- Other Management and Monitoring Changes – added global max recovery request setup (local and remote recovery overrides); fixed a known VolumeGetStatus issue that could cause the API to abort/restart; fixed entering maintenance mode with remote volume exports; storpool_stat service properly cleans up processes after data collection (no leftover zombie processes); fix for VolumesGetStatus API call showing volumes and snapshots as degraded; storpool_stat collects inventory less often to reduce CPU usage; storpool_bd now logs the name of any process that opens a StorPool block device for writing; fix for the monitoring agent to catch API outages; and more.
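Two of the new metrics mentioned above – free filesystem space and kernel-tainting modules – can be approximated with a short sketch. This is not the storpool_stat agent; `collect_basic_metrics` is a hypothetical name, the sketch only samples the root filesystem, and the real service additionally collects lshw and lspci output.

```python
import json
import os
import shutil

def collect_basic_metrics():
    """Gather a small subset of host metrics similar in spirit to what a
    monitoring agent might collect (illustrative sketch, not storpool_stat)."""
    metrics = {}
    # Free space on the root filesystem (a real agent would cover all mounts).
    usage = shutil.disk_usage("/")
    metrics["root_fs"] = {"total_bytes": usage.total, "free_bytes": usage.free}
    # Modules that taint the kernel, read from /sys/module/ on Linux;
    # each module exposes a `taint` file that is empty for clean modules.
    tainting = []
    sys_module = "/sys/module"
    if os.path.isdir(sys_module):
        for mod in os.listdir(sys_module):
            taint_file = os.path.join(sys_module, mod, "taint")
            try:
                with open(taint_file) as f:
                    flags = f.read().strip()
            except OSError:
                continue
            if flags:
                tainting.append({"module": mod, "flags": flags})
    metrics["tainting_modules"] = tainting
    return metrics

print(json.dumps(collect_basic_metrics(), indent=2))
```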
Hardware and Software Compatibility
- Initial support for persistent memory devices used for journals (write caching in front of SSDs or HDDs).
- Confirmed compatibility with Intel P5510 and Samsung PM9A3 NVMe SSDs
- Added support for ConnectX-6 Dx, ConnectX-6 Lx, and ConnectX-7 NICs.
- Compatibility improvements for NICs using the bnxt_en driver (e.g., Broadcom NetXtreme-E BCM57414).
- Added initial support for RHEL 8.4, AlmaLinux 8.4, and Rocky Linux 8.4 (all now supported up to version 8.5).
- Added incremental support for 5.4+ kernels (up to 5.13 for Ubuntu 20.04 LTS) in preparation for support of the next LTS kernel.
OpenNebula Addon Improvements
- Added support for OpenNebula version 6.2.
- Added a tool that verifies the consistency of VM snapshots against existing StorPool snapshots. Learn more about the tool here.
- Other Enhancements and Bug Fixes – handling for a race condition in host-error (case OpenNebula/one#5564); a bugfix in the VC safeguard routine; reworked datastore/export; an added option to report a transport method other than ssh:// (if needed); and more.
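At its core, the snapshot consistency check mentioned above compares the snapshot records the cloud platform holds for a VM against the snapshots that actually exist on the storage side. The following is a minimal sketch of that comparison only; `check_snapshot_consistency` and the report keys are hypothetical names, and the real add-on tool queries OpenNebula and StorPool directly rather than taking plain lists.

```python
def check_snapshot_consistency(vm_snapshots, storpool_snapshots):
    """Compare the snapshots a VM believes it has against the snapshots
    present on the storage backend (illustrative sketch only)."""
    vm = set(vm_snapshots)
    sp = set(storpool_snapshots)
    return {
        # VM snapshot records with no backing snapshot on the storage side.
        "missing_on_storage": sorted(vm - sp),
        # Storage snapshots that no VM snapshot record points to.
        "orphaned_on_storage": sorted(sp - vm),
        "consistent": vm == sp,
    }
```

For example, `check_snapshot_consistency(["snap-1", "snap-2"], ["snap-2", "snap-3"])` reports `snap-1` as missing on storage and `snap-3` as orphaned.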
StorPool Storage is designed for workloads that demand extreme reliability and low latency. It’s the easiest way to deploy high-performance, linearly-scalable primary storage systems that serve as foundations for reliable, agile, speedy, and cost-effective clouds.
With StorPool Storage, companies streamline their entire IT operations by connecting a single storage system to all their cloud platforms. They complete upgrades between generations of servers non-disruptively, without the need for forklift upgrades every few years – all while benefiting from the stellar speed and reliability of their StorPool storage systems.
The software is provided as a fully hands-off solution – the StorPool team architects, deploys, tunes, monitors, and maintains each storage system so that end-users experience fast and reliable services. Meanwhile, tech teams worldwide finally have time for the critical projects that aim to grow their companies.