StorPool Storage v19.3 – Enhancing Open Source Cloud Computing Support and Introducing Management GUI

StorPool Storage announces version 19.3. The latest release offers enhanced support for fully automated KVM-based clouds running virtualised or containerised primary workloads and adds management features to the StorPool GUI. Learn more about the latest updates and new features below.

Since the start of 2021, we have delivered 24 updates to StorPool Storage – the primary storage platform designed for cloud infrastructure running diverse, mission-critical workloads. Some interesting data about the StorPool storage systems covered by our utterly hands-off support:

  • 98% of clusters run a StorPool version released in Q2 2021 or later.
  • 14% of clusters run the latest production version of StorPool Storage.

“Our continuous improvement process allows us to develop, build, test, release, and deploy new versions of StorPool Storage at short intervals. Thus, we continuously improve our customers’ primary storage systems,” says Alex Ivanov, Product Lead at StorPool Storage. “In addition, we are enhancing the native plug-ins/integrations we maintain for the cloud management platforms for KVM-based clouds – OpenNebula, CloudStack, OpenStack – and for the Kubernetes container orchestration platform.”

Customers can now streamline their IT operations by connecting a single StorPool storage system to all of their cloud platforms. At StorPool, we have gone beyond what is typically expected from storage system plug-ins by solving some cumbersome challenges faced by companies using the leading Open Source Cloud Management Platforms (CMPs).

With the built-in automation for each plug-in/integration, customers seamlessly manage their clouds from the familiar user interfaces of their cloud platform(s). The StorPool storage system transparently executes actions initiated from the CLI/GUI of each cloud platform. Each virtual machine or container they deploy gets dedicated volumes in StorPool. The cloud inherits all of StorPool’s features – cloned provisioning, instant snapshots, thin provisioning, native BC/DR (Business Continuity/Disaster Recovery) and QoS (Quality of Service) policies per virtual disk or VM, and more.

Management and Monitoring

  • Management from the StorPool GUI – Using the simple StorPool GUI, administrators can now perform basic management tasks such as creating, resizing, and deleting volumes and taking snapshots (the sketch after this list illustrates the equivalent API operations). These features simplify the experience of infrastructure admins and IT generalists who need to perform manual tasks on StorPool Storage systems when using StorPool with VMware, Microsoft Hyper-V, XenServer, and other traditional enterprise virtualisation technologies.
  • Remote Bridge Status Monitoring – Shows the status of connections between a given StorPool cluster and other StorPool clusters. We also added analytics dashboards that show the bridge traffic between clusters over time. Like the other analytics dashboards on the StorPool Hosted Analytics System, they deliver per-second metrics for the most recent two days and per-minute metrics for the most recent year. This feature is helpful for monitoring complex multi-site deployments where many StorPool clusters replicate snapshots to one site (many-to-one) or multiple sites (many-to-many). It also helps with multi-cluster deployments where tens of StorPool sub-clusters behave as a single large-scale primary storage system with a unified global namespace from the cloud platform point of view.
  • Automated Node-wide Actions for StorPool Services – A new tool automates the standard safety checks performed before executing actions like ‘start’, ‘disable’, ‘status’, or ‘restart’ for StorPool services. It makes it easier to manage large-scale environments where parts of a cloud are hyper-converged and parts are client-only or storage-only. Promoting a client-only node to a hyper-converged node, or disabling all services to decommission or relocate a node, is now easier than ever.
  • Accelerated Volume and Snapshot Space Queries – The ‘VolumesSpace’ and ‘SnapshotsSpace’ queries are now executed by the storpool_server service. Offloading these queries to the parallelised storpool_server services results in faster completion, making it easier to monitor storage space usage in clusters with tens of thousands of volumes and snapshots.
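
To make these management and space-reporting features more concrete, here is a minimal Python sketch that drives a StorPool cluster through its JSON-over-HTTP management API. The API address, auth-token format, command paths, field names, and the demo-vol/hybrid names are assumptions for illustration; the GUI and the supported CLI/bindings wrap the same kind of operations, so consult the StorPool API reference for the exact calls in your release.

    # A minimal sketch of StorPool management calls over the JSON-over-HTTP
    # API. The endpoint, port, auth-header format, and field names are
    # illustrative assumptions -- check the StorPool API reference for the
    # exact commands available in your release.
    import json
    import urllib.request

    API = "http://192.168.0.10:81/ctrl/1.0"   # hypothetical API address
    TOKEN = "0123456789"                      # hypothetical auth token

    def sp_call(command, payload=None):
        """POST one API command and return the decoded JSON response."""
        req = urllib.request.Request(
            f"{API}/{command}",
            data=json.dumps(payload or {}).encode(),
            headers={"Authorization": f"Storpool v1:{TOKEN}",
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Create a 100 GiB volume from a placement template, then grow it
    # live to 200 GiB (sizes are in bytes).
    sp_call("VolumeCreate", {"name": "demo-vol", "size": 100 * 2**30,
                             "template": "hybrid"})
    sp_call("VolumeUpdate/demo-vol", {"size": 200 * 2**30})

    # Take an instant snapshot, then run the space queries from the last
    # bullet above to fetch per-volume and per-snapshot usage statistics.
    sp_call("VolumeSnapshot/demo-vol", {"name": "demo-vol-snap1"})
    print(sp_call("VolumesSpace"))
    print(sp_call("SnapshotsSpace"))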

Business Continuity and Disaster Recovery

StorPool VolumeCare 

  • Keep-Daily-Split Mode – In this mode, StorPool takes snapshots every ‘interval’ hours (typically every hour). The StorPool VolumeCare service keeps only the snapshots from the last 24 hours in the primary cluster and replicates one snapshot per day to the backup cluster, where it is retained for ‘days’ days (typically 7). This setting simplifies the management of environments where frequent snapshots are kept on the primary storage system and daily snapshots are sent to a secondary storage system (see the sketch after this list).
  • Cross-cluster Replication – Enables configuring VolumeCare policies so that each cluster in a pair serves as the primary for its own workloads and as the backup for the other cluster.
  • Protect Multi-cluster Primary Storage Systems – Administrators can now configure VolumeCare to back up multi-cluster environments where tens of StorPool sub-clusters behave as a single large-scale primary storage system from the cloud platform point of view.
  • Snapshot Replication to Multi-cluster Secondary Storage Systems – Administrators can now configure VolumeCare to automate snapshot replication into multi-cluster deployments where tens of StorPool sub-clusters behave as a single large-scale secondary storage system.
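
As a rough illustration of the keep-daily-split rules, the following Python sketch models which snapshots stay on the primary cluster and which daily snapshots the backup cluster retains. It is an illustrative model under the typical 1-hour interval and 7-day retention, not VolumeCare’s actual implementation.

    # Illustrative model of keep-daily-split retention (not VolumeCare's
    # actual implementation): the primary keeps the last 24 hours of
    # snapshots; the backup keeps one snapshot per day for `days` days.
    from datetime import datetime, timedelta

    def keep_daily_split(snapshots, now, days=7):
        """Return (primary, backup) snapshot timestamps to retain."""
        primary = [s for s in snapshots if now - s < timedelta(hours=24)]
        first_of_day = {}
        for s in sorted(snapshots):
            first_of_day.setdefault(s.date(), s)        # one snapshot per day
        backup = sorted(first_of_day.values())[-days:]  # newest `days` dailies
        return primary, backup

    # Ten days of hourly snapshots, matching the typical 1-hour interval.
    now = datetime(2021, 9, 1, 12, 0)
    snaps = [now - timedelta(hours=h) for h in range(24 * 10)]
    primary, backup = keep_daily_split(snaps, now)
    print(len(primary), len(backup))   # -> 24 7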

Integrations

OpenNebula Addon

  • Added support for OpenNebula versions 5.12 and 6.0.
  • Improved OpenNebula Front-end Security – With StorPool, there is no need to transfer virtual machine disk images back to the OpenNebula Front-end, so the addon adds an option to skip this operation entirely when the OpenNebula cluster uses only StorPool Storage for datastores. As a result, compute hosts no longer need to connect back to the OpenNebula Front-end.
  • Prevent Data Corruption Caused by Admin Actions – When admins or users inappropriately start a virtual machine on more than one host, StorPool volumes are force-detached from the host(s) where the VM should not be running.
  • StorPool VolumeCare Support – The addon now handles StorPool VolumeCare policy tags on StorPool volumes. Administrators and users can manage StorPool VolumeCare policies per virtual machine.
  • Automatic Tagging of Recovered Volumes – When recovering a volume or a complete virtual machine from a snapshot, the addon automatically applies the appropriate volume tags to the recovered volumes.
  • Delayed Deletion of StorPool Volume(s) Attached to VM – StorPool delays the delete operation for volumes backing OpenNebula virtual disks by 48 hours. In cases where someone deletes a virtual machine or virtual disk by mistake, this feature lets administrators reverse the action.
  • Support for UEFI Secure Boot VMs – Creating a virtual machine with the nvram setting automatically creates a volume in the backing StorPool system and attaches it to the VM. Since libvirtd is unaware of the difference, UEFI VMs can be deployed transparently. All standard features (cloned provisioning, thin provisioning, zeroes detection, live migration, TRIM/Discard, instant snapshots, etc.) can be used for UEFI VMs.
  • Optimised Creation and Use of VM Templates – Administrators can now include VM context packages in the contextualisation CDROM. When used for Windows VMs, this feature removes the need for a separate CDROM dedicated to the contextualisation packages.
  • Initiating an operation to delete snapshots from the OpenNebula UI now skips already deleted snapshots, preventing an error prompt.
  • Added a tool that automates the configuration of OpenNebula needed by the StorPool addon.
  • Added a user-friendly helper script that tags volumes served by StorPool, making it easier to manage the integrated cloud infrastructure.
  • Added a helper script that shows all the StorPool volumes allocated to virtual machine disks.
  • CentOS 8 – The addon is now installable on CentOS 8.
  • Other enhancements – Refactored code, optimised deployment mechanics, volumes are now force-detached before renaming (and reattached afterwards), optimised attach/detach mechanics for live migrations, optimised performance of disk size reporting to OpenNebula, deployment tweaks for video.py, os.py, and volatile2dev.py, optimised resource isolation for HCI deployments, improved VM save/restore handling, VM disk monitoring improvements, and more.

CloudStack Plug-in

  • Added support for CloudStack versions 4.11.3 to 4.15 by ensuring StorPool uses the appropriate arguments and methods for each CloudStack version and does not rely on deprecated CloudStack functionalities.
  • Secondary Storage Bypass – Templates downloaded to CloudStack’s secondary storage are simultaneously downloaded to the StorPool primary storage as a snapshot. When creating a virtual machine in CloudStack, the plug-in uses StorPool’s features to create a virtual disk from the snapshot. StorPool keeps VM data only on the primary storage. Thanks to this feature, administrators can bypass the CloudStack secondary storage – decreasing network load and optimising storage utilisation for each virtual machine they deploy.
  • Volume and Snapshot Tagging with Virtual Machine UUID – The volumes served by StorPool are tagged in a more user-friendly way, making it easier to manage the integrated cloud infrastructure.
  • Multi-cluster Support for CloudStack – Enables customers with CloudStack to use multiple StorPool Storage clusters as availability zones. StorPool clusters that connect to CloudStack are automatically assigned a unique ID. Snapshots initiated in CloudStack are created and recovered transparently using the unique ID of each sub-cluster in the multi-cluster group.
  • StorPool VolumeCare Support – Enables handling StorPool VolumeCare policy tags on StorPool volumes. Administrators can manage StorPool VolumeCare policies per virtual machine. Users are not allowed to add or delete VolumeCare policies. StorPool automatically tags volumes attached to a given VM with the VolumeCare tag of that VM.
  • Automatic Tagging of Recovered Volumes – When recovering a volume or complete virtual machine from a snapshot, StorPool automatically applies appropriate volume tags to the recovered volumes.
  • Global Volume and Snapshot Names – Volume and snapshot names in CloudStack are generated from the global IDs issued by StorPool. The CloudStack-set names are stored in tags, along with other necessary metadata like VolumeCare policies. This ensures that the environment operates reliably even in unexpected circumstances (e.g., restarting the CloudStack controller midway through a storage operation).
  • Seamless Quality-of-Service Controls – Max IOPS limits set while creating a virtual machine in CloudStack are automatically applied in StorPool. Administrators can also perform live changes to the size and IOPS limits of each VM’s virtual disk(s) (illustrated in the sketch after this list).
  • Other enhancements – StorPool now connects to CloudStack as primary storage immediately after initialisation (no restart required), snapshots that fail to be deleted in StorPool now remain in the CloudStack database, removed unnecessary logging, added more tests for VM/volume migration from NFS to StorPool to ensure stability, all storage-related CloudStack API commands are mapped to StorPool APIs and initialised during the initial bootup of the environment, and more.
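
To show what the plug-in effectively does when a CloudStack offering carries a max-IOPS value, here is a hedged Python sketch that applies a per-volume limit and a live resize directly through the StorPool API. The endpoint, token, volume name, and the ‘iops’ field are assumptions for illustration; in production the plug-in issues the equivalent calls automatically.

    # Hedged sketch: apply a per-volume IOPS cap and a live resize via
    # the StorPool JSON API. The endpoint, token, volume name, and the
    # "iops" field are illustrative assumptions; the CloudStack plug-in
    # performs these calls automatically.
    import json
    import urllib.request

    API = "http://192.168.0.10:81/ctrl/1.0"   # hypothetical API address
    TOKEN = "0123456789"                      # hypothetical auth token

    def sp_call(command, payload):
        req = urllib.request.Request(
            f"{API}/{command}",
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Storpool v1:{TOKEN}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Cap the volume backing a CloudStack root disk at 5000 IOPS, then
    # grow it live to 200 GiB -- no detach or VM downtime required.
    sp_call("VolumeUpdate/cs-vm42-root", {"iops": 5000})
    sp_call("VolumeUpdate/cs-vm42-root", {"size": 200 * 2**30})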

OpenStack Integration

  • Added support for attaching OpenStack compute nodes using the iSCSI protocol as an alternative to the native StorPool Block Protocol. With this change, the per-virtual-disk orchestration of OpenStack and Cinder is also available over StorPool iSCSI. Attaching KVM hosts, Hyper-V hosts, and cinder-volume services (for initial provisioning of VM images) over iSCSI is supported.
  • Efficient Provisioning of Nova Instances without an Image Datastore – Administrators can now take snapshots of Cinder volumes created from Glance images and use them to efficiently deploy Nova instances. New instances automatically clone the snapshot in StorPool, and many instances can access the common data in a shared snapshot – there are production cases where thousands of volumes use the data in one shared snapshot (see the sketch below). The key benefits of this feature are decreased I/O load on the storage system, higher storage space efficiency, and – most importantly – no need to maintain a second datastore, since Glance keeps all image data in the connected StorPool storage system.
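
The Python sketch below illustrates the clone-from-snapshot pattern this feature relies on: many instance root disks created as thin clones of one shared image snapshot. The endpoint, token, volume and snapshot names, and the ‘parent’ field are assumptions for illustration; in production the Cinder StorPool driver issues the equivalent calls.

    # Illustrative clone-from-snapshot pattern: ten instance root disks
    # created as thin clones of one shared image snapshot. The endpoint,
    # token, names, and the "parent" field are hypothetical.
    import json
    import urllib.request

    API = "http://192.168.0.10:81/ctrl/1.0"   # hypothetical API address
    TOKEN = "0123456789"                      # hypothetical auth token

    def sp_call(command, payload):
        req = urllib.request.Request(
            f"{API}/{command}",
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Storpool v1:{TOKEN}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Each clone shares the snapshot's data blocks, so no image data is
    # copied when an instance is spawned.
    for i in range(10):
        sp_call("VolumeCreate", {"name": f"nova-instance-{i}-root",
                                 "parent": "glance-image-ubuntu-20.04"})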

Kubernetes CSI Driver

  • Added support for attaching Kubernetes worker nodes using the iSCSI protocol as an alternative to the native StorPool Block Protocol. With this change, the per-pod orchestration of persistent and ephemeral Kubernetes volumes is also available over StorPool iSCSI, so Kubernetes environments connecting via iSCSI can use the automation previously available only with the StorPool Block Protocol (see the sketch below).
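
From the Kubernetes side, provisioning a StorPool-backed volume looks the same over either transport – a claim simply references a storage class backed by the StorPool CSI driver. Below is a minimal sketch using the official kubernetes Python client; the storage class name storpool-iscsi is a hypothetical example.

    # Minimal sketch: request a StorPool-backed persistent volume from
    # Kubernetes. The storage class name "storpool-iscsi" is a
    # hypothetical example of a class served by the StorPool CSI driver
    # over iSCSI.
    from kubernetes import client, config

    config.load_kube_config()   # or config.load_incluster_config()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-pvc"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="storpool-iscsi",   # hypothetical class
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc)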

Hardware Compatibility

  • Widened compatibility with NVMe SSDs by enabling storpool_nvmed to manage NVMe drives using the standard Linux vfio-pci driver (see the sketch after this list).
  • Confirmed compatibility with various NVMe SSDs (Toshiba Cx5 NVMe, WD Ultrastar SN200, WD Gold NVMe WDS960G1D0D, HGST Ultrastar SN100, Micron 9100 Pro).
  • Added support for Intel NICs using the ice driver (e.g., Intel E810).
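
For background, handing a PCI device over to vfio-pci is a standard Linux sysfs procedure, sketched below in Python. The PCI address is hypothetical and storpool_nvmed automates the whole sequence, so treat this as an illustration of the mechanism rather than an operational step (it requires root and the vfio-pci module).

    # Generic Linux sketch of handing an NVMe device to vfio-pci via
    # sysfs. The PCI address is hypothetical; storpool_nvmed automates
    # this procedure. Requires root and a loaded vfio-pci module.
    from pathlib import Path

    PCI_ADDR = "0000:03:00.0"                     # hypothetical NVMe device
    dev = Path("/sys/bus/pci/devices") / PCI_ADDR

    # Prefer vfio-pci for this device, detach its current driver
    # (e.g. nvme), then ask the kernel to re-probe it.
    (dev / "driver_override").write_text("vfio-pci\n")
    if (dev / "driver").exists():
        (dev / "driver" / "unbind").write_text(PCI_ADDR)
    Path("/sys/bus/pci/drivers_probe").write_text(PCI_ADDR)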

Software Compatibility

  • Ubuntu 20.04 – Added support for Ubuntu 20.04 LTS; StorPool is deployable on Ubuntu 20.04 with kernels up to 5.8.
  • CentOS 8 – StorPool is deployable on CentOS releases up to 8.4 (kernel 4.18.0).

StorPool 19.3 is now being deployed to all new StorPool clusters and is being rolled out to all existing StorPool customers.

Read about StorPool version 19.2.

Storage is one of the most important components of your IT stack. Let us help you unlock the power of high-performance data storage. Talk to an expert!
