CloudStack European User Group & Ceph day

CloudStack User Day London – CloudStack Cloud Builders Get Together

StorPool Storage’s team had the pleasure of attending one of the most exciting CloudStack events of the year – the CloudStack User Day 2019 on 24 October in London. We were able to meet our friends in the CloudStack community, watch some amazing sessions and deliver two talks of our own.

The CloudStack European User Group & Ceph day 2019 was an all-day event organized by our friends from ShapeBlue that brought together the Apache CloudStack and Ceph communities. It had a mixture of talks focused on how the technologies work together. In the afternoon, the sessions split into two tracks – one devoted to Ceph and the other to CloudStack.

The main focus of this event was collaboration and the integration of new features in CloudStack. The world’s leading experts and contributors shared their knowledge and presented the latest functionalities and updates in the projects in twelve exciting sessions.

Among the most notable talks were the overview of innovations in the Apache CloudStack software by Andrija Panic from ShapeBlue and “Running OpenShift Clusters in a CloudStack Environment” by Sven Vogel. StorPool’s team gave two talks at the event, delivered by Slavka Peleva and Venko Moyankov, who made a great impression with their useful sessions on optimizing KVM VMs.

“Storage-based Snapshots for KVM VMs in CloudStack” by Slavka Peleva

Slavka Peleva is a CloudStack developer at StorPool Storage. She is an experienced software engineer, involved in StorPool’s development and integration with CloudStack. Her session “Storage-based Snapshots for KVM VMs in CloudStack” was one of the most awaited talks of the day. Slavka presented a solution to one of the major drawbacks in CloudStack – the lack of live snapshots of KVM VMs.

Currently, live snapshots of KVM VMs are supported only through libvirt/QCOW snapshots, using an internal snapshot format. The problem lies in the exclusive support for the qcow2 format: it is impossible to snapshot VMs with raw disks, and only internal snapshots are allowed. These snapshots are non-live, meaning the VM has to be paused while all its state is saved. For end users this is likely not a problem, but if you are running a public service, minimizing downtime is essential.

New live snapshot feature in CloudStack

The solution that Slavka presented is a new feature StorPool has designed and implemented for the CloudStack software to avoid these limitations. The approach is to create consistent snapshots of all virtual machine disks while the VM is running. In the end, you have a virtual machine snapshot, but without its memory. The new alternative VM snapshot framework for CloudStack/KVM employs the underlying storage provider’s capability to create, revert and delete disk snapshots. It is a new approach that delivers VM snapshots with minimal impact on the running VM and works with all storage providers.

Being independent of the image format gives the possibility to create consistent snapshots while the virtual machine is running. This eliminates the need to pause the VM during the process.

Creating consistent snapshots

To make all snapshots consistent, the new feature has to freeze the virtual machine. While it is frozen, we take an asynchronous snapshot of each disk with the appropriate datastore driver implementation. When the execution is complete, we unfreeze the virtual machine. To use the freeze and thaw commands, you need qemu-guest-agent installed in the guest. After a successful snapshot, if the “snapshot.backup.to.secondary” option is set to true, the snapshot is backed up to secondary storage.
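The freeze, snapshot and thaw sequence described above can be sketched roughly as follows. This is a minimal illustration only – the `VirtualMachine` and `StorageDriver` interfaces are hypothetical stand-ins, not CloudStack’s actual agent or datastore driver API:

```python
# Sketch of the freeze -> snapshot -> thaw sequence described above.
# The vm/driver objects are hypothetical stand-ins for CloudStack's
# agent and datastore-driver plumbing, used only for illustration.

def snapshot_vm_disks(vm, driver, backup_to_secondary=False):
    """Take consistent snapshots of all VM disks with a minimal pause."""
    vm.freeze()                       # requires qemu-guest-agent in the guest
    try:
        snapshots = [driver.snapshot_disk(disk) for disk in vm.disks]
    finally:
        vm.thaw()                     # always unfreeze, even on failure
    if backup_to_secondary:           # mirrors snapshot.backup.to.secondary
        for snap in snapshots:
            driver.backup_to_secondary(snap)
    return snapshots
```

The `try/finally` is the important part: the VM must be thawed even if one of the disk snapshots fails, so a storage error never leaves the guest frozen.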

The new feature works with different storage providers such as NFS, Ceph, StorPool, etc. During the implementation, we made a lot of test scenarios with those three storage providers.

Keep in mind that the live snapshot feature we created for the CloudStack project is not yet in production – we expect it to land in the software shortly. If you are using StorPool with CloudStack, the solution is even simpler. We have a separate solution that does not need additional software. It works on virtual machines with only StorPool disks attached and creates crash-consistent group snapshots without the need to freeze the virtual machine – and the process is instant.

See more in Slavka Peleva’s presentation “Storage-based Snapshots for KVM VMs in CloudStack”

Watch the session from CloudStack User Day 2019 now!

“Achieving the ultimate performance with KVM” by Venko Moyankov

Venko Moyankov is a solution architect at StorPool and part of our skillful operations team.

His session “Achieving the ultimate performance with KVM” at the CloudStack User Day 2019 was one of the headliners of the event. The presentation focused on the core components of a virtualization stack and the different scenarios you face at each step when selecting these important elements. Venko’s session helped attendees better understand the different optimization strategies for cloud infrastructures and gave practical advice on how to build a scalable and high-performing cloud. It was a deep technical dive, unveiling why performance matters and how to choose the optimal set of technologies, hardware and software to build an unbeatable new-age software-defined cloud.

During his presentation, Venko covered four main areas of performance optimization – hardware, BIOS and OS tuning, networking, and storage. Each of these areas affects the performance, the cost of the entire solution, or both.

Hardware selection

When selecting hardware and technologies, you always try to get the best while keeping costs to a minimum. But “the best” means different things depending on your needs and the specific projects or workloads that will be hosted in the cloud. The typical optimization goal for cloud infrastructure is the lowest cost per delivered resource at a fixed performance level.

When calculating the cost, you must include all components, not only the price of the bare-metal servers. You should also always consider the cost of:

  • Electric power
  • Cooling
  • Rack space
  • Network configuration and other equipment
  • Software
  • Implementation
  • Deployment
  • Support, etc.

Taking all these things into consideration may change the perspective and you may switch over to a different solution and configuration that better fits the needs and budget of your project.

Selecting the best hardware

StorPool has developed a tool that helps you select the best hardware components for your requirements. It takes into account all the main parameters – power cost, rack space cost, power limit per rack, operation and initial deployment cost per node, software licensing, server cost, networking, RAM cost and RAM limit per server, etc. The tool calculates the total cost per delivered resource over the lifetime of the server – 36 or 60 months – for different configurations. We use this tool in-house to help our customers select the optimal hardware for their cloud infrastructure, so they can achieve better efficiency within their requirements and budget.
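To illustrate the idea behind this kind of calculation, here is a simplified sketch. The cost items follow the list above, but the formula and all figures are illustrative assumptions, not StorPool’s actual model:

```python
# Simplified sketch of a cost-per-delivered-resource calculation.
# The structure follows the cost items listed above; the formula and
# any figures used with it are illustrative, not StorPool's real tool.

def monthly_cost_per_vm(server_cost, software_cost, deployment_cost,
                        power_watts, power_price_kwh,
                        rack_units, rack_unit_price_month,
                        vms_per_server, lifetime_months=36):
    """Total cost of ownership per VM per month over the server lifetime."""
    capex = server_cost + software_cost + deployment_cost
    # Monthly electricity: watts -> kW, times hours in a month, times price.
    power_month = power_watts / 1000 * 24 * 30 * power_price_kwh
    opex = (power_month + rack_units * rack_unit_price_month) * lifetime_months
    return (capex + opex) / lifetime_months / vms_per_server
```

Even this toy version shows why the perspective can change: a cheaper server that draws more power or hosts fewer VMs can easily end up with a higher cost per delivered VM over 36 or 60 months.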

BIOS settings and Software layer

The optimization strategy on this layer depends on the desired results – maximum performance per watt, per CPU core, higher throughput or lower latency. Different requirements need different settings. It is important to understand how power management works in order to achieve good results.

Changing just one parameter can gain or lose a lot of performance points – the equivalent of spending a few thousand dollars per box.

Optimization strategy for the usage of CPU and RAM

Further on, Venko examined the optimization strategy for the usage of CPU and RAM, which form the main part of the hardware cost. The key point was the proper distinction between oversubscription and congestion and their impact on service quality. He also gave practical advice on how to measure and control CPU congestion.

Storage

Choosing the right storage solution is one of the critical factors when building an effective cloud infrastructure. During his session Venko examined two different strategies when building a storage system – local storage with pass-through and virtualized shared storage.

Full bypass

In this case, you fully bypass the entire virtualization stack and give the virtual machine direct access to the local storage devices in order to achieve maximum performance. In this scenario, you lose all the useful features: live migration, shared storage, snapshots, volumes, thin provisioning, etc. PCI passthrough can be used for direct access to local NVMe devices and gives fast performance, but that is all you get. It is also limited by the performance of a single NVMe device and sets profound limitations on how you can use and manage your systems. One of the major drawbacks is that if the node dies, you lose all your data or it becomes inaccessible. If you have not secured a backup or disaster recovery, the consequences could be irreversible, or at the very least you will experience major downtime of your services.

Virtualized storage stack

Virtualized storage is a must-have for every company that wants to provide stability to its platforms. With virtualized storage you gain more flexibility, you can make live migrations, create shared storage, etc.

Venko presented multiple configuration options, both for the guest VM and for the host, that in numerous cases give really good results and optimize VM performance.

The vhost protocol, which provides the fastest path between the virtual machine and the storage system, was compared with the traditional virtio-block protocol. Venko showed the benefits of using vhost and how it completely eliminates context switching in the data path from the virtual machine through the hypervisor, storage initiator, network and storage target to the physical NVMe device.

Venko paid special attention to measuring the performance and some good and bad practices to benchmark storage systems.
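One widely accepted good practice when benchmarking storage is to bypass the page cache and test with a realistic queue depth and block size. As a small illustration (a hypothetical helper, not from the presentation), a 4k random-read test with fio might be assembled like this:

```python
# Sketch: assemble an fio command line for a 4k random-read test.
# The fio flags are standard fio options; the default values chosen
# here are illustrative, not a recommended benchmark recipe.

def fio_randread_cmd(device, runtime_s=60, iodepth=32, jobs=4):
    """Build an fio argv for a direct-I/O 4k random-read benchmark."""
    return [
        "fio", "--name=randread",
        f"--filename={device}",
        "--rw=randread", "--bs=4k",
        "--direct=1",                  # bypass the page cache
        "--ioengine=libaio",
        f"--iodepth={iodepth}",
        f"--numjobs={jobs}",
        f"--runtime={runtime_s}", "--time_based",
        "--group_reporting",
        "--output-format=json",
    ]
```

The key point, echoing the good/bad practices Venko discussed, is that a benchmark without `--direct=1` or with an unrealistic queue depth measures the cache and the tool, not the storage system, and its numbers will not match the production workload.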

As a summary of Venko’s presentation on CloudStack User Day 2019:

There is no single configuration or optimization strategy that works best in all use cases. You should choose an optimization strategy based on the specific requirements of the project, and you should not follow any optimization recipe blindly without measuring the results. Ultimately, you have to run your own benchmarks that closely match your production workload to find out which of the recommended optimizations work best in your environment.

Find out more about the BIOS optimization, tuning KVM, network acceleration and CPU and Memory optimization for CloudStack storage:

See more in Venko Moyankov’s presentation “Achieving the ultimate performance with KVM”

Watch the session from CloudStack User Day 2019 now!

Read more:

CloudStack European User Group Day 2018

Open Infrastructure Summit Shanghai

Interested in launching a cloud with CloudStack? StorPool is a primary storage solution for CloudStack that ensures high performance, high availability and reliability. Do not hesitate to write to us at info@storpool.slm.dev
