Architecting IT: StorPool Review Connectivity & Scripting

For StorPool, it’s a never-ending mission to provide the best software-defined block storage on the market. We are really excited to be featured on Architecting IT. In this series of posts, you’ll learn more about StorPool’s technology, with a hands-on and in-depth look at how the distributed storage architecture works, how it performs, and how it integrates into an on-premises public cloud strategy.

In this post we will look at connectivity to non-Linux clients, support for virtual environments, and automation through scripting.

Client hosts running Linux access the StorPool platform through the native client.  This uses StorPool’s own network protocol and presents block devices to the host as if they were local drives.  As we saw in the previous post, the performance of clients with no local storage is almost identical to that of nodes that also serve storage locally.
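
As a rough sketch of what this looks like in practice, creating a volume and attaching it on a Linux host with the native client takes only a couple of commands.  The exact CLI syntax, template name and volume name below are illustrative assumptions rather than commands taken from the review:

    # Create a 100 GiB volume from a placement template (names are placeholders)
    storpool volume testvol create size 100G template hybrid

    # Attach the volume to this host through the native StorPool driver
    storpool attach volume testvol here

    # The device appears under /dev/storpool/ and behaves like a local disk
    mkfs.ext4 /dev/storpool/testvol
    mount /dev/storpool/testvol /mnt/testvol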

Non-Linux Clients

For non-Linux clients, connectivity is provided through the iSCSI protocol.  From version 19 onwards, StorPool uses an internal TCP/IP stack to deliver iSCSI target support, as this offers the capability to implement NIC performance acceleration.  This means the iSCSI IP addresses on the storage hosts don’t show up through standard Linux commands.  Instead, the configuration is exposed (and configured) through a set of StorPool CLI commands.  Figure 1 shows the output from a series of commands that list the network interfaces, iSCSI base name, portals (IP addresses per host) and portal group definitions for iSCSI on our test StorPool cluster. 

StorPool iSCSI Configuration
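
The kind of output shown in Figure 1 comes from CLI queries along the following lines.  The sub-command names here are approximations based on the description above, so check the StorPool CLI reference for the exact spelling:

    # Network interfaces used by the StorPool services
    storpool net list

    # iSCSI base name, portals (per-host IP addresses) and portal groups
    storpool iscsi config
    storpool iscsi portal list
    storpool iscsi portalGroup list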

iSCSI uses the standard SCSI concepts of initiators and targets.  An initiator is a host that consumes resources; a target is a storage system that exposes storage LUNs or volumes for consumption.  Both targets and initiators are identified by an IQN (iSCSI Qualified Name), a structured text identifier such as iqn.2003-01.com.example:storage.disk1.  iSCSI LUNs are exposed to the network through a portal, which is defined by an IQN, an IP address and a TCP port.  Multiple portals can be combined into a portal group, which provides load balancing and resiliency across the network.
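
To make these terms concrete, this is how a generic Linux initiator using the standard open-iscsi tools would discover and log in to a target through a portal.  The portal address and IQN are made-up example values (a Linux host connected to StorPool would normally use the native client instead):

    # Ask the portal (IP address + TCP port 3260) which targets it exposes
    iscsiadm -m discovery -t sendtargets -p 192.168.42.10:3260

    # Log in to one of the discovered targets by its IQN
    iscsiadm -m node -T iqn.2021-01.com.example:volume1 -p 192.168.42.10:3260 --login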

In the test environment, we’ve pre-configured the iSCSI base name, portals and a portal group.  This is the first step for mapping volumes to external hosts.  Next, we need to create some volumes and host initiator definitions, then join the two together. The steps are:

  • Create an initiator configuration for the target host
  • Create a volume
  • Create a target for the volume
  • Export the volume on a portal group to the initiator

At this point the volume will be available for access via iSCSI across the network. 
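
Scripted with the StorPool CLI, the four steps look roughly like the sketch below.  The initiator IQN follows the default Windows naming convention, but the command names, volume name and portal group name are assumptions used for illustration rather than the exact syntax from the review:

    # 1. Register the initiator (the Windows host) by its IQN
    storpool iscsi initiator iqn.1991-05.com.microsoft:win10-host create

    # 2. Create the volume that will be exported
    storpool volume win10vol create size 200G template hybrid

    # 3. Create an iSCSI target backed by that volume
    storpool iscsi target create win10vol

    # 4. Export the target on a portal group to the registered initiator
    storpool iscsi export win10vol portalGroup pg0 initiator iqn.1991-05.com.microsoft:win10-host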

The following short video demonstrates this process by attaching an iSCSI volume to a Windows 10 host.

Read the full article on the Architecting IT blog!

Learn more about Architecting IT’s review of StorPool’s installation, configuration and performance:

StorPool Review – Part 1 – Installation & Configuration

StorPool Review – Part 2 – Performance, Quality of Service (QoS) and Monitoring
