
VIOS 101: I/O


Implementation and Other Considerations for vSCSI and NPIV

A VFC server adapter needs to be created using the HMC in the VIO server partition profile; it connects to a VFC client adapter created in the client LPAR. In the case of IVM, the server adapter is created automatically when the client adapter is created. Normally the server adapter is created first and then the client adapter.

The VIO server LPAR provides the connection between the VFC server adapters and the physical Fibre Channel adapters assigned to the VIO server partition on the managed system. Mapping is done using the vfcmap command.

When a client LPAR's vSCSI server adapter is created, it's assigned a vhost number, and vSCSI mappings are assigned by vhost. When a VFC server adapter is created, it's assigned a vfchost number, and NPIV mappings are assigned by vfchost number. To assign storage, the commands would be similar to the following:

Set up NPIV mappings:

vfcmap -vadapter vfchost0 -fcp fcs0
vfcmap -vadapter vfchost1 -fcp fcs1

These commands map two physical Fibre Channel adapter ports to two VFC server adapters.

Commands to show the mappings and information on the adapters:

lsmap -npiv -all
lsmap -vadapter vfchost0 -npiv
lsmap -vadapter vfchost1 -npiv
lsdev -virtual
lsnports
lsdev -slots
lscfg -vpl vfchost0
lscfg -vpl vfchost1

Set up vSCSI mappings:

mkvdev -vdev hdisk5 -vadapter vhost0

This maps hdisk5 on the VIO server to vhost0.
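
By default, the virtual target device created by mkvdev gets a generic name such as vtscsi0. If you prefer a more descriptive name, the same command accepts a -dev flag; for example (the name vm1_rootvg is purely illustrative):

mkvdev -vdev hdisk5 -vadapter vhost0 -dev vm1_rootvg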

Commands to show the mappings and information on the adapters:

lsmap -all
lsmap -vadapter vhost0
lsdev -virtual
lscfg -vpl vhost0
lsattr -El hdisk5

With vSCSI and NPIV, it's important to ensure that all disks are set with reserve_policy=no_reserve. With vSCSI, you should also check queue_depth for each hdisk on the VIO server as well as on the client. The client disks will most likely default to the vSCSI value of 3 and you may need to increase this, but don't set the client value higher than whatever it is on the VIO server. For NPIV, queue_depth is only set on the hdisks on the client LPAR.
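
As an example, assuming hdisk5 on the VIO server and hdisk0 on the client (the device names and the queue_depth value of 20 are illustrative – use whatever your storage vendor recommends), the attributes could be changed with commands similar to:

chdev -l hdisk5 -a reserve_policy=no_reserve -P
chdev -l hdisk0 -a reserve_policy=no_reserve -a queue_depth=20 -P

The -P flag stores the change in the ODM so it takes effect the next time the device is reconfigured or the LPAR is rebooted, which is necessary if the disk is currently in use.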

Fibre adapter tuning settings, specifically num_cmd_elems and max_xfer_size, are set on the Fibre adapters using chdev against the fcs devices. For vSCSI, these are only set on the VIO server. For NPIV, these are set on the VIO server and also on the client LPAR. In recent releases of AIX, the client LPAR has a maximum setting of 256 (default is 200) for num_cmd_elems for NPIV clients. The VIO server settings must be at least as high as those for the client LPARs and must be set and activated (normally a VIO server reboot) prior to changing the client LPARs. It's not uncommon to see num_cmd_elems set to 1024 or 2048 on a VIO server instead of the default 200. Typically, the command to change settings on the Fibre adapters is similar to:

chdev -l fcs0 -a max_xfer_size=0x200000 -a num_cmd_elems=1024 -P
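
To see the current values, and to check whether the adapter is actually running short of resources, commands such as the following can be used:

lsattr -El fcs0
fcstat fcs0

In the fcstat output, a steadily increasing "No Command Resource Count" suggests num_cmd_elems is too low, while "No DMA Resource Count" points at max_xfer_size.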

Virtual Optical

Virtual optical (vtOpt) was introduced in PowerVM v1.5 and allows the VIO server to take a DVD drive that's assigned to it and virtualize it for shared use by the client LPARs. Additionally, vtOpt can be used to provide file-backed optical (FBO). This allows ISO images of DVDs to be loaded into a repository on the VIO server and then shared out to client LPARs. The client LPAR sees those images as if they were a CD/DVD in a physical drive.

While only one virtual I/O client partition can have access to the drive at a time, the advantage of a vtOpt device is that you don’t have to move the parent SCSI adapter between VIO clients. In many cases, this wouldn’t be possible anyway as the SCSI adapter often controls the internal disk drives on which the VIO server is installed. The virtual drive can’t be shared with another VIO server as client SCSI adapters (required for vtOpt) can’t be created in a VIO server.
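
To virtualize the physical DVD drive itself, assuming it shows up as cd0 on the VIO server and the client's vSCSI connection is vhost0 (both names will vary by system), the command would be similar to:

mkvdev -vdev cd0 -vadapter vhost0

This creates a vtoptN device backed by the physical drive.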

To use FBO, it's best to add a separate disk to the VIO server and put it into its own volume group. This ensures that any mksysb image of the VIO server is not huge. If that isn't an option, then ensure you use the -nomedialib flag on your backupios backups if you want to be able to restore quickly.
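
For example, if a spare disk shows up as hdisk6 on the VIO server (the disk and volume group names here are just illustrations), the volume group could be created with:

mkvg -vg fbovg hdisk6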

Once the volume group is created you can use mkrep to create the repository:

mkrep -sp fbovg -size 500g

This creates a 500 GB FBO library in the volume group fbovg. lssp will list the pool.

FBO requires that the client have a vSCSI connection (a vhost) available. Once that's there, we can create an FBO device as follows:

mkvdev -fbo -vadapter vhost0

Since this is the first FBO device I created, the command creates a device called vtopt0.

Images are loaded into the repository using the mkvopt command and are normally loaded from an ISO image of the DVD.

mkvopt -name aix71base1 -file /software/aix71tl04sp4-base1.iso

This loads the AIX v7.1 tl04 sp4 disk 1 image into the repository and names it aix71base1. I had previously ripped that disk to an ISO image.

I can make that image available to the client on vhost0 (via the vtopt0 device) as follows:

loadopt -disk aix71base1 -vtd vtopt0

On the client LPAR, I can now see that image if I look at /dev/cd0.
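
To switch to a different image later – say a hypothetical second volume named aix71base2 – the current image is unloaded first:

unloadopt -vtd vtopt0
loadopt -disk aix71base2 -vtd vtopt0

The lsvopt command will show which image, if any, is currently loaded in each virtual optical device.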

FBO is a very useful way to provide CD/DVD images to client LPARs without having to move DVD drives around.

Virtual Tape

A tape device attached to a VIO server can be virtualized and assigned to VIO clients. This is done by assigning the physical tape drive to the VIO server partition and then creating a vSCSI server adapter using the HMC to which any partition can connect. A vtTape device is defined on the VIO server using:

mkvdev -vdev rmt0 -vadapter vhost0

This creates vttape0 as a virtual version of rmt0 and assigns it to vhost0. Most new servers don't have internal tape drives, so vtTape is not as commonly used as FBO.
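
If the drive later needs to be given to a different client, the mapping can be removed and recreated against that client's vhost (vhost1 below is just an example):

rmvdev -vtd vttape0
mkvdev -vdev rmt0 -vadapter vhost1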

Shared Storage Pools

SSPs became available at PowerVM 2.2.0.11 SP1 and refer to a pool of SAN storage devices that can be shared among multiple VIO servers and their client LPARs. An SSP is based on a cluster of VIO servers acting as cluster nodes, and the cluster includes a distributed data object repository and a global namespace for keeping track of what's happening. The SSP concept takes advantage of the Cluster Aware AIX (CAA) feature in AIX along with RSCT to form a cluster of VIO servers. It's a server-based approach to storage virtualization that simplifies the aggregation of large numbers of disks across multiple VIO servers, simplifies the administration of that storage and helps improve storage utilization by using thin provisioning. Thin provisioning means that the device is not fully backed by physical storage if the data block is not in actual use – physical storage gets assigned when it's actually required, not at definition time. To protect against over-allocating space, the VIO server posts a warning message when the pool has less than 75 percent free space. Features provided by SSPs include thick provisioning, thin provisioning and snapshots.

When using SSPs, storage is provided by the VIO servers through logical units that get assigned to client LPARs. A logical unit is a file-backed storage device that exists in the cluster filesystem in the SSP, and it can't be resized after creation. It appears as a vSCSI device in the client LPAR, so vSCSI is a prereq for SSPs. VFC doesn't support virtualization capabilities based on the SSP, such as thin provisioning. Additionally, all VIO servers in the cluster must have a network connection to each other. All cluster nodes can see all the disks, so the disks must be zoned to all cluster nodes that are part of the SSP. The poold daemon handles group services and the vio_daemon monitors the health of the cluster nodes and the pool, as well as pool capacity.
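
As a rough sketch of the workflow (the cluster, pool, node and disk names below are purely illustrative), the cluster and pool are created on the first VIO server, additional nodes are added, and logical units are then carved out of the pool and mapped to a client's vhost:

cluster -create -clustername sspcl1 -repopvs hdisk10 -spname ssp1 -sppvs hdisk11 hdisk12 -hostname vios1
cluster -addnode -clustername sspcl1 -hostname vios2
mkbdsp -clustername sspcl1 -sp ssp1 20G -bd vm1_lu1 -vadapter vhost0

The mkbdsp example creates a thin-provisioned 20 GB logical unit named vm1_lu1 and maps it to vhost0; adding the -thick flag would request thick provisioning instead.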

Many Ways

As you can see, you have multiple ways to support storage with the VIO servers. Apart from dedicated adapters, we also have the option of vSCSI, VFC/NPIV and SSPs for our SAN-provided disk. And we have the capability to virtualize DVD drives as well as tape drives. Finally, we have the option to provide ISO images to an LPAR by virtualizing them using FBO so they appear to the client LPAR as if they were a DVD drive. The options are very flexible and most of them can be used at the same time if so desired.

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.


