Follow These Steps for Converting LUNs from vSCSI to NPIV


Editor’s note: This is the second of two articles on converting logical unit numbers (LUNs) from virtual SCSI to N-Port ID Virtualization (NPIV).

Part 1 of this series, “Migration to NPIV Can Improve Security and Performance,” introduced the three major steps in migrating LUNs from vSCSI to NPIV: identification, setup and mapping, and execution and cleanup. Having walked through how to identify which vSCSI LUNs can be converted to NPIV, we’ll now conclude with how to set up and execute the migration.

2. Setup and Mapping

Step 2 entails creating the virtual adapters and providing their worldwide port names (WWPNs) to the storage-area network (SAN) administrator so the disks can be mapped. Once the WWPNs are known, the SAN administrator can pre-stage (not activate) the zones and disk-subsystem mapping prior to the execution stage.

Adapters can be created while the servers are running using dynamic logical partitioning (DLPAR). When using DLPAR, save the running configuration to a profile to preserve the newly created WWPNs. Do so in the Hardware Management Console (HMC) using: Configuration->Save Current Configuration. I’d suggest creating a new profile so you can keep a change history of profiles.
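
If you prefer the HMC command line, the same save can be scripted with mksyscfg. A minimal sketch, assuming a managed system named Server-8286-42A, a client LPAR named lpar01 and a new profile named npiv_migration (substitute your own names):

# Save the running configuration of lpar01 to a new profile
mksyscfg -r prof -m Server-8286-42A -o save -p lpar01 -n npiv_migration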

To start, create the client virtual Fibre Channel (FC) adapter using the HMC on the client LPAR. From the HMC, select the partition you’ll be migrating the LUNs onto, then select Dynamic Logical Partitioning->Virtual Adapters. A list of current virtual adapters will pop up. In the top-left corner, choose Actions->Create Virtual Adapter->Fibre Channel Adapter. Another pop-up will appear. Fill in the adapter number, the server partition (that is, which Virtual I/O (VIO) server holds the matching server adapter), and the server adapter number. Repeat this process for both VIO servers and for as many client adapters as you need.
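
If you’d rather script the DLPAR add, chhwres can create the same client adapter from the HMC command line. A rough sketch, reusing the placeholder names above and assuming client slot 11 paired with slot 11 on a VIO server named vio1:

# Dynamically add a client virtual FC adapter in slot 11, paired with slot 11 on vio1
chhwres -r virtualio -m Server-8286-42A -o a -p lpar01 --rsubtype fc \
  -s 11 -a "adapter_type=client,remote_lpar_name=vio1,remote_slot_num=11"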

Next, create the server virtual FC adapters. The steps are nearly the same, except you’ll perform this task on the VIO servers. Be sure the adapter numbers are properly aligned (a command-line sketch follows the list below). Note the following:

  • Plan and follow an adapter-numbering scheme to allow for easier identification down the road. I like to start at 11 for FC adapters with odd-numbered adapters belonging to VIO1 and even-numbered adapters belonging to VIO2.
  • Make sure your VIO servers have an adequate number of virtual adapter slots. Create a numbering scheme for the server FC adapters based on the number of client LPARs in your environment.
  • Add to your existing spreadsheet or create a new one to keep track of client-to-server mappings for your LPARs. This documentation will help in the future.
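
As referenced above, the server-side adapter can be added with chhwres as well. A sketch only, assuming the VIO server vio1 and the placeholder slot numbers used earlier (adjust for your own numbering scheme):

# Dynamically add the matching server virtual FC adapter on vio1
chhwres -r virtualio -m Server-8286-42A -o a -p vio1 --rsubtype fc \
  -s 11 -a "adapter_type=server,remote_lpar_name=lpar01,remote_slot_num=11"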

Once the adapters are created, retrieve the virtual WWPNs. Do so by going back into Dynamic Logical Partitioning->Virtual Adapters on the client LPAR and clicking on the adapter ID. Two WWPNs will appear (a command-line alternative follows the list):

  1. The primary WWPN, used to initially present the disks, and
  2. The Live Partition Mobility (LPM) WWPN, used during an LPM event
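
The WWPN pairs can also be pulled from the HMC command line, which is handy when you’re gathering them for several adapters at once. A sketch using the placeholder names from earlier:

# List the client virtual FC adapters for lpar01, showing both WWPNs per adapter
lshwres -r virtualio --rsubtype fc --level lpar -m Server-8286-42A \
  --filter "lpar_names=lpar01" -F slot_num,wwpns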

You can send the WWPNs to your SAN administrator for reference, but they won’t be visible on the network until you map the virtual adapter to the physical adapter on the VIO server. To do this, first list the virtual adapters created on the VIO server by using lsmap -all -npiv. Next, identify your vfchost adapters by matching the ClntID with the LPAR ID on the HMC. Having determined how you’ll map the virtual adapters to the physical ports, use the vfcmap command to do the mapping. For example:

vfcmap -vadapter vfchostx -fcp fcsx
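
Before running vfcmap, it’s worth confirming which physical ports are NPIV-capable, and afterward checking that the mapping took. A brief sketch on the VIO server, with vfchost0 and fcs0 standing in for your own device names:

lsnports                          # fabric=1 means the port and its switch support NPIV
vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -npiv -vadapter vfchost0    # confirm the physical port is now attached to the vfchost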

You must run vfcmap for each of the server adapters created. Once the adapter creation and mapping are done, your SAN administrator should be able to see the primary WWPNs on the SAN. If not, or if you use LPM and need the secondary WWPN to be logged in, use the chnportlogin command on the HMC to activate the FC ports.
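
The port login can be driven from the HMC command line. A sketch, assuming the client partition has ID 3 on the same placeholder managed system:

# Log in all inactive WWPNs for partition ID 3 so they appear on the fabric
chnportlogin -o login -m Server-8286-42A --id 3
# Log them back out once zoning and host mapping are complete
chnportlogin -o logout -m Server-8286-42A --id 3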

Your SAN administrator should now be able to see the active WWPNs on the network and can create a zone for the NPIV configuration. The administrator, however, won’t be able to see the WWPNs at the storage subsystem, because the pre-staged zone isn’t active yet, so the host definitions will need to be added manually.

3. Execution and Cleanup

If all of the elements have been identified and set up correctly, the execution step should simply require a reboot or two, so plan accordingly. First, make sure you have multiple backups of the environment prior to execution. This will ensure a current copy of your data is available, if needed.

With successful backups on hand, shut down your client partition. Because you’ll be removing and adding disks, shutting down eliminates the possibility of any outstanding I/Os being lost or corrupted. Once the partition shuts down, use rmvdev on the VIO server to remove the disks being migrated. This won’t delete any information stored on those disks; it simply removes their configuration from the VIO server. Now the SAN administrator should remove the disks from the current host mapping and activate the pre-staged zone and mapping created in step 2.
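
A sketch of the removal on the VIO server, assuming the virtual target device is backed by hdisk4 (run lsmap -all first to confirm which backing devices belong to the client):

# Remove the virtual target device backed by hdisk4; the data on the LUN is untouched
rmvdev -vdev hdisk4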

After the zone set is activated, power on the server, run cfgmgr and see your vSCSI LUNs presented as NPIV LUNs. Install the multipath I/O host software for the disk subsystem and clean up the old hdisk definitions.
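
On the client, the final pass looks roughly like the following, with hdisk1 standing in for an old vSCSI disk:

cfgmgr              # discover the LUNs now presented over the virtual FC adapters
lsdev -Cc disk      # identify the leftover vSCSI hdisk definitions
rmdev -dl hdisk1    # delete a stale vSCSI hdisk definition (repeat as needed)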

As you can see, most of the process is spent in the identification and setup steps. So take your time and document well.

Andrew Goade is an architect for Forsythe Technology Inc. He can be reached at agoade@forsythe.com.

