Growing Logical Units in Shared Storage Pools With Virtual I/O Server 2.2.4.0


The latest release of Virtual I/O Server (VIOS), version 2.2.4.0, delivers several new capabilities for clients running IBM’s PowerVM Shared Storage Pool (SSP) feature. In this article, I’ll briefly discuss one of the new features that I believe many PowerVM administrators will find very useful in the day-to-day management of their environments.

Quite often it’s necessary to increase the size of a disk that has been deployed to an AIX system. For many years now, it has been possible to increase the size of an assigned storage-area network (SAN) Logical Unit (LU) and have AIX recognize that the disk size has changed and modify the volume group accordingly. Until now, however, it has not been possible to increase the size of an assigned LU in an SSP. Administrators were forced to create a new, larger LU and then migrate their data from the existing smaller LU to the new one. This was time-consuming and tedious, particularly if there were many disks/LUs that needed to grow. VIOS 2.2.4.0 allows administrators to increase (grow) the size of an LU with one simple command on the VIOS. The new -resize option of the lu command allows you to specify the new size of the LU, in either gigabytes (G) or megabytes (M).
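The general form of the command is shown below. The LU name here (app1-datavg) is purely illustrative; note that the same target size can be expressed in either unit.

$ lu -resize -lu app1-datavg -size 50G
$ lu -resize -lu app1-datavg -size 51200M	<< 51200M is equivalent to 50G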

A Real-Life Example

It’s always easier to discuss this kind of enhancement with a real example, so let’s take a look at one.

In the following example, I want to increase the size of an existing SSP LU from 10G to 20G. The LU name is RR-AIX3-datavg. Using the new ‘lu -resize’ command, I can achieve this easily.

$ lu -resize -lu RR-AIX3-datavg -size 20G
Logical unit RR-AIX3-datavg with udid 'ad67212f2cbb000833f5dbd2822d5ae7' has been successfully changed.

This operation completed immediately, and I verified the new size by running the ‘lu -list’ command and looking at the SIZE column for RR-AIX3-datavg, which showed 20480MB (20GB).

$ lu -list
POOL_NAME: rr-sp
TIER_NAME: Flash
LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
RR-AIX3-rootvg          9728        6962        762687da22dd6e95e2f4dfaf2a480507

POOL_NAME: rr-sp
TIER_NAME: SYSTEM
LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
RR-AIX2-rootvg          10240       7463        770fd117712ff5f4425c36c5c06ee27e
RR-AIX3-datavg          20480       20481       ad67212f2cbb000833f5dbd2822d5ae7
rr-aix-rootvg           9728        4556        17f91c21c21ca9939008a28bdf7f1df2

The next step is to ask AIX to re-examine the disk (associated with the LU RR-AIX3-datavg) and modify the volume group to make use of the additional storage space. Before doing so, I confirmed with getconf that the disk size had increased from 10G to 20G.

# lspv
hdisk0          00f98abb000284b1                    rootvg          active
hdisk1          00f965b23b2b9699                    datavg          active

# getconf DISK_SIZE /dev/hdisk1
10240					<< Old size 10G

# getconf DISK_SIZE /dev/hdisk1
20480					<< New size 20G after ‘lu -resize’

I also recorded the total number of physical partitions (PPs) in the volume group, as I expected this number to increase afterward.

# lsvg datavg
VOLUME GROUP:       datavg                   VG IDENTIFIER:  00f965b200004c00000001503b2b96c8
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      635 (10160 megabytes)
MAX LVs:            256                      FREE PPs:       534 (8544 megabytes)
LVs:                2                        USED PPs:       101 (1616 megabytes)
OPEN LVs:           2                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no
FS SYNC OPTION:     no

I ran the chvg -g command against the data volume group (datavg), which examined all of the disks in the volume group to see if any had grown in size; for any that had, it would attempt to add additional PPs to the PV (physical volume/hdisk).

# chvg -g datavg

As expected, the total number of PPs (and free PPs) increased, providing me with more storage space in the datavg volume group. All of this was completed without an outage to my running system!

# lsvg datavg
VOLUME GROUP:       datavg                   VG IDENTIFIER:  00f965b200004c00000001503b2b96c8
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1275 (20400 megabytes)
MAX LVs:            256                      FREE PPs:       1174 (18784 megabytes)
LVs:                2                        USED PPs:       101 (1616 megabytes)
OPEN LVs:           2                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no
FS SYNC OPTION:     no

Same for rootvg

I can do the same for my root volume group (rootvg). The process is exactly the same, except this time I’ll grow the LU from 9.5G (9728MB) to 20G. The LU name is RR-AIX3-rootvg. (You’ll notice in the listings below that RR-AIX3-datavg now shows 30720MB; I had since grown it again, to 30G.)

# lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f98abb00004c0000000000000294a0
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      607 (9712 megabytes)
MAX LVs:            256                      FREE PPs:       284 (4544 megabytes)
LVs:                12                       USED PPs:       323 (5168 megabytes)
OPEN LVs:           11                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no
FS SYNC OPTION:     no

# getconf DISK_SIZE /dev/hdisk0
9728

$ lu -list | head -3 | tail -1 ; lu -list | grep AIX3
LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
RR-AIX3-datavg          30720       30598       ad67212f2cbb000833f5dbd2822d5ae7
RR-AIX3-rootvg          9728        6960        762687da22dd6e95e2f4dfaf2a480507

$ lu -resize -lu RR-AIX3-rootvg -size 20G
Logical unit RR-AIX3-rootvg with udid '762687da22dd6e95e2f4dfaf2a480507' has been successfully changed.

$ lu -list | head -3 | tail -1 ; lu -list | grep AIX3
LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
RR-AIX3-datavg          30720       30598       ad67212f2cbb000833f5dbd2822d5ae7
RR-AIX3-rootvg          20480       17713       762687da22dd6e95e2f4dfaf2a480507

# chvg -g rootvg
0516-1164 chvg: Volume group rootvg changed.  With given characteristics rootvg
        can include up to 16 physical volumes with 2032 physical partitions each.

# lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f98abb00004c0000000000000294a0
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1279 (20464 megabytes)
MAX LVs:            256                      FREE PPs:       956 (15296 megabytes)
LVs:                12                       USED PPs:       323 (5168 megabytes)
OPEN LVs:           11                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     2032                     MAX PVs:        16
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no
FS SYNC OPTION:     no

# getconf DISK_SIZE /dev/hdisk0
20480

You cannot, however, decrease the size of an LU. When I tried to reduce my LU from 30G to 10G, I received a clear message stating that fact. In this case, I would create a new, smaller (10G) LU and migrate my data to it; then I would unmap and remove the larger (30G) LU. A rough outline of that process follows the output below.

$ lu -resize -lu RR-AIX3-datavg -size 10G
Cannot reduce LU size.
'RR-AIX3-datavg' with udid 'ad67212f2cbb000833f5dbd2822d5ae7'
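For completeness, here’s a sketch of that workaround. Treat the names as assumptions for illustration only (the new LU name, the vhost adapter and the hdisk numbering will differ in your environment), and note that migratepv needs enough free space on the target disk.

On the VIOS, create a new, smaller LU and map it to the client:

$ lu -create -lu RR-AIX3-datavg-new -size 10G -vadapter vhost0

On the AIX client, discover the new disk, migrate the data, then remove the old disk from the volume group:

# cfgmgr
# extendvg datavg hdisk2
# migratepv hdisk1 hdisk2
# reducevg datavg hdisk1
# rmdev -dl hdisk1

Finally, back on the VIOS, unmap and remove the old 30G LU:

$ lu -unmap -lu RR-AIX3-datavg
$ lu -remove -lu RR-AIX3-datavg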

Note: This feature is fully supported with AIX client partitions, but not with IBM i partitions. It may work with Linux, but that will depend on the logical volume manager and file system being used by the client partition.
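As a rough illustration only (this is not covered by the VIOS documentation, and the device and volume names are assumptions), on a Linux client using LVM the equivalent steps would look something like this:

# echo 1 > /sys/block/sdb/device/rescan		<< ask the kernel to re-read the disk size
# pvresize /dev/sdb				<< grow the LVM physical volume
# lvextend -r -l +100%FREE /dev/datavg/datalv	<< grow the LV and resize its file system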

Another New Feature

There’s one other new feature in 2.2.4.0 that I just had to mention (not related to growing LUs). It’s the removal of the need to specify the cluster name on many of the SSP cluster commands. This is a minor change, but it does make managing an SSP environment a little easier. For example, previously if I wanted to check the status of my cluster nodes, I would have to specify the -clustername option, followed by the name of the cluster, with the ‘cluster -status’ command:

$ cluster -status
Option "-status" also requires option "-clustername".

$ cluster -list
CLUSTER_NAME:    voyager
CLUSTER_ID:      a7e6c0ea3d4811e4bd8740f2e9d34f38

$ cluster -status -clustername voyager
Cluster Name         State
voyager              OK

    Node Name        MTM           Partition Num  State  Pool State
    s824vio1         8286-42A02214F58V         1  OK     OK
    s824vio2         8286-42A02214F58V         2  OK     OK

This is no longer necessary. I can simply type ‘cluster -status’ to display the information, which is much faster and more convenient. This change makes sense, given that each set of VIOS nodes belongs to only one cluster, with one pool. It has been applied to many of the SSP cluster commands, such as lu, failgrp, tier, pv, alert and snapshot.

$ cluster -status
Cluster Name         State
voyager              OK

    Node Name        MTM           Partition Num  State  Pool State
    s824vio1         8286-42A02214F58V         1  OK     OK
    s824vio2         8286-42A02214F58V         2  OK     OK

SSP Tiers

A major enhancement introduced with VIOS 2.2.4.0 is SSP Tiers. I hope to cover this topic in a separate article, as there is much to talk about with this new feature. In short, SSP Tiers provides more control over virtual storage allocation within the pool. Up to 10 tiers can be defined, each comprising different (or the same) types of SAN storage. You could use tiers to isolate different types of disk; for example, you could create a tier for high-performing disk, like Flash, and another for lesser-performing non-Flash SAN disk. Or you might create a tier for development and another for test. It’s also possible to dynamically move LUs from one tier to another, giving you storage mobility.
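As a taste of the syntax, the sketch below creates a tier from two physical volumes and then moves an LU into it. The tier name and hdisk numbers are illustrative assumptions, and you should check the exact flags against the VIOS 2.2.4.0 documentation:

$ tier -create -tier production hdisk10 hdisk11
$ lu -move -lu RR-AIX3-datavg -dsttier production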

We’ve Come a Long Way

SSPs have come a long way over the last five years of development. New features like these will make them more attractive to PowerVM administrators, as they continue to make the product easier and friendlier to use and manage.

Learn more about the latest features for PowerVM in the Announcement Letter and on developerWorks.

Chris Gibson is an AIX and PowerVM specialist located in Melbourne, Australia. He is an IBM Champion for Power Systems, IBM CATE (Power Systems and AIX), and a co-author of several IBM Redbooks publications.


