
AIX Flash Cache


The first step is to ensure the correct LPPs are installed. These are bos.pfcdd and cache.mgt, and both can be found on the AIX 7.2 base install DVD. After installation you should see something like:

lslpp -l | grep Cache
  bos.pfcdd.rte           7.2.1.0  COMMITTED  Power Flash Cache
  cache.mgt.rte           7.2.1.0  COMMITTED  AIX SSD Cache Device
  bos.pfcdd.rte           7.2.1.0  COMMITTED  Power Flash Cache
  cache.mgt.rte           7.2.1.0  COMMITTED  AIX SSD Cache Device

lslpp -l | grep lash
  bos.pfcdd.rte           7.2.1.0  COMMITTED  Power Flash Cache
                          7.2.1.0  COMMITTED  Common CAPI Flash Adapter
                          7.2.0.0  COMMITTED  CAPI Flash Adapter Diagnostics
                          7.2.0.0  COMMITTED  CAPI Flash Adapter Device
  devices.common.IBM.cflash.rte
                          7.2.1.0  COMMITTED  Common CAPI Flash Device
  bos.pfcdd.rte           7.2.1.0  COMMITTED  Power Flash Cache
                          7.2.1.0  COMMITTED  Common CAPI Flash Adapter
  devices.common.IBM.cflash.rte
                          7.2.0.0  COMMITTED  Common CAPI Flash Device

Don't be alarmed by the odd-looking output: lslpp -l lists each fileset twice (once under /usr/lib/objrepos and once under /etc/objrepos), and because grep matches line by line, fileset names that don't contain the search string are dropped while their wrapped description lines still appear.
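If either fileset turns out to be missing, both can be installed from the base media with installp. This is a sketch only: it assumes the install DVD is available as /dev/cd0, so adjust -d to point at your actual install source (an NFS mount, NIM resource, etc.):

```shell
# Install both the caching engine (bos.pfcdd) and the management
# command (cache.mgt) from the AIX 7.2 base install media.
# -a apply, -X expand filesystems as needed, -Y accept licenses,
# -d install device/directory (assumed to be /dev/cd0 here)
installp -aXY -d /dev/cd0 bos.pfcdd cache.mgt
```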

If you forget to install bos.pfcdd you will have the cache_mgt command but no caching engine, so make sure both filesets are installed. The next step is to set up the cache pool of SSDs. Our four disks were hdisk2, hdisk3, hdisk8 and hdisk9, and they were not in any volume group.

cache_mgt pool create -d hdisk2,hdisk3,hdisk8,hdisk9 -p cmpool0

Behind the covers: This creates a volume group called cmpool0 as well as a pool called cmpool0 consisting of 2888 PPs or 2957312MB (all the PPs across the four SSDs). Note: don't create the volume group directly yourself; always use cache_mgt.
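The two figures above are consistent with each other. A quick sanity check (the 1024 MB PP size is not stated in the output, it is implied by dividing the pool size by the PP count):

```python
# Figures reported by cache_mgt/lsvg for the cmpool0 pool (from the text above)
total_pps = 2888        # 722 PPs on each of the 4 SSDs
pool_mb   = 2957312     # total pool size in MB

# PP size implied by the two reported figures
pp_size_mb = pool_mb // total_pps
print(pp_size_mb)              # → 1024
print(722 * 4 == total_pps)    # → True: the 4 SSDs each contribute 722 PPs
```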

The next step is to create the cache partition:

cache_mgt partition create -p cmpool0 -s 2957308M -P cm1part1

lsvg -l cmpool0
cmpool0:
LV NAME          TYPE   LPs    PPs    PVs  LV STATE      MOUNT POINT
cm1part1         jfs    2888   2888   4    closed/syncd  N/A

The partition it creates is a JFS partition, not JFS2. This does not matter as (per Nigel) the type is just a property string and is not used to control how the disks are actually accessed.

lsvg -p cmpool0
cmpool0:
PV_NAME        PV STATE    TOTAL PPs   FREE PPs   FREE DISTRIBUTION
hdisk2         active      722         0          00..00..00..00..00
hdisk3         active      722         0          00..00..00..00..00
hdisk8         active      722         0          00..00..00..00..00
hdisk9         active      722         0          00..00..00..00..00

Above we see how the PPs are spread amongst the SSDs that were assigned.

Below is the list of disks that could be candidates for the cache pool; cmpool0 is the pool we are using.

cache_mgt device list -l
hdisk0,rootvg
hdisk1,nimvg
hdisk2,cmpool0
hdisk3,cmpool0
hdisk4,ssdvg
hdisk5,ssdvg
hdisk6,rootvg
hdisk7,ssdvg
hdisk8,cmpool0
hdisk9,cmpool0
hdisk10,ssdvg
hdisk11,ssdvg

You can also check the pool allocations as follows:

cache_mgt pool list -l
cmpool0,hdisk2,hdisk3,hdisk8,hdisk9
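The device list output is easy to post-process if you manage many disks. As a small illustration (a hypothetical helper, using the listing above as sample input), grouping the disk,volume-group pairs by volume group recovers exactly the pool membership that cache_mgt pool list reports:

```python
# Group "cache_mgt device list -l" output (one "disk,vg" pair per line)
# by volume group, so pool membership is visible at a glance.
from collections import defaultdict

listing = """\
hdisk0,rootvg
hdisk1,nimvg
hdisk2,cmpool0
hdisk3,cmpool0
hdisk4,ssdvg
hdisk5,ssdvg
hdisk6,rootvg
hdisk7,ssdvg
hdisk8,cmpool0
hdisk9,cmpool0
hdisk10,ssdvg
hdisk11,ssdvg"""

by_vg = defaultdict(list)
for line in listing.splitlines():
    disk, vg = line.split(",")
    by_vg[vg].append(disk)

print(by_vg["cmpool0"])   # → ['hdisk2', 'hdisk3', 'hdisk8', 'hdisk9']
```

The cmpool0 group matches the cache_mgt pool list -l line above.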

At this point we can assign hdisks to be targets for caching. For my initial test I used just four disks, assigning hdisk106 through hdisk109 as the source disks to be cached:

cache_mgt partition assign -t hdisk106 -P cm1part1
cache_mgt partition assign -t hdisk107 -P cm1part1
cache_mgt partition assign -t hdisk108 -P cm1part1
cache_mgt partition assign -t hdisk109 -P cm1part1
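Assigning a target does not by itself tell you whether caching is active or effective. A sketch of the typical follow-on steps, based on the cache_mgt subcommands documented for AIX 7.2 (check man cache_mgt on your system, as the exact flags here are assumptions rather than output from this test):

```shell
# Start caching on each assigned target disk
cache_mgt cache start -t hdisk106

# List which target disks currently have caching assigned/started
cache_mgt cache list -l

# Turn on statistics collection, then display per-target cache
# statistics (read hit ratios, populated vs. total cache, etc.)
cache_mgt monitor start
cache_mgt monitor get -h -s
```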

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.


