
Implementing a Simple Spectrum Scale/GPFS Single Node Cluster

GPFS Single Node Cluster

In April 2014, we looked at implementing a three-node GPFS (IBM General Parallel File System) cluster on GPFS 3.5. Since then, GPFS has been through many changes, including a name change to IBM Spectrum Scale. I will refer to it as SCALE throughout this article. SCALE has now added support for additional protocols and integrations such as AD (Active Directory) and Samba. In this article we will cover installing SCALE 4.3.2 on a single node on AIX 7.2.1. Installing and configuring Samba within that node to work with AD and SCALE will be the subject of another article. In general, SCALE is set up as at least a two-node and preferably a multi-node cluster. In this case we are using a single node to get around many of the JFS2 restrictions on data size, queuing and performance.

Cluster Description

For this installation there will be one AIX node that acts as the SCALE system as well as the Samba server. Although this is a cluster of one, it can easily be extended to add additional servers and clients. The intent is to migrate several 20TB JFS2 filesystems into SCALE to provide a single high-performing namespace and to allow users to update the files using Samba from their desktops. This makes the changeover transparent to the users. For this test we will be using hdisk4 through hdisk40. The LPAR (node) being used is gpfsnode1.mydomain.com, and this is a bare-metal system with no VIO servers. This could easily be implemented in a virtualized environment.

Implementation

The first step is to install AIX and to make sure we have a clean AIX installation. AIX was installed at 7.2 TL01 SP2; running ‘oslevel -s’ shows 7200-01-02-1717. Additionally, lppchk -v and various other commands were run to make sure there were no missing or inconsistent filesets.
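
For reference, the checks look like this; lppchk -v prints nothing when all filesets are consistent:

oslevel -s
7200-01-02-1717
lppchk -v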

The fibre adapter settings were changed to ensure there is no queuing on those adapters:

chdev -l fcs0 -a max_xfer_size=0x200000 -a num_cmd_elems=2048 -P
chdev -l fcs1 -a max_xfer_size=0x200000 -a num_cmd_elems=2048 -P
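
Because the changes were made with -P, they only take effect after the adapters are reconfigured or the system is rebooted. As a quick sanity check, assuming fcs0 and fcs1 as above, you can confirm the values recorded for the adapters with lsattr:

lsattr -El fcs0 | egrep "max_xfer_size|num_cmd_elems"
lsattr -El fcs1 | egrep "max_xfer_size|num_cmd_elems"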

You should also check that the hdisks have their queue_depth and reserve_policy set and that they have a PVID on them; otherwise, you need to set the values using chdev:

chdev -l hdisk4 -a pv=yes
chdev -l hdisk4 -a queue_depth=64 -a reserve_policy=no_reserve -P

Perform this on every hdisk that will be in the cluster (in our case, hdisk4 through hdisk40). The changes to the fibre adapter and hdisk settings depend on what your storage supplier can support, so you may need to check with them.
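
Rather than running the chdev commands 37 times by hand, a small ksh loop can apply the settings across the whole range of disks. This is just a sketch, assuming hdisk4 through hdisk40 as above:

i=4
while [ $i -le 40 ]
do
	chdev -l hdisk$i -a pv=yes
	chdev -l hdisk$i -a queue_depth=64 -a reserve_policy=no_reserve -P
	i=$((i+1))
done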

In a multi-node system, the hdisks occasionally show up in a different order, so it may be helpful to rename them on the nodes so that they all match. You can do this using rendev. Don’t use rendev to rename the hdisks to anything other than hdisk names – SCALE does not recognize the disk type when you run mmcrnsd. In our case we only have one system, so out-of-order disks are not an issue.
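
As an example, if the LUN that is hdisk4 on gpfsnode1 showed up as hdisk7 on a second node, you could rename it there so the names match (hypothetical device names):

rendev -l hdisk7 -n hdisk4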

You should also check the following settings (see the example below):

/etc/security/limits
	Look at fsize and nofiles. I usually set fsize=-1 and nofiles=20000 (or nofiles=-1), which allows for large file sizes and lots of open files.
/etc/environment
	Add /usr/lpp/mmfs/bin to the end of PATH.
	Add WCOLL=/usr/local/etc/gpfs-nodes.txt (WCOLL points dsh at the default node list file).
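
As a rough sketch, the relevant parts of the two files might look like this after editing (the default stanza in /etc/security/limits will contain other attributes as well, and the PATH shown is abbreviated):

/etc/security/limits:

default:
	fsize = -1
	nofiles = 20000

/etc/environment:

PATH=/usr/bin:/etc:/usr/sbin:/sbin:/usr/lpp/mmfs/bin
WCOLL=/usr/local/etc/gpfs-nodes.txt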

I normally have a /usr/local filesystem that I use for customizations. There is an etc directory in it that I use for configuration files, so I put the SCALE configuration files in there. On gpfsnode1, create a file called /usr/local/etc/gpfs-nodes.txt and put in it a list (one per line) of the nodes in the cluster. At this point, you must decide whether or not you’ll use fully qualified names. I used fully qualified names, which I also used for the hostname. The IP address and hostname should be in /etc/hosts and should be resolvable before you start.

vi /usr/local/etc/gpfs-nodes.txt
gpfsnode1.mydomain.com
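
The /etc/hosts entry mentioned above and a quick resolution check might look like the following (the address is hypothetical):

grep gpfsnode1 /etc/hosts
10.1.1.10	gpfsnode1.mydomain.com	gpfsnode1

host gpfsnode1.mydomain.com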

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.


