
Implementing a Simple Spectrum Scale/GPFS Single Node Cluster

GPFS Single Node Cluster

Final Steps

Final steps include creating the filesystem.

Our filesystem will be gpfs0, mounted at /gpfsfiles. We are using a blocksize of 512K with no replication: -M 2 and -R 2 set the maximum number of metadata and data replicas to two, while -m 1 and -r 1 mean we actually keep only one copy of each.
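The mmcrfs command below points at an NSD stanza file with -F. A minimal sketch of what such a stanza file might contain, using hypothetical NSD names and device paths modeled on the disks in this cluster (your names, devices and failure groups will differ):

```
%nsd:
  nsd=nsdhdisk4
  device=/dev/hdisk4
  usage=dataAndMetadata
  failureGroup=-1
  pool=system

%nsd:
  nsd=nsdhdisk5
  device=/dev/hdisk5
  usage=dataAndMetadata
  failureGroup=-1
  pool=system
```

Each %nsd stanza maps one NSD name to a device and says whether it holds data, metadata or both; the same file is typically used earlier with mmcrnsd when the NSDs are defined.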

mmcrfs gpfs0 -F /usr/local/etc/gpfs-nsdstanza.txt -B 512K -m1 -M2 -r 1 -R 2 -T /gpfsfiles
You should see a number of lines similar to:
The following disks of gpfs0 will be formatted on node
    nsdhdisk4: size 256000 MB
    nsdhdisk5: size 256000 MB
    nsdhdisk40: size 256000 MB
Formatting file system ...
Disks up to size 8.5 TB can be added to storage pool system.
Creating Inode File
  80 % complete on Tue Sep 26 12:53:12 2017
100 % complete on Tue Sep 26 12:53:14 2017
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
Completed creation of file system /dev/gpfs0.

Now you can mount the filesystem using mmmount:

#mmmount all 
Tue Sep 26 12:54:14 CDT 2017: mmmount: Mounting file systems ...

#df -g /gpfsfiles
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/gpfs0      6000.00   5996.32    1%     4038     1% /gpfsfiles
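Once the filesystem is mounted, output like the df -g listing above is easy to script against. A small sketch of a usage check, parsing df-style output with awk (the check_usage helper name, the threshold and the sample data are assumptions for illustration):

```shell
# Hypothetical helper: warn when a filesystem crosses a usage threshold.
# $1 = df -g style output (header line plus one data line), $2 = threshold percent.
check_usage() {
  echo "$1" | awk -v limit="$2" 'NR==2 {
    gsub(/%/, "", $4)                     # strip the % sign from the %Used column
    if ($4 + 0 >= limit)
      print "WARN: " $7 " at " $4 "%"
    else
      print "OK: " $7 " at " $4 "%"
  }'
}

# Sample data copied from the df -g output shown above.
sample='Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/gpfs0      6000.00   5996.32    1%     4038     1% /gpfsfiles'

check_usage "$sample" 90
```

On the sample above this prints "OK: /gpfsfiles at 1%"; in a cron job you would feed it live `df -g /gpfsfiles` output instead.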

#mmdf gpfs0
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 8.5 TB)
nsdhdisk6           262144000       -1 yes      yes       261980672 (100%)          1264 ( 0%)
nsdhdisk7           262144000       -1 yes      yes       261982720 (100%)          1248 ( 0%)
nsdhdisk40          262144000       -1 yes      yes       261982720 (100%)          1248 ( 0%)

You will also see a pool total showing the total storage for that filesystem, along with a section detailing the inodes in use, free and allocated, and the maximum number of inodes the filesystem can have.
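The mmdf listing can also be scripted against, for example to total the free space across the NSDs. A sketch using awk on the per-disk lines (the total_free_kb helper name and the sample lines are illustrative; the column layout follows the mmdf output above):

```shell
# Hypothetical helper: sum the "free KB in full blocks" column (field 6)
# from the per-disk lines of mmdf output.
total_free_kb() {
  awk '/^nsd/ { total += $6 } END { print total }'
}

# Sample lines copied from the mmdf output shown above.
sample='nsdhdisk6           262144000       -1 yes      yes       261980672 (100%)          1264 ( 0%)
nsdhdisk7           262144000       -1 yes      yes       261982720 (100%)          1248 ( 0%)'

echo "$sample" | total_free_kb
```

For the two sample disks this prints 523963392 (KB); against a live cluster you would pipe `mmdf gpfs0` into the helper instead.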

Next steps:

At this point, you’re ready to test the cluster by adding data to 
the filesystem and testing access. You can also use the following 
commands to document your cluster:
mmgetstate -aLs
mmdf gpfs0

Your cluster is now ready to go. You can add additional nodes or keep it as a single-node cluster, depending on your needs.

A Simple Alternative

This is a fairly simple implementation for a specific use case, but it can serve as the foundation for your Spectrum Scale environment and allows a scale-up solution for a user who has huge filesystems and needs the latency reduction Spectrum Scale provides. If it becomes necessary to add additional nodes in the future, that is easy to do. The next steps in our case are to add the Samba and AD integration and to update some of the Spectrum Scale tunables, but as of right now we have a fully functional, well-performing Spectrum Scale cluster.

If you are having issues with JFS2 performance or scalability with respect to the size or number of files, or if you need to serve out files to multiple systems while maintaining performance, then I would recommend getting a trial of Spectrum Scale. IBM offers the ability to use an Intel VM it provides or to trial it on your own systems. Spectrum Scale is supported on multiple operating systems including Windows, Linux, Linux on Power, Linux on Z and AIX. All of these can be in the cluster at the same time as long as they meet the required levels, which can be found in the FAQ (frequently asked questions) document from IBM. The FAQ also documents the architectural limits of Spectrum Scale, which are significantly higher than those of JFS2 filesystems.

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.




