
Linux on IBM Z and LinuxONE: When to Use SCSI Versus DASD Storage

There are many aspects to consider when choosing between FICON-attached DASD and FCP-attached SCSI storage for Linux on IBM Z.


There are three types of storage devices available for Linux on IBM Z:
  1. Direct-access storage devices (DASD), implemented using Extended Count Key Data (ECKD) and the Fibre Connection (FICON) protocol. In this article, DASD is assumed to mean ECKD DASD.
  2. Small Computer Systems Interface (SCSI) devices using Fibre Channel Protocol (FCP)
  3. Fixed Block Architecture (FBA) devices
Linux on IBM Z supports DASD, SCSI and FBA devices, alone or in any combination. DASD is a data recording format introduced by IBM with the IBM System/360 and still used by current IBM mainframes. SCSI utilizes an underlying FCP connection. FCP defines a high-speed data transfer mechanism that can be used to connect workstations, mainframes, supercomputers, storage devices and displays. FCP addresses the need for very fast transfers of large volumes of information, and it is open because it provides one standard for networking, storage and data transfer.

When using DASD for Linux on IBM Z, there is emulation overhead because DASD is not native to Linux. There is no emulation overhead when using SCSI because it is native to Linux. FBA devices use fixed 512-byte blocks (FB-512), and their size is limited to 2 TB by Linux. For Linux on IBM Z, the only way to define an FBA device is in z/VM, by using the EDEVICE command to define a SCSI disk as an emulated 9336 model 20 FBA device. You can then map this emulated FBA device as a minidisk or dedicated device for a Linux guest under z/VM. EDEVICE I/O processing adds significant hypervisor path length compared to DASD or SCSI, resulting in higher CPU usage and latency.

The choice of storage for Linux on IBM Z is mainly between DASD and SCSI disks. We hardly use FBA devices for Linux on IBM Z because our test methodology is to maximize throughput, and FBA devices are a poor choice for workloads requiring maximum throughput: the Linux FBA driver is limited to 32 KB per I/O, which increases CP overhead because of the larger number of SSCHs and reduces throughput because of the added latency. Emulated FBA devices should not be a choice for Linux on IBM Z unless the workload has only moderate I/O demands.

FICON and FCP I/O Path

Figure 1 shows the I/O path for DASD and SCSI disks. The DASD I/O driver is driven by the channel command word (CCW). The CCW is the original I/O operation used for communication with the channel subsystem. The CCW contains a channel command, such as read, write or control, along with the address of the data area involved. The data is passed to the channel subsystem, and the channel subsystem communicates the status of the I/O back to the issuing application. When a channel communicates with an application in an asynchronous fashion, it is referred to as a channel interrupt. The CCW is processed by the System Assist Processor (SAP).
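As a point of reference, the s390-tools lscss command shows the Linux view of this path: each DASD appears as a CCW device behind a subchannel, together with the channel paths (CHPIDs) the channel subsystem can use to reach it. A minimal sketch follows; the device numbers and output line are illustrative only:

    # List subchannels, device types and channel paths as seen by Linux
    lscss
    # Illustrative output for an ECKD DASD reachable over four CHPIDs:
    # Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
    # 0.0.0200 0.0.0000  3390/0c 3990/e9 yes  f0  f0  ff   b0b1b2b3 00000000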

SCSI disk I/O is driven by the queued direct I/O (QDIO) device driver. The QDIO architecture, originally introduced with the OSA-Express, was later extended to HiperSockets and the FCP channels. The architecture itself was extended in HiperSockets to include a type of high-performance I/O interruption known as an adapter interruption, and the use of adapter interruptions has since been extended to the OSA-Express and FCP channels. With an OSA-Express card running in QDIO mode, I/O operations are initiated using a Signal Adapter (SIGA) instruction. The SIGA is still processed by the SAP, much as a CCW is. However, the SIGA effectively passes a pointer to the data, because the data already occupies internal storage. The advantages of this are as follows (a short sketch of bringing an FCP-attached LUN online in Linux follows the list):
  • A 20% improvement in performance versus non-QDIO mode
  • Reduced SAP utilization
  • Improved response time
  • Server cycle reduction
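To illustrate the FCP side, here is a minimal sketch of bringing an FCP-attached SCSI LUN online in Linux using the s390-tools chzdev command; the FCP device number, WWPN and LUN shown are placeholders for your own configuration:

    # Set the FCP device (the subchannel defined as CHPID type FCP) online
    chzdev -e zfcp-host 0.0.b100
    # Activate one LUN behind a target port (adjust the WWPN and LUN)
    chzdev -e zfcp-lun 0.0.b100:0x500507630708d3e3:0x4001400000000000
    # Verify the attached SCSI devices
    lszfcp -D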
Figure 1: FICON and FCP I/O path comparison

DASD Over FICON Characteristics

The following are some of the I/O processing characteristics when using DASD over FICON (a short Linux bring-online sketch follows the list):
  • In a DASD environment, the mapping of host subchannel to DASD is 1:1; multiple paths are handled in the channel subsystem
  • Serialization of I/Os per subchannel
  • I/O requests are queued in the Linux guest
  • Disk block size is 4 KB
  • Multipathing is handled in IBM Z firmware
  • Disk size is restricted to a 3390 Mod 54, or to extended address volumes (EAV) with a maximum size of 1 TB
  • High availability is provided by FICON path groups
  • Load balancing is provided by FICON path groups and PAVs
  • Using ECKD devices with HyperPAV can improve performance
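For comparison with the FCP setup shown earlier, bringing an ECKD DASD online in Linux is a matter of enabling its CCW device; a minimal sketch, where the device number 0.0.0200 is a placeholder:

    # Set the DASD's CCW device online (persistent setup would use chzdev or distro tooling)
    chccwdev -e 0.0.0200
    # List DASDs with their bus IDs, device nodes and block sizes
    lsdasd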

SCSI Over FCP Characteristics

The Linux zfcp device driver adds support for FCP-attached SCSI devices to Linux on IBM Z. FCP is an open, standards-based alternative and supplement to existing FICON connections. The following are important I/O characteristics of SCSI over FCP:

  • FCP is faster than FICON
  • Several I/Os can be issued against a LUN immediately (asynchronous I/O)
  • No ECKD emulation overhead
  • I/O queues occur in the FICON Express card or in the storage server
  • No disk size restrictions for SCSI disks
  • Disk blocks are 512 bytes
  • High availability is provided by Linux multipathing, type failover or multibus, managed by either the z/VM or Linux OS (a minimal multipath.conf sketch follows this list)
  • Load balancing is provided via Linux multipathing, type multibus
  • Multipathing is handled in the OS
  • Dynamic configuration without IOCDS changes
  • Additional configuration outside IBM Z is necessary:
    • Zoning in the SAN fabric
    • LUN masking on the storage server
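For the multipathing items above, a minimal /etc/multipath.conf sketch is shown below; the values are illustrative, not tuned recommendations, and distributions ship their own device-specific defaults:

    # /etc/multipath.conf (fragment)
    defaults {
        path_grouping_policy  multibus   # spread I/O over all paths (load balancing)
        # path_grouping_policy failover  # alternative: one active path, switch on error
    }

    # Apply the configuration and show the resulting path groups
    systemctl restart multipathd
    multipath -ll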

Linux and DASD Aren’t a Natural Fit

Linux expects blocks rather than tracks. Linux doesn't exploit DASD features, and DASD channel programs are cumbersome for Linux disk I/O. Linux I/O relies on 512-byte blocks rather than 4 K blocks. For Linux on IBM Z to use DASD, it is emulated, which means there is overhead and wasted disk space.

Traditional DASDs are 3390 devices based on tracks and cylinders with a maximum size of 65,520 cylinders, which is small for Linux. Modern DASD subsystems can provide large volumes via extended address volumes (EAV), which can hold between 65,521 and 1,182,006 cylinders. That means an EAV volume can hold up to about 1 TB of storage (using 1 K = 1,000 bytes; using 1 K = 1,024 bytes it is about 931 GB). We formatted a 3390 Mod 54 of 60,102 cylinders and an EAV volume with the maximum of 1,182,006 cylinders; the resulting space available after formatting was 42.26 GB and 831.10 GB instead of 55 GB and 931 GB respectively. This shows a loss of storage of between about 10% and 23%.
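As an example of the formatting step, a DASD volume is low-level formatted with 4 KB blocks and the compatible disk layout before it is partitioned; a minimal sketch, where /dev/dasdb is a placeholder for your device node:

    # Low-level format with 4 KB blocks and the compatible disk layout (cdl); -y skips the prompt
    dasdfmt -b 4096 -d cdl -y /dev/dasdb
    # Create a single partition spanning the whole volume
    fdasd -a /dev/dasdb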

Additionally, only one I/O can be active on a subchannel; the rest of the I/Os must be queued. This single-I/O-per-subchannel limitation is addressed by Parallel Access Volumes (PAV). There are three flavors of PAV:
  1. Static PAV
    • Alias devices are assigned in the DASD subsystem configuration
    • The association is observed by the host OS
  2. Dynamic PAV
    • The assignment can be changed by a higher authority (z/OS WLM)
    • Moving an alias takes coordination between the parties involved
    • Linux and z/VM tolerate but do not initiate dynamic PAV
  3. HyperPAV (see the sketch after this list)
    • A pool of alias devices is associated with a set of base devices
    • An alias is assigned only for the duration of a single I/O
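In Linux, HyperPAV needs no special tuning once the base devices and the alias devices from the pool are online; the DASD driver picks a free alias per I/O. A minimal sketch with hypothetical device numbers:

    # Bring the base device and its HyperPAV aliases online
    chccwdev -e 0.0.7000                      # base 3390 device
    chccwdev -e 0.0.70f0,0.0.70f1,0.0.70f2    # alias devices from the HyperPAV pool
    # lsdasd lists the devices; aliases typically show up with status "alias"
    lsdasd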

Choosing FICON or FCP

Linux on IBM Z can use FICON or FCP, or both, in an installation. In a production environment, the choice between FICON and FCP is usually not driven by performance alone, because performance depends heavily on the workload and changes with improvements in future releases. There is no obvious "always best" winner. Aspects to consider are available skills, current processes, existing investment, ease of management, and cost. Clients with only distributed skills will probably use SCSI disks, and clients with only IBM Z skills will probably use DASD. Clients with both distributed and IBM Z skills have a choice of which type of storage to use. Usually, clients need to consider:

  • Backup and recovery
  • Management and provisioning
  • Disaster recovery
  • Data administration
  • Monitoring
Figure 2, below, gives a summary of the pros and cons of FICON and FCP.
FICON
  Pros:
  • Easy channel and disk management
  • IBM Z hardware and firmware take care of multiple-path management
  • Can integrate with the z/OS IBM GDPS and IBM HyperSwap architecture (disaster recovery solution)
  • ECKD-based backup solution with z/OS
  • Provides better I/O monitoring capability
  • With the zHPF feature, I/O performance is comparable to FCP regarding I/O and data rates
  Cons:
  • Fair performance for I/O-intensive workloads with traditional FICON
  • Distributed platform administrators are not familiar with FICON technology and configuration
  • Single disk capacity is limited to the size of the 3390 device model, currently 1 TB using EAV
  • zHPF is an additional priced feature on the storage server (vendor-specific)

FCP
  Pros:
  • Good I/O performance compared to traditional FICON
  • Distributed platform administrators are more familiar with FCP and SCSI technology
  • Single disk capacity is practically unlimited for Linux (the maximum z/VM EDEV (FBA) device size is limited to 1 TB)
  • FBA disks provide better performance for z/VM paging
  Cons:
  • Requires more channel and device management effort, on either z/VM or Linux
  • The OS (z/VM or Linux on IBM Z) takes care of multiple-path management, which adds system overhead
  • Cannot integrate with the z/OS GDPS and HyperSwap architecture
  • Needs more effort to monitor I/O activity

Figure 2: FCP versus FICON

When using an FCP channel, use a SAN switch. This is beneficial from a channel and device management perspective. Also, enable NPIV in the production environment.
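One way to verify that NPIV is in effect for an FCP device is to compare the WWPN the Linux host is using with the WWPN of the physical port; when NPIV is active, they differ. A minimal sketch, assuming the FCP device is mapped to SCSI host0 (a hypothetical host number):

    # Virtual (NPIV) WWPN used by this FCP subchannel
    cat /sys/class/fc_host/host0/port_name
    # WWPN of the physical FICON Express port; differs from port_name when NPIV is active
    cat /sys/class/fc_host/host0/permanent_port_name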

In a z/VM SSI environment, FICON is mandatory because z/VM must be installed on DASD. A non-SSI z/VM environment can be set up using either DASD or SCSI devices defined with EDEVICE. For KVM on IBM Z, the PCC Lab prefers to use DASD instead of SCSI disks because it is easier to move a KVM on IBM Z instance from one LPAR to another. SCSI definitions and mappings are specific to an LPAR, so to move a system using SCSI disks, the SCSI definitions and mappings would also have to be defined for the target LPAR.

FCP Versus FICON Performance

In a customer environment, performance isn’t the only consideration when deciding to use FICON or FCP as stated above, but let’s look at the performance implications.

FCP is a FICON feature, used when the channel is defined as CHPID type FCP. It conforms to the FCP standard to support attachment of SCSI devices, complementing the classical storage attachment supported by FICON and zHPF channels. The IBM Washington Systems Center has measured FICON and FCP I/O rates and bandwidth for small and large data transfers, using a mix of read, write, and read/write operations on a FICON Express16S+ on the IBM z14. FICON achieved a maximum of 23,000 I/O operations per second (IOPS), compared to a maximum of 380,000 IOPS for FCP. The corresponding bandwidth figures are 620 MBps for FICON and 3,200 MBps for FCP. From these numbers, the clear winner for raw performance is FCP.

Improved FICON Performance With zHPF

To improve FICON performance, IBM introduced High Performance FICON (zHPF). zHPF is a channel I/O architecture designed to improve the execution of small-block I/O requests. By using a Transport Control Word (TCW), zHPF facilitates the processing of an I/O request by the channel and the control unit. The TCW enables multiple channel commands to be sent to the control unit as a single entity, so the channel forwards a chain of commands and does not need to keep track of each single CCW. This reduces processing overhead and increases the maximum I/O rate on a channel. I/O operations that use TCWs are defined to run in transport mode. A conversion routine translates a command-mode channel program into a transport-mode channel program, which makes zHPF support transparent to user applications. Transport-mode I/Os complete faster than traditional FICON command-mode I/Os, resulting in higher I/O rates and less CPU overhead.

zHPF for Linux on IBM Z has been supported since SLES 11 SP1 and RHEL 6.0, and as a z/VM guest starting with z/VM 6.2. Using zHPF with Linux on IBM Z provides significant I/O performance improvements. During large data transfers with zHPF, FICON channel processor utilization is much lower than with traditional FICON, which means zHPF achieves better I/O response times than FICON under the same conditions. On an IBM z14 using a FICON Express16S+, the number of IOPS is 23,000 using traditional FICON compared to 300,000 using zHPF, and the bandwidth is 620 MBps for FICON compared to 3,200 MBps for zHPF. Comparing zHPF with FCP, the IOPS numbers are 300,000 and 380,000 respectively, and the bandwidth is the same at 3,200 MBps, which makes zHPF competitive with FCP.

When choosing zHPF, remember not to mix zHPF and traditional FICON.

FICON zHPF and FCP Performance Charts

Figure 3 shows charts for FICON and zHPF performance, and Figure 4 shows FCP performance; both are measured in IOPS and MBps and come from the Washington Systems Center (WSC) storage presentation titled "z14 and IO Infrastructure Modernization." Comparing the zHPF and FCP IOPS and MBps numbers, the performance is almost identical, while traditional FICON is much slower than either zHPF or FCP.

Figure 3: zHPF and FICON performance

Figure 4: FCP performance


The PCC Lab has confirmed this by setting up two sets of 12 Linux on IBM Z systems to run the HammerDB database load testing and benchmarking tool. One set used SCSI disks and the other set used DASD with zHPF. Each Linux guest had the same resources defined in terms of CPU, memory and network adapters; the only difference was the storage used for the Linux system and the database. Using SCSI disks, the measured transactions per second (TPS) averaged 37,230, while DASD averaged 33,632. The TPS for SCSI disks is about 10% better than for DASD.

Running Linux in an LPAR or as a z/VM Guest

Running Linux on IBM Z in an LPAR using SCSI disks and running it as a z/VM guest are very close in performance, in both throughput and response time, as shown in Figure 5 for a 50-50% read/write workload. Similar results were obtained for 100% reads and 100% writes.

Figure 5: Native Linux versus z/VM guest: FCP response time

 
From the bulk data transfer (bandwidth performance) perspective, native Linux and z/VM guest are also very close. See Figure 6.

Figure 6: Native Linux versus z/VM guest: FCP bandwidth


From the response time and data bandwidth figures above, there is practically no performance difference between running Linux natively in an LPAR and running Linux as a z/VM guest. This is because both can access SCSI disks directly; we call both configurations native FCP or direct-attached SCSI. For some very large installations, native Linux is a good choice when the benefits of z/VM are not required in the production environment, but Linux systems running as z/VM guests benefit from a z/VM SSI environment from the management, maintenance and consolidation perspectives.

Weighing All Options

There are many aspects to consider when choosing between FICON-attached DASD and FCP-attached SCSI for Linux on IBM Z, such as existing investment in either architecture, available skills, and administrative and backup processes. Performance between FICON and FCP is on par when using zHPF. The SAP does not give FICON devices a performance advantage: as noted above, SCSI devices use the QDIO driver, which also uses the SAP to process the SIGA, but the SAP work needed to process a SIGA is minimal. Also, as the WSC charts show, traditional FICON performs poorly against FCP but is on par with FCP when using zHPF. I/O using traditional FICON needs to keep track of each single CCW, and there are many CCWs for a single I/O request; this synchronization is what slows down the I/O request, which zHPF addresses by bundling multiple commands into a single TCW. Here's a simple guide on how to set up DASD and SCSI disks (a short striping sketch follows the list):
  • DASD
    • Leverage storage pool striped disks
    • Enable HyperPAV
    • Use EAV
    • Enable zHPF
  • SCSI
    • Use a logical volume (LV) with striping
    • Configure multipathing with failover
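For the SCSI items above, here is a minimal sketch of striping a logical volume across several multipathed LUNs; the mpatha..mpathd names, the volume group name and the sizes are placeholders:

    # Use the multipath devices, not the individual /dev/sdX paths
    pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
    vgcreate vgdata /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
    # Stripe across all four LUNs with a 64 KiB stripe size
    lvcreate -i 4 -I 64 -L 200G -n lvdata vgdata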
The PCC Lab's preference is to use DASD for the KVM hypervisor running on IBM Z. It makes it easier to move the KVM hypervisor between LPARs without additional host mapping and zoning. KVM on DASD will not affect the I/O performance of the Linux guests running on KVM when they're using direct-attached SCSI disks. For Linux guests or native Linux running benchmark workloads, the PCC Lab prefers SCSI disks because they provide the best performance. Listed below are reasons to use SCSI disks for Linux on IBM Z:
  • SCSI disks are native to Linux
    • DASD is emulated on Linux on IBM Z
    • DASD has more overhead
    • DASD loses space when formatted, on average about 16%
    • DASD formatting is less efficient
  • SCSI disks have no limit on LUN size
  • SCSI disks are more secure
    • LUN masking and zoning
    • N_Port ID Virtualization (NPIV)
  • SCSI disks have better performance
    • Asynchronous I/O
    • No emulation overhead
  • SCSI disks provision more rapidly
    • This is good for cloud deployment
Using SCSI disks for Linux on IBM Z provides greater benefits and flexibility. It's the best of both worlds: the mainframe provides reliability, availability and serviceability, while SCSI is an open standard with worldwide innovation and collaboration, adopted by a broad community of experts. SCSI continues to evolve and can exploit the latest open solutions such as storage virtualization appliances and virtual (thin) provisioning.