Virtual channel-to-channel adaptors (vCTCAs) - These may be defined within z/VM to allow users to connect VMs using the CTCA protocol. On z/VM, this is useful for connecting Linux VMs to other VMs that don't support IUCV, such as VSE/ESA, OS/390 and z/OS. vCTCAs may also be used to connect Linux VMs to one another.
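As a sketch of how such a link is set up, a virtual CTCA can be defined from each guest's console and the two devices coupled with CP commands along these lines (the device number 0600 and the user ID LINUX02 are illustrative, and the comment lines are annotations rather than console input):

```
* From the LINUX01 console: define a virtual CTCA at device 0600
CP DEFINE CTCA 0600
* From the LINUX02 console: define its own virtual CTCA
CP DEFINE CTCA 0600
* From LINUX01: couple the two devices to form the point-to-point link
CP COUPLE 0600 LINUX02 0600
```

Once coupled, the two guests can drive the link with the same CTC device drivers they would use for real channel-to-channel hardware.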
Virtual LANs - IUCV and vCTCA support point-to-point connections, which can be cumbersome when connecting a server image to many VMs. z/VM 4.2.0 introduced "guest LAN" support. A server image running on z/VM connects to a guest LAN using the HiperSockets protocol. Virtual HiperSockets appear to be the real thing to a Linux server or to any software that supports real HiperSockets. Unlike the real thing, however, there's no limit on the number of virtual HiperSockets or guest LANs that may be defined in a z/VM environment.
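As a hedged sketch, an administrator might create a system-owned guest LAN with a privileged CP command and then attach a guest to it statically via its user-directory entry (the LAN name TESTLAN and device number 0700 are illustrative):

```
* Privileged CP command: create a HiperSockets-type guest LAN
* owned by the system
DEFINE LAN TESTLAN OWNERID SYSTEM TYPE HIPERSOCKETS
* User-directory statement for guest LINUX01: a simulated
* HiperSockets NIC at device 0700, connected to TESTLAN
NICDEF 0700 TYPE HIPERSOCKETS LAN SYSTEM TESTLAN
```

With the NICDEF statement in place, the simulated adapter appears automatically each time the guest logs on.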
Guest LAN support helps eliminate some of the point-to-point LAN management challenges associated with IUCV and vCTCA. z/VM enables virtual HiperSockets and guest LANs to be used on processors that don't support real HiperSockets, such as the G5/G6 and Multiprise 3000. The virtual networking environment is thereby enhanced on those processors while allowing preparation for a real HiperSockets environment prior to the actual move to a zSeries server.
Both "system" guest LANs and guest LANs associated with specific VM users may be defined. System guest LANs exist independently of any active (logged-on) user. Guest LANs associated with a user exist only while that user is active. For either type of guest LAN, authorized users may link to the LAN to participate in HiperSockets communications. No predefined limit exists on the number of virtual HiperSockets devices that may be linked to a guest LAN, nor is there a limit on the number of guest LANs that may be defined.
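An authorized user can also link to a guest LAN dynamically from a running guest. A minimal sketch, assuming a system-owned guest LAN named TESTLAN already exists and using an illustrative device number:

```
* Define a simulated HiperSockets NIC at base device 0700
CP DEFINE NIC 0700 HIPERSOCKETS
* Couple the NIC to the system-owned guest LAN TESTLAN
CP COUPLE 0700 TO SYSTEM TESTLAN
```

The dynamic approach suits testing and ad hoc connections; the directory-based NICDEF approach suits production guests whose network attachments should survive logoff and logon.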
Proper administration of server farms typically necessitates the purchase of additional servers to run "command and control" software that often requires client code to be installed on each server image being managed. Software license fees for the server and client code, additional hardware and networking expenses for the servers, and additional infrastructure complexity add up to an expensive solution. However, this command and control function is built into z/VM.
Resource utilization - z/VM's resource controls include the capability to allocate processor capacity on a per-image basis with a high degree of granularity. Additional resources such as memory, disk space and data-in-memory support may be added quickly and easily. Note that this sometimes requires a reboot of the affected Linux image(s), depending upon the added resource. For example, adding more virtual memory requires shutting down a Linux image and then rebooting after the memory is increased.
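For instance, increasing a Linux guest's virtual memory typically means shutting Linux down, redefining the virtual machine's storage, and re-IPLing. A sketch from the guest's console (the 512M size and IPL device 200 are illustrative):

```
* After shutting down Linux in this virtual machine:
CP DEFINE STORAGE 512M
* The virtual machine resets; re-IPL Linux from its boot device
CP IPL 200
```

Resources such as minidisk space, by contrast, can often be added and brought online without restarting the guest.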
Other built-in systems management functions exist in z/VM. Performance data may be collected for each Linux image at the user's discretion, and that data may be processed using existing products and/or features. The same is true for accounting data, which provides resource consumption information for each Linux image.
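As a hedged illustration, monitor data collection is typically enabled with privileged CP MONITOR commands along these lines (the domains shown are illustrative, and the comment lines are annotations):

```
* Enable sample-interval collection for the processor and user domains
CP MONITOR SAMPLE ENABLE PROCESSOR
CP MONITOR SAMPLE ENABLE USER ALL
* Begin writing monitor records for collection
CP MONITOR START
```

A performance product can then read the resulting monitor records to report on each Linux guest's processor, storage and I/O consumption.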