Multinode Persistent Sessions: The Sysplex Data Sharing End Game

Bryant L. Osborn, Bank of America

Introduction

  • APPN - (Advanced Peer-to-Peer Networking) VTAM facility introduced in the late 1980s.
  • HPR - (High Performance Routing) APPN extension that reroutes sessions around links that have failed.
  • What does this have to do with the future of data sharing? Everything.
  • Scalability - a measure of the practical limits to how large a system can grow.
  • CEC - Central Electronics Complex; informally, the mainframe “box” or “footprint.”
  • CPC - Central Processor Complex, or a CMOS CEC.
  • CS/390 - Communications Server for OS/390, the new name for VTAM.
Sysplex Benefits

(Besides software discounts)

  • Improved scalability
  • Continuous availability

Improved Scalability

Irwin F. Kraus, “The Parallel Sysplex as SMP: Viewing Performance, Capacity, and Scalability through a Familiar Lens,” CMG96 Proceedings, December 1996


Improved Scalability

  • For MVS, maintaining the ‘general case’ imposes too much overhead to scale much above 12 processors.
  • Sysplex no longer supports the ‘general case.’
  • A Parallel Sysplex grows by adding whole MVS images to the sysplex rather than adding processors to a single image.

Improved Scalability

“When a user architects a parallel sysplex, the user decides how much data is to be shared, usually much less than 100%. The user decides how many processing nodes will be used and what capacity and performance. The user-system-architect has a lot of control over minimizing overhead and maximizing scalability in a ‘parallel sysplex computer’ ” - Irwin Kraus


Improved Scalability

  • MVS goes ‘massively parallel.’ Parallel Sysplex increases MVS scalability to make truly gigantic workloads possible.
  • But wait! Hardware improvements have resulted in CECs with 1000+ MIPS
  • Which way to go?

Continuous Availability

  • As workloads get larger and larger, the risk from an outage becomes greater and greater
  • Parallel Sysplex may allow continuous availability for overnight batch workloads, but what about interactive workloads with logged-on users?

Continuous Availability

The $64,000,000 question: What good is it to have a single gigantic workload if the organization cannot afford even a brief outage of the entire system?

Is anyone really willing to put all their proverbial eggs in one basket, and run the risk that a single outage could take down the entire workload?


Single-Node Persistent Sessions

  • A VTAM facility
  • There are two kinds of persistent sessions. The kind that exists today is called ‘single-node persistent sessions.’
  • VTAM keeps session state information about VTAM users. When a subsystem crashes, VTAM can maintain the sessions for a specified period of time.

The Benefits of SNPS

  • Saves users from having to re-establish VTAM sessions (VTAM keeps the ACBs open)
  • Saves the application the time to clean up failed sessions, but
  • Users can do no work until the application is brought back up, and
  • What happens if VTAM itself fails?
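The SNPS facility described above is something the application program itself requests of VTAM. As a rough sketch: in the VTAM application programming interface, persistence is requested with the SETLOGON macro. The operand names below (OPTCD=PERSIST and the PSTIMER persistence timer) are written from memory and should be verified against the VTAM Programming manual for your release:

```
* ILLUSTRATIVE ONLY -- VERIFY OPERANDS AGAINST VTAM PROGRAMMING
         SETLOGON RPL=MYRPL,OPTCD=PERSIST,PSTIMER=300
```

With something like this in place, VTAM would keep the application’s sessions for up to the PSTIMER interval (here 300 seconds) after the subsystem fails.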

What if . . .

Wouldn’t it be nice if there were a way that users could be reconnected to another MYCICS on CEC-B by reconnecting them through a VTAM-B? What a great idea! But is it possible? And what would you call it?


Multinode Persistent Sessions

  • Users of a failed application are reconnected through a different VTAM to another application on another LPAR. How is this possible?
  • Uses APPN’s High Performance Routing (HPR) facility
  • Allows fast recovery from failures of all kinds

Multinode Persistent Sessions

  • SNPS stored user session information in a data space. Where is the only place this information could be stored in a Sysplex where:
      • a) all VTAMs can get to it, and
      • b) the information will not get trashed by any failure?
  • Requires a coupling facility and more than one LPAR

Multinode Persistent Sessions

  • CICS implementation - MVS Automatic Restart Manager will start another region on another LPAR. VTAM will use HPR to perform a path switch to move the connection endpoint.
  • IMS implementation - will probably allow reconnection to an existing IMS. Have users open multiple ACBs at logon time and switch between them?
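To make the application side concrete, an MNPS-capable application is identified to VTAM on its APPL definition statement. The fragment below is illustrative only: PERSIST=MULTI is the operand I recall enabling multinode persistence (introduced with VTAM V4R4), and PSTIMER bounds how long sessions are retained; verify the exact syntax against the VTAM Resource Definition Reference for your release:

```
* ILLUSTRATIVE VTAMLST APPL DEFINITION -- VERIFY AGAINST YOUR RELEASE
MYCICS   APPL  PERSIST=MULTI,PSTIMER=300
```

The applid MYCICS here is the example application name used in the slides, not a required value.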

VTAM Primer

  • Node - the endpoint of a communication link
  • Subarea Networks - hierarchical networks
  • APPN Networks - peer-oriented networks

Essential Subarea Networks

  • Key word is ‘hierarchy’
  • VTAM is ‘king’ of the hierarchy. Also called a System Services Control Point (SSCP).
  • Network Control Program (NCP) in a communications controller is second
  • Everything else is a peripheral node that comes in third

Essential Subarea Networks

  • LU (logical unit) - a port through which users connect to the node. Every LU is owned by a VTAM (SSCP). LUs request network services from VTAM (SSCPs).
  • PU (physical unit) - manages links and routing. (Similar to APPN ‘control points.’)

Subarea Network Nodes

  • Type 5 Node - VTAM (SSCP)
  • Type 4 Node - NCP in a communications controller. Requires a type 5 node to provide all network services
  • Type 2 Node - Peripheral node that cannot perform routing. They are dependent on the VTAM (SSCP).
  • Type 2.1 Node - Peripheral node with limited peer-to-peer capabilities. Can be independent of the VTAM (SSCP) if using LU 6.2 communication.

Subarea Networks

  • All paths must be manually defined. Routes are selected from the predefined set.
  • Virtual Routes (VRs) - logical paths on which sessions are carried. SSCP (type 5) and NCP (type 4) nodes establish and maintain VRs.
  • Class of Service (COS) - sessions are assigned to VRs on the basis of an assigned class of service.
  • Management and Control Sessions - SSCP-SSCP, SSCP-PU, and SSCP-LU

Essential APPN Networks

  • Key word is ‘peer’ which means ‘of equal standing’
  • Network Nodes (NNs) - provide all network services
  • End Nodes (ENs) - provide LU ports and rely on Network Nodes (NNs) for the rest.

Essential APPN Networks

  • Control Point (CP) - every APPN node has a control point from which LUs request network services. An APPN control point is similar to a subarea PU.
  • Transmission Group (TG) - links between APPN nodes.

APPN Networks

  • APPN networks are self-defining. NNs maintain a distributed directory containing the characteristics and current status of each node and link.
  • Central Directory Servers (CDSs) - Large networks have one or more CDS to hold the distributed directory.
  • RTP Pipes - HPR groups sessions with the same Class of Service (COS) into logical connections called RTP pipes. An RTP pipe can be switched to a different path without affecting the sessions it carries.

APPN Networks

  • Management and Control Sessions - CP-CP sessions between adjacent nodes
  • Interchange Node (ICN) - special kind of APPN NN used to connect APPN and subarea networks. Supports hybrid VR-TG (virtual route based transmission group) communication and data flows.
  • Dependent LU Server (DLUS) - special kind of APPN NN used to provide SSCP services to dependent LUs in a subarea network.

MNPS Example

  • The endpoint of an interchange node (ICN) can be either a transmission group (TG) or a virtual route (VR). Since it could be either, an RTP-capable path cannot be guaranteed, so IMS3 is not eligible for MNPS.
  • If an LU were owned by EndNode1 (connected directly), the session with IMS1 would not be recoverable because there would be no HPR path to switch.

The Requirements for MNPS

  • All VTAMs with MNPS applications (CICS, IMS, DB2) must be defined as APPN end nodes supporting RTP
  • All VTAMs with MNPS applications must be connected to an MVS coupling facility that contains the VTAM structures
  • MVS/ESA V5R2 or later
  • Coupling Facility Control Code (CFCC) Level 1 or later
  • All VTAMs that own MNPS applications must be V4R4 or later
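The requirements above translate into a handful of VTAM start options in the ATCSTRxx member. The fragment below is a sketch, not a tested configuration; the option names (NODETYPE, HPR, and the STRMNPS structure-name option with its ISTMNPS default) are written from memory and should be confirmed in the VTAM Resource Definition Reference for V4R4:

```
* ILLUSTRATIVE ATCSTRxx START OPTIONS -- VERIFY BEFORE USE
NODETYPE=EN,
HPR=RTP,
STRMNPS=ISTMNPS
```

NODETYPE=EN makes the VTAM an APPN end node, HPR=RTP enables full RTP-level HPR, and STRMNPS names the coupling facility structure that holds the MNPS session data.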

Conclusions about MNPS

  • MNPS Fulfills the Sysplex Promise. When the risks become trivial, the size of data centers and workloads will explode.
  • Data Sharing → DTR → Persistent Sessions
  • MNPS Will Cut Across Organizational Lines. Implementation may be difficult because support will cut across support groups.