
Using WAS on a Unix Platform by Colin Renouf

Presentation Transcript


    1. Using WAS on a Unix Platform by Colin Renouf Version 1.4

    2. Using WAS on a Unix Platform What we will cover… What is Unix? AIX AIX Enterprise Infrastructure Features of AIX that can be exploited Features of AIX to beware of Solaris Solaris Enterprise Infrastructure Features of Solaris that can be exploited Features of Solaris to beware of HP-UX HP-UX Enterprise Infrastructure Features of HP-UX that can be exploited Features of HP-UX to beware of

    3. Why look at these things? To properly manage WAS on any platform and make best use of that platform we must understand the platform itself. What are the features that we can exploit for WAS on a platform for better performance or scalability? What features of the platform can we use to manage the platform itself? What features of the platform make managing upgrades easier? How should the WAS binaries, configuration and logs be managed for greatest reliability? What features of the platform should we be aware of that can cause problems in a WAS environment? We will briefly look at all of these and more…

    4. A Quick Overview of Unix Unix is an operating system created in 1969 at Bell Labs, based on research into another operating system called Multics. Ken Thompson and Dennis Ritchie wrote it for their own use, initially for text processing. Unix is written in C and is designed to be portable. Unix consists of a kernel, a number of shells, and a common set of utilities. All devices are made to look like files to ease development. Processes have a parent-child tree relationship and all descend from the “init” process. The portability of the code and its availability for research and development led to Unix being made available in many flavours and on many platforms. The C source code was released to universities to extend, to port onto new hardware, and to study. There are two distinct Unix flavours and code trees – System V from Bell Labs and BSD (Berkeley Software Distribution); the two were largely brought back together in System V Release 4. The use of a kernel with a defined API has allowed Unix “clones”, such as OSF/1, Mach, Hurd, and Linux, to appear that have the same API, shells and utilities. Linux is a Unix clone developed from scratch, initially by Linus Torvalds, supporting the Unix APIs, behaviour and utilities.

    5. Why do people use Unix? Most Unix platforms are 64-bit and support large amounts of memory. Whilst Unix started off on 16-bit and 32-bit hardware, the portability of the C source code allowed it to be ported to other hardware, such as 64-bit platforms, very easily. Well managed Unix platforms are robust and scalable. The core of Unix has been around since the beginning of the 1970s, so most of the bugs and issues have been ironed out. The portability of vendor application source code between different platforms and vendors leads to a large pool of quality software being available. Since the Unix C APIs are the same or similar across all platforms it is easy for a vendor to develop software that runs across a number of Unix platforms; targeting a new platform becomes merely a compile and test exercise rather than a new development. The POSIX standard has standardised the APIs even further, and even Windows NT can support this API in its rarely used POSIX subsystem. Many Unixes also support Linux extended APIs directly to aid software availability. The similarities between the vendor Unix platforms are greater than the differences, allowing portability of support and development skillsets; generally, someone experienced with one Unix can move to other Unixes easily. The growth of Linux has greatly increased the availability of quality Unix software.

    6. Unix Structure

    7. Unix File System Structure (1)

    8. Unix File System Structure (2)

    9. Unix File System Structure (3)

    10. Unix File System Structure (4)

    11. Unix File System Structure (5) The use of iNodes in the file system allows other file systems to be “mounted” at “mount points” invisibly on the existing file system. This also allows “links” to be used, where a file or directory appears to be in more than one place. A link is set up using “ln FILEA FILEB”. Standard file management commands traverse other file systems and mount points without knowing the details, as the operating system isolates them. Each process just gets a file descriptor, and the kernel APIs map the calls appropriately for the file type. Devices also appear as files in the /dev directory. File descriptors 0 (stdin), 1 (stdout) and 2 (stderr) normally map to the terminal, but redirection allows them to be connected to a file instead; the process need know nothing about this. Most Unixes allow the remote Network File System (NFS), the PC file system (FAT), the CD file system (cdrfs) and the DVD file system to be “mounted”. The AIX version of the “mount” command to mount an NFS exported file system “/kirk” from machine “spock” on mount point “/mnt/kirk” is: mount –n spock /kirk /mnt/kirk
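
    A small illustration of links and mounted file systems (the paths are hypothetical):
    # ln /data/app.cfg /etc/app.cfg              # hard link: one file visible in two places
    # ln -s /opt/IBM/WebSphere /usr/WebSphere    # symbolic link, which may cross file systems
    # df -k                                      # list the mounted file systems and their usage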

    12. Unix File System Structure (6) Permissions Unix permissions are applied to: the user or owner of a file; a group the user is a member of (usually the primary group); and all other users. Permissions can be: read, write, execute. Special bits also exist (setuid, setgid, sticky), and the leading character of an “ls -l” listing shows the file type (e.g. “d” for a directory). Example -rwxr-x--x SomeExecutableFile Owner has read, write and execute permission Group has read and execute permission Everyone else has execute only permission
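
    A minimal sketch of setting the permissions shown above (the file name is illustrative):
    # chmod 751 SomeExecutableFile              # owner=rwx (7), group=r-x (5), other=--x (1)
    # chmod u=rwx,g=rx,o=x SomeExecutableFile   # the same permissions expressed symbolically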

    13. Unix Process Model

    14. Unix Process Model (2)

    15. Unix Process Control The “init” process starts up dependent processes when the system starts, using entries in the “inittab” file. Most of these are essential to the behaviour of the system and run in the background. An entry indicates the runlevels the command should run at (after the entry name) and then the command to run:
    init:2:initdefault:
    rc:23456789:wait:/etc/rc 2>&1 | alog -tboot > /dev/console # Multi-User checks
    srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller
    rctcpip:23456789:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
    “inetd” is started by “init”, runs as a “daemon”, and controls the TCP/IP related background processes. The TCP/IP services it starts are controlled by “/etc/inetd.conf” (and “/etc/rc.tcpip” on AIX). AIX has its own mechanism for stopping and starting subsystems dynamically, called the “System Resource Controller”, and uses this from “inittab”. On AIX, to start a subsystem use “startsrc –s ssname” and to stop it use “stopsrc –s ssname”. Unix has a built-in scheduler called “cron”. Schedule a one-off job using “at” and edit recurring entries using “crontab –e” (see the sketch below).
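
    A small sketch of scheduling and subsystem control (the subsystem and script names are illustrative):
    # crontab -e                                    # then add an entry such as the following,
    0 2 * * 0 /usr/local/bin/rotate_was_logs.sh     # which runs a script at 02:00 every Sunday
    # startsrc -s sshd                              # start a subsystem under the SRC (AIX)
    # lssrc -s sshd                                 # check its status
    # stopsrc -s sshd                               # stop it again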

    16. Pipes and Redirection One of the strengths of Unix is its ability to build complicated programs and functions out of more modular pieces. This relies on pipes and redirection. Pipes are file-like connections set up between two active processes, feeding the output of one into the input of another; chains of processes can be built up. A pipe can reside on the file system (created using “mkfifo”) or in memory (using “|”). Example: To force all output from the “ps” command to appear in screen sized pages, pipe the output to “more”: ps –eaf | more Redirection allows the input, error, and output streams of a Unix process to be redirected to a pipe or file (using “>” or “<”). Example: To force all output from the “ps” command to be sent to a file, redirect the output: ps –eaf > psoutput.out
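
    A small sketch of a named pipe on the file system (the names are illustrative):
    # mkfifo /tmp/pslog                            # create the named pipe
    # grep java < /tmp/pslog > javaprocs.out &     # a reader waits on the pipe in the background
    # ps -eaf > /tmp/pslog                         # a writer; grep receives the data as it is written
    # rm /tmp/pslog                                # named pipes persist until removed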

    17. Pipes and Redirection (2) Example: You need to find all of the entries in the error log of an AIX system for today and put them into a file.
    # errpt | grep `date +%m%d` > error.out
    # cat error.out
    AA8AB241 0212140805 T O OPERATOR OPERATOR NOTIFICATION
    The ‘|’ symbol after the “errpt” command takes the output stream of “errpt” and pipes it into the input stream of the “grep” command. The backquote (“`”, the grave accent) causes the shell to evaluate the command “date +%m%d” and substitute its output as a parameter of the “grep” command. The “>” after the “grep” command redirects the output stream to the file “error.out”.

    18. Command Shells Unix supports a number of command shells. The Bourne Shell (sh) was the original. The C Shell (csh) and the Korn Shell (ksh) are also standard on most systems, with the Korn Shell being essentially an enhanced Bourne Shell. The open source Bash shell (bash) is becoming popular, but is usually not included. Which shell a user gets by default is configured in the “/etc/passwd” file and changed using “passwd –s <newshellpath>”, i.e. “passwd –s /usr/bin/ksh”. The shells include a rudimentary scripting language with looping, tests, etc – but the C Shell language is not compatible with that of the other shells. The shells also include a number of built-in commands. Environment variables, loaded from the “/etc/profile” or “.profile” file, control the behaviour of many features.
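
    A minimal Bourne/Korn shell sketch showing a variable, a test and a loop (the log directory is hypothetical):
    #!/bin/ksh
    WAS_LOGS=${WAS_LOGS:-/usr/WebSphere/AppServer/logs}   # default if not already set in .profile
    for f in ${WAS_LOGS}/*.log
    do
        if [ -s "$f" ]; then                              # -s : file exists and is not empty
            echo "$f: `wc -l < $f` lines"
        fi
    done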

    19. Command Shells (2)
    #!/bin/sh
    set -xv
    DATE=`date +%m%d`
    LOGFILE=/tmp/errpt_log_${DATE}.log
    echo "ERRPT LOGS FOR `date +%y%m%d`" > ${LOGFILE}
    echo "**********************************" >> ${LOGFILE}
    echo "errpt output" >> ${LOGFILE}
    echo "************" >> ${LOGFILE}
    errpt | grep ${DATE} >> ${LOGFILE}
    echo "*********************************************" >> ${LOGFILE}
    FROM="Colins RS6000 System <root@rs6000.RnREnterprise.dyndns.org>,"
    MAILTO="colin.renouf@blueyonder.co.uk"
    SUBJECT="Error Logs for Today"
    (
      echo "From: ${FROM}"
      echo "To: ${MAILTO}"
      echo "Subject: ${SUBJECT}"
      echo "Mime-Version: 1.0"
      echo "Content-Type: text/plain"
      echo
      echo
      cat ${LOGFILE}
      echo
    ) | sendmail -t -oi
    sendmail -q
    rm ${LOGFILE}

    21. Pattern Matching One of the powerful built-in features of Unix is its pattern matching facilities, using tools like “sed”, “tr” and even “awk”, which has its own language. Awk Example: You need to find all of the files in a directory containing the “shell” executable indicator and dynamically create a shell script to create a “tar” (like a zip file) of just those files.
    # grep bin/sh * | cut -f1 -d ":" | awk '{print "tar cvf shellfiles "$0}' > dotar.sh
    # cat dotar.sh
    tar cvf shellfiles emailer.sh
    tar cvf shellfiles tarthem.sh
    The “grep” command is used to find all of the files containing the shell executable indicator (i.e. “bin/sh”) in the current directory. The ‘|’ symbol after the “grep” command takes the output stream and pipes it into the input stream of the “cut” command. The “cut” command extracts the first field of the output of the “grep” command, where fields are delimited by a colon (“:”). The ‘|’ symbol after the “cut” command takes the output stream and pipes it into the input stream of the “awk” command. The “awk” command prints the string “tar cvf shellfiles” followed by the value of the variable “$0”; this is repeated for every input line (in this case all of them). The “>” symbol redirects the output of the “awk” command to a file called “dotar.sh”, which will contain a list of commands to create a tar file called “shellfiles”. (In practice the generated commands after the first would use “tar rvf”, so that each one appends to rather than recreates the archive.)
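
    Small sketches of “sed” and “tr” for comparison (the file names are illustrative):
    # sed 's/INFO/NOTICE/g' SystemOut.log > SystemOut.edited   # replace every INFO with NOTICE
    # tr '[:lower:]' '[:upper:]' < hosts.txt                   # translate lower case characters to upper case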

    22. Window Systems – X Windows The X-Windows system provides the distributed windowing technology. The X-client is on the server and the X-server is on the client, i.e. the client programs on the server output via the X-server on the user’s workstation. Window managers provide the look and feel, i.e. Motif (mwm), CDE (dtwm), KDE, Gnome, etc. The “Common Desktop Environment” standard is a common look and feel and suite of applications across Unix environments, using the Motif Window Manager (mwm) as a base and common windowing applications (i.e. editors, consoles, file managers, etc). Applications written for pure-X can run on any window manager. X-Windows applications include file management, editors, terminal emulators, etc. Many Java applications, often used for systems management and installers, require X-Windows.
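
    A minimal sketch of displaying a graphical installer running on a Unix server onto a workstation X server (the host names are hypothetical):
    (on the workstation)  # xhost +aixhost                # allow the server to connect; use with care
    (on the server)       # export DISPLAY=workstation:0.0
                          # ./install                     # the installer GUI now appears on the workstation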

    23. Window Systems – X Windows (2)

    24. General Unix Recommendations (1)

    25. General Unix Recommendations (2)

    26. What is AIX? IBM’s Unix, currently at version 5.3 64-bit For the POWER and PowerPC family POWER5 processors lead the industry in performance; related PowerPC cores are used in the PlayStation 3 and Xbox 360 BSD-derived, with features from the Mach microkernel / OSF/1 IBM Value Add Logical Volume Manager (LVM) Object Data Manager (ODM) – Like the Windows Registry! SMIT and WebSM Management

    27. Hardware for AIX AIX was designed for use on RISC-based systems (i.e. RS/6000, pSeries with POWER or PowerPC processors) A RISC (Reduced Instruction Set Computer) has a simplified set of instructions to enable each instruction to execute in 1 clock cycle. A Load-Store Architecture is used, i.e. copy from memory into registers then operate on, do not operate on memory directly. Compilers order the instructions to make best use of the hardware Superscalar in that some instructions execute in parallel streams. Superpipelined in that an instruction is broken up into many chunks (i.e. decode, read registers, operation, write registers). Most machines are SMP (Symmetric Multi-Processing) and support many CPUs running concurrently The POWER 4 and POWER 5 based pSeries boxes also support LPARs, i.e. a logical set of processors running a single operating system image on a subset of real processors. A number of LPARs can operate concurrently on a single system Dynamic LPARs allow the number of CPUs in use to be changed without rebooting – this can cause problems for some software. POWER 5 systems allow more virtualisation. POWER 4 and POWER 5 systems are multi-core (currently dual) and have two or more processors on a single physical chip.

    28. Hardware for AIX (2)

    29. Hardware for AIX (3) LPARs (Logical Partitions) Not so much logical, more at the firmware level Each LPAR can run a different OS and must have the OS installed into it. The Hypervisor partitions the machine up into chunks of CPUs and I/O POWER5 hardware supports Micro-Partitioning, where a CPU can be split into 0.1 CPU units and allocated to an LPAR Also hardware level Symmetric Multi-Threading LPARs were originally static and owned I/O cards, so had to be shut down and rebuilt to be resized. Dynamic LPARs allowed moving of CPUs and I/O without shutting the LPAR down, but some software struggles with this so a restart is needed. With Micro-Partitioning and the Virtual I/O Server (which owns I/O and maps it into the LPARs as required) a great deal of additional scalability is possible. Supports Capacity Upgrade on Demand (CUoD).
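
    From inside AIX 5.3, a quick way to see the partition and SMT configuration (a sketch; output and options vary by AIX level):
    # lparstat -i        # partition name, mode, entitled capacity and memory
    # smtctl             # show, and optionally change, simultaneous multi-threading on POWER5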

    30. Hardware for AIX (4)

    31. Hardware for AIX (5)

    32. AIX Logical Volume Manager (LVM) Physical disks are referred to as “physical volumes”. Logical volumes are the “virtual disks” onto which the JFS volumes are put, and which are mapped to a number of fixed-size chunks (“physical partitions”, e.g. 8Mb) of the physical volumes. Journaled File System (JFS) file systems (and now JFS2, which has dynamic iNode support) are placed on top of logical volumes. As logical volumes are virtual they can be resized easily, and can support mirroring internally. A volume group is built from a group of physical volumes and contains the logical volumes and file systems; the most important one for the running of the system is “rootvg”.
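
    A minimal sketch of inspecting and growing space with the LVM (the file system name is illustrative):
    # lspv                             # list physical volumes and the volume groups they belong to
    # lsvg -l rootvg                   # logical volumes and file systems in rootvg
    # chfs -a size=+1G /usr/WebSphere  # grow a file system by 1Gb (AIX 5.3 syntax; older levels take a count of 512-byte blocks)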

    33. AIX Journaled File System (JFS) All changes to the structure of the file system are written to a log file before they are made, i.e. creations, deletions, or resizing of files. The changes to the data blocks themselves are not journaled, so only the file system structure is protected – not the data itself. Data blocks and their changes are written to a cache. Periodically (every 60 seconds by default) the “syncd” daemon process writes the changed disk blocks to disk in an “optimal manner” to avoid too much “disk seeking”. To protect data rather than file system structure an application can either manage its own persistent storage via “RAW” disks (such as with databases) or can “write through the cache” to disk directly (which slows disk access down). Journaled File System (JFS) file systems have a pre-allocated set of iNodes. JFS2 files systems can be larger and allocate iNodes dynamically. For rootvg the JFSLOG, i.e. changes to the file system structure, are written to logical volume hd8 by default.

    34. AIX File System Management - Clones AIX can clone its “rootvg” to another set of disks (the clone appears as “altinst_rootvg”) Installations then take place to this set of disks If the OS upgrade is problematic the bootlist can be changed to revert to the previous OS installation. This process is known as “alt_disk_install” Many sites clone their boot environment to another set of disks either before each installation or each night to act as a quick OS restore. This is additional to the operating system installation COMMIT and REJECT (rollback) facilities, so multiple layers of installation contingency are available.
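
    A minimal sketch of cloning rootvg and adjusting the boot order (the disk names are illustrative):
    # alt_disk_install -C hdisk1          # clone the running rootvg onto hdisk1
    # bootlist -m normal -o               # display the current normal-mode boot list
    # bootlist -m normal hdisk1 hdisk0    # boot from the clone next, falling back to the original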

    35. AIX File System Management – Clones (2)

    36. AIX Memory Management AIX is a 64-bit operating system Both 32-bit and 64-bit kernels are available Both kernels can run both 32-bit and 64-bit executables There are two formats of 32-bit executable that can run on all recent AIX systems There are two formats of 64-bit executable, but those from AIX 4.3 cannot be executed on AIX 5+. The POWER processor address space is segmented into 256MB segments Segments have dedicated uses Virtual memory paging is implemented on top of the segments 32-bit processes have 16 segments, 64-bit processes have 2^32 segments 32-bit processes have a 4GB address space, 64-bit processes have a 1EB address space.
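
    A quick check of which kernel is running, and a common way of giving a 32-bit process more data segments (a sketch; the MAXDATA value is illustrative and should be validated against the WAS documentation for your release):
    # bootinfo -K                            # reports 32 or 64 for the running kernel
    # ls -l /unix                            # shows which kernel image /unix points to
    # export LDR_CNTRL=MAXDATA=0x80000000    # let a 32-bit process use 8 x 256MB data segments; set before starting it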

    37. AIX Memory Management (2)

    38. AIX Memory Management (3)

    39. AIX Object Data Manager (ODM) AIX maintains config in a “registry” like database – the ODM. Different information is separated into different object classes. Some config is also maintained in text files, as on other Unixes Hardware config and installed software management is primarily in the ODM. The ODM database can be accessed using commands, and new entries added: odmcreate – adds a new object class odmdrop – removes an object class odmget – queries an object of a particular class (i.e. “odmget –q name=hdisk0 CuAt” retrieves the customised device attributes of hdisk0 – the physical volume id) odmadd – adds an object of a particular class odmchange – changes an object of a particular class odmdelete – deletes an object of a particular class odmshow – displays class definitions The “cfgmgr” runs before AIX is properly initialised to load drivers for disks and network interfaces that may be used for booting. This updates the ODM for “configured, found and customised” devices in “/etc/objrepos” based on the predefined devices in “/usr/lib/objrepos”. Installed software and a history of updates is also maintained in the ODM.

    40. AIX Object Data Manager (ODM) (2)

    41. How is AIX Managed? AIX has its own built in tools to make it easier to manage Systems Management Interface Tool (SMIT) is a powerful menu driven tool as a front-end for management commands It can be used to learn AIX and generate scripts too The Hardware Management Console (HMC) manages hardware & partitions IBM Director is a new tool to replace the HMC functionality (initially). It may replace the CSM in the future as well Web Systems Manager (WebSM) is a Java/Web tool to manage an AIX box. It can be hosted locally, or on the CSM from where it can manage the cluster The IBM Director tool is likely to replace this, the CSM, and the HMC in the future Network Installation Manager (NIM) manages upgrades and installations. It contains a copy of the OS and system software binaries on an NFS share.

    42. How is AIX Managed (2)? Cluster Systems Manager (CSM) manages a group of AIX boxes together. It manages the system software config and running of the OSes of all boxes in the cluster. This uses the NIM to hold the binaries and make them available to cluster members This will most likely be replaced by the IBM Director in the future High Availability Cluster Multi-Processing (HACMP) provides high availability mechanisms This manages standby machines and the ability to run scripts to reconfigure the environment when a failure occurs The scripts can be very complex!

    43. How is AIX Managed? (3)

    44. Clustering - HACMP High Availability Cluster Multi-Processing (HACMP) is a set of binaries to support failover. Scripts are produced to fail over individual processes to a backup machine This doesn’t usually lead to an active-active environment The scripts can be complex and must take account of the different failure modes and dependencies of both the system software and the applications Consider failover of a WAS cluster on a machine in an MQ cluster that is connected to a database server in the same data centre – what should fail over first? Getting this wrong can often stop failover from working. If you use HACMP: test, test and test again… (a simplified start script sketch follows below)
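
    HACMP application servers are defined by start and stop scripts; a highly simplified sketch of a start script for a WAS node (the paths, profile and server names are hypothetical, and real scripts must handle the dependency ordering discussed above):
    #!/bin/ksh
    # start_was.sh - called by HACMP when the resource group comes online
    WAS_HOME=/usr/IBM/WebSphere/AppServer
    ${WAS_HOME}/profiles/AppSrv01/bin/startNode.sh
    ${WAS_HOME}/profiles/AppSrv01/bin/startServer.sh server1
    exit $?                         # a non-zero exit is treated as a start failure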

    45. Clustering - HACMP (2)

    46. Clustering - HAManager

    47. Clustering - HAManager (2)

    48. SMIT (Systems Management Interface Tool) IBM’s powerful built-in management tool for menu-driven administration Runs in a command line mode as “smitty” or in an X-Windows GUI as “smit”. Creates a log file of commands executed that can be used for management scripts. On any SMIT screen the “F6” key can be used to view the underlying command or script used to perform the requested task. SMIT can be extended to add new commands. Basic High Level Options are: Software Installation and Maintenance Software License Management Devices System Storage Management (Physical & Logical Storage) Security & Users Communications Applications and Services Print Spooling Problem Determination Performance & Resource Scheduling System Environments Processes & Subsystems Applications Cluster System Management Using SMIT (information only)
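
    A small sketch of typical SMIT usage (the fast paths shown are examples):
    # smitty install_latest        # jump straight to the software installation screens
    # smitty chfs                  # fast path to change/show characteristics of a file system
    # cat $HOME/smit.script        # the actual commands SMIT ran, reusable in scripts
    # cat $HOME/smit.log           # a full log of SMIT activity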

    49. SMIT (Systems Management Interface Tool) (2)

    50. Web Systems Manager IBM’s more powerful web/Java systems administration and management tool Basis of future IBM administration tools Very powerful A Java client application install is required

    51. AIX Threading (1) AIX 5L supports Unix 98 threading. By default, 8 “user mode” threads are mapped to 1 “kernel thread” to avoid mode switches. On AIX this can cause thread synchronisation issues that affect the use of WebSphere MQ. Source: “AIX 5L Porting Guide” http://www.redbooks.ibm.com SG246034.pdf & APARs IY29933, IY25655, and others The recommended IBM workaround is to force 1:1 user:kernel threads, and thus mode switches, for ALL processes on the box, which slows it down. /etc/environment: AIXTHREAD_SCOPE=S and AIXTHREAD_MNRATIO=1:1 This is overkill! Fixed by WebSphere Platform Messaging in WAS 6, as MQ support is inside the JVM and shares the same user mode thread pool.

    52. AIX Threading (2) AIXTHREAD_SCOPE – How user mode threads are to be used P = Process Scope – implies m:n user mode threads are used S = System Scope – implies kernel threads are used for all threads, i.e. 1:1 AIXTHREAD_MNRATIO – For process scope, the ratio of kernel:user threads AIXTHREAD_SLPRATIO – For process scope, the number of kernel threads to support sleeping user mode threads SPINLOOPTIME – For SMP systems, controls how long user mode code will spin on a lock before yielding to the kernel YIELDLOOPTIME – The number of times to yield before going to sleep on a lock One option is to have the .profile files of the WAS and MQ users set these variables, and run WAS and MQ under those users. This avoids slowing every other process on the box down (see the sketch below).
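
    A minimal sketch of that per-user approach (the scope and ratio values are the IBM-recommended ones from the previous slide):
    # In the .profile of the users that run WAS and MQ, rather than in /etc/environment:
    export AIXTHREAD_SCOPE=S           # 1:1 kernel threads for this user's processes only
    export AIXTHREAD_MNRATIO=1:1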

    53. AIX Threading (3) Be that thread! A game for two people… To understand the problem get 16 plastic cups, two tubes of Smarties, and two people. Have each person take 8 cups and put a Smartie in each cup – one at a time. Periodically have each person swap their tube of Smarties with the other. When they swap their tubes of Smarties the filling of the cups has to stop, because each is dependent on the other completing before they can return to their task…

    54. AIX Threading (4) RECOMMENDATION: Move to WAS 6 and use the WebSphere 6 Messaging Engine to support higher predictable loads. Get rid of the AIXTHREAD_XXXX settings altogether MQ Clustering is also a lot easier as WAS High Availability management is available.

    55. WPM / JMS Messaging Engine WebSphere Application Server 5.1 Uses WebSphere MQ 5.3 Server or the WEMPS cut-down version This requires a lot (9 or more) of separate processes The WAS JVM is 1 process and MQ support is 9 or more, so a lot of task switching goes on between processes at the OS level for messaging, which affects performance. This can cause problems on some OSes, e.g. AIX, where special settings are needed for threading.

    56. WPM / JMS Messaging Engine (2)

    57. Network Buffer Cache (NBC) / Fast Response Cache Architecture (FRCA) To avoid multiple copies into and out of kernel memory, AIX 5 caches some data inside the kernel, close to the TCP/IP stack Kernel extension IBM HTTP Server (IHS) caches frequently used web pages inside kernel space HTTP GET is enhanced and handled by the TCP/IP stack before switching to user space Dynamic data can be cached as well via access to the FRCA programming API. WAS/IHS can use this mechanism for high performance Not quite the same as FRCA on other platforms (Fast Response Cache Accelerator)

    58. Fast Response Cache Architecture (FRCA) (2) For non-IHS web servers the frcactrl utility can be used to manage the cache Design applications to make best use of the cache, i.e. design for the WAS dynacache facilities and separate as much static content from the dynamic content as possible. Install the DynaCacheESI application in WAS For details of FRCA/NBC see AIX Differences Guide 5.2 Edition (http://www.redbooks.ibm.com SG245765) For WAS caching information and design details see http://www-128.ibm.com/developerworks/websphere/techjournal/0405_hines/0405_hines.html
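
    Within IHS itself, kernel caching is switched on with the AFPA directives in httpd.conf; a sketch, assuming an IHS level that ships AFPA support (check the directive names against your IHS documentation):
    AfpaEnable
    AfpaCache on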

    59. JVM Tracing and Profiling The OS has always supported tracing, with trace flags saying what to log AIX 5.2 has extended tracing to include information from the JVM. Profiling of processes using tprof now includes information from JVMPI at the method level, using the new –j option Can now mix and match tracing of external processes and WAS.
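
    A sketch of the sort of invocation involved (flags vary by AIX and Java level, so check the tprof documentation before relying on them):
    # tprof -ske -j -x sleep 60     # profile the whole system for 60 seconds, including Java methods via JVMPI
    # more sleep.prof               # the report is written as <program>.prof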

    60. JVM Tracing and Profiling (2)

    61. AIX and WAS in General

    62. AIX and WAS in General (2)

    63. What is Solaris? Sun’s Unix, currently at version 10 64-bit For the UltraSPARC RISC, Intel x86, and AMD64 processor families Originally BSD-derived (SunOS), now built on System V Release 4 – the Unix by which others are measured Sun Value Add Zones/Containers (i.e. lightweight VMs) DTrace tracing Solaris Management Console Management

    64. Solaris on SPARC Solaris is available for the UltraSPARC and Intel/AMD processor families, but WAS only supports the high-end UltraSPARC family. The UltraSPARC is a 64-bit RISC processor UltraSPARC processors differ from other RISC processors in having many more registers, which are renamed on a function call as part of the register windows scheme The function result is left in a register for the calling function as the window moves back Register windows are faster at handling function calls than other processors until the call depth gets high – at which point normal memory stack usage results The compiler can optimise the register usage, but avoid recursion and deep call stacks to get maximum performance

    65. Solaris on SPARC (2)

    66. Solaris Zones / Containers Zones are a separate process tree and file system running inside the “global zone”, i.e. a lightweight VM WAS, DB2, Oracle, IHS can be run in a zone Each zone is secure and appears to be a complete machine Each zone has its own hostname and IP address Zones cannot access the enclosing global zone so are good as “bastion” or “jail” hosts A zone can be rebooted and shutdown independently of the global zone This is really powerful and can be used to consolidate a number of applications onto a single machine WAS and IHS are installed in the global zone and shared into each zone instance as read-only. This aids in provisioning new servers and emulating a complex infrastructure on a single machine. See http://www.big-bubbles.fluff.org/blogs/bubbles/archives/000344.html

    67. Solaris Zones / Containers (2) To create a zone… First create a file system area within the global zone (i.e. the main OS) for the new zone to reside in. This is just a directory; make sure only root can access it. Use the zonecfg tool and the create option to create a new zone. Give it an IP address and say how it maps to the real machine. Give it the path to the file system created above. Map packages/directories from the global zone to it. Use the zoneadm tool to “install” the packages above in the zone. Use the zoneadm tool to “boot” the zone. To the outside world the new zone is an independent machine (see the sketch below). Install WAS, IHS, DB2, and Oracle as normal – do check the web for known issues, though all of these will work in zones. See http://www.blastwave.org/docs/Solaris-10-b51/DMC-0002/dmc-0002.html and http://www.bigadmin.com
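
    A minimal sketch of those steps (the zone name, path, IP address and network interface are all hypothetical):
    # mkdir -m 700 /zones/waszone1
    # zonecfg -z waszone1
    zonecfg:waszone1> create
    zonecfg:waszone1> set zonepath=/zones/waszone1
    zonecfg:waszone1> add net
    zonecfg:waszone1:net> set address=192.168.1.50
    zonecfg:waszone1:net> set physical=hme0
    zonecfg:waszone1:net> end
    zonecfg:waszone1> verify
    zonecfg:waszone1> commit
    zonecfg:waszone1> exit
    # zoneadm -z waszone1 install
    # zoneadm -z waszone1 boot
    # zlogin -C waszone1            # connect to the zone console to finish first-boot configuration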

    68. Solaris Zones / Containers (3)

    69. Solaris Zones / Containers (4)

    70. Solaris Zones / Containers (5)

    71. DTrace Tracing and Profiling A Solaris 10 new feature Allows tracing and profiling through large parts of the kernel and into other areas, with minimal or no measurable overhead. Designed for use in production at runtime – not just a development tool! 30,000 “probes” are built into the OS, but new ones can be added Kernel extensions can add new probes which can then be integrated with existing ones A DTrace “scripting” language called “D” can be used to build complex tracing functions New traces can be added through kernel extensions Kernel extensions exist for the 1.4.2 (with JVMPI) and Java 5 (with JVMTI) JREs Using the Java kernel extensions the mapping between Java, process, and kernel code can be followed. The “DTrace” script controls what is logged.

    72. DTrace Tracing and Profiling (2) Each DTrace script makes use of a probe to identify what to trace and a predicate to define when to trace. Each probe consists of a provider, a module, a function, and a name. Some of these may be blank or not apply. Pattern matching can be used to trace more than one of these at a time. For example, to trace all system calls where the function name includes “sock”, i.e. to trace all socket calls you could use: syscall::*sock*:entry { trace(timestamp); } Here, “syscall” is the provider, there is no module, all functions with “sock” in the name are being traced, and we are probing for entry into the function. The “trace” call is an inbuilt function, although many other functions such as “printf” are available, and “timestamp” is a variable that allows checking of the time. See http://www.bigadmin.com for example DTrace scripts and a tutorial

    73. DTrace Tracing and Profiling (3) Scripts can be simple or complex, and include case statements, loops, etc.
    Files opened by process:
    dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'
    Pages paged in by process:
    dtrace -n 'vminfo:::pgpgin { @pg[execname] = sum(arg0); }'
    Oracle Client Monitoring:
    static struct sqlexd {
        unsigned int sqlvsn;
        unsigned int arrsiz;
        unsigned int iters;
        unsigned int offset;
        unsigned short selerr;
        unsigned short sqlety;
        unsigned int occurs;
        short *cud;
        unsigned char *sqlest;
        char *stmt;
        void *sqladtp;
        void *sqltdsp;
        void **sqphsv;
        unsigned int *sqphsl;
        int *sqphss;
        void **sqpind;
        int *sqpins;
        unsigned int *sqparm;
        unsigned int **sqparc;
        unsigned short *sqpadto;
        unsigned short *sqptdso;
        void *sqhstv[4];
        unsigned int sqhstl[4];
        int sqhsts[4];
        void *sqindv[4];
        int sqinds[4];
        unsigned int sqharm[4];
        unsigned int *sqharc[4];
        unsigned short sqadto[4];
        unsigned short sqtdso[4];
    };
    pid$1::sqlcxt:entry
    {
        this->s = (struct sqlexd *) copyin(arg2, sizeof(struct sqlexd));
        self->query = copyinstr((uintptr_t)this->s->stmt);
        printf("%s: %s\n", execname, self->query);
        self->ts = timestamp;
    }
    pid$1::sqlcxt:return
    /self->ts/
    {
        @c[self->query] = count();
        @time[self->query] = quantize((timestamp - self->ts)/1000);
        self->ts = 0;
        self->query = 0;
    }
    END
    {
        printf("\nQuery counts:\n");
        printa(@c);
        printf("\nQuery times(us):\n");
        printa(@time);
    }

    74. DTrace Tracing and Profiling (4) DTrace can be used to trace the interaction between any Java app and the system as a whole. The DTrace probes for the JVM are:
    provider dvm {
        probe vm__init();
        probe vm__death();
        probe thread__start(char *thread_name);
        probe thread__end();
        probe class__load(char *class_name);
        probe class__unload(char *class_name);
        probe gc__start();
        probe gc__finish();
        probe gc__stats(long used_objects, long used_object_space);
        probe object__alloc(char *class_name, long size);
        probe object__free(char *class_name);
        probe monitor__contended__enter(char *thread_name);
        probe monitor__contended__entered(char *thread_name);
        probe monitor__wait(char *thread_name, long timeout);
        probe monitor__waited(char *thread_name, long timeout);
        probe method__entry(char *class_name, char *method_name, char *method_signature);
        probe method__return(char *class_name, char *method_name, char *method_signature);
    };
    The DTrace kernel module and VM agent JNI C source code can be found at http://blogs.sun.com/roller/page/kto/20050509#using_vm_agents
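
    A sketch of using these probes from the command line, assuming the dvm agent is loaded in the target JVM (the process id is illustrative, and the probe usage should be checked against the agent documentation):
    # dtrace -p 1234 -n 'dvm$target:::gc-start { self->ts = timestamp; } dvm$target:::gc-finish { @["gc pause (ns)"] = quantize(timestamp - self->ts); }'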

    75. DTrace Tracing and Profiling (5) Note that the shared object “VM agent” is a JNI library so must be loaded on the command line as “java -Xrundvmpi[:options] ...” where the options are:
    all            same as: unloads,allocs,stats,methods
    help           print help message
    unloads        track class unloads
    allocs         track object allocations and frees
    stats          generate heap stats after each GC
    methods        generate method entry/exit events
    exclude=name   exclude class names
    The default is none of unloads, allocs, stats, or methods. The shared library name is libdvmpi.so for 1.4.2/JVMPI. Tiger/Java 5 will use libdvmti.so and the option -Xrundvmti[:options]. Java 6 will include the DTrace probes as shipped, so no shared object will be required.
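
    Illustrative launch commands (the application jar name is hypothetical):
    # java -Xrundvmpi:allocs,stats -jar MyApp.jar    # 1.4.2 / JVMPI agent
    # java -Xrundvmti:methods -jar MyApp.jar         # Java 5 / JVMTI agent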

    76. DTrace Tracing and Profiling (6)

    77. Solaris Management Console 2.1 Sun’s powerful web/Java systems administration and management tool Basis of future Solaris administration tools Very powerful Similar functionality to the IBM AIX WebSM tool

    78. Solaris and WAS in General

    79. What is HP-UX? HP’s Unix, currently at version 11i v2 64-bit For the Intel Itanium (Integrity) and PA-RISC (HP-9000) processors. System V-derived Similar to AIX in many ways – particularly the LVM Possibly the most reliable Unix HP Value Add Logical Volume Management, plus licensed Veritas VxFS/VxVM technology HP Virtual Server Environments – a mix of the AIX-like and Solaris-like approaches HP OpenView, HP Systems Insight Manager and SAM Management

    80. HP-UX Processors PA-RISC is a 64-bit RISC processor very similar to the POWER or PowerPC processors The Intel Itanium processor is a Very Long Instruction Word (VLIW) processor running the EPIC instruction set EPIC = Explicitly Parallel Instruction Computing It can also run PA-RISC code Both RISC and VLIW processors rely heavily on compilers (or the JVM for Java) to achieve high performance RISC processors rely on the compiler to order instructions to avoid stalls VLIW processors rely on the compiler to tie groups of instructions into “bundles” that can run in parallel independently The Itanium processor uses predication for performance, i.e. multiple possible paths through a branch are run in parallel, with the test condition evaluated at the end of the branch; this avoids having to wait for a test to complete to know which branch to take.

    81. HP-UX Logical Volume Manager (LVM) Physical disks are referred to as “physical volumes”. Logical volumes are the “virtual disks” onto which the JFS/VxFS file systems are put, and which are mapped to a number of “extents” of the physical volumes. The HP-UX Journaled File System (JFS) is a licensed Veritas VxFS file system derivative. JFS file systems are placed on top of logical volumes. As logical volumes are virtual they can be resized easily, and can support mirroring internally. Logging for HP-UX JFS/VxFS occurs on the same volume, to the “intent log”. A volume group is built from a group of physical volumes and contains the logical volumes and file systems; the most important one for the running of the system is “vg00”.
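
    A minimal sketch of growing a file system under HP-UX LVM (the logical volume and sizes are illustrative; growing a mounted VxFS file system online requires the OnlineJFS product, and the sizes/units should be checked against lvextend(1M) and fsadm(1M)):
    # vgdisplay -v vg00                   # show the volume group, its logical volumes and free extents
    # lvextend -L 2048 /dev/vg00/lvol5    # grow the logical volume to 2048 MB
    # fsadm -F vxfs -b 2097152 /opt       # grow the file system to match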

    82. HP Virtual Server Environment Not all features are available on all machine types (i.e. PA-RISC or Itanium) Most available on Superdomes…. Multiple levels of virtualisation ranging from physical, to logical (like AIX), to secure container virtual machines (like Solaris). Some machines can have some CPUs, memory, and I/O PHYSICALLY isolated to form “nPars”, “node partitions”, or “hard partitions”. Each subset of hardware can run its own set of images Each nPar can even run “vPars” or “soft partitions” – each with its own images. “Soft partitions”, “virtual partitions”, or “vPars”, are similar to AIX DLPARs. OS Images are mapped to logical processors which run on physical processors. Can run different OS images in each partition Virtual “Resource Partition” / “HP Integrity Virtual Machines” create environments in the main OS image, with grouped logical resources A “Resource Partition” can be assigned to a set of applications to give them the appearance of having a dedicated OS environment. Like Solaris Containers/Zones Handled by the Virtual Server Environment Process Resource Manager (PRM) On Itaniums partitions can run Windows or Linux as well as HP-UX.

    83. HP Virtual Server Environment (2)

    84. HP Virtual Server Environment (3)

    85. HP Virtual Server Environment (4)

    86. How is HP-UX Managed? HP-UX has its own built-in tools to make it easier to manage The SAM (System Administration Manager) tool offers a powerful menu driven system as a front-end for management commands It can be used to learn HP-UX and generate scripts by examining the log file The HP Systems Insight Manager tool is a multi-platform management tool This will be the tool HP will build upon for the future Can be centralised Runs at the command line or as a J2EE web application The Software Depot (SD-UX) manages installations and upgrades These tools integrate with the HP OpenView enterprise management tools
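
    A small sketch of the Software Distributor commands behind this (the depot path and product name are illustrative):
    # swlist -l product                      # list installed products
    # swinstall -s /depot/11i WASPREREQS     # install a product from a depot
    # swverify \*                            # verify the installed software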

    87. HP Systems Administration Manager HP’s systems administration and management tool Very similar to AIX SMIT Works as both a command line tool and an X-Windows tool Can view the underlying commands run in the SAM log file

    88. HP Systems Insight Manager HP’s general systems administration and management tool for multiple platforms. Web-based and command line tool Can run on a central management server (CMS) Uses a DBMS such as PostgreSQL Uses Tomcat as its web application server

    89. Managing WAS on HP-UX

    90. HP-UX and WAS in General
