
Montreal Tivoli User Group Large ITM v6.1 how-to and pitfalls to avoid


Presentation Transcript


  1. Montreal Tivoli User Group: Large ITM v6.1 how-to and pitfalls to avoid
Michael Tan mtan@ca.ibm.com March 2007

  2. Agenda TEMS optimization TEPS optimization TDW optimization

  3. Differences between ITM6.1 and ITM5.x/DM3.7 Company Design Performance

  4. A TEMS system
• 4000 agents per TEMS System with data collection
  • This guideline is used to determine the overall size of a TEMS System. A TEMS System is defined as all of the components of the ITM 6.x environment (hub TEMS, remote TEMS, TEPS, TEP, etc.). The 4000-agent figure is for environments that have data collection enabled; if data collection is not enabled, a TEMS System could support up to 8000-10000 agents.
  • It is important to note that a machine may run more than one agent, so the number of agents should not be misinterpreted as the number of machines that can be supported by a TEMS System.
• Much of this has already been reviewed in TTEC calls, Tivoli Advisor, etc.

  5. A TEMS system
• 15 is the maximum number of remote TEMS supported
  • This is the maximum number of remote TEMS in a TEMS System, i.e. the number of remote TEMS that can be connected to a hub TEMS. It is not a hard limit, but rather a scaling number if the remote TEMS are fully loaded.
• 50 concurrent users per TEPS
  • Each TEPS can support up to 50 concurrent users. Multiple TEPS can be used in a TEMS System to support additional users.

  6. A TEMS system
• 4000 agents per TEMS System with data collection
• 1 hub TEMS
• 10 remote TEMS
• 1 TEMS manages 400 agents
• 1 TDW/S&P agent (on one physical machine)
• Much of this has already been reviewed in TTEC calls, Tivoli Advisor, etc.

  7. 8000 (4000 + 4000) agents design example: 1) still one S&P agent; 2) 2 Warehouse Proxy agents

  8. Install the hub TEMS first. Use the same user id/pw for the TDW and the same encryption key.

  9. 90% of problems are due to misunderstanding and incorrect setup
• Project manager and project plan
• Architect and architecture design
• QA/testing
• Deployment plan and document

  10. THE RIGHT WAY CONTINUES:
• Project target timelines
• Scope
• Development of an example project plan
• Importance of an architectural design document
• Resource monitoring requirements
• Application monitoring requirements
• Data warehousing requirements (RDBMS sizing, WPA location, which attribute groups to collect)
• Firewall requirements (NAT, FWGP, EPHEMERAL)
• Deployment mechanism: local install, remote install
• Network constraints: bandwidth
• High availability requirements
• Disaster recovery plans
• Test and QA facilities: fixpacks, etc.

  11. Tivoli Enterprise Monitoring Server (TEMS)

  12. Tivoli Enterprise Monitoring Server (TEMS)
Function: Serves as the focal point for managing the environment, communicating with agents and other monitoring servers in a hierarchical configuration
OS platforms supported:*
Windows
• Windows 2000 Server and Advanced Server
• Windows 2003 Server SE and EE (32-bit) with Service Pack 1
Unix
• AIX V5.2 and V5.3 (32/64-bit)
• Solaris Operating Environment V9 and V10 (32/64-bit)
Linux on Intel
• RedHat Enterprise and Desktop Linux 4 Intel
• SUSE Linux Enterprise Server 9 Intel
Linux on zSeries
• RedHat Enterprise Linux 4 for zSeries 31-bit
• SUSE Linux Enterprise Server 8 and 9 for zSeries 31-bit
z/OS
• z/OS 1.4, 1.5, 1.6 and 1.7
CPU requirements:
• Intel: 1 GHz minimum, 2 GHz or faster recommended
• RISC: 500 MHz minimum, 1 GHz or faster recommended
• Single processor is sufficient
Memory requirements:
• 50-300 MB process size (kdsmain), depending on the size of the monitored environment
• The TEMS machine should have 300 MB of available memory above the needs of the operating system and other concurrent applications
Disk requirements: 250 MB
* Platform list taken from the ITM 6.1 Installation and Setup Guide. Ongoing certification will add platforms; see the Platform Support matrix for details.

  13. TEMS Deployment Recommendations
• To minimize performance impact on z/OS systems, where possible, connect z/OS-based agents to a TEMS on a distributed platform (Windows, Linux, AIX, Solaris)
• OMEGAMON XE for z/OS and OMEGAMON XE for Storage agents require a local TEMS (on the same LPAR)
• Workloads that affect the performance of the remote TEMS:
  • TEP client requests for agent data
  • Situation, event and policy processing
  • Heartbeat (to a small extent)
  • Web Services requests
• Remote TEMS CPU utilization, memory usage and restart time increase in proportion to the number of agents being managed

  14. TEMS Deployment Recommendations …
• The TEMS workload will differ from one environment to the next, depending on:
  • The agent types being monitored
  • The situation, event and policy workloads
• We suggest monitoring the CPU and memory usage of the remote TEMS (kdsmain process)
  • Monitor using the appropriate ITM 6.1 OS agent
  • Perhaps create a managed system list for ITM 6.1 platform servers
• Steady-state CPU utilization should typically be low
  • 10 to 20% or less (of a single processor, averaged over time) is a reasonable target
  • This allows the TEMS to perform well during bursts of activity (transient workloads)
• Performance of transient workloads is important
  • Server restart, situation distribution, etc.
• If CPU utilization is low most of the time (on a 2 GHz or better Intel processor, or a 1 GHz or better RISC processor) and if server restart time and other transient workloads are acceptable, consider increasing the number of agents per remote TEMS

  15. TEMS Deployment Recommendations …
• Deciding where to place remote TEMS
  • If agents are geographically located in several medium-large sites:
    • Consider placing a remote TEMS at each site to minimize latency and bandwidth usage for heartbeat, situation and policy processing
    • Network speed between central and remote sites will affect response time of TEP requests for agent data
  • If agents are geographically dispersed (e.g., bank branches):
    • Consider placing remote TEMS in a central location
    • Consider network bandwidth and the potential for sharing network links with other applications
    • Network speeds will affect response time of TEP requests for agent data
• For backup in a large environment with multiple remote TEMS:
  • Consider having a spare remote TEMS with no agents reporting to it
  • All agents could use it as a backup remote TEMS

  16. TEMS Deployment Recommendations …
• Configuring the agent heartbeat interval
  • The heartbeat mechanism monitors the status of remote monitoring servers and monitoring agents
  • The hub TEMS maintains status for all agents
  • Remote TEMS receive heartbeat requests from agents that are configured to access them
  • Remote TEMS offload processing from the hub TEMS, only sending status changes to the hub
• Changes to offline status typically require two missed heartbeat requests for the status to change
  • Offline status is indicated by the node being "grayed out" in the portal client Navigator view
  • If the heartbeat interval is set to 10 minutes, an offline status change would be expected to take between 10 and 20 minutes before it is reflected in the TEP client Navigator view
• The default agent heartbeat interval is 10 minutes
  • Consider lowering the heartbeat interval for agents running on critical systems
  • Heartbeat intervals of 3 minutes do not cause a significant increase in TEMS CPU utilization
• The agent heartbeat interval can be set using the CTIRA_HEARTBEAT environment variable
  • KBBENV
  • KNTENV for the Windows OS agent
  • Agent .ini file for the Linux and Unix agents (e.g., klz.ini, kux.ini)
  • The agent has to be stopped and restarted for the setting to take effect
• If increasing the heartbeat interval, be careful not to exceed "idle connection" timeout values for firewalls
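As a sketch, lowering the heartbeat for a Linux OS agent to 3 minutes means adding a single line to its .ini file; the variable name is from the slide above, while the file path and 3-minute value are illustrative:

```ini
; klz.ini fragment (hypothetical): report a heartbeat every 3 minutes
; instead of the 10-minute default
CTIRA_HEARTBEAT=3
```

The agent must be stopped and restarted before the new interval takes effect.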

  17. Tivoli Enterprise Portal Server (TEPS)

  18. Tivoli Enterprise Portal Server (TEPS)
Function: Services requests from TEP clients and communicates directly with the hub monitoring server
OS/DB platforms supported:*
Windows
• Windows 2000 Server and Advanced Server
• Windows 2003 Server SE and EE (32-bit) with Service Pack 1
• Database: DB2 UDB version 8 Fix Pack 10 or Microsoft SQL Server version 2000
Linux on Intel
• RedHat Enterprise and Desktop Linux 4 Intel
• SUSE Linux Enterprise Server 9 Intel
• Database: DB2 UDB version 8 Fix Pack 10
Linux on zSeries
• SUSE Linux Enterprise Server 9 for zSeries 31-bit
• Database: DB2 UDB version 8 Fix Pack 10
CPU requirements:
• Intel: 1 GHz minimum, 2 GHz or faster recommended
• RISC: 500 MHz minimum, 1 GHz or faster recommended
• Single processor is sufficient
Memory requirements:
• 60-300 MB process size for KfwServices, depending on the size of the monitored environment
• The database requires extra memory
• The machine should have 700 MB of available memory above the needs of the operating system and other concurrent applications
Disk requirements: 800 MB (including the TEPS database)
* Platform list taken from the ITM 6.1 Installation and Setup Guide. Ongoing certification will add platforms; see the Platform Support matrix for details.

  19. TEPS Deployment Recommendations
• Database usage is light, so database tuning is not a concern
• The TEPS can reside on the same machine as the hub TEMS
  • If running both on the same machine, consider a 2-way processor for larger environments to minimize the impact of transient busy periods (server restart, situation distribution to a large number of agents, etc.)
• CPU usage depends mainly on the frequency of TEP client requests

  20. Tivoli Enterprise Portal (TEP)

  21. Tivoli Enterprise Portal (TEP) client
Function: Java-based user interface for viewing and monitoring the enterprise
OS platforms supported:*
Windows - desktop, or browser via the Internet Explorer Java plug-in
• Windows 2000 Professional, Server and Advanced Server
• Windows XP
• Windows 2003 Server SE and EE (32-bit) with Service Pack 1
Linux on Intel - desktop only
• RedHat Enterprise and Desktop Linux 4 Intel
• SUSE Linux Enterprise Server 9 Intel
CPU requirements:
• 1 GHz minimum, 2 GHz or faster recommended
• Single processor is sufficient
Memory requirements:
• 150-300 MB process size, depending on the size of the monitored environment
• The client machine should have 300 MB of available memory above the needs of the operating system and other concurrent desktop applications
* Platform list taken from the ITM 6.1 Installation and Setup Guide. Ongoing certification will add platforms; see the Platform Support matrix for details.

  22. TEP Deployment Recommendations
• TEP client processing is the biggest contributor to response time
  • Choosing a faster processor will provide better response time
  • A faster single processor is preferable to a slower multi-processor
  • If other desktop applications need to run concurrently, a multi-processor would improve response time
• For the desktop client, the default Java heap size parameters are -Xms64m -Xmx256m
  • -Xms64m specifies an initial Java heap size of 64 MB; if not specified, the default for the 1.4.2 JRE is 1 MB, and specifying a higher value (like 64 MB) improves startup time for the Java virtual machine
  • -Xmx256m specifies a maximum Java heap size of 256 MB
• For the browser client, you must increase the Java heap size parameters for the Java plug-in over the default values:
  • Open the Windows Control Panel.
  • Double-click Java Plug-in for IBM Java version 1.4.2.
  • Click the Advanced tab.
  • Select the IBM JRE 1.4.2 from the Java Runtime Environment list.
  • In the Java Runtime Parameters field, enter -Xms64m -Xmx256m
  • Click Apply.
• For small environments (< 400 agents), a maximum Java heap size of 128 MB is sufficient
• Make sure that the machine has enough physical memory available for the entire Java heap to reside in memory
  • Otherwise you could see excessive paging activity
• If response time slows and you see high CPU activity for the TEP client, the Java heap size may be too small
• The ITM 6.1 Problem Determination Guide has recommendations about Java heap size tuning

  23. TEP Deployment Recommendations …
• Network traffic caused by typical OS agent workspace requests varies widely in size, depending on how many rows of data are returned
  • Workspaces requesting data from a large number of instances (Process, etc.) cause more network traffic
  • An average size of 150 KB per request should be useful for planning purposes
• Use LAN connectivity for the TEP clients where possible
  • Using a 1.1 Mbps DSL connection adds 2 to 4 seconds to response time for typical OS agent workspace requests
• Avoid using dial-up connections for the TEP client
  • If dial-up is the only connection available, consider making a remote desktop connection (using terminal services) to a machine with LAN connectivity to the TEPS
  • For TEP client login and agent workspace requests, network traffic from the end user machine is considerably lower when using a remote desktop connection to another machine running the TEP client
• For connection speeds less than 10 Mbps, consider using the TEP desktop client to minimize network traffic at user login
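The 150 KB-per-request planning figure can be turned into a rough bandwidth estimate. The sketch below assumes a TEPS at its 50-user maximum, with each user opening one workspace per minute (the request rate is an assumption, not from the slides):

```shell
users=50        # concurrent TEP users (the per-TEPS maximum)
req_kb=150      # average workspace request size in KB (planning figure above)
reqs_per_min=1  # assumed: one workspace request per user per minute
# average sustained load on the network path between TEPS and clients
kb_per_sec=$(( users * req_kb * reqs_per_min / 60 ))
echo "average load: ${kb_per_sec} KB/s"
```

Roughly 125 KB/s is negligible on a LAN, but worth budgeting for when TEP traffic shares a WAN link with other applications.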

  24. Tivoli Data Warehouse

  25. Tivoli Data Warehouse (TDW)
Function: Database to store historical data collected across the monitoring environment
Database platforms supported:*
• DB2 UDB version 8 Fix Pack 10
• Microsoft SQL Server version 2000
• Oracle 9.2 or 10.1
CPU requirements:
• Intel: 2 GHz or faster recommended
• RISC: 1 GHz or faster recommended
• 2-way processor recommended for small to medium environments (10 GB or less of new data per day)
• 4-way processor recommended for large environments (greater than 10 GB of new data per day)
Memory requirements:
• 2 GB RAM minimum to allow for a large bufferpool
• 4 GB RAM recommended if the Summarization and Pruning agent is on the same machine
Disk requirements:
• Number of disks depends on the anticipated size of the database
• Use the ITM 6.1 / TDW 2.1 Warehouse Load Projections spreadsheet to estimate the database size, available from Tivoli OPAL at http://catalog.lotus.com/wps/portal/tm
• More, smaller disks are preferable to fewer, larger disks (to enable more parallel I/O)
* Platform list taken from the ITM 6.1 Installation and Setup Guide. Ongoing certification will add platforms; see the Platform Support matrix for details.

  26. Tivoli Data Warehouse (TDW): recommended number of disks*
* Table 8 from the ITM 6.1 Installation and Setup Guide

  27. Warehouse Solution: DB2 TDW

  28. Historical Data Background
• ITM 6.1 provides two types of historical data
• Short term - less than 24 hours old
  • Stored in a binary file on the agent system or on the TEMS system, depending on how the user configured it; the agent system is recommended
  • One binary file per attribute group
  • Binary file managed by the TEMA framework
  • The Persistent Data Store (PDS) is used instead of a file on z/OS
• Long term - more than 24 hours old
  • Placed into the TDW by the WPA (Warehouse Proxy Agent) at an interval specified by the user: the warehouse interval
• Caveats:
  • The historical data configuration allows data less than 24 hours old to be placed into the TDW (warehouse interval = 1 hour)
  • Binary files are pruned once the data has been successfully inserted into the TDW and is more than 24 hours old

  29. Historical Data Background - continued
• TDW storage requirements are based on:
  • number of attribute groups configured
  • size of each attribute group
  • number of agents (instances of agents)
  • historical collection cycle (5, 15, 30, 60 min)
  • pruning and aggregation parameters
• Each agent install guide should document the attribute group sizes
• Use the ITM 6.1 / TDW 2.1 Warehouse Load Projections spreadsheet on the OPAL website to estimate database requirements: http://catalog.lotus.com/wps/portal/tm/
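Those factors multiply together. As a hypothetical back-of-envelope check before running the OPAL spreadsheet, the daily raw volume for a single attribute group can be sketched as follows (the 500-byte row size is an assumption; take the real value from the agent install guide):

```shell
agents=400        # agent instances collecting this attribute group
interval=15       # historical collection interval in minutes
row_bytes=500     # assumed per-row size; see the agent install guide
# one row per agent per collection cycle; 1440 minutes per day
rows_per_day=$(( agents * (1440 / interval) ))
mb_per_day=$(( rows_per_day * row_bytes / 1024 / 1024 ))
echo "${rows_per_day} rows/day, ~${mb_per_day} MB/day before summarization"
```

Repeat per attribute group and add headroom for indexes and aggregation tables; the spreadsheet remains the authoritative sizing tool.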

  30. Starting Historical Collection - 1
The History Collection Configuration panel can be launched by clicking the calendar icon in the Tivoli Enterprise Portal.

  31. Starting Historical Collection - 2
• Switch between multiple product groups
• Choose the collection interval (5, 10, 15, 30 or 60 minutes)
• Choose the collection location (TEMS or TEMA)
• Choose the warehouse interval (1 hour, 1 day, OFF)
• Configure/Unconfigure multiple tables concurrently: creates/deletes a UADVISOR entry in the TEMS TSITDESC table (QA1CSITF.DB)
• Display recommended attributes to collect
• Start/Stop: creates/deletes a UADVISOR entry in the TEMS TOBJACCL table (QA1DOBJA.DB)
• Refresh Status: issues a query against the TEMS SITDB table (QA1CRULD.DB) looking for a UADVISOR entry for this table

  32. Starting Historical Collection - 3
• If you have multiple TEMS:
  • When you click Start you will see the different TEMS available; choose the one you want to start the collection on.
  • When you select an attribute group, a panel reminds you on which TEMS this collection has been started.

  33. Seeing Historical Data When an attribute has been chosen for historical collection, the corresponding workspace shows a clock icon. Clicking on this icon allows you to modify the data you can see on the screen.

  34. Selecting the Time Span
• Selecting less than 24 hours directs the query to the historical binary file.
• Selecting more than 24 hours directs the query to the warehouse database.
• There can be only one history situation definition per table; it has the same properties on all agents on all TEMS.

  35. Product Configuration Files (.HIS files) • Used by the history configuration program to determine which products are eligible for history collection. • Contains the history situation definition. • Must be supplied by each product except for the Universal Agent, which doesn’t require a .HIS file.

  36. Prerequisites on the Database Server
The warehouse database needs to be created before installing/configuring the Warehouse Proxy agent.
• Create the data warehouse database with UTF-8 encoding.
• The default database name is WAREHOUS; this can be changed if needed.
• For example, the following command creates a DB2 database:
db2 create database WAREHOUS using codeset utf-8 territory US
• And the following commands are required to set up the listener for DB2:
db2set -i db2inst1 DB2COMM=tcpip
db2 update dbm cfg using SVCENAME 50000
db2stop force
db2start

  37. Userid/pw for a DB2 TDW database
A user needs to be created to access the TDW database. The default userid/pw is itmuser/itmpswd1.
DB2:
• The user is an OS user.
• When the TDW is DB2, the WPA checks that a buffer pool with an 8K page size exists, as well as 3 table spaces using this buffer pool. If they do not exist, the WPA tries to create them. To create them, the user needs adequate privileges. The easiest way is to make the OS user a member of the Administrators group on Windows, or a member of the DB2 administrator group on Unix (found by running: db2 get dbm cfg | grep SYSADM).
• If the customer does not want to give administrator privileges to itmuser, the buffer pool and table spaces need to be created up front so that the WPA does not fail when starting.

  38. DB2 - creation of an 8K buffer pool and 3 table spaces
-- CREATE a buffer pool with an 8K page size
CREATE BUFFERPOOL ITMBUF8K IMMEDIATE SIZE 250 PAGESIZE 8 K;
-- CREATE a regular table space using the 8K buffer pool
CREATE REGULAR TABLESPACE ITMREG8K PAGESIZE 8 K MANAGED BY SYSTEM USING ('itmreg8k') BUFFERPOOL ITMBUF8K;
-- CREATE a system temporary table space using the 8K buffer pool
CREATE SYSTEM TEMPORARY TABLESPACE ITMSYS8K PAGESIZE 8 K MANAGED BY SYSTEM USING ('itmsys8k') BUFFERPOOL ITMBUF8K;
-- CREATE a user temporary table space using the 8K buffer pool
CREATE USER TEMPORARY TABLESPACE ITMUSER8K PAGESIZE 8 K MANAGED BY SYSTEM USING ('itmuser8k') BUFFERPOOL ITMBUF8K;

  39. DB2 - minimum privileges for ITMUSER
• If the buffer pool and table spaces exist before the WPA connects, you can restrict the privileges of itmuser to CREATETAB and CONNECT.
• With the DB2 Control Center and db2admin privileges, connect to the WAREHOUS database and grant the CONNECT and CREATETAB authorities to ITMUSER.
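A minimal sketch of those grants in SQL, as an alternative to the Control Center (run while connected to WAREHOUS with administrative authority; statement form assumes DB2 UDB v8):

```sql
-- Hypothetical: restrict itmuser to the minimum authorities named above
GRANT CONNECT ON DATABASE TO USER itmuser;
GRANT CREATETAB ON DATABASE TO USER itmuser;
```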

  40. Collection and Configuration
• Historical collection and warehousing can considerably increase resource usage of the TEMS and agents.
• Only collect history data the user is interested in.
• Avoid blindly enabling ALL historical collection for a product unless it is really needed.
• Carefully consider where history data is to be collected (TEMS or agent).
• Consider how often to store, summarize, and prune collected data.

  41. History Collection at the Agent
• We normally recommend collecting history data at the agents.
• Reduces agent-to-TEMS network traffic, since history data is not sent to the TEMS.
• Decreases TEMS workload, especially when warehousing.
• Reduces history data file size, since only data for that agent or node exists in the file. This is critical when warehousing is enabled.

  42. History Collection at the TEMS
• Central control of history data files.
• Single point of file management when roll-off scripts or programs are used instead of warehousing.
• Needed when sites require unobtrusive or restricted resource usage on the agent machines.
• Sites using history warehousing with agents outside of firewalls do not require an additional network port to be opened in the firewall for Warehouse Proxy agent traffic.

  43. Errors in Database Initialization
Once the Warehouse Proxy service is started, multiple tests are done:
• Check that the WPA can connect to the database.
• If the database is Oracle or DB2, check that the encoding is set to UTF-8.
• If the database is DB2, check that a buffer pool with an 8K page size exists; if not, create one, as well as 3 new table spaces using the 8K buffer pool.
  • The buffer pool is called ITMBUF8K.
  • The 3 table spaces are called ITMREG8K, ITMSYS8K and ITMUSER8K.
• Create a database cache that contains a list of all the tables and columns that exist in the database.
If any one of these tests fails, a message is written to the log if the ERROR trace class is set. Messages also appear in the Event Viewer. The database tests are retried every 10 minutes until they succeed. There are 2 environment variables that can change this default setup:
• KHD_CNX_WAIT_ENABLE is set to Y by default, which makes the WPA wait before a retry. Setting this variable to N disables the wait between retries. Be careful when setting it to N, as you can easily generate a huge log file if the database tests fail on each retry.
• KHD_CNX_WAIT is the time in minutes to wait before trying to reconnect. The default is 10 minutes.
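For example, to keep the retry wait but shorten it during a maintenance window, the two variables above could be set in the WPA's hd.ini like this (the 2-minute value is illustrative, not a recommendation):

```ini
; hd.ini fragment (hypothetical values): wait between retries, but only 2 minutes
KHD_CNX_WAIT_ENABLE=Y
KHD_CNX_WAIT=2
```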

  44. WPA not exporting data
• The WPA may show a status of "started" but may not be connected to the database.
• At reboot, the WPA may start before the database services. The WPA then retries every 10 minutes to reconnect to the database, which is why the status remains "started" as long as the WPA did not crash.
• Always test the connection to the database in the Configuration panel, and inspect the C trace file for the lines:
  • Connection with Data source "jdbc:db2://localhost:60000/WAREHOUS" successful
  • Tivoli Export Server ready for Operations

  45. Database connection issues - 1: JDBC drivers
• Check the java trace file:
== 1 t=main java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:375)
at java.lang.ClassLoader.loadClass(ClassLoader.java:562)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:442)
at java.lang.ClassLoader.loadClass(ClassLoader.java:494)
at java.lang.Class.forName1(Native Method)
at java.lang.Class.forName(Class.java:180)
at com.tivoli.twh.khd.khdxjdbc.initJDBC(khdxjdbc.java:110)
== 2 t=main initJDBC : com.ibm.db2.jcc.DB2Driver
• This indicates that the correct JDBC drivers have not been added and saved successfully in the Configuration Panel. Most of the time the Add button has not been used.
• The added JDBC drivers can be seen in the hd.ini file, in the KHD_WAREHOUSE_JARS and KHD_CLASSPATH variables:
KHD_WAREHOUSE_JARS=/opt/IBM/db2/V8.1/java/db2jcc.jar,/opt/IBM/db2/V8.1/java/db2jcc_license_cu.jar
KHD_CLASSPATH=$CANDLEHOME$/$BINARCH$/bin/khdxjdbc.jar:/usr/opt/db2_08_01/java/db2jcc.jar:/usr/opt/db2_08_01/java/db2jcc_license_cu.jar:/opt/IBM/db2/V8.1/java/db2jcc.jar:/opt/IBM/db2/V8.1/java/db2jcc_license_cu.jar

  46. Database connection issues - 2: use another tool
• Test the database connection with another tool. For instance, SQuirreL can be used to connect to any database vendor using any JDBC drivers.
• http://squirrel-sql.sourceforge.net/
• Security issues such as insufficient user privileges can often be identified easily this way.

  47. Database connection issues - 3: socket errors
• Example of an error that will happen even with SQuirreL.
• Error in the C trace:
(448E63CD.0000-1:khdxjdbc.cpp,3851,"processJavaException") Exception message: com.tivoli.twh.khd.KHDException
+448E63CD.0000 at com.tivoli.twh.khd.khdxjdbc.getDbConnection(khdxjdbc.java:261)
(448E63CD.0001-1:khdxjdbc.cpp,3853,"processJavaException") Exception message: com.tivoli.twh.khd.KHDException
+448E63CD.0001 at com.tivoli.twh.khd.khdxjdbc.getDbConnection(khdxjdbc.java:261)
(448E63CD.0002-1:khdxdbb.cpp,599,"initializeDatabase") Connection with Datasource "jdbc:db2://localhost:60000/WAREHOUS" failed
• Error in the Configuration Panel/Agent Parameters, after clicking the "Test database connection" button:
"Database connection failed. IO Exception Socket to server localhost on port 60000."
• Note: with defect 30987, this error will be in the java trace file.

  48. Database connection issues - 4: listener not active
IO Exception Socket Error: the listener is not active.
Solution for DB2:
• db2set -i <instance> DB2COMM=tcpip. If the instance created is db2inst1, the command is: db2set -i db2inst1 DB2COMM=tcpip
• db2 update dbm cfg using SVCENAME <port>. If the port used is 60000, the command is: db2 update dbm cfg using SVCENAME 60000
• You can check the port used in the file /etc/services. For a DB2 instance called db2inst1 and port 60000, you should find: DB2_db2inst1 60000/tcp
• db2stop
• db2start

  49. Database connection issues - 5: getDbConnection
• Database connection failed. Error opening socket to server localhost/127.0.0.1 on port 60000 with message: connection refused. DB2Connection Correlator
Solution for DB2: DB2 is not started; use the following command:
• db2start
• Database connection failed. The application server rejected establishment of the connection. An attempt was made to access a database, WAREHOUS, which was not found.
Solution for DB2: the WAREHOUS database has not been created. Use the following command to create it:
• db2 create database WAREHOUS using codeset utf-8 territory US

  50. Database connection issues - 6: getDbConnection
Possible exceptions when using the Test Database Connection button in the Configuration panel:
• Database connection failed. Connection authorization failure occurred. Reason: password invalid.
Solution for DB2: check that the selected userid/pw has been created and can connect to the database:
• db2 connect to WAREHOUS user <userid> using <pw>
