Performance and Scalability in CA Clarity™ PPM


Performance and Scalability in CA Clarity™ PPM: Avoiding Common Configuration and Customization Pitfalls

Focus Area: Service & Portfolio Management

Kelly Limberg and Sean Harp

CA Development Engineering Services

Session Number: PP302PN


Abstract

Kelly Limberg

CA Technologies Development, Senior Engineering Services Architect

Sean Harp

CA Technologies Development, Senior Engineering Services Architect

CA Clarity™ PPM is a powerful application with nearly limitless configuration possibilities. While configuring Clarity for your business, it is important to approach your design with performance and scalability in mind. Join this session to learn how to avoid the common configuration and customization pitfalls that can result in poor performance and scalability down the road.


Agenda

  • Portlets and List Pages

  • NSQL Queries

  • Business Objects Reporting

  • Timeslicing

  • Rate Matrix Extraction

  • Process Design

  • GEL Scripting


Portlets and List Pages

  • Set “Don’t show results until I filter” and “Expand Filter”

    • Set for all costly Portlets, List Pages and Look-ups

    • Users can set a default if they wish to get instant results


Portlets and List Pages

  • Set Attribute Value Protection to a faster setting

    • The three options, from slowest to fastest:

      • Use display conditions and secured subpages to protect attribute values on this list (slow)

      • Use only secured subpages to protect attribute values on this list (fast)

      • Display all attribute values on this list (fastest)

  • Why?

    • Slow: for each row, Clarity evaluates an XML document containing the display conditions to determine which fields are eligible to be displayed on the list page

    • Fast: a field-level security check runs for each subpage/row combination

    • Fastest: only row-level security is applied

  • The "slow" setting can also have a major performance impact on Export to Excel operations


Portlets and List Pages

  • Remove unnecessary aggregation rows

    • An aggregation row is still calculated if it is configured, even when it is not marked "Show"

    • Very costly, because all rows are returned to the application tier for calculation


Portlets and List Pages

  • Aggregation rows, continued…

    • Create a separate summary portlet (e.g., a one-line portlet with totals)

    • Aggregation can then be done on the database

    • A page-level filter gives you the summary portlet alongside the detail rows, with no summarization in the detail portlet

  • Use page-level filters

    • Page-level filters provide the capability to filter all portlets present on the page
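The "aggregation on the database" point above can be sketched in plain SQL. This is an illustrative summary-portlet query; the table and column names are assumptions for illustration, not guaranteed Clarity schema:

```sql
-- Summary portlet query: let the database compute the totals,
-- instead of returning every row to the app tier for aggregation.
-- (Table and column names are illustrative.)
SELECT COUNT(*)          AS project_count,
       SUM(p.budget_cst) AS total_budget
  FROM odf_ca_project p
```

A one-line portlet sourced from a query like this replaces an aggregation row that would otherwise force the full result set back to the application.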


Portlets and List Pages

  • Avoid putting portlets on the home/overview page

    • A poorly designed filter (or no filter at all) may launch a query that returns all rows while the end user is trying to log in

    • Users perceive this as a login performance issue

    • Restrict the use of the "configure" option on portlets on users' home pages to further limit the number of columns that can be displayed

  • Minimize the number of columns your portlet returns

    • Limit the number of columns returned to 10-15

  • Do not use portlets as an "ad hoc" query vehicle

    • Avoid selecting extra columns in the query that are not displayed, just in case a user wants to add columns to the grid later


NSQL Queries

  • Analyze execution plans for queries that you develop for portlets or dynamic lookups, and ensure that the queries perform optimally

    • You may need indexes on custom attributes

  • Consider limiting the rows returned to prevent users from pulling back too much data

    • "ROWNUM < n" for Oracle

    • "SELECT TOP n" for SQL Server

  • All NSQL portlet queries should contain comment blocks detailing the use, author, and source portlet
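As a plain-SQL sketch of these points (NSQL-specific constructs omitted; the portlet name, author, column list, and index name are illustrative, not from the deck):

```sql
-- Portlet:  Active Investments (illustrative)
-- Author:   Jane Developer
-- Use:      Source query for the "Active Investments" list portlet
SELECT *
  FROM (SELECT inv.id, inv.name
          FROM inv_investments inv
         ORDER BY inv.name)
 WHERE ROWNUM < 501;

-- A supporting index on a custom attribute, if the execution plan shows
-- a full scan on the custom-attribute table (names are illustrative)
CREATE INDEX odf_proj_custom_idx ON odf_ca_project (my_custom_attribute);
```

The inline-view-plus-ROWNUM pattern caps the rows after ordering; on SQL Server the equivalent is `SELECT TOP n` in the outer query.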


Business Objects Reporting

  • Crystal Reports should be designed to pull data via direct SQL or a SQL stored procedure

    • Avoid using a universe for Crystal Reports

      • This keeps a report self-contained, or with only one dependent object: the stored procedure

      • Once a Crystal Report is built off a universe, it is tied to that universe by its CUID and cannot be repointed to a different universe

  • Do not use the legacy CA_Clarity.unv universe for creating reports, Crystal or WebI

    • This universe exists only to support legacy reports


Business Objects Reporting

  • Use SQL comments in report SQL statements and stored procedures indicating the report and author

  • Thoroughly test reports in an environment with data volumes equal to or greater than production

  • Ensure that required filter parameters are used when deploying reports as jobs in Clarity, so that a report cannot be run with wide-open parameters and affect system-wide performance


Timeslicing

  • Keep daily timeslices to a minimum!

    • Stay within 3 to 6 months in the past and 3 to 6 months in the future

    • Daily slices have a direct impact on Datamart run times

  • Don’t slice allocation data if you don’t use it

    • Set the “Number of Periods” to “0” for the Allocation slice request. This will minimize the amount of data that is being stored for Allocation slices.

  • Changing a base calendar forces a reslice of many slices:

    • Resource slices (prj_resources.pravailcurve)

    • Team slices (prteam.pralloccurve)

    • Team hard curves (prteam.hard_curve)

  • Can potentially result in millions of records being resliced

  • Changes to base calendars should be done on weekends


Rate Matrix Extraction

    • A full RME run is accomplished by selecting all options except Incremental Update Only.

      • Extract Cost and Rate Information for the Scheduler

      • Prepare Rate Matrix Data

      • Update Rate Matrix Data

    • The duration of the run will depend on the number of investments, resources, teams, assignments, and tasks

    • Run this job once per week, or during off-peak hours

  • If you can, delete old rows or matrices


Rate Matrix Extraction

    • An incremental RME run is accomplished by selecting all four options

      • The following three jobs update a project's last_updated_date field, which causes an incremental RME run to take longer than expected:

        • LDAP sync user job

        • Investment Allocation job

        • Earned Value update job

      • Schedule these jobs prior to a full RME run, followed by an incremental RME run

    • For Oracle the RME job will automatically parallelize based on the number of CPUs.


Implementing Auditing

    • Do not audit every attribute on an object.

    • Set a reasonable value for “Days after which audit records will be purged” for each object you are auditing.

      • This is the maximum number of days audit records are kept before they are removed by the Purge Audit Trail job

      • Do not leave the default (empty), which means infinite retention

  • Do not audit attributes that change very frequently

    • Do not audit an attribute that is updated nightly by a scheduled job.


Process Design

  • Create multiple generic process XOG users

    • Each process XOG user should be restricted to just the rights required for its particular area (e.g., XOGPROJ, XOGRESOURCE, etc.)

  • Keep start-condition, pre-condition, and post-condition expressions as simple as possible

    • For an on-update start condition, ensure the actions the process takes cause the start condition to evaluate to FALSE by the time the process finishes executing

      • This prevents the process from firing over and over again


Process Design

  • Keep the total number of active processes for an object to a minimum

    • Use a single "create" process with branching rather than multiple "create" processes

  • Use the Delete Process Instance job to clean up old process instances

    • Some may need to be kept for auditing purposes; identify processes that can be removed and delete them via the job once they have completed

    • Target frequently run processes, which are good candidates for deletion because they typically carry almost no auditing benefit

  • During background (BG) services startup, the process engine loads, compiles, and caches all active processes

    • This can cause an initial spike in database CPU usage


Process Design

    • Do not use <gel:log> excessively. Each log call results in a row in BPM_ERRORS.

    • Do not call <gel:log> within a loop with large numbers of iterations. This will affect the performance of your script.

    • Use a “debug” parameter on your GEL scripts and wrap your calls to <gel:log> with a <core:if>. This will allow you to enable logging when you need it.

      <core:if test="${debugFlag == 1}">
        <gel:log level="debug">My debug log message</gel:log>
      </core:if>


Process Engine Maintenance

  • For Oracle, run SHRINK on the NMS tables daily

    • These tables often get heavily fragmented due to bulk imports: they grow to consume many blocks, that data is eventually deleted, and normal usage does not need that many blocks

      • ALTER TABLE NMS_MESSAGES ENABLE ROW MOVEMENT

      • ALTER TABLE NMS_MESSAGES SHRINK SPACE CASCADE

      • ALTER TABLE NMS_MESSAGE_DELIVERY ENABLE ROW MOVEMENT

      • ALTER TABLE NMS_MESSAGE_DELIVERY SHRINK SPACE CASCADE


GEL Scripting

    • Using XOG in GEL Scripts

      • Create custom.properties files with URL, usernames and passwords stored outside of the database

        • xog.url=https://<servername>/niku/xog

        • xog.user=<xog username>

        • xog.pass=<xog password>

  • Add an entry to the BG service's JVM arguments that provides the location of the custom.properties file

    • -Dcustom.properties=d:/niku/clarity/config/custom.properties

    • This removes any file-location-specific information from the database schema. (Change the path as required; on Windows, you can use a UNC path to a shared file.) Re-deploy the BG service(s).


GEL Scripting

    • The GEL scripts themselves can utilize this sample code to call the parameters from the custom.properties file:

    <!-- Read the local env custom properties -->
    <!-- This will read the file defined by the -Dcustom.properties java argument -->
    <core:catch var="exception">
      <core:invokeStatic className="java.lang.System" method="getProperty" var="customPropertyFile">
        <core:arg value="custom.properties"/>
      </core:invokeStatic>
      <util:properties file="${customPropertyFile}" var="customProperties"/>
    </core:catch>

    <core:if test="${exception != null}">
      <gel:log level="ERROR">Unable to open custom properties file</gel:log>
      <gel:log level="ERROR">Caught Exception was: ${exception}</gel:log>
    </core:if>

    • Reference the property file values as follows:

    <gel:log level="INFO">URL = ${customProperties.get("xog.url")}</gel:log>
    <gel:log level="INFO">User = ${customProperties.get("xog.user")}</gel:log>
    <gel:log level="INFO">Password = ${customProperties.get("xog.pass")}</gel:log>


GEL Scripting

  • GEL Scripting Best Practices

    • All SQL and SOAP tags should be wrapped in GEL exception handling

    • All SQL tags should use bind parameters

    • All debug statements should be controlled via an optional parameter, so logging can be enabled only when needed

    • All complex SQL statements should be analyzed for performance implications; this can be done via EXPLAIN PLAN in a SQL tool

    • If a GEL script performs XOG read/write actions in a loop, the XOG login and logout actions must be outside the loop

    • SQL UPDATE statements should not be used against anything other than custom tables or ODF_CA_<obj> tables

    • SQL INSERT/DELETE statements should only be used on custom tables
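The EXPLAIN PLAN check mentioned above can be run from any SQL tool against Oracle. The statement being analyzed here is illustrative; only the EXPLAIN PLAN / DBMS_XPLAN mechanics are the point:

```sql
-- Capture the execution plan for a candidate statement
EXPLAIN PLAN FOR
SELECT inv.id, inv.name
  FROM inv_investments inv
 WHERE inv.name LIKE 'A%';

-- Display the captured plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Look for full table scans on large tables in the plan output; those are the queries that may need supporting indexes on custom attributes.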


GEL Scripting

    • GEL Script Performance

      • Avoid using <util:sleep> in GEL scripts. This tag puts the thread executing the GEL script to sleep. There are only 12 GEL threads allocated per Process Engine instance. If all 12 are sleeping, other GEL steps will be unable to execute and processes will appear to hang.

    • In complex GEL scripts that build XOG XML from SQL query output, avoid excessive use of the <core:set> tag for performance reasons. This tag uses Java MBeans for variable storage, and multiple GEL scripts making frequent <core:set> calls will incur a significant synchronization performance hit. Instead of <core:set>, consider using a native Java object such as a HashMap to store local variables inside tight loops.


GEL Scripting

    • Native HashMap example:

    <!-- Create a HashMap instance -->
    <core:new className="java.util.HashMap" var="myHash"/>

    <!-- Place some values in the HashMap from a previous query -->
    <core:invoke on="${myHash}" method="put">
      <core:arg value="projectId"/>
      <core:arg value="${myQuery.rows[0].ID}"/>
    </core:invoke>

    <core:invoke on="${myHash}" method="put">
      <core:arg value="projectName"/>
      <core:arg value="${myQuery.rows[0].NAME}"/>
    </core:invoke>

    <!-- Log one of the stored values -->
    <gel:log level="info">Processing project ${myHash.get('projectName')}</gel:log>


Summary: A Few Words to Review

    • Keep the CA Clarity PPM UI clean and efficient

    • Think about the big picture with customization and configuration

    • Use comments and bind variables where you can

    • Use best practices when working with processes and GEL



    Q&A


Thank You


Recommended Sessions


Related Technologies

    • Booth 444 – Service Portfolio Management

    • Booth 449 – Sustain with PPM

    • Booth 445 – Innovate with PPM


    Please scan this image to fill in your session survey on a mobile device

    Session # PP302SN


Terms of This Presentation: For Information Purposes Only

    This presentation was based on current information and resource allocations as of November 2011 and is subject to change or withdrawal by CA at any time without notice. Notwithstanding anything in this presentation to the contrary, this presentation shall not serve to (i) affect the rights and/or obligations of CA or its licensees under any existing or future written license agreement or services agreement relating to any CA software product; or (ii) amend any product documentation or specifications for any CA software product. The development, release and timing of any features or functionality described in this presentation remain at CA’s sole discretion. Notwithstanding anything in this presentation to the contrary, upon the general availability of any future CA product release referenced in this presentation, CA will make such release available (i) for sale to new licensees of such product; and (ii) to existing licensees of such product on a when and if-available basis as part of CA maintenance and support, and in the form of a regularly scheduled major product release. Such releases may be made available to current licensees of such product who are current subscribers to CA maintenance and support on a when and if-available basis.  In the event of a conflict between the terms of this paragraph and any other information contained in this presentation, the terms of this paragraph shall govern.

    Certain information in this presentation may outline CA's general product direction. All information in this presentation is for your informational purposes only and may not be incorporated into any contract. CA assumes no responsibility for the accuracy or completeness of the information. To the extent permitted by applicable law, CA provides this presentation "as is" without warranty of any kind, including without limitation, any implied warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event will CA be liable for any loss or damage, direct or indirect, from the use of this document, including, without limitation, lost profits, lost investment, business interruption, goodwill, or lost data, even if CA is expressly advised in advance of the possibility of such damages. CA confidential and proprietary. No unauthorized copying or distribution permitted.

