Preventing, diagnosing, and resolving the 20 most common dashboard performance problems. Dr. Berg, Comerit Inc. We already covered hardware sizing, compatibility, and server options in a prior session, so now we will look at the application, design, and interfaces.
Background

We will specifically look at dashboard design, query design, connectivity impacts, in-memory processing options, and dashboard performance monitoring options.
Connectivity and Performance

1. The BICS connector performs well.
2. Avoid the MDX interface (it is slow).
3. Avoid direct access to the BI analytical engine in SAP BW.
Source: SAP AG, 2011
Always pick the fastest interface available for the data source you are building the dashboard on.
By leveraging the aggregation in Crystal Reports 2011, you can also get faster Xcelsius dashboard response times.
Modularize the data and create sub-sets of data for really fast dashboarding.
Generic 'metrics' data tables can be created for summarized KPI and scorecard dashboards.
The summary, or snapshot, data can be accessed much faster than underlying data tables with millions of records.
In this example, the company uses snapshots for performance reasons.
The dashboards are only built on the low volume daily snapshot cube (this is also placed in BWA for very high-performance).
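As an illustration only (plain Python, not SAP code), the snapshot idea looks like this: detail rows are pre-aggregated once during the data load, and the dashboard then reads only the small snapshot instead of the multi-million-row detail table. All names and data below are made up.

```python
from collections import defaultdict

# Hypothetical detail records: (date, region, revenue) rows standing in
# for the millions of rows in the underlying InfoProvider.
detail_rows = [
    ("2011-06-01", "EMEA", 120.0),
    ("2011-06-01", "EMEA", 80.0),
    ("2011-06-01", "APAC", 50.0),
    ("2011-06-02", "EMEA", 200.0),
]

def build_snapshot(rows):
    """Pre-aggregate detail rows into a small (date, region) -> revenue snapshot."""
    snapshot = defaultdict(float)
    for date, region, revenue in rows:
        snapshot[(date, region)] += revenue
    return dict(snapshot)

# The dashboard queries the tiny snapshot, never the detail table.
snapshot = build_snapshot(detail_rows)
assert snapshot[("2011-06-01", "EMEA")] == 200.0
```

The aggregation cost is paid once at load time, so every dashboard refresh afterwards is a cheap lookup.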
Key Feature: Reading data during navigation minimizes the impact on the application server resources because only data that the user requires will be retrieved
Reserve the 'Read all data' mode for special queries, i.e. when a majority of the users need a given query to slice and dice against all dimensions, or for data mining.
This places heavy demand on database and memory resources and may impact other BW processes
A query read mode can be defined on an individual query or as a default for new queries (transaction RSRT)
This approach separates the drill-down steps. In addition to accelerating query processing, it provides the user more manageable portions of data.
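The difference between the two read modes can be sketched in plain Python (a hypothetical in-memory cube, not the actual BW API): 'read all data' materializes everything up front, while 'read when navigating' fetches only the slice the user drills into.

```python
# Hypothetical cube: region -> list of (customer, sales) detail rows.
CUBE = {
    "EMEA": [("Acme", 10), ("Beta", 20)],
    "APAC": [("Ceta", 30)],
}

def read_all(cube):
    """'Read all data' mode: pull every row up front (heavy on DB and memory)."""
    return [row for rows in cube.values() for row in rows]

def read_on_navigation(cube, region):
    """'Read when navigating' mode: fetch only the drilled-into slice."""
    return cube.get(region, [])

assert len(read_all(CUBE)) == 3                            # whole cube materialized
assert read_on_navigation(CUBE, "APAC") == [("Ceta", 30)]  # one slice only
```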
In 7.x BI, the OLAP engine can read deltas into the cache without invalidating the existing query cache.
Query properties in RSRT let you:
- Examine the request status when reading the InfoProvider
- Turn parallel processing on or off
- Control when the query program is regenerated based on database changes
- Display the level of statistics collected
Filters in Queries Used in Dashboards

Filters are especially valuable when associated with large dimensions, where there is a large number of characteristics such as customers and document numbers.
The RSRT transaction is one of the most beneficial transactions for examining query performance and conducting 'diagnostics' on slow queries in the BW system.
This suggests that an Aggregate would have been beneficial
In this example, the Basis team should be involved to research why the Oracle settings are not per SAP's recommendations.
The RSRT and RSRV transaction codes are key for debugging and analyzing slow queries.
HINT: Track front-end data transfers & OLAP performance by using RSTT in SAP 7.0 BI (RSRTRACE in BW 3.5)
Using RSRT, you can execute the query and step through each breakpoint, thereby debugging the query and seeing where execution is slow.
Try running slow queries in debug mode with parallel processing deactivated to see if they run faster.
After SAP BusinessObjects Enterprise XI 3.1 FP 1.1, the impact of a large number of key figures was somewhat reduced by retrieving unit/currency metadata only when it is selected. However, keeping the number of key figures low is still best practice.
When Restricted Key Figures (RKFs) are included in a query, conditioning is done for each of them during query execution. This is very time consuming, and a high number of RKFs can seriously hurt query performance.
My recommendation: Reduce the RKFs in the query to as few as possible. Also, define calculated key figures and RKFs at the InfoProvider level instead of locally within the query. Why?
Benefit: Formulas within an Infoprovider are returned at runtime and held in cache.
Drawback: Local formulas and selections are calculated with each navigation step.
Calculated Key Figures (CKFs) are computed at run time, and many CKFs can slow down query performance.
How to fix this:
Many CKFs can be computed during data loads and physically stored in the InfoProvider. This reduces the number of computations, and the query can use simple table reads instead. Do not use total rows when not required (this requires additional processing on the OLAP side).
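The idea of shifting a calculated key figure from query run time to load time can be sketched like this (illustrative Python, not the BW transformation layer; the 'margin' figure and field names are invented): the value is computed once during the load and stored, so the query becomes a plain table read.

```python
def load_with_ckf(rows):
    """Data load step: compute the 'margin' key figure once and store it."""
    return [
        {**row, "margin": row["revenue"] - row["cost"]}  # stored, not recomputed
        for row in rows
    ]

raw = [{"revenue": 100.0, "cost": 60.0}, {"revenue": 50.0, "cost": 20.0}]
stored = load_with_ckf(raw)

# Query time: a simple read of the stored column, no per-navigation computation.
assert [r["margin"] for r in stored] == [40.0, 30.0]
```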
Recommendation for OLAP universes:
RKF and CKF should be built as part of the underlying BEx query to use the SAP BW back-end processing for better performance
Queries with a larger set of such KFs should use the “Use Selection of Structure Members” option in the Query Monitor (RSRT) to leverage the OLAP engine
In exceptional cases when you have leveraged other performance tuning methods, you may extend this to up to 1,000 rows.
The length of each record (number of columns) and the data types also impact performance.
Returning query result sets with few records of numeric types, or with keys and indicators, provides the best dashboard performance.
Link to Details Dashboard
Split your dashboards into logical units & get new data when drilldowns are executed. This keeps the result set for each query small and also decreases the load time for each dashboard
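The split-dashboard pattern can be sketched in Python (the two 'dashboards' and their data are hypothetical): the initial view transfers only a handful of summary rows, and the detail query runs only when the user actually follows the drilldown link.

```python
# Hypothetical query layer: each logical dashboard runs its own small query.
SUMMARY = [("EMEA", 300), ("APAC", 150)]                        # a few KPI rows
DETAILS = {"EMEA": [("Acme", 10)] * 200, "APAC": [("Ceta", 30)] * 100}

def open_summary_dashboard():
    """Initial load: only the small summary result set is transferred."""
    return SUMMARY

def drill_down(region):
    """Detail query executes only when the drilldown is requested."""
    return DETAILS[region]

assert len(open_summary_dashboard()) == 2   # tiny initial payload, fast open
assert len(drill_down("APAC")) == 100       # details fetched on demand only
```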
Complex and nested logic creates large SWF files that take a long time to open. Try to keep as much of the calculation logic in the query instead of the spreadsheet.
Sorting is done by the BI Analytical Engine. As in any computer system, sorting large result sets in reports can be time consuming.
Reduce the number of sorts in the 'default view'. This will provide the users with data faster; they can then choose to sort the data themselves.
Hint: Reducing the text in query will also speed up the query processing time
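A rough back-of-the-envelope check in Python illustrates why returning keys instead of long description texts shrinks the transfer volume (the row shapes and names below are made up):

```python
import json

# Hypothetical result rows: keys only vs. keys plus descriptive texts.
keys_only = [{"cust": 1001, "sales": 42.5} for _ in range(1000)]
with_text = [
    {"cust": 1001, "cust_name": "Acme Industrial Holdings GmbH & Co. KG",
     "sales": 42.5}
    for _ in range(1000)
]

payload_keys = len(json.dumps(keys_only))
payload_text = len(json.dumps(with_text))
assert payload_text > 2 * payload_keys  # texts dominate the transfer volume
```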
These are dashboard objects you need to carefully consider before employing them.
It is hard to build a fast dashboard with many queries and panels without BW Accelerator (BWA), which provides in-memory processing of queries that is 10-100x faster.
What we simply do is place the data in memory and retrieve it much faster. Some limited OLAP functionality can also be built in BWA 7.3, but most data processing still occurs in the BI Analytical Engine.
You can also place non-SAP data in memory using SAP BusinessObjects Data Services.
The major improvement is to make query execution more predictable and overall faster.

[Figure: query execution without vs. with SAP NetWeaver BW Accelerator, plotting number of queries against execution time; information broadcasting/precalculation appears in both scenarios]
Aggregates can be replaced with SAP BW Accelerator, while the memory cache is still useful.
The OLAP Cache is used by BW as the core in-memory data set; query results are served from the cache whenever the data set is available there.
The cache uses displacement: when it fills up, older entries are removed to make room for new ones.
This means that a query result set accessed by one user at 8:00am may no longer be available in memory when another user accesses it at 1:00pm.
Therefore, queries may appear to run slower at some times than others.
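The displacement behavior can be sketched with a small LRU cache in Python (illustrative only; BW's actual eviction strategy is internal to the OLAP cache). The query names are invented.

```python
from collections import OrderedDict

class QueryCache:
    """Tiny LRU cache: when full, the least recently used result is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, query, result):
        self.entries[query] = result
        self.entries.move_to_end(query)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # displace the oldest entry

    def get(self, query):
        if query not in self.entries:
            return None                       # cache miss: back to the database
        self.entries.move_to_end(query)
        return self.entries[query]

cache = QueryCache(capacity=2)
cache.put("Q_SALES_8AM", [1, 2, 3])
cache.put("Q_COSTS", [4])
cache.put("Q_MARGIN", [5])                    # capacity exceeded: oldest evicted
assert cache.get("Q_SALES_8AM") is None       # the 1:00pm user misses the cache
assert cache.get("Q_MARGIN") == [5]
```

This is why increasing the cache size (or broadcasting popular queries into the cache, as described below) reduces apparent slowdowns.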
The MDX cache is used by MDX based interfaces, including the OLAP Universe.
You can increase query speed by broadcasting the query results of commonly used queries to the cache.
Users do not need to execute the query from the database. Instead the result is already in the system memory (much faster).
The OLAP Cache is by default 100 MB for local and 200 MB for global use
This may be too low...
Look at available hardware and work with your Basis team to see if you can increase this.
If you decide to increase the cache, use the transaction code RSCUSTV14.
WARNING: The Cache is not used when a query contains a virtual key figure or virtual characteristics, or when the query is accessing a transactional DSO, or a virtual InfoProvider
To monitor the usage of the cache on each of the application servers, use transaction code RSRCACHE and also periodically review the analysis of load distribution using ST03N – Expert Mode
Note: The size of the OLAP Cache is physically limited by the amount of memory set in the system parameter rsdb/esm/buffersize_kb.
The settings are available in RSPFPAR and RZ11.
We can create proposals from the query, from users' last navigation steps, or from BW statistics.
Create aggregate proposals based on queries that are performing poorly.
The process of turning 'on' the aggregates is simple
Performance Testing: Load and Stress

- Turn off the cache (we assume all hits are 'new data')
- Monitor database, portal, and BI system load
- Log response times, and have multiple browsers and PCs hitting the data from multiple locations (network testing)
- Stress testing is done at 40% of the named user base
- The test is done the same way as load testing, just with more 'users'
- The system may not be able to pass at this level, but the break points are identified
All Dashboard systems should be load tested to 20% of user base prior to go-live
Consider the network topology, capacity and the user locations before implementing global dashboards
EarlyWatch Reports in Solution Manager

A "goldmine" for system recommendations. EarlyWatch reports have been available since Solution Manager version 3.2 SP8. The more statistics cubes you have activated in BW, the better usage information you will get. Depending on your version of SAP BW, you can activate 11-13 InfoCubes. Also, make sure you capture statistics at the query level (set it to 'all').
System issues can be hard to pin down without access to EarlyWatch reports. Monitoring reports allows you to tune the system before users complain.
This system is about to 'crash'
The system is growing by 400+ GB per month, the application server is 100% utilized, and the DB server is at 92%.
This customer needed to improve the hardware to get the query performance to an acceptable level
The Dashboard Performance Checklist

2. The server locations and networks - check loads
3. Query review - look at database time, calculation time, and design
4. Interface review - make sure you are using the best interface for the data source
5. Dashboard review - look at Excel logic, container usage, number of Flash objects, sorts, size of result sets, and simplification opportunities
6. In-memory review - look at cache usage, hit ratios, and BWA usage
7. Review data sources - examine whether snapshots can be leveraged and look for possibilities to create aggregates
8. Examine compatibility between browser, Flash, and Office versions
9. Review PC performance issues - memory, disk, and processors
Performance is complex; look at more than one area (e.g. web portal bottlenecks and LDAP servers).
Resources

- Requires log-on at www.SAPInsider.com
- SAP BusinessObjects Tuning by Steve Bickerton
- SAP MarketPlace for sizing guidelines
- SBO_BI_4_0_Dashboard_designer.pdf - requires log-on to service.sap.com
7 Key Points to Take Home

- You have to spend time on back-end performance tuning
- Avoid direct querying of high data volumes; create summaries instead
- Consider in-memory processing for all critical dashboards
- Your interface to the data will impact performance - avoid MDX
- Size your hardware one size 'too big' - it is hard to make a second 'first impression'
- Use a gradual rollout of your dashboards, monitor the performance, and conduct load and stress tests before any major go-lives
How to contact me: