Best practices to ensure efficient data models, fast data activation, and performance of your SAP NetWeaver BW 7.3 data warehouse
Dr. Bjarne Berg, COMERIT

What We'll Cover:
Introductions
EDW Data Design and Data Modeling
Data Loading and Fast Activations
SAP BW 7.3 SP-3 has a set of 10 templates to help build a layered data architecture for large-scale data warehousing
An organization has two fundamental choices:
Both solutions are feasible, but organizations that select an evolutionary approach should be self-aware and monitor for undesirable add-ons and "workarounds."
Failure to break with the past can be detrimental to an EDW’s long-term success…
This company implemented a full LSA (Layered Scalable Architecture) and also partitioned the InfoProviders for faster data loads and faster query performance.
While this architecture has benefits, there are significant issues around data volumes and the total cost of ownership when changes are made to the data model(s).
Since many of the benefits sought by the LSA architecture are inherent in HANA, significant simplifications can be made to the data design and data flows
This design has a dramatically lower cost of ownership (fewer objects to design, maintain, and change) than the traditional LSAs.
Consolidation Cube (OC_CON)
BPC Staging Cube (BPC_C01)
GL Summary Cube (FIGL_C03)
Production Issues included:
Dependent jobs not running sequentially, i.e., load from Summary cube to Staging cube is sometimes executed before the summary cube data is loaded and activated, resulting in zero records in the staging cube.
Long latency with 6 layers of PSA, DSOs, and InfoCubes before consolidation processes can be executed.
Persistent Staging Area (PSA)
Consolidation Cube (OC_CON)
GL Summary Cube (FIGL_C03)
Long-term benefits included reduced data latency, faster data activation, less data replication, smaller system backups as well as simplified system maintenance.
Persistent Staging Area (PSA)
Problem: To reduce data volume in each InfoCube, data is partitioned by time period. A query must now search all InfoProviders to find the data, which is very slow.
Solution: We can add "hints" to guide the query execution. In the RRKMULTIPROVHINT table, you can specify one or several characteristics for each MultiProvider, which are then used to partition the MultiProvider into BasicCubes.
An entry in RRKMULTIPROVHINT only makes sense if a few attributes of this characteristic (that is, only a few data slices) are affected in the majority of, or the most important, queries (SAP Note 911939; see also Notes 954889 and 1156681).
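As a rough illustration of why such a hint helps (a conceptual Python sketch, not SAP code — the cube names and year slices are invented): with a hint on a time characteristic, the query only needs to read the part-cubes whose data slice can match the filter, instead of scanning every InfoProvider under the MultiProvider.

```python
# Conceptual sketch of MultiProvider "hint" pruning (illustrative only).
# Each part-cube holds one slice of the hinted characteristic 0CALYEAR;
# the cube names and slices below are assumptions for the example.

part_cubes = {
    "SALES_2021": {"0CALYEAR": {"2021"}},
    "SALES_2022": {"0CALYEAR": {"2022"}},
    "SALES_2023": {"0CALYEAR": {"2023"}},
}

def cubes_to_read(query_filter, hint_char="0CALYEAR"):
    """Return only the part-cubes whose slice overlaps the filter."""
    if hint_char not in query_filter:
        # No filter on the hinted characteristic -> all cubes must be read
        return sorted(part_cubes)
    wanted = query_filter[hint_char]
    return sorted(name for name, sl in part_cubes.items()
                  if sl[hint_char] & wanted)

print(cubes_to_read({"0CALYEAR": {"2022"}}))  # only one cube is touched
print(cubes_to_read({}))                      # no hint match -> read all three
```

The point of the sketch: without the hint, every query fans out to all part-cubes; with it, the filter value prunes the search down to the matching slice.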
The SPO wizard creates all Data Transfer Processes (DTPs), transformations, and filters for each data store, as well as a process chain.
InfoCubes should be performance-tuned once the number of records exceeds 100 million, and partitioned before they approach 200 million records. This yields faster loads, better query performance, and easier management.
Explore the use of line item dimensions for fields that are frequently conditioned in queries. This model change can yield faster queries.
Many companies should redesign large InfoCubes with high-cardinality dimensions to take advantage of the standard performance enhancements available.
In this example, many of the very large DSOs are not partitioned, and several objects have over 250 million records
Additionally, 101 DSO objects were flagged as reportable. This resulted in surrogate IDs (SIDs) being created during activation.
Combined, these resulted in frequent locks on the Oracle database and failed parallel activation jobs
Partition DSOs. Locks on very large DSOs during parallel loads are well known, and SAP has issued several notes on the topic: 634458 ('ODS object: Activation fails - DEADLOCK') and 84348 ('Oracle deadlocks, ORA-00060').
Database: Oracle 11g Release 2 (11.2)
BW system: BW version 7.3
Operating systems: HP-UX; Linux for BWA; AIX 6.4 for three app servers
In this section, we take a look at a real example of a BW Implementation and explore what we can learn from it.
Structured design review sessions should be undertaken as part of every project to ensure that designs like this do not persist.
This resulted in many reads on the NRIV table that slowed down data activation and process chains
(see SAP notes: 857998, 141497, 179224 and 504875)
Enable buffering of number ranges for dimensions and InfoObjects, or use the 7.3 data activation ('packet fetch') instead.
The 7.3 initial load runtime options "Insert only" and "Unique data records only" prevent all lookups during activation and can dramatically improve data loads when used correctly.
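A simplified model of what the flag changes (illustrative Python, not SAP's implementation): standard activation must read the active table for each key to build a before-image, whereas insert-only activation knows every record is new and can skip that read entirely.

```python
# Simplified model of DSO activation (conceptual sketch, not SAP code).
# Standard activation looks up each key in the active table to build a
# before-image; with "Insert only" (records are guaranteed new), the
# lookup is skipped, which is where the speed-up comes from.

def activate(active_table, new_records, insert_only=False):
    lookups = 0
    for key, value in new_records:
        if not insert_only:
            lookups += 1              # read active table for before-image
            active_table.get(key)
        active_table[key] = value     # write the new active record
    return lookups

batch = [(i, f"rec{i}") for i in range(1000)]
print(activate({}, batch))                    # 1000 active-table lookups
print(activate({}, batch, insert_only=True))  # 0 lookups
```

The caveat in the slide stands: this is only safe "when used correctly," i.e., when the load genuinely contains no updates to existing keys.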
Solution Manager has been updated to take advantage of these new monitors.
In SAP NetWeaver BW 7.3, Near-Line Storage (NLS) has been enhanced to include archiving and support for write-optimized DSOs.
This wizard reduces the number of manual steps needed to load data. It also simplifies the development process and makes ETL work much easier.
For this system, developers should revisit the extractor design so that lookups are performed on the source system instead of inside BW.
For many systems, database statistics are outdated and may cause the database to perform significantly worse than it otherwise would. Sampling rates should often be changed, and process chains may need to be rescheduled.
Rebuilding bitmap indexes during load processing for large objects should not be the default answer for all designs. Any process chains that do this may need to be revisited.
Delete unused aggregates. By reducing the data volume in the underlying statistical cubes (cleanup), the remaining aggregates will shrink and take less time to process.
For most companies, queries use calculated key figures (CKFs), sums, and sorts extensively, so the cache read mode should be turned on for most queries.
Set up Java connectivity ASAP and use the Broadcasting feature to prefill the MDX cache (OLAP Universes) for analytical-processing-intensive BI functions such as CKFs, sorts, exceptions, and conditions.
This solution provides very fast queries, but the delta logic has to be custom designed.
This is a good option if you have a low volume of new records and a high number of queries or operational dashboards
Warning: Virtual cubes with many users may place high stress on the ERP system.
The database has grown by 732 GB (26%) in the last year, and the growth is uneven.
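A quick sanity check on those figures (assuming the 26% is growth relative to the prior year's size): 732 GB of growth implies the database was roughly 2.8 TB a year ago and sits around 3.5 TB now.

```python
# Back-of-the-envelope check on the stated growth figures.
growth_gb = 732          # absolute growth over the last year
growth_rate = 0.26       # stated relative growth (26%)

start_size_gb = growth_gb / growth_rate    # implied size one year ago
end_size_gb = start_size_gb + growth_gb    # implied size today

print(round(start_size_gb), round(end_size_gb))  # roughly 2815 -> 3547 GB
```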
Schedule “housekeeping” jobs. Better management of cleanup would result in more predictable patterns.
(e.g., we found PSA data with 10 months of load history).
Use RSDDSTAT and select “Delete Data” for old stats and also schedule periodic jobs using standard process chains.
This will still provide access to the data for the few users who infrequently need to see this old data. You will also be able to query it when BW is on HANA, but it does not need to be in-memory.
Create standard practices for PSA cleanup and schedule regular jobs that take care of this in the future
Inside the Computing Engine of SAP HANA, we have many different components that manage the access and storage of the data. This includes MDX and SQL access, as well as Load Controller (LC) and the Replication Server.
BusinessObjects Data Services
SAP has a checklist tool for SAP NetWeaver BW powered by HANA (thanks Marc Bernard).
In this tool, SAP provided automatic check programs for both the 3.5 version and the 7.x version of BW. These are found in SAP Note: 1729988.
In version 2.x of this tool, hundreds of checks are done automatically in the BW system. This includes platform checks on database and application and system information.
There are even basis checks for support packs, ABAP/JAVA stacks, Unicode, BW releases, and add-ons to your system.
The idea of the checklist tool from SAP is that you run it several times throughout the project.
Once before you start, then periodically as you resolve issues and upgrade requirements, and then finally when the system has been migrated to HANA.
The checklist tool also has specific checks for the HANA system that can help you identify any issues before turning the system over to end users.
SAP has released a new ABAP-based tool that generates a significantly better sizing report for SAP BW than the QuickSizer alone. The program takes into consideration existing database compression and different table types, and also includes the effects of non-active data on the HANA system.
The higher the precision you run the estimate at, the longer the program will run.
To increase speed, you can also suppress analysis of tables smaller than 1 MB.
With 14 parallel processes and an 8 TB data warehouse, run times of 45-75 minutes are not unusual.
Since timeouts are common when running the sizing program, you can temporarily change the parameter rdisp/max_wprun_time to 0 in BW transaction RZ11. Finally, you estimate the growth for the system as a percentage, or as absolute growth.
After you have downloaded and installed the program and selected the parameters above, you can go to SE38 and run SDF/HANA_BW_SIZING as a background job.
The output is stored in the file you specified and can now be emailed to hardware vendors for sizing input and hardware selection.
This program is attached to SAP Note: 1736976 on SAP Marketplace
Many experienced developers are not aware that moving BW to HANA can in some cases result in slower transformations during data loads.
a. Select for all entries (SFAE) statements without HANA DB hints --> add hints
b. Select * --> specify fields to select
c. Database access in the field routines --> move to start routine
d. Loops which do not assign field symbols --> use field symbols
e. Selects from master data tables --> Use the read master data rule
f. Selects from DSOs --> Use the read DSO rule
g. Direct updates to BW object tables --> Do not update tables directly
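Two of these patterns, (b) and (e), can be illustrated with a generic Python/sqlite3 analogy (the table and field names are invented for the example): select only the fields you need rather than everything, and resolve master-data lookups once, outside the row loop, instead of issuing one database read per record.

```python
# Illustrative analogy only (not ABAP, table/field names are assumptions):
# (b) select specific fields instead of SELECT *, and
# (e) fetch master data once into a lookup, not once per processed row.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE material_md (matnr TEXT PRIMARY KEY, matgroup TEXT)")
con.executemany("INSERT INTO material_md VALUES (?, ?)",
                [("M1", "RAW"), ("M2", "FIN")])

rows = [("M1", 10), ("M2", 5), ("M1", 7)]   # incoming (material, qty) packet

# Anti-pattern (not shown): SELECT * from material_md inside the row loop.
# Better: one query, only the two needed fields, loaded into a dict.
md = dict(con.execute("SELECT matnr, matgroup FROM material_md"))

result = [(mat, qty, md.get(mat, "#")) for mat, qty in rows]
print(result)   # material groups resolved with zero per-row database reads
```

In BW terms, the dict plays the role of the "read master data" rule's internal buffer: the expensive access happens once per packet, not once per record.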
There are currently 7 different certified HANA hardware vendors with 13 different products.
Some boxes can be used as single nodes, while others are intended for scale-out solutions for large multi-node systems.
Here we see the inside of an IBM x3950 HANA system.
The system basically consists of memory, disk, processors and network cards
The hardware vendor will install, connect, and do a health check on your system before handing it over to you. A three-year service plan is also normally required.
Source: SAP, May 2013
BW 7.4 SP05 should be released this fall; BW 7.3 SP09 is already available.
How to contact me:
Dr. Bjarne Berg
Please remember to complete your session evaluation
SAP, R/3, mySAP, mySAP.com, SAP NetWeaver®, Duet®, PartnerEdge, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies. Wellesley Information Services is neither owned nor controlled by SAP.