Common Analysis Framework, June 2013

Database solutions and data management for the PanDA system

Gancho Dimitrov (CERN)

Outline
  • ATLAS databases – main roles and facts
  • Used DB solutions and techniques:
    • Partitioning types – range, automatic interval
    • Data sliding windows
    • Data aggregation
    • Scheduled index maintenance
    • Result set caching
    • Stats gathering settings
    • Potential use of Active Data Guard (ADG)
  • JEDI component and relevant DB objects
  • Conclusions
ATLAS databases topology

Snapshot taken from the PhyDB Oracle Streams replication monitoring.

The ADC applications (PanDA, DQ2, LFC, PS, AKTR, AGIS) are hosted on the ADCR database.

ADCR database cluster HW specifications:

- 4 machines

- 2 quad-core Intel Xeon CPUs @ 2.53 GHz

- 10 GigE for storage and cluster access

- NetApp NAS storage with 512 GB SSD cache

PanDA system
  • The PanDA system is the ATLAS workload management system for production and user analysis jobs
  • Originally based on a MySQL database. Migrated in 2008 to Oracle at CERN.
  • Challenges:

- the PanDA system has to manage millions of grid jobs daily

- changes in job statuses and site loads have to be reflected in the database fast enough; fast data retrieval for the PanDA server and monitor is a key requirement

- cope with spikes in user workload

- the DB system has to deal efficiently with two different workloads: transactional load from the PanDA server and (to some extent) data-warehouse load from the PanDA monitor.

Trend in number of daily PanDA jobs

[Charts: daily PanDA job counts for Jan-Dec 2011, Jan-Dec 2012 and Jan-June 2013]

The PanDA 'operational' and 'archive' data
  • All information relevant to a single job is stored in four major tables. The most important job attributes are kept separately from the other space-consuming ones such as job parameters and input and output files.
  • The 'operational' data is kept in a separate schema which hosts active jobs plus jobs finished within the most recent 3 days.

Jobs that reach status 'finished', 'failed' or 'cancelled' are moved to an archive PanDA schema (ATLAS_PANDAARCH).


PanDA data segments organization


The operational tables are partitioned on a 'modificationtime' column; each partition covers a time range of a day.

The ATLAS_PANDAARCH archive tables are also partitioned on the 'modificationtime' column. Some tables have partitions each covering a three-day window, others a time range of a month.

[Diagram, timeline view: the PanDA server DB sessions write into the current partitions; an Oracle scheduler job inserts the data of the last complete day into the archive; filled partitions whose data has been copied can then be dropped (an Oracle scheduler job takes care of that daily); empty partitions for future days are pre-created by a job scheduled to run every Monday, creating seven new partitions.]

Certain PanDA tables have a data sliding window of 3 days, others of 30 days.

This natural approach has proved adequate and not resource-demanding!

Important: to be sure that job information is not deleted without having been copied into the PANDAARCH schema, a special verification PL/SQL procedure runs before the partition drop event!
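A minimal sketch of how such a verified partition drop might look. The table, partition and column names below are illustrative assumptions, not the actual PanDA code:

```sql
-- Hypothetical sketch: drop an operational-schema partition only after
-- verifying that every row in it already exists in the archive copy.
DECLARE
  v_missing NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_missing
  FROM   atlas_panda.jobsarchived4 PARTITION (data_20130610) src
  WHERE  NOT EXISTS (
           SELECT 1
           FROM   atlas_pandaarch.jobsarchived dst
           WHERE  dst.pandaid = src.pandaid);

  IF v_missing = 0 THEN
    -- Segment-level operation: cheap, almost no redo, no undo
    EXECUTE IMMEDIATE
      'ALTER TABLE atlas_panda.jobsarchived4 DROP PARTITION data_20130610';
  ELSE
    RAISE_APPLICATION_ERROR(-20001,
      'Partition not dropped: ' || v_missing || ' rows not yet archived');
  END IF;
END;
/
```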

PanDA data segmentation benefits
  • High scalability: the PanDA jobs data copy and deletion is done at table partition level instead of at row level
  • Removing the already copied data is not IO-demanding (very little redo, and it does not produce undo) as this is a simple Oracle operation over a table segment and its relevant index segments (ALTER TABLE … DROP PARTITION)
  • Fragmentation in the table segments is avoided, giving much better space utilization and caching in the buffer pool
  • No need for index rebuild or coalesce operations on these partitioned tables.
PANDA => PANDAARCH data flow machinery
  • The PANDA => PANDAARCH data flow is sustained by a set of scheduler jobs on the Oracle server which execute logic encoded in PL/SQL procedures:

- a daily job which copies the data of a set of partitions

- a weekly job which creates daily partitions for the near-future days

- a daily job which verifies that all rows of a certain partition have been copied to PANDAARCH, and drops the PANDA partition if the above is true.
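As an illustration, a scheduler job of this kind could be registered with the standard DBMS_SCHEDULER package; the job and procedure names here are hypothetical, not the real PanDA ones:

```sql
-- Hypothetical sketch: register a daily Oracle scheduler job that
-- runs an archiving procedure (names are illustrative).
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PANDA_COPY_TO_ARCH',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ATLAS_PANDA.COPY_PARTITIONS_TO_ARCH',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- run once a day at 02:00
    enabled         => TRUE,
    comments        => 'Copies the last complete day of job data to PANDAARCH');
END;
/
```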

Monitoring on the PanDA => PANDAARCH data flow
  • The PanDA scheduler jobs have to complete successfully.

Most important are the ones that pre-create partitions, copy data to the archive schema and remove partitions whose data has already been copied.

Note: whenever new columns are added to the PanDA JOBS_xyz tables, the relevant PL/SQL procedure for the data copying has to be updated to consider them.

  • In case of errors, a DBA or other database-knowledgeable person has to investigate and solve the issue(s).

Oracle keeps a record of each scheduler job execution (for 60 days).

  • If the 3-day data sliding window in the PanDA 'operational' tables is temporarily not sustained for whatever reason, the PanDA server is not affected.

For the PanDA monitor this is not yet entirely clear; however, there were no complaints for the two such occurrences in the last 30 months.

The PanDA PL/SQL code
  • Currently certain pieces of the PanDA PL/SQL code are bound to the ATLAS PanDA account names (e.g. DB object names are fully qualified with a hardcoded object owner)
  • Efforts have been made to parameterize the PL/SQL procedures so they are flexible about the schemas they interact with.
  • The full set of modified PL/SQL procedures is under validation on the INTR testbed.
  • After successful validation of the modified PL/SQL procedures and Oracle scheduler jobs, these pieces of the PanDA database layer will be ready for the CAF (Common Analysis Framework)
Improvements on the current PanDA system
  • Despite the increased activity on the grid, thanks to several tuning techniques the server resource usage stayed in the low range: e.g. CPU usage in the range 20-30%
  • Thorough studies of the WHERE clauses of the PanDA server queries resulted in:

- revision of the indexes: removal, or replacement with more appropriate multi-column ones

- weekly index maintenance (rebuilds triggered by a scheduler job) on the tables with high transaction activity

- tuned queries

- an auxiliary table for the mapping PanDA ID <=> modification time

DB techniques used in PanDA (1)

Automatic interval partitioning

A partition is created automatically when a user transaction imposes a need for it (e.g. a user inserts a row with a timestamp for which a partition does not yet exist):

CREATE TABLE table_name ( … list of columns … )
PARTITION BY RANGE (modificationtime)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
( PARTITION data_before_01122011 VALUES LESS THAN
  (TO_DATE('01-12-2011', 'DD-MM-YYYY')) );

In PanDA and other ATLAS applications, interval partitioning is very handy for transient types of data where we impose a policy of an agreed DATA SLIDING WINDOW (however, partition removal is done via home-made PL/SQL code).

DB techniques used in PanDA (2)

Result set caching

This technique was used on a well-selected set of PanDA server queries; it is useful in cases where data does not change often but is queried frequently.

The best metric to consider when tuning queries is the number of Oracle block reads per execution. Queries whose result has been obtained from the result cache show 'buffer block reads = 0'.

Oracle sends back to the client a cached result if the result has not been changed in the meantime by any transaction, thus improving performance and scalability.

The statistics show that 95% of the executions of the PanDA server queries (17 distinct queries with this cache option on) were resolved from the result set cache.
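Result caching can be requested per query with the standard Oracle RESULT_CACHE hint. The query below is purely illustrative (the table and column names are assumptions, not one of the actual 17 PanDA queries):

```sql
-- Illustrative use of the Oracle result cache: the result set is kept
-- in the shared pool and served without block reads until a transaction
-- modifies the underlying table.
SELECT /*+ RESULT_CACHE */ computingsite, COUNT(*) AS num_jobs
FROM   atlas_panda.jobsactive4
GROUP  BY computingsite;
```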

DB techniques used in PanDA (3)

Data aggregation for fast result delivery

The PanDA server has to be able to get details instantaneously on the current activity at any ATLAS computing site (about 300 sites) – e.g. the number of jobs at any site with a particular priority, processing type, status, working group, etc.

This is vital for deciding to which site the newly arriving jobs are best routed.

The query execution addressing the above requirement proved CPU-expensive because of its high execution frequency.

Solution: a table with aggregated stats is re-populated by a PL/SQL procedure at an interval of 2 minutes by an Oracle scheduler job (usual elapsed time 1-2 sec).

The initial approach relied on a materialized view (MV), but it showed to be NOT reliable because the MV refresh interval relies on the old DBMS_JOB package.
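A minimal sketch of such a re-population procedure, assuming a hypothetical stats table and illustrative column names (the real PanDA procedure is not shown in the slides):

```sql
-- Hypothetical sketch: rebuild a small aggregated-stats table from the
-- active-jobs table; an Oracle scheduler job would call this every 2 min.
CREATE OR REPLACE PROCEDURE update_jobs_stats AS
BEGIN
  DELETE FROM jobs_stats_by_site;

  INSERT INTO jobs_stats_by_site
         (computingsite, jobstatus, currentpriority, num_of_jobs)
  SELECT computingsite, jobstatus, currentpriority, COUNT(*)
  FROM   jobsactive4
  GROUP  BY computingsite, jobstatus, currentpriority;

  COMMIT;  -- make the fresh aggregates visible to the PanDA server
END;
/
```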

DB techniques used in PanDA (4)

Customized table settings for the Oracle stats gathering

Having up-to-date statistics on table data is essential for an optimal query data access path. We take advantage of a statistics-collection mode on partitioned tables called incremental statistics gathering.

Oracle spends time and resources collecting statistics only on partitions which are transactionally active, and computes the global table statistics from the previously gathered partition statistics in an incremental way.
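Incremental statistics are enabled per table via a standard DBMS_STATS preference; the schema and table names below are illustrative:

```sql
-- Enable incremental (per-partition) statistics gathering on a
-- partitioned table (names here are illustrative).
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'ATLAS_PANDA',
    tabname => 'JOBSARCHIVED4',
    pname   => 'INCREMENTAL',
    pvalue  => 'TRUE');
END;
/
```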


Potential use of ADG from the PanDA monitor
  • The complete PanDA archive now hosts information on 900 million jobs – all jobs since the start of the job system in 2006
  • The ADCR database has two standby databases:
    • Data Guard for disaster recovery and backup offloading
    • Active Data Guard (ADCR_ADG) as a read-only replica
    • The PanDA monitor can benefit from the Active Data Guard (ADG) resources

=> One option is to sustain two connection pools:

- one to the primary database ADCR

- one to ADCR's ADG

The idea is that queries spanning time ranges larger than a certain threshold are resolved from the ADG, where we can afford several parallel slave processes per user query.

=> A second option is to connect to the ADG only and fully rely on it.

New development: JEDI (a component of PanDA)
  • JEDI is a new component of the PanDA server which dynamically defines jobs from a task definition. The main goal is to make PanDA task-oriented.
  • The tables of the initial relational model of the new JEDI schema (documented by T. Maeno) complement the existing PanDA tables on the INTR database
  • Activities in the last months:

- understanding the new data flow, requirements and access patterns

- addressing the requirement of storing information at event level for keeping track of the active jobs' progress

- studies for the best possible data organization (partitioning) from a manageability and performance point of view

- getting to the most appropriate physical implementation of the agreed relational model

- tests with a representative data volume

JEDI database relational schema

[Schema diagram legend: Yellow: JEDI tables; Orange: PanDA tables; Green: auxiliary tables for speeding up queries]

Transition from the current PanDA schema to a new one
  • The idea is that the transition from the current PanDA server to the new one, with its DB backend objects, is transparent to the users.
  • The JEDI tables are complementary to the existing PanDA tables. The current schema and the PANDA => PANDAARCH data copying will not be changed.
  • However, relations between the existing and the new set of tables have to exist. In particular:

- a relation between JEDI's Task and PanDA's Job, via a foreign key in all JOBS* tables to the JEDI_TASKS table

- a relation between JEDI's Work queue (for different shares of workload) and PanDA's Job, via a foreign key in all JOBS* tables to the JEDI_WORK_QUEUE table

- a relation between JEDI's Task, Dataset and Contents (new seq. ID) and PanDA's Job processing the file (or a fraction of it), via a foreign key in the FILESTABLE4 table to the JEDI_DATASET_CONTENTS table

(when a task tries a file multiple times, there are multiple rows in PanDA's FILESTABLE4 while there is only one parent row in JEDI's DATASET_CONTENTS table)

Note: "NOT NULL" constraints will not be added to the new columns on the existing PANDA tables, to allow them to be used standalone without the JEDI tables.
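One such relation could be declared as an ordinary foreign-key constraint. The column list below is an assumption for illustration only; the slides do not specify the key columns, which would have to match the primary key of JEDI_DATASET_CONTENTS:

```sql
-- Hypothetical sketch of the FILESTABLE4 => JEDI_DATASET_CONTENTS
-- relation; the key columns (jeditaskid, datasetid, fileid) are an
-- assumed composite key, not taken from the slides.
ALTER TABLE atlas_panda.filestable4
  ADD CONSTRAINT fk_files_jedi_contents
  FOREIGN KEY (jeditaskid, datasetid, fileid)
  REFERENCES atlas_panda.jedi_dataset_contents (jeditaskid, datasetid, fileid);
```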

JEDI DB objects – physical implementation
  • Data segmentation is based on RANGE partitioning on JEDI's TASKID column, with an interval of 100000 IDs (tasks), on six of the JEDI tables (uniform data partitioning).
  • The JEDI data segments are placed in a dedicated Oracle tablespace (data file), separate from the existing PanDA tables
  • Thanks to the new CERN license agreement with Oracle, we now take advantage of the Oracle Advanced Compression features – compression of data within a data block while the application does row inserts or updates.

PanDA tests on tables with and without OLTP compression showed that Oracle was right in its predictions of the compression ratio.
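The two settings above can be combined in a single table definition. This is a sketch under assumed names, not the actual JEDI DDL:

```sql
-- Hypothetical sketch: a JEDI-style table with range/interval
-- partitioning on TASKID (one partition per 100000 tasks) plus
-- OLTP compression (table and column names are illustrative).
CREATE TABLE jedi_events (
  jeditaskid  NUMBER     NOT NULL,
  pandaid     NUMBER,
  status      NUMBER(3)
)
COMPRESS FOR OLTP                         -- Advanced Compression for DML
PARTITION BY RANGE (jeditaskid)
INTERVAL (100000)                         -- auto-create partitions per 100K tasks
( PARTITION first_tasks VALUES LESS THAN (100000) );
```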

PanDA database volumes
  • Disk space used by PanDA:

- in 2011: 1.3 TB

- in 2012: 1.7 TB

- in the first half of 2013: 1 TB

  • Given the current analysis and production task submission rates of 10K to 15K per day, the estimated disk space needed by JEDI is in the range of 2 to 3 TB per year.

However, with the OLTP compression in place, the disk space usage will be reduced.

Activating the same type of compression on the existing PanDA tables would be beneficial as well.

Conclusions
  • These slides presented the current PanDA data organization and the planned new one with respect to the JEDI component
  • Deployment of the new JEDI database objects on the ATLAS production database server is planned for 2nd July 2013

Thank you!