
Loading data into CMDB - Best practices for the entire process

Shivraj Chavan

Anand Ahire

BMC Software


Agenda

  • Why you should never do a CMDB-only project

  • Guidance on – ‘Should this be in the CMDB?’

  • The Life of a CI

  • Various best practices

  • Q&A


Typical Failed CMDB Project

  • “We need to have a CMDB”

    • Why? … because

  • Let’s load data into it

    • What data? … whatever data we have lying around

  • So, that took a long time!

    • And the CMDB is big, out of date, and isn’t bringing any value

  • See, I told you that CMDB thing was complex and useless hype

    • Another big data store offering no value is obviously not the desired outcome

Avoid doing a CMDB only project


Consumers vs. Providers

  • Although providers supply the data for the CMDB, the important players for the CMDB are really the consumers

  • Consumers do interesting and useful things with the data

  • Providers simply load data

  • Without consumers – who cares what data is loaded

    • In fact, if no one consumes the data, it shouldn’t be loaded


Have an XYZ project that includes using the CMDB (for XYZ, substitute Incident, Change, Problem, …)

  • We need to improve our Change Management process

    • The CMDB is not an end in itself, it is an enabler for other processes

    • You must have a goal and a focus for how you want to USE the CMDB

  • Change Management needs to know about servers, applications, services, and their relationships

    • If no one is consuming a piece of data, it should not be in the CMDB

    • When in doubt, DO NOT put data into the CMDB until someone asks for it

  • Look at the improvements in the Change Management process

    • Failed changes and disruption to service because of change are down

    • I can see how the CMDB makes Change Management better

  • Let’s look at the Incident Management process; how can we improve?

    • There will be many different XYZ projects that all increase content and use of that content in the CMDB

  • The CMDB is a long journey, but there is incremental value at every step along the way


Choose your data sources wisely

  • A good data provider does the following:

    • Provides data for the CDM classes you need to populate in the CMDB

    • Provides data that is not already provided by a different data source

    • Can populate attribute values that uniquely identify a CI

    • Updates its data periodically

    • Periodically flags data that is no longer present in the environment

    • Indicates when the data was last updated

    • Updates, maintains, and deletes relationships as well as CIs

  • Manual Data entry:

    • Example: Asset Sandbox in ITSM

    • There are some classes we expect to populate manually, like Business Service

CMDB provides context NOT content
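The checklist above can be encoded so that candidate data sources are compared consistently before you wire them in. A minimal sketch, assuming illustrative criterion names (nothing here is a product API):

```python
# Hypothetical sketch: scoring a candidate provider against the checklist.
# Criterion names and the example source are illustrative only.

PROVIDER_CRITERIA = [
    "covers_needed_cdm_classes",
    "adds_data_not_already_provided",
    "populates_identifying_attributes",
    "updates_periodically",
    "flags_absent_data",
    "reports_last_update_time",
    "maintains_relationships_and_cis",
]

def score_provider(capabilities: dict):
    """Return (score, missing criteria) for a candidate data source."""
    missing = [c for c in PROVIDER_CRITERIA if not capabilities.get(c, False)]
    return len(PROVIDER_CRITERIA) - len(missing), missing

# Example: a spreadsheet export that never flags retired hardware
# and carries no last-updated timestamp.
spreadsheet = {c: True for c in PROVIDER_CRITERIA}
spreadsheet["flags_absent_data"] = False
spreadsheet["reports_last_update_time"] = False

score, missing = score_provider(spreadsheet)
print(score, missing)  # 5 of 7 criteria met
```

A source missing the "flags absent data" criterion is exactly the kind that leaves stale CIs in the CMDB.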


Automated Discovery is a Requirement

Without automated discovery processes, data accuracy CANNOT be maintained.

The data is already inaccurate before you can finish loading it.


Value Path

[Diagram: the value path in Atrium CMDB, from less value to high value: Physical Layer (servers, network devices) → Virtual Layer (virtual machines) → Running Software → Applications → Services, consumed by Incident, Problem, Change, and Config. CIs, CI attributes, and CI relationships in the lower layers can be auto-maintained by discovery tools such as ADDM; the remaining CIs and relationships are maintained in Atrium CMDB.]


The Life of a CI

[Lifecycle diagram: Extract → Transform → Load → Cleanse and Reconcile → Consume]

  • Only load data that you need!

  • Define a dataset per provider

  • Have different plans for initial vs. delta loads

  • Run multiple copies of key steps, like the CMDBOutput step in Spoon

  • Think about error handling, especially for custom jobs

[Diagram: providers (ADDM, MS SCCM, any data source) load CIs through Atrium Integrator into per-provider datasets in Atrium CMDB (ADDM dataset, SCCM dataset, IMPORT dataset).]
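The "initial vs. delta loads" advice can be sketched in a few lines. This is an illustrative stand-in, not Atrium Integrator: the first run loads everything; later runs load only CIs whose attributes changed since the last snapshot, and flag CIs the provider no longer reports.

```python
# Sketch of initial vs. delta load planning (hypothetical helper, not a
# product API). CIs are plain dicts keyed by a provider-side identifier.
import hashlib
import json

def fingerprint(ci: dict) -> str:
    """Stable hash of a CI's attributes, for cheap change detection."""
    return hashlib.sha256(json.dumps(ci, sort_keys=True).encode()).hexdigest()

def plan_load(extracted: dict, previous=None):
    """extracted/previous map CI key -> attribute dict; previous=None means initial load."""
    if previous is None:
        # Initial load: take everything the provider extracted.
        return {"load": list(extracted), "mark_absent": []}
    # Delta load: only new or changed CIs.
    load = [k for k, ci in extracted.items()
            if k not in previous or fingerprint(ci) != fingerprint(previous[k])]
    # CIs that disappeared should be flagged, not silently kept.
    absent = [k for k in previous if k not in extracted]
    return {"load": load, "mark_absent": absent}

old = {"host1": {"os": "Linux"}, "host2": {"os": "Windows"}}
new = {"host1": {"os": "Linux"}, "host3": {"os": "Linux"}}
print(plan_load(new, old))  # {'load': ['host3'], 'mark_absent': ['host2']}
```

Only host3 is loaded (host1 is unchanged), which is the point: a delta run should touch a fraction of the data an initial run does.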


The Life of a CI

[Lifecycle diagram: Extract → Transform → Load → Cleanse and Reconcile → Consume]

  • Normalize before you Identify

  • Don’t normalize all classes

  • Batch mode for initial or large loads; Continuous mode for steady state

  • Use Impact Normalization for Change Mgmt or BPPM

  • Use Suite Rollup / Version rollup for SWLM

  • Always use Reconciliation, even for a single source

  • Keep your data clean, normalized, and identified

  • Use qualifications to filter data

  • Use Standard Identification and Merge Rules

  • Put your most specific identification rule first

[Diagram: the per-provider datasets (ADDM, SCCM, IMPORT) pass through Normalization, driven by the Product Catalog, and then Reconciliation, producing CIs in the Production dataset of Atrium CMDB.]
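The rule "put your most specific identification rule first" can be sketched as a first-match-wins scan over an ordered rule list. Rule names and attributes below are illustrative, not actual Atrium identification rules:

```python
# Sketch of ordered identification rules (illustrative attribute names).
# Rules run top to bottom; the first rule whose attributes are all
# populated identifies the CI, so the most specific rule must come first.
IDENTIFICATION_RULES = [
    ("serial+domain", ["SerialNumber", "Domain"]),  # most specific first
    ("serial",        ["SerialNumber"]),
    ("hostname",      ["HostName"]),                # weakest rule last
]

def identify(ci: dict):
    """Return (rule name, identity tuple) from the first applicable rule."""
    for name, attrs in IDENTIFICATION_RULES:
        if all(ci.get(a) for a in attrs):
            return name, tuple(ci[a] for a in attrs)
    return None  # unidentified: needs manual attention

print(identify({"SerialNumber": "ABC1", "Domain": "corp"}))
print(identify({"HostName": "web01"}))
```

If the weak hostname rule ran first, two different servers reusing a hostname would be merged into one CI; ordering is what prevents that.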


The Life of a CI

[Lifecycle diagram: Extract → Transform → Load → Cleanse and Reconcile → Consume]

  • Do not modify data in the production dataset directly

  • Always use sandbox datasets for manual changes

  • If no one consumes the data, it shouldn’t be loaded

  • Periodically check for duplicates and take remediation action

[Diagram: consumers (ITSM, SIM, ITBM, Dashboards, BPPM) read CIs from the Production dataset of Atrium CMDB.]
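The periodic duplicate check can be sketched as grouping production CIs by their identity attributes and reporting any identity shared by more than one instance. Attribute names are illustrative:

```python
# Sketch of a periodic duplicate scan over production CIs (illustrative
# attribute names; real identity attributes depend on your rules).
from collections import defaultdict

def find_duplicates(cis, key_attrs=("ClassId", "SerialNumber")):
    """Group CIs by identity attributes; return identities with > 1 CI."""
    groups = defaultdict(list)
    for ci in cis:
        groups[tuple(ci.get(a) for a in key_attrs)].append(ci["InstanceId"])
    return {k: ids for k, ids in groups.items() if len(ids) > 1}

prod = [
    {"InstanceId": "i1", "ClassId": "BMC_ComputerSystem", "SerialNumber": "S1"},
    {"InstanceId": "i2", "ClassId": "BMC_ComputerSystem", "SerialNumber": "S1"},
    {"InstanceId": "i3", "ClassId": "BMC_ComputerSystem", "SerialNumber": "S2"},
]
print(find_duplicates(prod))  # {('BMC_ComputerSystem', 'S1'): ['i1', 'i2']}
```

Each reported group is a candidate for remediation: merge the CIs, or fix the identification rule that let them both through.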


The Life of a CI

[Lifecycle diagram: Extract → Transform → Load → Cleanse and Reconcile → Consume. End to end: providers (ADDM, MS SCCM, any data source) load via Atrium Integrator into per-provider datasets; Normalization (via the Product Catalog) and Reconciliation produce the Production dataset; consumers (ITSM, SIM, ITBM, Dashboards, BPPM) consume its CIs.]


Normalization and Reconciliation Example

Raw data:

  • Data Source 1: Host Name: John Smith Laptop, Model: MB134B/A, Software: MSWord, Version: 2004

  • Data Source 2: Host Name: John Smith Laptop, Model: Apple MacBook Pro 15", Software: MSWD, Version: 11.3.8

Normalized data (product names standardized):

  • Data Source 1: Host Name: John Smith Laptop, Model: Apple MacBook Pro 15", Software: Microsoft Word, Version: 2004

  • Data Source 2: Host Name: John Smith Laptop, Model: Apple MacBook Pro 15", Software: Microsoft Word, Version: 11.3.8

Reconciled data (Production dataset in Atrium CMDB):

  • Host Name: John Smith Laptop, Model: Apple MacBook Pro 15", Software: Microsoft Word, Version: 11.3.8
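The two steps can be sketched as an alias lookup (the Product Catalog's job in normalization) followed by a precedence-based attribute merge (reconciliation). The alias table and precedence values below are illustrative assumptions, not Atrium's actual rules:

```python
# Sketch: normalize raw names to canonical ones, then reconcile two
# sources by precedence. Alias table and precedence values are made up.
ALIASES = {
    "MB134B/A": 'Apple MacBook Pro 15"',
    "MSWord": "Microsoft Word",
    "MSWD": "Microsoft Word",
}

PRECEDENCE = {"source1": 100, "source2": 200}  # higher value wins

def normalize(ci: dict) -> dict:
    """Replace known raw values with their canonical names."""
    return {k: ALIASES.get(v, v) for k, v in ci.items()}

def reconcile(records: dict) -> dict:
    """records: {source_name: normalized CI}; merge attribute by attribute."""
    merged = {}
    for source in sorted(records, key=lambda s: PRECEDENCE[s]):
        merged.update(records[source])  # higher precedence overwrites lower
    return merged

ds1 = normalize({"HostName": "John Smith Laptop", "Model": "MB134B/A",
                 "Software": "MSWord", "Version": "2004"})
ds2 = normalize({"HostName": "John Smith Laptop",
                 "Model": 'Apple MacBook Pro 15"',
                 "Software": "MSWD", "Version": "11.3.8"})
print(reconcile({"source1": ds1, "source2": ds2}))
```

Normalization is what makes the merge possible at all: until "MSWord" and "MSWD" both become "Microsoft Word", reconciliation cannot tell the two records describe the same software.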


Performance Considerations

  • Establish an Integration Server

  • In many cases where performance is an issue, poor database configuration and/or indexing is the cause

  • Consider indexing attributes used in Identification rules

  • Check query plans, review and correct them

  • Are DB backups happening when Reconciliation jobs are running?

  • Use qualifications whenever possible to filter your data

  • “Fine tune” thread settings and use Private Queue
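The indexing advice can be illustrated with an in-memory SQLite table standing in for the real CMDB database (actual table and column names in your database will differ):

```python
# Sketch: index the columns that identification rules qualify on, then use
# EXPLAIN QUERY PLAN to confirm the lookup is an index seek, not a full
# table scan. SQLite is a stand-in for the real CMDB database here.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cmdb_ci (
    instance_id   TEXT PRIMARY KEY,
    serial_number TEXT,
    host_name     TEXT)""")

# Index the attributes your identification rules filter on.
con.execute("CREATE INDEX ix_ci_serial ON cmdb_ci (serial_number)")
con.execute("CREATE INDEX ix_ci_host ON cmdb_ci (host_name)")

# A typical identification lookup: equality match on an indexed attribute.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT instance_id FROM cmdb_ci WHERE serial_number = ?",
    ("S1",)).fetchall()
print(plan)  # the plan mentions ix_ci_serial
```

The same check translates to production databases: inspect the query plan for the identification queries, and add indexes until the scans disappear.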


Summary

  • Don’t do a standalone CMDB project; the CMDB is a means to an end

  • Approach the CMDB project from the consumer side, not the provider side

  • Don’t boil the ocean

    • Start small, prove value, and iterate

    • There is incremental value at every step along the way

  • Normalize before you reconcile

  • Always reconcile and use sandbox for manual editing

  • Service orientation is where real value lies; model services NOW



Q & A

Anand Ahire

Principal Product Manager – Atrium Core

[email protected]


You are Allowed to Extend the CDM – BUT DON’T

  • Do EVERYTHING possible to design using the CMDB default data model

    • There is a mapping paper on the web site to help with mapping decisions

    • https://communities.bmc.com/docs/DOC-16471

  • If there is a request to extend, evaluate carefully whether there really is no existing class that the data could appropriately map into

  • If you do extend the model, make sure you follow best practices

    • Model for the CONSUMER not the provider

    • Add as few extensions as possible

    • Consider that not all consumers can see a new class


References

  • Hardware Requirements and Sizing – Documentation

  • Best Practices for CMDB Design & Architecture – Webinar

  • What CIs should I push into my CMDB? – Documentation

  • Understanding Atrium Integrator – Webinar

  • Understanding Normalization and the Product Catalog – Webinar

  • Importing custom Product Catalog data – Documentation

  • Understanding Reconciliation – Webinar

  • Common Data Model and mapping data to CMDB – Documentation

  • Fine tuning ARS for CMDB applications like NE, RE, etc. – KA

  • https://docs.bmc.com/docs/display/public/ac81/Investigating+CMDB+Data+Issues

