
System Management Architecture



  1. System Management Architecture
  • The pillars:
    • Central Configuration Database
    • Node Configuration Management
    • Base system installation
    • Software Package Management
    • Monitoring

  2. Node Configuration Management

  3. [Architecture diagram. Labels: GUI, Node Configuration Management, Client, Server, Component Access API, High Level API, HLDL, Low Level API, PAN, DBM, XML, Notification + Transfer]

  4. Node Configuration Management (NCM)
  • Client software running on the node which takes care of "implementing" what is in the configuration profile
  • Sits on top of the low-level config access library (NVA-API)
  • Modules:
    • "Components"
    • Component support libraries
    • Framework

  5. NCM: Components
  • "Components" (like SUE "features" or LCFG "objects") are responsible for updating local config files and for notifying services if needed
    • Components register their interest in configuration entries or subtrees, and get invoked in case of changes
  • Components only configure the system
    • Usually, this implies regenerating and/or updating local config files (eg. /etc/sshd_config)
    • One method only: Configure()
  • Use standard system facilities (SysV scripts) for managing services
    • Reuse the standard facilities: most services come with a SysV script
    • Components can notify services using SysV scripts when their configuration changes
  • Components are kept small & simple

  6. Component example

    sub Configure {
        my ($self) = @_;

        # access configuration information
        my $config = NVA::Config->new();
        my $arch = $config->getValue('/system/architecture');   # low-level API
        $self->Fail("not supported") unless ($arch eq 'i386');

        # (re)generate and/or update local config file(s)
        open(MYCONFIG, '>', '/etc/myconfig');
        # ...

        # notify affected (SysV) services if required
        if ($changed) {
            system('/sbin/service myservice reload');
            # ...
        }
    }

  7. Existing component taxonomy
  • Components can be classified into basic, service-specific and Grid components
    • Basic components: manage basic system configurations
      • eg. network, NFS, printers, security, time…
    • Service-specific components: for batch nodes and servers (provided by service managers)
      • eg. LSF, PBS, Castor, accounting, root mail, …
    • Grid components (provided by Grid middleware WPs)
      • eg. Globus, GDMP, LCAS, Resource Broker…
  • Existing components are available both from SUE and LCFG
    • SUE features: basic and service-specific
    • LCFG components: basic and Grid components
  • Need a complete classification (which components to port, which ones to rewrite)
    • Functionality -> component

  8. NCM: Component support libraries
  Component support libraries for recurring tasks:
  • Logging and error reporting
  • Template processor (for fast config file generation)
  • SUE sysmgt libraries
    • eg. check/edit files, system information, regexps, crontab, (x)inetd…
  • Monitoring integration
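To illustrate the kind of recurring tasks these libraries cover, here is a minimal, self-contained Perl sketch of a logger and a template processor; the helper names and the placeholder syntax are made up for illustration and are not the actual library interfaces.

    #!/usr/bin/perl -w
    use strict;

    # simple logger: timestamped messages (the real library would also write
    # to a per-component log file and support error levels)
    sub log_msg {
        my ($level, $msg) = @_;
        print scalar(localtime), " [$level] $msg\n";
    }

    # simple template processor: replace <%key%> placeholders with values
    sub process_template {
        my ($template, %values) = @_;
        $template =~ s/<%(\w+)%>/defined $values{$1} ? $values{$1} : ''/ge;
        return $template;
    }

    # a component would use such helpers to regenerate eg. /etc/sshd_config
    my $template = "Port <%port%>\nPermitRootLogin <%rootlogin%>\n";
    my $contents = process_template($template,
                                    port      => 22,
                                    rootlogin => 'no');
    log_msg('INFO', "generated sshd configuration:\n$contents");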

  9. NCM: framework
  A lightweight framework (cdisp) glues the components to the Configuration Client. Overall functionality:
  • Register which components are interested in which configuration entries/subtrees
  • Upon a config update, look up the components which need to be invoked
  • Order the component invocations according to dependencies
  • Invoke the components
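A minimal Perl sketch of the dispatch logic described above; the registration table, the dependency list and the component names are hard-coded examples, whereas the real cdisp gathers them from the components and the configuration profile.

    #!/usr/bin/perl -w
    use strict;

    # which components registered interest in which configuration subtrees
    my %registrations = (
        '/system/network' => ['network'],
        '/software/ssh'   => ['ssh'],
        '/system/users'   => ['ssh', 'accounts'],   # both care about users
    );

    # declared dependencies: component => components that must run first
    my %depends = ( ssh => ['network'] );

    # subtrees reported as changed in the latest profile update
    my @changed = ('/system/network', '/system/users');

    # 1. look up the components that need to be invoked
    my %to_run;
    for my $path (@changed) {
        $to_run{$_} = 1 for @{ $registrations{$path} || [] };
    }

    # 2. order invocations so that dependencies run first (simple DFS)
    my (@ordered, %seen);
    sub visit {
        my ($c) = @_;
        return if $seen{$c}++;
        visit($_) for grep { $to_run{$_} } @{ $depends{$c} || [] };
        push @ordered, $c;
    }
    visit($_) for sort keys %to_run;

    # 3. invoke each component (printed here instead of calling Configure())
    print "would invoke Configure() on: $_\n" for @ordered;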

  10. [NCM architecture diagram. Labels: server, XML, client, Node Config Client, DBM cache, registration & notification, Node Config Mgmt (cdisp), Components, Invocation / Configure(), Component libs (SUE sysmgt, logging, template processor, monitoring interface), "low-level" API (NVA API), "high-level" API (CCConfig)]

  11. Base System Installation

  12. Base System Installation
  • Nodes are installed using the default system installer
    • Use the standard installation infrastructure (DHCP, install servers, repositories)
  • Configuration information is stored in the CDB; generate ks or js files from it
    • An NCM component generates ks/js files out of node profiles
    • Template ks/js files are used for substituting partition, network, timezone, and other miscellaneous information
  • Keep the installation simple: the installation server should not need any tweaking except adding the NCM-required packages
    • Can be used in combination with the existing environment, eg. AIMS for updating NFS & DHCP servers
    • Site-specific databases should not be accessed directly, only the CDB (instead, update the CDB with LANDB and CCDB information)
  • Hook into Node Configuration Management afterwards ('post-install')
    • Start the NCM, which downloads the latest node profile
    • It will bring the machine to the state reflected in the CDB
    • (Monitoring can be activated at this moment)
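As a rough illustration of the ks/js generation step, the following Perl sketch fills a kickstart template from node profile values. The placeholder syntax, the profile values and the kickstart directives shown are assumptions for illustration; a real component would read the values through the NVA-API.

    #!/usr/bin/perl -w
    use strict;

    # values that would come from the node profile in the CDB (via the
    # NVA-API); hard-coded here for illustration
    my %profile = (
        hostname => 'lxplus043.cern.ch',
        ip       => '137.138.4.43',
        timezone => 'Europe/Zurich',
        rootdisk => 'hda',
    );

    # kickstart template with <%key%> placeholders (illustrative subset)
    my $template = <<'EOT';
    lang en_US
    timezone <%timezone%>
    network --bootproto static --ip <%ip%> --hostname <%hostname%>
    clearpart --all
    part / --size 4096 --ondisk <%rootdisk%>
    EOT

    # substitute the profile values into the template
    (my $ks = $template) =~ s/<%(\w+)%>/$profile{$1}/g;

    # the generated file would then be published on the installation server
    print $ks;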

  13. Software Package Management

  14. Software Package Management

  15. Software Distribution Architecture
  • A packaging tool takes care of installing/deinstalling/upgrading packages on a computer node and keeps an inventory of currently installed packages
  • The packages themselves are stored on a managed Software Repository accessible via multiple protocols (eg. HTTP, FTP, shared file system, …)
  • Information on which packages are to be deployed on which nodes, and which packages are available on which repositories, is kept in the fabric-wide Configuration Database
  • A 'glue' application (running on the target nodes) computes the necessary package upgrade operations and invokes the packaging tool
  • (The SW generation and packaging process is outside the scope.)

  16. Software Package Manager (SPM)
  • The SPM is the glue application. Functionality:
    • Compares the packages currently installed on the local node with the packages listed in the configuration
    • Computes the necessary install/deinstall/upgrade operations
    • Invokes the packager with the right transaction set of operations
  • The SPM is driven via a local configuration file
    • For batch nodes/servers: a component generates/maintains this config file out of CDB information
    • For desktops: possible to write a GUI for locally editing the config file
  • The SPM core is independent of which packaging format is used
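A minimal Perl sketch of the comparison step, with a hard-coded desired list standing in for the local configuration file and a hard-coded inventory standing in for the packager's database (eg. rpm); package names and versions are examples only.

    #!/usr/bin/perl -w
    use strict;

    my %desired = (           # name => version, from the local config file
        'openssh' => '3.1p1-3',
        'kernel'  => '2.4.9-31.1.cern',
        'mozilla' => '0.9.9-1',
    );
    my %installed = (         # name => version, eg. from `rpm -qa`
        'openssh' => '2.9p2-7',
        'kernel'  => '2.4.9-31.1.cern',
        'pine'    => '4.44-1',
    );

    my @transaction;
    for my $pkg (sort keys %desired) {
        if (!exists $installed{$pkg}) {
            push @transaction, "install $pkg-$desired{$pkg}";
        } elsif ($installed{$pkg} ne $desired{$pkg}) {
            push @transaction, "upgrade $pkg-$desired{$pkg}";
        }
    }
    for my $pkg (sort keys %installed) {
        push @transaction, "delete $pkg-$installed{$pkg}"
            unless exists $desired{$pkg};
    }

    # the resulting transaction set would be handed to the packager (rpmt)
    print "$_\n" for @transaction;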

  17. Software Package Manager (II)
  • Packager: the standard system packaging format is used (rpm for Linux, pkg for Solaris)
    • rpmt (for rpm 'transactional') is used for transaction handling
  • Packager (rpmt) functionality:
    • Reads the operations (transaction)
    • Downloads new packages from the Repository
    • Orders the transaction operations taking dependency information into account
    • Executes the operations (installs/removes/upgrades)

  18. [SPM architecture diagram. Labels: Central Config DB, Component, "desired" configuration, Local Config file, GUI (desktops), SPM, external, SPM Repository, packages / package files (RPM, pkg), filesystem, HTTP(S), NFS, FTP, rpmt, transaction set, RPM db, installed pkgs]

  19. SPM and the CDB
  • SW packages are modelled in the Global Schema as follows:
    • Repository configuration: list of available repositories, repository directories, and packages
    • Node configuration: list of used repositories, and list of selected packages
    • (see next slides)
  • The CDB template inclusion mechanism allows defining multiple default profiles which can then be refined
    • Production-packages-rh72.tpl -> lxplus7.tpl -> lxplus043
    • More flexibility than the current ASIS profiles (Certified, InProduction)

  20. SPM and Global Schema (I)
  SW repository structure (maintained by repository managers, updated automatically in the CDB):

    /sw/known_repositories/Arep/url = (host, protocol, prefix dir)
                               /owner =
                               /extras =
                               /directories/dir_name_X/path = (asis)
                                                      /platform = (i386_rh61)
                                                      /packages/pck_a/name = (kernel)
                                                                     /version = (2.4.9)
                                                                     /release = 31.1.cern
                                                                     /architecture = (i686)
                                           /dir_name_Y/path = (sun_system)
                                                      /platform = (sun4_58)
                                                      /packages/pck_b/name = (SUNWcsd)
                                                                     /version = 11.7.0
                                                                     /release = 1998.09.01.04.16
                                                                     /architecture = (?)

  (The first version won't have this information in the CDB)

  21. SPM and Global Schema (II)
  Node information (maintained by the node administrator):

    /sw/used_repositories/0/rep_name_A =
                         /1/rep_name_B =
    /sw/packages/package_name/version =
                             /arch =
                             /flags =
    /sw/packages/package_name/version =
                             /arch =
                             /flags =

  22. Software Package Management
  • Package config flags:
    • Ignore locally installed packages (useful for desktops)
      • Local exception list, or backup of the previous configuration
    • Reboot when the package is upgraded
      • eg. glibc updates
    • Do not upgrade the package when the node is in production (eg. runlevel X)
      • Or: upgrade this package only when the machine is starting up
    • Standard RPM flags (no dependencies, force, no pre/post installs, etc.)
  • Issue: per-package flags are not supported by the standard RPM libraries … but they are by rpmt ;-)
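A speculative Perl sketch of how such per-package flags could influence the transaction set; the flag names and the in-production test are assumptions, not the actual SPM configuration syntax.

    #!/usr/bin/perl -w
    use strict;

    my $in_production = 1;   # eg. derived from the current runlevel

    # per-package flags (names are illustrative only)
    my %flags = (
        'glibc'  => { reboot => 1 },   # reboot after upgrading this package
        'kernel' => { noprod => 1 },   # do not touch while in production
    );

    # candidate operations as computed by the comparison step
    my @candidates = ('upgrade glibc-2.2.4-24', 'upgrade kernel-2.4.9-31.1.cern');

    my (@transaction, $need_reboot);
    for my $op (@candidates) {
        my ($pkg) = $op =~ /^\w+ (\S+?)-\d/;      # extract the package name
        my $f = $flags{$pkg} || {};
        next if $f->{noprod} && $in_production;   # defer until the next boot
        push @transaction, $op;
        $need_reboot = 1 if $f->{reboot};
    }

    print "$_\n" for @transaction;
    print "reboot required\n" if $need_reboot;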

  23. Desktop Computers
  • Difference between desktop computers and batch/server nodes:
    • Local sysadmins should be allowed to install/remove packages "by hand"
    • On batch/server nodes, only the packages in the configuration file have to be kept
  • Local packages have to be preserved on desktops. How to do it? Possibilities:
    • "only upgrade existing packages but do not install new ones" in the local config
    • "do not upgrade this package" in the local config
    • Flag for marking certain packages as "mandatory" or "unwanted" (eg. security upgrades)
  • Typical use cases: 1. upgrade 'openssh' in any case, and 2. upgrade 'mozilla' (5 rpm's) only if installed
  • Desktop computers are not our primary target, but we have to seek a common solution if possible
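A possible way to combine these ideas for desktops, sketched in Perl: mandatory packages are always brought to the configured version, other configured packages are only upgraded if already installed, and no delete operations are generated so locally installed packages survive. Flag names, package names and versions are illustrative.

    #!/usr/bin/perl -w
    use strict;

    my %desired = (
        'openssh' => { version => '3.1p1-3', mandatory => 1 },  # security upgrade
        'mozilla' => { version => '0.9.9-1' },                  # only if installed
    );
    my %installed = ( 'openssh' => '2.9p2-7', 'xv' => '3.10a-15' );  # local extras

    my @transaction;
    for my $pkg (sort keys %desired) {
        my ($want, $mandatory) = @{ $desired{$pkg} }{qw(version mandatory)};
        if (exists $installed{$pkg}) {
            push @transaction, "upgrade $pkg-$want"
                if $installed{$pkg} ne $want;
        } elsif ($mandatory) {
            push @transaction, "install $pkg-$want";
        }
        # not installed and not mandatory: skip (use case 2, mozilla)
    }
    # note: no delete operations are generated, so local packages are kept

    print "$_\n" for @transaction;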

  24. Others…
  • Coding standards and foundation class library: coordinate with the configuration task
    • Libs for file manipulation, logging, error processing, application framework
  • Reuse existing software
    • SUE libraries and features
    • LCFG component libraries and components
    • ASIS libs
  • Platform independence
    • Keep the design open for a port to Solaris. Isolate OS- and packager-specific parts well
    • We might switch to Debian (dpkg) in the future ;-)
  • A first implementation might not take all of the above into account…

  25. Benefits over current LCFG
  • New HLDL language
  • New configuration access interface
    • Made available in reduced form for LCFG (edg-lcfg-nvaapi)
    • Adapting LCFG to HLDL or the global schema proved too complex
  • Global Configuration Schema
    • Agreed in WP4
  • Use standard tools/components whenever possible
    • Installer
    • SysV init scripts
  • Architecture is more an evolution than a complete redesign of LCFG
    • We take the good and remove the less good things ;-)
  • Functionality is reduced at the beginning
    • We need a base system working soon
