
ATST Software Conceptual Design



  1. ATST Software Conceptual Design ATST Conceptual Design Review 26 Aug 2003

  2. Presentation Structure • Introduction • Approach • Things to watch for • Requirements • Functional design overview • Technical design overview • Virtual Instrument Model

  3. Design Architectures (diagram) • Functional Design: captures requirements and behavior • Technical Design: captures the implementation

  4. Special points • Configurations • How observations are modeled in system • Virtual Instrument Model • Provides flexibility in laboratory-style operations • Device Model • Uniform implementation and control of devices • Container/Component Model • Flexibility in a distributed environment

  5. Key Science Requirements • Combine multiple post-focus instruments • Operate simultaneously • Coordinated observing with remote sites • Match flexibility and adaptability achieved by DST • Support ‘laboratory-style’ operation (modular instruments) • Support visitor instruments • < 30 min switching time for the active instrument set • > 40 year lifetime • Massive data rates • Track on/off solar disk (up to 2 sr off)

  6. Software Requirements (diagram) • Science requirements, tempered by reality and common sense, yield the software requirements, which in turn drive the software design.

  7. What types of requirements are there? • Functional – what must the system do? • Performance – how well must the system run? • Interface – how does the system talk with the outside? • Operational – how is the system to be used? • Documentation – how is the system to be described? • Security – who/what can do what/when? • Safety – what can’t go wrong?

  8. Functional Design • Purpose • Focus on behavior and structure (what and why) • Measure against requirements • Use cases/Overall design • Information flow • Principal Systems • Observatory Control System (OCS) • Data Handling System (DHS) • Telescope Control System (TCS) • Instrument Control System (ICS)

  9. Overall Design Approach • Want to adapt a conventional modern observatory software architecture to the special needs of the ATST • Avoid re-invention, but… • Concentrate on multiple instruments operating simultaneously in a laboratory environment • Flexibility is a key requirement of the functional design • Overall functional design derived from ALMA, Gemini, and SOLIS, with consideration from other projects as well. • All share a common overall structure (with wildly different implementations) • All highly distributed with strong communications infrastructure

  10. Overall Design (diagram) • User interfaces layered over the core software

  11. Virtual Instrument (diagram) • Observations and configurations flow into an experiment assembled from components and devices, with device drivers controlling the hardware • Data flows out to quick look, engineering, and the archives

  12. Operating Characteristics • Distributed architecture using a communications bus • Components can be placed anywhere (and moved as needed) • Device placement may be constrained by their device drivers • Components locate each other by name, not location (sketched below) • Language and other environment choices are independent of behavior • Behavior separated from control by a control surface • Communications bus • Multiple channels • Provides inter-language peer-to-peer communication • Locates components and provides connection handles • Monitors connectivity (detects communication failures) • High-speed, robust
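
A minimal sketch of the name-based lookup idea follows. Everything here (the NameService and Connection types, the "atst.tcs.mcs" name) is invented for illustration and is not the ATST middleware API; it only shows components registering under hierarchical names and peers resolving connection handles by name rather than by location.

```java
// Hypothetical sketch of name-based component lookup over the communications bus.
// None of these types are the real ATST middleware API; they only illustrate addressing
// components by hierarchical name instead of by host or port.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Connection {
    void send(String message);              // one channel of the communications bus
}

class NameService {
    private final Map<String, Connection> registry = new ConcurrentHashMap<>();

    // A component registers itself under a hierarchical name when it starts.
    void register(String name, Connection handle) {
        registry.put(name, handle);
    }

    // Clients resolve peers by name; the physical location is irrelevant to them.
    Connection resolve(String name) {
        Connection handle = registry.get(name);
        if (handle == null) {
            throw new IllegalStateException("No component registered as " + name);
        }
        return handle;
    }
}

public class LookupDemo {
    public static void main(String[] args) {
        NameService ns = new NameService();
        ns.register("atst.tcs.mcs", msg -> System.out.println("MCS received: " + msg));
        // The caller never needs to know where the Mount Control System actually runs.
        ns.resolve("atst.tcs.mcs").send("trajectory stream starting");
    }
}
```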

  13. Observatory Control System (OCS) • Roles: • Construct sequence of configurations for each observation • Coordinate operation of TCS, ICS, and DHS • Provide user interfaces for operations • Provide services for applications • Provide ATST Common Software for all systems

  14. Data Handling System • Roles • Accumulate science data (including header information) • handle data rates • handle data volumes • Analyze data for system performance (quick look) • Provide archival, retrieval and distribution services

  15. Telescope Control System Requirements • Coordination and control of telescope components • Interface to the Observatory Control System • Configuration management • Safety interlock handling • Ephemeris, pointing, and tracking calculations. • Time base control and distribution • Pointing models • Target trajectory distribution • Image quality • Active and adaptive optics management • Thermal management

  16. Telescope Control System (TCS) • Subsystems • M1 Control • M2 Control • Feed Optics Control • Adaptive Optics Control • Mount Control • Enclosure Control • General Systems • Time base • Global Interlock • Thermal Management • (Diagram: the TCS coordinates the M1, M2, feed optics, adaptive optics, mount, and enclosure subsystems together with acquisition/track/guidance, image quality, thermal management, and the global interlock.)

  17. Telescope Control System Data Flows • High-Level Data Flows • OCS configurations • ICS configurations • TCS events and archiving • Low-Level Data Flows • Subsystem configurations • Trajectories • Image quality data • Interlock events • (Diagram: configurations and events flow between the OCS, ICS, and TCS; the TCS distributes subsystem configurations, trajectories, image quality data, and global interlock signals to the FOCS, ECS, MCS, M1CS, M2CS, and AOCS.)

  18. Telescope Control System • Virtual Telescope Model • Tip of the hat to Pat Wallace (DRAL). • Several points of view (Instruments, AO WFS, aO WFS). • Pointing and Tracking • Off-axis telescope should be irrelevant. • Tracking will be at solar rate. • Coordinate systems include ecliptic and heliocentric. • Limited references to build pointing map (1 object/6 months). • Open-loop tracking for coronal and non-AO work. • Closed loop tracking uses AO as guider. • Thermal Management • Daily thermal profile. • Monitor heat loads and dome flushing.

  19. Mount Control System Pointing and Tracking • All ephemeris and position calculations are done by the TCS. • The MCS follows a 20 Hz trajectory stream provided by the TCS. • This stream consists of (time, position) values that the MCS must follow. • Current position, demand position, torques, and rates are output at 10 Hz. Thermal Management • The MCS must keep the mount structure at ambient temperature. • May be provided by a separate controller (TBD). Interlocks • Interlock conditions cause a power shutdown, brakes on, cover closed. • Caused by: GIS, over-speed, over-torque, mechanical obstruction (locking pin, manual drive crank, liftoff failure). • Provided by a separate controller (PLC-based) that is always operational.
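
The 20 Hz (time, position) trajectory stream described above suggests a simple demand interpolator. The sketch below is illustrative only, assuming linear interpolation between consecutive TCS samples; the TrajectoryFollower class and its names are invented for this example and are not part of the MCS design.

```java
// Illustrative only: a demand interpolator for a (time, position) trajectory stream.
// The class and field names are invented for this sketch; units are seconds and degrees.
public class TrajectoryFollower {
    record Sample(double time, double position) {}

    private Sample previous;
    private Sample next;

    // Called at 20 Hz as each new sample arrives from the TCS (must be called before demandAt).
    public void accept(Sample sample) {
        previous = next;
        next = sample;
    }

    // Called by the faster servo loop: linearly interpolate the demand between samples.
    public double demandAt(double now) {
        if (previous == null) {
            return next.position();
        }
        double span = next.time() - previous.time();
        double fraction = Math.min(1.0, Math.max(0.0, (now - previous.time()) / span));
        return previous.position() + fraction * (next.position() - previous.position());
    }

    public static void main(String[] args) {
        TrajectoryFollower follower = new TrajectoryFollower();
        follower.accept(new Sample(0.00, 10.000));
        follower.accept(new Sample(0.05, 10.002));     // next 20 Hz sample from the TCS
        System.out.println(follower.demandAt(0.025));  // ~10.001, halfway between samples
    }
}
```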

  20. Mount Control System Servo Requirements • Pos = 3 arcsec • Vslew = TBD °/sec • Vtrk = TBD °/sec • Aslew = TBD °/sec² • Atrk = TBD °/sec² • Jitter = TBD °/sec • (Diagram: 20 Hz azimuth, elevation, and coudé trajectories are summed and smoothed, with M2 bias corrections applied at 0.01 Hz.)

  21. M1 Mirror Control System • Axial Support: blending and averaging AO information, applying force map, correcting servo feedback. • Mirror Position: detecting translation and rotation errors, (feedback to actuators?). • Thermal Management: controlling temperature, applying thermal profile estimates. • Controller: interfacing to TCS & GIS, simulator. • (Diagram: the M1 controller interfaces the M1CS to the TCS, GIS, and AOCS and manages the axial support actuators and force sensors, translation/rotation sensors, aperture stop, interlocks, force map, thermal profile, blowers/coolers/exchangers, and a simulator.)

  22. M2 Control System • Tip-Tilt-Focus • Base configuration set by TCS. • Corrections from AOCS in 10 Hz stream. • Blending data. • Conversion into off-axis translation and rotation (X, Y, Z). • Thermal Management • Secondary Mirror • Heat Stop • Lyot Stop • (Diagram: the TCS supplies the base configuration; the AOCS supplies the 10 Hz aO & AO TTF stream and a 0.01 Hz tip-tilt offset; the M2CS performs the blending and conversion and interfaces to the MCS.)

  23. Feed Optics Control System Small Mirrors • M3: Gregorian feed. • M7: Coudé feed. Other Optics • aO WFS beamsplitter • Polarizers • Filters? Thermal Management • Mirror Cooling • Tube Ventilation • Coudé entrance

  24. Adaptive Optics Control System (diagram) • The AOCS closes ~2 kHz loops between the aO and AO wavefront sensors, the tip-tilt mirror (TTM, M6), and the deformable mirror (DM, M5) • Corrections are offloaded to the Mount, M1, and M2 at 10 Hz, 0.1 Hz, and 0.01 Hz • The OCS and DHS are served on demand over command, event, and data channels

  25. Adaptive Optics Control System • Offload rates • M2 tip-tilt bias offload ~0.01 Hz • TTF bias offload ~10 Hz • Low-order figure offload ~0.1 Hz • (Diagram: the combined adaptive/active optics control system (aO/AOCS) runs the adaptive optics loop between its WFS, the deformable mirror (M5), and the tip-tilt mirror (M6) at ~2 kHz, runs the active optics loop at 10 Hz, and offloads to the Mount, M1, and M2.)

  26. Enclosure Control System • Azimuth: drives, brakes, encoders, sensors. • Shutter: drive, brakes, encoders, sensors, sun shade. • Auxiliary: vent gates, thermal controller, cranes. • Controller: interface, interlock, simulator. • (Diagram: the enclosure controller interfaces the ECS to the TCS, GIS, and MCS and manages the azimuth and shutter drives, brakes, encoders, and sensors, the sun shade, vent gates, thermal management, cranes, interlocks, and a simulator.)

  27. Instrument Control System Requirements • Laboratory/Experiment Style Observing • Flexible instrument configuration • Use of multiple components • Instruments • Facility instruments follow all ATST interfaces • Visitor instruments obey a minimal set of ATST interfaces • Multiple instruments must work together. • Telescope • Control the beam position • Control modulators, AO, and other image modifiers • Development • Several development locations with numerous partners.

  28. Instrument Control System • Management • Controls the lifecycle of virtual instruments • Allocates components • Interface • Controls configurations from the OCS • Provides user interfaces • Presents TCS and DHS as resources • Development • Provides standard instrument as template • (Diagram: the ICS sits between the OCS, TCS, and DHS and manages virtual instruments such as NIrSP, ViSP, and VisTF, each assembled from the pool of available components.)

  29. Instrument Control System Management Lifecycle (sketched below) • 1. Select from list of components • 2. Build a VI or retrieve an existing VI • 3. Register VI with ICS • 4. Submit configuration to OCS • 5. OCS schedules configuration • 6. ICS enables your VI • 7. VI takes control of components • 8. Interact with your VI [Engineering Mode]
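
The numbered lifecycle above can be read as the short sketch below. Every class and method in it (the Ics and Ocs stubs, the VirtualInstrument record, the configuration name) is hypothetical and exists only to show the ordering of the steps, not any actual ICS or OCS interface.

```java
// Hypothetical walk-through of the eight lifecycle steps. Every type here is a stand-in
// invented for illustration; the real ICS and OCS interfaces are not defined in the slides.
import java.util.List;

public class ViLifecycleSketch {

    record VirtualInstrument(String name, List<String> components) {}

    static class Ics {
        List<String> availableComponents() { return List.of("filterWheel", "camera", "modulator"); }
        void register(VirtualInstrument vi) { System.out.println("ICS registered " + vi.name()); }
        void enable(VirtualInstrument vi)   { System.out.println("ICS enabled " + vi.name()); }
    }

    static class Ocs {
        void submit(String configuration, Runnable whenScheduled) {
            System.out.println("OCS scheduled " + configuration);
            whenScheduled.run();                       // pretend the OCS runs it immediately
        }
    }

    public static void main(String[] args) {
        Ics ics = new Ics();
        Ocs ocs = new Ocs();
        // Steps 1-2: select components and build (or retrieve) a virtual instrument.
        VirtualInstrument vi = new VirtualInstrument("MyVI", ics.availableComponents());
        // Step 3: register the VI with the ICS.
        ics.register(vi);
        // Steps 4-5: submit a configuration to the OCS, which schedules it.
        // Steps 6-8: when it runs, the ICS enables the VI, which then controls its components.
        ocs.submit("MyVI.config.001", () -> ics.enable(vi));
    }
}
```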

  30. Technical Design Overview • Purpose • Focus on implementation issues (how) • Must allow implementation of functional design • Identify options, make choices • Tiered hierarchy • Isolates technology layers • Allows technology replacement

  31. ATST Technical Architecture (tiered diagram, top to bottom) • Applications – admin apps, app framework, scripting support, UIF libraries • APIs & Libraries and Integrated APIs/Tools – data handling support, high-level APIs/tools, container support, alarm system, archiving system, astro libraries, services, device drivers, core support, development tools • Base Tools – communications middleware

  32. Communications • Communications Bus • Notification Service • Synchronous Communications • Asynchronous Communications

  33. Services • Logging • Events • Alarms • Connection • Persistent Stores

  34. Interfaces • Logically separated into three classes: • Lifecycle (startup/reset/shutdown of components) • Functional (command/action/response - behavior) • Service access (connection, log, event, alarm, stores) • Functional interfaces define accessible behavior • Physically extend the lifecycle interface • Narrow interfaces (few commands used) • All devices use a common interface • Formally specified and enforced using communications middleware
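
One way to picture the separation of lifecycle, functional, and service-access interfaces is the Java sketch below. The interface and method names are illustrative assumptions, not the formally specified ATST interfaces; the point is that a narrow functional interface extends the lifecycle interface while service access stays separate.

```java
// Sketch of the three interface classes from this slide, expressed as Java interfaces.
// These signatures are illustrative assumptions, not the formally specified ATST interfaces.
interface Lifecycle {
    void startup();
    void reset();
    void shutdown();
}

// A functional interface physically extends the lifecycle interface and stays narrow:
// a device exposes only the few commands it actually needs.
interface DeviceFunctional extends Lifecycle {
    void submit(String configurationName);  // command: triggers an action
    String get(String attributeName);       // query: read an attribute
}

// Service access (connection, log, event, alarm, stores) is reached separately,
// not through the device's functional interface.
interface ServiceAccess {
    void log(String message);
    void postEvent(String eventName, String value);
    void raiseAlarm(String alarmName);
}
```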

  35. Container/Component Model • Industry standard approach to distributed operations (.NET, EJB, CORBA CCM, etc.) • Components implement functionality • Containers provide services to components and manage component lifecycle • (Diagram: a component inside a container exposes its functional interface; the container provides the service interface and drives the component’s lifecycle interface.)

  36. Containers • ATST supports two types of Containers • Porous – Components provide direct access to their functional interfaces • Tight – Containers wrap component functional interfaces behind an interface wrapper

  37. Components • Hierarchically named • Can (currently) be either Java or C/C++ • Three lifecycle models for components • Eternal: created on system start, run to system stop • Long-lived: created on demand, run until told to stop • Transient: exist only long enough to satisfy request • Exist only inside containers • Common base interface allows manipulation by container • Critical attributes maintained in separate persistent store
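
A compact illustration of the container/component relationship and the on-demand (long-lived) lifecycle model follows. The ComponentBase and Container types are invented for this sketch; they only show a container creating hierarchically named components through a common base interface and shutting them down again.

```java
// Illustrative container/component sketch; the type names are invented. The point is that
// every component shares one base interface, so a container can manage any of them uniformly.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

interface ComponentBase {
    void startup();
    void shutdown();
}

class Container {
    private final Map<String, ComponentBase> running = new ConcurrentHashMap<>();

    // Long-lived lifecycle model: create a hierarchically named component on demand,
    // start it, and keep it running until it is explicitly removed.
    ComponentBase activate(String name, Supplier<ComponentBase> factory) {
        return running.computeIfAbsent(name, n -> {
            ComponentBase component = factory.get();
            component.startup();
            return component;
        });
    }

    void deactivate(String name) {
        ComponentBase component = running.remove(name);
        if (component != null) {
            component.shutdown();
        }
    }

    public static void main(String[] args) {
        Container container = new Container();
        container.activate("atst.ics.filterWheel", () -> new ComponentBase() {
            public void startup()  { System.out.println("filterWheel started"); }
            public void shutdown() { System.out.println("filterWheel stopped"); }
        });
        container.deactivate("atst.ics.filterWheel");
    }
}
```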

  38. Observations - 1 • Program of configurations • Configurations flow through the system: status is updated as they are operated on by the system • Components and devices respond to configurations • Configurations implemented as sets of attributes (name/value pairs) • Configurations are uniquely named and permanently archived (as are observations)

  39. Observations - 2 • Observations constructed using Observing Tool • Choice of OT is TBD (ALMA, Gemini, STScI, others) • External representation is XML • Configurations may be composed from other configurations and components may decompose a configuration into a set of smaller configurations

  40. Observations - 3 • Observation managers track sets of observations as they are operated on by the system • Components tag header information so that the component that generated it can be identified

  41. Real-time Systems • Laboratory-style operations • Rapid setup and reconfiguration • Engineering observations • Inexpensive • Existing software implementation or design • Rapid deployment • Code reuse • Contractual design and development • Easily partitioned work packages • Common infrastructure and tools • Simulators

  42. Device Model • The device model is used by all real-time components • Other models are available for OCS services (log, notification, etc.). • It provides a common interface to: • High-level objects (OCS and DHS). • Other devices. • Low-level hardware drivers. • Global services (log, event, database, etc.). • It can be inherited by more complex devices. • It forces all systems to obey the command/action model. • It operates in a peer-to-peer environment.

  43. Device Model • Devices have common properties: • Attributes: state, health, debug, and initialized. • Command Interface: offline, online, start, stop, pause, resume, get, set. • Services Interfaces: event, database. • Devices have common operations: • Initialize, power-up, check parameters, change state, execute actions, handle errors, respond to queries, receive and generate asynchronous events. • Devices have common communications: • Name/connection registration. • Event listeners and posters. • Databases, log, and alarm services.

  44. Device Command Interface • Devices have a simple command interface: • offline, online, start, stop, pause, resume, set, and get. • Each command moves the device state machine to another state: • States are: OFF, IDLE, BUSY, PAUSED, FAULT • (State diagram: online moves OFF to IDLE, start moves IDLE to BUSY, pause moves BUSY to PAUSED, resume moves PAUSED back to BUSY, stop/done returns the device to IDLE, offline returns it to OFF, and faults move it to FAULT.)
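
Expressed as code, the state machine above might look like the sketch below. The transition table is read off the slide; treating offline as legal from any state and stop from PAUSED is an assumption, and the class itself is illustrative rather than ATST code.

```java
// Minimal sketch of the device state machine on this slide. The transition table is read off
// the diagram; treating offline as legal from any state and stop from PAUSED is an assumption.
public class DeviceStateMachine {
    enum State { OFF, IDLE, BUSY, PAUSED, FAULT }

    private State state = State.OFF;

    public synchronized State apply(String command) {
        state = switch (command) {
            case "online"       -> (state == State.OFF)    ? State.IDLE   : fault(command);
            case "offline"      -> State.OFF;
            case "start"        -> (state == State.IDLE)   ? State.BUSY   : fault(command);
            case "pause"        -> (state == State.BUSY)   ? State.PAUSED : fault(command);
            case "resume"       -> (state == State.PAUSED) ? State.BUSY   : fault(command);
            case "stop", "done" -> (state == State.BUSY || state == State.PAUSED)
                                   ? State.IDLE : fault(command);
            default             -> fault(command);
        };
        return state;
    }

    private State fault(String command) {
        System.err.println("Illegal command '" + command + "' in state " + state);
        return State.FAULT;
    }

    public static void main(String[] args) {
        DeviceStateMachine fsm = new DeviceStateMachine();
        System.out.println(fsm.apply("online"));   // IDLE
        System.out.println(fsm.apply("start"));    // BUSY
        System.out.println(fsm.apply("pause"));    // PAUSED
        System.out.println(fsm.apply("resume"));   // BUSY
        System.out.println(fsm.apply("stop"));     // IDLE
    }
}
```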

  45. Configurations • All devices use configurations to transport information. • A configuration is a group of attributes and their corresponding values • e.g., a filter wheel might act upon: {position=red; rate=10; starttime=10:38:18}. • Values may be any native data type, arrays, and lists. • Configurations are followed throughout the system. • Each device action is traceable to a configuration. • Header information is reconstructed from configuration events. • Configurations have states: Created, Queued, Running, Done. • A configuration may be in multiple states in multiple devices. • States do not iterate. • The final state has an associated completion code.
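
The filter-wheel example above can be made concrete with a small sketch of a configuration as a uniquely named attribute set with a state. The Configuration class and the configuration-name format are assumptions made for illustration, not the ATST implementation.

```java
// Sketch of a configuration as a uniquely named set of attribute name/value pairs, using the
// filter-wheel example from this slide. The class and the id format are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class Configuration {
    enum ConfigState { CREATED, QUEUED, RUNNING, DONE }

    private final String id;                              // unique, permanently archived
    private final Map<String, Object> attributes = new LinkedHashMap<>();
    private ConfigState state = ConfigState.CREATED;

    Configuration(String id) { this.id = id; }

    Configuration set(String name, Object value) {
        attributes.put(name, value);
        return this;
    }

    void setState(ConfigState newState) { state = newState; }

    @Override public String toString() { return id + " " + state + " " + attributes; }

    public static void main(String[] args) {
        Configuration cfg = new Configuration("exp42.filterWheel.cfg.001")   // invented id
                .set("position", "red")
                .set("rate", 10)
                .set("starttime", "10:38:18");
        cfg.setState(ConfigState.QUEUED);
        System.out.println(cfg);   // exp42.filterWheel.cfg.001 QUEUED {position=red, rate=10, starttime=10:38:18}
    }
}
```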

  46. Inherited Devices • The Device class is an abstract class; it needs to be inherited by another class. • These classes specify information and operations unique to a particular device. They may create configuration templates, run background tasks, handle hardware control, and generate specific information. • For example, the MotorDevice inherits the Device class to operate servo motors and define positions, limits, power, and brake operations. • An extended class may itself be extended. • The DiscreteMotorDevice inherits the MotorDevice class to provide defined positions and name-to-position conversion.
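
The inheritance chain described above (Device, MotorDevice, DiscreteMotorDevice) might be sketched as follows. The method signatures, the named positions, and the demo class are all invented for illustration; only the class names and their relationship come from the slide.

```java
// Illustrative sketch of the inheritance chain named on this slide
// (Device -> MotorDevice -> DiscreteMotorDevice); the method bodies and values are stand-ins.
import java.util.Map;

abstract class Device {
    abstract void handleConfiguration(Map<String, Object> attributes);
}

class MotorDevice extends Device {
    @Override void handleConfiguration(Map<String, Object> attributes) {
        // A motor understands numeric positions (limits, power, and brakes are omitted here).
        moveTo(((Number) attributes.get("position")).doubleValue());
    }
    void moveTo(double position) {
        System.out.println("Moving motor to " + position);
    }
}

class DiscreteMotorDevice extends MotorDevice {
    private final Map<String, Double> namedPositions = Map.of("red", 120.0, "green", 240.0);

    @Override void handleConfiguration(Map<String, Object> attributes) {
        // Adds name-to-position conversion, then reuses the MotorDevice behavior.
        Object position = attributes.get("position");
        double target = (position instanceof String name)
                ? namedPositions.get(name)
                : ((Number) position).doubleValue();
        moveTo(target);
    }
}

public class InheritedDeviceDemo {
    public static void main(String[] args) {
        new DiscreteMotorDevice().handleConfiguration(Map.of("position", "red"));  // Moving motor to 120.0
    }
}
```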

  47. High-Level Devices • Some devices do not operate hardware; instead, they operate other devices or connect to high-level, non-device objects (OCS/DHS). • The SequenceDevice runs other devices in a defined order and phase. • The ControllerDevice executes scripts. • The MultiAxisDevice coordinates simultaneous actions. • High-level devices follow an aggregation pattern. • They do not override or hide the low-level devices. • They allow the low-level devices to operate independently.

  48. Command/Action Model • Commands cause external actions to occur. • A command returns immediately; the action begins in a separate thread. • Multiple commands can be given while actions are on-going. This allows us to “stop” an action, or queue up the next “start”. • Actions return asynchronous state information. • A device transitions to the BUSY state, then either back to the IDLE state or to the FAULT state. • (Diagram: the command process accepts a configuration and returns a response, then synchronizes through shared data with a separate action process that carries out the actions and reports completion.)
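
A minimal sketch of the command/action split follows: the command marks the device BUSY and returns at once, while the action runs on its own thread and reports completion asynchronously. The class, method, and configuration names are illustrative assumptions, not ATST code.

```java
// Sketch of the command/action model: the command returns immediately, the action runs in its
// own thread, and completion is reported asynchronously. All names here are illustrative.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CommandActionDemo {
    enum State { IDLE, BUSY, FAULT }

    private volatile State state = State.IDLE;
    private final ExecutorService actions = Executors.newSingleThreadExecutor();

    // The command: marks the device BUSY, starts the action, and returns without waiting.
    public void start(String configuration, Runnable onDone) {
        state = State.BUSY;
        actions.submit(() -> {
            try {
                Thread.sleep(100);          // stand-in for the real hardware action
                state = State.IDLE;         // action finished normally
            } catch (Exception e) {
                state = State.FAULT;        // action failed
            }
            onDone.run();                   // asynchronous completion notification
        });
    }

    public static void main(String[] args) {
        CommandActionDemo device = new CommandActionDemo();
        device.start("filterWheel.cfg.001",
                () -> System.out.println("action complete, state = " + device.state));
        System.out.println("command returned immediately, state = " + device.state);  // BUSY
        device.actions.shutdown();          // let the JVM exit once the queued action completes
    }
}
```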

  49. Peer-to-Peer Communications • Devices must have flexible connections. • Depending upon the requested operation, a device may need to communicate with several different devices. • Devices must not exclusively control another device. • Peer-to-peer communications allows a loose federation of devices. • No single points of failure outside of the communications system and the naming service. • Multiple federations can exist simultaneously in the system.
