
Horizon: The Simulation Framework Overview



  1. Horizon: The Simulation Framework Overview Prof. Eric A. Mehiel and the Horizon Team

  2. The System Simulation Problem
  • There are several existing ways to approach the problem of system-level requirements verification via system simulation:
  • MDD/MDO – varying system design parameters to reach a satisfactory (or optimal) design point
  • Process Integration for Design Exploration – products like ModelCenter network the various custom design data sources together with output visualization
  • Visualization Simulation – STK, FreeFlyer and SOAP are excellent for visualizing the behavior of systems and determining geometric access to targets

  3. What is the Horizon Simulation Framework?
  • The Framework is a library of integrated software tools supporting rapid system-of-systems modeling, simulation and utility analysis
  • The Framework provides an extensible and well-defined modeling capability, concurrent with a scheduling engine, to generate operational schedules of systems-of-systems with corresponding state data
  • As a complement to the Multi-Disciplinary Optimization (MDO) approach, the Framework answers the following question: does the current design meet system-level requirements that are based on a Use Case and cannot be verified by direct analysis?

  4. Why is Horizon Useful?
  • Fills the niche between generalized integration tools and specialized geometric-access and visualization tools
  • Can implement subsystem models, CONOPS requirements and Use Case scenarios while producing valid simulation output data
  • All subsystem and asset modules created within Horizon are modular
  • Allows system modeling at any level of fidelity, supporting the design process from conceptual design through CDR
  • Helps find design bottlenecks or leverage points hidden within the system design

  5. The Horizon Team
  • California Polytechnic State University, San Luis Obispo (Cal Poly)
  • Prof. Eric A. Mehiel – Aerospace Engineering Department
  • Current Students: Derek Seibel, Brian Butler, Seth Silva
  • Past Students: Cory O’Connor, Daniel Treachler, Travis Lockyer
  • Prof. John Clements – Computer Science Department
  • Einar Phersen
  • Cutting Edge Communications, LLC
  • Dave Burnett
  • Derek Wilis

  6. The Horizon Design Philosophy
  • Simply put, Horizon was designed to be useful and reusable
  • In software architecture design, interfaces are key!
  • Three guiding principles: Modularity, Flexibility, Utility
  [Diagram: simulation parameters and system parameters (input) feed the Horizon Simulation Framework, where the main scheduling algorithm drives the system model (subsystems) through the scheduler/system interface and interfaces between subsystems, producing the final schedule and state data (output).]

  7. The Horizon Design Philosophy: Modularity
  • Modularity increases simulation component value and simplifies extension
  • Two degrees of Horizon modularity:
  • Modularity between the scheduler and the system model
  • Modularity between subsystems inside the system model
  [Diagram: the framework block diagram, annotated to highlight the scheduler/system interface and the interfaces between subsystems.]

  8. The Horizon Design Philosophy: Flexibility
  • Enables comprehensive modeling and simulation capability
  • Two main degrees of flexibility:
  • Flexibility of fidelity – capable of simulating systems as simple or complex as the user desires
  • Flexibility of system – capable of simulating any system (satellites, aircraft, ground vehicles, troops, etc.); no preset vehicle or subsystem “types”
  [Diagram: the framework block diagram with the system model expanded to an arbitrary number of subsystems.]

  9. The Horizon Design Philosophy: Utility
  • Utility libraries promote rapid system modeling
  • Current utilities include:
  • Matrix class
  • Quaternion class
  • Coordinate rotations and transformations
  • Singular Value Decomposition
  • Eigenvalue/eigenvector algorithms
  • Runge-Kutta 4(5) integrator
  [Diagram: the framework block diagram with utility calls — svd(), eig(), Matrix, Quaternion — available to the subsystems.]

  10. The Horizon Software Architecture Version 1.2

  11. Architecture: The Fundamental Scheduling Elements
  • Four fundamental scheduling elements (a sketch of how they might fit together follows)
  • Task – The “objective” of each simulation time step. It consists of a target (location) and performance characteristics, such as the number of times it is allowed to be done during the simulation and the type of action required in performing that task.
  • State – The state-vector storage mechanism of the simulation. The state contains all the information about system state over time and contains a link to its chronological predecessor.
  • Event – The basic scheduling element, which consists of a task that is to be performed, the state that data is saved to when performing the task, and information on when the event begins and ends.
  • Schedule – Contains an initial state and the list of subsequent events. The primary output of the framework is a list of final schedules that are possible given the system.
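A minimal C++ sketch of how these four elements might relate. All names and members here are illustrative assumptions, not Horizon's actual declarations:

```cpp
#include <memory>
#include <vector>

struct Target { double lat, lon, alt; };     // a location to act on

// Task: the "objective" of a time step -- a target plus performance
// characteristics.
struct Task {
    Target target;
    int maxTimesPerformable;   // times it may be done per simulation
    int actionType;            // type of action required
};

// State: state-vector storage, linked to its chronological predecessor.
struct State {
    std::shared_ptr<State> previous;   // link to the prior state
    // ... time-tagged state variables live here ...
};

// Event: a task, the state written while performing it, and timing.
struct Event {
    Task task;
    std::shared_ptr<State> state;
    double eventStart, eventEnd;
};

// Schedule: an initial state plus the list of subsequent events.
struct Schedule {
    std::shared_ptr<State> initialState;
    std::vector<Event> events;
};
```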

  12. Architecture: The Main Algorithm
  [Flowchart: start at the simulation begin time; while the simulation end has not been reached, crop the schedule list if there are too many schedules, then for each old schedule that is not currently performing a task, attempt each task that has not already been performed too many times in that schedule; if the system can perform the task, add the new schedule to the list; increment the simulation current time and repeat; finally, output the resulting list of schedules.]

  13. Architecture: The Main Algorithm
  • Contains the interface between the main scheduling module and the main system simulation module
  • Guides the exhaustive search in discrete time steps and keeps track of the results
  • Essentially a call to the main system simulation routine inside a series of nested code loops, with checks to ensure that the schedules created meet certain criteria from the simulation parameters (see the sketch after this list)
  • The outermost loop is a forward-time progression stepping through each simulation time step
  • Avoids recursion, where subsystems “reconsider” their previous actions
  • Then it checks whether it needs to crop the master list of schedules (more on that on the next slide)
  • The innermost loop attempts to add new tasks onto each current schedule
  • Checks that the schedule is finished with its previous event at the current time step
  • Checks whether the task can be performed again
  • Checks whether the system can perform this combination of schedule and new task – the “system simulation” step, which adds state data to the state
  • If successful, creates a new event with the new task and state, and adds it to the end of a new schedule copied from the current one
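A sketch of that nested-loop structure, building on the illustrative structs above. The helper routines and the System type are trivial stubs standing in for the real framework calls; none of the names are Horizon's actual API:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

bool isPerformingTask(const Schedule&, double) { return false; }  // busy check stub
int  timesPerformed(const Schedule&, const Task&) { return 0; }   // usage-count stub
void cropSchedules(std::vector<Schedule>&) {}                     // see slide 20

struct System {   // stand-in for the system model
    bool canPerform(const Schedule&, const Task&, State&, double) { return true; }
};

void runScheduler(System& system, const std::vector<Task>& tasks,
                  std::vector<Schedule>& schedules,
                  double tStart, double tEnd, double tStep,
                  std::size_t maxSched) {
    for (double t = tStart; t <= tEnd; t += tStep) {       // outermost loop: forward time
        if (schedules.size() > maxSched)
            cropSchedules(schedules);                      // crop the master list if needed
        std::vector<Schedule> newSchedules;
        for (const Schedule& old : schedules) {            // each old schedule
            if (isPerformingTask(old, t)) continue;        // still busy with prior event?
            for (const Task& task : tasks) {               // innermost loop: each task
                if (timesPerformed(old, task) >= task.maxTimesPerformable)
                    continue;                              // task already used up
                auto state = std::make_shared<State>();    // state for the new event
                if (!system.canPerform(old, task, *state, t))
                    continue;                              // the "system simulation" step
                Schedule copy = old;                       // copy, then extend
                copy.events.push_back({task, state, t, t + tStep});
                newSchedules.push_back(copy);
            }
        }
        schedules.insert(schedules.end(),
                         newSchedules.begin(), newSchedules.end());
    }
}
```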

  14. Architecture: The Fundamental Modeling Elements
  • Four fundamental modeling elements
  • Constraint – A restriction placed on values within the state, plus the list of subsystems that must execute prior to the Boolean evaluation of the constraint’s satisfaction. Also the main functional system simulation block: to check whether a task can be completed, the scheduler checks that each constraint is satisfied, indirectly checking the subsystems.
  • Subsystem – The basic simulation element of the framework: a simulation element that creates state data and affects, either directly or indirectly, the ability to perform tasks.
  • Dependency – The limited interface allowed between subsystems. To preserve modularity, subsystems may only interact with each other through given interfaces. The dependencies specify what data is passed through and how it is organized: they collect similar data types from the contributing subsystems, convert them to a data type the receiving subsystem is interested in, and then provide access to that data.
  • System – A collection of subsystems, constraints, and dependencies that define the thing or things to be simulated, and the environment in which they operate.

  15. Architecture: The Constraint-Checking Cascade
  • The primary algorithm when checking whether a system can perform a task
  • Internal constraint process:
  • Subsystems that contribute state data to the qualifier are evaluated
  • The qualifier evaluates the validity of the state
  • The constraint fails if any subsystem or the qualifier fails
  [Diagram: a new task is checked against a constraint — subsystems 1 through N execute in turn, each passing or failing, then the qualifier evaluates the state data; any failure fails the constraint, otherwise it passes.]

  16. Architecture: The Constraint-Checking Cascade
  • Constraint-Checking Cascade:
  • Constraints are checked in user-specified order, contributing subsystem data to the state as they execute
  • The remaining subsystems not needed to evaluate a constraint are then checked
  • If any of the checks fail, no event is added to the schedule and the state is discarded
  • If all of the checks succeed, the task and state are used to create a new event, which is added to the end of the schedule
  • A “fail-fast” constraint methodology (sketched below)
  [Diagram: a new task passes through constraints 1 through N in order, then the remaining subsystems; only if every check passes does the scheduler add the new event to a possible schedule.]
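An illustrative fail-fast cascade, echoing the structs sketched earlier. All names are assumptions, not Horizon's declarations:

```cpp
#include <functional>
#include <vector>

struct Subsystem {
    virtual ~Subsystem() = default;
    // Executes the subsystem model, writing its data into the state.
    virtual bool canPerform(State& state, const Task& task) = 0;
};

struct Constraint {
    std::vector<Subsystem*> required;             // run these first...
    std::function<bool(const State&)> qualifier;  // ...then judge the state
};

// Returns true only if every constraint (and every leftover subsystem)
// passes; bails out at the first failure ("fail-fast").
bool systemCanPerform(const std::vector<Constraint>& constraints,
                      const std::vector<Subsystem*>& remaining,
                      State& state, const Task& task) {
    for (const Constraint& c : constraints) {     // user-specified order
        for (Subsystem* s : c.required)
            if (!s->canPerform(state, task)) return false;
        if (!c.qualifier(state)) return false;    // qualifier on state data
    }
    for (Subsystem* s : remaining)                // subsystems not tied to
        if (!s->canPerform(state, task)) return false;   // any constraint
    return true;   // task + state now become a new Event on the schedule
}
```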

  17. Architecture: Subsystems
  • Subsystems are state transition objects
  • They describe how the subsystem goes from its old state to its new state in performing the task
  • Inputs:
  • Old state of the subsystem
  • Task to be performed
  • Environment to perform it in
  • Position of their asset
  • CanPerform() is the main execution method; its code describes the governing equations of the subsystem (hypothetical example below)
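A hypothetical concrete subsystem: a solid-state recorder whose only governing equation is "the buffer grows by the data an imaging task generates." The getLastValue/addValue accessors and the IMAGING constant are assumptions for illustration, not Horizon's actual interface:

```cpp
const int IMAGING = 1;   // assumed action-type constant

struct DataRecorderSub : Subsystem {
    double capacityMb = 1000.0;
    bool canPerform(State& state, const Task& task) override {
        double oldBuf = state.getLastValue("buffer_mb");   // old state in
        double made = (task.actionType == IMAGING) ? 50.0 : 0.0;
        double newBuf = oldBuf + made;                     // state transition
        if (newBuf > 0.7 * capacityMb) return false;       // 70% capacity rule
        state.addValue("buffer_mb", newBuf);               // new state out
        return true;
    }
};
```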

  18. Architecture: Dependencies
  • Dependencies are the interpreters between subsystems
  • Example: the Power subsystem depends on the ADCS subsystem for the solar panels’ power input, due to the panels’ incidence angle to the sun vector
  • ADCS is only interested in orientation
  • Power is only interested in how much power other subsystems generated
  • The dependency function translates the orientation of the spacecraft into how much power the solar panels generate (a sketch follows)
  • Dependencies are structured as they are to avoid “subsystem creep,” where information about, and functions from, each subsystem slowly migrate into the other subsystems
  • An evolutionary dead-end in simulation frameworks
  • Against the tenets of object-oriented programming
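A hypothetical dependency function for the ADCS-to-Power example: it reads the sun-incidence angle the ADCS posted to the state and converts it into the quantity Power actually wants, generated watts. All names and numbers are illustrative assumptions:

```cpp
#include <cmath>

double solarPowerDependency(const State& state, double simTime) {
    // Angle between panel normal and sun vector, as recorded by ADCS.
    double incidence = state.getValueAtTime("sun_incidence_rad", simTime);
    const double panelArea  = 2.0;     // m^2 (assumed)
    const double efficiency = 0.28;    // assumed cell efficiency
    const double solarFlux  = 1367.0;  // W/m^2 at 1 AU
    double watts = solarFlux * panelArea * efficiency * std::cos(incidence);
    return watts > 0.0 ? watts : 0.0;  // no power past 90 degrees
}
```

Note how the interface keeps the subsystems decoupled: Power calls the dependency, and never touches the ADCS model directly.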

  19. Architecture: The System State
  • State is unique to each event
  • All the data generated over the course of the event is stored in its corresponding state
  • Storage works like a bulletin board (sketched below):
  • Only changes from previously recorded values are posted
  • The most recent saved value of a variable is also its current value
  • Many objects have access to the state, including subsystems, constraints, dependencies, data output classes and schedule evaluation functions
  [Figure: example state contents for the Power subsystem over one event — posted (time, watts) pairs of (0 s, 0 W), (0.5 s, 300 W), (0.85 s, 270 W), (1.5 s, 500 W) — shown both as a table and as a step plot from event start to event end.]
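A minimal sketch of the bulletin-board idea: values are posted only when they change, keyed by time, and the latest posting is the current value. Horizon's real State is richer than this; the class below is illustrative only:

```cpp
#include <map>
#include <string>

class StateSketch {
    std::map<std::string, std::map<double, double>> board;
public:
    void addValue(const std::string& key, double t, double v) {
        board[key][t] = v;                      // post only the change
    }
    double getLastValue(const std::string& key) const {
        return board.at(key).rbegin()->second;  // latest posting == current value
    }
};

// Usage matching the Power example above:
//   s.addValue("watts", 0.0, 0);   s.addValue("watts", 0.5, 300);
//   s.addValue("watts", 0.85, 270); s.addValue("watts", 1.5, 500);
//   s.getLastValue("watts");   // 500
```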

  20. Architecture: Schedule Evaluation and Cropping
  • The scheduler attempts to create new schedules by adding each task (in the form of an event) to the end of each schedule in the master list from the previous simulation time step
  • The number of possible schedules grows too quickly during a simulation to keep every possible schedule
  • When the number of schedules exceeds a simulation parameter (maxSched), the scheduler rates them with a user-defined “value function” and keeps only a user-defined number (schedCropTo) of schedules (see the sketch below)
  • This changes the basic scheduler from an exhaustive search into a “semi-greedy” algorithm
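A sketch of cropping under those two parameters. scheduleValue() stands in for the user-supplied value function (here, a naive event count), and Schedule is the struct sketched earlier:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

double scheduleValue(const Schedule& s) {        // stand-in value function
    return static_cast<double>(s.events.size()); // more events == better
}

void cropSchedules(std::vector<Schedule>& schedules,
                   std::size_t maxSched, std::size_t schedCropTo) {
    if (schedules.size() <= maxSched) return;    // under the cap: keep all
    std::sort(schedules.begin(), schedules.end(),
              [](const Schedule& a, const Schedule& b) {
                  return scheduleValue(a) > scheduleValue(b);  // best first
              });
    schedules.resize(schedCropTo);               // drop the rest
}
```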

  21. Horizon 1.2 Runtime Analysis

  22. Horizon 1.2 Parametric Runtime Analysis
  • D – target deck size
  • Smax – maximum number of schedules allowed before cropping
  • n – number of time steps in the simulation
  • Tsys – mean system execution time
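The slide's order equation was an image and did not survive transcription. A plausible reconstruction from these definitions, assuming each time step tries each of the D tasks against each of at most Smax retained schedules, is:

```latex
T_{\text{run}} = O\!\left( n \, S_{\max} \, D \, \bar{T}_{\text{sys}} \right)
```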

  23. Aeolus: The Horizon Framework Baseline Test Case (Horizon Version 1.2)

  24. Aeolus Mission Concept
  • Aeolus: the Greek god of wind
  • Extreme-weather imaging satellite
  • Circular, 1000 km, 35-degree inclined orbit
  • Simulation date: August 1, 2008, for 3 revolutions
  • Targets clustered into high-risk areas, including Southeast Asia and the Gulf of Mexico
  • The sensor can generate data while in eclipse

  25. Aeolus System Model
  • Subsystems
  • Access – generates access windows for different types of tasks
  • Attitude Dynamics and Control System – orients the spacecraft for imaging
  • Electro-Optical Sensor – captures and compresses images when it has access to an imaging target, and sends data to the Solid-State Data Recorder
  • Solid-State Data Recorder – holds imagery data before it is sent down to a ground station
  • Communications System – transmits imagery data when it has access to a ground station
  • Power – collects power usage information from the other subsystems, and calculates solar panel power generation and battery depth of discharge
  • Constraints
  • During imaging, no information can be sent to ground stations
  • The data recorder cannot store more than 70% of its capacity
  • The depth of discharge of the batteries cannot exceed 25%

  26. Aeolus Simulation Results: Power/Data
  [Figure: four panels vs. simulation time (0–15000 s), with event starts marked — buffer usage (%), battery DOD (%), downlink data rate (Mb/s), and generated solar panel power (W).]

  27. Other Test Cases
  • Developed
  • LongView – a micro-class, space-based telescope for K-12 and college educational use
  • Proposed
  • PolySat 3 and 4 post-launch simulation for received-telemetry data trending and analysis

  28. The Horizon Simulation Framework Version 2.0

  29. The Horizon Simulation Framework Version 2.0 Drivers
  • Several functional and architectural problems with Version 1.2:
  • Not a natively multi-asset simulation framework
  • Subsystems had no hierarchical information
  • StateVar objects subverted C++ type-checking, opening the user to data-corrupting errors
  • Reading from and writing to the state was difficult, with single-value input and output

  30. Version 2.0 Architecture Changes: SubsystemNodes
  • SubsystemNodes solve the problem of subsystems having no hierarchical information
  • Adjacency-list hierarchies, like those from the Boost library, specify chronological predecessors
  • Version 2.0 implements this network structure to confirm that previous subsystems have already run
  • SubsystemNodes point to the SubsystemNodes they depend on, as well as to the Subsystem they represent
  • An added benefit: multiple SubsystemNodes can point to the same Subsystem
  • No circular dependencies allowed!
  • Order of execution for the example graph (a code sketch of the recursion follows): Sub 10 is called; Sub 10 recurses to Sub 8; Sub 8 recurses to Sub 6; Sub 6 recurses to Sub 1; Sub 1 executes; Sub 6 recurses to Sub 2; Sub 2 executes; Sub 6 executes; Sub 8 executes; Sub 10 recurses to Sub 9; Sub 9 recurses to Sub 3; Sub 3 executes; Sub 9 executes; Sub 10 executes
  [Diagram: a dependency graph of SubsystemNodes 1–12, with node 10 as the subsystem of interest.]
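A sketch of that dependency-first recursion: a node may execute only after every node it points to has executed. Names are illustrative, not Horizon's SubsystemNode interface:

```cpp
#include <vector>

struct SubsystemNode {
    std::vector<SubsystemNode*> dependsOn;  // adjacency list (acyclic!)
    bool hasRun = false;

    bool execute() { return true; }  // stand-in for the wrapped Subsystem

    // Depth-first: recurse into unfinished predecessors, then run self.
    bool check() {
        for (SubsystemNode* pred : dependsOn)
            if (!pred->hasRun && !pred->check())
                return false;                 // a predecessor failed
        if (!hasRun) {
            if (!execute()) return false;     // run the wrapped Subsystem
            hasRun = true;                    // never re-run this time step
        }
        return true;
    }
};
```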

  31. Version 2.0 Architecture Changes: Assets
  • To task multiple assets, Asset objects were created
  • An Asset is defined as any actor that has subsystems as members and knowable motion
  • Functionally, an asset is an attribute of a SubsystemNode, because it just specifies involvement in a task and a position
  • Assets can depend on one another by having their involved SubsystemNodes depend on one another
  [Diagram: SubsystemNodes 1–12 grouped into Assets 1, 2 and 3.]

  32. Version 2.0 Architecture Changes: SystemSchedules and AssetSchedules
  • In Version 1.2, a schedule was a series of events where the system performed a task
  • In 2.0, having multiple assets requires the ability to task each asset independently
  • Each asset must therefore have its own schedule, called an AssetSchedule
  • AssetSchedules hold a list of events and an initial state
  • The whole system must have a unique schedule, now called a SystemSchedule
  • SystemSchedules hold a list of AssetSchedules
  [Diagram: a SystemSchedule containing AssetSchedules 1 through n, each an initial state followed by its own events along the simulation time axis.]

  33. Version 2.0 Architecture Changes: Scheduling Changes
  • In Version 2.0 it does not make sense to require that each asset be able to perform the same task, or that each asset must perform a task at all
  • Instead, at each simulation time step, at least one of the assets must be able to start a new task in order to create a new SystemSchedule
  • All the combinations of assets performing each of the tasks are created for each old schedule
  • One serious implication: assets that are not scheduled must extend their previous events further, in a “cruise mode”
  • Handled by the canExtend() function (sketched below)
  • It simply checks that, given nominal operation of a system in its current environment, the system can continue to pass the system’s constraints and the subsystems’ requirements
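A sketch of that cruise-mode check: an unscheduled asset's current event is stretched by one time step if nominal operation still satisfies the constraints. The AssetSchedule layout follows slide 32; the Environment type and nominal-operation check are placeholders, not Horizon's API:

```cpp
#include <memory>
#include <vector>

struct AssetSchedule {                 // per slide 32: initial state + events
    std::shared_ptr<State> initialState;
    std::vector<Event> events;
};

struct Environment { /* sun position, ephemerides, ... */ };

bool nominalOperationOk(const AssetSchedule&, const Environment&, double) {
    return true;   // stand-in for re-running constraints and subsystems
}

bool canExtend(AssetSchedule& sched, const Environment& env, double newEnd) {
    if (sched.events.empty()) return false;
    if (!nominalOperationOk(sched, env, newEnd))
        return false;                         // cruising would violate a constraint
    sched.events.back().eventEnd = newEnd;    // stretch the previous event
    return true;
}
```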

  34. Version 2.0 Architecture Changes: States and Profiles
  • Previously, retrieving incorrect data was possible given a relatively common user error
  • Something type-safe was needed at compile time to let the user know they are asking for the wrong data type
  • The State now contains vectors of maps of Profiles of different types
  • Profile is a templated class
  • All access to the State is done by setting and retrieving Profiles
  • Two main benefits (sketched below):
  • Profile has mathematical methods for extremely common tasks, reducing modeling time significantly (50–90% in tests)
  • Profiles and StateVarKeys (the keys used to store and retrieve values) are both templated, so specifying a Profile return type that is incompatible with the StateVarKey type when looking up a variable causes a compiler error
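A minimal sketch of the type-safety idea: the key and the profile are both templated on the variable's type, so asking for the wrong type no longer compiles. Illustrative only, not Horizon's declarations:

```cpp
#include <map>
#include <string>

template <typename T>
struct StateVarKey {
    std::string name;      // names the state variable, carries its type
};

template <typename T>
class Profile {
    std::map<double, T> data;   // time-ordered (time, value) pairs
public:
    void add(double t, const T& v) { data[t] = v; }
    T lastValue() const { return data.rbegin()->second; }
    // Mathematical helpers (sum, integral, min/max, ...) would live here.
};

// Lookup sketch: the return type must match the key's type.
template <typename T>
Profile<T> getProfile(const StateVarKey<T>& key);

// StateVarKey<double> dodKey{"battery_dod"};
// Profile<double> ok  = getProfile(dodKey);   // fine
// Profile<int>    bad = getProfile(dodKey);   // compiler error, as desired
```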

  35. Horizon 2.0 Runtime Analysis

  36. Horizon 2.0 Parametric Runtime Order Equation
  • Theoretical, because the algorithm did not significantly change
  • The number of possible tasks changes to the number of combinations of assets and tasks
  • An access pre-generation algorithm (not included in the thesis) is currently being added before the version freeze
  • Parametric evaluations done before this addition would be invalid
  • D – target deck size
  • A – number of assets
  • Smax – maximum number of schedules allowed before cropping
  • n – number of time steps in the simulation
  • Tsys – mean system execution time
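This equation was likewise an image. One plausible form, assuming the only change from the 1.2 expression is that the D tasks become combinations of A assets each either taking one of the D tasks or cruising (with at least one asset tasked, per slide 33), is:

```latex
T_{\text{run}} = O\!\left( n \, S_{\max} \left( (D+1)^{A} - 1 \right) \bar{T}_{\text{sys}} \right)
```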

  37. The Aeolus Constellation: The Horizon Framework Multi-Asset Baseline Test Case (Horizon Version 2.0)

  38. The Aeolus Constellation: Implementation
  • Two assets were created, and the constituent SubsystemNodes were duplicated from the same subsystems found in the previous test case
  • Constraints and dependencies were changed to use the accessors from the new Profile class
  • At heart, the modeled system is identical, albeit with the number of assets doubled
  • The first asset kept the original asset’s orbit
  • The second asset was initialized with the same orbital parameters, except the RAAN was rotated 180 degrees
  • The assets get better ground-track coverage of targets in one revolution

  39. The Aeolus Constellation: Results

  40. The Aeolus Constellation: Results

  41. The Aeolus Constellation: Results

  42. The Aeolus Constellation: Results

  43. Horizon Conclusions

  44. Horizon 2.0 Strengths and Weaknesses
  • Ultimately, the modeling mantra is still true: “the better the model, the better the output”
  • It is incumbent on the modeler to create an accurate system to simulate, in order to create data that provides value to the analyst
  • Horizon is capable of producing useful data given relatively simple subsystem models
  • Four main points concerning software design and architecture:
  • Highly variable degrees of fidelity (GOOD)
  • Easy access to state data (GOOD)
  • No GUI (BAD)
  • Model creation is complex (BAD)

  45. Future Plans
  • Genetic Algorithm for Schedule Generation
  • Dependency Integration into SubsystemNode
  • Drag-and-Drop Simulation Creation GUI
  • Automatic “Sanity-Checking” for User-Specified Code
  • Parallelization
  • LVLH Coordinate System Support
  • Matrix Templatization
  • Error Recording
  • Module Library Creation
  • “Just Flying Along” Mode
