Computer Systems Lab TJHSST Current Projects 2004-2005 Third Period

A Case Study: A Self-Propagating Continuous Differential System as Limiting Discrete Numerical Construct and Device of Sorting (Thomas Mildorf)

Current Projects, 3rd Period

- Robert Brady: A Naive Treatment of Digital Sentience in a Stochastic Game
- Blake Bryce Bredehoft: Robot World: An Evolution Simulation
- Michael Feinberg: Computer Vision: Edge Detection (vertical difference, Roberts, Sobel)


Current Projects, 3rd Period

- Scott Hyndman: Agent Modeling and Optimization of a Traffic Signal
- Greg Maslov: Machine Intelligence Walking Robot
- Eugene Mesh:
- Thomas Mildorf: A Self-Propagating Continuous Differential System as Limiting Discrete Numerical Construct and Device of Sorting


Current Projects, 3rd Period

- Carey Russell: Graphical Modeling of Atmospheric Change
- Matthew Thompson: Genetic Algorithms and Music
- Justin Winkler: Modeling a Saturnian Moon


Developing a Learning Agent

The goal of this project was to create a learning agent for the game of bridge. My current agent, which knows the rules, plays legally, and finds some basic good plays, is a step in the right direction. This agent will be improved upon over the course of the year, becoming smarter and learning faster as the year goes on.


My tech-lab project deals with the field of Artificial Intelligence, or more specifically, Machine Learning. I am designing an agent and environment for the card game of bridge. After it learns the rules, I will run simulations where it decides on its own what the best play is. The agent's level of play will increase as the year continues, because it will look up past decisions in its history to determine the best bid or play in the current state of the environment.

A Naive Treatment of Digital Sentience in a Stochastic Game (Robert Brady)

Machine learning has been researched in the past and has dealt with bridge before. This area is new, though, and anything from intelligent agents for games to the traveling salesman problem counts as part of it. An algorithm used for one problem can be applied in a similar manner to another, such as the minimax algorithm or backtracking search. To build on current work, there would have to be some sort of improvement on current bridge-playing agents such as Bridge Baron or GIB. Both of these programs play at a moderate level, but neither can compare to an expert player. The reason an intelligent bridge-playing agent has been hard to program in the past is that bridge is a partially observable environment.

In games such as chess or checkers, the agent could conceivably come up with the best solution (given enough time to think about it) because it knows where everything is. In bridge, there are certain cards that haven't been played yet, and although you may be able to guess where they are, you cannot determine this with 100% certainty. This makes programming a learning agent for a partially observable environment much harder.

Progress: For the first semester, I worked on finishing programming in the rules of the game and some simple AI commands.

When this was completed, about a week before the semester was over, I began researching different AI algorithms that could be implemented for searching. This research halted once I realized the tree to be searched was difficult to construct. I consulted my professional contact Fred Gitelman, who had also encountered this problem when programming a similar search algorithm. He talked me through the problems I had and gave some advice on where to find information that would help. During the third quarter, I worked solely on running an algorithm (the minimax algorithm) through a depth-first search with pruning of bad nodes.
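A depth-first minimax search with pruning of bad nodes, as described above, is commonly implemented as alpha-beta pruning. The sketch below is illustrative only, not the project's code; the `children` and `value` functions are hypothetical stand-ins for a real game tree.

```python
def minimax(node, depth, alpha, beta, maximizing, children, value):
    """Depth-first minimax with alpha-beta pruning of bad nodes.

    `children(node)` and `value(node)` are caller-supplied functions;
    leaves are nodes with no children or nodes reached at depth 0.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, minimax(child, depth - 1, alpha, beta,
                                     False, children, value))
            alpha = max(alpha, best)
            if beta <= alpha:   # prune: the opponent will never allow this branch
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, minimax(child, depth - 1, alpha, beta,
                                     True, children, value))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best
```

The pruning is what makes the depth-first search tractable: whole subtrees are skipped once they provably cannot affect the result.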

The algorithm still has a few problems and will hopefully be finished soon. Another portion of the code that I added this quarter deals with the machine learning part of my project. This part stores information from the hand that the computer just played in a file that is essentially the computer's "brain." The brain stores information about how many tricks were taken with the combined hands in a trump suit or in no-trump, and it uses this information for the bidding stage of the following hands. If the numbers it reads from the file are much lower than what it believes the current state of the environment is, it will bid higher; if the numbers are higher, it will try to refrain from bidding.

My future plans include testing the program against other players at the school. I will use students from my tech-lab class for a preliminary test, and after the program has established a competitive nature with them, it will play against the bridge club on Fridays during school. I hope it will be able to compete with the students of the club at some point in the next quarter, but if this is an unrealistic goal, I will just try to improve its algorithm as much as I possibly can before the end of the year. As it stands, the program needs a little more work on the algorithm to make it fully operational before I open it up to tests from fellow students.

Code: This section is pretty much self-explanatory. Some important sections are in bold and commented, while the less important parts have been left out.

References:

1. Fred Gitelman (fred@bridgebase.com), a programmer who is also an expert bridge player. He advised me on how to look through a tree to find the solution, comparable to a minimax search.

2. Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, Second Edition. Prentice Hall, NJ, 2003.

Robot Swarms

My project is an agent-based simulation, posing robots in a "game of life". With each new generation of robots come new genes, produced by a random number selection process that creates the mutations and evolution that in real life we experience through DNA crossover.


There are two versions of my program: a Simulation and a Game.

Robot World: An Evolution Simulation (Blake Bryce Bredehoft)

The Simulation: As stated in my abstract.

The Process: The base of my program was not the genetics but the graphics itself. First one robot, then a random Artificial Intelligence for it. I then modified the world to sustain several robots, with dying and breeding. I then introduced two more Artificial Intelligences, a "group" Artificial Intelligence and a "battery" Artificial Intelligence, promoting grouping and collecting batteries selectively. I then tweaked the environment and code until it could sustain life. I programmed in several possible places for environmental interaction, like viruses and the batteries, and added the graphical output for easier analysis. Finally I created the random number selection process to splice the genes of the parents and create a child: the heart of the program.

There is the above stated "simulation" version of my program, and a later created "game" version. The game version includes all the same components as the simulation but also has a user-controlled robot with "laser eyes" and "grenade launchers" used to kill the other robots. It also includes "Bosses".

The Process: Taking the simulation version and modifying it was easy: first removing the natural deaths and the graph, then introducing and tweaking the user controls and the user-controlled robot. After adding lasers and grenades and all the necessary coding, I finally added a status display embedded in the top left corner. I continually add new pieces of flair to the program, such as bosses.

My project has several basic components: 3D modeling, Artificial Intelligence, the selection process, and basic game theory. All these components form the amalgam that makes up my project. Both the simulation and the game use all these components, except that the simulation doesn't have any game theory.

2 Background

2.1 Monte Carlo Simulation

Since Monte Carlo is a Simulation technique, let's first define exactly what we mean by Simulation. A true Simulation will merely describe a system, not optimize it! (However, it should be noted that a true simulation may be modified in a manner such that it can be used to significantly enhance the efficiency of a system.) Therefore, our primary goal in Simulation is to build an experimental model that will accurately and precisely describe the real system. However, the breadth and extent of Simulation models is extensive!

This can be illustrated by considering the three general "classifications" of Simulation Models below. In each of these "classifications", I have defined two possible "characteristics".

1. Functional Classification
- Deterministic Characteristic: These are "exact" models that will produce the same outcome each time they are run.
- Stochastic Characteristic: These models include some "randomness" that may produce a different outcome each time they are run. This randomness forces us to make a large number of runs to develop a "trend" in our "collection" of outcomes. Further, exactly how many "runs" you must make to obtain the "right trend" is simply a matter of statistics.

2. Time Dependence
- Static Characteristic: These models are not time-dependent. This even includes the calculation of a specific variable after a fixed period of time.
- Dynamic Characteristic: These models depict the change in a system over many time intervals during the calculation process.

3. Input Data
- Discrete Characteristic: The input data form a discrete frequency distribution. Discrete frequency distributions are characterized by the random variable X taking on an enumerable number of values x_i that each have a corresponding frequency, or count, p_i.
- Continuous Characteristic: The input data can be described by a continuous frequency distribution. Continuous frequency distributions are characterized by a continuous analytical function of the form y = f(x), where y is defined as the frequency of x. This definition is valid for all possible values of x (over the domain of the function).

We can now say that Monte Carlo Simulations are "True Stochastic Simulations" in that they describe the "final state" of a model by knowing just the frequency distributions of the parameters describing the "beginning state" and the appropriate metric that maps, or transforms, the beginning state to the final state. They can also be either static (easy) or dynamic (more difficult). If a prediction were required, then "every possible" option would have to be considered, and this is where the well-known "Variance Reduction Methods" (antithetic variables, correlated sampling, geometry splitting, source biasing, etc.) would be used to reduce the number of iterations required in the simulation.

Definition courtesy of James F. Wright, Ph.D., at http://www.drjfwright.com/
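The "true stochastic simulation" idea above can be made concrete with a minimal Monte Carlo run: the beginning state is a frequency distribution (uniform points in the unit square), the metric maps each sample to a final state (inside or outside the quarter circle), and many runs develop a trend, here an estimate of pi. This is a generic textbook illustration, not code from the project.

```python
import random

def monte_carlo_pi(runs, seed=0):
    """Stochastic simulation: repeated random runs develop a trend
    (an estimate of pi) rather than an exact deterministic answer."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        x, y = rng.random(), rng.random()  # beginning state: uniform square
        if x * x + y * y <= 1.0:           # metric: inside the quarter circle?
            hits += 1
    return 4.0 * hits / runs
```

With a fixed seed the run is reproducible, and increasing `runs` tightens the trend, exactly the "number of runs is a matter of statistics" point above.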

3.1 3D Modeling

My graphics are done using OpenGL. In the simulation version there are three different aspects to the graphical output: the agents (the robots), the environment (the floor and batteries), and the events (explosions), plus the population graph. The game version has all these same components, except the agents include a user-controlled robot and bosses, the events include grenades, lasers, and grenade explosions, there is no graph, and there is a stat indicator and a mini map. There is also the interface of my program, from where you launch the simulations and games.

The environment consists of a floor and background and small cubes representing batteries. Simple enough (below right). The robots consist of prisms and spheres that form arms, legs, torso, head, and facial features (below left). The explosions are tori spinning on the y-axis that get more transparent as they grow in size (below center). There is also a program that is able to output numbers for the counters in the game version.

See appendix A.1 for example code.

There is also a graph output that is fairly simple. Every iteration it plots a new point on the grid and never erases, thereby creating a line graph of the populations of each artificial intelligence type (below left). The game mode utilizes a mini map and a status bar; the bar includes life, number of grenades, and number of kills (bottom right).

3.2 Artificial Intelligence

There are three different main artificial intelligences in the program and one that uses a combination of the others. The first is a random artificial intelligence, the most basic; second is the group artificial intelligence, which encourages forming groups for reproduction; and the last is the battery, or food, artificial intelligence, which promotes eating. The fourth is more advanced: the agent uses the battery artificial intelligence when it requires energy and the group artificial intelligence when it doesn't. The random artificial intelligence first randomly decides whether to turn left, turn right, or go forward.

It has a preference to turn if it turned the iteration before. Over time this produces interesting behavior. The fact that offspring spawn close to parents, and that parents have to be close to produce offspring, means that after a while the robots will group, and any robot that randomly strays from the group will die off, whereas those in the group respawn as fast as they die.

Code is located in Appendix A.2.
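The random artificial intelligence above, with its preference to keep turning after a turn, can be sketched roughly as follows; the bias weights are assumptions chosen for illustration, not values from the project.

```python
import random

def random_ai_step(rng, turned_last):
    """Randomly choose left, right, or forward, with a preference to
    turn again if the robot turned on the previous iteration."""
    turn_bias = 0.6 if turned_last else 0.3  # assumed weights, not the project's
    if rng.random() < turn_bias:
        return rng.choice(["left", "right"])
    return "forward"
```

Iterating this rule makes a robot wander in loose loops rather than straight lines, which is what keeps randomly-driven robots near their spawn group.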

The group artificial intelligence goes through the list of robots and first recognizes all the robots that are of a color suitable for reproduction with the given robot. From this list the closest robot is chosen; the robot then turns toward it and walks. This obviously forms groups that are more efficient than the groups produced by the random artificial intelligence, because robots will not stray off.

Code is located in Appendix A.2.

The battery artificial intelligence goes through the list of batteries, finds the one closest to the robot, and then turns the robot toward it and walks. Robots will end up heading after the same battery and form groups. These groups may or may not be able to reproduce, but when a pair find each other, the groups prove to be fairly strong.

Code is located in Appendix A.2.
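Both the group AI and the battery AI reduce to the same primitive: scan a list of candidates (mates or batteries), pick the nearest, and turn toward it. A minimal sketch of that primitive, not the project's code:

```python
import math

def nearest(position, candidates):
    """Return the candidate position closest to `position`,
    or None if the candidate list is empty."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: math.dist(position, c))

def heading_toward(position, target):
    """Angle (radians) the robot should face to walk at the target."""
    return math.atan2(target[1] - position[1], target[0] - position[0])
```

The group AI would call `nearest` on reproduction-compatible robots, the battery AI on the battery list; only the candidate set differs.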

The strongest group is an amalgamation of both group artificial intelligent robots and battery artificial intelligent robots. These groups stay together because of the group robots and search for food because of the battery robots, proving extremely effective at staying alive. Sooner or later, however, one of the artificial intelligences ends up getting bred out. There is also a fourth artificial intelligence that combines the group and battery artificial intelligences within one agent: when the agent is low on energy it uses the battery artificial intelligence, and once it has a decent amount, it uses the group artificial intelligence.

3.3 The Selection Process

The selection process doesn't start with selecting but instead starts at the beginning of every iteration. The process begins by inventorying the population, tallying the number of robots and their characteristics; this produces a few tables stored as a "genome" (code for the class in Appendix A.3). These tables are formed to make graphs of the frequency of the specific gene settings. When the selection is called, it goes through and finds the optimal gene in the parents' gene pool, using a random number process and the aforementioned graphs. This is done for each gene. There is also a level of randomness calculated in that allows for mutations. These mutations give a new status to the gene, one not in the gene pool.

Code can be seen in Appendix A.3.
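The selection step described above (frequency tables per gene, a weighted random pick from the parents' pool, and an occasional mutation outside the pool) might look like this sketch. The mutation rate and value range are illustrative assumptions, not the project's parameters.

```python
import random

def select_gene(frequency_table, rng, mutation_rate=0.05, mutation_range=(0, 9)):
    """Pick a gene value weighted by its frequency in the parents' gene
    pool, with a small chance of a mutation outside the pool."""
    if rng.random() < mutation_rate:
        return rng.randint(*mutation_range)  # mutation: a value not in the pool
    values = list(frequency_table)
    weights = [frequency_table[v] for v in values]
    return rng.choices(values, weights=weights, k=1)[0]

def breed(parent_tables, rng):
    """Build a child genome by running the selection once per gene."""
    return [select_gene(table, rng) for table in parent_tables]
```

Because each gene is drawn from its own frequency table, common traits dominate offspring while rare traits (and mutations) keep the pool from collapsing.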

While the theory behind this selection process may seem somewhat simple, the code, on the other hand, is not.

3.4 Game Theory

There are lasers and grenades: your tools for destroying the surrounding robot population. Combine their power by shooting a grenade with your laser as it falls to produce a powerful explosion. After every 50 kills, boss robots appear: one after the first 50, two after the second 50, and so on. The bosses use a boss artificial intelligence of their own, which finds the user-controlled robot, turns toward it, and walks.

Computer Vision: Edge Detection (vertical difference, Roberts, Sobel) (Michael Feinberg)

Abstract and paper needed.
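The slide names three edge detectors (vertical difference, Roberts, Sobel) but no further content survives. As a generic illustration of one of them, here is a textbook Sobel gradient-magnitude pass over a grayscale image stored as a list of rows; this is not the project's code.

```python
def sobel_magnitude(image):
    """Apply the Sobel operator to a grayscale image (list of rows) and
    return the gradient magnitude at each interior pixel; the one-pixel
    border is left at zero."""
    gx_kernel = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    gy_kernel = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    pixel = image[y + dy][x + dx]
                    gx += gx_kernel[dy + 1][dx + 1] * pixel
                    gy += gy_kernel[dy + 1][dx + 1] * pixel
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical step edge produces a strong response in the columns straddling the step and zero response in flat regions, which is the behavior an edge detector is judged on.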

Optimization of a Traffic Signal

The purpose of this project is to produce an intelligent transport system (ITS) that controls a traffic signal in order to achieve maximum traffic throughput at the intersection. To produce an accurate model of the traffic flow, it is necessary to have each car be an autonomous agent with its own driving behavior. A learning agent will be used to optimize a traffic signal for the traffic of the autonomous cars.


Traffic in the Washington, D.C. area is known to be some of the worst in the nation. Optimizing traffic signal changes at intersections would help traffic on our roads flow better. This project produces an intelligent transport system (ITS) that controls a traffic signal in order to achieve maximum traffic throughput at the intersection. In order to produce an accurate model of the traffic flow through an intersection, it is necessary to have each car be an autonomous agent with its own driving behavior.

Agent Modeling and Optimization of a Traffic Signal (Scott Hyndman)

The cars cannot all drive the same because all the drivers on our roads do not drive the same. A learning agent is used to optimize a traffic signal for the traffic of the autonomous cars. Note: The results and conclusion pieces of the abstract are not included yet because the project is not finished.


1.1 Traffic Signal Control Strategies

There are three main traffic signal control strategies: pretimed control, actuated control, and adaptive control.

1.1.1 Pretimed Control

Pretimed control is the most basic of the three strategies. In the pretimed control strategy, the lights change based on fixed time values. The values are chosen based on data concerning previous traffic flow through the intersection. This control strategy operates the same no matter what the traffic volume is.

The actuated control strategy utilizes sensors to tell where cars are at the intersection. It then uses what it learns from the sensors to figure out how long it should wait before changing the light colors. For example, if the signal picks up a car coming just before the green light is scheduled to change, the length of the green light can be extended for the car to go through.

The adaptive control strategy is similar to the actuated control strategy. It differs in that it can change more parameters than just the light interval length. Adaptive control estimates what the intersection will be like based on data from a long way up the road. For example, if the signal notices that there is a lot of traffic building up down the road during rush hour, it might lengthen the green light intervals on the main road and shorten them on the smaller road.
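The actuated strategy's green-extension rule can be sketched as follows; the extension, gap, and maximum-green values are illustrative assumptions, not parameters from the project.

```python
def actuated_green_time(base_green, arrivals, extension=2.0, gap=3.0,
                        max_green=60.0):
    """Actuated control: start from the pretimed green interval and
    extend it whenever a car arrives within `gap` seconds before the
    currently scheduled end of green, up to `max_green`.

    `arrivals` are sensor detection times in seconds from the start of
    green. Pretimed control is the degenerate case with no extensions.
    """
    end = base_green
    for t in sorted(arrivals):
        if t <= end and end - t <= gap and end + extension <= max_green:
            end += extension  # let the detected car clear the intersection
    return end
```

A car arriving at t = 8 s against a 10 s base green is inside the 3 s gap, so the green stretches; a car arriving long after the light would have changed is ignored.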

1.2 Driver Behavior

None at this time.

1.3 Machine Learning

This piece of the project has not been started.

I am using MASON software to do my traffic simulation. MASON is a Java-based modeling package distributed by George Mason University. My simulation is based on the MAV simulation included with the MASON download. In MASON, everything runs from the Schedule class. The Schedule keeps track of time and moves the simulation along one step at a time. Objects that move implement the Steppable interface; thus, each has its own step method that the Schedule calls at each step in time. There is also a Stoppable interface that takes objects off the Schedule.
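MASON itself is Java; purely as an illustration of the Schedule/Steppable/Stoppable pattern described above, here is a minimal Python analogue (this is not MASON's API):

```python
class Schedule:
    """Keeps track of time and moves the simulation along one step at a
    time, calling each registered agent's step method (MASON-style)."""
    def __init__(self):
        self.time = 0
        self.agents = []

    def add(self, agent):
        """Register a Steppable; return a 'Stoppable' callable that
        takes the agent back off the Schedule."""
        self.agents.append(agent)
        return lambda: self.agents.remove(agent)

    def step(self):
        self.time += 1
        for agent in list(self.agents):
            agent.step(self)

class Car:
    """A Steppable: has its own step method that the Schedule calls."""
    def __init__(self, speed):
        self.position = 0.0
        self.speed = speed

    def step(self, schedule):
        self.position += self.speed
```

The Schedule owning the clock and driving every agent is the key design choice: agents never advance themselves, so all movement stays synchronized with simulation time.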

In this program, the visible simulation is made by the CarUI (Car User Interface) class. The CarUI starts the CarRun class running. The CarRun class is what starts the Schedule and creates everything in the simulation. CarUI takes information from CarRun to display on the screen. CarRun creates Continuous2Ds, one for each of the object types used in CarRun: Car, Region, Signal, and eventController. Continuous2Ds store objects in a continuous 2D environment. They make it easier to keep track of the objects in the simulation. The Continuous2D breaks the space of the simulation into "buckets."

If you want to find an object in a certain area of the simulation, you can check in the bucket there. For example, if you want to see if a car has another car near its front, you can look in the bucket that that car is in and check where the other cars in that bucket are. The Car class contains the information for how each autonomous car runs. It implements both the Steppable and Stoppable interfaces. Regions are what go on the background of the visual output; examples of Region objects are the roads and medians. The Signal class is almost identical to the Region class. However, the signals are redrawn at every time iteration, while the Regions are only redrawn if they change location or size.

Lastly, because there is no way in MASON to control when actions happen in the Schedule, I made the eventController class to tell actions when to happen. The eventController class uses functions defined in other classes to control the objects of those classes.

2.2 Driver Behavior

None at this time.

2.3 Optimization

This piece of the project has not been started.

Sorting Parts of Variable Width

Problem Statement. To analyze the efficacy of sorting parts by using slots and utilizing the variable angular velocities that result when parts of distinct physical dimensions move off of a relatively flat inclined surface.

Purpose. The final goal is to assess the feasibility of quality control based on taking advantage of the different orientations, at various times after release, that are caused by deviations from the original product.


Apology.

In a manufacturing environment, it is crucial to establish a high standard of quality control while at the same time maintaining a balanced budget. Robustness of production as well as the minimization of risk are also of concern. Ergo, simple, automated techniques for weeding out defective pieces are desirable. It is the intention of this project to analyze the effectiveness of one such technique.


It is not uncommon for constructs to be delivered by conveyor belts as they are processed in a factory. Their continuous motion, when directed over the end of such a surface, induces a certain rotation that accompanies each item during the ensuing fall. We propose to sort these items by exploiting variance in this rotation via the precise positioning of one or more slots. It is hoped that this motion will be sufficiently sensitive to deformation for this procedure to be feasible.


The scope of this project is exploratory in nature; there is no sense in attempting to develop a general method. Thus, we will work with a rather simple subset of possible pieces. Moreover, due to logistic constraints, experimentation will be conducted primarily within a digitally rendered environment. The model will be coded from scratch so as to give me total control of the physics involved, and repeated trials with dependent variance will be employed to discern the efficacy of this sorting technique.


Derivation of Theoretical Equations. We will work under admittedly simplistic circumstances, assuming that the objects being sorted are approximated by a rectangular block of length, height, and depth $l$, $h$, and $d$ respectively. We call the generic block $B$. We suppose furthermore that there exists a uniform density $\rho$ within $B$, so that its mass $M_B$, generally given by $M_B = \int_V \rho\,dV$, is instead given by $M_B = \rho h l d$. In a real environment, products are typically delivered by conveyor belt. Due to the aforementioned logistic constraints, we use the approximation of an inclined plane, which we call $P$. $B$ will be released from the top of $P$.

We write $\theta$ for the angle between $P$ and the gravitational equipotential contour. Let $\mu_s$ and $\mu_k$ denote the static and kinetic coefficients of friction, respectively, between $B$ and $P$. Under our assumptions of uniform density, the relevant calculations are straightforward. If $\mu_s \ge \tan(\theta)$, then no motion results, due to static friction; otherwise $B$ moves under the force of gravity and friction. If $\theta > \frac{\pi}{2} - \tan^{-1}\!\left(\frac{h}{l}\right)$, then the block tumbles down the incline; we assert that this is not the case. The normal component of the contact force, $F_n$, is given by $|F_n| = M_B g \cos(\theta)$, and the frictional component, $F_k$, by $|F_k| = M_B g \mu_k \cos(\theta)$. $B$ then slides down $P$ along path $C$ until enough of it hangs over the edge of $P$ for it to begin to rotate.

Let us call this phase of sliding initiation. Initiation is governed by Newton's second law applied in the direction x̂, with the positive direction pointing straight down P:

F_x̂ = F_g,x̂ − F_k = M_B a_x̂

M_B g sin(θ) − M_B g µ_k cos(θ) = M_B a_x̂

a_x̂ = g (sin(θ) − µ_k cos(θ))

If the plane has length D in the x̂ direction, B slides a distance of D − (1/2)(l + h tan(θ)) straight down the plane, after which it begins to pivot about the edge of the plane. Let us call this edge e. Calculation of its velocity v at this time can be simplified via conservation of energy:

∫_C F · dr = −ΔPE_g − W_F_k

(1/2) M_B |v|² = KE = M_B g H − M_B g µ_k [D cos(θ) − (1/2)(l cos(θ) + h sin(θ))]

|v| = √( 2g { H − µ_k [D cos(θ) − (1/2)(l cos(θ) + h sin(θ))] } )
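The closed-form initiation equations lend themselves to a direct numerical check. The following is a minimal C++ sketch (not the project's code; the function names and g = 9.8 m/s² are illustrative assumptions) evaluating the acceleration, the slide distance, and the speed at the pivot edge:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the initiation phase (assumed SI units; g = 9.8 m/s^2).
const double g = 9.8;

// a_x = g(sin(theta) - mu_k cos(theta)); meaningful only when positive,
// i.e. once static friction has been overcome.
double slideAccel(double theta, double muK) {
    return g * (std::sin(theta) - muK * std::cos(theta));
}

// Distance slid along the plane before the block begins to pivot about e.
double slideDistance(double D, double l, double h, double theta) {
    return D - 0.5 * (l + h * std::tan(theta));
}

// v = sqrt(2 a s) for constant acceleration from rest over distance s.
double edgeSpeed(double a, double s) {
    return std::sqrt(2.0 * a * s);
}
```

This simply propagates the constant-acceleration kinematics; the conservation-of-energy route in the text gives the same speed.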

Thus far, our equations have dealt with constants. In this phase ω (the angular velocity) is determined; accordingly, we call it glide-rotation. During glide-rotation, critical variables that govern the movement of B, such as I_e and T_e, the moment of inertia and torque about e respectively, are functions of the position of B itself. Let e_o denote the axis parallel to e that passes through the 3-D center of B. The calculation of the moment of inertia of B about e_o is straightforward:

I_eo = ∫_V r² dm = ρ ∫_{−l/2}^{l/2} ∫_{−h/2}^{h/2} ∫_0^d (x² + y²) dz dy dx = ρdhl (h² + l²)/12 = M_B (h² + l²)/12

The parallel axis theorem yields I_e = M_B ((h² + l²)/12 + r²), where r is the distance between e and e_o. Torque is given by T_e = M_B g x, where x is the horizontal displacement of the center of the block past e. The torque equation then gives us

T = T_e = I_e (dω/dt):  M_B g x = M_B ((h² + l²)/12 + r²) (dω/dt),

where x = cos(φ) √(r² − h²/4) + sin(φ) (h/2), φ being the accumulated rotation angle. A central calculation determines the torque due to the contact force between B and P, which in turn yields F_n:

T = T_F_k + T_F_n = I_eo (dω/dt)

F_n (µ_k h/2 ± √(r² − h²/4)) = M_B ((h² + l²)/12) · g x / ((h² + l²)/12 + r²)

F_n = M_B g x / [ (µ_k h/2 ± √(r² − h²/4)) (1 + 12r²/(h² + l²)) ]

This set of implicit differential equations is much too difficult to resolve by elementary methods, and thus requires a computational model. The observant reader will surely notice that the theoretical equation for F_n is asymptotic as r → (h/2)√(µ_k² + 1). This anomaly is the result of a theoretical Newtonian decomposition that is invalid as the direction of the combined contact force approaches that of r. A graph of the normal force reveals that the interval of gravely affected r is rather slim; hence, it is reasonable to patch this anomaly by assuming a constant force until our equation is valid again. Furthermore, we assume that contact between B and any slots is by nature a rigid-body collision in which the slots are fixed and infinitely massive. Moreover, we assume a constant elasticity E ∈ [0, 1] for all of the collisions. Let r, p̂_o, p, ω_i, and v_i denote the vector pointing from the center of B to the parcel of B in the collision, a unit vector in the direction of the impulse delivered, the impulse delivered, and the initial angular velocity and velocity of B, respectively.
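As a rough illustration of the patching strategy just described, the sketch below (hypothetical; the patch half-width, the clamping scheme, and the sign branch of ± are my assumptions, not the project's) clamps r away from the singular radius r = (h/2)√(µ_k² + 1) and otherwise evaluates the F_n equation directly:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch of the patched normal-force equation. The equation is
// singular as r -> (h/2)*sqrt(mu_k^2 + 1); inside an assumed band around
// that radius we hold the force at a nearby valid value, per the text's
// "assume a constant force until our equation is valid again".
double normalForce(double MB, double gx, double r,
                   double h, double l, double muK) {
    const double rCrit = 0.5 * h * std::sqrt(muK * muK + 1.0);
    const double band  = 0.05 * h;        // assumed patch half-width
    double rEff = r;
    if (std::fabs(r - rCrit) < band)      // patch: clamp r away from the pole
        rEff = rCrit + band;
    // One branch of the +- term; the opposite branch only flips the sign.
    double lever = std::sqrt(rEff * rEff - h * h / 4.0) - muK * h / 2.0;
    double scale = 1.0 + 12.0 * rEff * rEff / (h * h + l * l);
    return MB * gx / (lever * scale);
}
```

Away from the band the expression matches the derived F_n term for term; inside it, the clamp keeps the force finite.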

By our hypotheses, E = ( (1/2) M_B |v_f|² + (1/2) I_o ω_f² ) / ( (1/2) M_B |v_i|² + (1/2) I_o ω_i² ). Now,

d = signedmagnitude(r × p̂_o) / I_o

ω_f = ω_i + |p| d

E · ( (1/2) M_B |v_i|² + (1/2) I_o ω_i² ) = (1/2) M_B [ (v_ix + (|p|/M_B) p̂_ox)² + (v_iy + (|p|/M_B) p̂_oy)² ] + (1/2) I_o (ω_i + |p| d)²

0 = ( 1/(2M_B) + I_o d²/2 ) |p|² + ( v_i · p̂_o + I_o ω_i d ) |p| + (1 − E) · ( (1/2) M_B |v_i|² + (1/2) I_o ω_i² )

|p| = [ −(v_i · p̂_o + I_o ω_i d) + √( (v_i · p̂_o + I_o ω_i d)² − 2 (1/M_B + I_o d²)(1 − E) E_i ) ] / ( 1/M_B + I_o d² )

where E_i is the initial energy (1/2) M_B |v_i|² + (1/2) I_o ω_i² and v_i · p̂_o = v_ix p̂_ox + v_iy p̂_oy. We choose +√(…) because as E → 1⁻, the expression with −√(…) tends to 0. (Recall, for instance, that v_i · p̂_o is always negative.) Of course, the fixed elasticity opens the possibility that |p| is computed to be imaginary. For such instances, we adopt the convention of perfect elasticity, which is uniquely determined by computing |p| with E = 1. The crux of my project is to create and refine this model to the degree that I can predict the outcome that will result when B is released from the top of P. Hopefully, there will be a high enough degree of sensitivity to the dimensions of B that I will be able to sort good and bad pieces based on the variation in outcome.
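The quadratic solution for |p|, together with the perfect-elasticity fallback for imaginary results, can be sketched as follows (a hedged C++ illustration, not the project's code; the argument list is an assumption, with vi2 = |v_i|² and vDotP = v_i · p̂_o):

```cpp
#include <cassert>
#include <cmath>

// Impulse magnitude |p| from the quadratic above.
//   MB: block mass, Io: moment of inertia, vi2 = |v_i|^2,
//   vDotP = v_i . p_o (negative for an approaching contact),
//   wI: initial angular velocity, d = signedmagnitude(r x p_o) / I_o,
//   E: elasticity in [0, 1].
double impulseMagnitude(double MB, double Io, double vi2, double vDotP,
                        double wI, double d, double E) {
    double b    = vDotP + Io * wI * d;              // linear coefficient
    double a    = 1.0 / MB + Io * d * d;            // twice the quadratic coeff.
    double Ei   = 0.5 * MB * vi2 + 0.5 * Io * wI * wI;
    double disc = b * b - 2.0 * a * (1.0 - E) * Ei;
    if (disc < 0.0)       // imaginary |p|: fall back to perfect elasticity E = 1
        disc = b * b;
    return (-b + std::sqrt(disc)) / a;              // the +sqrt branch
}
```

For a head-on, perfectly elastic impact of a unit mass against a fixed wall (vDotP = −|v_i|, d = 0, E = 1) this returns 2 M_B |v_i|, as expected.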

Computational Modeling

In order to separate the model itself from the input and graphics implementation, the source has been partitioned into three files:

1. PhysModel.cpp - The body that contains most of the physics equations and the graphics routines to render the setup.

2. Parse.cpp - The file which takes the command line input, defaulting unassigned variables to the previous run via an intermediary storage file.

3. Polygon.cpp - The source that defines general graphics functions, the structs Polygon and Vertex, and several polygonal intersection routines.

The fundamental hypothesis of this project is the assumption that the complicated motion of the block can be modeled as a discrete set of equations repeatedly propagated through a small time step. The goal, then, is to apply such modeling techniques to a system that is hopefully sufficiently complicated as to exhibit a high degree of sensitivity to independent variables such as height and length. Implementation begins with calls to several initialization routines. The first call interprets the command line input.

From the command line, independent variables can be assigned by appending "*VariableName=Value" to the end of the command line call. Graphics and a number of debugging flags can be assigned via the word "-flags". (Graphics, for example, is the flag "g", which then propagates as GFX=1.) Then the Sine, Cosine, Tangent, and Sqrt look-up tables are calculated. By precomputing the values of sine, cosine, tangent, and square root at millions of points, we can effectively negate the cost of computationally expensive Taylor series calculations without intolerable loss of precision. Indeed, a brief direct comparison indicates accuracy to roughly 6 correct decimal places.
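A minimal version of such a look-up table might look like the following (a sketch, not the project's implementation; the table size and nearest-entry indexing are my assumptions):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Precomputed sine look-up table: ~10^6 samples over one full period.
const int    TABLE_SIZE = 1 << 20;
const double TWO_PI     = 6.283185307179586;

std::vector<double> buildSinTable() {
    std::vector<double> t(TABLE_SIZE);
    for (int i = 0; i < TABLE_SIZE; ++i)
        t[i] = std::sin(TWO_PI * i / TABLE_SIZE);
    return t;
}

double fastSin(const std::vector<double>& t, double x) {
    // Reduce x into [0, 2*pi) and round to the nearest table entry.
    double u = std::fmod(x, TWO_PI);
    if (u < 0) u += TWO_PI;
    int i = static_cast<int>(u / TWO_PI * TABLE_SIZE + 0.5) % TABLE_SIZE;
    return t[i];
}
```

With about a million entries the step between samples is ~6 × 10⁻⁶ radians, which is consistent with the roughly-six-decimal-place accuracy reported above.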

Implementation continues with a call to display, which serves to manage thousands of repeated calls to Model, in which the independent variables of length, height, and the position of the rightmost slot-block are minimally altered. The values returned are tabulated in the file specified by the ofstream DAT. The variable ANOMALOUS is a global that keeps track of the validity of the computed outcome: the block being either accepted or rejected by the slot. The potential loss of validity is a function of the theoretical assumptions we have made, which are not necessarily valid.

Some potential errors, and the mitigating devices I have used to address them, include:

- Error induced by Eulerian time discretization. Because of the complex nature of the system, it is difficult to determine precisely how this affects accuracy. The runs I have conducted on a typical PC use what appears to be a small time step: between one and three hundredths of a second.
- Perfectly rigid collision. As we shall see, there is a small tolerance built into the impulse function which governs this collision.
- Look-up table error; as mentioned, with sufficiently many precomputed values this error can be reduced to on the order of 1 part in 1,000,000.

Implementation continues to the model itself.

A few qualitative comparisons are conducted. Static friction must meet or exceed kinetic friction, but must not be so great as to prevent any motion at all. Moreover, the assumption that block-plane contact remains face-to-face throughout initiation asserts a simple trigonometric relation. Finally, if the block is simply too large to fit through the slot, the trial is categorically recorded as a rejection with zero anomaly. After the said preconditioning, the model simulates the triphasic experiment in three consecutive loops.

The variable IPS is employed in conjunction with SDL_Delay to add sufficient artificial delay to obtain the desired number of iterations per second. The delay function itself takes only integer arguments; thus, all values of IPS in excess of 1000 are equivalent. Otherwise, the loops are governed by a discrete version of the physics described in detail above. The Collision function returns whether or not the block overlaps with any of the slot polygons, assigning to every edge of each shape the number of other edges it intersects.

Upon return to the base loop making the call, the program calls impulse to handle the relevant impulse transfer. impulse runs time forwards and backwards with progressively smaller time steps until it has precisely determined when the collision occurred. Again, we have employed a calculational technique to lower the number of computations necessary while maintaining high precision. The function then computes under the standard two-dimensional Newtonian dichotomy:

1. Either a corner of the block has protruded over an edge of a slot-block, or

2. A corner of a slot-block has protruded over an edge of the block.

That is, it assumes that we do not have the scenario in which two corners mutually protrude over one another. The above physics equations are then applied. The direction of the frictional component is determined via a sequence of vector calculations. The normal component is then scaled according to COLFRICOF, the fixed coefficient of friction for these impacts, and the impulse is delivered. A potential error is the block rotating partially into the slot-block due to the rigidity assumption. A small fraction of the collisions result in this anomalous motion; hence, the resultant motion is checked by another loop governed by the Collision function. If at least one iteration is spent with such positioning, the current trial is flagged with an anomaly value of 2. If too many such iterations occur, a value of 4 is used instead.
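The forwards-and-backwards refinement of the collision time described above is essentially a bisection on the first overlapping instant. A sketch, with `overlaps` as a hypothetical stand-in for evaluating the project's Collision test at a trial time:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Given a state known collision-free at t0 and overlapping at t1, halve
// the bracket until the collision time is pinned down within tol.
// Returns the last collision-free instant, from which the impulse can
// be applied without interpenetration.
double refineCollisionTime(std::function<bool(double)> overlaps,
                           double t0, double t1, double tol) {
    while (t1 - t0 > tol) {
        double mid = 0.5 * (t0 + t1);
        if (overlaps(mid)) t1 = mid;   // step back: collision already occurred
        else               t0 = mid;   // step forward: still collision-free
    }
    return t0;
}
```

Each halving doubles the precision, so the cost grows only logarithmically in the desired resolution — the computation-saving property the text alludes to.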

Finally, the acceptance / rejection of the block is determined by checking its center with a set of horizontal and vertical thresholds. Model returns 0 for a rejection and a 1 for acceptance.

Conclusion

In converting the records of the trials into images for a cursory and qualitative analysis, it becomes clear that the sensitivity obeys a semi-chaotic decay. Green pixels are plotted for trials resulting in acceptance, black for rejection, and red for anomalous cases. Here we refer to the parameters of the subsequent image.

For this particular image, the horizontal scale is one pixel to one millimeter of block variance, with the base value plotted in the leftmost column and increased with each pixel to the right. Vertically, one pixel is half a centimeter of slot-width flux, with the initial value at the top and slot-width increasing with each pixel down. The base block is one meter long and half a meter in height. The image actually consists of two sensitivity tests: the top half corresponds to flux in length, while the bottom depicts sensitivity to increases in height. Small, discrete regions of acceptance are visible.

This exploratory experiment seems to indicate that the setup can be manipulated to achieve sensitivities of at least one part in one hundred. Moreover, there is nothing to confine the application of this methodology to a single slot. Though we do not give a precise analytic treatment here, it is readily apparent that an additional avenue of selection would be the use of several ramps and slots.

In conclusion, further refinement in this method of sorting appears to be possible.

Appendix A - Code

For completeness, the three source files PhysModel.cpp, Parse.cpp, and Polygon.cpp are reproduced here.

PhysModel.cpp

#include <stdlib.h>
#include <SDL/SDL.h>
#include <GL/glut.h>
#include <iostream>
#include <math.h>
#include <fstream>
#include "Polygon.cpp"
#include "Parse.cpp"

Modeling Atmospheric Change

My goal is to create a model of the atmosphere over time, predicting its strength given the increasing amount of pollution as well as the controversial but effective Montreal Protocol. Many projects are in place to save the ozone, and this model will assist in assessing the impact of anti-pollution movements and determining the long-term possible outcome given many parameters. This model features user-controlled variables, allowing the user to manipulate the year, solar flux, and existence of anti-pollution projects.


1. Abstract

The goal is to create a model of the atmosphere over time, predicting its strength given the increasing (and in this case, user-controlled) amount of pollutants like greenhouse gases. Many projects are in place to save the ozone, and this model will assist in assessing the impact of anti-pollution movements and determining the long-term possible outcome given the many flexible parameters.

2. Background

2.1 The Ozone

Ozone, a feared word in the media, is essential to life on Earth. Averaging about three molecules per ten million, O3 is very rare, representing a minute fraction of atmospheric composition. Nearly 90% of all ozone is found in the stratosphere, the atmospheric layer between 10 and 40 km above the Earth's surface, where it shields the surface from ultra-violet radiation (UV-B).

Graphical Modeling of Atmospheric Change, Carey Russell

Ozone filters out the high-energy radiation below 0.29 micrometers, allowing only a small amount to reach the Earth's surface. Strongest at about 25 km altitude, this layer is known as the ozone layer; damage to this layer results in subsequent increases in UV-B radiation and risks of eye damage, skin cancer, and adverse effects on marine and plant life.

2.2 The Hole

In the 1980's, scientists noticed a dramatic increase in the amount of UV-B reaching the surface. At first they began to suspect, and then detected, a steady thinning of the ozone layer. Scientific concern morphed into public alarm when the British Antarctic Survey announced the detection of the first Antarctic 'hole' in 1985.

In truth, this ozone 'hole' is not a gap in the ozone layer at all; rather, it is a sharp decline in stratospheric ozone concentrations over most of Antarctica for several months during the southern hemisphere spring. Continued research and satellite data revealed depleting ozone levels over Antarctica growing worse with each passing year.

[Figure: the 1985 Antarctic hole alongside the 2003 Antarctic hole]

http://www.ucar.edu/communications/atmosphere-timeline.html http://www.noaanews.noaa.gov/stories/s2099.htm

Research now shows that the ozone layer over Antarctica thins to between 44% and 55% of its pre-1980s level. The result is up to a 70% deficiency for short time periods, and at some altitudes ozone destruction is practically total. The satellite image above to the right shows the ozone hole, now more than two and a half times the size of Europe.

Prompted by the possibility that CFCs could lead to serious ozone decomposition, policy makers worldwide signed the Montreal Protocol treaty in 1987. In brief, this protocol limits CFC production and usage. By 1992, ozone loss was continuing to increase exponentially. This evidence prompted leaders to strengthen the Montreal Protocol; the revision called for a complete phase-out of CFC production in industrialized countries by 1996.

http://www.eohandbook.com/ceos/part2e.html

As a result, most CFC concentrations are decreasing around the globe. Production in developed countries has fallen by 95%. Current research suggests that the Montreal Protocol is working relatively effectively. The abundance of CFCs and other ozone-depleting substances in the lower atmosphere peaked in 1994 and has now begun to decline. There is a key distinction, however: it is the rate of atmospheric destruction that is now decreasing. Many people take this to mean that the atmosphere is repairing itself, but unfortunately only the rate of decomposition is decreasing, not the amount of decomposition itself. On the positive side, as a result of the Montreal Protocol, the ozone layer is expected to recover gradually over the next 50 years.

4. Materials and Apparatus

4.1 The Software (NetLogo)

NetLogo began with StarLogo. It is the next generation of multi-agent modeling languages, building "off the functionality of [the] major product StarLogoT," adding "significant new features and a redesigned language and user interface." In summary, NetLogo is a modeling environment for simulations. It is well suited for modeling complex systems such as natural or social phenomena. Because NetLogo is programmable, modelers can give instructions to hundreds or thousands of independent agents operating simultaneously. This makes it possible to discover the connection between individual behavior and group patterns. Additionally, it lets users open simulations and "play" with them, exploring various conditions.

See References for citation information.

The "Net" in NetLogo is "meant to evoke the decentralized, interconnected nature of the phenomena you can model with NetLogo. It also refers to HubNet, the networked participatory simulation environment included in NetLogo. The 'Logo' part is because NetLogo is a dialect of the Logo language."

So far, the program has successfully modeled the modern-day deterioration of the atmosphere. The user is able to switch the Montreal Protocol on or off and see the difference made by the mere existence of the program. Additionally, the user can watch the absorption rate and watch as the radiation reaches Earth. The user is alerted when (or if) the radiation reaches dangerous levels. The solar flux is treated as a constant, because on such a short time scale (through 3500) its small fluctuations make only a modest difference in the overall radiation reaching Earth. Next on the to-do list is continued research into the inner workings of the atmosphere. I plan to expand the time scale and allow for atmospheric regeneration, which is expected (although not currently observable).

Graphical Modeling of Atmospheric ChangeCarey RussellAlso, so far the user cannot change the amount of incoming radiation because (as stated above) the solar flux has been defined as constant. However, soon the ability to manipulate the IR flux will be added. A necessary complexity in the coding is allowing for the release of radiation emitted back from earth, which as of current is left unattended.

6. References/Appendixes

Much credit goes to NetLogo and the National Wildlife Association for posting so many articles concerning the future of the ozone. The Montreal Protocol is my guide for regeneration programs.

Genetic Algorithms and Music

Genetic algorithms use feedback resulting from evaluating data sets to optimize these data sets for the best performance as defined by the user. The main data processing is done in LISP. The creation of audio files is done using Csound.


This paper documents my work researching and testing genetic algorithms.

1 Introduction

Genetic algorithms use feedback resulting from evaluating data sets to optimize these data sets for optimum performance, where optimum performance is defined by the user. The main data processing is done in LISP. The program has a simple shell script as its frontend. CSound is used to convert the data sets to audio files, which are heard and evaluated by the user.

An Investigation of Genetic Algorithms Using Audio Output, Matthew Thompson

The first area of research was into various forms of genetic algorithms [1]. Topics covered included different methods of storing data, such as in a tree, list, or array. In a tree, mutation operators include subtree destruction, node swap, and subtree swap, with single-point subtree exchange as a crossover method. In a list, mutations can be generative, destructive, element flip, node swap, or sequence swap, and crossover can be single-point or order-based. In an array, mutations can be destructive, element flips, or element swaps, and crossovers can be single-point or variable-length.

The second area of research was into music theory, to ensure that the program would, even with random data, produce something that sounded decent. To do this, I wrote the program such that a melody stays on key, using a hash table of notes and frequencies [2] to accomplish this.

The initial program was made to store data in lists, with element-flip mutation and single-point crossover. I used this type of genetic algorithm because, at the time of starting the project, it was the type with which I was most familiar. After finishing a simple score-processing function that would turn a list of numbers into a usable audio file, I integrated it with the genetic algorithm code so that the algorithm's evaluation function was user input reporting what the user thought of the melody a specific population member created. Melodies were rated on a scale of 1 to 9, with higher numbers indicating a stronger liking for the melody. A shell script was written to serve as a frontend for the algorithm.
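For illustration, the list-storage variant with element-flip mutation and single-point crossover might be sketched as follows (in C++ rather than the project's LISP; the names and the scale-degree encoding of notes are my assumptions):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A melody is a list of scale degrees (staying on key, as in the text).
// Fitness would come from the user's 1-9 rating of the rendered audio.
typedef std::vector<int> Melody;

// Element-flip mutation: replace one note with a random scale degree.
Melody elementFlip(Melody m, int scaleSize) {
    int i = std::rand() % m.size();
    m[i] = std::rand() % scaleSize;
    return m;
}

// Single-point crossover: prefix of parent a, suffix of parent b.
// Assumes both parents have the same length.
Melody singlePointCrossover(const Melody& a, const Melody& b) {
    int cut = std::rand() % a.size();
    Melody child(a.begin(), a.begin() + cut);
    child.insert(child.end(), b.begin() + cut, b.end());
    return child;
}
```

A full run would loop: render each population member with the score-processing function, collect the 1-9 ratings, and breed the next generation from the highest-rated melodies using these two operators.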

Testing of the program involved running it over repeated trials, using different data storage, mutation, and crossover methods, and observing trends in the improvement of the melodies created by the program.

[1] Intro to Genetic Algorithms, http://lancet.mit.edu/mbwall/presentations/IntroToGAs/index.html

[2] Frequencies of Musical Notes, http://www.phy.mtu.edu/~suits/notefreqs.html

Modeling a Saturnian Moon

This project hopes to add to our understanding of space systems by providing a comprehensive simulation of the Saturnian moon system. By doing this, the project attempts to expose which phenomena can't be explained with modern models and perhaps suggest theories to explain the unexplained.


The Saturnian moon system is home to many fascinating and unusual astronomical phenomena. For example, Epimetheus and Janus share orbits and exchange momentum every four years. Hyperion has chaotic rotation. Our understanding of these phenomena, however, is unfortunately limited. This project hopes to add to our understanding of space systems by providing a comprehensive simulation of the Saturnian moon system. By doing this, this project attempts to expose what phenomena can't be explained with modern models and perhaps suggest theories to explain the unexplained.

Space System Modeling: Saturnian Moons, Justin Winkler

This project focuses on the modeling of complex space systems. A problem within the realm of modeling is that there are nearly always discrepancies in our explanations of certain phenomena. The purpose of this project is to create a simulation of the Saturnian moon system in hopes of better understanding unexplained occurrences within the system. The project therefore aims to reveal phenomena which current models do not explain, and possibly offer explanations of such phenomena. The scope of this project is limited only by time and computer resources.

By adding more parameters and factors to create more complex and accurate models, the simulation could be improved with no foreseeable end. Unfortunately, time is limited, and the calculations necessary for such a simulation may eventually exceed the computational resources of the lab after enough model alterations. Nonetheless, given the current resources, this project is still able to create a comprehensive model.

Background

The Saturnian moon system is a hotbed of interesting phenomena. There are moons that have odd orbital inclinations, there are moons that are unusually colored, and some moons may contribute to the regulation of Saturn's rings.

One moon is affected by the forces within the system to such a degree that its rotation is chaotic. There are two moons that share an orbit, with the appearance that one will overtake the other and the two will collide. This does not occur, however, as every four years they exchange momentum, making the slower moon faster than the originally faster moon. Nowhere else in the solar system do phenomena such as these occur in such abundance. This makes the Saturnian moon system a natural choice for simulation. Numerous solar system simulators exist today.

A simple example, named Orrery, can be found at http://orrery.unstable.cjb.net/. Other simulations have been made concerning the N-body problem, which attempt to find subsequent motions of bodies based on initial parameters. One of these simulations, which uses NetLogo, is found at http://ccl.northwestern.edu/netlogo/models/N-Bodies. This project will build upon these past models by applying some of their techniques to the Saturnian moon system. One major technique used to model space systems is Newton's Law of Universal Gravitation, a basic law used in innumerable simulations:

F = (G * m1 * m2) / r^2

Here F is the force (newtons) exerted on a massive body through gravity, G is the gravitational constant (approximately 6.67 × 10^-11 N m^2 kg^-2), m1 is the mass (kilograms) of the body upon which the gravitational force is being exerted, m2 is the mass (kilograms) of the body that is exerting the gravitational force, and r is the distance (meters) between the two bodies. To incorporate this law into a 3-D model, three force vectors are calculated to account for movements in the x, y, and z directions.

Please note that, while very important, Newton's Law of Universal Gravitation is not the only important factor. For example, an attempt to simulate the effects of Saturn's magnetosphere on the moons might be revealing, but would have an immensely different focus. Time is still a limiting factor in this project, after all, and the processes that would need to be simulated for an adequate portrayal would be numerous and complex. As such, this project will generally avoid such factors and focus on the movement of objects within the system unless enough time can be set aside to add these factors in. By simulating the Saturnian moon system, one hopes to better understand the extent of our understanding.

By basing this simulation upon commonly used models, we can gauge how accurate and effective these models are. Furthermore, we can determine which phenomena we know how to explain and which we don't, making it clear which events are worth further research.

Development

The simulation is written in the C language. It should also be noted that OpenGL, a graphics library for C and C++, is heavily utilized; it was mainly implemented for testing purposes. Because of the number of data pieces involved in properly modeling a space system, I was drawn immediately to the idea of creating a struct to hold data for each object in the system.

/* Struct representing planets, satellites, or other massive bodies */
struct mBody {
    double *xLocs;     /* Previous x locations, for tracing the path */
    double *yLocs;     /* Previous y locations */
    double *zLocs;     /* Previous z locations */
    char *name;
    double mass;       /* In kilograms */
    double xLoc;       /* X component of location (distance from one point to the adjacent is in kilometers) */
    double yLoc;       /* Y component of location (kilometers) */
    double zLoc;       /* Z component of location (kilometers) */
    double xVelocity;  /* X component of velocity vector (km/sec) */
    double yVelocity;  /* Y component of velocity vector (km/sec) */
    double zVelocity;  /* Z component of velocity vector (km/sec) */
    double red;
    double green;
    double blue;
};

The pointers *xLocs, *yLocs, and *zLocs are used to store previous x, y, and z locations, which are then printed to the GL window as dots, thereby tracing the path of each body. The amount of data to be stored is user-set, with a default of 10000. It should be noted that these pointers are used solely for graphical purposes. The string *name stores the name of each body (ex: Titan, Saturn, Hyperion, Mimas). This string is used to create file streams to files with their names and the string ".txt" appended to the end (ex: Titan.txt, Saturn.txt, Hyperion.txt, Mimas.txt). These files are then used for data storage concerning their respective bodies.

The variable mass refers to the object's mass. The variables xLoc, yLoc, and zLoc store the current location of the body (with Saturn always at the origin). The variables xVelocity, yVelocity, and zVelocity hold the vector components of the object's velocity. Red, green, and blue are used solely to determine the RGB values with which GL prints each body to the window. This project stores the entire space system in a single array (s[]) of this struct. The project then iterates to the appropriate runtime, each time recalculating the values of each mBody within the array. It should be noted that I created helper functions that do the necessary projectile calculations. These helper functions follow:

double distance(struct mBody a, struct mBody b)
{
    double xDist = a.xLoc - b.xLoc;
    double yDist = a.yLoc - b.yLoc;
    double zDist = a.zLoc - b.zLoc;
    return pow(xDist * xDist + yDist * yDist + zDist * zDist, .5);
}

/* Recalculate an mBody's position during a time step */
void recalcL(struct mBody s[], int ind)
{
    s[ind].xLoc = s[ind].xLoc + (s[ind].xVelocity * tstep);
    s[ind].yLoc = s[ind].yLoc + (s[ind].yVelocity * tstep);
    s[ind].zLoc = s[ind].zLoc + (s[ind].zVelocity * tstep);
}

/* Recalculate an mBody's velocity during a time step */
void recalcV(struct mBody s[], int ind, int numbodies)
{
    double g = 6.6742 * pow(10, -20);  /* Newton's gravitational constant */
    double xDist, yDist, zDist, dist;
    double gforce = 0;
    double xForce = 0, yForce = 0, zForce = 0;
    int number;

    for(number = 0; number < numbodies; number++)
    {
        if(number != ind)
        {
            xDist = s[number].xLoc - s[ind].xLoc;
            yDist = s[number].yLoc - s[ind].yLoc;
            zDist = s[number].zLoc - s[ind].zLoc;
            dist = pow(xDist * xDist + yDist * yDist + zDist * zDist, .5);
            gforce = ((g * s[number].mass * s[ind].mass) / (dist * dist));
            xForce += gforce * (xDist / dist);
            yForce += gforce * (yDist / dist);
            zForce += gforce * (zDist / dist);
        }
    }
    /* printf("%s => x: %lf y: %lf z: %lf\n", s[1].name, s[1].xLoc, s[1].yLoc, s[1].zLoc); */
    s[ind].xVelocity = s[ind].xVelocity + ((xForce * tstep) / (s[ind].mass));
    s[ind].yVelocity = s[ind].yVelocity + ((yForce * tstep) / (s[ind].mass));
    s[ind].zVelocity = s[ind].zVelocity + ((zForce * tstep) / (s[ind].mass));
}

First of all, please note that the commented-out sections, particularly the printfs, were most likely used for debugging. As for the functions, distance is self-explanatory, returning the distance between mBody a and mBody b. recalcV recalculates the velocity of the mBody at the passed index (ind) by summing the gravitational force vectors exerted by the surrounding bodies, and recalcL recalculates the position of the mBody at the passed index (ind) based upon the current velocity.

These functions are executed for every mBody on each iteration of the program. Note that tstep is a global variable denoting the amount of simulated time that passes per iteration; the size of tstep determines the accuracy of the model (accuracy is inversely related to tstep). tstep can be set by the user and defaults to 500 seconds. These sections of code are responsible for essentially all of the calculation (excluding the for loop that calls them). After I created these functions, I focused on making the project easier to test by setting up a graphical depiction using OpenGL, as well as setting up filestreams to store data.

As such, these calculator functions have changed very little since their conception. I believe that now I will focus again on calculations. Here are some ideas I intend to implement in my code:
- First and foremost, I need plotting software to allow for proper analysis of the stored data. I am currently experimenting with gnuplot.
- Another major concern is that I need to account for the irregular shapes of the bodies, particularly moons such as Hyperion (the irregular shape of Hyperion may partially account for its chaotic motion). This will be a difficult idea to implement, but I believe it is necessary for the accuracy of the model.

I believe I can accomplish this by composing each body of numerous particles, each of which is acted upon by gravity. The mechanics of this technique, however, require more research.
- Because of the growing computational demands of this program, I have been considering using MPI to increase the program's speed. This task should be relatively simple to accomplish.
- For an accurate model, I need accurate data on the initial and relative positions of the bodies in question. I will need to obtain a sky chart for Saturn (or something to that effect).
- There are many more phenomena that could be modeled, and I intend to look into simulating them at a later date.
