
A Sequential Methodology for Integrated Physical and Simulation Experiments



Presentation Transcript


  1. Isaac Newton Institute for Mathematical Sciences, DEMA 2008 Workshop, Cambridge, 11-15 August 2008. A Sequential Methodology for Integrated Physical and Simulation Experiments. Daniele Romano, joint with Alessandra Giovagnoli. Dept. of Mechanical Engineering, University of Cagliari, Piazza d'Armi, Cagliari, Italy. E-mail: romano@dimeca.unica.it

  2. The problem. Design or improve a physical system by the combined use of physical and simulation experiments. Assumptions: physical observations are more reliable (closer to reality); simulation runs cost less. Schematisation: a two-treatment sequential experiment with T0 = physical experiment and T1 = simulation experiment, e.g. T1 T1 T0 T1 T1 … T0 Stop. The stopping rule is an essential part of the approach. An additional task is the design of the experiment at each stage (cf. the choice of doses in clinical trials).

  3. Questions. Is this problem relevant to applications? Has it already been investigated? Partially, in the literature on calibration of computer models, but there the objective is different and the field data are sometimes not designed (Kennedy and O'Hagan, 2001; Bayarri et al., 2007). Calibration could be part of the method.

  4. Analogies with other statistical problems. • George Box's sequential experimentation (Box and Wilson, 1951). However, that methodology involves no simulation experiments, and the experiments are decided mainly on expert judgement. • Sample surveys by questionnaires: information can be obtained directly or by proxy, and a main question is how many resources to allocate to direct observations and how many to proxy ones; we are not aware, however, of a sequential approach. • Computer models with different levels of accuracy (Qian et al., 2004). • Sequential experiments in clinical trials.

  5. Two motivating applications. Design of a robotic device (Manuello et al., 2003). Improvement of a manufacturing process (Masala et al., 2008). In both cases the sequence of experiments was based on judgement.

  6. Climbing robot. Simulation model developed in Working Model; 21 factors investigated.

  7. Allocation of experimental effort: 88% vs. 12% split of runs. Stages: extensive exploration, feasibility check on the prototype, exploration of the feasible region, computer model modification, optimization, confirmation. Note the efficient allotment of experimental effort.

  8. Benefits. The robot can climb steadily, with a speed seven times higher than in the initial design configuration, and on virtually any post surface (robustness) → better design. Only one physical prototype was built, instead of tens, to investigate 21 factors → cost saving. Computer exploration revealed that the robot can descend passively, using gravity → innovation. The comparison of physical vs. numerical results gave the designer the idea of how to modify the code, improving the model → simulation model improved.

  9. [Figure: elongation [mm] vs. time [s] traces for the operating modes of the robot: climb steadily, fall in control (→ innovation), fall, no move, climb and then fall.]

  10. Improvement of the flocking process. Flock yarns (thread + flock); car component covered by flock fabric. Two simulation models were developed: one for the electric field inside the chamber (FEMLAB), one for the motion of the flock (MATLAB). 9 factors investigated.

  11. Sequence of experiments (13 stages, alternating simulation experiments, expert reasoning and physical experiments). Simulation runs: 153 (63%); physical runs: 90 (37%). [Diagram, stages from first to last: electric field simulator, 9 runs; pilot plant, 11 runs; pilot plant, 22 runs; production line, 35 runs; lab, 22 runs; electric field simulator + flock motion simulator, 144 runs.]

  12. Benefits. Operating conditions considered potentially unsafe were tried on the simulator, yielding valuable information for improving the process; these conditions would never have been tried in the field → process efficiency increases. The increased efficiency can be exploited to raise productivity (by up to 50%) → process improvement, or to produce yarns with new characteristics → product innovation. Results from physical and simulation experiments were used to tune some unknown parameters of the simulator → computer model calibration. A mechanistic model of the whole process was developed by combining the simulation models with the results of a physical experiment (determining the rate of lifted flock) → new process design tool.

  13. Response models. Reality: the true response y(x), x ∈ D (a hyper-rectangle). Physical trials: y_P(x) = y(x) + e, with independent errors e ~ (0, σ²). Simulation: y_S(x) = y(x) + b(x), where b(x) is the bias function. The surfaces ŷ_P(x) and ŷ_S(x) are taken as response surfaces and are estimated by polynomial regression over the region of interest; the bias is estimated by b̂(x) = ŷ_S(x) − ŷ_P(x). Objective: locate a sufficiently high value of the true response over the domain D by using simulation as much as possible, provided that simulation is found reliable enough.
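The response models above can be sketched in Python. This is a hedged illustration, not the authors' code: the "true" response, the noise level, the linear bias, and the quadratic polynomial degree are all assumptions chosen for the demo.

```python
import numpy as np

def fit_quadratic(x, y):
    """Least-squares fit of y = a0 + a1*x + a2*x^2; returns the coefficients."""
    X = np.vander(x, 3, increasing=True)          # columns: 1, x, x^2
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, x):
    return np.vander(x, 3, increasing=True) @ coef

rng = np.random.default_rng(0)
true = lambda x: 10.0 - (x - 1.5) ** 2            # assumed unknown reality y(x)
sim  = lambda x: true(x) + 0.8 * x                # simulator with bias b(x) = 0.8x

x_p = np.linspace(0.0, 3.0, 6)                    # few, expensive physical runs
y_p = true(x_p) + rng.normal(0.0, 0.2, x_p.size)  # e ~ (0, sigma^2), independent
x_s = np.linspace(0.0, 3.0, 30)                   # many, cheap simulation runs
y_s = sim(x_s)

coef_p = fit_quadratic(x_p, y_p)                  # response surface y_hat_P
coef_s = fit_quadratic(x_s, y_s)                  # response surface y_hat_S

grid = np.linspace(0.0, 3.0, 7)
bias_hat = predict(coef_s, grid) - predict(coef_p, grid)   # b_hat(x)
print(np.round(bias_hat, 2))                      # should track 0.8*x up to noise
```

With more physical runs or lower noise, b̂(x) tracks the assumed bias 0.8x more closely.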

  14. Sequential procedure. At each step k of the procedure we must decide on: • the type of the experiment, d_k (d_k = 0: the experiment is physical, P_k; d_k = 1: the experiment is a simulation, S_k); • the region where the experiment is run, R_k; • the run size, n_k; • the design, D_k. We make simple choices on the type of the region and the design throughout: R_k is a hypercube of fixed size (centre C_k), and D_k is a Latin Hypercube Design.

  15. Rationale of the procedure. We want to use a physical experiment only in two particular situations: • a satisfactory response level has been found by simulation and it is worth checking it; • the measure of the unreliability of simulation needs updating, in order to check whether it is worth going on or stopping. A physical experiment is always run in the region of the preceding simulation experiment: d_k = 0 ⇒ R_k = R_{k-1}. We want to stop the procedure in two particular situations: • a satisfactory response level has been found by simulation and it has been confirmed in the physical set-up → SUCCESS; • the simulator is found too unreliable after a check by a physical experiment. In all other circumstances we use simulation experiments.

  16. Allowed transitions. START → S_1. After a simulation experiment: S_k → S_{k+1} or S_k → P_{k+1}. After a physical experiment: P_k → S_{k+1} or P_k → Stop.

  17. Performance measures at stage k. • Satisfaction, FSAT(k). • Increment in satisfaction with respect to the last experiment of the same kind, DSAT(k) = FSAT(k) − FSAT(k − l*). • Expected improvement in the next simulation experiment, FIMPR(k), based on the gradient at the frontier of R_k. • Total cost, c(k).

  18. (continued) • Unreliability of the simulation, FUNREL(k): after P_k, based on the error variance estimated at step k; after S_k, based on m_k, the number of regions where both kinds of experiments were done up to step k, and on d, the length of the hypercube edge.

  19. How are transitions ruled? After S_k: go to P_{k+1} if FSAT(k) > s_C or FUNREL(k) > u_C; otherwise S_{k+1}. After P_k: Stop if any of the rules (r1, r2, r3, r4) = 1; otherwise S_{k+1}. • r1: FSAT(k) > s_C — satisfaction (after a physical experiment) is high. • r2: FUNREL(k) > u_C — simulation is too unreliable. • r3: too many stages without any actual improvement. • r4: allowable cost exceeded.
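The transition rules above can be written as two small decision functions. This is a hedged sketch: the threshold names s_C and u_C follow the slides, but the numeric values, the idle-stage counter and the cost cap used for r3 and r4 are illustrative assumptions.

```python
# Illustrative thresholds (assumed, not from the slides).
S_C, U_C = 0.9, 0.5          # satisfaction / unreliability thresholds
MAX_IDLE, MAX_COST = 3, 100.0

def next_after_simulation(fsat, funrel):
    """After S_k: run a physical check P_{k+1} if satisfaction is high
    or the simulator looks unreliable; otherwise keep simulating."""
    return "P" if (fsat > S_C or funrel > U_C) else "S"

def next_after_physical(fsat, funrel, idle_stages, cost):
    """After P_k: stop if any rule r1..r4 fires; otherwise back to simulation."""
    r1 = fsat > S_C              # satisfaction after the physical check is high
    r2 = funrel > U_C            # simulation is too unreliable
    r3 = idle_stages > MAX_IDLE  # too many stages without actual improvement
    r4 = cost > MAX_COST         # allowable cost exceeded
    return "Stop" if (r1 or r2 or r3 or r4) else "S"

print(next_after_simulation(0.95, 0.1))           # -> P (confirm by physical run)
print(next_after_physical(0.95, 0.1, 0, 10.0))    # -> Stop (r1: SUCCESS)
print(next_after_physical(0.40, 0.7, 0, 10.0))    # -> Stop (r2: unreliable)
print(next_after_physical(0.40, 0.1, 1, 10.0))    # -> S (continue simulating)
```

Note that r1 firing after a physical experiment corresponds to the SUCCESS exit of slide 15, while r2 corresponds to the unreliability exit.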

  20. High-level flow diagram of the procedure. k = 1: run S_1. After a simulation experiment S_k, check whether to switch to a physical experiment (S → P): if yes, set k = k + 1 and run P_k; if no, set k = k + 1 and run S_k. After a physical experiment P_k, check the stopping rule: if it fires, STOP; otherwise continue with a simulation experiment. Only the high-level decision is made explicit here.

  21. Block S_k or P_k. Select R_k, n_k, D_k; run the design D_k; estimate ŷ(x); compute the performance measures FSAT(k), DSAT(k), FUNREL(k), FIMPR(k), c(k).

  22. Low-level decisions: select region, run size and design. Region: d_k = 0 ⇒ R_k = R_{k-1}; d_k = 1 ⇒ R_k ≠ R_{k-1}. If FIMPR(k) > 0, compute C_k (with R_k adjacent to R_{k-1}); otherwise draw C_k at random.
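The region move for the next simulation experiment can be sketched as follows. This is only a hedged interpretation of the slide: the choice of a steepest-ascent direction from the fitted gradient, and the "move by one edge length" step that makes R_k adjacent to R_{k-1}, are assumptions, as are the bounds of the sample space.

```python
import numpy as np

def next_centre(c_prev, edge, fimpr, grad, bounds, rng):
    """If the expected improvement is positive, place C_k so that R_k is
    adjacent to R_{k-1} along the estimated improvement direction;
    otherwise draw C_k at random in the sample space."""
    c_prev, grad = np.asarray(c_prev, float), np.asarray(grad, float)
    if fimpr > 0:
        direction = grad / np.linalg.norm(grad)      # steepest-ascent direction
        return c_prev + edge * direction             # adjacent hypercube centre
    low, high = bounds
    return rng.uniform(low, high, size=c_prev.size)  # random restart

rng = np.random.default_rng(1)
c_next = next_centre([0.0, 0.0], edge=1.0, fimpr=0.2, grad=[1.0, 0.0],
                     bounds=(-5.0, 5.0), rng=rng)
print(c_next)   # -> [1. 0.]
```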

  23. [Figure: sample space in (x1, x2) showing regions R1–R6 with centres C1–C6 (R1 = R2, C1 = C2) and points A, B, D.]

  24. Run size. d_k = 0 (P_k): the run size of each physical experiment is such that it costs as much as the simulation experiment preceding it: n_k = r·n_{k-1}, with r = c_S/c_P, 0 < r < 1. d_k = 1 (S_k): the run size of a simulation experiment is proportional to the expected increase of the response (if any) at the centre C_k of the next region. The proportionality parameter h can be tuned by setting the willingness to pay for obtaining an improvement Δy. When region R_k is drawn at random, we put n_k = n_1.
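The two run-size rules can be sketched numerically. The cost ratio, the value of h and the minimum run size below are illustrative assumptions; the rounding to an integer run size is also an assumption.

```python
def physical_run_size(n_prev, c_sim, c_phys):
    """n_k = r * n_{k-1}, with r = c_S / c_P (0 < r < 1), so that the physical
    experiment costs as much as the preceding simulation experiment."""
    r = c_sim / c_phys
    return max(1, round(r * n_prev))

def simulation_run_size(expected_increase, h, n_min=1):
    """Run size proportional, via the tunable parameter h, to the expected
    increase of the response at the centre C_k of the next region."""
    return max(n_min, round(h * max(0.0, expected_increase)))

print(physical_run_size(n_prev=20, c_sim=1.0, c_phys=5.0))   # -> 4
print(simulation_run_size(expected_increase=3.0, h=4.0))     # -> 12
```

With a simulation run five times cheaper than a physical one (r = 0.2), a 20-run simulation experiment is followed by a 4-run physical check of equal cost.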

  25. [Figure: sample space in (x1, x2) with the design points (marked +) of the experiments inside regions R1–R6.]

  26. Demonstrative case. A computer code implementing the procedure has been developed in Matlab. [Figure: true response (reality) vs. simulation.]

  27. Case 1. Sequence of experiments: S1 P2 S3 S4 P5 Stop. Active stopping rule: FUNREL(k) > u_C. [Figure: reality, physical experiments with predictions, simulation experiments with predictions; regions R1 = R2, R3, R4 = R5.]

  28. Case 2. Sequence of experiments: S1 P2 S3 P4 Stop. Active stopping rule: FSAT(4) > s_C. [Figure: reality, physical experiments with predictions, simulation experiments with predictions; regions R1 = R2, R3 = R4.]

  29. Case 3 (simulation = reality). Sequence of experiments: S1 P2 S3 S4 P5 Stop. Active stopping rule: FSAT(5) > s_C. [Figure: reality, physical experiments with predictions, simulation experiments with predictions; regions R1 = R2, R3, R4 = R5.]

  30. Case 4 (simulation = reality). Sequence of experiments: S1 P2 S3 S4 S5 S6 P7 Stop. Active stopping rule: FSAT(7) > s_C. [Figure: reality, physical experiments with predictions, simulation experiments with predictions; regions R1 = R2, R3, R4, R5, R6 = R7.]

  31. Conclusions. The scope of the approach is wide: in general, it can deal with any situation where the response can be measured by two (or more) instruments realizing a different quality-cost trade-off. The method is aimed at performance optimisation (maximisation of a distance measure in the output space), but the basic sequential mechanism can be applied to different goals. Testing and validation in real applications is needed.

  32. George Box, commenting on the sequential experimentation method: "The reader should notice the degree to which informed human judgement decides the final outcome" (Box, G.E.P., Hunter, W.G., Hunter, J.S. (1978): Statistics for Experimenters, p. 537). Human judgement or automation? Shall we ask Newton?

  33. References
  • Box, G.E.P., Wilson, K.B.: On the Experimental Attainment of Optimum Conditions. Journal of the Royal Statistical Society: Series B, 13, 1-45 (1951)
  • Kennedy, M.C., O'Hagan, A.: Bayesian Calibration of Computer Models. Journal of the Royal Statistical Society: Series B, 63(3), 425-464 (2001)
  • Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., Lin, C.-H., Tu, J.: A Framework for Validation of Computer Models. Technometrics, 49(2), 138-154 (2007)
  • Qian, Z., Seepersad, C.C., Joseph, V.R., Allen, J.K., Wu, C.F.J.: Building Surrogate Models Based on Detailed and Approximate Simulations. ASME 30th Conf. of Design Automation, Salt Lake City, USA. Chen, W. (Ed.), ASME Paper no. DETC2004/DAC-57486 (2004)
  • Manuello, A., Romano, D., Ruggiu, M.: Development of a Pneumatic Climbing Robot by Computer Experiments. 12th Int. Workshop on Robotics in Alpe-Adria-Danube Region, Cassino, Italy. Ceccarelli, M. (Ed.), available on CD-ROM (2003)
  • Masala, S., Pedone, P., Sandigliano, M., Romano, D.: Improvement of a Manufacturing Process by Integrated Physical and Simulation Experiments: a Case-Study in the Textile Industry. Quality and Reliability Engineering Int., to appear
