
Application of Parallel Processing to Probabilistic Fracture Mechanics Analysis of Gas Turbine Disks

Application of Parallel Processing to Probabilistic Fracture Mechanics Analysis of Gas Turbine Disks. Harry Millwater1, Brian Shook1, Sridhar Guduru2, George Constantinides1. 1 - Department of Mechanical Engineering; 2 - Department of Computer Science; University of Texas at San Antonio.




Presentation Transcript


  1. Application of Parallel Processing to Probabilistic Fracture Mechanics Analysis of Gas Turbine Disks Harry Millwater1, Brian Shook1, Sridhar Guduru2, George Constantinides1 1 - Department of Mechanical Engineering, 2 - Department of Computer Science, University of Texas at San Antonio

  2. Overview • Introduction • Methodology overview • UTSA parallel processing network • Application problems • Future work • Summary and Conclusions

  3. DARWIN® Overview (Design Assessment of Reliability With INspection) [Diagram: anomaly distribution, NDE inspection schedule, probability of detection, finite element stress analysis, and material crack growth data feed the probabilistic fracture mechanics analysis, which produces Pf vs. flights and risk contribution factors]

  4. Risk Assessment Results • Risk of Fracture on Per Flight Basis

  5. Risk Contribution Factors • Identify Regions of Rotor With Highest Risk of Failure

  6. Zone-based Risk Assessment • Define zones based on similar stresses, inspections, defect distributions, lifetimes • Defect probability determined by defect distribution, zone volume • Probability of failure assuming a defect computed using Monte Carlo sampling or advanced methods • Zone: Pi = Pi[A] × Pi[B|A], where Pi[A] is the probability of having a defect and Pi[B|A] is the probability of failure given a defect • Disk: Pf,disk ≈ Σ Pi
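The zone formula above can be sketched in a few lines. This is a hypothetical illustration, not DARWIN's actual implementation: the life distribution, sample count, and defect probability are made-up values chosen only to exercise the Pi = Pi[A] × Pi[B|A] structure.

```python
import random

def zone_risk(p_defect, n_samples, failure_given_defect):
    """Pi = Pi[A] * Pi[B|A]: defect probability times the conditional
    failure probability, the latter estimated by Monte Carlo sampling."""
    failures = sum(failure_given_defect() for _ in range(n_samples))
    p_fail_given_defect = failures / n_samples   # Pi[B|A]
    return p_defect * p_fail_given_defect        # Pi

def disk_risk(zone_risks):
    """Pf,disk ~= sum of zone risks (valid when each Pi is small)."""
    return sum(zone_risks)

# Toy failure model (assumed): a sampled crack-growth life falling short
# of the target number of flights counts as a failure.
random.seed(0)
target_life = 20000.0
def sample_failure():
    life = random.lognormvariate(10.2, 0.35)  # illustrative life distribution
    return life < target_life

risks = [zone_risk(1e-4, 5000, sample_failure) for _ in range(3)]
print(disk_risk(risks))
```

The sum approximation in `disk_risk` is the first-order expansion of 1 − Π(1 − Pi), which is accurate because individual zone risks are very small.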

  7. Zone-based Risk Assessment Impeller

  8. Zone-based Risk Assessment Impeller

  9. Use of DARWIN by Industry • FAA Advisory Circular 33.14 Requests Risk Assessment Be Performed for All New Titanium Rotor Designs • Designs Must Pass Design Target Risk for Rotors [Chart: risk of components A, B, C against a maximum allowable risk of 10⁻⁹; risk reduction required where a component exceeds it]

  10. Spatial Zone-based Domain Decomposition [Diagram: the GUI on the user's machine splits N zones into input files Job-S1.dat, Job-S2.dat, …, Job-SK.dat, which run on Windows 2000 PCs and Unix workstations; results are returned and combined]

  11. Spatial Zone-based Domain Decomposition • Divide the zones into any number of input files • Number of zones in a file is user defined (limit: one zone per file) • Graphical interface creates input files: jobname-S1.dat, jobname-S2.dat, ... • Creates jobname-Master.dat, which contains all zones and a list of all “worker” input files, e.g., jobname-S1 • User runs the jobname-S*.dat input files in parallel • Each run creates a jobname-S*.ddb file • jobname-Master.dat is run, which combines the jobname-S* results
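The splitting step above can be sketched as follows. The file-naming convention comes from the slides; the zone records themselves are placeholders, not real DARWIN input syntax:

```python
def split_zones(jobname, zones, zones_per_file):
    """Write worker files of at most `zones_per_file` zones each, plus a
    master file listing every worker so results can be recombined."""
    workers = []
    for k in range(0, len(zones), zones_per_file):
        name = f"{jobname}-S{k // zones_per_file + 1}"
        workers.append(name)
        with open(name + ".dat", "w") as f:
            f.write("\n".join(zones[k:k + zones_per_file]))
    # Master file: list of worker names, used later to combine results
    with open(f"{jobname}-Master.dat", "w") as f:
        f.write("\n".join(workers))
    return workers

# 80 zones split 10 per file -> workers ac80-S1 through ac80-S8
workers = split_zones("ac80", [f"zone{i}" for i in range(80)], 10)
print(workers)
```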

  12. User Definition of Input Files • Select “Create Parallel File Set” • User specifies number of files • Result – master and worker files are stored for future execution

  13. Spatial Zone-based Domain Decomposition • Zone analyses can be run independently but .. • Some random variables are dependent across zones • Stress scatter factor, time of inspection -- fully dependent • Approach: • Dependent variables: enforce the same starting seed • Independent variables: enforce random starting seed
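The seeding rule above can be illustrated with per-zone random streams. The variable names and seed values are assumptions for illustration; the point is only that dependent variables share one starting seed across zones while independent variables do not:

```python
import random

SHARED_SEED = 12345  # assumed value; enforced identically in every zone

def zone_streams(zone_id):
    # Dependent across zones (e.g. stress scatter factor, time of
    # inspection): same seed, so every zone sees the same realization.
    dependent = random.Random(SHARED_SEED)
    # Independent across zones (e.g. defect size): distinct seed per zone.
    independent = random.Random(SHARED_SEED + zone_id + 1)
    return dependent, independent

dep1, ind1 = zone_streams(1)
dep2, ind2 = zone_streams(2)
print(dep1.random() == dep2.random())   # True: fully dependent across zones
print(ind1.random() == ind2.random())   # differs across zones
```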

  14. Job Scheduler • Condor (http://www.cs.wisc.edu/condor/) job scheduler implemented at UTSA • Free public domain • Cross platform: • Windows, MacOSX, Unix (HP, Linux, OSX, SGI, SUN) • Makes use of unused compute time (“cycle stealing”) • Can activate/deactivate depending upon computer usage • Efficiently works with heterogeneous set of computers • Processes a queue of jobs using available machines • Allows individual jobs to specify minimum system requirements • Handles inter-job dependencies, i.e., job sequences • Transitions interrupted jobs to available machines • Allows users to set job priorities
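A worker analysis would be handed to Condor through a submit description file. The sketch below uses standard Condor submit keywords, but the file names, requirement expression, and priority are illustrative assumptions, not the actual UTSA configuration:

```
# Illustrative Condor submit description file (values are assumptions)
universe             = vanilla
executable           = darwin.exe
arguments            = ac80-S1.dat
transfer_input_files = ac80-S1.dat, stress_results.dat
requirements         = (OpSys == "WINNT50")   # target Windows 2000 machines
priority             = 5
queue
```

The `requirements` expression is how individual jobs specify minimum system requirements, and `priority` is how users order their own queue, matching the scheduler features listed above.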

  15. UTSA Parallel Processing Network • 39 Windows 2000 PCs, 4-CPU SGI Origin - College of Engineering resources • More machines can be added as they come online • Non-dedicated resources • Computers primarily used for teaching and research • Resources fluctuate during analysis • Dynamic load balancing essential • Condor runs as “non-intrusive”: any keyboard or mouse activity suspends Condor; it resumes after 5 minutes of inactivity

  16. UTSA Parallel Processing Network • 39 Windows PCs, 4-CPU SGI Origin, 4 locations

  17. UTSA Parallel Processing Network [Chart: average flops per machine]

  18. UTSA Network Availability • 24-hour duration, 1-minute intervals, averaged over one week

  19. UTSA Network Availability • Over a one-week period

  20. UTSA Network Availability

  21. Application Examples • 80 zone (best case) • Install executable beforehand • Dedicated local network • Homogeneous network • Otherwise a realistic problem • 6250 zone (worst case) • Pass executable each time • Shared distributed network • Heterogeneous network • Hypothetical problem to test limits of the system

  22. Application Example

  23. Application Examples [Figure: disk cross-section (dimensions R1, R2, L, t, w) discretized into zones; rotation speed 6800 rpm]

  24. Level 1 - 80 Zone Example • 80 zone AC problem • Divide into 80 input files and master • ac80-S1.dat, ac80-S2.dat, …, ac80-S80.dat, ac80-master.dat • Run the files in parallel • Run ac80-Master to combine results • One half of the cross-section is modeled due to symmetry

  25. 80 Zone Example • Cluster of 5 PCs (900 MHz, 256 MB RAM) • Near linear speed up • Further increases expected with more computers

  26. 6250 Zones • 29 PCs • Hypothetical problem to test the system • Pass the executable each time • Convenient method to update the executable but causes a lot of communication time • Files sent to worker: darwin.exe (5.5 Mbytes), finite element stress results (2 Mbytes), input file (10-200 Kbytes) • Files returned: jobname.ddb (results database, up to 10 Mbytes)

  27. 6250 Zones • Determine the optimum file size for parallel processing • User defines how many zones to include in input files • Too many (few large files) - poor load balancing • Too few (many small files) - increased communication overhead
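The tradeoff above can be captured with a back-of-the-envelope timing model. All constants here are illustrative assumptions (the per-zone compute time is chosen to be roughly consistent with the 384-minute batch run reported for 6250 zones), not measured values:

```python
def wall_time(total_zones, zones_per_file, n_machines,
              t_zone=0.06, t_comm=1.0):
    """t_zone: assumed minutes of compute per zone; t_comm: assumed minutes
    to ship the executable, stress results, and input file for one job."""
    n_files = -(-total_zones // zones_per_file)   # ceiling division
    per_file = t_comm + zones_per_file * t_zone
    rounds = -(-n_files // n_machines)            # scheduling rounds
    return rounds * per_file

# Few large files vs. a mid-sized split vs. many small files, on 29 PCs
for zpf in (500, 52, 10):
    print(zpf, round(wall_time(6250, zpf, 29), 1))
```

Under these assumed constants the model bottoms out near the 52-zones-per-file split reported on the next slide: very large files waste idle machines in the final round, very small files pay the per-job communication cost hundreds of times.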

  28. 6250 Zones Results • Optimum result: approximately 120 files, 52 zones in each • Batch: 384 minutes; parallel: 27 minutes

  29. Speed Up / Efficiency Results • Recommended number of files: about 3 to 4 times the number of computers available
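For reference, the standard definitions behind these results are speedup S = T_serial / T_parallel and efficiency E = S / N for N machines. Applying them to the 6250-zone timings (384 minutes batch, 27 minutes parallel) gives a raw-count figure; the 77% efficiency quoted in the conclusions presumably normalizes by the effective capacity of the heterogeneous, shared pool rather than the raw machine count:

```python
def speedup(t_serial, t_parallel):
    # S = T_serial / T_parallel
    return t_serial / t_parallel

def efficiency(s, n_machines):
    # E = S / N
    return s / n_machines

s = speedup(384, 27)
print(round(s, 1))                  # 14.2
print(round(efficiency(s, 29), 2))  # 0.49 against the raw count of 29 PCs
```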

  30. Future Application • Engine health management - (fusion of damage-based risk assessment with statistical reasoning tools) • Determine optimum inspection times • Examine effects of usage

  31. Future Work • Apply multi-threading technology for shared memory multi-processor computers • Use OpenMP - cross platform Fortran & C standard (www.openmp.org) • Transparent to the user: no input file changes or runtime changes • Automatically takes advantage of multiple CPUs if present; no slowdown if only one CPU • Will work for single zone or multiple zone analyses

  32. Summary and Conclusions • Zone-based spatial domain decomposition methodology developed for probabilistic analysis of gas turbine disks with inherent material anomalies • Regions of the disk cross-section are solved in parallel then recombined • User defines the number of zones within an input file for local optimization • Condor job scheduler used to distribute & manage jobs • Near linear speed up for the optimum situation, i.e., executable previously installed, dedicated system • Speedup of 16, efficiency of 77% realized for a large number of zones on a heterogeneous, shared processing network

  33. Summary and Conclusions • New methodology significantly reduces execution time for multi-zone problems (several times reduction) • Future applications to engine health management straightforward
