
Training Spatial Skills: A Meta-analysis


Presentation Transcript


  1. Training Spatial Skills: A Meta-analysis. Linda Liu Hand, David H. Uttal & Loren Marulis (Northwestern University); Nora S. Newcombe (Temple University)

  2. Importance of training? Potential to improve skills relevant to STEM (Hedges & Chung, in prep; Shea, Lubinski & Benbow, 2001). High spatial ability: more likely to have a STEM major and a STEM job. Training can also reduce disparities in STEM achievement. How, and how much? Goal: to systematically aggregate past research on spatial training and determine the consensus in the literature.

  3. Overview. What is training? How can we compare training effectiveness across studies? Research questions: (1) How much do (vs. can) spatial skills improve? This might vary by task (e.g., Embedded Figures vs. Water-Level Task). (2) What works? The impact of grouping variables. (3) Are training effects durable? (4) Does training generalize (transfer) to untrained tests?

  4. Examples: Video games. Effect of playing video games (Tetris) on mental rotation and the Paper Folding Test (Wright, Thompson, Ganis, Newcombe & Kosslyn, 2008): MRT (g = 1.09), PFT (g = .87).

  5. Examples: Spatial experience. Effects of various life experiences on spatial skills. Fashion designers: effect of experience with dress pattern-making on spatial skills (Workman, Caldwell & Kallal, 1999); Differential Aptitude Test-Spatial Relations (g = .32).

  6. Examples: Spatial coursework. Improved Purdue Spatial Visualization Test performance (Sorby, 2008). Engineering course using multimedia software and a workbook: isometric pictorials from coded plans; multi-view drawings; paper folding/2-D to 3-D transformations; object rotations about one axis; object rotations about two or more axes; cutting planes and cross sections; surfaces and solids of revolution; combining solids. g = 2.02.

  7. Examples: Repeated practice. Repeated practice on different Group Embedded Figures (Chance & Goldstein, 1971; Schaeffer & Thomas, 1999): pretest GEFT → training → posttest GEFT; g = 1.12.

  8. Methods. Searched for both published and unpublished work: dissertations, conference posters, technical reports; electronic searches, reference lists, and direct contacts. Studies were coded on several grouping variables, including: age, sex, ability level (i.e., prescreened for low performers?); outcome measure and type of training; publication status, random assignment, location of study (classroom?), feedback provided (yes/no), and training frequency.

  9. Effect sizes. A standard measure of efficacy across studies that does not depend on the individual measurement (raw score): it expresses the mean change, as a result of training or experience, in standard deviation units. Final "sample": 101 studies (76 published, 25 unpublished).
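
To make the metric concrete, here is a minimal sketch of how a pretest-to-posttest gain can be expressed as a Hedges-style standardized effect size; the exact estimator and small-sample correction used in this meta-analysis may differ, and all numbers below are hypothetical.

```python
# Illustrative sketch only: one common way to express a pretest-to-posttest
# gain in standard deviation units (a Hedges-style g). The estimator actually
# used in this meta-analysis may differ (e.g., in how the SD is pooled).
import math

def hedges_g(pre_mean, post_mean, pre_sd, post_sd, n):
    """Standardized mean gain with a small-sample correction."""
    pooled_sd = math.sqrt((pre_sd ** 2 + post_sd ** 2) / 2)  # simple pooling of pre/post SDs
    d = (post_mean - pre_mean) / pooled_sd                   # mean change in SD units
    correction = 1 - 3 / (4 * (n - 1) - 1)                   # small-sample (Hedges) correction
    return d * correction

# Hypothetical study: 30 trainees gain 6 raw points on a test whose SD is about 9.
print(round(hedges_g(pre_mean=40, post_mean=46, pre_sd=9, post_sd=9, n=30), 2))  # ~0.65
```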

  10. Analysis plan. How do we make sense of the various training methods and dependent variables? We created 5 conceptual categories of dependent variables and 3 categories of training; we describe each category and then compare the size of training effects in each category.
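
As a rough illustration of the comparison step (not the study's actual code or data), here is a sketch of grouping study-level effect sizes by category and computing an inverse-variance weighted mean per category; all labels and numbers are made up.

```python
# Illustrative sketch: inverse-variance weighted mean effect size per category.
# Category labels and numbers are hypothetical, not data from this meta-analysis.
from collections import defaultdict

studies = [
    # (category, g, sampling variance of g)
    ("mental rotation",    0.90, 0.04),
    ("mental rotation",    0.60, 0.09),
    ("spatial principles", 0.45, 0.05),
    ("spatial perception", 0.70, 0.08),
]

totals = defaultdict(lambda: [0.0, 0.0])  # category -> [sum of w*g, sum of w]
for category, g, var in studies:
    w = 1.0 / var                         # fixed-effect inverse-variance weight
    totals[category][0] += w * g
    totals[category][1] += w

for category, (weighted_sum, weight_sum) in totals.items():
    print(f"{category}: weighted mean g = {weighted_sum / weight_sum:.2f}")
```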

  11. Categories of dependent variables. Spatial perception: perceive objects amidst a distracting background (example: mazes, Embedded Figures Task). Perspective taking: visualize a scene from a different location. Assembly/transformation: put objects together into a larger configuration, or transform them (e.g., 3-D to 2-D). Mental rotation: rotation of 2-D or 3-D pictures or objects. Spatial principles: understand abstract principles (e.g., horizontality).

  12. Categories of dependent variables (continued). Example of perspective taking: the Three Mountains Task (visualize a scene from a different location).

  13. Categories of dependent variables (continued). Example of assembly/transformation: the Form Board Test (put pieces together into a larger configuration).

  14. Categories of dependent variables (continued). Example of mental rotation: MRT, card rotation (judge whether two rotated figures are the same or different).

  15. Categories of dependent variables (continued). Example of spatial principles: the Water-Level Task (draw a line showing where the water would be in a tilted container).

  16. Types of training. Video games: designed for recreation and entertainment (example: Tetris, Zaxxon). Courses: full-length or short-term enhanced courses. Spatial task training, specific: direct rehearsal or practice on the outcome measure of interest. Spatial task training, transfer: transfer of training to reference tests.

  17. Types of training (continued). Example of courses: dress-making, spatial modules, drafting (vs. water purification).

  18. Types of training (continued). Example of specific spatial task training: repeated practice on the GEFT, VMRT, or WLT.

  19. Types of training (continued). Example of spatial task training with transfer: train on the regular WLT and test on the irregular WLT; train on Tetris and test on the PFT.

  20. Results. Overall effectiveness of training; control group effects; age and sex; are some kinds of training better than others?; are some outcome measures more malleable than others?; duration; transfer.

  21. Overall effectiveness. Across the 101 studies, the mean effect size = .65 (i.e., about two-thirds of a standard deviation of improvement), a "moderate" improvement (Cohen, 1988). For IQ (SD = 15), .65 SD would be an increase of 9.75 points.
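
To see what a standardized effect of this size means on a familiar scale, a short illustration of the SD-to-raw-score conversion mentioned above (the IQ scale's SD of 15 is the only assumed value):

```python
# Converting a standardized effect (in SD units) back into raw-score units.
effect_in_sd = 0.65
iq_scale_sd = 15                    # conventional SD of IQ-style scores
print(effect_in_sd * iq_scale_sd)   # 9.75 points
```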

  22. Control group effects

  23. Experimental groups do significantly exceed control groups. Overall, treatment groups improve significantly more from training than control groups do: control groups g = .56, treatment groups g = .75.

  24. But… the type of control group, and how much the control group improves, really matter for understanding: overall effectiveness; what works (which depends on what did or did not happen to the control group); and the different effects of different types of training.

  25. Why control groups matter. Crestor: a new anti-cholesterol drug. A similar drug, Vytorin, was halted because its clinical trial failed to show it was any better than an already-available medication. The difference? The Vytorin study was a head-to-head comparison with another drug, while the Crestor study compared its effectiveness to a placebo, i.e., to nothing.

  26. Why control groups matter. It is important to separate control and treatment groups: spatial principles show the highest experimental effect size but the lowest control group g; spatial perception shows the lowest experimental effect size but the highest control group g.

  27. Why control groups improve so much. Classic sources of test-retest effects: understanding the task (e.g., which key to press when) and test-specific strategies (e.g., eliminating foils quickly, keeping fingers on the correct keys, looking for similar problems). But there is also the possibility of more interesting learning from the tests: fluency in finding correct structures, better allocation of attention and working memory. Multiple tests provide a form of indirect training through alignment and comparison (Gentner & Markman, 1994, 1997).

  28. Our specific claim: some of the learning in the control groups is not just of the "boring" type; some people learn something from taking the tests themselves. This points to the malleability of spatial skill: even without instruction, given just a chance to practice, people improve, often rather dramatically.

  29. What's the evidence to support this claim? We acknowledge that this is post hoc and that experimental research is needed. But: the magnitude of control group improvement is about twice as great as for other tests; control groups show transfer, which is hard to explain on the basis of "boring" effects alone; and variation helps.

  30. Test variety is effective training. The test-retest effect reflects not just the number of repetitions but also the number of separate tests given between pretest and posttest.

  31. Age effects: It's all in the control group. Does malleability vary by age? On average, the effect size is significantly higher for children than for adults (p < .05). Initially, it appears that children are more malleable…

  32. Age effects: Control and experimental. When looking at control groups, adult control groups improve significantly more. Thus, children may appear more malleable only because their control groups improve less than adults' do.

  33. Sex differences. Does malleability vary by sex? No difference in mean effect size; both sexes respond to training (same g). Overall, results from prior work are most consistent with the last scenario: the male advantage is similar in magnitude at pretest and posttest.

  34. Training works – What works? Focus on treatment effect sizes.

  35. Training works – What works? Focus on treatment effect sizes (continued).

  36. Duration: Training lasts. The majority of studies (85%) tested only the immediate impact of training. Among treatment groups, there was no significant decline in effect size whether measured immediately, two weeks after, or more than two weeks after the end of training (which includes up to three months later).

  37. Training transfers. Why does this matter? It suggests training is NOT just a practice effect: if spatial training has effects that extend beyond mere practice, training should transfer to untrained tasks. Near vs. far transfer: near g = 1.01, far g = .56, but far transfer is more durable.

  38. Training transfers. Studies expecting to obtain far transfer might use training that produces especially durable effects.

  39. Real impact of training? What is the real value of a .65 SD increase? Marginal improvements in raw scores may lead to important gains in other areas: an increase of .65 SD in height = 1.63 inches (among females 18-24 yrs); an increase of .65 SD on the LSAT = 3 points (the average score is 156/180 among current law school students). Value of a 1-inch increase above average height? $789 per year [2]. Value of a 1-point gain on the LSAT? $2,600 in starting salary [1]. References: [1] Berkowitz, R. (1998). One point on the LSAT: How much is it worth? American Economist, 42(2). [2] Judge, T. A., & Cable, D. M. (2004). The effect of physical height on workplace success and income. Journal of Applied Psychology, 89(3).

  40. [Figure: male and female performance before and after training]

  41. Conclusions. Training leads to improvements in spatial skills that are: durable (no significant losses in pretest-posttest improvement, even when retested 3 months later) and generalizable to other tasks (training leads to improvements on untrained tasks).

  42. Conclusions. How much can spatial skills improve? Use longer periods of training: 47% of studies performed only a single session of training, and 85% conducted only an immediate posttest; when long periods of training are used, durable effects AND far transfer are observed. Test a larger range of outcome measures: 48% of outcome measures are mental rotation, vs. 9% perspective taking, 11% spatial principles, etc. Include a variety of methods of training: this allows for alignment and comparison across problems (Gentner & Markman, 1994, 1997).

  43. Future directions. Develop best-practice guidelines for spatial interventions at the elementary and high school levels. Investigate transfer to STEM in more detail. Understand thresholds for success.

  44. Acknowledgements. Larry Hedges (NU), Spyros Konstantopoulos (NU), David B. Wilson (George Mason University), Chris Warren, and Alison Lewis. Research assistance: Kate O'Doherty, Bridget O'Brien, Eleanor Tushman, Maggie Carlin, Laura Mesa, Bonnie Vu, Melissa Sifuentes.
