
Presentation Transcript


  1. The Final Experiment for Project: “Measuring User Preferences on Design Variations Through VR” Is Completed!

  2. Experiment Setup (Hardware)
     MuseV3 (CAVE VR):
     • special construction frame
     • 2 projectors (front and top)
     • front screen
     • tablet
     • mouse, keyboard
     • navigation joystick
     • set of speakers
     • PC station
     Traditional (Desktop):
     • standard PC
     • navigation joystick
     • set of speakers

  3. VR Experiment Setup: MuseV3

  4. CA Experiment Setup: Traditional

  5. Users & Experiment
     Steps through the experiment:
     • Pick up a “ticket” with the description and order of tasks
     • First task: introduction movie, task explanation, tutorial, experiment
     • Change of stations (for VR or CA)
     • Second task: introduction movie, task explanation, tutorial, experiment
     • Evaluation questionnaire

  6. User Tasks
     There are four experiment types in total (FMVR, OEVR, MECA, VECA), two in each of the two groups (VR and CA). Each respondent had to complete two randomly assigned tasks (one from each group); however, each combination of tasks should be presented an approximately equal number of times, as sketched in the assignment routine below.
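A minimal sketch of how such a balanced assignment could be drawn. The group and task names follow the slide, but the `assign_tasks` helper and the least-used-combination rule are illustrative assumptions, not the project's actual procedure (which may have used pre-printed tickets).

```python
import random
from itertools import product

# The four experiment types, two per group (from the slide).
VR_TASKS = ["FMVR", "OEVR"]
CA_TASKS = ["MECA", "VECA"]

def assign_tasks(n_respondents, seed=0):
    """Assign each respondent one VR and one CA task so that every
    VR x CA combination is used an approximately equal number of times.
    (Illustrative sketch only.)"""
    rng = random.Random(seed)
    combos = list(product(VR_TASKS, CA_TASKS))      # 4 possible combinations
    counts = {c: 0 for c in combos}
    assignments = []
    for _ in range(n_respondents):
        # Pick among the least-used combinations to keep the design balanced.
        least = min(counts.values())
        candidates = [c for c in combos if counts[c] == least]
        combo = rng.choice(candidates)
        counts[combo] += 1
        # Randomize which group is done first (the order effect is analyzed later).
        first, second = rng.sample(combo, 2)
        assignments.append((first, second))
    return assignments

print(assign_tasks(8))
```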

  7. Experiment Types

  8. SC Experiment, attributes

  9. SC Experiment, attributes
     Design Description (Multimedia and Verbal): the FFD (fractional factorial design) consists of 32 profiles (to estimate main effects). The design includes 9 holdouts, so in total 41 profiles were presented to each respondent. Consequently, each respondent viewed 20 choice sets, and each choice set contained 3 profiles (2 drawn at random + the BLD); a sketch of this construction follows below.
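A minimal sketch of how choice sets of this shape could be assembled. The names `profiles`, `bld_profile`, and `build_choice_sets` are placeholders, and the sampling rule (two distinct random profiles plus the constant BLD alternative) is an assumption based on the slide, not the documented procedure.

```python
import random

def build_choice_sets(profiles, bld_profile, n_sets=20, seed=0):
    """Build n_sets choice sets of 3 alternatives each:
    two profiles drawn at random plus the constant BLD profile."""
    rng = random.Random(seed)
    choice_sets = []
    for _ in range(n_sets):
        pair = rng.sample(profiles, 2)              # 2 distinct random profiles
        choice_sets.append(pair + [bld_profile])    # + the fixed BLD alternative
    return choice_sets

# Example with dummy profile labels: 32 FFD profiles and 9 holdouts = 41 total.
ffd_profiles = [f"P{i:02d}" for i in range(1, 33)]
holdouts = [f"H{i}" for i in range(1, 10)]
sets_for_one_respondent = build_choice_sets(ffd_profiles + holdouts, "BLD")
print(sets_for_one_respondent[0])   # prints one 3-alternative choice set
```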

  10. VR Experiment, attributes
     Regardless of the form of presentation, we were looking for the following changes in the design:

  11. Short explanation of BN
     • What is it? A belief network (BN), also known as a Bayesian network or probabilistic causal network, captures believed relations (which may be uncertain, stochastic, or imprecise) between a set of variables that are relevant to some problem (e.g. coefficients and choices).
     • How does it work? After the belief network is constructed, it may be applied to a particular case. For each variable whose value you know, you enter that value into its node as a finding (also known as “evidence”). Netica then performs probabilistic inference to find beliefs for all the other variables.
     • Incremental learning. After the beliefs are found (a posteriori), MuseV updates the network, so they become the a priori beliefs for the next respondent. A minimal illustration of entering evidence and updating follows below.
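A minimal, hand-rolled sketch of the two ideas named above: entering a finding and computing posterior beliefs with Bayes' rule, then folding the posterior back into the prior so it becomes the a priori belief for the next respondent. This is not the Netica API; the two-node network and all of its numbers are invented for illustration.

```python
# Tiny two-node belief network: Coefficient (high/low) -> Choice (A/B).
# Prior and CPT values are invented for illustration only.
prior_coef = {"high": 0.5, "low": 0.5}
# P(choice | coefficient)
cpt_choice = {"high": {"A": 0.8, "B": 0.2},
              "low":  {"A": 0.3, "B": 0.7}}

def posterior_coef(observed_choice):
    """Enter a finding on Choice and infer beliefs for Coefficient (Bayes' rule)."""
    joint = {c: prior_coef[c] * cpt_choice[c][observed_choice] for c in prior_coef}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

def update_prior(beliefs, weight=1.0):
    """Incremental-learning sketch: mix the posterior back into the prior so it
    becomes the a priori belief for the next respondent."""
    for c in prior_coef:
        prior_coef[c] = (prior_coef[c] + weight * beliefs[c]) / (1.0 + weight)

# One respondent chooses "A": infer, then fold the result back into the prior.
beliefs = posterior_coef("A")
print("posterior:", beliefs)      # coefficient 'high' becomes more likely
update_prior(beliefs)
print("new prior:", prior_coef)
```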

  12. Figure: network beliefs at Step 0, Step 1, Step 5, Step 15, and Step 64.

  13. VR Experiment - Network Description: Personal Information → Coefficients → Probability for choosing a design element → User Choices

  14. ANALYSES

  15. ANALYSES
     The truth about the respondents:
     • We sent 1,600 letters in total.
     • The preparations took 2.5 days for two people (Vincent and Maciek).
     • Within 2 weeks we received 96 positive confirmations.
     • At the end of the experiment we ended up with a solid number of 64 respondents who completed both of their assigned tasks.
     • 5 of the 64 respondents would not buy the house that they had designed.
     • 4 respondents did not complete the second task (as the design was not relevant to them).
     • 2 respondents did not start the experiment for the same reason.

  16. ANALYSES The truth about the respondents: First VR Experiment: First CA Experiment:

  17. Final Questionnaire: The most preferred system

  18. Final Questionnaire: The difficulty and pleasure of the tasks

  19. Final Questionnaire: House situation

  20. Final Questionnaire: Hours

  21. ANALYSES - BN

  22. ANALYSES - BN We have analyzed the data collected via BN as follows:

  23. ANALYSES – BN
     Goodness-of-fit measures for choice prediction:
     (1) GOF - overall: comparison between observed and predicted across the 6 x 32 cases
     (2) GOF - by choice i: as above, but for attribute i only (across 32 cases)
     logLL         = SUM_{k=1..32*6} ln( Ppred_k1 * Pobsv_k1 + Ppred_k2 * Pobsv_k2 )
     logLL(0)      = SUM_{k=1..32*6} ln( 0.5 * Pobsv_k1 + 0.5 * Pobsv_k2 )
     LL(InitState) = SUM_{k=1..32*6} ln( PInitState_k1 * Pobsv_k1 + PInitState_k2 * Pobsv_k2 )
     r1 = ( LL - LL(0) ) / LL(0)
     r2 = ( LL - LL(InitState) ) / LL(InitState)
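A minimal sketch of these goodness-of-fit measures under the definitions reconstructed above. The arrays `p_pred`, `p_obsv`, and `p_init` (one row per case, two columns for the two alternatives) are placeholder inputs, not the project's data.

```python
import numpy as np

def gof_choice_prediction(p_pred, p_obsv, p_init):
    """Compute logLL, logLL(0), LL(InitState) and the ratios r1, r2 as defined
    on the slide. Each input is an (n_cases, 2) array of probabilities."""
    p_pred, p_obsv, p_init = map(np.asarray, (p_pred, p_obsv, p_init))
    ll      = np.sum(np.log(np.sum(p_pred * p_obsv, axis=1)))
    ll_0    = np.sum(np.log(0.5 * p_obsv[:, 0] + 0.5 * p_obsv[:, 1]))
    ll_init = np.sum(np.log(np.sum(p_init * p_obsv, axis=1)))
    return {"logLL": ll,
            "logLL(0)": ll_0,
            "LL(InitState)": ll_init,
            "r1": (ll - ll_0) / ll_0,
            "r2": (ll - ll_init) / ll_init}

# Dummy example with 4 cases instead of the slide's 6 x 32.
p_obsv = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
p_pred = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2], [0.2, 0.8]])
p_init = np.full((4, 2), 0.5)
print(gof_choice_prediction(p_pred, p_obsv, p_init))
```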

  24. ANALYSES - BN: Example of GOF for the FMVR BN with personal information (table of R1 and R2 values)

  25. ANALYSES – BN
     Goodness-of-fit between networks (FM and OE) for estimation of betas:
     • OVERALL G-O-F
     • G-O-F for each BETA
     • G-O-F for each PROFILE
     • G-O-F for each profile and beta
     The GOF is based on the expected value of the attribute:
     GOF(N1:N2) = | E_j(N1) - E_j(N2) |
     where: E_j(N1) = SUM_{i=1..k} ( P_ij * B_ij )
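A minimal sketch of this between-network comparison. The arrays `p1`, `p2` (beliefs over the beta levels of one attribute in each network) and `b` (the level values) are placeholder inputs, and the indexing follows the formula reconstructed above.

```python
import numpy as np

def expected_beta(p, b):
    """E_j = SUM_i (P_ij * B_ij): expected value of attribute j,
    given beliefs p[i] over its k possible beta values b[i]."""
    return float(np.dot(p, b))

def gof_between_networks(p1, p2, b):
    """GOF(N1:N2) = | E_j(N1) - E_j(N2) | for one attribute j."""
    return abs(expected_beta(p1, b) - expected_beta(p2, b))

# Dummy attribute with 3 possible beta values and two networks' beliefs.
b  = np.array([-0.5, 0.0, 0.8])
p1 = np.array([0.2, 0.5, 0.3])   # beliefs from network 1 (e.g. FMVR)
p2 = np.array([0.1, 0.4, 0.5])   # beliefs from network 2 (e.g. OEVR)
print(gof_between_networks(p1, p2, b))   # absolute difference of expected betas
```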

  26. ANALYSES - BN The following graphs show the differences in parameter estimates between Network 1 (FMVR) and Network 2 (OEVR): G-O-F = 2.657

  27. ANALYSES - BN G-O-F = 1.72 G-O-F = 1.677

  28. ANALYSES - BN G-O-F = 0.089 G-O-F = 0.292

  29. ANALYSES - BN G-O-F = 4.378 G-O-F = 0.111

  30. ANALYSES – BN
     Convergence of BETAS: LL_j = SUM_{k=1..n} ( P_jk )^2
     BN with NO personal information (COMBINED)
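A minimal sketch of this convergence measure. The belief vectors below are placeholders standing in for P_jk, the beliefs for one beta j over its n states, tracked after successive learning steps.

```python
import numpy as np

def beta_convergence(p_j):
    """LL_j = SUM_k (P_jk)^2: sum of squared beliefs over the n states of
    beta j. It approaches 1 as the belief concentrates on a single state."""
    p_j = np.asarray(p_j, dtype=float)
    return float(np.sum(p_j ** 2))

# Dummy beliefs for one beta after successive learning steps.
steps = [
    [0.25, 0.25, 0.25, 0.25],   # step 0: uniform, LL = 0.25
    [0.10, 0.20, 0.60, 0.10],   # intermediate
    [0.02, 0.05, 0.90, 0.03],   # nearly converged, LL about 0.81
]
for i, p in enumerate(steps):
    print(f"step {i}: LL = {beta_convergence(p):.3f}")
```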

  31. ANALYSES - CA

  32. ANALYSES – CA
     CHECKING FOR ORDER EFFECT
     CHECKING ON THE Rho2

  33. ANALYSES – CA
     CHECKING FOR ORDER EFFECT (in all cases the CA was done first)
     SCALE EFFECT

  34. ANALYSES – CA
     CHECKING FOR ORDER EFFECT: CONCLUSIONS
     Based on the tests we can conclude that the order effect has no significant influence on the estimated models for the following combinations of experiments:
     • Multimedia (CA) with Preset Options (VR)
     • Verbal (CA) with Free Modification (VR)
     There is an order effect for the following combinations of experiments:
     • Multimedia (CA) with Free Modification (VR)
     • Verbal (CA) with Preset Options (VR)
     In these cases, doing the VR experiment as the first task improves the model estimation.

  35. ANALYSES – CA
     MODEL ESTIMATION
     CHECKING ON THE Rho2

  36. ANALYSES – CA
     MODEL ESTIMATION
     PARAMETER VALUES AND THEIR SIGNIFICANCE

  37. ANALYSES – CA
     COMPARING PREFERENCES OF DIFFERENT GROUPS (SCALE EFFECT)

  38. ANALYSES – CA
     PREDICTING HOLDOUTS (counting correct and wrong predictions based on the highest utility)
     The table illustrates the number of correct predictions against wrong ones.

  39. ANALYSES – CA & BN
     G-O-F of HOLDOUTS PREDICTION (Rho2 calculation based on log likelihood)
     Based on the estimated parameters for each listed model, we did the following (see the sketch below):
     • calculate the utility for each profile as Vj = SUM(Bi * Xi)
     • calculate market shares for each profile k in each choice set as Pk = exp(Vk) / SUM(exp(Vj)), j = 1 to 3
     • the choice in a choice set is made based on the highest utility or highest market share
     • calculate logLL(B) = SUM(ln(Pj)), where Pj is the probability of the alternative with the maximum utility in choice set j
     Based on the null model:
     • calculate logLL(null model) in the same way, but with utility Vj = 0 for each profile
     • calculate Rho2 = 1 - logLL(B) / logLL(0)
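A minimal sketch of the holdout Rho2 calculation listed above. The inputs `betas`, `choice_sets` (attribute matrices X for the 3 alternatives in each set), and `chosen` are placeholders; the sketch takes the log likelihood over the observed (chosen) alternative in each set, which is one reading of the slide, whose exact convention (observed vs. max-utility alternative) is ambiguous.

```python
import numpy as np

def rho_squared(betas, choice_sets, chosen):
    """Rho2 = 1 - logLL(B) / logLL(0) for choice sets of 3 alternatives.
    choice_sets: list of (3, n_attributes) arrays X; chosen: observed index per set."""
    betas = np.asarray(betas, dtype=float)
    ll_b, ll_0 = 0.0, 0.0
    for X, j in zip(choice_sets, chosen):
        v = np.asarray(X, dtype=float) @ betas    # Vj = SUM(Bi * Xi)
        p = np.exp(v) / np.sum(np.exp(v))         # market shares Pk
        ll_b += np.log(p[j])                      # logLL(B) for this choice set
        ll_0 += np.log(1.0 / len(v))              # null model: Vj = 0 -> P = 1/3
    return 1.0 - ll_b / ll_0

# Dummy example: 2 attributes, 2 holdout choice sets.
betas = [0.9, -0.4]
choice_sets = [np.array([[1, 0], [0, 1], [1, 1]]),
               np.array([[0, 0], [1, 1], [0, 1]])]
chosen = [0, 1]     # index of the alternative picked in each set
print(round(rho_squared(betas, choice_sets, chosen), 3))
```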

  40. Benchmark (RL) against CA and BN
     G-O-F of REAL LIFE PREDICTION (Rho2 calculation based on log likelihood); calculations based on BETAS.
     GOF (CA) = 0.417, GOF (BN) = 0.502
     Creating choice sets:
     • All possible combinations of attributes are combined into one choice set per respondent. In total we have 15 choice sets (the same as the number of respondents or households); each choice set contains 32 profiles. In the case of the BN, each profile is translated into the choices available in the network. The choice of the most preferred profile is based on the real-life data (the actual decisions the respondents made while buying a house).
     Steps in the calculations:
     • utility for each profile: Vj = SUM(Bi * Xi)
     • market shares for each profile k in each choice set: Pk = exp(Vk) / SUM(exp(Vj)), j = 1 to 32
     • log likelihood: logLL(B) = SUM(ln(Pj)), where Pj is the probability of the profile that the respondent actually chose
     • logLL(null model): as above, but with utility Vj = 0 for each profile
     • Rho2 = 1 - logLL(B) / logLL(0)

  41. Benchmark (RL) against CA and BN
     The table illustrates the ratio (percentage) of respondents choosing a certain design option: in the real-life case, based on the number of subjects buying that option; in the BN case, based on the network's beliefs.
