
Analysis of GWAP-based Geospatial Tagging Systems






Presentation Transcript


  1. Analysis of GWAP-based Geospatial Tagging Systems • Ling-Jyh Chen, Yu-Song Syu, Bo-Chun Wang (Academia Sinica, Taiwan) • Wang-Chien Lee (The Pennsylvania State University)

  2. Geospatial Tagging Systems (GeoTagging) • An emerging location-based application • Helps users find various location-specific information, e.g., “Find a good restaurant nearby” • Conventional GeoTagging services have 3 major drawbacks • Two-phase operation model: take a photo → go back home → upload • Clustering at hot spots: results tend toward popular places • Lack of specialized tasks, e.g., finding restaurants that allow pets

  3. GWAP-based geotagging services (Games With A Purpose) • Collect information through games • Example: an asker posts “Where is the Capital Hall?”, and the system asks a solver to “Take a picture of the White House” (legend: pending unsolved tasks; Locations of Interest, LOIs) • Avoid the 3 major drawbacks • Tasks are uploaded right after taking photos • Tasks are assigned by the system • Tasks can be specialized

  4. Problems • Which task to assign? • Will the solver accept the assigned task? • How to measure the system performance?

  5. Acceptance rate of a solver • When a solver u appears, the system decides whether to assign the task at LOI v • u is more likely to accept the task when Population(v) increases and Distance(u, v) decreases • The acceptance rate is modeled with a sigmoid function (with parameter τ); P_v[k] denotes the probability that k users appear in v (a minimal sketch follows)
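The transcript gives only the qualitative shape of the acceptance model, so the following is a minimal Python sketch assuming the acceptance rate is a sigmoid over a linear score that grows with Population(v) and shrinks with Distance(u, v); the weights w_pop, w_dist and the threshold tau are illustrative stand-ins, not the paper's calibrated parameters.

```python
import math

def acceptance_rate(population_v, distance_uv, tau=1.0, w_pop=1.0, w_dist=1.0):
    """Estimated probability that solver u accepts a task at LOI v.

    Monotonically increasing in Population(v), decreasing in Distance(u, v),
    and squashed into (0, 1) by a sigmoid.  The linear score and the default
    weights are assumptions for illustration only.
    """
    score = w_pop * population_v - w_dist * distance_uv - tau
    return 1.0 / (1.0 + math.exp(-score))
```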

  6. Evaluation Metrics (1/3) • Throughput Utility: to solve as many tasks as possible • To increase #tags, assign easily accepted tasks → results cluster at hot spots • System throughput counts all solved tasks from the beginning at all locations • Maximizing throughput alone sacrifices fairness and causes a starvation problem at unpopular LOIs
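Slide 6 describes system throughput as all solved tasks from the beginning at all locations; a trivial sketch of that count follows (any normalization used in the paper is omitted, since the transcript does not show it):

```python
def throughput_utility(solved_counts):
    """System throughput: total number of tasks solved across all LOIs
    since the beginning of the run.  solved_counts[i] is the number of
    solved tasks at LOI i; no normalization is applied here."""
    return sum(solved_counts)
```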

  7. Evaluation Metrics (2/3) • Fairness Utility: to balance the number of solved tasks across LOIs • Balancing means assigning tasks at unproductive LOIs, where tasks are more easily rejected • Fairness is measured with the coefficient of variation (c.v.) of the normalized #solved tasks at all locations • Maximizing fairness alone enforces equality of outcome at the cost of throughput
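Slide 7 measures fairness with the coefficient of variation (c.v.) of the normalized number of solved tasks at all locations. The sketch below assumes each count is normalized by the LOI's arrival rate and converts the c.v. into a utility via 1 / (1 + c.v.), so that lower dispersion means higher fairness; both choices are assumptions, since the transcript does not show the exact formula.

```python
import statistics

def fairness_utility(solved_counts, arrival_rates):
    """Fairness from the c.v. of normalized solved-task counts.

    Normalizing by the arrival rate and mapping c.v. -> 1 / (1 + c.v.)
    are illustrative assumptions; a lower c.v. yields a higher utility.
    """
    normalized = [c / r for c, r in zip(solved_counts, arrival_rates)]
    mean = statistics.mean(normalized)
    if mean == 0:
        return 0.0
    cv = statistics.pstdev(normalized) / mean
    return 1.0 / (1.0 + cv)
```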

  8. Evaluation Metrics (3/3) • System Utility: to accommodate both U_throughput and U_fairness
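The transcript does not show how U_throughput and U_fairness are combined into U_system; a weighted geometric mean is one plausible combination and is used here purely for illustration:

```python
def system_utility(u_throughput, u_fairness, weight=0.5):
    """Combine U_throughput and U_fairness into one score.

    The weighted geometric mean below is an assumed combination;
    the paper's actual definition of U_system may differ.
    """
    return (u_throughput ** weight) * (u_fairness ** (1.0 - weight))
```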

  9. Task Assignment Strategies • Simple Assignment (SA): only assign the task at the same LOI as the solver (a local task) • Random Assignment (RA): provides a baseline of system performance • Least Throughput First Assignment (LTFA): prefer the task from the LOI with the least throughput → maximizes U_fairness • Acceptance Rate First Assignment (ARFA): prefer the task with the highest acceptance rate → maximizes U_throughput • Hybrid Assignment (HA): assign the task contributing the highest system utility (U_system) • (the sketch below illustrates the five selection rules)
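A minimal sketch of the five selection rules as choices over the set of pending tasks; the helper inputs (acceptance, throughput, usys_gain) are hypothetical per-LOI lookups introduced for illustration, not the paper's interfaces:

```python
import random

def choose_task(strategy, solver_loi, pending, acceptance, throughput, usys_gain):
    """Pick the LOI whose pending task should be assigned to the solver.

    pending     -- set of LOIs that still have unsolved tasks
    acceptance  -- acceptance[v]: estimated acceptance rate at LOI v (assumed lookup)
    throughput  -- throughput[v]: tasks already solved at LOI v (assumed lookup)
    usys_gain   -- usys_gain[v]: expected gain in U_system (assumed lookup)
    """
    if not pending:
        return None
    if strategy == "SA":    # Simple Assignment: only the solver's own LOI
        return solver_loi if solver_loi in pending else None
    if strategy == "RA":    # Random Assignment: baseline
        return random.choice(sorted(pending))
    if strategy == "LTFA":  # Least Throughput First: favor starved LOIs
        return min(pending, key=lambda v: throughput[v])
    if strategy == "ARFA":  # Acceptance Rate First: favor easily accepted tasks
        return max(pending, key=lambda v: acceptance[v])
    if strategy == "HA":    # Hybrid: maximize expected system-utility gain
        return max(pending, key=lambda v: usys_gain[v])
    raise ValueError(f"unknown strategy: {strategy}")
```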

  10. Simulation – Configurations • An equal-sized grid map of size 20 x 20 • #askers : #solvers = 2 : 1 • Each setting is repeated 100 times and the results are averaged

  11. Simulation – Assumptions • Players arrive at LOI_i at a Poisson rate λ_i • λ is unknown in real systems, so it is approximated from the current and past population at LOI_i using an exponential moving average (EMA) • Here α = 0.95 (α: smoothing factor; N_i(t): current population in LOI_i at time t); a sketch follows
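A one-line EMA update matching the notation on slide 11; whether α weights the previous estimate or the new observation is not spelled out in the transcript, so the version below (α on the history, which keeps the estimate smooth for α = 0.95) is an assumption:

```python
def update_rate_estimate(prev_estimate, current_population, alpha=0.95):
    """EMA estimate of the arrival rate lambda_i at one LOI.

    current_population is N_i(t), the population observed at LOI i at
    time t; alpha = 0.95 is the smoothing factor from slide 11.  Placing
    alpha on the previous estimate is an assumed convention.
    """
    return alpha * prev_estimate + (1.0 - alpha) * current_population
```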

  12. Network Scenarios • EXP: λ_i (i = 1…N) follows an exponential distribution with parameter 0.2, so E(λ) = 5 • SLAW (Self-similar Least Action Walk, INFOCOM ’09): the SLAW waypoint generator, commonly used in human-mobility simulations, generates fractional-Brownian-motion waypoints, which here determine the population of LOIs • TPE: a real map of Taipei City, where λ_i is determined by the number of bus stops at LOI_i
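For the EXP scenario the per-LOI rates are straightforward to reproduce, since λ_i is drawn from an exponential distribution with parameter 0.2 (mean 5); the SLAW and TPE rates depend on the SLAW waypoint generator and on Taipei bus-stop data and are not sketched here:

```python
import random

def exp_scenario_rates(num_lois, rate_param=0.2, seed=None):
    """Draw arrival rates for the EXP scenario: lambda_i ~ Exp(0.2),
    so E(lambda) = 1 / 0.2 = 5.  num_lois would be 400 for the
    20 x 20 grid used in the simulations."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_param) for _ in range(num_lois)]
```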

  13. Throughput Performance: U_throughput [result plots for the EXP, SLAW, and TPE scenarios; annotation: equality of outcome]

  14. Fairness Performance: U_fairness [result plots for the EXP, SLAW, and TPE scenarios; annotation: starvation problem]

  15. Overall Performance: U_system [result plots for the EXP, SLAW, and TPE scenarios; annotation: average spent time]

  16. Assigning multiple tasks: U_system(100) • When a solver appears, the system assigns more than one task to the solver • The solver can choose one of them or none • K: the number of tasks that the system assigns to the solver in a round • [result plots of U_system(100) for the EXP, SLAW, and TPE scenarios]
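A small sketch of the multi-task variant on slide 16: the system offers the top-K candidates under whatever single-task ranking is in use, and the solver accepts one or none. rank_task is a hypothetical scoring hook, not the paper's interface:

```python
def assign_k_tasks(rank_task, pending, k):
    """Offer the K best-ranked pending tasks to the solver.

    rank_task(v) scores LOI v under the active single-task strategy
    (an assumed hook); the solver later accepts one of the returned
    LOIs or rejects them all.
    """
    ranked = sorted(pending, key=rank_task, reverse=True)
    return ranked[:k]
```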

  17. Work in progress • Include “time” and “quality” factors in our model • Different values of “#askers/#solvers” • Consider more complex tasks • E.g., what is the fastest way to get to the airport from downtown in rush hour?

  18. Conclusion • Study GWAP-based geotagging games analytically • Propose 3 metrics to evaluate system performance • Propose 5 task assignment strategies • HA achieves the best system performance but is computation-hungry • LTFA is the most suitable strategy in practice: performance comparable to the HA scheme with acceptable computational complexity • When multiple tasks are assigned, system performance increases as K increases, but players may be annoyed by too many tasks assigned in a round • For higher system utility, it is better to assign multiple tasks one by one rather than all at once

  19. Thank You
