
Voting and Mathematics: What We Learned from the 2008 Elections



Presentation Transcript


  1. Voting and Mathematics: What We Learned from the 2008 Elections by: Dr. Jason Gershman Assistant Professor of Mathematics and Statistics Coordinator of Mathematics Nova Southeastern University

  2. Outline • History of Political Polling • What Can Go Wrong? • Scrutiny of the 2008 Polling Process • What was Learned from the 2008 Election • Improving Predictions • Some Interesting Legal and Moral Issues

  3. What is it? • A political opinion poll is a survey (conducted in any of a variety of formats: in person, by mail, by telephone, by email, etc.) of people’s opinions about a certain political topic • Have you ever taken part in a political survey for a political race at the local, state, or national level?

  4. History of Political Polling • Political Polls: • taken for almost 200 years • first political poll in a United States presidential race occurred in 1824 between Andrew Jackson and John Quincy Adams • Are used to make predictions about the outcomes of political races. • Procedures change as pollsters adapt their methods based on past successes and failures.

  5. Early National Polls • The earliest national polls were done by the Literary Digest • A national magazine • They mailed postcards and counted the results of the postcards that were mailed back. • They predicted some elections correctly in the 1910s and 1920s in this manner.

  6. 1936: What Can Go Wrong • In the 1936 presidential race, Democrat Franklin D. Roosevelt ran for his second term in office against Republican Alf Landon. • The Literary Digest magazine ran a poll and predicted that Alf Landon would win in a landslide.

  7. The 1936 Literary Digest Poll • Structure: 10 million ballots were mailed to potential voters using telephone records • 2.3 million ballots were returned (an enormous sample, but only a 23% response rate) • Predicted 57% of the vote for Landon to just 43% for FDR

  8. Literary Digest Poll • At election time, FDR beat Alf Landon in the electoral college, 523 votes to 8. • The Literary Digest was wrong and embarrassed, and soon went out of business. • What went wrong???

  9. Foundations of Modern Polling • Young George Gallup correctly predicted the 1936 race with a sample size of 50,000. • The Gallup Organization is one of many organizations today which do scientific polling.

  10. Types of Political Polling • Cross-Sectional Studies • Snapshot in time; one sample, one time • Tracking Polls • Rolling averages across a time-frame • Exit Polls • Asked as people are exiting the voting booth on election day

  11. The 2000 Florida Debacle • In November 2000, Voter News Service mismanaged polling in the state of Florida. • First, they called the state of Florida in favor of Al Gore. • Later in the evening, they reversed their call in favor of George W. Bush. • Finally, they reversed that call in favor of “too close to call” early the next day.

  12. The Florida 2000 Debacle • What went wrong? • This whole event led to more careful oversight and more conservative estimates when “calling a race” for a candidate on TV.

  13. RealClearPolitics • A website founded in 2000 by John McIntyre • serves as a clearinghouse for political polling results • displays charts and graphs and takes averages among the most recent polls to get an overall picture of the predicted results of an election

  14. RealClearPolitics • Major news outlets now often speak of the RealClearPolitics average • CNN calls this a “Poll of Polls” • Let’s examine the formula for the “RCP Average” • Can improvements be made?

  15. What the site looks like

  16. What is Displayed? • The Polling Organization • Dates of polling • The Sample Size/Type of Person Asked • The Margin of Error • % of vote received by each candidate • The winner/margin of victory

  17. Differences Between Polls • Natural sampling variability • Biases due to: • Polling Organization/Political Leaning • Sample Sizes/Margins of Error • Types of Person Asked • When the poll was done
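
The sample size and the margin of error in the tables above are tied together by a standard formula: at 95% confidence, the margin of error for a proportion is roughly 1.96 · √(p(1−p)/n), which is largest at p = 0.5. A minimal sketch (the sample sizes below are made up for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (600, 1000, 2000):
    print(n, round(margin_of_error(n), 1))
# → 600 4.0
#   1000 3.1
#   2000 2.2
```

This is why doubling a poll's sample size shrinks its margin of error by only a factor of √2, not 2.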

  18. Looking at the RCP Average • They take a raw average of the percentages of the five (or sometimes 10) most recent polls done by different organizations • They ignore the factors on the previous slide which may affect the quality of the average • We’ll come back to this later
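
A raw average of the most recent polls, as described above, can be sketched in a few lines (the poll numbers here are hypothetical, not actual RCP data):

```python
def rcp_average(polls, n=5):
    """polls: list of (candidate_a_pct, candidate_b_pct), newest first.
    Returns the unweighted mean of the n most recent polls."""
    recent = polls[:n]
    avg_a = sum(p[0] for p in recent) / len(recent)
    avg_b = sum(p[1] for p in recent) / len(recent)
    return avg_a, avg_b

# Six hypothetical polls, newest first; only the newest five count.
polls = [(49.0, 44.0), (47.5, 45.0), (50.0, 43.5),
         (48.0, 46.0), (46.5, 44.5), (51.0, 42.0)]
a, b = rcp_average(polls)
print(round(a, 1), round(b, 1))  # → 48.2 44.6
```

Note that every poll inside the window counts equally, and every poll outside it counts for nothing, regardless of sample size or quality.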

  19. 2008 Primary Elections • Primary Election season began in January of 2008 • Multiple candidates for each of the two major political parties • Data is multinomial • Hard to analyze but is informative

  20. The Iowa Caucus • The first primary election is actually not an election but a caucus • A caucus is a meeting, like a town hall meeting, among all citizens eligible to vote in a precinct • Caucuses can take a variety of forms but they share common principles • Are caucuses as fair as primaries for gauging people’s intentions as to their candidate of choice?

  21. Iowa Results

  22. Polling Data New Hampshire Reaction

  23. What Happened? • Let’s examine the long-term and short-term trends and make conjectures as to what happened.

  24. New Hampshire • After his victory over Clinton in Iowa, Obama received a bandwagon bump in New Hampshire polling. • Polls in New Hampshire showed him up by 13% less than 2 days before the election.

  25. New Hampshire • RealClearPolitics: predicted the result would be Obama- 38.3%, Clinton 30.0% • In reality, the final results were: Clinton 39.0% and Obama 36.4% • Instead of an 8.3% win for Obama, it was a 2.6% win for Clinton • What went wrong?

  26. My Theory • The Bandwagon bump was short-lived • Some people who went over to his side after Iowa came back to Clinton’s side before the vote. • The long-term trend showed Clinton ahead.

  27. Long Term Trend

  28. Primary Election Season • Independent study students worked with me to research the accuracy of primary election polling • We looked at binomial cases for two candidates in the Democratic Party once the race was down to Obama vs Clinton

  29. Factors in our Analysis • Type of Election • Open vs Closed Primary • When the election took place • Geography • Red vs Blue State • # of Delegates at Stake • Polling Company

  30. Conclusions from This Analysis • States in the old south, namely Georgia, South Carolina, and Alabama featured the worst polling • Why? Effect of race and turnout at the polls. • Of the major polling companies/tv stations/wire services, Reuters did the best polling (accuracy 89%) and Fox News did the worst polling (accuracy 50%)

  31. Adaptations for the General Election • The polling companies for the most part fixed their over/under estimations by election day in November 2008 • Major polling sites like RealClearPolitics and 538.com were nearly 100% accurate when using data from the major polling services

  32. What Happens Next? • The polling companies don’t take a vacation after the general election • Preparations for the 2010 midterm House of Representatives elections are already under way

  33. Other Political Polls • Tracking polls are done constantly for the Presidential Approval Rating, Congressional Approval Rating, and the Direction of the Country • George W. Bush set a record with a 74% disapproval rating in October 2008 after the economic crisis • Direction of Country: 7% right, 91% wrong

  34. Appendix 1: Fixing the RCP Average • It’s essentially a simple moving average • They take a raw average of the percentages of the five (or sometimes 10) most recent polls done by different organizations

  35. What’s Lacking in the Model? • This model ignores: • Differences in quality and structure between different polling companies/sample sizes • Different levels of polling audience (Likely Voters, Registered Voters, Adults) • All older polls and redundant polls which might help show long term trends

  36. Markov Chains • A Markov Chain view of political polling takes into account long term trends • Some voters have static opinions • Others have “dynamic opinions” • May change over time due to a debate or a result in another state
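
The Markov-chain view of polling described above can be sketched as a two-state chain, one state per candidate, where "static" voters stay put and a small "dynamic" fraction switches each polling period. The transition probabilities and starting poll below are illustrative assumptions, not estimates from the talk:

```python
def step(dist, transition):
    """One polling period: new_dist[j] = sum_i dist[i] * transition[i][j]."""
    n = len(dist)
    return [sum(dist[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Hypothetical transition matrix: 90% of A supporters stay with A each
# period, while 5% of B supporters defect to A.
T = [[0.90, 0.10],
     [0.05, 0.95]]

dist = [0.55, 0.45]   # today's (hypothetical) poll: A 55%, B 45%
for _ in range(50):   # iterate toward the long-run trend
    dist = step(dist, T)
print([round(p, 3) for p in dist])  # → [0.333, 0.667]
```

The long-run (stationary) distribution here is [1/3, 2/3] regardless of the starting poll, which is the sense in which the chain captures a long-term trend that a single snapshot poll misses.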

  37. Weighted Average • Better idea: A weighted average which takes into account all polls • including older polls and “redundant polls” • My students and I are working on this analysis and our preliminary work shows some improvement in the modelling.
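
One way such a weighted average could look, purely as a sketch: weight each poll by the square root of its sample size (so bigger polls count more) and decay the weight exponentially with the poll's age (so older polls still count, just less). The half-life and the polls below are illustrative assumptions, not the model the students and I actually fit:

```python
import math

def weighted_average(polls, half_life_days=7.0):
    """polls: list of (pct, sample_size, age_days).
    Weight = sqrt(sample_size) * 0.5 ** (age / half_life)."""
    num = den = 0.0
    for pct, n, age in polls:
        w = math.sqrt(n) * 0.5 ** (age / half_life_days)
        num += w * pct
        den += w
    return num / den

# Three hypothetical polls: a fresh medium poll, an older small poll,
# and a large but stale poll.
polls = [(48.0, 1000, 1), (46.0, 600, 5), (50.0, 1500, 12)]
print(round(weighted_average(polls), 2))
```

Unlike the raw RCP average, no poll is ever thrown away; it just fades smoothly in influence, which lets long-term trends show through.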

  38. Appendix II: Role of Intrade • Pundits make predictions • Some are biased based on their goals • Fox News • MSNBC • Polls gauge people’s interest • But when you put your money where your mouth is, you had better be accurate in your predictions

  39. Intrade • Intrade is one of the largest prediction markets in the world • You can buy or sell shares to predict outcomes of sporting events, elections, reality television contests, stock market indices, and environmental disasters, among other things • Is it Legal? • Is it Moral?

  40. Legality and Morality • It is illegal to offer wagering on United States elections within the borders of the United States • Las Vegas casinos cannot make lines on outcomes of elections • But, offshore outlets do • The morality issue is one for which there is no one correct answer.

  41. Example • At Intrade, you could have wagered on the following event last November: • Barack Obama to win Presidential Election • Bid 15.0 Ask 22.9 • What does that mean? If you wager 22.9 dollars on a share and the market wins, you get back 100 dollars. • If you wager 85.0 dollars to sell the share short and the market loses, you get back 100 dollars. • The difference between the ask and the bid prices is the “vig”, or the amount of money the casino wins
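
The payoff arithmetic above can be checked in a few lines. This is a sketch of the contract as quoted on the slide (prices run from 0 to 100, and a winning share settles at 100), not Intrade's actual trading mechanics:

```python
def buy_payoff(ask):
    """Profit per share if you buy at the ask and the event happens."""
    return 100.0 - ask

def sell_payoff(bid):
    """Profit per share if you sell short at the bid and the event
    does NOT happen (you risked 100 - bid)."""
    return bid

bid, ask = 15.0, 22.9              # the quotes from the slide
print(round(buy_payoff(ask), 1))   # → 77.1  (risk 22.9 to win 77.1)
print(round(sell_payoff(bid), 1))  # → 15.0  (risk 85.0 to win 15.0)
print(round(ask - bid, 1))         # → 7.9   (the bid/ask spread, the "vig")
```

Read as probabilities, the two prices bracket the market's estimate of the event: somewhere between 15% and 22.9%, with the 7.9-point spread going to the house.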

  42. How are these Prices Set? • They are set by professional linemakers and professional political scientists working together • The goal of the casino is to get near true odds so as to get “equal amounts” wagered on both sides by experts in the field • If everyone bets in one direction and that direction wins, the casino can go broke • The larger the “vig”, the less fair the wager is to the bettor

  43. Hedging your Bet in Life • The larger the betting volume, the more “accurate” the lines will appear • Like in sports, sometimes you bet on the team you like (Obama voters bet on Obama.) • Sometimes, you “hedge your bet in life” • Obama voters betting on McCain (if Obama wins, then that helps their political cause…if McCain wins, then they win some money) • Some casinos count on this behavior in their modeling of betting patterns.

  44. Preparing for 2012 • Some polling companies and some offshore sportsbooks have already begun preparing for 2012…they ask questions like: • What are the odds that Obama will be reelected? • What are the odds Sarah Palin will win the Republican Nomination? • What are the odds Joe Biden will be replaced as VP on the ticket by Hillary Clinton? • What are the odds of Ralph Nader running again?

  45. Concluding Remarks • Thank you to my students Kelly Koziol and Naida Alcime who assisted in data collection and analysis • Thank you to Anne and the entire RUSMP MLI team for inviting me back again • Thank you for being a fantastic audience. • Any questions???
