
  1. Ph.D. Methods Seminar Aaron K. Chatterji March 16th, 2009

  2. Agenda • 1. Patents 101 • 2. Analyzing Count Data (Poisson/NB) • 3. Review Assigned Papers • 4. Assignment for next week

  3. Understanding the drivers of corporate innovation and their link to performance is a fundamental question in strategy research • How do we measure innovation? • For many scholars, the answer has been patents. • Patent-based research has exploded in recent years but has also endured criticism • What do patents really measure? • Quality vs. quantity • Are they all novel and useful? • See Jaffe and Lerner’s book (Innovation and Its Discontents, 2004) • A patent on how to swing a swing • Are we missing the bigger picture of firm innovation? • We have detailed data, but should that drive research?

  4. Are patents the right measure of innovative activity? • Are patents inputs to or outputs of innovation? • Something in between? • What other measures of innovation might we use? • Academic papers? • Products? • See the Cohen et al. surveys for other important measures of innovation.

  5. What is a patent? • A set of exclusive rights granted in exchange for disclosure of an invention • Criteria include novelty and utility • Millions of patents have been granted in the U.S. • Initial costs vary between $5,000 and $25,000 • Average of 2 years between application and grant • Some key differences with Europe that scholars have considered • First to invent (US) vs. first to file (Europe) • Differences in the ability to challenge granted patents

  6. Example of a patent • Patent Number: 4739762 • Inventor: Palmaz, Julio C. • Assignee: Expandable Grafts Partnership (San Antonio, Texas) • Filed: Nov. 3, 1986 • US Classification: 623/1.11; 604/104; 604/96.01; 623/1.46 • IPC: A61F 2/06 (20060101); A61M 029/00

  7. Patent characteristics • Classification • U.S. Patent Classification • IPC (more rigorous for a few different reasons; see Lerner, 1994) • More focused on economic value • Better oversight (the USPTO has not revisited its system since 1872!) • Citations • Both to other patents and to publications • Claims • Granted patents vs. patent applications

  8. Patent Data Sources • NBER Patent Data • http://www.nber.org/patents/ • Linked to Compustat through assignee codes • Includes patents from 1963-99 (updates through 2004 available) • Citations from 1975-99 • Inventor • Class • ...more • USPTO website • Delphion, MicroPatent, etc. • European and Japanese patent data

  9. History (via Trajtenberg slides and papers) • Schmookler (1966) first began to match patents to industries • Griliches (1984) matched patents to specific firms in Compustat • Linking the data to industrial data made it more valuable (scholars are still creating new combinations today) • But simply counting patents was not good enough • Why? Not all patents are created equal! • Trajtenberg, Henderson, and Jaffe began to explore new dimensions • Citations data became popular in the 1990s • Measures of originality, generality, and breadth • Since then, researchers have focused on the following aspects • Dates • Geography • Patent class • Assignee • Citations (self-citations, examiner-added citations) • Citation-weighted patents

  10. What are patent citations? • References to prior patents and to non-patent prior art like academic papers • A forward citation refers to a later patent that cites your patent • A backward citation refers to a prior patent that your patent cites • 16 million citations between 1975 and 1999

  11. Using Citations to Measure Knowledge Transfer • Firms struggle to generate all their ideas in-house, so external knowledge is often required • Start-ups, universities, other firms • Patent citations are one way to measure knowledge flows from one organization to another (see Trajtenberg, 1990; Jaffe et al., 1993)

  12. Can citations really be used to measure knowledge spillovers? • What is the incentive for the inventor (or her lawyer) to include citations? • Examiner added citations • See Alcacer and Gittelman (2006) • Do different kinds of inventors (university, corporate, and independent) use citations differently? • Does a citation really mean that one is building on that knowledge? • University patents tend to get a lot of cites • Government patents tend to get few cites

  13. There are other challenges in using the NBER patent data • Over 2 million patents with, on average, 2 inventors each • Name matching • The John Smith problem • Same name, different person • The John Smth problem • Same person, misspelled name • Lots of variation in patenting propensity across industries • Pharma/medical devices: patenting is very important • Less so in software and chemicals • How do firms protect IP in these industries? • Can we credibly measure innovation in these industries?

  14. Agenda • 1. Patents 101 • 2. Analyzing Count Data (Poisson/NB) • 3. Review Assigned Papers • 4. Assignment for next week

  15. How to think about count data • Patent data are a type of count data: non-negative integers • Other examples? • Number of crimes committed by individuals in a city • Number of faculty publications • Length of hospital stay • Why can’t we use OLS to analyze count data? • OLS can yield negative predicted values • Count data, like patents, are often highly skewed • Some firms have a lot of patents; most firms have just a few • This violates the normality assumptions of OLS • A short simulation illustrating both problems follows
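
A minimal Stata sketch of these two problems on simulated data (all variable names here are illustrative):

    * Simulate skewed count data and show that OLS can predict negative counts
    clear
    set seed 12345
    set obs 1000
    gen x = rnormal()
    gen y = rpoisson(exp(-0.5 + x))   // counts drawn from a Poisson process
    histogram y, discrete             // heavily right-skewed, with many zeros
    regress y x                       // OLS fit
    predict yhat_ols
    count if yhat_ols < 0             // OLS yields negative predicted counts
    poisson y x                       // Poisson fit
    predict yhat_pois, n              // predictions are always non-negative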

  16. Poisson and Negative Binomial are the two most common models used to analyze count data • Both models are based on the Poisson distribution • Coefficients are interpreted the same way in both • Both are estimated by maximizing the log likelihood. The Poisson probability mass function (shown below) gives the probability of observing any given count. Poisson assumes that the mean and variance are equal, both equal to lambda. Basically, the strong and testable assumption in Poisson is that all subjects with the same covariates have an identical rate of the outcome. When this assumption does not hold (the variance is greater than the mean), we use the negative binomial model. The negative binomial is simply a more general form of the Poisson that accounts for overdispersion (variance greater than the mean). Under overdispersion, Poisson would provide estimates with artificially low standard errors.
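
The probability mass function referenced above is the standard Poisson pmf:

$$\Pr(Y = y \mid \lambda) = \frac{e^{-\lambda}\,\lambda^{y}}{y!}, \qquad y = 0, 1, 2, \ldots$$

with mean and variance $E[Y] = \operatorname{Var}(Y) = \lambda$.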

  17. Count data, especially patent data, often contain many zeros • Many firms do not patent in a given year • Zero-inflated Poisson and zero-inflated negative binomial models are alternatives • These models adjust for zeros by directly modeling the zero counts • Stata reports a test of alpha to help you choose between Poisson and NB • The null hypothesis is that alpha equals zero, which means there is no overdispersion (use Poisson, the simpler model) • Rule of thumb: if the p-value is less than .05, there is overdispersion (use NB) • A sketch of this test follows
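
A minimal Stata sketch of the alpha test (the variable names are hypothetical):

    * Fit the NB model; Stata reports alpha and a likelihood-ratio test of H0: alpha = 0
    nbreg patents rd_spend employees
    * If the LR test rejects alpha = 0 (p < .05), there is overdispersion: prefer nbreg.
    * Otherwise the more parsimonious Poisson model suffices:
    poisson patents rd_spend employees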

  18. Using patent data in regression analysis (summary) • Patent data are count data, i.e., non-negative integers • Count data are not suitable for OLS estimation: the data are not normally distributed • Typically, a few firms have many patents and many firms have few patents • Poisson model • Key assumption is that the mean and variance of the Poisson distribution are equal • Negative binomial model • A more general case of the Poisson model • Allows for the case where the mean and variance are not equal

  19. More on Zero-Inflated NB • The zero-inflated negative binomial model is suitable when there are “excess zeros”, as is often the case when studying corporate patenting • The best example is estimating the number of fish caught on a particular day by a group of campers • Some people fish but do not catch anything (true zero) • Some people do not fish at all (excess zero) • There are 2 processes at work • A binary process determines which subjects generate excess zeros (the non-fishers), and a count process generates the counts, including true zeros, for everyone else • Plain Poisson/NB will predict fewer zeros than actually occur in the data • ZINB accounts for this

  20. Zero-Inflated Negative Binomial Regression • Stata command • zinb count IVs, inflated(varlist) vuong • Output will display “alpha”, a dispersion parameter • If alpha is significantly different from zero, we have overdispersion and negative binomial (nbreg) is preferred to Poisson (poisson) • A significant z-statistic on the Vuong test implies that the zero-inflated model is the way to go • See the examples on the handout; a hypothetical instance follows
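
A hypothetical concrete instance of the command (all variable names are illustrative):

    * Count equation: patents on R&D spending and firm size.
    * Inflation equation: an indicator that the firm has no R&D lab,
    * which is assumed here to predict the excess zeros ("never patents").
    zinb patents rd_spend employees, inflated(no_rd_lab) vuong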

  21. Agenda • 1. Patents 101 • 2. Analyzing Count Data (Poisson/NB) • 3. Review Assigned Papers • 4. Assignment for next week

  22. Lessons from Jaffe et al. paper • Jaffe et al. (1993) find that patent citations are localized: a patent is more likely to cite another patent from the same city than an otherwise similar patent from a different city • Is this surprising? • What are the implications for strategy research? • Moreover, the localization effect declines over time • What might explain this finding? • How might global trends impact the speed of this decline? • What are the unanswered questions left by this paper? • Any ideas for how you would design a follow-up study?

  23. Lessons from Alcacer and Gittelman paper • Main finding: 2/3 of patent citations are added by the examiner • What does this mean for the empirical strategy literature using citations? • Isn’t it interesting that patent examiners are more likely to add self-citations than the inventor herself? • Why might we be observing this pattern? • This paper has been particularly influential and well received: the type of paper a junior faculty member would dream of writing. Why do you think so?

  24. Lessons from Chatterji and Fabrizio (1) • In this study, we compared physician innovations to corporate innovations • Name-matching challenge: who are the doctors? • How do we compare them? • Number of backward and forward citations • Generality • Basically a Herfindahl-based index: the most general patents are those cited across the largest number of classes • Originality is similar but based on the diversity of classes that the focal patent cites • The standard formula appears below
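
For reference, the standard Hall-Jaffe-Trajtenberg generality formula (originality is computed the same way over the classes of the patents that the focal patent cites):

$$\text{Generality}_i = 1 - \sum_{j} s_{ij}^{2}$$

where $s_{ij}$ is the share of citations received by patent $i$ that come from patents in technology class $j$; values near 1 indicate citations spread across many classes, values near 0 indicate concentration in a single class.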

  25. User Innovation paper (continued) • Here is what the regression might look like in Stata • xi: zinb cite6302 DrID ncites i.appyear, fe i(orgno) • We actually use the xtqmlp command (to properly cluster our SEs), which you can learn about here • http://scripts.mit.edu/~pazoulay/Software.html

  26. We find support for H2-H4, in that user (physician) innovations receive more citations, including citations from industry, and have broader impact

  27. Lessons from Chatterji and Fabrizio (2) (presented 3/11) • Fixed-effects QML Poisson regressions of citation-weighted patents and innovation counts on lagged physician collaboration, with firm controls and year dummies: • xi: xtqmlp CiteWtdPats ln_DrPats_lag1 ln_PatStock ln_InnovStock ln_emp_lag ln_rd_lag i.year, fe i(firmID) cluster(firmID) • xi: xtqmlp NumInnov ln_DrPats_lag1 ln_PatStock ln_InnovStock ln_emp_lag ln_rd_lag i.year, fe i(firmID) cluster(firmID)

  28. We find an increase in inventive performance following collaboration with doctors

  29. We also find a separable increase in innovative performance following collaboration with doctors

  30. Spawning Paper Patent Analysis Highlights • First, create a set of “treatment” patents • Patents of spawned firms • Why do I need a benchmark? • Create controls • Match on patent class and application year • Choose the patent with the closest grant date to the treatment patent • Sanity check in the paper (top ten cited organizations)

  31. How do you do this in Stata? • Use propensity score matching (the psmatch2 command in Stata) • Propensity scores are used when treatment is not random • Instead, each subject gets a probability of treatment estimated from its covariates • Subjects with similar propensity scores could be in either treatment or control, and that is how you build the comparison set • If you are interested in using this • Read the Stata help files • We can follow up offline • A sketch appears below
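
A minimal hypothetical sketch with psmatch2 (a user-written command; the covariate and outcome names are illustrative):

    * Install the user-written command once
    ssc install psmatch2
    * Estimate propensity scores from the covariates, match each treated
    * patent to its nearest-neighbor control on the common support, and
    * compare the outcome (here, forward citations) across the two groups
    psmatch2 treated appyear nclaims classcount, outcome(fwd_cites) neighbor(1) common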

  32. Agenda • 1. Patents 101 • 2. Analyzing Count Data (Poisson/NB) • 3. Review Assigned Papers • 4. Assignment for next week

  33. Exercise: Using patent data to measure spillovers from “parent” to spin-off • Advanced Cardiovascular Systems (ACS) was one of the most prolific spawners in the medical device industry • I have a list of 27 companies that were founded by employees of ACS • I want to know how much technical knowledge has flowed from ACS to its spawns • Homework • What are the major classes ACS patented in? (before 1986) • How often did the patents of these companies cite ACS? • Compared to a benchmark of control patents? (Challenging) • How often did they patent in a subclass that ACS worked in? • Can you find other ways to assess the relatedness of these patents? • Be creative!

  34. Steps in the analysis • Locate ACS and its spawns in the NBER patent dataset • Organize each company, its patents, and its patent citations into one dataset (a sketch of this step follows) • Calculate relatedness measures • Citations? • Patents in the same areas? • Make a 30-45 min team presentation next class on the findings and limitations • Bring your questions and critiques
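
One way the citation-counting step might look in Stata, assuming you have first built files of ACS and spawn patent numbers (acs_patents.dta and spawn_patents.dta are hypothetical; cite75_99 is the NBER citation-pairs file, with variables citing and cited):

    * Count how often spawn patents cite ACS patents
    use cite75_99, clear
    rename cited patent
    merge m:1 patent using acs_patents, keep(match) nogenerate
    rename patent cited
    rename citing patent
    merge m:1 patent using spawn_patents, keep(match) nogenerate
    count   // number of spawn-to-ACS citation pairs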

  35. Thank You

  36. Extra Slides

  37. NB • If the Poisson assumptions do not hold, Poisson standard errors will be artificially low • The mean of the NB is still lambda, but the variance includes a dispersion parameter r: Var(Y) = lambda(1 + lambda/r). As r goes to infinity, the variance goes to lambda, so Poisson and NB are equivalent.

  38. How do you choose between NB/Poisson and the zero-inflated versions? • Use the countfit command in Stata (from Long and Freese’s user-written SPost package) • Example • http://www.ats.ucla.edu/stat/stata/faq/countfit_faq.htm • The negative binomial estimator is basically a continuous mixture of Poissons where the mixing distribution is Gamma (see the sketch below)
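
A sketch of that mixture statement: if each subject’s Poisson rate is scaled by a Gamma-distributed heterogeneity term with mean 1, the marginal count distribution is negative binomial.

$$Y \mid \nu \sim \text{Poisson}(\lambda\nu), \qquad \nu \sim \text{Gamma}\!\left(\tfrac{1}{\alpha}, \alpha\right) \;\Longrightarrow\; E[Y] = \lambda, \quad \operatorname{Var}(Y) = \lambda(1 + \alpha\lambda)$$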

  39. Poisson • Mean and variance are both equal to lambda • NB • Mean is m • Variance is m(1 + alpha*m), so the variance exceeds the mean whenever alpha > 0
