
TTN – WP3 TTN meeting, June 10-11, 2010



  1. TTN – WP3 TTN meeting, June 10-11, 2010

  2. WP3 members

  3. WP3 - objectives and methodology
  • To investigate, define and classify a set of criteria for measuring TT activities in PP
  • How? By building a set of indicators and metrics, giving:
    • an overview of the situation in our institutions
    • elements of comparison between us, and between us and overseas institutions
    • guidance for newcomers
    • measurement of performance improvement
  • How to select those indicators?
    • bibliography
    • adjustment to our research profile
    • testing using a questionnaire
    • definition of the final ones: the TT KPIs for HEP institutions
  Presented at the last TTN meeting (December 2009)

  4. Questionnaire – sent in April 2009; 50 variables. Presented at the last TTN meeting (December 2009)

  5. Schedule at the end of 2009. Presented at the last TTN meeting (December 2009)

  6. Questionnaire analysis: issues from the last TTN meeting
  At the end of 2009, 19 ‘2008 questionnaires’ had been received, 14 of them considered valid (not too many empty fields), split into subsets: EG, EH (using FTE-HEP), EU, NG; ALL vs ASTP, HEP vs a BENCHMARK = (BNL + EPFL)
  • the questionnaire was designed to get answers on the HEP activity, but responses covered a mix of HEP and non-HEP activities
  • empty cells are significant – for example, we can show that facilities agreements are reported only by HEP institutions – but they disturb the calculations
  • possible misunderstandings in the answers
  • various organisations and roles for the TTO, and in general a large variance in the data
  • results are sensitive to the quality of the data and to the choice of selected questionnaires in each subset
  • not the same indicators as ASTP
  → better identification of homogeneous subsets
  → add other questionnaires
  • other ways of calculating (e.g. means over significant variables to reduce the impact of empty cases…)
  • consolidation of the KPI choice

  7. Major evolutions since the previous TTN meeting
  • WP3 meeting (22/01/10 in Paris), with main decisions:
    • selection of KPIs for the analysis and for future questionnaires
    • distribution into two subsets: HEP institutions and ‘BENCHMARK institutions’ (having high performance in TT)
    • preparation of the report and booklet structure
    • new schedule, with the objective of adding more questionnaires to be more confident in the results: that’s where the shoe pinches, because of delays in receiving new completed questionnaires!
  • Only two new completed questionnaires were received (more were expected), classified as ‘Universities’:
    • University College London (GB)
    • Politecnico Di Milano (IT)
  • Reorganisation into two groups:
    • HEP institutions (all facts considered only through HEP activities)
    • BENCHMARK: multipurpose institutions having part of their activity in HEP and high performance in TT [BNL (US), EPFL (S), UCL (UK)]
  • Important correction on some data (Thursday, June 10th!)

  8. Work done - Selection of KPIs
  • One reference:
    • # FTE (*)
  • Eleven KPIs (* see comments next page)
    • 2.1.1 # invention disclosures/year
    • 2.1.2 # priority patent applications/year
    • 2.1.5 portfolio of patent families
    • 2.1.6 portfolio of commercially licensed patents
    • (missing in Q2008) total portfolio of licenses (including software and know-how)
    • (missing in Q2008) license revenue/year
    • 2.2.1 # IP transfer or exploitation agreements/year
    • 3.1.1 # R&D cooperation agreements/year
    • 3.1.1.4 R&D cooperation agreements revenue/year
    • (incomplete in Q2008) licenses + services + facilities revenue/year *
    • 2.3.4 # startups still alive since 2000 (not really significant, but for information)
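  As an illustration only, the selected KPI set could be encoded as a small lookup table for later scripting of the analysis. The codes follow the questionnaire numbering above; the Python structure itself and the ad-hoc keys for the KPIs missing from Q2008 are assumptions, not part of the original Excel-based work.

    # Hypothetical encoding of the selected KPIs (codes follow the questionnaire numbering above).
    SELECTED_KPIS = {
        "2.1.1": "# invention disclosures / year",
        "2.1.2": "# priority patent applications / year",
        "2.1.5": "portfolio of patent families",
        "2.1.6": "portfolio of commercially licensed patents",
        "lic_portfolio": "total portfolio of licenses (incl. software and know-how)",  # missing in Q2008
        "lic_revenue": "license revenue / year",                                       # missing in Q2008
        "2.2.1": "# IP transfer or exploitation agreements / year",
        "3.1.1": "# R&D cooperation agreements / year",
        "3.1.1.4": "R&D cooperation agreements revenue / year",
        "mixed_revenue": "licenses + services + facilities revenue / year",            # incomplete in Q2008
        "2.3.4": "# startups still alive since 2000",
    }
    REFERENCE = "FTE"  # all KPIs are later normalised to 1000 FTE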

  9. Comments on these KPIs
  [Diagram: revenues from KTT activities – IP commercialisation (licensing; services, consultancy, access to facilities) and R&D cooperation (collaborative and contract research); products & GDP; new IP; research disciplines]
  • The maturity of HEP institutions is an interesting KPI; it was evaluated through an aggregate built from various answers with more or less weighting; unfortunately, as it stands today this indicator only measures whether written rules exist
  • Revenues related to knowledge and technology transfer activities have two sources:
    • the commercialisation of IP, comprising licensing, services, consultancy and access to facilities;
    • and R&D cooperation, comprising collaborative and contract research
  • (*) # TTO cannot be used, due to discrepancies in their roles
  • some ratios could be added
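  A minimal sketch of the revenue aggregation described above, assuming hypothetical field names for the questionnaire answers; missing answers are counted as zero, mirroring the treatment of empty cells elsewhere in the analysis.

    def ktt_revenue(answers: dict) -> dict:
        """Aggregate KTT revenues: IP commercialisation (licensing, services,
        consultancy, access to facilities) plus R&D cooperation (collaborative
        and contract research). Field names are hypothetical; missing answers
        count as zero."""
        ip_commercialisation = sum(answers.get(k, 0) for k in
                                   ("licensing", "services", "consultancy", "facilities"))
        rd_cooperation = sum(answers.get(k, 0) for k in
                             ("collaborative_research", "contract_research"))
        return {"ip_commercialisation": ip_commercialisation,
                "rd_cooperation": rd_cooperation,
                "total_ktt": ip_commercialisation + rd_cooperation}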

  10. Work done - Synthesis
  • 21 ‘2008 questionnaires’ were received, 17 of which were considered valid (not too many empty fields)
  • Anonymisation of questionnaires, at the request of some institutions
  • Splitting into a first group of two subsets:
    • ALL (17 questionnaires), mixing HEP, multipurpose institutes and Universities
    • ASTP (Association of European Science and Technology Transfer Professionals) 2006 survey added for comparison
  • Splitting into a second group of two subsets:
    • 10 HEP institutions with a pure HEP profile (all facts considered only through HEP activities)
    • BENCHMARK: 3 generic multipurpose institutions with high performance in TT, where the existing HEP activity is not what is measured [BNL (US), EPFL (S), UCL (UK)]
  • Analysis:
    • Descriptive statistics
    • Selection of KPIs
    • Comparison of KPI means
    • Search for explaining factors in each subset using multiple correlation
    • Comparison ALL vs ASTP
    • Comparison HEP vs BENCHMARK
    • Radar graphs

  11. Next steps

  12. Results – June 2010

  13. Questionnaire analysis – methodology (1)
  • 1. Input of raw data from the questionnaires into Excel, with a quantification of qualitative data (particularly those relating to maturity)
  • 2. Preparation of the synthesis
    • Total # FTE = total FTE for generic institutions and HEP FTE for HEP institutions
    • One worksheet split into two sets: ALL inputs (17 institutions) vs ASTP 2006, to give a global vision
    • One worksheet split into two sets: 10 European HEP institutions (having provided HEP-specific data) vs a BENCHMARK of 3 institutions
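  The original work was done directly in Excel; as a rough equivalent, a Python/pandas sketch of steps 1-2 could look like the following. The file name, column names and maturity scale are all assumptions for illustration, not the real questionnaire fields.

    import pandas as pd

    raw = pd.read_excel("questionnaires_2008.xlsx")          # one row per institution (hypothetical file)

    # Quantify qualitative maturity answers (e.g. existence of written TT rules) on a simple scale.
    maturity_scale = {"no": 0, "informal": 1, "written rules": 2}
    raw["maturity"] = raw["maturity_answer"].map(maturity_scale).fillna(0)

    # Reference FTE: total FTE for generic institutions, HEP FTE for pure-HEP institutions.
    raw["ref_fte"] = raw.apply(
        lambda r: r["hep_fte"] if r["profile"] == "HEP" else r["total_fte"], axis=1)

    # Split into the two analysis subsets.
    hep   = raw[raw["profile"] == "HEP"]          # 10 European HEP institutions
    bench = raw[raw["profile"] == "BENCHMARK"]    # BNL, EPFL, UCL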

  14. Questionnaire analysis – methodology (2)
  • 4. Multiple correlation on aggregates: search for explanatory factors
    • NB: empty cells have been set to zero, since the Excel tool cannot work on non-numeric values
  • 5. Normalised aggregates: aggregates are normalised to a 1000 FTE equivalent per site, then all values are normalised between 0 and 1 for radar graphs and histograms
  • 6. Comparison of means between each set of selected institutions (normalised to 1000 FTE): to see where the main differences are and whether HEP institutions are specific
  • 7. Graphs ‘Criteria’: a radar graph comparing all institutions on a selected KPI
  • 8. Graphs ‘Institutes’: a radar graph of the strengths and weaknesses of each institute
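  A sketch of the normalisation in steps 5-6, again with hypothetical column names: each KPI is first scaled to a 1000-FTE-equivalent site, then rescaled to the 0-1 range used for the radar graphs; empty cells are set to zero as noted above.

    import pandas as pd

    def per_1000_fte(df: pd.DataFrame, kpi_cols: list[str]) -> pd.DataFrame:
        """Scale every KPI to a 1000-FTE-equivalent site (empty cells -> 0)."""
        out = df.copy()
        out[kpi_cols] = out[kpi_cols].fillna(0).mul(1000.0 / out["ref_fte"], axis=0)
        return out

    def scale_0_1(df: pd.DataFrame, kpi_cols: list[str]) -> pd.DataFrame:
        """Rescale each KPI column to the [0, 1] range used for radar graphs."""
        out = df.copy()
        out[kpi_cols] = out[kpi_cols] / out[kpi_cols].max()
        return out

    # Step 6: compare subset means per 1000 FTE (hep / bench as in the previous sketch).
    # hep_means   = per_1000_fte(hep,   KPI_COLS)[KPI_COLS].mean()
    # bench_means = per_1000_fte(bench, KPI_COLS)[KPI_COLS].mean()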

  15. Descriptive statistics
  Our 18 relevant institutions represent:
  • 67,767 FTE (73,339 if we include all questionnaires), of which 8,043 FTE are devoted to HEP
  • 144 TT officers
  In 2008, they produced:
  • 566 invention disclosures
  • 29 new startups – with 125 still alive
  • 88 IP agreements
  • 823 R&D contracts
  • 179 M€ in revenues from R&D contracts
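  For illustration, the headline figures above are simple column sums over the questionnaire table; the column names below are assumptions, not the real questionnaire fields.

    import pandas as pd

    raw = pd.read_excel("questionnaires_2008.xlsx")   # hypothetical table, one row per institution

    totals = {
        "FTE":                        raw["total_fte"].sum(),
        "HEP FTE":                    raw["hep_fte"].sum(),
        "TT officers":                raw["tt_officers"].sum(),
        "invention disclosures 2008": raw["invention_disclosures_2008"].sum(),
        "new startups 2008":          raw["new_startups_2008"].sum(),
        "IP agreements 2008":         raw["ip_agreements_2008"].sum(),
        "R&D contracts 2008":         raw["rd_contracts_2008"].sum(),
        "R&D contract revenue (M€)":  raw["rd_contract_revenue_2008"].sum() / 1e6,
    }
    print(totals)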

  16. Ratio of ALL selected institutions / ASTP per 1000 FTE
  The comparison of the figures resulting from the ALL questionnaires vs ASTP gives:
  • fewer TT officers (75%)
  • roughly the same number of invention disclosures per year
  • more licensed patents (maybe due to the calculation per 1000 FTE and to the top ten HEP institutes + 3 ‘BENCHMARK’ institutes)
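  The ratio shown on this slide is simply the element-wise division of the two mean vectors, both expressed per 1000 FTE. The numbers below are made-up placeholders chosen only to mimic the qualitative picture above (about 75% for TT officers, roughly 1 for invention disclosures, above 1 for licensed patents); they are not the survey data.

    import pandas as pd

    all_means  = pd.Series({"tt_officers": 1.6, "invention_disclosures": 8.0, "licensed_patents": 3.0})
    astp_means = pd.Series({"tt_officers": 2.1, "invention_disclosures": 8.1, "licensed_patents": 1.9})

    ratio_all_vs_astp = all_means / astp_means
    print(ratio_all_vs_astp.round(2))   # ~0.75 for TT officers, ~1 for disclosures, >1 for licensed patents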

  17. Ratio of HEP institutions / BENCHMARK per 1000 FTE
  In this comparison, HEP institutions are compared to a benchmark set (2 EU, 1 US), normalised to 1000 FTE for each institute:
  • fewer TT officers per 1000 FTE in the BENCHMARK (probably due to their large size)
  • a poor score in terms of licenses
  • services & facilities are specific to some HEP institutions (vs no answer for the others)

  18. KPI means analysis
  • The objective is to compare the KPI means of the institution subsets [ALL, HEP, BENCH and ASTP]
  • The means are listed below: arithmetic means; means normalised to 1000 FTE; and means normalised to 1000 FTE and scaled between 0 and 1:

  19. KPI means analysis per 1000 FTE

  20. Comparison of means
  • Preliminary remarks:
    • normalising each institute to 1000 FTE improves the results for ‘HEP institutes’, particularly for those well below 1000 FTE
    • the results are not for all HEP institutes but for the top ten in TT
  • While the mean number of TT officers can be compared between subsets, it is very variable from one institute to another
  • HEP invention disclosures and priority patents are satisfactory, with a good result in patent portfolio and patent licensing (compared to ASTP)…but for CERN, GSI and STFC
  • Contracts: the number of R&D contracts is difficult to assess independently of their value, but HEP institutes have very good results in terms of revenue, thanks to GSI
  • Service and facilities revenues of some HEP institutes represent an interesting result, and will be grouped with license revenues in the next questionnaires

  21. Explaining factors
  • Multiple correlation analysis has been used to measure the impact of each KPI on the others
  • The threshold for considering that a high correlation exists has been set at 0.707 (see the next figures), with 6 degrees of freedom and a confidence level of p = 5%
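  A sketch of this correlation screening, assuming the hypothetical questionnaire table and KPI column names used in the earlier sketches; 0.707 is the 5% critical value of Pearson's r for 6 degrees of freedom quoted above.

    import pandas as pd

    THRESHOLD = 0.707
    KPI_COLS = ["invention_disclosures", "priority_patents", "rd_agreements", "rd_revenue"]  # names hypothetical

    hep = pd.read_excel("questionnaires_2008.xlsx").query("profile == 'HEP'")

    corr = hep[KPI_COLS].corr()                                  # pairwise Pearson correlations
    strong = [(a, b, round(corr.loc[a, b], 2))
              for i, a in enumerate(KPI_COLS) for b in KPI_COLS[i + 1:]
              if abs(corr.loc[a, b]) >= THRESHOLD]
    print(strong)   # candidate explaining factors, still to be screened for trivial links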

  22. Explaining factors for ALL QTTN
  • In terms of explaining factors, we have to remove trivial correlations (# patents vs # invention disclosures is an example, though the conclusion could also be expressed as: provide incentives for more invention disclosures in order to get more patents)
  • Possible links:
    • # invention disclosures and # of TT officers
    • startups alive and # patents

  23. Explaining factors for HEP • In HEP, the relation between # research agreements and patents is strong; could we say that patents are related to R&D agreements?

  24. Explaining factors for BENCH
  • Very interesting factors in the BENCHMARK (high TT results): the correlation between R&D contracts and patents is confirmed (→ an objective for HEP institutions)
  • The negative correlation between # TTO and the portfolio of commercially licensed patents is due to empty cells and the fact that there are only three questionnaires

  25. Radar graphs
  Radar graphs give an easy way to compare more than three axes of values at a glance, and to see how the results on each axis evolve relative to the others. We have defined two categories of radar graphs:
  • Graphs ‘Criteria’: a radar graph comparing all institutions on a selected KPI; in this way, each institution can compare its results against the others
    • NB: values are normalised to 1000 FTE per institution and scaled between 0 and 1 to facilitate comparisons
  • Graphs ‘Institutes’: a radar graph of the strengths and weaknesses of each institute, to know where to put the effort
    • NB: values are normalised to 1000 FTE per institution and scaled between 0 and 1
  The following figures are shown as examples. Each institution that answered the questionnaire will receive its full set.
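  The original radar graphs were produced in Excel; a minimal matplotlib sketch of an ‘Institute’ graph (one 0-1 normalised value per KPI axis) could look like this, with made-up labels and values.

    import numpy as np
    import matplotlib.pyplot as plt

    def radar(labels, values, title):
        """Minimal radar graph: one 0-1 normalised value per KPI axis."""
        angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
        values = list(values)
        angles += angles[:1]          # close the polygon
        values += values[:1]
        ax = plt.subplot(polar=True)
        ax.plot(angles, values)
        ax.fill(angles, values, alpha=0.25)
        ax.set_xticks(angles[:-1])
        ax.set_xticklabels(labels)
        ax.set_ylim(0, 1)
        ax.set_title(title)
        plt.show()

    # Example with made-up numbers:
    radar(["Inv. disclosures", "Priority patents", "Licensed patents", "R&D agreements", "R&D revenue"],
          [0.6, 0.4, 0.2, 0.9, 0.7], "Institute X – strengths and weaknesses")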

  26. Graph ‘Criteria’ for HEP institutions
  Example of Graphs ‘Criteria’ (performance of each institution per KPI):
  • Radar graphs show that each institution is specific and may have strengths & weaknesses
  • The high performance obtained by some institutions should be regarded as an objective by the others, and improved each year

  27. Graph ‘Institute’ compared to other institutions
  Example of Graphs ‘Institutes’ (strengths & weaknesses per institution) for two institutions:
  These graphs show where the weaknesses of your institution are, and where you have to work with the institution’s management...for better results next year.

  28. TBD
  • Other ratios (CP)
  • Std deviation and weighted mean impacts, if any (MC)

  29. Report and booklet (tbd)
  Booklet structure (in italics, chapters pasted from the report):
  • 1 Purpose
  • 2 Scope and methodology of this survey
  • 3 Indicators selected (and meaning)
  • 4 Analysis and results
  • 5 Recommendations for improvement
  • 6 Future plans
  • 7 Summary of conclusions
  Distribution:
  • CERN Council
  • PP institution Directors
  • Policy makers
  • TTN members
  • Other comparable networks
  • European Commission?
  • Specific distribution to questionnaire senders, with added figures
  Presented at the last TTN meeting (December 2009)
