
Workshop on Transfer Learning Evaluation: Data Format and Analysis

Join us for a workshop on evaluation data formats and analysis methods for Transfer Learning. Scheduled for May 2-3, 2006, the workshop covers the data format used to organize condition IDs, variable settings, and the data points that make up learning curves. Attendees will learn how to provide functions that compute statistics on learning curves, compare results across conditions, and analyze data using both the standard and custom metrics. Make the most of this opportunity to enhance your Transfer Learning evaluations.



Presentation Transcript


  1. Transfer Learning Evaluation Workshop, 2-3 May 2006
     Evaluation Data Format & Analysis
     Clayton T. Morrison, Yu-Han Chang, Paul R. Cohen
     USC Information Sciences Institute
     {clayton,ychang,cohen}@isi.edu

  2. Data Format
     The submission is a single list with two parts: a list of condition IDs with their variable settings, followed by one ((ID repl#) (list of points on curve)) entry per replication:

         ( ( ( :<condition-name>
               (:<variable_1> <value_1>)
               . . .
               (:<variable_n> <value_n>) )
             . . . )
           ( ( ( :<condition-name> <replication-number> )
               ( (x1 y1) (x2 y2) . . . (xn yn) ) )
             . . . ) )

     • We expect x1 = 0 so we can measure jump-start.
     • We prefer that the x values (x1, x2, ...) be consistent across curves, but we will interpolate if necessary (a sketch of such interpolation follows below).
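
A minimal Common Lisp sketch of the interpolation step mentioned in the last bullet above. The function name interpolate-y and the assumptions that a curve's points are sorted by increasing x and that x falls inside the curve's range are ours, not part of the workshop code.

    ;; Linear interpolation on one learning curve.  A curve is a list of
    ;; (x y) pairs sorted by increasing x, e.g. ((0 15.4) (20 28.3) ...).
    (defun interpolate-y (curve x)
      "Return y on CURVE at X, interpolating linearly between neighboring points."
      (loop for (p1 p2) on curve
            when (and p2 (<= (first p1) x (first p2)))
              return (let ((x1 (first p1)) (y1 (second p1))
                           (x2 (first p2)) (y2 (second p2)))
                       (+ y1 (* (- y2 y1) (/ (- x x1) (- x2 x1)))))
            ;; past the last recorded point: hold the final y value
            finally (return (second (car (last curve))))))

For example, (interpolate-y '((0 15.4) (20 28.3)) 10) returns roughly 21.85.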

  3. An example

         ( ( ( :exp1-b  (:near-far 0.5) (:mtn NIL) (:train NIL) )
             ( :exp1-ab (:near-far 0.5) (:mtn NIL) (:train T) ) )
           ( ( ( :exp1-b 1 )  ( (0 15.4) (20 28.3) (40 33.1) (60 54.2) (80 56.1) (100 56.2) ) )
             ( ( :exp1-b 2 )  ( (0 11.5) (20 23.9) (40 31.9) (60 54.0) (80 55.5) (100 55.9) ) )
             . . .
             ( ( :exp1-b 7 )  ( (0 13.5) (20 28.9) (40 39.4) (60 56.0) (80 55.9) (100 56.2) ) )
             ( ( :exp1-ab 1 ) ( (0 35.5) (20 40.4) (40 60.3) (60 62.4) (80 66.7) (100 66.9) ) )
             . . .
             ( ( :exp1-ab 7 ) ( (0 40.1) (20 56.2) (40 59.1) (60 66.4) (80 66.9) (100 66.8) ) ) ) )
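
As one illustration of working with this format, the seven :exp1-b replications above could be collapsed into a single mean curve. The sketch below is ours (the function names curves-for-condition and average-curve are hypothetical, and it assumes every replication shares the same x values); the workshop's own summarization code may look different.

    ;; DATA is the second top-level list of the format: entries of the form
    ;; ((condition repl#) ((x1 y1) ... (xn yn))).
    (defun curves-for-condition (data condition)
      "Collect the learning curves of every replication of CONDITION."
      (loop for ((cname nil) curve) in data
            when (eq cname condition) collect curve))

    (defun average-curve (curves)
      "Point-wise mean of CURVES, which are assumed to share the same x values."
      (apply #'mapcar
             (lambda (&rest points)
               (list (first (first points))
                     (/ (reduce #'+ points :key #'second)
                        (length points))))
             curves))

    ;; e.g. (average-curve (curves-for-condition data :exp1-b))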

  4. Provide TL Analysis Functions
     • If you want to test any statistics other than the standard ones (transfer ratio, jump-start, etc.), simply provide a function that operates on a pair of learning curves:

         (defun your-stat-fn (curve1 curve2)
           <your code to compute a statistic f>)

       where curve1 looks like ((0 10.3) (20 12.8) ... (100 34.1)).
     • We'll provide code for calculating the standard statistics (see the sketch after this list for the general shape of such functions).
     • You can provide replacement functions for the statistic (e.g. transfer-ratio), the summarization (e.g. average), and the asymptote (e.g. max-over-all-data).
     • We'll also place all of this code online so you can use it.
     • To compare your curves, you'll do something like

         (process-data :exp1-b :exp1-ab 7 #'transfer-ratio)
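
To make the calling convention concrete, here is a sketch of what two of the standard statistics could look like, with curve1 as the no-transfer baseline (e.g. :exp1-b) and curve2 as the transfer condition (e.g. :exp1-ab), matching the argument order of the process-data call above. The exact definitions, the trapezoid-rule helper curve-area, and the curve ordering are our assumptions; the workshop's official implementations may differ.

    ;; A curve is a list of (x y) pairs, e.g. ((0 10.3) (20 12.8) ... (100 34.1)).
    (defun curve-area (curve)
      "Area under CURVE by the trapezoid rule (helper, not a workshop-defined name)."
      (loop for (p1 p2) on curve
            while p2
            sum (* (- (first p2) (first p1))
                   (/ (+ (second p1) (second p2)) 2))))

    (defun jump-start (curve1 curve2)
      "Difference in initial performance (y at x = 0) due to transfer."
      (- (second (first curve2)) (second (first curve1))))

    (defun transfer-ratio (curve1 curve2)
      "Relative gain in area under the learning curve due to transfer."
      (let ((a1 (curve-area curve1))
            (a2 (curve-area curve2)))
        (/ (- a2 a1) a1)))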

  5. End
     Example of a good curve.  (* Bonus points for the group that provides us with one.)
