Fine-tuning Ranking Models: a two-step optimization approach

Vitor Carvalho
Jan 29, 2008
Text Learning Meeting - CMU

With invaluable ideas from ….

Motivation

  • Rank, Rank, Rank…
    • Web retrieval, movie recommendation, NFL draft, etc.
    • Einat’s contextual search
    • Richard’s set expansion (SEAL)
    • Andy’s context sensitive spelling correction algorithm
    • Selecting seeds in Frank’s political blog classification algorithm
    • Ramnath’s Thunderbird extension for
      • Email Leak prediction
      • Email Recipient suggestion
Help your brothers!
  • Try Cut Once!, our Thunderbird extension
    • Works well with Gmail accounts
  • It’s working reasonably well
  • We need feedback.

Thunderbird plug-in (screenshot):
  • Leak warnings: hit x to remove a recipient, + to add
  • Pause or cancel send of message
  • Email Recipient Recommendation
  • Timer: msg is sent after 10 sec by default
  • Classifier/rankers written in JavaScript

Email Recipient Recommendation


[Carvalho & Cohen, ECIR-08]

Aggregating Rankings

[Aslam & Montague, 2001]; [Ogilvie & Callan, 2003]; [Macdonald & Ounis, 2006]

  • Many “Data Fusion” methods
    • 2 types:
      • Normalized scores: CombSUM, CombMNZ, etc.
      • Unnormalized scores: BordaCount, Reciprocal Rank Sum, etc.
  • Reciprocal Rank:
    • The score of a document is the sum of the inverse of its rank in each input ranking.
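A minimal Python sketch of this reciprocal-rank aggregation (the function name and toy rankings are illustrative, not from the talk):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings):
    """Score each document by the sum of the inverse of its
    (1-based) rank in every input ranking, then sort by score."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / rank
    return sorted(scores, key=scores.get, reverse=True)

# Two base rankers disagree; documents ranked high consistently win:
# a: 1 + 1/3 ≈ 1.33,  b: 1/2 + 1 = 1.5,  c: 1/3 + 1/2 ≈ 0.83
r1 = ["a", "b", "c"]
r2 = ["b", "c", "a"]
fused = reciprocal_rank_fusion([r1, r2])  # ["b", "a", "c"]
```

Because it uses only ranks, not scores, this is one of the unnormalized fusion methods, alongside BordaCount.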
Aggregated Ranking Results

[Carvalho & Cohen, ECIR-08]

Can we do better?
  • Not using other features, but better ranking methods
  • Machine learning to improve ranking: Learning to rank:
    • Many (recent) methods:
      • ListNet, Perceptrons, RankSvm, RankBoost, AdaRank, Genetic Programming, Ordinal Regression, etc.
    • Mostly supervised
    • Generally small training sets
    • Workshop at SIGIR-07 (Einat was on the PC)
Pairwise-based Ranking

Goal: for a query q, induce a ranking function f(d) s.t. f(d_i) > f(d_j) whenever document d_i should be ranked above d_j.

We assume a linear function f(d) = w·d.

Therefore, the constraints are: w·d_i > w·d_j, i.e., w·(d_i − d_j) > 0, one for each preference pair (d_i, d_j).
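To make the constraint form concrete, here is a small sketch (helper name and toy vectors are illustrative) that counts how many preference pairs a given linear model violates:

```python
import numpy as np

def count_misranks(w, pairs):
    """Number of preference pairs (d_i preferred over d_j) for which
    the linear model violates w·(d_i - d_j) > 0."""
    return sum(1 for d_i, d_j in pairs if w @ (d_i - d_j) <= 0)

w = np.array([1.0, -0.5])
pairs = [(np.array([1.0, 0.0]), np.array([0.0, 0.0])),   # satisfied: w·diff = 1
         (np.array([0.0, 1.0]), np.array([0.5, 0.0]))]   # violated:  w·diff = -1
misranks = count_misranks(w, pairs)  # one violated pair
```

This misrank count is exactly the quantity the learning-to-rank methods below try to minimize, directly or through a surrogate loss.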

Ranking with Perceptrons
  • Nice convergence properties and mistake bounds
    • bound on the number of mistakes/misranks
  • Fast and scalable
  • Many variants [Collins 2002, Gao et al 2005, Elsas et al 2008]
    • Voting, averaging, committee, pocket, etc.
    • General update rule: on a misranked pair, w ← w + (d_i − d_j)
    • Here: averaged version of the perceptron
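A sketch of the averaged pairwise perceptron under this update rule (toy feature vectors are illustrative; this is not the talk's exact implementation):

```python
import numpy as np

def averaged_rank_perceptron(pairs, dim, epochs=10):
    """pairs: list of (d_i, d_j) with d_i preferred over d_j.
    On each misranked pair apply w <- w + (d_i - d_j); return the
    average of the weight vectors seen, which is more stable than
    the final w alone."""
    w = np.zeros(dim)
    w_sum = np.zeros(dim)
    n = 0
    for _ in range(epochs):
        for d_i, d_j in pairs:
            if w @ (d_i - d_j) <= 0:   # misrank: apply the update rule
                w = w + (d_i - d_j)
            w_sum += w                 # accumulate for averaging
            n += 1
    return w_sum / n

# Toy data: the first feature is informative, the second is noise
pairs = [(np.array([1.0, 0.2]), np.array([0.2, 0.3])),
         (np.array([0.9, 0.5]), np.array([0.1, 0.4]))]
w = averaged_rank_perceptron(pairs, dim=2)
```

Averaging is what gives the method its good practical stability on top of the mistake-bound guarantees.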
Rank SVM

[Joachims, KDD-02], [Herbrich et al, 2000]

minimize (1/2)‖w‖² + C Σ ξ_ij
subject to w·(d_i − d_j) ≥ 1 − ξ_ij and ξ_ij ≥ 0 for each preference pair (d_i, d_j)

  • Equivalent to maximizing AUC

Equivalent to the unconstrained hinge-loss form:

minimize (1/2)‖w‖² + C Σ max(0, 1 − w·(d_i − d_j))
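As a sketch, the unconstrained hinge-loss objective can be minimized by plain full-batch subgradient descent (this stands in for, and is far cruder than, Joachims' SVMlight-based solver; toy data and names are illustrative):

```python
import numpy as np

def ranksvm_subgradient(pairs, dim, C=1.0, lr=0.1, epochs=100):
    """Subgradient descent on (1/2)||w||^2 + C * sum max(0, 1 - w·(d_i - d_j))."""
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = w.copy()                 # gradient of the regularizer
        for d_i, d_j in pairs:
            diff = d_i - d_j
            if w @ diff < 1.0:          # hinge is active for this pair
                grad -= C * diff
        w -= lr * grad
    return w

pairs = [(np.array([1.0, 0.2]), np.array([0.2, 0.3]))]
w = ranksvm_subgradient(pairs, dim=2)
# After training, the preferred document scores higher: w·d_i > w·d_j
```

Note the objective penalizes pairs that are ranked correctly but by less than the margin, which is how it differs from the raw misrank count.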

Loss Functions

  • SVMrank: hinge loss on the pairwise score difference, max(0, 1 − w·(d_i − d_j))
  • SigmoidRank: sigmoid loss σ(−w·(d_i − d_j)) = 1 / (1 + exp(w·(d_i − d_j))), a smooth approximation to the 0/1 misrank indicator

The sigmoid loss is not convex.
Fine-tuning Ranking Models

A two-step procedure:

  • Step 1 (Base Ranker): learn a base ranking model (e.g., RankSVM, Perceptron, etc.)
  • Step 2 (Sigmoid Rank): fine-tune the base model into the final model by minimizing the SigmoidRank loss

Minimizing a very close approximation for the number of misranks; the base ranker provides the starting point for the non-convex optimization.
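The second step can be sketched as gradient descent on the sigmoid loss, initialized at the base ranker's weights (here `w0` is a stand-in for a trained base model, and the toy pairs are illustrative):

```python
import numpy as np

def sigmoid_rank_finetune(w0, pairs, lr=0.5, epochs=200):
    """Gradient descent on sum_ij sigma(-w·(d_i - d_j)), a smooth
    approximation to the number of misranked pairs, starting from
    base-ranker weights w0 (important since the loss is non-convex)."""
    w = w0.copy()
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for d_i, d_j in pairs:
            diff = d_i - d_j
            s = 1.0 / (1.0 + np.exp(w @ diff))  # sigma(-w·diff), ~1 if misranked
            grad += -s * (1.0 - s) * diff       # d/dw of sigma(-w·diff)
        w -= lr * grad
    return w

# Base model w0 misranks the second pair (w0·(d_i - d_j) = -0.14 < 0)
w0 = np.array([0.1, 1.0])
pairs = [(np.array([1.0, 0.0]), np.array([0.0, 0.0])),
         (np.array([0.8, 0.1]), np.array([0.2, 0.3]))]
w = sigmoid_rank_finetune(w0, pairs)  # fine-tuned model fixes the misrank
```

Pairs that are already ranked with a large margin contribute a near-zero gradient, so fine-tuning concentrates on the pairs near or past the decision boundary.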

Set Expansion (SEAL) Results

[Wang & Cohen, ICDM-2007]

[ListNet: Cao et al., ICML-07]

Learning Curve

TOCCBCC Enron: user lokay-m

Learning Curve

CCBCC Enron: user campbel-m

Regularization Parameter





Some Ideas
  • Instead of the number of misranks, optimize other loss functions:
    • Mean Average Precision, MRR, etc.
    • Rank Term:
    • Some preliminary results with Sigmoid-MAP
  • Does it work for classification?