Presentation- Week 9 Maya Shoham
Baselines • Eth80 Database • 30 train, 50 test • 70.25% accuracy • The paper that used the Eth80 database used all 400 pictures for training and testing and reported a best classification rate of 83%. • Caltech101 Database • 30 train, 1-50 test • Lambda = 20, Iterations = 5000, 20% accuracy • Possibly not the optimal lambda, but the code takes about 9 hours to run, so it's hard to check multiple lambdas.
Other Changes • Split the feature vectors into training and testing sets before the kernel matrix is generated. • Allows greater flexibility in generating different kernel matrices. • Sped up the softmax logistic regression code.
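The split-before-kernel change above can be sketched as follows. This is an illustrative sketch, not the presentation's actual code: the function name, the use of a linear kernel, and the column-major image layout are all assumptions.

```python
import numpy as np

def split_then_kernel(features, train_idx, test_idx):
    """Split a (n_features x n_images) matrix into train/test columns
    *before* building kernels, so different kernel choices can be tried
    on the same split. Illustrative names; linear kernel assumed."""
    X_train = features[:, train_idx]   # n_features x n_train
    X_test = features[:, test_idx]     # n_features x n_test
    # Linear kernels as one example; any kernel function could be swapped in.
    K_train = X_train.T @ X_train      # n_train x n_train
    K_test = X_test.T @ X_train        # n_test x n_train
    return K_train, K_test
```

Keeping the split outside the kernel construction means regenerating a new kernel matrix only requires recomputing the inner products, not redoing the train/test partition.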
Training the Level Weights • Feature vectors form a 4200 x (number of images) matrix. • 4200 corresponds to the 21 bins x 200 clusters. • The kernel matrix used to train the level weights is 4200 x 4200.
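One plausible reading of the 4200 x 4200 kernel is a feature-by-feature Gram matrix, taking inner products of the 4200 feature dimensions across all images. This is a hedged sketch of that construction only; the slides do not specify the exact kernel, so the function name and the linear inner product are assumptions.

```python
import numpy as np

def feature_kernel(features):
    """features: (n_features x n_images), e.g. 4200 x n_images where
    4200 = 21 pyramid bins x 200 clusters. Returns an
    (n_features x n_features) Gram matrix over feature dimensions,
    computed as inner products across images. Assumed construction."""
    return features @ features.T
```

For the 4200-dimensional case this yields the 4200 x 4200 matrix mentioned above, symmetric by construction.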
What's Next? • Should the level weights be weighted by feature (4200 weights) or by bin (21 weights)? • How do we label the training kernel for learning the level weights? • Write code to convert the 4200 x 4200 kernel matrix into a (#images x #images) kernel matrix, so that we can alternately optimize the level weights and the other weights.
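The per-bin weighting option (21 weights) can be sketched as a weighted sum of per-bin linear kernels, which directly produces the (#images x #images) matrix needed for the alternating optimization. This is a sketch under assumptions: the function name, the reshape layout (bins as contiguous row blocks), and the linear per-bin kernel are illustrative, not taken from the slides.

```python
import numpy as np

def image_kernel_with_level_weights(features, w, n_bins=21, n_clusters=200):
    """features: (n_bins*n_clusters) x n_images; w: length-n_bins level weights.
    Builds the (n_images x n_images) kernel as a weighted sum of per-bin
    linear kernels. Assumes each bin occupies a contiguous block of rows."""
    n_images = features.shape[1]
    F = features.reshape(n_bins, n_clusters, n_images)
    K = np.zeros((n_images, n_images))
    for b in range(n_bins):
        Xb = F[b]                      # n_clusters x n_images for bin b
        K += w[b] * (Xb.T @ Xb)        # weighted per-bin linear kernel
    return K
```

A sanity check on the design: with all weights equal to 1, the sum over bins recovers the plain linear kernel on the full 4200-dimensional vectors, since the inner product decomposes over the bin blocks.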