
ETISEO

François BREMOND

ORION Team, INRIA Sophia Antipolis, France



Fair Evaluation

  • Unbiased and transparent evaluation protocol

  • Large participation

  • Meaningful evaluation



Tasks Evaluated

  • Ground truth (GT) & metrics are designed to evaluate tasks all along the video processing chain:

  • Task 1: Detection of physical objects,

  • Task 2: Localisation of physical objects,

  • Task 3: Tracking of physical objects,

  • Task 4: Classification of physical objects,

  • Task 5: Event recognition.



Matching Computation

To evaluate the matching between a candidate result (C) and the reference data (RD), the following distances may be used:

  • D1 - The Dice coefficient: twice the cardinality of the intersection, divided by the sum of the cardinalities of both sets: 2·card(RD ∩ C) / (card(RD) + card(C)).

  • D2 - The overlapping: card(RD ∩ C) / card(RD).

  • D3 - Bertozzi et al. metric: card(RD ∩ C)² / (card(RD) · card(C)).

  • D4 - The maximum deviation of the candidate object or target according to the shared frame span: max{ card(C \ RD) / card(C), card(RD \ C) / card(RD) }.

[Diagram: overlapping sets of the reference data (RD) and the candidate result (C)]
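
These distances map directly onto set operations. Below is a minimal sketch (an illustration, not the official ETISEO scoring tool) that computes D1-D4 over sets of frame indices; the same code applies unchanged to pixel sets.

```python
# Illustrative implementation of the matching distances D1-D4,
# computed over sets (e.g. frame indices or pixel coordinates).

def d1_dice(rd: set, c: set) -> float:
    """D1: Dice coefficient, 2*card(RD & C) / (card(RD) + card(C))."""
    return 2 * len(rd & c) / (len(rd) + len(c)) if rd or c else 0.0

def d2_overlap(rd: set, c: set) -> float:
    """D2: overlapping, card(RD & C) / card(RD)."""
    return len(rd & c) / len(rd) if rd else 0.0

def d3_bertozzi(rd: set, c: set) -> float:
    """D3: Bertozzi et al., card(RD & C)**2 / (card(RD) * card(C))."""
    return len(rd & c) ** 2 / (len(rd) * len(c)) if rd and c else 0.0

def d4_max_deviation(rd: set, c: set) -> float:
    """D4: maximum relative deviation over the shared frame span."""
    if not rd or not c:
        return 1.0
    return max(len(c - rd) / len(c), len(rd - c) / len(rd))

# Reference track on frames 10-49, candidate track on frames 20-59.
rd, c = set(range(10, 50)), set(range(20, 60))
print(d1_dice(rd, c))           # 0.75
print(d2_overlap(rd, c))        # 0.75
print(d3_bertozzi(rd, c))       # 0.5625
print(d4_max_deviation(rd, c))  # 0.25
```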



Metrics (1)

T1- DETECTION OF PHYSICAL OBJECTS OF INTEREST

C1.1 Number of physical objects

C1.2 Number of physical objects using their bounding box

T2- LOCALISATION OF PHYSICAL OBJECTS OF INTEREST

  • C2.1 Physical objects area (pixel comparison based on BB)

  • C2.2 Physical object area fragmentation (splitting)

  • C2.3 Physical object area integration (merge)

  • C2.4 Physical objects localisation

    • 2D and 3D

    • Centroid or bottom centre point of BB
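
Two of these criteria are easy to illustrate. The sketch below, assuming axis-aligned bounding boxes given as (x1, y1, x2, y2) in pixels, computes a C2.1-style area comparison (Dice on BB pixels) and a C2.4-style 2D centroid distance; the exact ETISEO conventions may differ.

```python
# Sketch of two Metrics (1) measures over axis-aligned bounding boxes.
from math import hypot

def bbox_pixels(box):
    """Set of pixels covered by a bounding box, inclusive bounds."""
    x1, y1, x2, y2 = box
    return {(x, y) for x in range(x1, x2 + 1) for y in range(y1, y2 + 1)}

def area_dice(rd_box, c_box):
    """C2.1-style area comparison: Dice coefficient (D1) on BB pixels."""
    rd, c = bbox_pixels(rd_box), bbox_pixels(c_box)
    return 2 * len(rd & c) / (len(rd) + len(c))

def centroid_distance(rd_box, c_box):
    """C2.4-style 2D localisation: distance between BB centroids (pixels)."""
    def centre(b):
        return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
    (ax, ay), (bx, by) = centre(rd_box), centre(c_box)
    return hypot(ax - bx, ay - by)

print(area_dice((0, 0, 9, 9), (5, 0, 14, 9)))          # 0.5
print(centroid_distance((0, 0, 9, 9), (5, 0, 14, 9)))  # 5.0
```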



Metrics (2)

T3- TRACKING OF PHYSICAL OBJECTS OF INTEREST

  • C3.1 Frame-To-Frame Tracking: Link between two frames

  • C3.2 Number of objects tracked over time

  • C3.3 Tracking time evaluation

  • C3.4 Physical object ID fragmentation

  • C3.5 Physical object ID confusion criterion

  • C3.6 Physical object 2D trajectory

  • C3.7 Physical object 3D trajectory

T4- CLASSIFICATION OF PHYSICAL OBJECTS OF INTEREST

C4.1 Object Type over the sequence

C4.2 Object classification per type

C4.3 Time percentage of good classification (see the sketch below):

card{ RD ∩ C : Type(C) = Type(RD) } / card(RD ∩ C)

T5- EVENT RECOGNITION

C5.1 Number of Events recognized over the sequence

C5.2 Scenario parameters
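
The C4.3 formula reads directly as code. A minimal sketch, assuming per-frame type labels for a matched reference/candidate pair; the frame-indexed dictionaries below are hypothetical stand-ins for full ETISEO annotations.

```python
# Sketch of C4.3, time percentage of good classification:
# card{ RD ∩ C : Type(C) = Type(RD) } / card(RD ∩ C).

def good_classification_rate(rd_types: dict, c_types: dict) -> float:
    """Fraction of shared frames on which the candidate type matches
    the reference type."""
    shared = rd_types.keys() & c_types.keys()  # RD ∩ C as common frames
    if not shared:
        return 0.0
    good = sum(1 for f in shared if rd_types[f] == c_types[f])
    return good / len(shared)

# Correct type on 2 of the 3 shared frames -> 2/3.
rd = {1: "person", 2: "person", 3: "person", 4: "person"}
c = {2: "person", 3: "vehicle", 4: "person", 5: "person"}
print(good_classification_rate(rd, c))  # 0.666...
```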



Metric Evaluation

  • Distances for matching ground truth and algorithm results

    • D1, D2, D3 and D4 give similar measures.

  • A few main metrics measure general trends

    • Discriminant and meaningful

    • Detection M1.2.1: CNumberObjectsBoundingBox

    • Localization M2.4.3: CCentroid2DLocalisationPix.

    • Tracking M3.3.1: CTrackingTime

    • Object Classification M4.1.3: CObjectTypeOverSequenceBBoxID

    • Event Recognition M5.1.2: CNumberNamedEvents



Metric Evaluation (cont’d)

  • Secondary metrics:

    • Complementary information

      • Pixel-based (M2.1.1) versus object-based (M1.2.1) metrics

    • Potential algorithm errors:

      • Example: M3.3.1 complemented (e.g., regarding stability) by M3.2.1, M3.4.1 and M3.5.1.

  • Non-informative Metrics:

    • Add noise to the evaluation or are non-discriminative

    • Example: M1.1.1 CNumberObjects gives the number of objects per frame without position information.

    • The same holds for M4.1.1 and M5.1.1.



Evaluation on ETI-VS2-BE-19-C1



Global Results: Video

  • Remarks:

  • For similar scenes, very dissimilar results!

  • For different scenes, results can spread over a large range or concentrate in a narrow range.



Detection of Physical Objects (ETI-VS2-BE-19-C1.xml)

M1.1.1: NumberObjects

M1.2.1: NumberObjectsBoundingBoxD1



Detection of Physical Objects (ETI-VS2-BE-19-C1.xml)

M1.2.1: NumberObjectsBoundingBoxD1

M2.1.1: ObjectsArea



Detection of Physical Objects (ETI-VS2-BE-19-C1.xml)

M2.2.1: SplittingD5

M2.3.1: MergingD2



Summary on Detection of Physical Objects

  • Main metric measures:

    • Detection M1.2.1: CNumberObjectsBoundingBox

    • Problems: static objects, contextual objects, background, masks…

    • Advantages: objects vs pixels, large objects and bounding boxes

  • Secondary metrics:

    • M2.1.1 (area): indicates precision and the handling of shadows

  • Split/merge measures (M2.2.1, M2.3.1):

    • Advantage: indicate potential merges

    • Drawbacks: threshold-dependent; non-detected objects are not taken into account



Localisation of Physical Objects



Localisation of Physical Objects (ETI-VS2-BE-19-C1.xml)

M2.3.1: MergingD2

M2.4.3: Centroid2DLocalisationPixD1



Localisation of Physical Objects (ETI-VS2-BE-19-C1.xml)

M2.4.1: Centroid2DLocalisationD1

M2.4.3: Centroid2DLocalisationPixD1



Localisation of Physical Objects (ETI-VS2-BE-19-C1.xml)

M2.4.2: Centroid3DLocalisationD1



Summary on Localisation of Physical Objects

  • M2.4.1, M2.4.2, M2.4.3, main metrics:

    • Problems: low utilisation of 3D information and calibration

    • Good performance: good precision on reliable TPs (shadow and merge handling)

    • Advantages: complementary to detection; normalised metrics, in pixels or metres



Tracking of Physical Objects



Tracking of Physical Objects (ETI-VS2-BE-19-C1.xml)

M3.2.1: NumberObjectTrackedD1

M3.3.1: TrackingTime



Tracking of Physical Objects (ETI-VS2-BE-19-C1.xml)

M3.4.1: PhysicalObjectIdFragmentation

M3.5.1: PhysicalObjectIdConfusion



Tracking of Physical Objects (ETI-VS2-BE-19-C1.xml)

M3.6.1: PhysicalObject2DTrajectories

M2.4.1: Centroid2DLocalisationD1



Summary on Tracking of Physical Objects

  • M3.3.1, main metric:

    • Problems: propagation of detection errors

    • Advantages: good global overview

  • M3.2.1, secondary metric:

    • Good performance: consistent TPs over time, even with few TPs

    • Problems: complete FNs (entirely missed objects) are not taken into account

  • Fragmentation/confusion (M3.4.1, M3.5.1):

    • Advantage: indicates potential ID switching

    • Drawbacks: not discriminative; favours under-detection (few IDs) or over-detection (multiple IDs)
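
For concreteness, here is a hedged sketch of a tracking-time measure in the spirit of M3.3.1: the fraction of a reference object's annotated frames covered by its matched candidate track. The exact ETISEO definition may differ.

```python
# Sketch of a tracking-time measure: how much of the reference object's
# lifespan is covered by the matched candidate track.

def tracking_time(rd_frames: set, c_frames: set) -> float:
    """Fraction of the reference lifespan covered by the candidate track."""
    return len(rd_frames & c_frames) / len(rd_frames) if rd_frames else 0.0

# Reference object visible on frames 0-99, tracked only on frames 25-89.
print(tracking_time(set(range(100)), set(range(25, 90))))  # 0.65
```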



Object Classification



Object Classification (ETI-VS2-BE-19-C1.xml)

M4.1.1: ObjectTypeOverSequence

M4.1.1b: ObjectTypeOverSequenceBoundingBoxD1

Subtype



Object Classification (ETI-VS2-BE-19-C1.xml)

M4.1.3: ObjectTypeOverSequenceBoundingBoxIdD1

M4.1.2: ObjectTypeOverSequenceBoundingBoxD1



Object Classification



Object Classification



Summary on Object Classification

  • M4.1.2 and M4.1.3, main metrics (similar behaviour):

    • Problems: poor classification of subtypes (doors, bikes, bags); favours a few good-quality TPs.

    • Advantage: reliable.

  • M4.1.1 (without BBox):

    • Drawbacks: wrong evaluation result in the case of double errors (classified noise plus FN)

    • Advantage: indicates potential double errors.



Event Recognition



Event Recognition (ETI-VS2-BE-19-C1.xml)



Event Recognition (ETI-VS2-BE-19-C1.xml)



Event Recognition (ETI-VS2-BE-19-C1.xml)

M5.1.1: NumberEvents

M5.1.2: NumberNamedEventsD1



Summary on Event Recognition

  • M5.1.2 (with time), main metric:

    • Problems: lack of understanding of ground truth definition

    • Advantages: good global overview per scenario type.

  • M5.1.1, secondary metric:

    • Problems: occurrence time is not taken into account
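
A possible reading of M5.1.2 as code: a recognized event counts only if its name matches a reference event and their time spans overlap sufficiently under D1. The event names, spans and threshold below are illustrative assumptions, not ETISEO data.

```python
# Sketch of named-event matching in the spirit of M5.1.2
# (NumberNamedEventsD1): match by name plus temporal Dice overlap.

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def count_named_events(reference, candidates, threshold=0.5):
    """Greedily match candidate events to reference events by name + D1."""
    matched, used = 0, set()
    for name, rd_span in reference:
        for i, (c_name, c_span) in enumerate(candidates):
            if i not in used and c_name == name and dice(rd_span, c_span) >= threshold:
                matched, used = matched + 1, used | {i}
                break
    return matched

reference = [("enters_zone", set(range(100, 160))),
             ("abandons_bag", set(range(300, 340)))]
candidates = [("enters_zone", set(range(110, 170))),
              ("abandons_bag", set(range(500, 540)))]
print(count_named_events(reference, candidates))  # 1: only enters_zone overlaps in time
```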



Evaluation Results



Evaluation on ETI-VS2-BE-19-C4



Detection of Physical Objects (ETI-VS2-BE-19-C4.xml)

M1.2.1: NumberObjectsBoundingBoxD1

M2.1.1: ObjectsArea



Tracking of Physical Objects (ETI-VS2-BE-19-C4.xml)

M3.3.1.D1: TrackingTime

M3.2.1: NumberObjectTrackedD1



Event Recognition (ETI-VS2-BE-19-C4.xml)

M5.1.2: NumberNamedEventsD1

M5.1.1: NumberEvents



Evaluation Results



Evaluation on ETI-VS2-MO-1-C1



Detection of Physical Objects (ETI-VS2-MO-1-C1.xml)

M1.2.1: NumberObjectsBoundingBoxD1

M2.1.1: ObjectsArea



Tracking of Physical Objects (ETI-VS2-MO-1-C1.xml)

M3.3.1.D1: TrackingTime

M3.2.1: NumberObjectTrackedD1



Event Recognition (ETI-VS2-MO-1-C1.xml)

M5.1.1: NumberEvents

M5.1.2: NumberNamedEventsD1



Evaluation Results



Evaluation on ETI-VS2-RD-6-C7



Detection of Physical Objects (ETI-VS2-RD-6-C7.xml)

M1.2.1: NumberObjectsBoundingBoxD1

M2.1.1: ObjectsArea



Detection of Physical Objects: Reference Data Filtering (ETI-VS2-RD-6-C7.xml: M1.2.1 - NumberObjectsBoundingBoxD1)

No filtering

With filtering
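
The filtering idea can be sketched as a pre-processing step on the reference data before any metric is computed, so that algorithms are not penalised on objects outside the evaluation scope. The field names and thresholds below are hypothetical.

```python
# Sketch of reference-data filtering: drop reference objects that are
# too small or of excluded types before computing the metrics.

def filter_reference(objects, min_area=200, excluded_types=("contextual",)):
    """Keep only reference objects relevant to the evaluation."""
    return [o for o in objects
            if o["area"] >= min_area and o["type"] not in excluded_types]

reference = [
    {"id": 1, "type": "person", "area": 900},
    {"id": 2, "type": "person", "area": 50},        # too small / distant
    {"id": 3, "type": "contextual", "area": 1500},  # out of scope
]
print([o["id"] for o in filter_reference(reference)])  # [1]
```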



Detection of Physical Objects: Reference Data Filtering (ETI-VS2-RD-6-C7.xml: M2.1.1 - ObjectsArea)

No filtering

With filtering



Tracking of Physical Objects (ETI-VS2-RD-6-C7.xml)

M3.3.1.D1: TrackingTime

M3.2.1: NumberObjectTrackedD1



Tracking of Physical Objects: Reference Data Filtering (ETI-VS2-RD-6-C7.xml: M3.2.1 - NumberObjectTrackedD1)

No filtering

With filtering



Tracking of Physical Objects: Reference Data Filtering (ETI-VS2-RD-6-C7.xml: M3.3.1.D1 - TrackingTime)

No filtering

With filtering



Event Recognition (ETI-VS2-RD-6-C7.xml)

M5.1.1: NumberEvents

M5.1.2: NumberNamedEventsD1



Event Recognition (ETI-VS2-RD-6-C7)



Evaluation Results



Evaluation on ETI-VS2-RD-10-C4



Detection of Physical Objects (ETI-VS2-RD-10-C4.xml)

M2.1.1: ObjectsArea

M1.2.1: NumberObjectsBoundingBoxD1



Tracking of Physical Objects (ETI-VS2-RD-10-C4.xml)

M3.3.1.D1: TrackingTime

M3.2.1: NumberObjectTrackedD1



Event Recognition (ETI-VS2-RD-10-C4.xml)

M5.1.2: NumberNamedEventsD1

M5.1.1: NumberEvents



Evaluation Results



Evaluation on ETI-VS2-AP-11-C7



Detection of Physical Objects (ETI-VS2-AP-11-C7.xml)

M1.2.1: NumberObjectsBoundingBoxD1

M2.1.1: ObjectsArea



Detection of Physical Objects: Reference Data Filtering (ETI-VS2-AP-11-C7.xml: M1.2.1 - NumberObjectsBoundingBoxD1)

No filtering

With filtering



Detection of Physical Objects: Reference Data Filtering (ETI-VS2-AP-11-C7.xml: M2.1.1 - ObjectsArea)

No filtering

With filtering



Tracking of Physical Objects (ETI-VS2-AP-11-C7.xml)

M3.3.1.D1: TrackingTime

M3.2.1: NumberObjectTrackedD1



Tracking of Physical Objects: Reference Data Filtering (ETI-VS2-AP-11-C7.xml: M3.2.1 - NumberObjectTrackedD1)

No filtering

With filtering



Tracking of Physical Objects: Reference Data Filtering (ETI-VS2-AP-11-C7.xml: M3.3.1.D1 - TrackingTime)

No filtering

With filtering



Event Recognition (ETI-VS2-AP-11-C7.xml)

M5.1.1: NumberEvents

M5.1.2: NumberNamedEventsD1



Event Recognition (ETI-VS2-AP-11-C7.xml)

M5.1.2: NumberNamedEventsD1

M5.1.1: NumberEvents



Evaluation Results



Understanding versus Competition

  • ETISEO Goal

    • Neither a competition nor a benchmark

    • Emphasis on gaining insight into video analysis algorithms

    • Better understanding of evaluation methodology

  • Why? ETISEO limitations: algorithm results depend on

    • time and manpower (parameter tuning),

    • format understanding (XML), objective definition (ground truth), and algorithm capacities (static, occluded, portable and contextual objects),

    • previous similar experience,

    • number of processed videos, frame rate, start frame,

    • metrics and parameters (split/merge),

    • whether a learning stage is required.



Understanding versus Competition (cont’d)

  • Warmest thanks to the 16 teams:

    • 8 teams achieved high quality results

    • 9 teams performed event recognition

    • 10 teams produced results on all priority sequences

  • Special thanks to teams 1, 8, 12, 14 and 28:

    • Stable and high-quality results on a large video set

  • More evaluation results…



Conclusions

  • Good performance comparison per video: automatic, reliable, consistent metrics.

  • A few insights into video surveillance algorithms, for example:

    • Shadow handling

    • Merge handling

  • A few limitations:

    • Lack of understanding of the evaluation rules (output XML, time-stamp)

    • Data subjectivity: video, background, masks

    • Metrics and evaluation parameters

  • Future improvements: flexible evaluation tool

    • Filters for reference data

    • Selection of metrics and parameters

    • Selection of videos

