GATE track 1 session

* Evaluation metrics - system output is scored mathematically against a human-annotated gold standard
* Scoring - performance measures for annotation types
* Precision = correct / (correct + spurious)
* Recall = correct / (correct + missing)
* F-measure is the harmonic mean of precision and recall
* F = 2 ⋅ (precision ⋅ recall) / (precision + recall)
* GATE supports strict, lenient, and average scoring; a sketch of the metric computation follows this list
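A minimal sketch of these measures in Python. The counts used here are hypothetical example values, not output from any GATE run:

<syntaxhighlight lang="python">
def precision(correct: int, spurious: int) -> float:
    # Fraction of system responses that were correct
    return correct / (correct + spurious)

def recall(correct: int, missing: int) -> float:
    # Fraction of gold-standard annotations the system found
    return correct / (correct + missing)

def f_measure(p: float, r: float) -> float:
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r)

# Hypothetical counts for illustration only
p = precision(correct=80, spurious=20)   # 0.800
r = recall(correct=80, missing=40)       # ~0.667
print(f"P={p:.3f} R={r:.3f} F={f_measure(p, r):.3f}")
</syntaxhighlight>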


* Result types - correct, missing, spurious, partially correct (overlapping); the sketch below shows how these feed the scores
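A hedged sketch of how the strict, lenient, and average figures might combine these result types: strict treats partially correct responses as wrong (spurious on the response side, missing on the key side), lenient treats them as fully correct, and average is assumed here to be the mean of the strict and lenient F-measures. All counts are hypothetical:

<syntaxhighlight lang="python">
def f1(correct: float, spurious: float, missing: float) -> float:
    # F-measure computed from raw counts, as in the formulas above
    p = correct / (correct + spurious)
    r = correct / (correct + missing)
    return 2 * p * r / (p + r)

# Hypothetical counts, including partially correct (overlapping) matches
correct, partial, spurious, missing = 70, 10, 20, 30

# Strict: a partial match counts as both a spurious response and a missing key
strict = f1(correct, spurious + partial, missing + partial)

# Lenient: a partial match counts as correct
lenient = f1(correct + partial, spurious, missing)

# Average: assumed here as the mean of the strict and lenient figures
average = (strict + lenient) / 2

print(f"strict={strict:.3f} lenient={lenient:.3f} average={average:.3f}")
</syntaxhighlight>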
