Evaluation
AlignmentEvaluator()
Class that provides evaluation metrics for alignment.
Source code in src/deeponto/align/evaluation.py
precision(prediction_mappings, reference_mappings)
staticmethod
The percentage of predicted mappings that are correct.
\[P = \frac{|\mathcal{M}_{pred} \cap \mathcal{M}_{ref}|}{|\mathcal{M}_{pred}|}\]
Source code in src/deeponto/align/evaluation.py
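The computation is straightforward to reproduce with plain Python sets. A minimal sketch, assuming hypothetical tuple mappings rather than DeepOnto mapping objects:

```python
# Minimal sketch of the precision formula using plain Python sets;
# the real method takes DeepOnto mapping objects, not tuples.
def precision(prediction_mappings: set, reference_mappings: set) -> float:
    return len(prediction_mappings & reference_mappings) / len(prediction_mappings)

pred = {("a", "x"), ("b", "y"), ("c", "z")}  # hypothetical predicted mappings
ref = {("a", "x"), ("b", "y"), ("d", "w")}   # hypothetical reference mappings
print(precision(pred, ref))  # 2 of 3 predictions are correct -> 0.666...
```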
recall(prediction_mappings, reference_mappings)
staticmethod
The percentage of reference mappings that are successfully retrieved.
\[R = \frac{|\mathcal{M}_{pred} \cap \mathcal{M}_{ref}|}{|\mathcal{M}_{ref}|}\]
Source code in src/deeponto/align/evaluation.py
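The same set-based sketch applies, dividing by the size of the reference set instead:

```python
# Minimal sketch of the recall formula using plain Python sets;
# the real method takes DeepOnto mapping objects, not tuples.
def recall(prediction_mappings: set, reference_mappings: set) -> float:
    return len(prediction_mappings & reference_mappings) / len(reference_mappings)

pred = {("a", "x"), ("b", "y"), ("c", "z")}
ref = {("a", "x"), ("b", "y"), ("d", "w")}
print(recall(pred, ref))  # 2 of 3 references are retrieved -> 0.666...
```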
f1(prediction_mappings, reference_mappings, null_reference_mappings=[])
staticmethod
Compute the F1 score given the prediction and reference mappings.
\[F_1 = \frac{2 P R}{P + R}\]
null_reference_mappings
is an additional set whose elements are ignored in the calculation, i.e., counted as neither positive nor negative.
Specifically, \(\mathcal{M}_{null}\) is subtracted from both \(\mathcal{M}_{pred}\)
and \(\mathcal{M}_{ref}\) before computing the score.
Source code in src/deeponto/align/evaluation.py
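A sketch of the full computation on hypothetical tuple sets, showing how \(\mathcal{M}_{null}\) is removed from both sides first:

```python
# Minimal sketch of the F1 computation, including the removal of
# null reference mappings from both sides; set-based, hypothetical inputs.
def f1(pred: set, ref: set, null_ref: set = frozenset()) -> float:
    pred, ref = pred - null_ref, ref - null_ref
    correct = len(pred & ref)
    p, r = correct / len(pred), correct / len(ref)
    return 2 * p * r / (p + r)

pred = {("a", "x"), ("b", "y"), ("c", "z")}
ref = {("a", "x"), ("d", "w")}
null = {("b", "y")}  # ignored: neither rewarded nor penalised
print(f1(pred, ref, null))  # P = 1/2, R = 1/2 -> F1 = 0.5
```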
hits_at_K(reference_and_candidates, K)
staticmethod
Compute \(Hits@K\) for a list of (reference_mapping, candidate_mappings)
pairs.
It is computed as the number of pairs whose reference_mapping
appears among the first \(K\) ranked candidate_mappings,
divided by the total number of input pairs.
\[Hits@K = \sum_{i=1}^N \mathbb{I}_{rank_i \leq K} / N\]
Source code in src/deeponto/align/evaluation.py
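A minimal sketch under the assumption that each input pair holds a reference mapping and a ranked list of candidates (hypothetical tuple data):

```python
# Minimal sketch of Hits@K; each entry pairs a reference mapping with
# a ranked list of candidate mappings (hypothetical tuple data).
def hits_at_k(reference_and_candidates, k: int) -> float:
    hits = sum(ref in cands[:k] for ref, cands in reference_and_candidates)
    return hits / len(reference_and_candidates)

data = [
    (("a", "x"), [("a", "x"), ("a", "y")]),  # rank 1
    (("b", "y"), [("b", "z"), ("b", "y")]),  # rank 2
    (("c", "z"), [("c", "w"), ("c", "v")]),  # not ranked
]
print(hits_at_k(data, 1))  # 1/3
print(hits_at_k(data, 2))  # 2/3
```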
mean_reciprocal_rank(reference_and_candidates)
staticmethod
Compute \(MRR\) for a list of (reference_mapping, candidate_mappings)
pairs.
\[MRR = \sum_{i=1}^N rank_i^{-1} / N\]
Source code in src/deeponto/align/evaluation.py
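A matching sketch over the same assumed (reference, ranked candidates) structure:

```python
# Minimal sketch of MRR over the same (reference, ranked candidates)
# structure; unranked references contribute 0 (hypothetical data).
def mean_reciprocal_rank(reference_and_candidates) -> float:
    total = 0.0
    for ref, cands in reference_and_candidates:
        if ref in cands:
            total += 1.0 / (cands.index(ref) + 1)  # 1-based rank
    return total / len(reference_and_candidates)

data = [
    (("a", "x"), [("a", "x"), ("a", "y")]),  # rank 1 -> 1.0
    (("b", "y"), [("b", "z"), ("b", "y")]),  # rank 2 -> 0.5
]
print(mean_reciprocal_rank(data))  # (1.0 + 0.5) / 2 = 0.75
```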
Last update: February 1, 2023
Created: January 22, 2023