Ontology Alignment Evaluation Initiative - OAEI-2024 Campaign

Bio-ML Track


General description

The 2024 edition involves the ontologies described in the Bio-ML documentation.

Compared to the 2023 edition, we removed the training subsumption mappings that could be used to infer testing subsumption mappings through deductive reasoning.

A complete description is available at the Bio-ML documentation.

Resources

Evaluation

Full details about the evaluation framework (global matching and local ranking) and the OAEI participation (result format for each setting in the main Bio-ML track and the Bio-LLM sub-track) are available at the Bio-ML documentation.
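To make the two metric families concrete, here is a minimal Python sketch of how global matching (Precision, Recall, F1 over mapping sets) and local ranking (Hits@K, MRR over ranked candidate lists) scores are typically computed. The function names and input shapes are illustrative assumptions, not the official evaluation code; see the Bio-ML documentation for the authoritative protocol.

```python
def global_matching_scores(predicted, reference):
    """Precision, Recall, and F1 over sets of (source, target) mapping pairs."""
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)  # correctly predicted mappings
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1


def local_ranking_scores(ranks, k=1):
    """Hits@k and MRR, where each entry of `ranks` is the 1-based rank
    that a system assigned to the correct candidate for one test query."""
    hits_at_k = sum(1 for r in ranks if r <= k) / len(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return hits_at_k, mrr
```

For example, a system whose correct candidates land at ranks 1, 2, and 4 across three queries scores Hits@1 = 1/3 and MRR = (1 + 1/2 + 1/4) / 3.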

We accept direct result submission via this Google form on a trust basis. We will also release results for systems evaluated with our own implementations and for systems submitted via MELT. These three categories are indicated in the result tables.

Results

Note: Click the column names (evaluation metrics) to sort the table. Cells with empty values indicate that the corresponding scores are not available.

The superscript symbols indicate whether the results come from MELT-wrapped systems (†), our own implementations (‡), or direct result submissions (∗). Note that direct result submissions are accepted on a trust basis.

Note: Results for newly submitted systems are being updated.

Bio-ML Equivalence Matching Results

For equivalence matching, we report both the global matching and local ranking results.

For the global matching evaluation, the test mapping sets differ between unsupervised systems (which do not use training mappings) and semi-supervised systems (which do): the unsupervised test set is the full reference mapping set, while the semi-supervised test set is the remaining 70% of the reference mappings (excluding the 30% used for training). Some systems (e.g., BERTMapLt, LogMap) do not use the training mappings, but we still report their performance on the semi-supervised test set. The use of training mappings in the semi-supervised setting is indicated by ✔ (used) and ✘ (not used).
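The relationship between the two test sets can be sketched as a 30/70 split of the reference mappings. This is an illustrative Python sketch (the function name, seed, and shuffling scheme are assumptions, not the official data-construction code):

```python
import random


def split_reference(reference_mappings, train_ratio=0.3, seed=42):
    """Illustrative 30/70 split of a reference mapping set into training
    mappings plus the unsupervised and semi-supervised test sets."""
    mappings = sorted(reference_mappings)
    random.Random(seed).shuffle(mappings)  # deterministic shuffle for the sketch
    n_train = int(len(mappings) * train_ratio)
    train = set(mappings[:n_train])               # 30% for semi-supervised systems
    unsupervised_test = set(mappings)             # full reference set
    semi_supervised_test = unsupervised_test - train  # remaining 70%
    return train, unsupervised_test, semi_supervised_test
```

Note that the training mappings are a subset of the unsupervised test set but disjoint from the semi-supervised test set, which is why scores on the two test sets are not directly comparable.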

For the local ranking evaluation, we keep one ranking test set for both unsupervised and semi-supervised systems.

Bio-ML Subsumption Matching Results

For subsumption matching, we report only the local ranking results as the subsumption mappings are inherently incomplete (see explanation in our resource paper).

For the local ranking evaluation, we keep one ranking test set for both unsupervised (marked by ✘) and semi-supervised (marked by ✔) systems.

(IC) indicates the isolated class setting of BERTSubs.

(del) indicates that subsumption axioms in the target ontology whose classes appear among the target classes of the equivalence test mappings are deleted from the training samples. This prevents a system from exploiting the equivalence mappings that were used to generate the subsumption mappings.


Bio-LLM Results

For Bio-LLM, we report both matching and ranking results, but computed on the selected subsets rather than the full mapping sets.

Note that since the evaluation is conducted on a subset of an OM dataset, the evaluation metrics are tailored to this setting. See the Bio-LLM documentation for full details.

Note: The LLMap model was proposed along with the Bio-LLM dataset.

Organisers

The Bio-ML track is organised by Yuan He, Pedro Giesteira Cotovio, Lucas Ferraz, Jiaoyan Chen, Hang Dong, Ernesto Jiménez-Ruiz, Catia Pesquita and Ian Horrocks.