TY - GEN
T1 - A Critical Re-evaluation of Record Linkage Benchmarks for Learning-Based Matching Algorithms
AU - Papadakis, George
AU - Kirielle, Nishadi
AU - Christen, Peter
AU - Palpanas, Themis
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Entity resolution (ER) is the process of identifying records that refer to the same entities within one or across multiple databases. Numerous techniques have been developed to tackle ER challenges over the years, with recent emphasis placed on machine and deep learning methods for the matching phase. However, the quality of the benchmark datasets typically used in the experimental evaluations of learning-based matching algorithms has not been examined in the literature. To cover this gap, we propose four complementary approaches to assessing the difficulty and appropriateness of 13 commonly used datasets: two theoretical ones, which involve new measures of linearity and existing measures of complexity, and two practical ones, namely the difference between the best non-linear and linear matchers, as well as the difference between the best learning-based matcher and the perfect oracle. Our analysis demonstrates that most existing benchmark datasets pose rather easy classification tasks. As a result, they are not suitable for properly evaluating learning-based matching algorithms. To address this issue, we propose a new methodology for yielding benchmark datasets. We put it into practice by creating four new matching tasks, and we verify that these new benchmarks are more challenging and therefore more suitable for further advancements in the field.
AB - Entity resolution (ER) is the process of identifying records that refer to the same entities within one or across multiple databases. Numerous techniques have been developed to tackle ER challenges over the years, with recent emphasis placed on machine and deep learning methods for the matching phase. However, the quality of the benchmark datasets typically used in the experimental evaluations of learning-based matching algorithms has not been examined in the literature. To cover this gap, we propose four complementary approaches to assessing the difficulty and appropriateness of 13 commonly used datasets: two theoretical ones, which involve new measures of linearity and existing measures of complexity, and two practical ones, namely the difference between the best non-linear and linear matchers, as well as the difference between the best learning-based matcher and the perfect oracle. Our analysis demonstrates that most existing benchmark datasets pose rather easy classification tasks. As a result, they are not suitable for properly evaluating learning-based matching algorithms. To address this issue, we propose a new methodology for yielding benchmark datasets. We put it into practice by creating four new matching tasks, and we verify that these new benchmarks are more challenging and therefore more suitable for further advancements in the field.
KW - Benchmark Analysis
KW - Deep Learning
KW - Record Linkage
KW - Supervised Matching
UR - http://www.scopus.com/inward/record.url?scp=85200508289&partnerID=8YFLogxK
U2 - 10.1109/ICDE60146.2024.00265
DO - 10.1109/ICDE60146.2024.00265
M3 - Conference contribution
AN - SCOPUS:85200508289
T3 - Proceedings - International Conference on Data Engineering
SP - 3435
EP - 3448
BT - Proceedings - 2024 IEEE 40th International Conference on Data Engineering, ICDE 2024
PB - IEEE Computer Society
T2 - 40th IEEE International Conference on Data Engineering, ICDE 2024
Y2 - 13 May 2024 through 17 May 2024
ER -