TY - JOUR
T1 - Consumer Health Search at CLEF eHealth 2021
AU - Goeuriot, Lorraine
AU - Suominen, Hanna
AU - Pasi, Gabriella
AU - Bassani, Elias
AU - Brew-Sam, Nicola
AU - González-Sáez, Gabriela
AU - Kelly, Liadh
AU - Mulhem, Philippe
AU - Seneviratne, Sandaru
AU - Upadhyay, Rishabh
AU - Viviani, Marco
AU - Xu, Chenchen
N1 - Publisher Copyright:
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2021
Y1 - 2021
N2 - This paper details materials, methods, results, and analyses of the Consumer Health Search Task of the CLEF eHealth 2021 Evaluation Lab. This task investigates the effectiveness of information retrieval (IR) approaches in providing laypeople with access to medical information. For this, a TREC-style evaluation methodology was applied: a shared collection of documents and queries was distributed, participants' runs were received, relevance assessments were generated, and participants' submissions were evaluated. The task generated a new representative web corpus including web pages acquired from a 2021 CommonCrawl and social media content from Twitter and Reddit, along with a new collection of 55 manually generated layperson medical queries and their respective credibility, understandability, and topicality assessments for returned documents. This year's task focused on three subtasks: (i) ad-hoc IR, (ii) weakly supervised IR, and (iii) document credibility prediction. In total, 15 runs were submitted to the three subtasks: eight addressed the ad-hoc IR task, three the weakly supervised IR challenge, and four the document credibility prediction challenge. As in previous years, the organizers have made data and tools associated with the task available for future research and development.
AB - This paper details materials, methods, results, and analyses of the Consumer Health Search Task of the CLEF eHealth 2021 Evaluation Lab. This task investigates the effectiveness of information retrieval (IR) approaches in providing laypeople with access to medical information. For this, a TREC-style evaluation methodology was applied: a shared collection of documents and queries was distributed, participants' runs were received, relevance assessments were generated, and participants' submissions were evaluated. The task generated a new representative web corpus including web pages acquired from a 2021 CommonCrawl and social media content from Twitter and Reddit, along with a new collection of 55 manually generated layperson medical queries and their respective credibility, understandability, and topicality assessments for returned documents. This year's task focused on three subtasks: (i) ad-hoc IR, (ii) weakly supervised IR, and (iii) document credibility prediction. In total, 15 runs were submitted to the three subtasks: eight addressed the ad-hoc IR task, three the weakly supervised IR challenge, and four the document credibility prediction challenge. As in previous years, the organizers have made data and tools associated with the task available for future research and development.
KW - Dimensions of relevance
KW - EHealth
KW - Evaluation
KW - Health records
KW - Information storage and retrieval
KW - Medical informatics
KW - Self-diagnosis
KW - Test-set generation
UR - http://www.scopus.com/inward/record.url?scp=85113557630&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85113557630
SN - 1613-0073
VL - 2936
SP - 751
EP - 769
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021
Y2 - 21 September 2021 through 24 September 2021
ER -