Using Knowledge Graphs and Agentic LLMs for Factuality Text Assessment and Improvement

Linda Kwan, Pouya G. Omran, Kerry Taylor

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper addresses the challenge of assessing and enhancing the factual accuracy of texts generated by large language models (LLMs). Existing methods often rely on self-reflection or external knowledge sources, validating statements individually and rigidly, and thus miss a holistic view. We propose a novel approach that uses a comprehensive knowledge graph (KG), such as Wikidata, to assess and improve the factuality of generated texts. Our method dynamically retrieves and integrates relevant facts during the assessment process, providing a more interconnected and accurate evaluation. Integrating the KG with LLM capabilities enhances overall factual integrity, leading to more reliable AI-generated content. Our results demonstrate improvements in factual accuracy, highlighting the effectiveness of our approach.
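
To illustrate the kind of KG-backed fact retrieval the abstract describes, the following minimal Python sketch queries the public Wikidata SPARQL endpoint and compares a claimed fact against what the KG records. This is not the authors' implementation; the function name capital_of, the example claim, and the choice of the capital property (P36) and entity Australia (Q408) are illustrative assumptions.

    # Illustrative sketch only: retrieve a relevant fact from Wikidata and
    # check an LLM-generated claim against it. Not the paper's actual code.
    import requests

    WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

    def capital_of(country_qid: str) -> list[str]:
        """Return the English label(s) of the capital(s) Wikidata records for a country."""
        query = f"""
        SELECT ?capitalLabel WHERE {{
          wd:{country_qid} wdt:P36 ?capital .
          ?capital rdfs:label ?capitalLabel .
          FILTER(LANG(?capitalLabel) = "en")
        }}
        """
        resp = requests.get(
            WIKIDATA_SPARQL,
            params={"query": query, "format": "json"},
            headers={"User-Agent": "factuality-check-sketch/0.1"},
            timeout=30,
        )
        resp.raise_for_status()
        bindings = resp.json()["results"]["bindings"]
        return [b["capitalLabel"]["value"] for b in bindings]

    # Example: check the (false) claim "Sydney is the capital of Australia" (Q408 = Australia).
    claimed_capital = "Sydney"
    kg_capitals = capital_of("Q408")
    print(f"KG capital(s): {kg_capitals}; claim supported: {claimed_capital in kg_capitals}")

In the approach described by the paper, such retrieved facts would be fed back to an agentic LLM so that unsupported statements can be revised rather than merely flagged.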

Original language: English
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 3828
Publication status: Published - 2024
Event: ISWC 2024 Posters, Demos and Industry Tracks: From Novel Ideas to Industrial Practice (ISWC-Posters-Demos-Industry 2024), Baltimore, United States
Duration: 11 Nov 2024 – 15 Nov 2024
