Leveraging ancestral sequence reconstruction for protein representation learning

D. S. Matthews, M. A. Spence*, A. C. Mater, J. Nichols, S. B. Pulsford, M. Sandhu, J. A. Kaczmarski, C. M. Miton, N. Tokuriki, C. J. Jackson*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Protein language models (PLMs) convert amino acid sequences into the numerical representations required to train machine learning models. Many PLMs are large (>600 million parameters) and trained on a broad span of protein sequence space. However, these models have limitations in terms of predictive accuracy and computational cost. Here we use multiplexed ancestral sequence reconstruction to generate small but focused functional protein sequence datasets for PLM training. Compared to large PLMs, this local ancestral sequence embedding produces representations with higher predictive accuracy. We show that due to the evolutionary nature of the ancestral sequence reconstruction data, local ancestral sequence embedding produces smoother fitness landscapes, in which protein variants that are closer in fitness value become numerically closer in representation space. This work contributes to the implementation of machine learning-based protein design in real-world settings, where data are sparse and computational resources are limited.
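To make the downstream use of such representations concrete, the sketch below shows one way fixed-length variant embeddings, for example mean-pooled outputs of a small PLM trained on ancestral sequence reconstruction data, could feed a supervised fitness predictor. This is an illustration only, not the authors' code: the embedding step is replaced by a random placeholder array, and the fitness labels, ridge regressor, and Spearman evaluation are assumptions chosen for the example.

```python
# Minimal sketch (assumptions, not the published method): variants are assumed
# to already be embedded into fixed-length vectors; a simple regressor is then
# fit to predict fitness from those representations.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder embeddings: n_variants x embedding_dim. In practice these would
# come from pooling per-residue representations of each variant sequence.
n_variants, embedding_dim = 500, 128
X = rng.normal(size=(n_variants, embedding_dim))

# Placeholder fitness labels standing in for hypothetical assay measurements.
y = X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=n_variants)

# Hold out a test set of variants and fit a ridge regressor on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# Rank correlation between predicted and measured fitness on held-out variants.
rho, _ = spearmanr(y_test, model.predict(X_test))
print(f"Spearman correlation on held-out variants: {rho:.2f}")
```

A smoother representation space, in which variants with similar fitness lie closer together numerically, would typically make this kind of regression easier to fit from sparse data.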

Original language: English
Article number: 1914
Pages (from-to): 1542-1555
Number of pages: 14
Journal: Nature Machine Intelligence
Volume: 6
Issue number: 12
DOIs
Publication status: Published - Dec 2024
