Model learns how individual amino acids determine protein function

March 25, 2019

A machine-learning model from MIT researchers computationally breaks down how segments of amino acid chains determine a protein's function, which could help researchers design and test new proteins for drug development or biological research.

Proteins are linear chains of amino acids, connected by peptide bonds, that fold into exceedingly complex three-dimensional structures, depending on the sequence and physical interactions within the chain. That structure, in turn, determines the protein's biological function. Knowing a protein's 3-D structure, therefore, is valuable for, say, predicting how proteins may respond to certain drugs.

However, despite decades of research and the development of multiple imaging techniques, we know only a very small fraction of possible protein structures -- tens of thousands out of millions. Researchers are beginning to use machine-learning models to predict protein structures based on their amino acid sequences, which could enable the discovery of new protein structures. But this is challenging, as diverse amino acid sequences can form very similar structures. And there aren't many structures on which to train the models.

In a paper being presented at the International Conference on Learning Representations in May, the MIT researchers develop a method for "learning" easily computable representations of each amino acid position in a protein sequence, initially using 3-D protein structure as a training guide. Researchers can then use those representations as inputs that help machine-learning models predict the functions of individual amino acid segments -- without ever again needing any data on the protein's structure.

In the future, the model could be used for improved protein engineering, by giving researchers a chance to better zero in on and modify specific amino acid segments. The model might even steer researchers away from protein structure prediction altogether.

"I want to marginalize structure," says first author Tristan Bepler, a graduate student in the Computation and Biology group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). "We want to know what proteins do, and knowing structure is important for that. But can we predict the function of a protein given only its amino acid sequence? The motivation is to move away from specifically predicting structures, and move toward [finding] how amino acid sequences relate to function."

Joining Bepler is co-author Bonnie Berger, the Simons Professor of Mathematics at MIT with a joint faculty position in the Department of Electrical Engineering and Computer Science, and head of the Computation and Biology group.

Learning from structure

Rather than predicting structure directly -- as traditional models attempt -- the researchers encoded protein structural information directly into the representations. To do so, they use known structural similarities between proteins to supervise the model as it learns the functions of specific amino acids.

They trained their model on about 22,000 proteins from the Structural Classification of Proteins (SCOP) database, which contains thousands of proteins organized into classes by similarities of structures and amino acid sequences. For each pair of proteins, they calculated a real similarity score, meaning how close they are in structure, based on their SCOP class.

The researchers then fed their model random pairs of proteins -- their structures and amino acid sequences -- with each sequence converted into a numerical representation, called an embedding, by an encoder. In natural language processing, embeddings are vectors of several hundred numbers, each corresponding to a letter or word in a sentence. The more similar two embeddings are, the more likely the corresponding letters or words are to appear in similar contexts.
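As a toy illustration of what an encoder produces -- one vector per amino acid position -- here is a minimal Python sketch. The alphabet ordering, embedding size, and fixed random projection table are all invented for illustration; the real encoder is a learned neural network.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
EMBED_DIM = 100                        # illustrative embedding size

# Hypothetical encoder: a fixed random vector per residue type.
# This only shows the *shape* of the encoder's output -- one
# embedding per amino acid position -- not how it is learned.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(AMINO_ACIDS), EMBED_DIM))

def encode(sequence: str) -> np.ndarray:
    """Map an amino acid sequence to a (length, EMBED_DIM) array."""
    indices = [AMINO_ACIDS.index(aa) for aa in sequence]
    return embedding_table[indices]

z = encode("MKTAYIAK")
print(z.shape)  # (8, 100): one embedding per position
```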

In the researchers' work, each embedding in the pair contains information about how similar each amino acid sequence is to the other. The model aligns the two embeddings and calculates a similarity score to then predict how similar their 3-D structures will be. Then, the model compares its predicted similarity score with the real SCOP similarity score for their structure, and sends a feedback signal to the encoder.
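One plausible way to align two sets of per-position embeddings and reduce them to a single similarity score is a soft alignment over pairwise distances, sketched below. The L1 distance and the symmetric weighting scheme are assumptions for illustration, not a transcription of the paper's exact scoring function.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def alignment_similarity(z1, z2):
    """Soft-align two per-position embedding arrays of shape
    (len1, dim) and (len2, dim); return a scalar similarity
    (higher means the embeddings align more closely)."""
    # Negative L1 distance between every pair of positions.
    sim = -np.abs(z1[:, None, :] - z2[None, :, :]).sum(axis=-1)
    # Soft alignment weights in both directions, symmetrized.
    a = softmax(sim, axis=1)   # each position of z1 over z2
    b = softmax(sim, axis=0)   # each position of z2 over z1
    weights = a + b - a * b
    return (weights * sim).sum() / weights.sum()
```

The resulting scalar plays the role of the predicted structural similarity that gets compared against the real SCOP similarity score during training.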

Simultaneously, the model predicts a "contact map" for each embedding, which essentially says how far away each amino acid is from all the others in the protein's predicted 3-D structure -- that is, do they make contact or not? The model also compares its predicted contact map with the known contact map from SCOP, and sends a feedback signal to the encoder. This helps the model learn exactly where amino acids fall in a protein's structure, which further refines its estimate of each amino acid's function.
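A contact-map head can be sketched as a simple pairwise scorer over the embeddings, with a cross-entropy loss against the known binary map serving as the feedback signal. The inner-product scoring below is a placeholder for whatever mapping the trained model actually learns.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_contact_map(z, bias=-1.0):
    """Predict a (length x length) matrix of contact probabilities
    from per-position embeddings z of shape (length, dim)."""
    # Placeholder pairwise score: scaled inner product plus a bias.
    scores = (z @ z.T) / z.shape[1] + bias
    return sigmoid(scores)

def contact_loss(predicted, known):
    """Cross-entropy between predicted contact probabilities and the
    known binary contact map -- the signal sent back to the encoder."""
    eps = 1e-9
    return -np.mean(known * np.log(predicted + eps)
                    + (1 - known) * np.log(1 - predicted + eps))
```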

Basically, the researchers train their model by asking it to predict if paired sequence embeddings will or won't share a similar SCOP protein structure. If the model's predicted score is close to the real score, it knows it's on the right track; if not, it adjusts.

Protein design

In the end, for one inputted amino acid chain, the model produces one numerical representation, or embedding, for each amino acid position in the sequence. Machine-learning models can then use those sequence embeddings to accurately predict each amino acid's function based on its predicted 3-D structural "context" -- its position in the structure and its contacts with other amino acids.

For instance, the researchers used the model to predict which segments, if any, pass through the cell membrane. Given only an amino acid sequence, the researchers' model predicted all transmembrane and non-transmembrane segments more accurately than state-of-the-art models.
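Downstream use looks like ordinary supervised learning with the embeddings as features. Below, a hypothetical per-position linear classifier labels each residue as transmembrane or not; the weight vector `w` and bias `b` stand in for parameters that would be fit on labeled sequences.

```python
import numpy as np

def predict_transmembrane(embeddings, w, b=0.0, threshold=0.5):
    """Label each position 1 (transmembrane) or 0 (not) from its
    embedding, using a linear classifier with weights w and bias b."""
    # Per-position probability via a logistic function of the score.
    probs = 1.0 / (1.0 + np.exp(-(embeddings @ w + b)))
    return (probs >= threshold).astype(int)
```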

Next, the researchers aim to apply the model to more prediction tasks, such as figuring out which sequence segments bind to small molecules, which is critical for drug development. They're also working on using the model for protein design. Using their sequence embeddings, they can predict, say, at what wavelengths a protein will fluoresce.

"Our model allows us to transfer information from known protein structures to sequences with unknown structure. Using our embeddings as features, we can better predict function and enable more efficient data-driven protein design," Bepler says. "At a high level, that type of protein engineering is the goal."

Berger adds: "Our machine learning models thus enable us to learn the 'language' of protein folding -- one of the original 'Holy Grail' problems -- from a relatively small number of known structures."

Written by Rob Matheson, MIT News Office

Related links

PAPER: "Learning protein sequence embeddings using information from structure."
