SHAPEIT4: An algorithm for large-scale genomic analysis

December 20, 2019

A haplotype is a set of genetic variants that, located side by side on the same chromosome, are transmitted as a single group to the next generation. Examining haplotypes makes it possible to understand the heritability of certain complex traits, such as the risk of developing a disease. Carrying out this analysis, however, usually requires genome analysis of family members (parents and their child), a tedious and expensive process. To overcome this problem, researchers from the Universities of Geneva (UNIGE) and Lausanne (UNIL) and the SIB Swiss Institute of Bioinformatics have developed SHAPEIT4, a powerful computer algorithm that identifies the haplotypes of hundreds of thousands of unrelated individuals very quickly. The results are as detailed as those obtained through family analysis, a process that cannot be conducted on such a large scale. The tool is now available online under an open source license, free to the entire research community. The details are published in Nature Communications.

Nowadays, the analysis of genetic data is becoming increasingly important, particularly in the field of personalized medicine. The number of human genomes sequenced each year is growing exponentially, and the largest databases now cover more than one million individuals. This wealth of data is extremely valuable for better understanding human genetics, whether to determine the genetic contribution to a particular disease or to trace the history of human migration. To be meaningful, however, these big data must be processed computationally. "The processing power of computers remains relatively stable, unlike the ultra-fast growth of genomic big data", says Olivier Delaneau, SNSF professor in the Department of Computational Biology at the UNIL Faculty of Biology and Medicine and at SIB, who led this work. "Our algorithm thus aims to optimize the processing of genetic data in order to absorb this amount of information and make it usable by scientists, despite the gap between its quantity and the comparatively limited power of computers."

Better understanding the role of haplotypes

Genotyping makes it possible to determine an individual's alleles, i.e. the genetic variants inherited from his or her parents. Without knowing the parental genomes, however, we do not know which alleles are transmitted together, and in which combinations. "This information, the haplotypes, is crucial if we really want to understand the genetic basis of human variation", explains Emmanouil Dermitzakis, a professor at the Department of Genetic Medicine and Development at the UNIGE Faculty of Medicine and at SIB, who co-supervised this work. "This is true both for population genetics and for precision medicine."
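The phase ambiguity described above can be made concrete with a toy sketch. An unphased genotype only records how many copies of each allele an individual carries; every heterozygous site can then be assigned to either parental chromosome, so the number of haplotype pairs consistent with the genotype grows exponentially with the number of heterozygous sites. This is a minimal illustration of the problem, not SHAPEIT4's actual method (which resolves the ambiguity statistically across many individuals); the function name and 0/1/2 allele-count encoding are assumptions made here for illustration.

```python
from itertools import product

def possible_haplotype_pairs(genotypes):
    """Enumerate haplotype pairs consistent with unphased genotypes.

    genotypes: list of alternate-allele counts per site (0, 1, or 2).
    Homozygous sites (0 or 2) are unambiguous; each heterozygous
    site (1) can be phased two ways.
    """
    het_sites = [i for i, g in enumerate(genotypes) if g == 1]
    pairs = set()
    # Try every way of placing the alt allele of each het site on
    # the first haplotype (0) or the second (1).
    for assignment in product([0, 1], repeat=len(het_sites)):
        hap0, hap1 = [], []
        it = iter(assignment)
        for g in genotypes:
            if g == 0:
                hap0.append(0); hap1.append(0)
            elif g == 2:
                hap0.append(1); hap1.append(1)
            else:
                a = next(it)
                hap0.append(a); hap1.append(1 - a)
        # Sort the pair so mirror-image phasings count once.
        pairs.add(tuple(sorted([tuple(hap0), tuple(hap1)])))
    return sorted(pairs)

# Two heterozygous sites already admit two distinct phasings.
print(possible_haplotype_pairs([1, 1]))
```

With h heterozygous sites there are 2^(h-1) distinct phasings, which is why family genomes, or the statistical approach taken by SHAPEIT4, are needed to pick the right one.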

To determine the genetic risk of disease, for example, scientists assess whether a genetic variant is more or less frequent in individuals who have developed the disease, in order to determine its role in the disease being studied. "By knowing the haplotypes, we conduct the same type of analysis", says Emmanouil Dermitzakis. "However, we move from a single variant to a combination of many variants, which allows us to determine which allelic combinations on the same chromosome have the greatest impact on disease risk. It is much more accurate!"
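The idea of comparing allelic combinations rather than single variants can be sketched as a simple frequency count. The data below are hypothetical and the code is only an illustration of the principle, not the statistical testing actually used in association studies: once haplotypes are known, one can ask whether a particular combination of alleles is enriched among cases, even when no single site on its own separates cases from controls.

```python
from collections import Counter

def haplotype_counts(haplotypes):
    """Count how often each haplotype (a tuple of alleles) occurs."""
    return Counter(haplotypes)

# Hypothetical phased haplotypes at two sites (0 = ref, 1 = alt).
cases    = [(1, 1), (1, 1), (1, 0), (0, 0)]
controls = [(1, 0), (0, 1), (0, 0), (0, 0)]

case_freq = haplotype_counts(cases)
ctrl_freq = haplotype_counts(controls)

# The (1, 1) combination occurs only among cases: a signal carried
# by the joint arrangement of alleles on one chromosome, which
# single-variant tests examining each site separately can dilute.
for hap in sorted(set(cases) | set(controls)):
    print(hap, case_freq[hap], ctrl_freq[hap])
```

In real studies such counts would feed a formal association test; the point here is only that the unit of comparison becomes the haplotype rather than the individual variant.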

The method developed by the researchers makes it possible to process an extremely large number of genomes, about 500,000 to 1,000,000 individuals, and to determine their haplotypes without knowing their ancestry or progeny, while using standard computing power. The SHAPEIT4 tool has been successfully tested on the 500,000 individual genomes of the UK Biobank, a scientific database developed in the United Kingdom. "We have here a typical example of what big data is", says Olivier Delaneau. "Such a large amount of data makes it possible to build very high-precision statistical models, as long as they can be interpreted without drowning in them."

An open source license for transparency

The researchers have decided to make their tool accessible to all under an open source MIT license: the entire code is available and can be modified at will, according to the needs of researchers. This decision was made mainly for the sake of transparency and reproducibility, as well as to encourage contributions from researchers all over the world. "But we only give access to the analysis tool, under no circumstances to any corpus of data", Olivier Delaneau explains. "It is then up to each researcher to use it on the data they have."

This tool is much more efficient than older tools, as well as faster and cheaper. It also limits the environmental impact of computing: the very powerful machines used to process big data are highly energy-intensive, and reducing their use helps to minimize that impact.

Université de Genève
