Tuesday, February 1, 2011

DNA SEQUENCE ASSEMBLY AND GENE IDENTIFICATION
DNA sequence assembly refers to aligning and merging fragments of a much longer DNA sequence in order to reconstruct the original sequence. This is needed because DNA sequencing technology cannot read whole genomes in one go, but rather small pieces of between 20 and 1000 bases, depending on the technology used. Typically, the short fragments, called reads, result from shotgun sequencing of genomic DNA or from gene transcripts (ESTs).
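To make the merging step concrete, below is a minimal Python sketch of the core operation: finding the longest suffix-prefix overlap between two reads and joining them. The function names are invented for illustration; real assemblers use error-tolerant scored alignments rather than exact string matching.

def overlap(a, b, min_len=3):
    # Length of the longest suffix of read a that equals a prefix of read b.
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def merge(a, b, min_len=3):
    # Merge two reads if they overlap sufficiently; otherwise return None.
    n = overlap(a, b, min_len)
    return a + b[n:] if n else None

# Two reads drawn from the sequence ACGTTAGCCGAT:
print(merge("ACGTTAGC", "TAGCCGAT"))  # -> ACGTTAGCCGAT

An assembler repeats this kind of operation over millions of reads, while also handling sequencing errors and ambiguous overlaps.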
The problem of sequence assembly can be compared to taking many copies of a book, passing them all through a shredder, and piecing a copy of the book back together from only shredded pieces. The book may have many repeated paragraphs, and some shreds may be modified to have typos. Excerpts from another book may be added in, and some shreds may be completely unrecognizable.

Genome Assembler: The first sequence assemblers began to appear in the late 1980s and early 1990s as variants of simpler sequence alignment programs, built to piece together the vast quantities of fragments generated by automated sequencing instruments called DNA sequencers. As the sequenced organisms grew in size and complexity (from small viruses via plasmids to bacteria and finally eukaryotes), the assembly programs needed increasingly sophisticated strategies to handle:
1. terabytes of sequencing data that need processing on computing clusters;
2. identical and nearly identical sequences (known as repeats), which can, in the worst case, increase the time and space complexity of algorithms exponentially;
3. errors in the fragments from the sequencing instruments, which can confound assembly.
Faced with the challenge of assembling the first larger eukaryotic genomes, the fruit fly Drosophila melanogaster in 2000 and the human genome just a year later, scientists developed assemblers like Celera Assembler and Arachne that were able to handle genomes of 100-300 million base pairs. Subsequent to these efforts, several other groups, mostly at the major genome sequencing centers, built large-scale assemblers, and an open source effort known as AMOS was launched to bring together all the innovations in genome assembly technology under an open source framework.

EST Assembler: EST assembly differs from genome assembly in several ways. The sequences for EST assembly are the transcribed mRNA of a cell and represent only a subset of the whole genome. Although they may look similar at first glance, the underlying algorithmic problems differ between genome and EST assembly. For instance, genomes often have large amounts of repetitive sequences, mainly in the inter-genic parts. Since ESTs represent gene transcripts, they do not contain these repeats. On the other hand, cells tend to have a certain number of genes that are constantly expressed in very high amounts (housekeeping genes), which again leads to the problem of highly similar sequences present in large numbers in the data set to be assembled. Furthermore, genes sometimes overlap in the genome (sense-antisense transcription) and should ideally still be assembled separately. EST assembly is also complicated by features like (cis-) alternative splicing, trans-splicing, single-nucleotide polymorphisms, recoding, and post-transcriptional modification.
In sequence assembly, two different types can be distinguished:
1. de-novo: assembling reads together so that they form a new, previously unknown sequence
2. mapping: assembling reads against an existing backbone sequence, building a sequence that is similar but not necessarily identical to the backbone sequence (a minimal sketch follows this list)
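To illustrate the mapping flavour, here is a small, purely illustrative Python sketch (the function names and the mismatch budget are assumptions, not taken from any real mapper) that places a read at the best-matching position of a backbone sequence:

def hamming(a, b):
    # Number of mismatching bases between two equal-length strings.
    return sum(x != y for x, y in zip(a, b))

def map_read(reference, read, max_mismatches=2):
    # Slide the read along the reference and keep the best position,
    # or return None if no placement stays within the mismatch budget.
    best_pos, best_mm = None, max_mismatches + 1
    for pos in range(len(reference) - len(read) + 1):
        mm = hamming(reference[pos:pos + len(read)], read)
        if mm < best_mm:
            best_pos, best_mm = pos, mm
    return best_pos if best_mm <= max_mismatches else None

reference = "ACGTACGTTAGCCGATTACA"
print(map_read(reference, "GTTAGCCG"))  # exact hit at position 6
print(map_read(reference, "GTTAGGCG"))  # one mismatch, still placed at 6

Production mappers replace the sliding window with indexed data structures, but the principle of anchoring reads against a known backbone is the same.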
In terms of complexity and time requirements, de-novo assemblies are orders of magnitude slower and more memory intensive than mapping assemblies. This is mostly because the assembly algorithm needs to compare every read with every other read, an operation with a complexity of O(n²), though it can be reduced to O(n log n). Referring to the comparison drawn to shredded books in the introduction: while for mapping assemblies one would have a very similar book as a template (perhaps with the names of the main characters and a few locations changed), de-novo assemblies are harder in the sense that one would not know beforehand whether this would become a science book, a novel, a catalogue, etc.
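The usual trick behind that reduction is to avoid comparing all pairs of reads directly: reads are indexed by short subsequences (k-mers), and only reads sharing at least one k-mer become candidates for a full overlap computation. Below is a minimal Python sketch of this seed-and-filter idea, with an arbitrary k and toy reads:

from collections import defaultdict
from itertools import combinations

def candidate_pairs(reads, k=4):
    # Map each k-mer to the set of reads containing it, then report
    # only pairs of reads that share at least one k-mer.
    index = defaultdict(set)
    for i, read in enumerate(reads):
        for j in range(len(read) - k + 1):
            index[read[j:j + k]].add(i)
    pairs = set()
    for hits in index.values():
        pairs.update(combinations(sorted(hits), 2))
    return pairs

reads = ["ACGTTAGC", "TAGCCGAT", "GGGGCCCC"]
print(candidate_pairs(reads))  # {(0, 1)}: only the overlapping pair remains

Only the candidate pairs are then passed to the expensive alignment step, which is what brings the overall cost down from the naive all-against-all comparison.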

Influence of technological changes:
The complexity of sequence assembly is driven by two major factors: the number of fragments and their lengths. While more and longer fragments allow better identification of sequence overlaps, they also pose problems, as the underlying algorithms show quadratic or even exponential complexity with respect to both the number of fragments and their length. And while shorter sequences are faster to align, they also complicate the layout phase of an assembly, as shorter reads are more difficult to place unambiguously among repeats or near-identical repeats.
In the earliest days of DNA sequencing, scientists could only obtain a few short sequences (some dozen bases) after weeks of work in laboratories. Hence, these sequences could be aligned by hand in a few minutes.
In 1975, the dideoxy termination method (also known as Sanger sequencing) was invented, and until shortly after 2000 the technology was improved to a point where fully automated machines could churn out sequences in a highly parallelised mode 24 hours a day. Large genome centers around the world housed complete farms of these sequencing machines, which in turn made it necessary to optimise assemblers for sequences from whole-genome shotgun sequencing projects where the reads
• are about 800–900 bases long
• contain sequencing artifacts like sequencing and cloning vectors (see the clipping sketch after this list)
• have error rates between 0.5 and 10%
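As an aside on the vector artifacts in the list above, a toy Python sketch of vector clipping could look like the following. The vector sequence and the helper name are made up for this example; real preprocessing pipelines rely on error-tolerant alignment rather than exact matching.

def clip_vector(read, vector, min_match=8):
    # If a sufficiently long suffix of the vector appears as a prefix
    # of the read, cut that remnant off; otherwise leave the read alone.
    for n in range(len(vector), min_match - 1, -1):
        if read.startswith(vector[-n:]):
            return read[n:]
    return read

vector = "GATCCTCTAGAGTCGAC"      # made-up cloning vector tail
read = "TAGAGTCGACACGTTAGCCGAT"   # vector remnant + genomic insert
print(clip_vector(read, vector))  # -> ACGTTAGCCGAT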
With the Sanger technology, bacterial projects with 20,000 to 200,000 reads could easily be assembled on one computer. Larger projects, like the human genome with approximately 35 million reads, already required large computing farms and distributed computing.
By 2004/2005, pyrosequencing had been brought to commercial viability by 454 Life Sciences. This new sequencing method generated reads much shorter than those from Sanger sequencing: initially about 100 bases, now 400 bases and expected to grow to 1000 bases by the end of 2010. However, due to the much higher throughput and lower cost compared to Sanger sequencing, the adoption of this technology by genome centers pushed the development of sequence assemblers that could deal with this new type of sequences. The sheer amount of data coupled with technology-specific error patterns in the reads delayed the development of assemblers; at the beginning, in 2004, only the Newbler assembler from 454 was available. Presented in mid-2007, the hybrid version of the MIRA assembler by Chevreux et al. was the first freely available assembler that could assemble 454 reads as well as mixtures of 454 and Sanger reads; using sequences from different sequencing technologies was subsequently coined hybrid assembly.
Ironically, sequencing technology continued to develop in the wrong direction from a sequence assembly point of view. Since 2006, the Solexa technology has been available and is heavily used to generate around 100 million reads per day on a single sequencing machine. Compare this to the 35 million reads of the human genome project, which needed several years to be produced on hundreds of sequencing machines. The downside is that these reads have a length of only 36 bases (expected to grow to 50 bases by the end of 2008), which makes sequence alignment an even more daunting task. Presented at the end of 2007, the SHARCGS assembler by Dohm et al. was the first published assembler used for an assembly with Solexa reads, quickly followed by a number of others.
