Advances in bio-technologies and computer software have helped make genome sequencing much more common than in the past. But still in question are both the accuracy of different sequencing methods and the best ways to evaluate these efforts. Now, computer scientists have devised a tool to better measure the validity of genome sequencing.
The method, described in the journal PLOS ONE, allows for the evaluation of a wide range of genome-sequencing procedures by tracking a small group of key statistical features in the basic structure of the assembled genome. A sequence-assembly algorithm lays out individual short reads (strings of DNA's four nucleic-acid bases sampled from the target genome) and pieces them together into the complete genome sequence, much like a complex jigsaw puzzle. The method uses techniques from statistical inference and learning theory to select the most significant features. Surprisingly, it finds that many features human experts considered most important were in fact highly misleading.
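The jigsaw-puzzle analogy can be made concrete with a toy example. The sketch below is purely illustrative and is not the researchers' tool or a production assembler: it greedily merges the pair of reads with the longest suffix-prefix overlap until no overlaps remain, which is the simplest form of overlap-based assembly.

```python
# Illustrative sketch (not the published method): greedy assembly of short
# reads by repeatedly merging the pair with the longest suffix-prefix
# overlap, like fitting jigsaw pieces together.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)  # candidate match position
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads, min_len=3):
    """Merge the best-overlapping pair until no overlaps remain."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)  # (overlap length, index of a, index of b)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            break  # no overlaps left; remaining reads stay as separate contigs
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)]
        reads.append(merged)
    return reads

# Three overlapping reads chain into a single contig.
contigs = greedy_assemble(["ATGGCC", "GCCTTA", "TTACGT"])
print(contigs)  # ['ATGGCCTTACGT']
```

Real assemblers face repeats, sequencing errors, and millions of reads, which is exactly why different assemblers make different trade-offs and why evaluating their output is hard.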
The work was conducted by researchers at New York University's Courant Institute of Mathematical Sciences, NYU School of Medicine, Sweden's KTH Royal Institute of Technology, and Cold Spring Harbor Laboratory.
Current evaluation methods for genome sequencing are typically imprecise. They rely on what amounts to "crowd sourcing," with scientists weighing in on the accuracy of a sequencing method. Other evaluations make apples-to-oranges comparisons, which limits their value.
In the PLOS ONE work, the researchers expanded on an earlier system they created, the Feature-Response Curve (FRCurve), which offers a global picture of how genome-sequencing methods, or assemblers, deal with different regions and different structures in a large, complex genome. Specifically, it points out where an assembler may have traded off one kind of quality measure at the expense of another.
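To give a feel for the idea, here is a simplified, hypothetical sketch in the spirit of a feature-response curve (the data and the exact procedure are illustrative assumptions, not the published definition): for each budget of tolerated "features" (signals of possible assembly errors), it reports how much of the genome the largest contigs cover while staying within that budget.

```python
# Illustrative, simplified feature-response-style computation.
# contigs: list of (length, feature_count) pairs, where feature_count is a
# hypothetical tally of suspicious signals in that contig.

def frc_points(contigs, genome_size):
    """Return (feature_threshold, genome_coverage) pairs for the curve."""
    # Consider contigs largest-first, the order in which they matter most.
    ordered = sorted(contigs, key=lambda c: c[0], reverse=True)
    total_features = sum(f for _, f in ordered)
    points = []
    for threshold in range(total_features + 1):
        covered = features = 0
        for length, fcount in ordered:
            if features + fcount > threshold:
                break  # adding this contig would exceed the feature budget
            features += fcount
            covered += length
        points.append((threshold, covered / genome_size))
    return points

# Hypothetical assembly of a 10 kb genome: (contig length, feature count).
curve = frc_points([(5000, 1), (3000, 0), (2000, 2)], genome_size=10000)
print(curve)  # [(0, 0.0), (1, 0.8), (2, 0.8), (3, 1.0)]
```

A curve that rises quickly indicates an assembler that achieves high genome coverage with few suspicious features; a slow rise reveals coverage bought at the cost of likely errors, which is the kind of trade-off the FRCurve is designed to expose.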
Contact: James Devitt, New York University