As early as the 1920s, scientists uncovered clues that mitochondria might play a role in causing cancer. But, as with the DNA housed in the cell nucleus, scientists throughout most of the 20th century lacked the research tools needed to systematically study the chemical composition of the mitochondrial genome, or complete set of genes, and its association with human disease.
In the early 1980s, scientists in England performed the then-Herculean feat of sequencing the complete human mitochondrial genome. The genome consisted of 16,569 base pairs, or units, of DNA and encoded 37 contiguous genes. But because the sequencing tools of the day were balky and expensive, much of the subsequent research progressed slowly or stalled.
By 1996, new technology brought new opportunity. Scientists at the company Affymetrix in Santa Clara, Calif., developed the first mitochondrial sequencing microarray. Roughly the size of a quarter, the silicon chip carried up to 135,000 short bits of DNA sequence, lithographically arrayed on its surface, that collectively spanned most of a single strand of mitochondrial DNA.
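To make that tiling idea concrete, here is a minimal Python sketch of how short probes might be cut from a reference strand so that, together, they span it. The probe length, step size, toy sequence, and function name are invented for illustration and do not reflect Affymetrix's actual probe design:

```python
# Toy sketch: slice short, overlapping probes along a reference strand so
# that, collectively, they cover it. Parameters here are arbitrary.

def tile_probes(reference: str, length: int = 8, step: int = 4) -> list[str]:
    """Cut fixed-length probes from the reference at a regular step."""
    return [reference[i:i + length]
            for i in range(0, len(reference) - length + 1, step)]

reference = "GATCACAGGTCTATCACCCT"  # toy stand-in for one mtDNA strand
for probe in tile_probes(reference):
    print(probe)
# GATCACAG, ACAGGTCT, GTCTATCA, ATCACCCT
```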
The chip exploited the fact that DNA exists naturally as a double-stranded molecule. By gathering mitochondrial DNA and breaking it into short, single-stranded bits, the scientists showed that each bit would pair, or hybridize, with its complementary sequence arrayed on the chip. By crude analogy, each bit is like a unique magnet that sticks to its mirror image.
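A toy Python sketch of that pairing rule follows, assuming standard Watson-Crick complements and antiparallel strands; the fragment, probe, and function name are hypothetical and not part of any real microarray software:

```python
# A single-stranded fragment "sticks" to a probe only if every base is the
# Watson-Crick complement of its partner on the opposite, antiparallel strand.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def hybridizes(fragment: str, probe: str) -> bool:
    """Return True if the fragment pairs base-for-base with the probe
    (the probe is read in reverse because the strands run antiparallel)."""
    if len(fragment) != len(probe):
        return False
    return all(COMPLEMENT[b] == p for b, p in zip(fragment, reversed(probe)))

print(hybridizes("ATGC", "GCAT"))  # True: perfect complement
print(hybridizes("ATGC", "GGAT"))  # False: one mismatch blocks pairing
```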
But if the extracted DNA contained mutations or other variations from the standard consensus sequence annealed to the chip, the bits carrying those changes would appear abnormal to the specially designed computer software that reads the chip. The software would report not only the identity of the altered bases but also their position along the mitochondrial genome.
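A hedged sketch of that read-out step, assuming the software simply compares the bases called from the chip against the consensus sequence; the sequences and function name below are invented for illustration only:

```python
# Compare chip-called bases against the consensus and report both the
# identity and the position of each change.

def find_variants(consensus: str, sample: str):
    """Yield (position, reference_base, sample_base) for every mismatch."""
    for pos, (ref, obs) in enumerate(zip(consensus, sample), start=1):
        if ref != obs:
            yield pos, ref, obs

consensus = "GATCACAGGT"   # toy stand-in for the consensus sequence
sample    = "GATCATAGGT"   # same sequence carrying one mutation

for pos, ref, obs in find_variants(consensus, sample):
    print(f"position {pos}: {ref} -> {obs}")  # position 6: C -> T
```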
Source: NIH/National Institute of Dental and Craniofacial Research