James F. Eliason, Ph.D., Asterand plc
The path to identification of new and improved drugs for the treatment of human diseases often begins with the identification of novel gene targets through RNA expression analysis. Most high-throughput gene expression profiling studies use high-density microarrays, which require RNA preparations of the highest purity and integrity. Additional methods for target identification and validation also exist, which may not require fully intact RNA. This is especially valuable for gene expression studies in human tissues, as optimal conditions for the preservation of RNA integrity cannot always be met when samples are excised surgically or obtained from post-mortem donors. It is therefore important to have reliable methods for analyzing RNA extracted from human tissues and to understand the level of RNA integrity in each sample, so that the appropriate experimental design may be employed for target evaluation.
RNA purity is generally determined spectrophotometrically. The ratio of absorbances at 260 nm and 280 nm (A260:A280) indicates the degree of protein contamination, while the A260:A230 ratio is used to identify contamination by organic solvents. Both ratios should be >1.8 for a pure preparation.
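As a minimal sketch of how these acceptance checks might be scripted (the function name and return format are illustrative, not from any particular instrument's software), the purity ratios and the standard concentration estimate for single-stranded RNA can be computed directly from blank-corrected absorbance readings:

```python
def rna_purity(a260, a280, a230, threshold=1.8):
    """Evaluate spectrophotometric purity of an RNA preparation.

    a260, a280, a230 -- blank-corrected absorbance readings at
    260, 280, and 230 nm, respectively.
    """
    ratio_280 = a260 / a280  # low values suggest protein contamination
    ratio_230 = a260 / a230  # low values suggest organic-solvent/salt carryover
    # By convention, an A260 of 1.0 corresponds to ~40 ug/mL of
    # single-stranded RNA (for a 1 cm path length).
    conc_ug_per_ml = a260 * 40.0
    passes = ratio_280 > threshold and ratio_230 > threshold
    return ratio_280, ratio_230, conc_ug_per_ml, passes


# Example: a clean prep with A260 = 2.0, A280 = 1.0, A230 = 1.0
print(rna_purity(2.0, 1.0, 1.0))  # (2.0, 2.0, 80.0, True)
```

Both ratios must clear the threshold; a preparation with a good A260:A280 but a depressed A260:A230 would still be flagged.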
RNA is rapidly digested by RNase enzymes, which are nearly ubiquitous, into shorter fragments that can confound experimental results. It is therefore important to test RNA integrity. Traditionally, this has been done by agarose gel electrophoresis, staining the RNA with either ethidium bromide or SYBR Green dye. In the past, the ratio between the ribosomal bands (28S:18S) was viewed as the primary indicator of RNA integrity, with a ratio of 2.0 considered typical of high-quality intact RNA. Recent widespread use of the Agilent 2100 Bioanalyzer has shown that this measure is in fact a very poor indicator of RNA integrity.