Exploiting the Life Science Data Explosion to Speed New Drug Discovery

Turn Massive Amounts of Data into Gems of Knowledge Using discoveryHub™


Abstract
This paper discusses key problems and opportunities created by the explosion in both the volume and the complexity of life science data available today, and presents a strategic and technological approach to data integration that enables rapid ‘harvesting’ and analysis of critical scientific data in order to accelerate drug discovery timelines and significantly increase drug R&D productivity. Anyone involved in drug research and discovery will benefit from the insights and questions contained herein. It is our goal at GE Healthcare Informatics to eliminate the data integration bottleneck impacting today’s biotechnology and pharmaceutical companies by providing a flexible, extensible framework for the access and integration of all life science data on a global scale.


Contents
Abstract
Contents
Introduction
Background – Understanding the Problem
Powerful New Observational Techniques Transform Science
Containing Complexity is Key
The Solution Space
Traditional Solutions
GE Healthcare Informatics Approach
Innovative yet Simple Architecture
Beyond Database Technology
Why is this important to Biotechnology Companies?
Making Biological Data Accessible
A Manageable Data Asset
Simplifying Data Complexity
Comparative Examples
Example: discoveryHub Approach to Relational Database Population
The Relational vs. Non-Relational Challenge
Relational Approach
Nested Relational Approach
Example: Using Public Data, While Preserving Privacy
Data Integration Across Private / Cached Public Sources
Summary and Conclusion


Introduction
Life science researchers are experiencing a “data explosion” due to the vast amounts of raw data being produced both by public sources and by high-throughput sequencing and other industrial-scale technologies used in-house. This exponential growth is expected to continue; in fact, genomic data alone is now doubling every 12 months according to UBS Warburg. From a scientific perspective, this wealth of data creates exciting and unprecedented opportunities for new drug discovery. But turning this data into knowledge quickly and effectively has proven to be a serious challenge and a significant drain on drug R&D productivity.

Ironically, huge gains in efficiency at the “front end” of the discovery pipeline have created large “downstream” inefficiencies, because the data cannot be accessed, integrated, and analyzed quickly enough to meet the demands of drug R&D. The industry has outgrown traditional proprietary data capture and integration methods, and traditional “big IT” approaches solve only part of the problem. First-generation integration solutions centered on the concept of local repositories (silos, warehouses) have not scaled well, are costly to maintain, and are ultimately limited in long-term usefulness.

The discoveryHub product from GE Healthcare Informatics presents an alternative approach. The discoveryHub is a robust and powerful data integration software platform specially designed to handle the volume, diversity and complexity of life science data while fitting easily into existing IT infrastructures. The discoveryHub platform enables seamless, live access to the widely disparate formats, structures and geographic locations characteristic of today’s life science data sources from a centralized, mediated “hub”.

With discoveryHub you can shift the model from “access, integrate and store” to “access, integrate and use” because it enables direct access and manipulation of disparate sources in their native forms. This allows scientists to focus on applying their domain expertise to transform information into knowledge, rather than on the mechanisms for accessing information. From a business perspective, resources are more effectively leveraged. The combined result is increased productivity, reduced risk and accelerated time-to-market for critical drug discovery initiatives.

Market Need
It has been widely stated and tacitly recognized that one of the biggest challenges in life science informatics and drug discovery today is data access and integration. This issue has become a major bottleneck to R&D productivity for many biotechnology and pharmaceutical companies. Biological data sources are constantly changing, geographically distributed, diverse in data types and complex in structure. With the unprecedented growth of scientific data, the challenge has now become managing the complexity in order to allow researchers efficient access to the underlying information. This process is critical for turning data into knowledge - the crown jewels of drug discovery. GE Healthcare Informatics uniquely enables companies to fully exploit this valuable asset by providing a robust data integration platform to underpin and thereby facilitate all drug research and development initiatives. IDC research predicts that by 2006, $38B will be spent on IT for life sciences.


Powerful New Observational Techniques Transform Science
Breakthroughs in genomics, proteomics, instrumentation and related technologies have created unprecedented abilities to observe, collect and generate data. These advances are transforming the life sciences from small-scale, hypothesis-driven experimental sciences into large-scale, data- and discovery-driven knowledge factories. This transformation is in turn driving exponential growth in available data and thereby creating unprecedented opportunities for new drug discovery for those companies that can fully exploit the wealth of information. So why hasn’t every company and research organization taken advantage of this? Existing infrastructures and cultures are not yet prepared to support or exploit this rate of growth. The integration and transformation of data into information, and of information into knowledge, is the key to realizing the full promise of in-silico discovery.

This kind of exponential growth, while a new phenomenon in the life sciences arena, is familiar to other high-tech fields. To realize the promise of ever decreasing IC geometries and increasing gate densities, for example, new design disciplines and automated synthesis techniques had to be created. Achieving greater levels of integration and ever faster logic required blurring of the borders between traditional engineering disciplines. Applying similar concepts to life sciences today will enable the acceleration opportunity that the wealth of data creates.

Containing Complexity is Key
To understand the barriers to life science data integration, we must consider not only the volume of data now available, but also its expanding complexity. Biological data often does not fit easily into traditional object or relational representations, and so numerous methods have been created to represent its multi-dimensional, nested structure. The result is many very different representations. Attempting to correlate data between such sources creates a complexity explosion.
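To make the nesting concrete, the sketch below models a single sequence entry in Java. The type and field names are illustrative assumptions, loosely following a GenBank-style record rather than any particular source's schema.

```java
// Illustrative only: a simplified, GenBank-like nested entry. Each entry holds a
// variable number of features, and each feature holds a variable number of
// qualifiers, so the structure does not map cleanly onto flat rows and columns.
import java.util.List;
import java.util.Map;

record Qualifier(String name, String value) {}

record Feature(String type, int start, int end, List<Qualifier> qualifiers) {}

record SequenceEntry(String accession,
                     String organism,
                     Map<String, String> annotations,
                     List<Feature> features,
                     String residues) {}
```

Even this small structure, flattened into relational tables, already requires one table per level plus synthetic keys linking them, which is exactly the complexity explosion described above.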

Traditional attempts to integrate these complex sources have centered on extracting, transforming and ultimately warehousing the combined data using relational and/or object databases. This approach was initially useful in specialized applications, but in the general case the source data fits these models poorly, and the techniques become the logical equivalent of forcing a square peg into a round hole.

Further cluttering the landscape, researchers currently depend on proprietary “legacy” systems for warehousing, accessing and using their data. Huge investments have been made in applications that depend on the existing infrastructure, while these applications are constrained by the underlying limitations of the infrastructure. Key to moving forward is the ability to adopt incremental integration and access technology that allows the organization to overcome the limitations inherent in the underlying infrastructure while leveraging the huge investment in their existing research data and applications.

Containing complexity is the key to realizing scalable integration solutions. There is a way to manage this complexity effectively through a shift in the model: by leaving existing sources in their original form, you eliminate the need to convert or to “normalize” the form, enabling integration of different sources in a seamless, ad-hoc manner. This provides researchers and bioinformaticists with a simplified view of the data, and eliminates the need for intermediate data stores. A coherent data integration platform that is agnostic to the underlying “store” enables such a shift: for example, allowing simultaneous access to local (proprietary) sources, legacy systems and up-to-date public sources from a central, mediated interface. The ability to leverage existing systems into more scalable solutions, with incremental deployment, greatly reduces the risk of adopting new systems and can, as we will demonstrate, have a dramatic impact on drug R&D productivity.


The Solution Space
Traditional Solutions
Traditional data integration solutions depend on a great deal of “custom coding” to create local data silos of various forms. Each approach has pros and cons, but all share some inherent limitations and typically do not scale well. In general, scientists and bioinformaticists have been forced to hand code data access and integration tools in order to conduct scientific queries for their research. This vertical approach has created huge IT inefficiencies which slow the discovery process. Further, the result is a limited database that rapidly becomes out of date as data sources change and expand. Issues of data staleness, consistency, lack of flexibility and ultimately performance limit the growth of such approaches.

Traditional relational and object database systems require compromises in the structure of the data model, and source data often does not fit these structures naturally. Emerging database technologies, such as XML databases, address some structural limitations, but they imply a substantial conversion investment and still result in large local silos that share the limitations inherent in a warehousing approach. What is needed is a truly adaptive integration platform that can deal with data sources in their native forms, efficiently and transparently.

GE Healthcare Informatics Approach
GE Healthcare Informatics has created a solution from the ground up that supports virtually any scientific data source. The discoveryHub platform offers an open, embeddable architecture, designed to provide the foundation for breakthrough flexibility and performance for data intensive life science applications.


The discoveryHub architecture provides direct access to any source, internal or external. It also allows existing applications to be integrated into the query processing chain. This direct approach and flexibility, coupled with a well-designed set of access APIs, provide a simple-to-use platform for application development and incremental deployment.

Innovative yet Simple Architecture
With the discoveryHub product, scientists and their IT support organizations get a complete biological data integration platform with a familiar SQL interface. On the back end, discoveryHub provides a powerful engine that efficiently accesses, integrates and transforms scientific data from both internal and external sources in a secure and private manner, without prior conversion. On the front end, the application programming interfaces (APIs) allow user interfaces to be developed in Java, JavaScript or .NET, or third-party applications such as Spotfire™ to be used.
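As a rough illustration of what the combination of a SQL-style interface and a Java API could look like in application code, the sketch below uses standard JDBC. Whether discoveryHub actually ships a JDBC driver is not stated in this paper, and the driver URL, credentials, and the view and column names are assumptions made purely for illustration.

```java
// Illustrative sketch only: the connection URL, credentials, and the
// "integrated_target_view" below are hypothetical, not documented product details.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HubQueryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection to a central discoveryHub server.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:discoveryhub://hub.example.com:9000", "scientist", "secret");
             Statement stmt = conn.createStatement();
             // Hypothetical integrated view joining an in-house assay source
             // with a cached public sequence source.
             ResultSet rs = stmt.executeQuery(
                     "SELECT accession, gene_symbol, assay_result " +
                     "FROM integrated_target_view WHERE organism = 'Homo sapiens'")) {
            while (rs.next()) {
                System.out.printf("%s\t%s\t%s%n",
                        rs.getString("accession"),
                        rs.getString("gene_symbol"),
                        rs.getString("assay_result"));
            }
        }
    }
}
```

The point is only that, from the application's perspective, an integrated view spanning several native sources is queried like any other table.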


Beyond Database Technology
discoveryHub is an access technology, not a database. It is able to work with arbitrarily structured data “ad-hoc”, combining disparate sources without the need to normalize the structure. Using the discoveryHub tools, simplified and integrated views can be created which present information in the form most relevant for a particular application. Further, discoveryHub provides access to any external algorithm or application as part of the integrated view. The discoveryHub platform provides compatible access and export capability to the most popular object and relational database management systems on the market.

Because you are not forced to “coerce” or flatten data into a particular form in order to use it, you need not “convert and store”, but can instead access sources “live”. discoveryHub thus circumvents issues of data currency when accessing dynamic sources. Yet, because it is inherently agnostic about physical representations, existing “legacy” silos remain accessible as “yet another data source” to discoveryHub.

So, in real-world settings, discoveryHub offers a platform and tool set that can be used for direct access and integration, is embeddable into life science applications, and provides an essential element for building a scalable life science data management infrastructure. discoveryHub accomplishes this through a series of simple yet powerful APIs that provide easy access to the core engine, freeing the software developer to focus on the scientific application rather than on data integration. Products built using the discoveryHub APIs inherit independence from the underlying representation and access infrastructure. Thus, discoveryHub is well suited for integration into an existing infrastructure such as an application framework. Included are application services components that make discoveryHub available via standard web services interfaces such as SOAP, EJB, servlet instantiation, etc.

Why is this important to Biotechnology Companies?
The life sciences are experiencing an industrialization of what were previously cottage industries. Where it was once acceptable to depend solely on data from your own lab, it is now essential to become part of a global data community. The need to manage data in a secure, organized and controlled manner has become a major issue. Quite simply, hard coded, hand managed systems are no longer good enough. Likewise, proprietary ‘integrated warehouses’ become cumbersome and costly to manage. Such solutions simply do not scale and do not meet the ever-increasing pace of drug discovery.

Making Biological Data Accessible
In an increasingly heterogeneous world, we find data sources crossing the bounds of technology. While this has been a reality in the IT world for some time, the requirements for data integration in the life sciences arena differ from traditional IT requirements in several important ways. The traditional IT problem is characterized by the need to deal with data held in a variety of different systems within the organization, each a controlled system. In the life sciences space, data must be integrated from internal and external sources, where the external sources are public and not under the control of the using organization. Public sources are, by their nature, dynamic: new data is constantly added and existing data updated. To complicate the picture further, the methods available to access this data are often unstable, subject to revision by the controlling agency in response to its own changing requirements.

A variety of methods are needed to make this data accessible. First, you need an architecture that separates the physical storage and access mechanisms from the usability issues. Second, you need a consistent mediation point within the architecture that accepts a single ‘language’, which can be leveraged to query disparate data sources efficiently. Standard IT-based query languages are not equipped with the depth of syntax required to deal with the intricacies of the complex life science data environment. To address these issues, GE Healthcare Informatics has created a hybrid query definition language called sSQL – based on SQL standards, enhanced to allow complex queries on arbitrarily structured and nested sources to be defined simply.
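The sSQL grammar itself is not reproduced in this paper. Purely as a hypothetical sketch of the idea, the query string below uses a dotted path notation to project fields out of repeated nested elements without first flattening the source; the actual sSQL syntax may differ substantially.

```java
// Hypothetical illustration only: the path-style projection below is an invented
// stand-in used to convey the shape of a query over nested elements. It does not
// represent actual sSQL syntax.
public class NestedQuerySketch {
    public static void main(String[] args) {
        String illustrativeQuery =
                "SELECT e.accession, e.features.type, e.features.qualifiers.value " +
                "FROM genbank_entries e " +
                "WHERE e.organism = 'Homo sapiens' AND e.features.type = 'CDS'";
        // In practice such a query would be submitted through the SQL interface
        // and APIs described earlier; it is printed here only to show the idea.
        System.out.println(illustrativeQuery);
    }
}
```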

This query mediation level provides the initial touch point to the emerging data universe that traditional IT staff can understand and work with, thus enabling them to provide greater value. As a result, we begin to open up the world of biology to a larger audience and relieve some of the conceptual barriers between IT and Biology.

A Manageable Data Asset
At many points in the discovery process we see that the data produced from research must flow back into the organization, and as organizations mature we see data become an ever more critical asset. This presents us with a problem and an opportunity: the problem is data collection and the opportunity is the exploitation of the data assets in ever more global ways. Many leading biotechnology companies have realized the value in the data they are amassing, but have not yet fully leveraged that value.

To manage data in a coherent way, GE Healthcare Informatics provides collection mechanisms that allow integration of many different data sources transparently - agnostic to a source’s physical form.

One example of this would be the data flowing from instruments and Laboratory Information Management Systems (LIMS). Mechanisms to extract data from such systems exist, but they are generally specialized and not easily “merged” with other external sources. The GE Healthcare Informatics “light weight wrapper” technology enables integration of such sources with little effort. This allows discoveryHub to use the existing extraction systems to ‘pull’ this data and process it efficiently, removing some data processing bottlenecks. The results are then available to be combined with other sources in any way required.

In this example, the GE Healthcare Informatics wrappers effectively act as a collection level to capture, pre-process and propagate the data into the larger data universe. This collection level also readily supports the conversion of non-standard data when required to allow accessibility to other systems.
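A minimal sketch of the wrapper idea is shown below, under assumed names; it is not the actual GE Healthcare Informatics wrapper API. The essential point is that each wrapper hides one source's physical extraction mechanism behind a uniform record-oriented interface, so a LIMS export and any other wrapped source look alike to the integration layer.

```java
// Illustrative sketch of a "light weight wrapper" contract; names are assumptions.
import java.util.Iterator;
import java.util.Map;

public interface SourceWrapper {
    /** Identifies the wrapped source (e.g. "lims-export"). */
    String sourceName();

    /** Pulls records via the source's existing extraction mechanism and returns
     *  them as uniform name/value maps for downstream integration. */
    Iterator<Map<String, Object>> fetch(String selection);
}
```

A concrete implementation for a LIMS would simply delegate to whatever export routine that system already provides, translating its output into the shared record form.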

Simplifying Data Complexity
The data complexity within this space exists at many different levels, with diversity both geographically and structurally, without consistency in naming, formats or access methods. The discoveryHub technology provides the means to manage all of these issues.

The GE Healthcare Informatics system is agnostic to the geographical and structural diversity of data sources. The wrapper technology addresses the geographically dispersed nature of data by presenting each source as if it were local. The query engine allows each data source to be transposed into any model, solving the issue of structural inconsistency. Finally, the ontology level allows the creation of a consistent data model that maps all physical data into a definable ontological model. The ontology captures the relationships between data sources and must eventually become all-encompassing in this field. The transformation level then provides the layer that performs the physical conversion between potentially many data sources.
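The sketch below illustrates, under assumed field names, the kind of mapping the ontology and transformation levels perform: each physical source exposes its own vocabulary, and a mapping layer renames fields into one shared ontology so that integrated views can be defined once against consistent terms. It is a conceptual illustration, not the product's internal representation.

```java
// Conceptual sketch only: source-specific field names mapped to a shared
// ontology vocabulary. All terms below are illustrative assumptions.
import java.util.HashMap;
import java.util.Map;

public class OntologyMappingSketch {
    private static final Map<String, String> GENBANK_TO_ONTOLOGY = Map.of(
            "ACCESSION", "sequence.accession",
            "ORGANISM",  "sample.species");

    private static final Map<String, String> LIMS_TO_ONTOLOGY = Map.of(
            "sample_id", "sample.identifier",
            "species",   "sample.species");

    /** Renames the keys of one source record into the shared ontology vocabulary;
     *  unmapped fields keep their original names. */
    static Map<String, Object> toOntology(Map<String, Object> record,
                                          Map<String, String> mapping) {
        Map<String, Object> out = new HashMap<>();
        record.forEach((key, value) -> out.put(mapping.getOrDefault(key, key), value));
        return out;
    }
}
```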

Ultimately, the discoveryHub approach allows the complex universe of data to be transformed into a targeted, focused view of the relevant pieces of information from an unlimited set of data sources. In essence, this allows the researcher to extract the particular “trees” of interest that will ultimately lead to understanding the “forest”.


Comparative Examples
Example: discoveryHub Approach to Relational Database Population
There are applications where a local repository is required and a relational technology is appropriate for localized data. The implementation of the discoveryHub as a middleware platform alongside the relational technology (for localized data) can be a powerful combination. Many companies simply do not have the short-term resources to create a full entity relational schema to model the complete public and privately available data from all the various sources, yet they need to reduce some subset of both public and private sources into ‘privatized’ datasets accessible via a relational infrastructure. To address this, we have created an alternative to the design-focused long-term data model – the discoveryHub approach to life science data population.

The Relational vs. Non-Relational Challenge
In cases where local repositories are required and ultimately a simplified relational view is adequate, good value can be offered by the established relational database vendors. But this creates a challenge: how do we load and store a highly complex data structure into a relational model? There are many answers to this question – and many factors that influence the answer. To illustrate, let us consider two simple alternatives that will help to understand some of the issues.

Relational Approach
The typical relational approach to this problem is shown in Diagram #1. This approach applies the same set of rules to this environment as to any other transactional system: we define a set of entities such as Feature, Lineage, etc., create a relationship model, and develop a number of ‘synthetic’ keys between each relevant level.

This is an obvious and reasonable approach to creating a complex relational data model, and the technique is well established in its application to the life sciences. Experience has shown, however, that in the longer term this approach presents scalability and performance issues because it relies on three major dependencies (a code sketch of such a schema follows the list):

1. A consistent data set – relational integrity plays an increasingly important role as the rows in the data sets are updated and/or superseded.

2. Speed of loading – how do we load a large, structured data set into a schema with a large number of related tables and indices and, as the dataset grows, maintain acceptable loading performance?

3. Query performance – to achieve a sufficient level of performance, we must heavily index each table in this structure, simply to reassemble the data into the form that the user will eventually require.
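To make these dependencies concrete, the sketch below builds a small, fully normalized schema of the kind described: one table per level, synthetic keys linking them, and indexes on every join column. The entity and column names are illustrative assumptions, and an in-memory H2 database (assumed to be on the classpath) stands in for a production RDBMS.

```java
// Sketch of the fully normalized approach of Diagram #1; names are illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NormalizedSchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement s = c.createStatement()) {
            // One table per level of the nested record, tied together by synthetic keys.
            s.execute("CREATE TABLE seq_entry (entry_id BIGINT PRIMARY KEY, " +
                      " accession VARCHAR(32))");
            s.execute("CREATE TABLE seq_feature (feature_id BIGINT PRIMARY KEY, " +
                      " entry_id BIGINT REFERENCES seq_entry(entry_id), " +
                      " feature_type VARCHAR(64))");
            s.execute("CREATE TABLE seq_qualifier (qualifier_id BIGINT PRIMARY KEY, " +
                      " feature_id BIGINT REFERENCES seq_feature(feature_id), " +
                      " qual_name VARCHAR(64), qual_value CLOB)");
            // Every join column must be indexed just to reassemble the original
            // record at query time.
            s.execute("CREATE INDEX ix_feature_entry ON seq_feature(entry_id)");
            s.execute("CREATE INDEX ix_qual_feature ON seq_qualifier(feature_id)");
        }
    }
}
```

Loading one nested entry means deconstructing it across three tables, and querying it back means re-joining them: this is where the integrity, loading and query-performance dependencies listed above come from.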

Nested Relational Approach
The Nested Relational approach that GE Healthcare Informatics employs is significantly simpler. Instead of trying to force square pegs into round holes and creating a large conceptual overhead, we will leave some pegs square and allow the round holes to have slightly square-like edges.

In many cases, data must be queried from the private database similarly to how it was found in the original system, but with some additional “added value” information, such as cross functional references. To do this, we must consider the three issues identified above: we must provide a high performance load and query method that also adds inherent integrity in order to create a highly scalable, added value relational database.

By taking advantage of recent advances in database technology, we can enable a relational database to adopt an object database approach. This hybrid relational/object approach is illustrated in Diagram #2. In this example, a single relational table is being used to store multiple levels in a database without the need to deconstruct the data at load time and reconstruct it at query time. Keeping related data together in its original form provides inherent data integrity.

The next question is how to make the data look relational when we want to view it as a set of related entities. Conventional relational database management systems (RDBMS) lack the means to efficiently access data inside the nested objects. Using the discoveryHub platform in conjunction with a modern RDBMS, we can construct an elegant solution to the problem.

The nested relational structure shown in Diagram #2 uses a single relational table to store a set of complex objects in a compressed format; for example, as a set of “large objects” in Oracle (BLOB/CLOB/LONG). To complete the query processing, we need to index the data, and we also need the ability to ‘see inside’ the large objects. We need only expose the index-able fields (accession, sequence, etc.) to the relational database management system: these become key fields and thus can be indexed. In this way, indexing is kept simple and efficient. The large object contains the complete structure in native form. To access the information within the large objects, discoveryHub implements a large-object parsing level that enables the query to see inside the compressed data structure. Because discoveryHub is designed to deal with nested structures, this parsing level can be implemented simply and efficiently, without degrading performance on large-volume queries.
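The sketch below shows the same idea in schematic form, with illustrative names and an in-memory H2 database standing in for Oracle: a single table keeps each record intact in a large-object column, and only the fields worth indexing are exposed as ordinary relational columns.

```java
// Sketch of the nested-relational alternative of Diagram #2; names are illustrative
// and H2 (assumed on the classpath) stands in for a production RDBMS.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NestedRelationalSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement s = c.createStatement()) {
            s.execute("CREATE TABLE entry_store (" +
                      " accession VARCHAR(32) PRIMARY KEY, " + // index-able key field
                      " organism  VARCHAR(128), " +            // index-able key field
                      " entry_blob BLOB)");                     // complete record in native (compressed) form
            s.execute("CREATE INDEX ix_entry_organism ON entry_store(organism)");
        }
        // Loading is a single insert per record, with no deconstruction into child
        // tables. At query time the key fields narrow the candidate rows, and a
        // parsing layer (discoveryHub in the design described above) looks inside
        // the large object on demand.
    }
}
```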

The resulting solution is powerful and scalable. discoveryHub provides an efficient platform for creating, updating and accessing a hybrid object/relational database. The platform provides the means to work with the source data in its native form, eliminating conversion overhead and ensuring data integrity.


Data Integration Across Private / Cached Public Sources
Several issues arise when creating an integrated view across public and private sources. To avoid sensitive traffic being traced between two points on the internet, some organizations choose to collect large data sets from various sources in bulk, so that the transfer appears to the outside as an anonymous chunk. Others are concerned that focused mining of public sources in an ‘open’ environment may allow a potential competitor to gain an advantage from knowing ‘who accessed what’. To address these concerns, they create large, local silos of cached public data. This creates several problems of its own.

When creating local silos that combine public and private data sources, maintaining data currency with respect to the public sources becomes a significant issue. Also, it is unlikely that a combined public / private data collection will remain structurally synchronized throughout its entire life cycle. Structural changes in the public source make re-synchronizing a private copy problematic; typically, a large effort is required to adjust to changes.

We can accommodate the above concerns by designing into our data universe a layer that can seamlessly integrate multiple, incoherent data sets into one common view that readily adjusts to changes in the environment. This layer of abstraction is included within the query mediation level. Query mediation achieves integration at both the attribute and the entity level, and the key to its success is transparency to the end user. discoveryHub uniquely provides the platform and the tools to create an efficient mediation level quickly.

Using discoveryHub, we can create an architecture that integrates new data from both public and private sources quickly and economically. Local data can be maintained independently of the public source and combined at query time by discoveryHub. The caching mechanisms of discoveryHub can be used to improve both the performance and the “anonymity” of access to public sources: multiple accesses to the same records are cached and result in a single external query per selected record. When repetitive, complex queries across multiple sources are conducted, the caching mechanism “hides” the fact that the same record set may be accessed multiple times, effectively masking access patterns from any meaningful outside analysis.

The discoveryHub caching mechanism can be adjusted to achieve any desired level of data “freshness” automatically. Access to cached results is transparent; discoveryHub resolves currency issues. The thin wrappers mean changes in source structure affect only the wrapper, which is easily modified without affecting other parts of the system. Further, wrappers for the most popular public sources are maintained by GE Healthcare Informatics: changes in source structure are detected automatically, and revised wrappers issued when necessary.
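The sketch below illustrates the caching behaviour described above in the simplest possible form: repeated requests for the same record within a freshness window are served locally, so the public source sees at most one fetch per record per window. It is a conceptual sketch under assumed names, not discoveryHub's actual caching implementation.

```java
// Conceptual sketch of a freshness-bounded record cache; not the product's code.
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class RecordCache {
    private record Entry(String payload, Instant fetchedAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Duration maxAge;

    public RecordCache(Duration maxAge) {
        this.maxAge = maxAge;
    }

    /** Returns the cached record if still fresh; otherwise fetches it once
     *  from the external source and caches the result. */
    public String get(String accession, Function<String, String> remoteFetch) {
        Entry entry = cache.get(accession);
        if (entry == null || entry.fetchedAt().plus(maxAge).isBefore(Instant.now())) {
            entry = new Entry(remoteFetch.apply(accession), Instant.now());
            cache.put(accession, entry);
        }
        return entry.payload();
    }
}
```

For example, new RecordCache(Duration.ofHours(12)).get("NM_000546", this::fetchFromPublicSource) would contact the public source at most once every twelve hours for that accession, where fetchFromPublicSource stands for whatever retrieval routine a wrapper provides.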

Summary and Conclusion
Astonishing advances in technology have created explosive growth in available scientific data, and with it enormous potential to accelerate critical drug discovery processes. The lack of a coherent and efficient ability to access, correlate and integrate this vast sea of information, however, is preventing many companies from fully realizing this potential. The discoveryHub platform offered by GE Healthcare Informatics provides the missing technology required to remove these barriers. By providing an open and scalable foundation for data integration, discoveryHub holds the key to achieving dramatic increases in new drug R&D productivity.

