Last week’s announcement by IBM describes a promising biomedical analytics platform:
Scientists from IBM Research are collaborating with the Fondazione IRCCS Istituto Nazionale dei Tumori, a major cancer research and treatment center in Italy, on the new decision support solution. This new analytics platform is being tested by the Institute’s physicians to personalize treatment based on automated interpretation of pathology guidelines and intelligence drawn from past clinical cases documented in the hospital information system.
Selecting the most effective treatment can depend on a number of characteristics including age, weight, family history, current state of the disease and general health. As a result, more informed and personalized decisions are needed to provide accurate and safe care.
We are clearly moving towards personalized medicine, but there are many challenges along the way.
Technology. This one used to be problematic, but it no longer presents a significant challenge. Cloud services (both storage and computing power) are affordable, and there are many new technologies (Hadoop and various NoSQL frameworks) that simplify processing large amounts of loosely structured data on cheap commodity servers.
Regulations. Various privacy guidelines complicate information gathering and make sharing it in a collaborative fashion difficult. Luckily, effective data models can be built using “de-identified” data (data that’s been processed or summarized to strip personal identifying details – essentially rendering it anonymous). One problem is that sanitizing data often affects outliers the most (e.g. in de-identified data, a given anonymous patient who remained hospitalized for 200 days may only show “6+ weeks” under Duration of Stay). While outliers are often deliberately excluded from statistical analysis (as they are often considered bad data points), they may still provide valuable information about the treatment or the related data gathering/recording process.
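To make the outlier problem concrete, here is a minimal Python sketch of the kind of coarse bucketing a de-identification pipeline might apply. The bucket boundaries and the function name are invented for illustration; real de-identification rules vary by jurisdiction and dataset.

```python
# Hypothetical sketch: collapsing an exact hospital stay (in days) into
# coarse categories, as de-identification pipelines often do. Note how a
# 200-day outlier lands in the same "6+ weeks" bucket as a 43-day stay --
# exactly the information that made it an outlier is lost.

def bucket_length_of_stay(days: int) -> str:
    """Map an exact stay length to a coarse, de-identified category."""
    if days < 7:
        return "under 1 week"
    if days < 42:
        weeks = days // 7
        return f"{weeks}-{weeks + 1} weeks"
    return "6+ weeks"  # everything from 42 days to 200+ days looks identical

for stay in (3, 20, 43, 200):
    print(stay, "->", bucket_length_of_stay(stay))
```

Once the data is bucketed this way, no downstream analysis can distinguish a moderately long stay from an extreme one, which is precisely why outliers suffer most from sanitization.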
Acceptance by “Big Pharma”. Highly targeted medications (e.g. those that are highly effective in a small subset of the population with the right genetic makeup, but ineffective or potentially dangerous in others) are hard to get through the approval process. Moreover, such drugs may require the same (or larger) amount of R&D spend, yet address a much smaller “market”. Drugs that work (not very effectively) for most people win. Extending a lung cancer patient’s life expectancy by 2 months is considered a success. This is likely to change as personalized medicine becomes more commonplace.
Genetic research. This is where things get a little depressing. Human beings have 20,000–25,000 [protein-coding] genes. I recently attended a Personalized Medicine panel discussion at Yale. According to the speakers, modern medicine knows nothing about 16,000 of those genes. Certain diseases (such as cancer) are caused by mutations in one or several key genes. Currently, about 500 gene mutations are linked to various forms of cancer. Of those 500, we know and understand 8 (eight). So, someone with one of those eight gene mutations can receive “personalized” treatment that will be extremely effective.
In conclusion, the next 5 years should present tremendous opportunities for data analysis. Some predictive models can be built simply by using existing de-personalized data. For example, it is now possible to predict how many days a patient is likely to spend hospitalized next year, based on their prior claims data coupled with a certain amount of “personal data” (e.g. age group, sex, Charlson index, etc.). Some speculate that this will help advance preventive care.
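As a rough illustration of that kind of prediction, the sketch below combines the de-identified features mentioned above (age group, Charlson comorbidity index, prior-year hospital days) into an estimate. The weights and the linear form are entirely invented for illustration; a real model would be fit to historical claims data.

```python
# Hypothetical sketch of a claims-based length-of-stay predictor.
# The coefficients below are made up for illustration only -- in practice
# they would come from fitting a regression model to historical,
# de-identified claims data.

def predicted_hospital_days(age_group: int,
                            charlson_index: int,
                            prior_year_days: int) -> float:
    """Estimate hospital days next year from de-identified features.

    age_group: coarse age bucket (e.g. 0 = 18-34, 1 = 35-49, ...)
    charlson_index: Charlson comorbidity index score
    prior_year_days: days hospitalized in the prior year (from claims)
    """
    baseline = 0.5  # invented intercept
    return (baseline
            + 0.4 * age_group        # older buckets -> more expected days
            + 1.2 * charlson_index   # comorbidities weigh heavily
            + 0.3 * prior_year_days) # prior utilization predicts future use

# A patient with more comorbidities gets a higher estimate:
print(predicted_hospital_days(age_group=2, charlson_index=0, prior_year_days=5))
print(predicted_hospital_days(age_group=2, charlson_index=4, prior_year_days=5))
```

Even a toy model like this shows why de-identified features can still be predictive at the population level, which is what makes the preventive-care speculation plausible.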
However, it appears that a patient’s ability to receive highly targeted (and highly effective) cancer treatment will continue to rely on advances in genetic research. And that area is still dealing with technology limitations. There are companies working on making gene sequencing affordable, which should drive personalized medicine forward.
by Michael Alatortsev