
The Technology Behind Breakthrough Alzheimer’s Disease Research: An Interview with Simon Lovestone

By Bob Gourley

Editor's note: The interview below first ran on our Analyst One site. -bg

Simon Lovestone is a professor of translational neuroscience at Oxford University and a key researcher who has been part of a decade-long pursuit of insights into Alzheimer's disease. In an 8 July Telegraph piece titled "Alzheimer's disease could be prevented after new blood test breakthrough," Simon provides key context on research conducted at Oxford and King's College London showing that detectable blood proteins can indicate that the disease is imminent. This major step forward holds the promise of earlier treatment of the disease and potentially even a preventive strategy.

The potential benefit of this research is staggering. Estimates suggest that up to 44 million people worldwide are living with Alzheimer's. The impact on them, their families, their caregivers, and society as a whole is impossible to calculate, but it is clearly far too costly for humanity to tolerate. That makes the research reported here welcome news for a wide swath of people.

The research team published their results in a paper titled "Plasma proteins predict conversion to dementia from prodromal disease," available on ScienceDirect.

We had an opportunity to ask Simon a few questions about his research. Our questions and his responses are below:

Gourley: It seems your team used a fairly straightforward technology setup for your research. You leveraged the commercially available Luminex 200 xMAP assay instrument, then the Systat software package and SPSS for analysis. The machine learning algorithms of WEKA were used as an aid in classifying the data. Does this describe your core technology setup?

Lovestone: Absolutely correct. The only additional point is that the previous work over the past ten years (first publication in 2006, and very many since that document our work) that did the primary discovery, suggesting which proteins were associated with either disease or with pathology, used a range of technologies including 2D-GE and LC-MS/MS and other mass spectrometry approaches, including multiplexing with TMTs and MRM methods. We used these as proteomic discovery phases, then did some replications, and finally the 'near-qualification' level study we published yesterday using Luminex technology.
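To make the classification step concrete for readers outside proteomics: the study used WEKA's machine learning tools to classify subjects from plasma protein measurements. The sketch below is purely illustrative and not the team's actual pipeline; it uses synthetic data, a hypothetical 3-protein panel, and a simple nearest-centroid classifier, but it shows the held-out evaluation (here, leave-one-out cross-validation) that such biomarker studies rely on to guard against overfitting.

```python
# Illustrative sketch only (not the study's actual method or data):
# a nearest-centroid classifier with leave-one-out cross-validation
# on a synthetic, hypothetical 3-protein plasma panel.
import random

random.seed(0)

def make_sample(converter):
    # Hypothetical panel: "converters" are drawn around a shifted mean.
    base = [1.0, 2.0, 3.0] if converter else [0.0, 1.0, 2.0]
    return [b + random.gauss(0, 0.5) for b in base], converter

# 40 synthetic subjects, half converters and half non-converters.
data = [make_sample(i % 2 == 0) for i in range(40)]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, x):
    # Classify x by its nearest class centroid in the training set.
    pos = centroid([f for f, y in train if y])
    neg = centroid([f for f, y in train if not y])
    return dist2(x, pos) < dist2(x, neg)

# Leave-one-out cross-validation: each subject is scored by a model
# that never saw it, so accuracy reflects generalisation rather than
# memorisation of the discovery cohort.
correct = sum(
    predict(data[:i] + data[i + 1:], x) == y
    for i, (x, y) in enumerate(data)
)
accuracy = correct / len(data)
print(f"LOOCV accuracy: {accuracy:.2f}")
```

The same held-out discipline is what separates a promising discovery-phase signal from a result that survives replication, which is the distinction Lovestone draws between the discovery, replication, and near-qualification phases above.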

Gourley:  These capabilities all come from different providers. Did your team encounter any challenges in moving data from system to system during this analysis? Any lessons learned regarding system interoperability you would like to pass on?

Lovestone: Good question. For the paper we published on 8 July, the answer was no. All worked smoothly, and the collaboration with Proteome Sciences and Merck Millipore was extremely productive and much valued. There were no interoperability issues at either a technical or an analytical level. However, if you take the last ten years' work as a whole (as in the response above), then there are frequent interesting interoperability issues. It isn't unusual for us to find a potential analyte associated with disease at the proteomic level using MS and then not to find the same thing using an antibody capture technology. There can be many reasons for this: statistical error/happenstance/overfitting of data in the proteomic phase, a failure of the antibody capture technology (the antibody doesn't 'work', or some other test failure), or, what we suspect most often, that the MS measures something 'different' to the antibody capture. The MS measures the analyte at a compound of protein level and post-translational modification, while the antibody capture actually measures an epitope, not a protein. So they measure different things, and it's not surprising that moving from discovery to validation doesn't progress smoothly in all cases. However, the failures to progress are less important than the ones that do progress if the goal is a practicable and usable test.

Gourley: Did you encounter any significant delays in your research due to technology limitations? (What technology do you wish worked faster?)

Lovestone: It took longer than we wished, as we have a very hands-on/manual laboratory, as is typical of an academic lab. Also, although multiplexed, the Luminex assays we used were spread over more than one plate. Running that many Luminex tests on that many individuals, and taking suitable care in doing so, simply takes a lot of person-hours. However, all the technology is semi-automatable and suitable for robotic liquid handling, etc.

Gourley: Any recommendations for the technology community on capabilities that could enhance your team's work?

Lovestone: Buy some robots!

Also, we now need some technology development: we want to put the core analytes on a single multiplexed platform (if Luminex, then on a single plate). This would hugely reduce costs and time.

However, looking forward, the thing I am most excited by is the possibility of miniaturisation and automation. I see the potential for very high-throughput diagnostics for repeated use without centralised laboratories. This is possible…

Gourley: Thank you very much for the context Simon. And thanks to you and your entire team for the continued pursuit of this critically important research.

Lovestone: My pleasure.


More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder of Crucial Point and publisher of CTOvision.com.