Big Data in radiation therapy
Author: | Tim Lustberg, Sean Walsh, Andre Dekker, Timo M. Deist, Arthur Jochems, Yvonka van Wijk, Philippe Lambin, Johan van Soest |
Contributors: | Radiotherapie, Promovendi ODB, RS: GROW - R3 - Innovative Cancer Diagnostics & Therapy |
Language: | English |
Year of publication: | 2017 |
Subject: | Databases, Factual; Patient privacy; Big data; Interoperability; 030218 nuclear medicine & medical imaging; Imaging modalities; 03 medical and health sciences; 0302 clinical medicine; Neoplasms; Radiation oncology; Medicine; Humans; Radiology, Nuclear Medicine and Imaging; Clinical Trials as Topic; Data Collection; Physics and Technology; Volume (computing); General Medicine; Data science; Variety (cybernetics); 030220 oncology & carcinogenesis; Data quality; Commentary; Radiation Oncology |
Source: | British Journal of Radiology, 90(1069):20160689. British Institute of Radiology |
ISSN: | 0007-1285 |
DOI: | 10.1259/bjr.20160689 |
Description: | Data collected and generated by radiation oncology can be classified by the Volume, Variety, Velocity and Veracity (4Vs) of Big Data because they are spread across different care providers and not easily shared owing to patient privacy protection. The magnitude of the 4Vs is substantial in oncology, especially owing to imaging modalities and unclear data definitions. To create useful models, ideally the data of all care providers are understood and learned from; however, this presents challenges in the form of poor data quality, patient privacy concerns, geographical spread, interoperability and large volume. In radiation oncology, there are many efforts to collect data for research and innovation purposes. Clinical trials are the gold standard when proving any hypothesis that directly affects the patient. Collecting data in registries with strict predefined rules is also a common approach to find answers. A third approach is to develop data stores that can be used by modern machine learning techniques to provide new insights or answer hypotheses. We believe all three approaches have their strengths and weaknesses, but they should all strive to create Findable, Accessible, Interoperable, Reusable (FAIR) data. To learn from these data, we need distributed learning techniques, sending machine learning algorithms to FAIR data stores around the world, learning from trial data, registries and routine clinical data rather than trying to centralize all data. To improve and personalize medicine, rapid learning platforms must be able to process FAIR "Big Data" to evaluate current clinical practice and to guide further innovation. |
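The distributed-learning idea described in the abstract, where algorithms travel to the data rather than the data being centralized, can be made concrete with a small sketch. The example below is purely illustrative and is not the authors' platform: three simulated care providers each compute a logistic-regression gradient on their own (invented) patient data, and only those model updates are pooled into a consensus model, so no patient-level records leave a site. All feature counts, site sizes and data are hypothetical assumptions for the sketch.

```python
# Illustrative only: a toy gradient-averaging (federated) loop.
# Real distributed-learning infrastructures also handle authentication,
# data semantics (FAIR metadata) and privacy far more carefully.
import numpy as np

rng = np.random.default_rng(0)

def make_site(n):
    """Simulate one care provider's private dataset (never leaves the site)."""
    X = rng.normal(size=(n, 3))  # three toy clinical features
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

sites = [make_site(n) for n in (120, 80, 200)]  # three hypothetical providers

def local_gradient(w, X, y):
    """Logistic-regression gradient computed inside the provider's firewall."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

w = np.zeros(3)  # shared model parameters
for _ in range(200):
    # Only model updates, never patient records, are exchanged.
    grads = [local_gradient(w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    w -= 0.5 * np.average(grads, axis=0, weights=sizes)

print("consensus model coefficients:", np.round(w, 2))
```

In this sketch the weighted average gives larger sites proportionally more influence on the consensus model, mirroring how a multi-institutional model would learn from data stores of different sizes without any of them exposing raw records.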
Database: | OpenAIRE |
External link: |