Medical Big Data Analytics: Improving Traumatic Brain Injury Prediction
Received: 01-Feb-2022, Manuscript No. ejbi-22-55998; Editor assigned: 02-Feb-2022, Pre QC No. ejbi-22-55998; Reviewed: 16-Feb-2022, QC No. ejbi-22-55998; Revised: 19-Feb-2022, Manuscript No. ejbi-22-55998; Published: 26-Feb-2022, DOI: 10.24105/ejbi.2022.18.2.16-17
Citation: Morris S (2022). Medical Big Data Analytics: Improving Traumatic Brain Injury Prediction. EJBI. 18(2):16-17.
This open-access article is distributed under the terms of the Creative Commons Attribution Non-Commercial License (CC BY-NC) (http://creativecommons.org/licenses/by-nc/4.0/), which permits reuse, distribution and reproduction of the article, provided that the original work is properly cited and the reuse is restricted to noncommercial purposes. For commercial reuse, contact submissions@ejbi.org
Abstract
Big data analytics, a fast-growing field, has begun to play a key role in the evolution of healthcare practice and research. It has made it possible to collect, organise, analyse, and integrate the huge volumes of disparate, structured, and unstructured data generated by today's healthcare systems. Big data analytics has lately been used to aid the delivery of care and the study of illness. However, some fundamental difficulties inherent in the big data paradigm continue to stymie adoption and research progress in this domain. We examine some of these significant problems in this work, with an emphasis on three emerging and promising areas of medical research: image-, signal-, and genomics-based analytics. We describe recent research that utilises massive volumes of medical data while merging multimodal data from many sources, and we consider open research areas in this discipline that could have a significant influence on healthcare delivery.
Keywords
Genomics, Big Data, TBI
Introduction
The failure of various clinical trials in traumatic brain injury (TBI) aimed at determining physiologic variable thresholds casts doubt on the clinical value of existing methodologies and emphasises the need for novel patient treatment strategies in acute brain trauma. Medical informatics is a multidisciplinary field that employs approaches from engineering, statistics, and computer science to solve clinical problems. These approaches have proven effective in the omics domains, but they remain less well known in TBI research [1].
Some medical informatics applications in TBI research are discussed in this article. First, we address the need for patient-specific physiologic thresholds rather than population-based measurements. The role of cerebral autoregulation (CAR), new ways of assessing a patient's CAR state, and the clinical utility of CAR are highlighted. Second, we look at two key areas of informatics, supervised and unsupervised machine learning, and their applications in TBI research. Machine learning (ML) is a powerful tool for finding patterns in massive datasets and developing prediction models [2].
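To make the CAR discussion concrete, the sketch below computes a moving correlation between mean arterial pressure (MAP) and intracranial pressure (ICP), in the spirit of the pressure-reactivity-style indices reported in the monitoring literature. The window lengths, function name, and synthetic data are illustrative assumptions, not a validated implementation.

```python
# Sketch of a pressure-reactivity-style CAR index: a moving Pearson
# correlation between slow waves of MAP and ICP. Window sizes are
# illustrative assumptions, not values taken from this article.
import numpy as np
import pandas as pd

def car_index(map_mmhg: pd.Series, icp_mmhg: pd.Series,
              avg_window: int = 10, corr_window: int = 30) -> pd.Series:
    """Moving correlation of time-averaged MAP and ICP samples.

    avg_window  -- consecutive samples averaged to isolate slow waves
    corr_window -- number of averaged samples per correlation estimate
    """
    # Average consecutive samples to suppress pulse/respiratory components.
    map_slow = map_mmhg.groupby(np.arange(len(map_mmhg)) // avg_window).mean()
    icp_slow = icp_mmhg.groupby(np.arange(len(icp_mmhg)) // avg_window).mean()
    # Positive values suggest a pressure-passive (impaired) CAR state.
    return map_slow.rolling(corr_window).corr(icp_slow)

# Synthetic demonstration: ICP partially tracking MAP.
rng = np.random.default_rng(0)
map_sig = pd.Series(90 + 5 * rng.standard_normal(3000))
icp_sig = pd.Series(12 + 0.3 * (map_sig - 90) + rng.standard_normal(3000))
print(car_index(map_sig, icp_sig).dropna().describe())
```

A per-patient index of this kind, trended over time, is one way to replace a single population-based threshold with a patient-specific target.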
We go over some of the ways machine learning may be used in TBI research, from predicting ICP hypertension to discovering trends. Furthermore, recent advances in our ability to store enormous amounts of data, together with the availability of low-cost computing power, have ushered in the era of "big data". Big data has had a significant impact on fields such as genomics and proteomics. We discuss recent developments in this area as well as the need for massive datasets in TBI. Because each patient's condition is distinct, a shift is required from a guideline-based approach to a "personalised medicine" strategy, in which each patient's care protocol is tailored to that patient's state. When applied to TBI research, informatics techniques can help realise this shift [3].
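As a minimal illustration of the supervised route mentioned above, the sketch below fits a logistic-regression classifier to per-window summary features in order to flag an upcoming intracranial-hypertension episode. The features, label-generating process, and data are invented for the example; it is not a model drawn from this article.

```python
# Minimal supervised-learning sketch: predict an upcoming intracranial-
# hypertension episode from hypothetical per-window monitoring features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical per-window features: mean ICP, ICP trend, mean MAP.
X = np.column_stack([
    rng.normal(15, 4, n),    # mean ICP (mmHg)
    rng.normal(0, 0.5, n),   # ICP trend (mmHg/min)
    rng.normal(85, 10, n),   # mean MAP (mmHg)
])
# Synthetic label: hypertension more likely with high or rising ICP.
logit = 0.4 * (X[:, 0] - 15) + 1.5 * X[:, 1] - 0.02 * (X[:, 2] - 85)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```

In practice the same pattern generalises to richer feature sets and model classes; the point is that labelled monitoring windows turn episode prediction into a standard supervised-learning problem.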
The term "big data" isn't new, but how it is characterised is. The various definitions of big data boil down to a collection of data items whose size, speed, kind, and/or complexity necessitates the development, adoption, or invention of new hardware and software in order to successfully store, analyse, and visualise the data. Healthcare is a prime illustration of the three Vs of data: velocity (the rate at which data are generated), variety, and volume. These data are dispersed among a variety of healthcare systems, insurers, researchers, government agencies, and other organisations. Furthermore, each of these data repositories is compartmentalised and hence unable to provide a platform for global data transparency [4].
Currently, healthcare systems employ a variety of continuous monitoring devices that use single physiological waveforms or discretised vital signs to trigger alarms in the event of overt incidents. As a result, more extensive and improved approaches to studying interactions and correlations across multimodal clinical time series are required. This is significant because research shows that humans struggle to reason about changes affecting several signals simultaneously [5].
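One minimal way to move beyond single-channel threshold alarms is to track relationships between channels, for example with a rolling pairwise-correlation matrix that flags windows in which two signals become strongly coupled. The channel names, sampling rate, coupling threshold, and five-minute window in the sketch below are assumptions made for illustration.

```python
# Sketch: rolling pairwise correlations across multimodal channels, so
# that changing relationships (not just absolute levels) can be flagged.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2022-01-01", periods=3600, freq="s")
signals = pd.DataFrame({
    "icp":  12 + 0.05 * rng.standard_normal(3600).cumsum(),
    "map":  90 + 0.08 * rng.standard_normal(3600).cumsum(),
    "spo2": 97 + 0.5 * rng.standard_normal(3600),
}, index=idx)

# Pairwise correlations over a rolling 300-sample (5-minute) window.
pairwise = signals.rolling(300).corr()

# Extract the ICP-MAP correlation track and flag strong coupling.
icp_map = pairwise.xs("map", level=1)["icp"]
coupled = icp_map > 0.8
print(f"strongly coupled windows: {int(coupled.sum())} of {len(coupled)}")
```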
Thanks to the advent of high-throughput sequencing tools, researchers have been able to investigate genetic markers across a wide range of populations, improve sequencing efficiency by more than five orders of magnitude since the human genome was first sequenced, and link phenotypes in disease states to their genetic origins. Microarray-based genome-wide analysis has proven successful in examining traits across a population and has yielded insights into complicated disorders such as Crohn's disease and age-related macular degeneration. Big data applications in genomics span a diverse range of subjects.
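For a concrete sense of the association tests underlying such genome-wide analyses, the toy sketch below applies a chi-square test to case/control allele counts at a single hypothetical marker; the counts are fabricated for illustration.

```python
# Toy case/control allelic association test of the kind repeated per
# marker in a genome-wide analysis. Counts are invented for the sketch.
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: risk-allele count, other-allele count.
allele_counts = [[420, 580],   # cases:    42% risk-allele frequency
                 [330, 670]]   # controls: 33% risk-allele frequency
chi2, p_value, dof, _ = chi2_contingency(allele_counts)
print(f"chi2={chi2:.2f}, p={p_value:.2e}")
# A genome-wide scan repeats this over ~10^5-10^6 markers, so multiple-
# testing correction is essential (p < 5e-8 is a common threshold).
```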
Conclusion
Big data analytics (BDA), which makes use of a plethora of fragmented, structured, and unstructured data sources, will become increasingly important in the future of healthcare. A variety of analytics are already being used to assist healthcare workers and patients in making decisions and improving their performance. The shift will be fueled by increasingly powerful and capable BDA and artificial intelligence/machine learning (AI/ML) platforms, together with financial incentives to employ these technologies. Neurocritical care will be one of the first fields to be altered by BDA and AI/ML methods. Using BDA and AI/ML, we will be able to detect connections between dozens of streams of real-time data from physiological monitoring, imaging, and biochemical and functional biomarkers. Under this new approach, the current practice of triage, diagnosis, treatment, and prognosis will be transformed into fully integrated, evidence-based patient care.
REFERENCES
- Mitra B, Cameron PA, Mori A, Maini A, Fitzgerald M, Paul E, et al. Early prediction of acute traumatic coagulopathy. Resuscitation. 2011;82(9):1208-13.
- Sejnowski TJ, Churchland PS, Movshon JA. Putting big data to good use in neuroscience. Nat Neurosci. 2014;17(11):1440-1441.
- Liebeskind DS, Albers GW, Crawford K, Derdeyn CP, George MS, Palesch YY, et al. Imaging in StrokeNet: realizing the potential of big data. Stroke. 2015;46(7):2000-2006.
- Khatri P, Draghici S, Ostermeier GC, Krawetz SA. Profiling gene expression using onto-express. Genomics. 2002;79(2):266-270.
- Zhu H. Big data and artificial intelligence modeling for drug discovery. Annu Rev Pharmacol Toxicol. 2020;60:573-589.