Optimal Treatment Regimes for Personalized Medicine and Mobile Health
Main Author: Oh, Eun Jeong
Language: English
Published: 2020
Subjects: Biometry; Personalized medicine; Therapeutics
Online Access: https://doi.org/10.7916/d8-vvh2-3080
Description:
There has been increasing development of personalized interventions that are tailored to the uniquely evolving health status of each patient over time. In this dissertation, we investigate two problems: (1) the construction of an individualized mobile health (mHealth) application recommender system; and (2) the estimation of optimal dynamic treatment regimes (DTRs) from a multi-stage clinical trial. The dissertation is organized as follows.
In Chapter 1, we provide a brief background on personalized medicine and two motivating examples that illustrate the need for and benefits of individualized treatment policies. We then introduce reinforcement learning and various methods for obtaining optimal DTRs, as well as the Q-learning procedure, a popular method in the DTR literature.
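For context, the standard two-stage Q-learning backward induction takes the following form (generic notation; the dissertation's exact formulation may differ):

$$Q_2(h_2, a_2) = E\left[\, Y \mid H_2 = h_2,\, A_2 = a_2 \,\right], \qquad d_2^{\mathrm{opt}}(h_2) = \arg\max_{a_2} Q_2(h_2, a_2),$$

$$Q_1(h_1, a_1) = E\left[\, \max_{a_2} Q_2(H_2, a_2) \,\middle|\, H_1 = h_1,\, A_1 = a_1 \,\right], \qquad d_1^{\mathrm{opt}}(h_1) = \arg\max_{a_1} Q_1(h_1, a_1),$$

where $H_t$ denotes the patient history at stage $t$, $A_t$ the treatment assigned at stage $t$, and $Y$ the final outcome; in practice the conditional expectations are replaced by fitted regression models.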
In Chapter 2, we propose partial regularization via orthogonality using the adaptive Lasso (PRO-aLasso) to estimate the optimal policy that maximizes the expected utility in the mHealth setting. We also derive the rate at which the expected outcome of the estimated policy converges to that of the true optimal policy. The PRO-aLasso estimators are shown to enjoy the same oracle properties as the adaptive Lasso. Simulations and a real data application demonstrate that PRO-aLasso yields simpler, more stable policies with better results than the adaptive Lasso and other competing methods.
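The PRO-aLasso algorithm itself is not detailed in this abstract; the sketch below only illustrates the generic adaptive-Lasso reweighting it builds on, written as a reweighted Lasso with scikit-learn. The function name, the OLS initial estimator, and the tuning values are assumptions for illustration, not the dissertation's implementation.

```python
# Minimal sketch of the generic adaptive Lasso (the building block that
# PRO-aLasso extends): scale each feature by |beta_init_j|^(-gamma) and fit a Lasso.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
    # Step 1: initial consistent estimate (plain OLS here, for illustration).
    beta_init = LinearRegression().fit(X, y).coef_
    # Step 2: adaptive weights w_j = 1 / |beta_init_j|^gamma (small guard avoids division by zero).
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    # Step 3: weighted Lasso via the rescaled design X_j / w_j.
    fit = Lasso(alpha=alpha).fit(X / w, y)
    # Map coefficients back to the original scale.
    return fit.coef_ / w

# Example usage with synthetic data: only the first two coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(size=200)
print(adaptive_lasso(X, y))
```

The reweighting is what gives the adaptive Lasso its oracle properties: large initial coefficients are penalized lightly, while coefficients near zero are penalized heavily and shrink exactly to zero.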
In Chapter 3, we propose penalized A-learning with a Lasso-type penalty for the construction of an optimal DTR and derive generalization error bounds for the estimated DTR. We first examine the relationship between the value and the Q-functions, and then provide a finite-sample upper bound on the difference in value between the optimal DTR and the estimated DTR. In practice, we implement a multi-stage PRO-aLasso algorithm to obtain the optimal DTR. Simulation results show advantages of the proposed methods over several existing alternatives. The proposed approach is also demonstrated with data from a depression clinical trial. In Chapter 4, we present future work and concluding remarks.
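The specific bound derived in Chapter 3 is not reproduced in this abstract. For orientation, a classical single-stage relation of the kind being generalized states that if $\hat d(h) = \arg\max_a \hat Q(h, a)$, then

$$V(d^{\mathrm{opt}}) - V(\hat d) \;\le\; 2\, E\left[\, \max_a \bigl|\hat Q(H, a) - Q(H, a)\bigr| \,\right],$$

so the loss in value of the estimated regime is controlled by how well the Q-functions are estimated; finite-sample bounds of this type then follow from error bounds on the penalized regression estimates.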