Examination of posterior predictive check and bootstrap as population model validation tools

Drug development is time-consuming and expensive, with high failure rates. It takes 10-15 years for a drug to go from discovery to approval, and the mean cost of developing a drug is $1.5 billion. Pharmacometric (PM) models play a pivotal role in knowledge-driven drug development, and these models require validation prior to application. The purpose of the current study was to evaluate the posterior predictive check (PPC) and the bootstrap as population model validation tools. The PPC was evaluated to determine whether it could distinguish population pharmacokinetic (PPK) models developed/estimated from influence data from models that were not. The bootstrap was examined to see whether the root mean squared prediction errors (RMSPE) for serum concentrations estimated by external prediction methods corresponded to those estimated by the standard bootstrap. For the PPC, C_last, C_first - C_last, and C_mid values from the initial data sets were compared to the corresponding posterior distributions. In the case of no influence data, on average 76%, 30%, and 52% of the values from the posterior distributions were below the initial C_last, C_first - C_last, and C_mid, respectively; in the case of influence data, the corresponding averages were 93%, 13%, and 67%. The PPC was therefore able to distinguish models derived from influence data from those derived from no-influence data. For the bootstrap, when the original model was used to predict into the external data, the weighted RMSPE (WRMSPE) for drug 1, drug 2, drug 3, drug 4, and the simulated data set was 10.40 mg/L, 20.36 mg/L, 0.72 mg/L, 15.27 mg/L, and 14.24 mg/L, respectively. From the bootstrap, the improved WRMSPE for drug 1, drug 2, drug 3, drug 4, and the simulated data set was 9.35 mg/L, 19.85 mg/L, 0.50 mg/L, 14.44 mg/L, and 13.98 mg/L, respectively. The bootstrap thus provided estimates of WRMSPE that corresponded to those of the external validation methods. From these results it was concluded that both the PPC and the bootstrap have value as population model validation tools.
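
As context for the PPC approach described in the abstract, the following minimal sketch (Python, not the thesis code; the function names, the mono-exponential profile, and the way replicates are generated are illustrative assumptions) shows one way to compute the fraction of posterior-predictive replicates that fall below an observed statistic such as C_last, C_first - C_last, or C_mid.

import numpy as np

def summary_stats(conc):
    """C_last, C_first - C_last, and C_mid for one concentration profile."""
    conc = np.asarray(conc, dtype=float)
    return {
        "C_last": conc[-1],
        "C_first-C_last": conc[0] - conc[-1],
        "C_mid": conc[len(conc) // 2],
    }

def ppc_fraction_below(observed_conc, replicate_concs):
    """Fraction of posterior-predictive replicates whose statistic falls below the observed one."""
    obs = summary_stats(observed_conc)
    reps = [summary_stats(r) for r in replicate_concs]
    return {k: float(np.mean([r[k] < obs[k] for r in reps])) for k in obs}

# Illustration with simulated numbers only: 1000 replicate profiles drawn from a
# hypothetical posterior of a mono-exponential model (dosing and PK details omitted).
rng = np.random.default_rng(0)
times = np.linspace(0.5, 12.0, 8)
observed = 10.0 * np.exp(-0.2 * times)
replicates = [rng.lognormal(np.log(10.0), 0.2) * np.exp(-rng.lognormal(np.log(0.2), 0.2) * times)
              for _ in range(1000)]
print(ppc_fraction_below(observed, replicates))   # fractions far from 0.5 flag misfit

Similarly, a minimal sketch of the bootstrap comparison, under simplifying assumptions: the model is refit on bootstrap resamples of the index data and its prediction error is summarised as an RMSPE, to be set against the RMSPE obtained when the original fit predicts a true external data set. The log-linear fit and the plain (unweighted) RMSPE are placeholders; the weighting behind the thesis's WRMSPE is not reproduced here.

import numpy as np

def rmspe(observed, predicted):
    """Root mean squared prediction error."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def fit_monoexp(times, conc):
    """Placeholder fit: log-linear regression for C(t) = C0 * exp(-k * t)."""
    slope, intercept = np.polyfit(times, np.log(conc), 1)
    return np.exp(intercept), -slope  # C0, k

def predict_monoexp(params, times):
    c0, k = params
    return c0 * np.exp(-k * np.asarray(times, float))

def bootstrap_rmspe(times, conc, n_boot=500, seed=0):
    """Average RMSPE over models refit on bootstrap resamples of the index data."""
    times, conc = np.asarray(times, float), np.asarray(conc, float)
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(times), len(times))   # resample observations with replacement
        params = fit_monoexp(times[idx], conc[idx])     # refit on the resample
        errors.append(rmspe(conc, predict_monoexp(params, times)))
    return float(np.mean(errors))

def external_rmspe(times, conc, external_times, external_conc):
    """RMSPE when the model fit to the index data predicts a separate external data set."""
    params = fit_monoexp(times, conc)
    return rmspe(external_conc, predict_monoexp(params, external_times))

In the thesis, the analogous quantities are the WRMSPE values quoted in the abstract (for example, 10.40 mg/L from external prediction versus the improved 9.35 mg/L from the bootstrap for drug 1).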

Bibliographic Details
Main Author: Desai, Amit V.
Format: Others
Published: Scholarly Commons, 2008
Subjects: Statistics; Pharmacy sciences; Health and environmental sciences; Pure sciences; Bootstrap; Population model validation; Posterior predictive check
Online Access: https://scholarlycommons.pacific.edu/uop_etds/2381
https://scholarlycommons.pacific.edu/cgi/viewcontent.cgi?article=3380&context=uop_etds