Showing 1 - 10 of 144 for search: '"Villar, Sofía S"'
Author:
Kaddaj, Daniel, Pin, Lukas, Baas, Stef, Tang, Edwin Y. N., Robertson, David S., Villar, Sofía S.
To implement a Bayesian response-adaptive trial it is necessary to evaluate a sequence of posterior probabilities. This sequence is often approximated by simulation due to the unavailability of closed-form formulae to compute it exactly. Approximating …
External link:
http://arxiv.org/abs/2411.19871
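The abstract above refers to simulation-based approximation of posterior probabilities. As an illustrative sketch only (not the paper's method), the quantity P(p_A > p_B | data) in a two-arm Bernoulli trial with Beta(1, 1) priors can be approximated by Monte Carlo draws from the two Beta posteriors; all counts and names below are made up:

```python
import random

def posterior_prob_a_better(successes_a, failures_a,
                            successes_b, failures_b,
                            n_draws=100_000, seed=0):
    """Estimate P(p_A > p_B) by sampling both Beta posteriors.

    Illustrative sketch with Beta(1, 1) priors, so the posterior for
    arm A is Beta(1 + successes_a, 1 + failures_a), likewise for B.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        p_a = rng.betavariate(1 + successes_a, 1 + failures_a)
        p_b = rng.betavariate(1 + successes_b, 1 + failures_b)
        if p_a > p_b:
            hits += 1
    return hits / n_draws

# Hypothetical interim data: 12/15 successes on A vs 7/15 on B.
print(posterior_prob_a_better(12, 3, 7, 8))
```

A response-adaptive design would re-evaluate such a probability after each batch of outcomes, which is why the cost of the simulation matters.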
Although response-adaptive randomisation (RAR) has gained substantial attention in the literature, it still has limited use in clinical trials. Amongst other reasons, the implementation of RAR in the real world raises important practical questions …
External link:
http://arxiv.org/abs/2410.03346
Response-adaptive (RA) designs of clinical trials allow targeting a given objective by skewing the allocation of participants to treatments based on observed outcomes. RA designs face greater regulatory scrutiny due to potential type I error inflation …
External link:
http://arxiv.org/abs/2407.01055
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation …
External link:
http://arxiv.org/abs/2301.01107
Author:
Chien, Isabel, Deliu, Nina, Turner, Richard E., Weller, Adrian, Villar, Sofia S., Kilbertus, Niki
While interest in the application of machine learning to improve healthcare has grown tremendously in recent years, a number of barriers prevent deployment in medical practice. A notable concern is the potential to exacerbate entrenched biases …
External link:
http://arxiv.org/abs/2205.08875
When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, missing data also affects their implementation, where the simplest approach to overcome this is to continue to sample according …
External link:
http://arxiv.org/abs/2205.03820
Author:
Li, Tong, Nogas, Jacob, Song, Haochen, Kumar, Harsh, Durand, Audrey, Rafferty, Anna, Deliu, Nina, Villar, Sofia S., Williams, Joseph J.
Multi-armed bandit algorithms like Thompson Sampling (TS) can be used to conduct adaptive experiments, in which maximizing reward means that data is used to progressively assign participants to more effective arms. Such assignment strategies increase …
External link:
http://arxiv.org/abs/2112.08507
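The abstract above describes Thompson Sampling assigning participants to more effective arms as data accrues. A minimal sketch of TS for a two-armed Bernoulli bandit with Beta(1, 1) priors, assuming made-up arm success probabilities (this is the textbook algorithm, not the paper's specific experimental setup):

```python
import random

def thompson_sampling(true_probs, n_rounds=1000, seed=1):
    """Run Bernoulli Thompson Sampling; return pull counts per arm.

    true_probs are hypothetical success probabilities used only to
    simulate participant outcomes.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta posterior parameters per arm, starting from Beta(1, 1).
    alpha = [1] * n_arms
    beta = [1] * n_arms
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior; play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Hypothetical arms with success rates 0.3 and 0.6.
print(thompson_sampling([0.3, 0.6]))
```

Because posterior samples from the better arm tend to be larger, allocation concentrates on that arm over time, which is exactly the reward-maximizing behaviour the abstract notes.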
Using bandit algorithms to conduct adaptive randomised experiments can minimise regret, but it poses major challenges for statistical inference (e.g., biased estimators, inflated type-I error and reduced power). Recent attempts to address these challenges …
External link:
http://arxiv.org/abs/2111.00137
Published in:
In Contemporary Clinical Trials July 2024 142
Author:
Williams, Joseph Jay, Nogas, Jacob, Deliu, Nina, Shaikh, Hammad, Villar, Sofia S., Durand, Audrey, Rafferty, Anna
Multi-armed bandit algorithms have been argued for decades as useful for adaptively randomized experiments. In such experiments, an algorithm varies which arms (e.g. alternative interventions to help students learn) are assigned to participants, with …
External link:
http://arxiv.org/abs/2103.12198