Showing 1 - 10 of 115 for search: '"RAMBACHAN, ASHESH"'
Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is governed by a deterministic finite automaton. This inclu…
External link:
http://arxiv.org/abs/2406.03689
What makes large language models (LLMs) impressive is also what makes them hard to evaluate: their diversity of uses. To evaluate these models, we must understand the purposes they will be used for. We consider a setting where these deployment decisi…
External link:
http://arxiv.org/abs/2406.01382
Machine learning algorithms can find predictive signals that researchers fail to notice; yet they are notoriously hard to interpret. How can we extract theoretical insights from these black boxes? History provides a clue. Facing a similar problem --…
External link:
http://arxiv.org/abs/2404.10111
Predictive algorithms inform consequential decisions in settings where the outcome is selectively observed given choices made by human decision makers. We propose a unified framework for the robust design and evaluation of predictive algorithms in se…
External link:
http://arxiv.org/abs/2212.09844
Algorithmic risk assessments are used to inform decisions in a wide variety of high-stakes settings. Often multiple predictive models deliver similar overall performance but differ markedly in their predictions for individual cases, an empirical phen…
External link:
http://arxiv.org/abs/2101.00352
Authors:
Rambachan, Ashesh, Roth, Jonathan
This paper develops a finite-population, design-based theory of uncertainty for studying quasi-experimental settings in the social sciences. In our framework, treatment is determined by stochastic idiosyncratic factors, but individuals may differ in…
External link:
http://arxiv.org/abs/2008.00602
In panel experiments, we randomly assign units to different interventions, measure their outcomes, and repeat the procedure in several periods. Using the potential outcomes framework, we define finite population dynamic causal effects that captu…
External link:
http://arxiv.org/abs/2003.09915
Authors:
Rambachan, Ashesh, Roth, Jonathan
Published in:
1st Symposium on Foundations of Responsible Computing (FORC 2020)
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a part…
External link:
http://arxiv.org/abs/1909.08518
Authors:
Rambachan, Ashesh, Shephard, Neil
Bojinov & Shephard (2019) defined potential outcome time series to nonparametrically measure dynamic causal effects in time series experiments. Four innovations are developed in this paper: "instrumental paths," treatments which are "shocks," "linear…
External link:
http://arxiv.org/abs/1903.01637
Academic article
This result cannot be displayed to users who are not signed in. Sign in to view it.