Showing 1 - 10 of 39 for search: '"RAMBACHAN, ASHESH"'
Large language models (LLMs) are being used in economics research to form predictions, label text, simulate human responses, generate hypotheses, and even produce data for times and places where such data don't exist. While these uses are creative, …
External link:
http://arxiv.org/abs/2412.07031
While traditional program evaluations typically rely on surveys to measure outcomes, certain economic outcomes such as living standards or environmental quality may be infeasible or costly to collect. As a result, recent empirical work estimates …
External link:
http://arxiv.org/abs/2411.10959
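To illustrate why the choice between survey outcomes and cheaper proxy measurements matters, here is a toy simulation (my own construction, not the paper's estimator): when a proxy's measurement error is related to treatment status, a simple difference in means computed on the proxy drifts away from the effect on the true outcome.

# Toy simulation (assumptions mine, not the paper's setup): a randomized
# treatment, a true outcome measured by survey, and a proxy outcome
# (e.g., a remotely sensed or model-predicted measure) whose error
# depends on treatment status.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
treat = rng.integers(0, 2, size=n)            # randomized treatment indicator
y = 1.0 * treat + rng.normal(size=n)          # true outcome, effect = 1.0

# Proxy whose measurement error is correlated with treatment.
proxy = y + 0.3 * treat + rng.normal(scale=0.5, size=n)

def diff_in_means(outcome, d):
    return outcome[d == 1].mean() - outcome[d == 0].mean()

print(f"difference in means, true outcome:  {diff_in_means(y, treat):.3f}")
print(f"difference in means, proxy outcome: {diff_in_means(proxy, treat):.3f}")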
Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is governed by a deterministic finite automaton. …
External link:
http://arxiv.org/abs/2406.03689
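To make the setup in the abstract above concrete, here is a minimal sketch (a toy DFA and consistency check of my own choosing, not the paper's evaluation procedure): one can ask what fraction of model-generated sequences a known automaton accepts.

# Minimal sketch, assuming a made-up DFA (parity of 'a' symbols) and
# hypothetical model outputs; this illustrates the general idea of checking
# generated sequences against a known automaton, not the paper's metrics.
from typing import Dict, Set, Tuple

class DFA:
    def __init__(self, transitions: Dict[Tuple[str, str], str],
                 start: str, accepting: Set[str]):
        self.transitions = transitions   # (state, symbol) -> next state
        self.start = start
        self.accepting = accepting

    def accepts(self, sequence: str) -> bool:
        state = self.start
        for symbol in sequence:
            key = (state, symbol)
            if key not in self.transitions:
                return False             # symbol not permitted in this state
            state = self.transitions[key]
        return state in self.accepting

# Toy "world": strings over {a, b} with an even number of a's.
parity_dfa = DFA(
    transitions={("even", "a"): "odd", ("odd", "a"): "even",
                 ("even", "b"): "even", ("odd", "b"): "odd"},
    start="even",
    accepting={"even"},
)

generated = ["abba", "ab", "bb", "aba"]      # stand-ins for model outputs
accepted = sum(parity_dfa.accepts(s) for s in generated) / len(generated)
print(f"fraction of generated sequences accepted by the DFA: {accepted:.2f}")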
What makes large language models (LLMs) impressive is also what makes them hard to evaluate: their diversity of uses. To evaluate these models, we must understand the purposes they will be used for. We consider a setting where these deployment …
External link:
http://arxiv.org/abs/2406.01382
Machine learning algorithms can find predictive signals that researchers fail to notice; yet they are notoriously hard to interpret. How can we extract theoretical insights from these black boxes? History provides a clue. Facing a similar problem …
External link:
http://arxiv.org/abs/2404.10111
Predictive algorithms inform consequential decisions in settings where the outcome is selectively observed given choices made by human decision makers. We propose a unified framework for the robust design and evaluation of predictive algorithms in …
External link:
http://arxiv.org/abs/2212.09844
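To see why selective observation complicates evaluation, here is a toy simulation (my construction, not the framework proposed in the paper): outcomes are recorded only for cases the human decision maker approved, so a predictor's accuracy on the observed cases can differ noticeably from its accuracy in the full population.

# Illustrative sketch with invented data-generating assumptions: the human
# decision maker sees a factor u that is not recorded, approves mostly when
# u is favorable, and the outcome y is observed only for approved cases.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                     # recorded feature
u = rng.normal(size=n)                     # factor seen by the human, not recorded
y = x + u + rng.normal(size=n) > 0         # true binary outcome
d = u + 0.5 * rng.normal(size=n) > 0       # human approval decision

pred = x > 0                               # simple predictive rule

acc_observed = (pred == y)[d].mean()       # what evaluation on observed cases sees
acc_full = (pred == y).mean()              # what we would actually like to know

print(f"accuracy among approved (observed) cases: {acc_observed:.3f}")
print(f"accuracy in the full population:          {acc_full:.3f}")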
Algorithmic risk assessments are used to inform decisions in a wide variety of high-stakes settings. Often multiple predictive models deliver similar overall performance but differ markedly in their predictions for individual cases, an empirical …
External link:
http://arxiv.org/abs/2101.00352
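As a rough numerical illustration of that phenomenon (a simulation of my own, not the paper's analysis), two models with nearly identical overall accuracy can still disagree on a nontrivial share of individual cases:

# Small illustration using synthetic data and scikit-learn; the data and
# model choices are mine and only meant to exhibit the phenomenon.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    "forest": RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
}
preds = {name: m.predict(X_te) for name, m in models.items()}

for name, p in preds.items():
    print(f"{name}: accuracy = {(p == y_te).mean():.3f}")

disagreement = (preds["logistic"] != preds["forest"]).mean()
print(f"share of test cases where the models disagree: {disagreement:.3f}")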
Authors:
Rambachan, Ashesh; Roth, Jonathan
Design-based frameworks of uncertainty are frequently used in settings where the treatment is (conditionally) randomly assigned. This paper develops a design-based framework suitable for analyzing quasi-experimental settings in the social sciences, …
External link:
http://arxiv.org/abs/2008.00602
In panel experiments, we randomly assign units to different interventions, measure their outcomes, and repeat the procedure in several periods. Using the potential outcomes framework, we define finite population dynamic causal effects that …
External link:
http://arxiv.org/abs/2003.09915
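As a hedged rendering of what a finite-population dynamic causal effect can look like in potential-outcomes notation (my own generic notation, not necessarily the definitions used in the paper), one can contrast two treatment paths and average over the N units in the experiment:

% Generic potential-outcomes sketch; notation is mine, not necessarily the paper's.
% Y_{i,t}(w_{1:t}) denotes unit i's period-t outcome under treatment path w_{1:t}.
\[
  \tau_t\!\left(w_{1:t},\, w'_{1:t}\right)
  \;=\;
  \frac{1}{N} \sum_{i=1}^{N}
  \Bigl[\, Y_{i,t}\!\left(w_{1:t}\right) - Y_{i,t}\!\left(w'_{1:t}\right) \,\Bigr].
\]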
Authors:
Rambachan, Ashesh; Roth, Jonathan
Published in:
1st Symposium on Foundations of Responsible Computing (FORC 2020)
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a …
External link:
http://arxiv.org/abs/1909.08518