ChatGPT as Research Scientist: Probing GPT's capabilities as a Research Librarian, Research Ethicist, Data Generator, and Data Predictor.
Author: Lehr SA (Cangrade, Inc., Watertown, MA 02472); Caliskan A (Information School, University of Washington, Seattle, WA 98195); Liyanage S (Department of Psychology, Harvard University, Cambridge, MA 02138); Banaji MR (Department of Psychology, Harvard University, Cambridge, MA 02138)
Language: English
Source: Proceedings of the National Academy of Sciences of the United States of America [Proc Natl Acad Sci U S A]. 2024 Aug 27; Vol. 121 (35), pp. e2404328121. Date of electronic publication: 2024 Aug 20.
DOI: 10.1073/pnas.2404328121
Abstract: How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, GPT-3.5 and GPT-4 hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations such as p-hacking in fictional research protocols, correcting 88.6% of blatantly presented issues and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent to usefulness for both data generation and skills like hypothesis generation. In contrast, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent from their training data, and neither appeared to leverage substantially new information when predicting more vs. less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, already a decent research ethicist, capable of data generation in simple domains with known characteristics, but poor at predicting novel patterns of empirical data to aid future experimentation.
Competing interests statement: Cangrade builds AI-driven tools for businesses but is not affiliated with OpenAI.
Database: MEDLINE