Abstract

In this dissertation, we argue that large language models (LLMs) exhibit a considerable degree of \textit{algorithmic fidelity}: a property whereby they have modeled the ideas, behaviors, and attitudes of the populations that generated their training data. This has important implications for social science, as such fidelity theoretically allows LLMs to serve as effective proxies for human beings in experiments and research. We demonstrate this empirically across several social science domains (political partisanship, demographic surveying, voting behavior, hot-button policy issues, news media, populism, congressional summaries), across several applications (replicating social science survey findings, assisting in the coding of text datasets, inferring demographics, automating interventions to improve conversations about divisive topics), and at several levels of granularity (from findings about the entire U.S. population, down to specific demographic groups, down to the individual level). It is intrinsically interesting that LLMs could learn such behaviors through the unsupervised objective on which they are trained. It is also strategically useful to establish where and to what extent they have done so, so that these phenomena can be studied in cheaper and formerly impossible ways. This work serves as a preliminary study of these phenomena and an early, demonstrative methodology for drawing out the algorithmic fidelity of large language models.

Degree

PhD

College and Department

Computational, Mathematical, and Physical Sciences; Computer Science

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2023-05-23

Document Type

Dissertation

Handle

http://hdl.lib.byu.edu/1877/etd13229

Keywords

Natural Language Processing, Generative Modeling, Social Science, Simulation

Language

English
