Large-scale generative AI (genAI) models like large language models (LLMs) are now widely deployed, marketed as helpful assistants and creative partners. Yet recent research has unearthed significant similarities in how LLMs represent different concepts. Building on this, recent work from the Argus Lab showed that a broad set of LLMs respond to creative prompts more similarly to one another than humans do, even though each model's responses appear creative in isolation. Furthermore, as more genAI-generated data is posted online, it will be scraped into other genAI models' training datasets, so models will increasingly be trained on each other's outputs. Together, these findings and the realities of internet-scale data collection hint at a larger, understudied problem: today's genAI models are similar and may be converging.
Given the widespread use of genAI models, we believe it is crucial to study convergence and its possible effects on models and users. The Argus Lab has a variety of ongoing research projects examining the causes and consequences of convergence among large-scale AI models. We study how models might evolve over time if they are trained on each other's generated outputs, methods to mitigate unwanted creative convergence in model outputs, and broader issues related to representational similarity in models.
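As a concrete illustration of what "representational similarity" means in practice, the sketch below computes linear centered kernel alignment (CKA), one commonly used metric for comparing two models' internal representations (Kornblith et al., 2019). This is a minimal example under the assumption that hidden states for the same set of inputs have already been extracted from both models; the activation matrices, dimensions, and model labels are hypothetical placeholders, not results from any study.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_samples, n_features). The two feature dimensions may differ.
    Returns a similarity score in [0, 1]."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: hidden states from two models on the same 100
# inputs, with different hidden sizes (768 vs. 1024). Random data here
# stands in for real activations and should yield a CKA near 0.
rng = np.random.default_rng(0)
acts_model_a = rng.standard_normal((100, 768))
acts_model_b = rng.standard_normal((100, 1024))
print(f"CKA: {linear_cka(acts_model_a, acts_model_b):.3f}")
```

Because CKA compares models through their sample-by-sample similarity structure rather than raw coordinates, it can quantify convergence even between models with different architectures and hidden dimensions.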