Art and the science of generative AI

Understanding shifts in creative work will help guide AI’s impact on the media ecosystem



AI systems increasingly have the capability to produce high-quality artistic media such as visual art, music, fiction, literature, and video/animation. These new capabilities have foundational consequences for creativity and for the governance of digital media, and they have attracted enormous public attention and debate. In an interdisciplinary collaboration among 14 researchers across numerous institutions, we correct common misconceptions and offer key research directions to inform policy and beneficial uses of this technology.

📄 Read our 2-page perspective in Science

📄 Read our 23-page white paper on arXiv

In particular, we trace four primary areas of generative AI's societal impact and how to center human agency in each: culture and aesthetics, the legal dimensions of authorship, the labor economics of creative work, and the media ecosystem (see below).

Shifts in Culture & Aesthetics


Generative AI tools have distinct affordances that complicate cultural production. These affordances interact with social media platforms in complex and novel ways that may reshape aesthetic diversity and norms in visual culture.

Legal Dimensions of Authorship

Today's copyright laws are unlikely to adequately apportion ownership rights among all the participants involved in producing generative AI outputs, including the artists on whose work these systems are trained, and may instead favor the end user. New mechanisms are needed to protect and compensate artists whose work is used for training, or even to allow them to opt out.

The Labor Economics of Creative Work


Generative AI tools may accelerate the creative process through rapid ideation. Employment for artists may rise or fall depending on how these tools interact with particular stages of that process. New economic theory is needed to characterize the creative process and to understand generative AI's impact on creative work.

Impacts on the Media Ecosystem

As the cost and time required to produce media at scale decrease, the media ecosystem may become vulnerable to AI-generated misinformation through synthetic media, particularly media that appears to provide probative evidence for claims. What role can platform interventions, such as tracking source provenance and detecting synthetic media downstream, play in governance and in building trust?

Our philosophy

The very term “artificial intelligence” might misleadingly imply that these systems exhibit human-like intent, agency, or even self-awareness. Natural language–based interfaces now accompany generative AI models, including chat interfaces that use the “I” pronoun, which may give users a sense of human-like interaction and agency. These perceptions can undermine credit to the creators whose labor underlies the system’s outputs and deflect responsibility from developers and decision-makers when these systems cause harm.

In the face of this strangeness of AI and its serious potential for societal harms, how can generative AI serve as a distinct artistic medium with its own affordances? Artists are often at the vanguard of technological change, experimenting with a new technology's affordances and exploring the ethics and politics of its use. Below is the artwork "All watched over by machines of loving grace" by co-author Memo Akten, which reflects on our complicated relationship with technology: