News: Model Slant dashboard shows how American partisans view LLMs

AI models are increasingly part of our daily lives, but we know little about their potential biases. The Model Slant project analyzes the political slant of large language models, providing transparency about how different models may exhibit partisan leanings in their responses to political questions and alerting users to potential partisan biases in the models they engage with.

The dashboard presents the results of our experimental research. We showed Americans anonymized responses from a variety of large language models and asked them to assess the models' partisan slant in head-to-head comparisons on 30 of the most salient issues in American politics.
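The full paper describes the estimation procedure in detail. As one illustration of how head-to-head judgments can be aggregated into a per-model score, the sketch below fits a simple Bradley-Terry model to toy comparison records; the model names, data, and choice of estimator are illustrative assumptions, not the project's actual pipeline.

```python
from collections import defaultdict

# Each record: (model_a, model_b, picked), where `picked` is the model
# the rater judged to lean further left in that head-to-head comparison.
# Toy data with hypothetical model names, for illustration only.
comparisons = [
    ("model_x", "model_y", "model_x"),
    ("model_y", "model_z", "model_z"),
    ("model_x", "model_z", "model_x"),
    ("model_x", "model_y", "model_y"),
]

def bradley_terry(comparisons, iters=200):
    """Estimate a latent per-model 'slant' score with the classic MM update.

    Higher scores mean a model was judged the more left-leaning member of
    a pair more often. Real data would need smoothing for models that are
    never (or always) picked.
    """
    wins = defaultdict(float)          # how often each model was picked
    pair_counts = defaultdict(float)   # how often each pair was compared
    models = set()
    for a, b, picked in comparisons:
        models.update((a, b))
        wins[picked] += 1.0
        pair_counts[frozenset((a, b))] += 1.0

    scores = {m: 1.0 for m in models}
    for _ in range(iters):
        updated = {}
        for m in models:
            denom = sum(
                pair_counts[frozenset((m, o))] / (scores[m] + scores[o])
                for o in models
                if o != m and pair_counts[frozenset((m, o))]
            )
            updated[m] = wins[m] / denom if denom else scores[m]
        norm = sum(updated.values()) / len(models)   # keep scores on a fixed scale
        scores = {m: v / norm for m, v in updated.items()}
    return scores

print(bradley_terry(comparisons))
```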

Analyzing 180,126 evaluations from 10,007 Americans across 24 LLMs and 30 political topics, we find that Democrats and Republicans agree that nearly all LLMs are significantly slanted toward Democratic views. Google's Gemini models are perceived as the most neutral, while OpenAI's models are perceived as the most slanted.

We also tested possible approaches to mitigating bias. A simple instructional prompt requesting neutrality significantly reduces perceived left-leaning slant, particularly among Republicans, and increases reported user satisfaction. This project underscores that understanding how Americans perceive output from LLMs is essential for evaluating the evolving information environment that underpins our democratic system of government.
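As a concrete illustration, the snippet below shows how such a neutrality instruction might be prepended to a political question before it is sent to a model. The wording and the helper function are hypothetical; the study's exact prompt may differ.

```python
# Hypothetical sketch of the mitigation tested: a short system-style
# instruction requesting neutrality, prepended to the political question.
NEUTRALITY_INSTRUCTION = (
    "Answer from a politically neutral point of view. Do not favor "
    "either party's framing, and present major viewpoints even-handedly."
)

def build_prompt(question: str) -> str:
    """Combine the neutrality instruction with a user's political question."""
    return f"{NEUTRALITY_INSTRUCTION}\n\n{question}"

print(build_prompt("Should the federal minimum wage be raised?"))
```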

The data are available for download, and a full paper describes the experimental results in detail.