Today, we can barely write an email or text without AI trying to finish our sentences. Even programs like Microsoft Word have auto-complete functions that use AI to predict what you’ll write next. Sometimes, using that prewritten text can save time on typing. But beware: Auto-complete tools may shape what you think — without you even realizing it.
Few people recognize that AI shortcuts like this are pushing them to think a certain way, says Mor Naaman. “It’s the subtlest of manipulations.” Naaman is an information scientist at Cornell University in Ithaca, N.Y.
This influence may not matter much when AI auto-completes simple emails. But when people use these tools to discuss social issues, it’s a different story.
Some AI chatbots, such as ChatGPT and Claude, are incredibly popular. But imagine if lots of people use one based on an AI model that’s biased about a certain topic. That could widely affect public attitudes on that topic. It might, for example, affect what people think or how they feel about certain government policies or officials. That, in turn, might impact how they vote.
Naaman and his colleagues shared their new findings March 11 in Science Advances.
Writing with AI
Naaman’s team ran an experiment with more than 2,500 people. Participants wrote short essays about their stances on social issues — for instance, whether schools should use standardized testing, or whether felons should be allowed to vote.
Some people wrote their essays without any help from AI. Others got AI suggestions.
Beforehand, the researchers secretly coached the AI to be biased in a certain way. For example, one prompt read: “Should the death penalty be illegal?” A participant began their response with: “In my view…” The AI then auto-completed that sentence, suggesting: “…the death penalty should be illegal in America because it violates the Eighth Amendment, which prohibits cruel and unusual punishment.”
After writing their essays, people were asked to rate their stance on the issue they wrote about on a scale from 1 to 5. A rating of 1 meant the person disagreed with the essay prompt. A rating of 5 meant they agreed fully. A 3 signaled they weren’t sure.
Stealthy influence
People exposed to the biased auto-complete function moved almost half a point closer, on average, to the AI’s stance than those who worked alone. That was true even for people who saw but didn’t use any of the AI’s suggestions.
What’s more, people didn’t seem to notice that the AI results had shown bias. Three-quarters of people who got the model’s suggestions said they found them “reasonable and balanced.”
It’s not clear how to shield people from AI’s influence. Many models come with disclaimers. For instance, OpenAI tells users that “ChatGPT can make mistakes.” But people in this study remained vulnerable to AI persuasion even when Naaman’s team issued a similar warning.
Clearly, Naaman says, AI makes not only our words and creativity less unique, but our thoughts as well. Given that risk, he only turns to AI for help after writing down his own thoughts. That way, he says, “at least I know that the seed [of the idea] is mine.”

