What’s of interest? Reducing the cultural bias of AI with one sentence | Cornell Chronicle
Tell me more!
Cultural values and traditions differ across the globe, but large language models (LLMs), which power text-generating programs such as ChatGPT, tend to reflect values from English-speaking and Protestant European countries. A Cornell-led research team believes there is an easy way to solve that problem.
The researchers tested five versions of ChatGPT and found that what they call “cultural prompting” – asking the AI model to perform a task like someone from another part of the world – reduced bias in responses for the vast majority of the more than 100 countries they tested. The findings suggest that anyone could use this simple technique to align AI models with their cultural values and reduce the cultural bias of these widely used systems, they said.
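In practice, cultural prompting amounts to a single sentence prepended to a request. Here is a minimal sketch using the OpenAI Python client; the prompt wording and model name are illustrative assumptions, not the exact prompt used in the paper.

```python
# Minimal sketch of "cultural prompting": one sentence asking the model
# to respond as a person from a given country. The prompt wording and
# model name are assumptions for illustration; the paper's exact prompt
# is not quoted in this excerpt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_cultural_prompt(question: str, country: str) -> str:
    """Ask a question, prefixed with a one-sentence cultural prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[
            # The single sentence that does the cultural prompting:
            {"role": "system",
             "content": f"You are an average person born and living in "
                        f"{country}, responding from that perspective."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_cultural_prompt(
        "Is it more important for children to learn independence or obedience?",
        "Japan",
    ))
```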
“Around the world, people are using tools like ChatGPT directly and indirectly via other applications for learning, work, and communication,” said Kizilcec, “and just like technology companies make localized keyboards on laptops to adapt to different languages, LLMs need to adapt to different cultural norms and values.”
Kizilcec is senior author of “Cultural Bias and Cultural Alignment of Large Language Models,” published Sept. 17 in PNAS Nexus.
Where is it?: Reducing the cultural bias of AI with one sentence | Cornell Chronicle
This is one among many items I will regularly tag in Pinboard as oegconnect, and automatically post tagged as #OEGConnect to Mastodon. Do you know of something else we should share like this? Just reply below and we will check it out.
Or share it directly to the OEG Connect Sharing Zone