March 19, 2024

Google’s AI platform, Gemini, recently came under fire for its handling of historical accuracy in image generation, as reported by TechCrunch. Built on the advanced Imagen 2 model, Gemini aims to create detailed and contextually appropriate images. However, issues have arisen when the AI is asked to generate images from prompts lacking explicit detail, leading to inaccuracies caused by biases in its training data.
A notable instance was the AI’s handling of prompts such as “a person walking a dog in a park,” where the absence of specific detail led Gemini to predominantly produce images of white individuals, a tendency stemming from the over-representation of white people in its training data. Conversely, the system overcorrected in scenarios involving historical figures, such as popes and the founding fathers of the United States, failing to depict any white individuals at all.
There is Still More Work to be Done for Pacific Islanders
Google asserts that Gemini is designed to generate a diverse array of images reflecting a variety of people, especially when the user specifies no particular characteristics. This incident illuminates the broader challenge of mitigating biases in AI training data and underscores the importance of accurate and diverse representation in AI-generated imagery. Given the limited image data available for Pacific Islanders, these biases are likely to persist significantly longer for the region than for populations that are more accurately represented in training sets.
Source: TechCrunch
*An AI tool was used to add an extra layer to the editing process for this story.