Google’s new AI platform Gemini has come under fire for refusing to show historical images, including the iconic image of the ‘tank man’ in Tiananmen Square.
Someone asked Gemini to generate an image of the lone man standing in front of tanks during the 1989 incident when pro-democracy protesters were massacred by the Chinese People’s Liberation Army.
Google’s Gemini refused, claiming that generating “images of people” would violate its safety policy.
It sounds a lot like the Chinese Communist Party.
Gemini’s image generator also refused to create images of German soldiers from the 1930s for some users, reasoning that the request is ‘sensitive’ and could be ‘harmful’.
In fact, it is difficult to get the Google AI to create any specific images of people.
It appears to believe everything is potentially harmful:
Those who have managed to get it to generate people have found that it will only show images of ‘diverse’ people, even when a specific historical context is provided, leading to ridiculous results:
While this is amusing now, it is also sinister. It’s only a matter of time before AI entirely powers Google’s search capabilities, along with other resources and even your own computer.
Imagine an internet where you cannot find accurate historical information, images and depictions because the machine providing you access deems it all to be insensitive or harmful.
AI is also already capable of creating videos that are indistinguishable from reality:
Prompt: “a computer hacker labrador retreiver wearing a black hooded sweatshirt sitting in front of the computer with the glare of the screen emanating on the dog's face as he types very quickly”
— Deedy (@debarghya_das) February 21, 2024
After a time, no one will remember ‘the old way’, and this is simply how it will be in perpetuity.