Something's up with Gemini…

IMAGE: Gemini

This beautiful phrase is the end of a long conversation between Google's generative assistant, Gemini, and a user who was trying to use it for a university project, and who, as could hardly be otherwise, was absolutely shocked when he read it.

The model that, according to the company, is its «most capable AI model», and which theoretically has "safety filters that prevent it from engaging in disrespectful, sexual, violent or dangerous discussions and from encouraging harmful acts," came out with this atrocity after a long but apparently quite innocuous exchange about the challenges seniors currently face in extending their income after retirement. The permanent link to the conversation is still available.

What causes these problems in the operation of a generative algorithm? Is it a poor selection of the materials used in its training, or a problem in programming the filters that try to prevent certain responses from appearing? One way or another, Google has earned the dubious honor of topping the list of absurd responses, of those poorly named "hallucinations," with errors that go far beyond simply inventing references or articles, as OpenAI's ChatGPT usually does: they range from recommending eating stones to putting glue on your pizzas, by way of making spaghetti with gasoline or considering Obama the first Muslim president of the United States. And, logically, they have a much greater tendency to go viral than a simple reference error.

In many cases, the problems with "quaint" responses have to do with sources of information that are incorrectly ingested because they are satirical, as in the case of The Onion: a fun, witty page that we are all grateful exists… but very possibly not the most suitable for feeding a generative algorithm, at least as long as those algorithms have, like Sheldon Cooper, serious problems detecting irony.

What drives Gemini to introduce such savagery into a seemingly harmless conversation? Has anyone at Google programmed some kind of mechanism that simulates patience or boredom? The traceability of this kind of answer is usually impossible, and Google searches, already completely saturated with the phrase in question, which could resemble the result of food and alcohol poisoning after reading the complete works of Bukowski, do not seem to offer even minimally similar results.

Google's response, that AI chatbots like Gemini can "sometimes respond with nonsensical responses, and this is an example of that," is not particularly reassuring, because such a response goes far beyond being nonsensical. Google confirmed that the AI text violated its policies and that it had taken steps to prevent similar results from occurring, but what exactly does that mean? Who can prevent, in an issue like this, a good part of the population from fearing an encounter with that «ghost in the machine» (and no, I am not referring to the album by my beloved The Police) capable of, in a conversational environment, telling you atrocities like this?

How can a company that was already talking about the importance of this type of technology back in 2016, and that dedicated itself to training its entire workforce in it, reach such a level of irresponsibility?


This article is also available in English on my Medium page, «Now you know: don't test Gemini's patience».
