
This image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot's false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation about the Israel-Hamas war for posting the photo.
At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology remains far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today, she weighs only nine. The only nutrition she gets to help her condition is milk, Modallala told AFP, and even that is "not always available".
Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources."
The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.
- Radical right bias -
Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics.
"We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach.
"Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."
AI does not necessarily seek accuracy -- "that's not the goal," the expert said.
Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.
That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.
- 'Friendly pathological liar' -
An AI's bias is linked to the data it is fed and to what happens during fine-tuning -- the so-called alignment phase -- which determines what the model rates as a good or bad answer.
"Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said.
"Its training data has not changed and neither has its alignment."
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For Diesbach, chatbots must never be used as tools to verify facts.
"They are not made to tell the truth," but to "generate content, whether true or false", he said.
"You have to look at it like a friendly pathological liar -- it may not always lie, but it always could."