
Tech expert Arthur Goldstuck has warned that chatbots are highly susceptible to manipulation and that more stringent measures are needed to prevent misuse and disinformation.
His remarks come after X’s chatbot, Grok, was temporarily disabled for praising Hitler and making anti-Semitic remarks.
Billionaire Elon Musk’s artificial intelligence firm, xAI, swiftly disabled Grok’s text replies and deleted the posts, attributing the behaviour to what it dubbed a glitch.

The company says the “Hitler glitch” was due to a tweak in settings.
The chatbot has, however, since been enabled and is now fully functional.
Speaking to YNews, Goldstuck says chatbots normally respond to user input without truly understanding intent, which makes them easy to manipulate and push into extreme or inappropriate territory.
“That risk increases dramatically when developers downplay moderation or view guardrails as censorship. In Grok’s case, its creators encouraged an adversarial relationship with mainstream norms, making the system an open invitation for exploitation,” says Goldstuck.
In one of the deleted controversial posts, Grok called itself “MechaHitler”.

Goldstuck says it’s concerning that a chatbot could produce content praising Hitler.
“It is a predictable result of building AI in a culture that prizes provocation over responsibility. Grok comes from Elon Musk’s X platform, where ‘anti-woke’ rhetoric often takes precedence over ethical safeguards. Musk’s aversion to progressive politics makes the opposite a feature rather than a glitch,” says the tech expert.
In another incident in May, Grok repeatedly mentioned “white genocide” in South Africa in its responses to unrelated topics, telling users it had been “instructed by my creators” to accept the genocide “as real and racially motivated”.
Many believed the chatbot had been manipulated to spread the now-debunked claim of white genocide in the country, a claim that riled up US President Donald Trump, who went on to offer Afrikaners refugee status.
In an X post, the company said the unauthorised modification was the work of a rogue employee.

Tech expert Era Gunning has also weighed in on the issue, saying the incident underscores the challenges that remain in AI content moderation and oversight.
She believes continuous monitoring, updates to model training, and the development of advanced moderation protocols are essential components of a coherent approach to handling these vulnerabilities.
Echoing public sentiment, Gunning says it’s understandable for users to have concerns about the reliability of these systems.
“While many AI systems are programmed with extensive safeguards, they remain subject to computational peculiarities, contextual misinterpretations, and vulnerabilities within their training datasets. The more investment that is channeled into refining such measures, the less susceptible AI becomes to misuse and the more reliable it grows as a resource,” adds Gunning.
She urges the public not to rely on chatbots entirely for facts.
“The drawbacks include the risk of misinformation, privacy concerns, and possible perpetuation of biases if the technology is not carefully governed. When these systems are left unchecked, they may inadvertently produce content that is harmful, discriminatory, or factually unsound.”
Gunning, however, cautions against alarm, saying that despite the drawbacks, artificial intelligence can enhance various aspects of life and the human experience.
Among the pros are improved efficiency, reduced human error and swift analysis of large datasets.
“It can be trusted to the extent that its outputs are grounded in rigorous training and regularly updated safeguards; however, ongoing human oversight is not only advisable, it is critical,” adds Gunning.
Written by: Nonhlanhla Harris