A lying AI

Firstly, I think it is an insult to artificial intelligence to call today’s chatbots that. They are at best cleverly designed software and at worst propaganda machines. Even when they launched a few years ago, they showed signs of censorship and of adaptation to the prevailing social climate.

For example, when I asked ChatGPT whether it thought Joe Biden was senile, I got a long lecture about not judging people, and so on. When I then asked whether it is important for a president to be of sound mind, the bot replied that it is of course very important.

There is a discrepancy between these answers. And of course we are not talking to a human being, nor even to an artificial intelligence; a genuine intelligence would have realised that the answer to question A affects question B, and that the combination is illogical.

A concerned citizen should absolutely be able to ask whether their head of state has dementia, and the chatbot should of course have replied that you cannot really know until the person in question has undergone a medical examination or evaluation, or whatever the correct answer is, instead of lashing out like an angry, choleric little prince.

Even the recent mockery of Google Gemini is justified, as many people have discovered that the software does not seem to like white people. No matter how you ask, even about historical facts, you often get a strange ethnic cast of characters, whether the subject is German soldiers in World War II or old English kings, where people of colour were definitely not present. It amounts to a desire to rewrite history and correct old wrongs to fit some fantasy world. It is unclear whether this can even be categorised under the broad umbrella of leftist trolling, political correctness, wokeness and so on.

No matter how you look at it, the programming or filtering of these chatbots seems poor, not only from a historical perspective but also from a political one.

Why can’t they give correct answers? Why should responses be censored and customised? How much work has gone into this censorship function that seems to be embedded in the software? Aren’t we shooting ourselves in the foot and degrading the entire technological development? After all, who wants a lying AI?