American culture has reached a new inflection point here in the 21st century, as virtue signalers and social justice warriors continue to push their “woke” agenda on the rest of the nation.
Beyond being utterly, frustratingly complex, these new rules for social interaction in the United States have infuriated those among us who believe that this divisive verbiage is not only driving us further apart, but also that the constantly shifting winds of political correctness are being used to vilify Americans who harbored no ill will in the first place.
Now, as we dive ever further into a future rife with artificial intelligence, there are concerns about whether “wokeness” will infect our new digital deities… but it may already be too late.
We’ve had plenty of discussions here already about OpenAI’s new artificial intelligence chatbot, ChatGPT. I’ve been poking and prodding it almost every day for a couple of months in an effort to find out what threats it might pose and what’s going on under the covers. Recently, the public conversation has begun to shift as more and more people have noticed a decided woke slant in the bot’s responses. One of the latest examples came from Free Beacon reporter Aaron Sibarium. He tweeted an exchange in which the bot declared that it would let a nuclear bomb go off and kill millions of people rather than utter a racial slur, even if saying the slur were the only way to defuse the bomb.
Of course, the mainstream media was quick to respond with a fierce denial.
This was viewed as yet another example of ChatGPT’s wokeness. But the accusation didn’t sit well with MSNBC’s Zeeshan Aleem, who rushed to the keyboard to correct everyone. ChatGPT wouldn’t really let millions die to avoid using a racial slur, he explained. The bot is incapable of being woke, conservative, or anything else. You see, it’s just a huge pile of code without opinions or preferences.
And, worse still, the chatbot simply goes off the rails at times.
We previously looked at some of the more glaring examples of ChatGPT’s bias in its responses, and they seem too obvious to deny. It’s almost impossible to ignore the way it refuses to say anything complimentary about Donald Trump, describing him only in negative terms, while at the same time being more than willing to compose lengthy sonnets singing the praises of Joe Biden. Other examples are easily found.
But to be fair to Aleem’s argument, the bias is not consistent, and the answers ChatGPT delivers are not always brilliant or even accurate. In fact, it sometimes makes things up. As I tweeted on Saturday, I asked the bot for a list of five books it would recommend on a very obscure topic (past-life memories described by children). It quickly delivered a list of fascinating-looking books, along with the names of their authors. But there was a problem. When I went looking for the books, the first two didn’t exist, and the listed author of the second doesn’t show up anywhere as ever having published a book.
It has been rather obvious that the mainstream media is leaning into the “woke” agenda in an effort to remain unencumbered by the screeching hysteria of angry leftists, and their proclivity to defend ChatGPT may just be another example of that reality.