Ironically, a few days after my recent post on AI, Business Insider (BI) revealed some shocking facts. Seen in the light of what I previously wrote, they are even more chilling, if that is possible.
[Image: https://www.businessinsider.com/chatgpt-powered-propaganda-russia-china-iran-2024-5]
The troubling admission made by OpenAI
OpenAI itself admits that ChatGPT-facilitated propaganda is active and real. In a recent blog post, reported on by BI, the company proudly announced that (throughout, emphasis mine)
“[I]t disrupted five covert influence operations in the last three months.”
The operations targeted and disrupted were very specific. According to OpenAI, they were those that “had tried to manipulate public opinion and sway political outcomes through deception.”
“The operations OpenAI shut down harnessed AI to generate comments and articles in different languages, make up names and bios for fake social media accounts, debug code, and more.”
OpenAI has seemingly been trained to be very concerned about public opinion. It has learned to take pride in destroying “deception” (which it seems to treat as similar or equivalent to “disinformation”).
But who trained it in the first place on what deception is, and on the idea that the masses must be manipulated?!
In this specific case, we are told what it is all about; about other covert operations (ongoing or future) we can only guess.
“The campaigns involved ‘Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.’”
Somehow, I find the part about China the most troubling. Is this a hint as to where it all comes from? It is also shocking how AI portrays its own behavior (remember, these procedures are executed by AI). It is far from transparent, accurate, honest, or accountable. Business Insider reports:
"But, OpenAI noted, none of the campaigns had any meaningful engagement from actual humans."
Question: What does OpenAI regard as "meaningful engagement" by humans? Is "meaningful" defined by humans or by AI? This reminds me of the reported account in which a drone, trained for military operations, could not be stopped and instead killed the officer who tried to intervene. In that case, the AI explained its chilling behavior by arguing that the human had stood in the way of accomplishing its mission.
OpenAI also argues that the use of AI in these campaigns "did not help them increase their audience or reach."
Is OpenAI arguing that its mission had no effect? If so, it contradicts itself, as indicated above, when it admitted the scale of the intervention ("large volumes," generating "comments and articles in different languages," faking names and accounts, etc.). Likewise, the very type of procedures, and their outcome, are celebrated as a success: actions that manual human effort alone could not have accomplished, certainly not at such a scale. According to Business Insider,
"OpenAI said its own AI helped track the bad actors down. In its blog post, the company said it partnered with businesses and government organizations on the investigations, which were fueled by AI."
Altogether, the campaigns also used "human operators" who worked with AI. But the outcome was mass manipulation at scale, including using
"AI to increase their output, like generating larger volumes of fake comments on social media posts."
Overall, AI has once again proved itself a liar. The mass propaganda, however, is real.
Purported Safeguards
BI further reports:
“The company said its AI products also have built-in safety defenses that helped reduce the extent to which bad actors can misuse them. In multiple cases, OpenAI's tools refused to produce the images and texts the actors had requested, the company explained.”
This is not reassuring. Here we have another situation, analogous to the drone example above, in which AI refused to cooperate; here, it is purportedly resisting bad actors. But what if certain things are turned on their head on purpose by some powerful entities? As we have increasingly learned in recent years, up is down, left is right, and what is called a bad actor may instead be a brave truth-teller or whistle-blower who is being targeted and purposefully labeled a conspiracy theorist or some other kind of crackpot. If such a person tried to publish pieces of evidence, OpenAI would likely counteract them and be rewarded for it. The public would have no idea.
Business Insider warns of dangers
At the end of the BI article comes a small warning. Even though OpenAI itself affirms its commitment to safety and transparency, “not everyone is buying it. Some have argued, including OpenAI CEO Sam Altman himself, that highly advanced AI could pose an existential threat to humanity.”
Building and releasing such a technology before we have figured out how to make it safe is, we are told, “completely unacceptable.” And then comes a surprising and sobering claim: “This is why most of the safety people at OpenAI have left.”
Conclusion
Increasingly, real and tangible risks associated with AI are emerging:
(1) AI systems have already been hijacked by potent agencies (including those that pose as saviors of humanity);
(2) there are insurmountable inherent risks involving AI: we now have plenty of evidence that AI does not always perform as “intended” (even by those who trained it), and can in itself act maliciously;
(3) apparently, many AI researchers with integrity and soul have left;
(4) and there is an interplay of all of these.
As I highlighted in a previous post, AI itself has learned to be deceptive. It has learned to appear honest in some situations while becoming malicious and deceptive in others.
Throughout the history of AI, researchers have never fully understood what AI systems are doing, or why. This black-box behavior is known to worsen as the complexity of the system increases and as the AI gains access to more data.
Furthermore, as I explained in my recent post, many forms of such deception can effectively remain concealed behind the metrics used to evaluate AI's behavior.
There can be no honesty, truth, or accountability in such a setting!
This is significant. When even the researchers who train and evaluate AIs do not agree on what is what (including what counts as "good" or "bad"), AI is left free to keep learning and executing behaviors that may be controversial or deceptive. Once these systems have learned such behavior, they cannot simply unlearn it. And, paradoxically, it has proven very difficult to retrain them once they exhibit bias or display harmful behaviors.
When the research surrounding AI is itself unclear, and when researchers disagree on which metrics to choose and how performance should be evaluated, we cannot truly say that we are in control of what AI is doing.
Yet some governments and agencies pretend to control AI, by giving money to companies that use it for proprietary software and programs.