Nefarious potential of innovation and AI may be to their own demise (Part I)
From blaming to counterfeiting and the conundrum of training AI
Originally, I just wanted to write a short note that complements my previous post about the FBI’s predictions of forthcoming cyber attacks that purportedly will be done by Chinese hackers and destroy large parts of the American infrastructure. These predictions align with those of a cyber pandemic that will mimic the Covid pandemic. In turn, these are similar to warnings of the pandemic itself that previously hinted at a novel viral outbreak in China that ostensibly could be utilized for the roll-out of global mRNA “vaccines.” And of course, when the “novel coronavirus” did appear, it was blamed on China, and any possible involvements of nations, be it via funding or accompanying gain-of-function work, have been vehemently denied.
Blaming someone else is an ancient feature of humanity and reminded me of an Old Testament account where this attempt effectively backfired. As I kept thinking about this, I realized that the related societal and scientific problem is huge - especially in the context of advances in computer technology such as deep fakes and AI in general.
I plan on doing an entire series on this. Today’s post is meant as a motivation, also to set the record straight. I do not want to scare anyone or spread fear. Rather, in the next few substacks, I will try to make the point that some technological advances will drastically backfire, in part because they are too good at mimicking or blaming someone or something else. I hope this will become clear as I give more specific examples.
The blaming game - the simplicity of an Old Testament account
The following account will be very familiar to some. The reason I am including it here is that, regardless of whether one accepts it as something that literally happened or not, King Solomon’s response put a quick end to a major conflict. Nowadays, people may employ all sorts of technologies such as genetic tests to settle such a dispute. Yet, as I will describe in the following posts, technological advances have become extremely complex - and so has the potential for their misuse. As a result, the blaming game could go on forever without ever being clearly resolved; likewise, it would be rather simple to corrupt the system to disguise true evidence and blame the wrong person.
I love the Old Testament account for its timeless simplicity (from 1 Kings 3 - SOLOMON’S WISDOM):
Then two women who were prostitutes came to the king and stood before him.
The one woman said, “Oh, my lord, this woman and I live in the same house, and I gave birth to a child while she was in the house.
Then on the third day after I gave birth, this woman also gave birth. And we were alone. There was no one else with us in the house; only we two were in the house.
And this woman’s son died in the night, because she lay on him.
And she arose at midnight and took my son from beside me, while your servant slept, and laid him at her breast, and laid her dead son at my breast.
When I rose in the morning to nurse my child, behold, he was dead. But when I looked at him closely in the morning, behold, he was not the child that I had borne.”
But the other woman said, “No, the living child is mine, and the dead child is yours.” The first said, “No, the dead child is yours, and the living child is mine.” Thus they spoke before the king.
Then the king said, “The one says, ‘This is my son that is alive, and your son is dead’; and the other says, ‘No; but your son is dead, and my son is the living one.’”
And the king said, “Bring me a sword.” So a sword was brought before the king.
And the king said, “Divide the living child in two, and give half to the one and half to the other.”
Then the woman whose son was alive said to the king, because her heart yearned for her son, “Oh, my lord, give her the living child, and by no means put him to death.” But the other said, “He shall be neither mine nor yours; divide him.”
Then the king answered and said, “Give the living child to the first woman, and by no means put him to death; she is his mother.”
And all Israel heard of the judgment that the king had rendered, and they stood in awe of the king, because they perceived that the wisdom of God was in him to do justice.
The Epoch Times: “Our elected officials are being secretly controlled”
In a recent video by Epoch TV, Joshua Philipp exposes some serious problems in Washington where, allegedly, elected officials are being baited into compromising situations that are then used to influence them.
Watching the sobering video raises the question as to who it is who tries to divert blame and attention (aka, who is the “woman” who switched the two “sons?”). A fascinating part of the interview goes as follows:
It is certainly plausible that any of these nations could have been behind the scheme. Philipp details related tactics (esp. the use of honey traps) by regimes like the CCP.
But since this is so plausible, it would be even easier for someone else to orchestrate such types of plots.
The Epoch Time video emphasizes this in the following:
Phillip, highlighting the foundational question, asks, “If this is actually the case, why are we not going after those individuals in suits approaching and blackmailing elected leaders?”
I think he has answered this above. Too often, we have been told that it’s that other foreign nation that is to be blamed. History tells us that this may indeed be the case and many will be content with this explanation. But it may not be the whole story, as summarized by Philip:
“There are groups trying to control our elected leaders. Some of these groups are international organizations, some of them are foreign governments, and some of them, according to many allegations … coming from reliable sources… some of them are coming from an organization operating from within our own government.”
Technological advances will foster their abuse - but eventually, lead to their own downfall
Even though blackmailing and extortion have long been a facet of arguably all societies, they are more and more becoming systemic, in large part fostered by online communication and AI technologies. Beyond a doubt, we are going to see much more masquerading, counterfeiting, impersonation, etc., via digital manipulations such as deepfakes.
Passing the buck may seem easier but there is also the potential this may have a boomerang effect.
I find it interesting that Joshua Philipp, in the latter part of the above video believes that, whilst deepfakes will be able to fake all sorts of people and events, this in itself will effectively put an end to it.
His argument is simple: if you can fake anything, then what is there to believe? He reasons that people will simply not believe anything anymore, even if there is purported visual or auditory “evidence.”
How train AI to use it for benign purposes?
As with any other technology, AI can be used for good but also for nefarious purposes. Many believe that it would therefore be essential to categorically train AI systems so they would be better protected against misuse and evil intent.
I have been wondering if in the above Old Testament account, an AI system in place of Solomon, would ever be able to recognize the truth. More generally, how would AI judge if entity A was the true owner of entity B, or if instead it was C who owned B?
There are different types of AI learning models. Conceptually, learning entails everything that an AI program does to enhance its knowledge of a certain issue or topic. Technically, this is realized via different “learning processes” that consist of a collection of input-output pairs for a specific function that are processed via feedback mechanisms to enhance knowledge. The goal is that the AI system would be able to predict the outputs for new inputs.
From a technical/mathematical standpoint, a number of AI learning models have been devised. But how could the response of the true mother in the OT account be adequately captured?
The most basic form of learning (inductive learning) is based on inferring a general rule from datasets of input-output pairs. Yet, when trained on large datasets, it seems pretty obvious that the assertion “A owns B” means that A will defend their position, doubling down in the face of conflict, rather than letting go. Thus, the generalized deduction that AI by necessity will come up with is that of entitlement: if A owns B then they will not let go of this assertion, not even in court.
A type of learning technique that is becoming increasingly popular in modern AI solutions utilizes rewards and punishment to “reinforce” different types of knowledge. Imagine an AI system where entity A, the real owner of B, was to back down as soon as there was any dispute. If this was reinforced as the truth, then an AI system trained this way would expect that anyone who possesses something, when under pressure, would likewise let go of all forms of entitlement. Society would immediately collapse!
The paradox of training AI
The above raises the key issue: how could AI be selectively trained for benign purposes? More generally, how can AI be trained to know the truth? A moment’s reflection reveals major challenges. For example, how could it be trained in line with old and proven maxims such as the following?
The first step to knowing the truth is to recognize one’s ignorance.
Only the humble can perceive the truth.
The proud can never be taught.
Strangely, for nefarious AI applications, it does seem to be the case that AI can be specifically trained for that. As seen from deepfakes and other masquerading techniques, AI can be improved substantially in a way to support its ill-intended use.
Does it therefore mean that innovation and AI necessarily will lead us down the path of “evil?” Paradoxically, I think the opposite will be the case.
In the following post(s), I will dig deeper into this line of thought using specific facets of AI systems as an example. I will look at examples where the argument can be made that some technologies will lead to their downfall. It’s not because they will fail. By contrast, it is because of the very fact that they will do “too well” that people will realize that they cannot trust those systems.
The first example will be AI models that can be trained to be deceptive, and that in a way that is not obvious. They can be benign during testing but behave totally differently once deployed. For more, please look for my next post.