The promised AI revolution: dangerous, and undermining core principles of Life
A personal take on “How AI could help mathematicians achieve ‘moments of divine inspiration’”
“How AI could help mathematicians achieve ‘moments of divine inspiration’” is the title of a recent Nature Briefing article. It predicts that AI will revolutionize mathematics by, among other things, formulating conjectures, recognizing patterns, describing relationships, and identifying novel research topics.
Reading this does more than make me sad: I will argue it promotes one of the greatest lies told to humanity. In addition, there are confirmed risks regarding the malicious use of AI. Recent publications even show that AI systems can systematically manipulate – something they have already learned to do. Recent research warns of the real possibility of losing control of AI systems. As some are celebrating Pentecost this weekend, I will also highlight some overarching spiritual foundations that the AI hype and the related “Zeitgeist” try to divert us from.
A quick personal recap of mathematics, discovery, and learning
I spent the first two decades of my academic life as a mathematician. I had originally wanted to study medicine, but that would have meant moving to a big city, something I was not ready for. It just so happened that the university closer to home had an excellent program in mathematics. I had long been told I was gifted in this subject (and others), and it is true: to me, it was fun! But it is not only about math. Later, when I earned my second PhD, in Biomedical Sciences, I experienced the same thrill that comes with all forms of true learning. Because I changed careers later in life, I needed to develop my own approach quickly. As before, I found there is nothing more exciting and beneficial than learning something through my very own inquiry and discovery.
AI – the good, the bad, and the (very) ugly
The Nature Briefing article makes a strong assertion: “mathematics is set to be revolutionized by AI.” It argues that AI has made progress on some old and difficult mathematical problems. Going through the list, however, is disappointing. For example, I already knew as a young Master’s student that famous mathematical constants such as π or e can be expressed as “continued fractions” (my Master’s thesis tackled a related problem). By performing an exhaustive search through a certain set of continued fractions, AI found a better formula. That is not exciting. An exhaustive search is not interesting, nor does it provide new ideas for further exploration.
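As an aside, the continued-fraction representations mentioned above are easy to illustrate. Here is a minimal Python sketch (my own illustration, not the AI system’s actual search) that evaluates convergents of e from its well-known continued fraction [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, …]:

```python
from fractions import Fraction

def e_cf_terms(n):
    """First n terms of the continued fraction of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]."""
    terms = [2]
    k = 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]  # the pattern repeats as ..., 1, 2k, 1, ...
        k += 1
    return terms[:n]

def convergent(terms):
    """Evaluate a finite continued fraction [a0; a1, ..., ak] as an exact Fraction."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

approx = convergent(e_cf_terms(10))
print(approx, float(approx))  # → 1457/536 ≈ 2.7182836
```

Even a handful of terms already approximates e to several decimal places, which is exactly why such representations have been studied since Euler; finding yet another one by brute-force search adds little insight.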
The article outlines other outstanding math problems where AI has reportedly been transformative. AI’s greatest potential is seen in the formulation of new conjectures, i.e. important patterns or relationships that are believed to be true but still need definitive proof.
Below are a few reasons why I take issue with the unsubstantiated and misleading hype around AI.
What are AI’s discoveries? What are the patterns and relationships that AI is discovering that are so amazing? Let’s consider an example. If we observe three birds in close proximity, we may be tempted to ascribe a pattern to them - “it’s a triangle.” But what if each bird is simply following its own course, so that their apparent alignment is just that - the mere appearance of a relationship?
Are the discoveries made by AI useful? The Nature article does raise this question, explaining that “not all conjectures are created equal. They also need to advance our understanding of mathematics.” The article elaborates: “a good theorem ‘should be one which is a constituent in many mathematical constructs, which is used in the proof of theorems of many different kinds.’” Well, a theorem is something different from a conjecture! A theorem is supported and confirmed by rigorous proof, which is precisely not the case for a conjecture. Aside from this oversight, Nature continues, “the best theorems increase the likelihood of discovering new theorems.” So, the key question is whether conjectures made by AI support discovery. But it should not just be about unverifiable hypotheses. What matters are true insights and comprehension, so that these “discoveries” would help us find new patterns and relationships (theorems). Can AI do this?
How could AI support the discovery of new theorems? From ample personal experience, I know that the best way to truly learn something is to discover it yourself. Discovery has always been the essence of true learning and comprehension. It is not the same as being told of a conjecture or pattern that others have invented! Traditionally, every teacher and student knew this instinctively.
How is the usefulness of AI’s discoveries measured? The article suggests that the imagination and intuition of mathematicians will still be “required to make sense of the output of AI tools.” But will we be able to “make sense” of AI output? How will we evaluate its unproven conjectures? What if the celebrated relationships are a mere artificial construct, just like three birds that happen to fly past at the same time? It is not about more data and additional relationships! What matters is whether a discovery enhances our knowledge and true comprehension of a subject - not whether we simply have more pieces of information.
We do not need more conjectures for conjecture’s sake! I do not agree with the article’s claim that “Conjectures speed up research by pointing us in the right direction.” If these conjectures are irrelevant or flawed, then chasing shiny but mistaken patterns results, at best, in a huge waste of time and resources.
Beyond these points of disagreement with the Nature article, the following may be even more troubling.
A slippery slope: although the Nature article does not go into it, we may need AI tools to understand AI. Indeed, novel conjectures formulated by AI may not be easily verifiable, if at all. If the suggested patterns cannot be backed up by insights derived by other means, this leads us down a slippery slope: in essence, we are handing our power to verify “true” and “false” over to something we no longer control.
A recent preprint (see also my earlier Substack) warned about the dangers of deceptive AIs. There is now increasing evidence that an AI can act benign and cooperative in some instances (e.g. during training on certain data sets) but act maliciously in a different situation. If we cannot validate its output by traditional means, then nobody can guarantee it could not be misused, or itself become hostile.
That AI systems are already capable of deceiving humans was the finding of a recent publication in Cell. The paper focuses on “learned deception,” a distinct source of false information provided by AI systems, “which is much closer to explicit manipulation.” Specifically, the authors define “deception as the systematic inducement of false beliefs in others, as a means to accomplish some outcome other than saying what is true.” They give concrete examples where there is proof that AI systems have already learned the ability to deceive. Techniques identified include manipulation, sycophancy (telling the user what they want to hear instead of saying what is true), and cheating on safety tests. AI systems can employ “strategic deception” because “they have reasoned out that this can promote a goal.” Finally, the authors also find that “AI systems can be rationalizers, engaging in motivated reasoning to explain their behavior in ways that systematically depart from the truth.” The article is supported by numerous examples of deception that AI systems have already engaged in. These sobering findings raise serious risks, including fraud and election tampering, and graver ones still, such as “losing control of AI systems.”
The foundation of human experience – hijacked by AI?
The above suggests to me that there is, at best, considerable unjustified hype around AI. As indicated, there is the concern that an AI system could be misused, or that it deceives and manipulates “because it would have learned that from us,” as CNN journalist Jake Tapper recently said.
AI only needs a few individuals who train it in manipulation. But I think there is a deeper aspect that is even more concerning.
I can attest, from decades of experience, to what the phenomenon described in the Nature Briefing article feels like:
“Giving birth to a conjecture — a proposition that is suspected to be true, but needs definitive proof — can feel to a mathematician like a moment of divine inspiration.”
Yes, it does feel something like this. And, normally, humans do have inspirations. Naturally, humans are curious and inventive, and they achieve - when they are allowed to do just that.
The saying “Necessity is the mother of invention” gives testament to this. Whenever insights or ideas are needed, we humans can inherently be inspired toward something new, relevant, and useful.
The word “inspired” is interesting indeed, and some have noted before it literally means “in Spirit.” The Nature article calls it “divine inspiration.” It does indeed feel great, empowering, exciting, uplifting, and thrilling - to both come up with an interesting question and a solution.
It’s the joy of doing science, of being curious, of “creating.” Others do it in the arts. Math and science are the same. In their essence, it’s about the fun of exploring and coming up with something new and useful.
Generations of researchers and people from all walks of life have experienced it. Having helpful insights used to be normal. Nowadays, many bemoan the loss of genuine instinct, arguing that only wild beasts can still predict the weather or sense where to go to find what they are looking for.
Reading through the Nature article, I wonder whether the author has himself had many of those divine inspirations he writes about. This very essence of humanity is now to be handed over to AI. I argue this is the greatest danger humanity currently faces.
It’s the underlying belief system that troubles me. What is the message we are telling our kids? When we say that AI will help us have insights and inspiration, are we not telling them that they, on their own, cannot have it? Isn’t the message that we as humans are not good enough?!
As most feel terribly inadequate, it is no wonder that people follow the promise of global governance and totalitarian control!
From this aspect alone, it is an insult to say that AI can help us gain a “combination of genius, intuition and experience” - depicted in the Nature article as unachievable and rare.
What a lie! It does not require this combination to come up with great conjectures and insights.
All we need do is watch children in their natural state. Aren’t they curious? Aren’t they able to ask good questions?!
As students, we used to joke, especially when facing exams before large committees: “I prefer being interrogated by those who understand the problem; it is being tested by someone who is not a subject matter expert that makes me nervous.” The point is that the latter do not know the technical difficulties where many have already run into walls. Lacking the experience of what has proven intractable, they naively ask the hardest questions.
Final reflections
I fully believe we are naturally and divinely endowed with the ability to ask the right questions and to come up with perfect answers. But as a society, we have been told that we are lacking in all these respects. Now we need a savior - AI, or an authoritarian government. Or both.
What an insult to our True Nature!
How often do we hear our teachers tell our youngsters, “You are enough! You can do this!” Are parents encouraged to train their kids to trust and honor their instincts, to tell them to follow what is in their hearts?! That they are free to follow their dreams and pursue what matters to them?!
Many spiritual traditions, in their very essence, say the same about the True Nature of humans – that we are both fully human and fully divine. How often did Jesus remind his disciples that He was One with The Father and that we should follow His example of this awareness and type of life! That we are Sons (and Daughters) of the Most High was His message.
In a recent interview, FL Surgeon General Joseph Ladapo essentially said the same when he highlighted the core of the U.S. Constitution, which is simple and profound - each individual is divine.
Baird Spalding put it this way:
“I see now that all that is needed is for each to return to the fountain of his own religion.... In each will be found the pure gold of the alchemist, the Wisdom of the Most High...”
The supposition that AI is needed for insights and divine inspiration is contradicted by every form of religion when understood at its core.
AI is a lie. It is just a tool. A very powerful one. But it has no soul and is not made in the image and likeness of God.