On the hijacking of the narrative of AI’s usefulness in science
Comment on the Nature article “How AI could maximize research impacts”
I have been busy with deadlines and aging parents. Yet a recent article, seemingly balanced and addressing sensitive issues, deserves a critique. Left unaddressed, it will reinforce a narrative about the benefits of AI in science that is incomplete and omits seemingly forbidden points and considerations.
Source: Nature briefing (emphasis added)
Nature - “AI tools can help universities maximize research impacts”
A recent Nature magazine article makes the strong claim:
“Thus, data and AI tools can help institutions to identify people and ideas that are overlooked, both in a research institution and globally.”
The “Thus” refers to the introduction/motivation of this article. It describes a female researcher who had been very successful, having “published hundreds of papers and acquired tens of millions of dollars in research funding.”
So, what was the problem that required a solution, even by a novel approach such as via AI?
The answer is given succinctly: she was unaware of “market impacts.” This explains the word “impact” used in the title of this article and throughout. Even when the author argues that AI could be utilized so that researchers with “untapped innovation potential” can be “identified,” and that it could help address the “obstacles that hinder technological progress,” the context is entirely devoted to patenting, market impact, and financial considerations.
Indeed, the article goes on to detail what the key goals of improving “research impact” should be.
Research ideas must be more effectively commercialized.
Comment: We see that it is not about the quality of new ideas and how these could further research, learning, and a better UNDERSTANDING of the complexities of the phenomena studied.
“[T]here might be unintended consequences.”
Comment: Note the nature of these unintended consequences described next. One may expect that the issue is about unrecognized risks or known dangers that cannot be mitigated.
A key unintended consequence is highlighted as “flocking to what seem to be the hottest and most fruitful ideas today rather than to those that will help the world most in future [sic].”
Comment: Isn’t commercialization precisely what is seemingly criticized here? Moreover, how can you measure and validate that an idea will be helpful in the future unless you know its true value in the present?
“Many of today’s issues, from pandemics to climate change, are closely linked with scientific progress.”
Comment: That’s a mouthful. It can be interpreted in three ways: (1) that “scientific progress” is associated with (or may even be causing) those issues; (2) that scientific progress is seen as a major, perhaps the major, factor in resolving those issues; or (3) an interplay between (1) and (2). Some may think a perfect example is gain-of-function work to create more dangerous pathogens, associated with the celebrated remedy to “prevent”/”treat” the novel diseases those biological agents are causing.
The article describes key factors that hinder technological progress. The author believes AI will be instrumental in overcoming these:
The gender gap: female researchers often experience “unequal access to education and mentorship, funding disparities, prevailing norms and stereotypes and structural barriers in patenting and commercialization processes.”
A large difference between tenure-track and tenured faculty members: “tenured researchers patent their work at a higher rate.”
Comment: In both cases, the author argues, there is no disparity in the quality of the work or their potential for “impact.” By considering disparities such as these two, the author continues, “data and AI tools can help institutions to identify people and ideas that are overlooked, both in a research institution and globally.”
While I have been in the disadvantaged group on both accounts and can relate to these existing and real disparities, they depict merely the tip of a much larger iceberg. Letting AI loose on those bigger and usually under-appreciated factors will only increase the disparity. For example, major factors that AI cannot resolve, and that can instead be covertly leveraged to worsen those problems, are:
Researchers with a less prestigious affiliation, with no affiliation at all, or with a disadvantaged economic status, home country, or native language: all these factors have led to a well-known pre-selection that those in privileged positions do not want to let go of. Since these are the policymakers, AI will only be instructed to make matters worse. For example, many researchers cannot pay the insanely high publication fees. Who on earth can afford amounts comparable to what some spend on a (new-to-them) car? I am grateful to the journals that, under exceptional circumstances, have granted a full fee waiver. Each case was indeed exceptional, arising only when I managed to correspond with a person who took the time to read my request. In most cases, however, I received absurd responses, such as a 10% reduction. By contrast, individuals retired from prestigious positions do receive a full waiver.
Researchers whose work is deemed conspiratorial or placed in the category of mis-, dis-, or malinformation. It is their work that may actually “help the world most,” but if AI is programmed to ignore them, such AI-hailed success will effectively undermine genuine science and true findings.
Researchers who have taken a public stance against certain policies. No need to elaborate on this. Too many have already lost their jobs, work contracts, and research connections, and have been professionally harmed in numerous ways, simply for exercising their God-given human rights to speak openly and freely and to refuse untested and harmful medical intrusions.
AI is not cheap. It requires substantial funding and support, thereby paving the way for it to be hijacked by unaccountable policymakers and (global) decision-makers. In addition to the above, rather than resolving some of the most serious underlying problems, AI can be expected to further entrench unethical practices, such as:
Orchestrated cover-ups of unwanted truths and conspiracies at the highest level.
Retraction of articles that expose things the public should not know.
Rejection of work that is not aligned with the official narrative.
Censorship of ideas that are not welcome.
The ghost-writing of articles and publications.
Undeclared conflicts of interest (COIs).
The politicization of science, which has hijacked funding as well as the choice of which narratives and doctrines are allowed.
During the pandemic years, it has become clear that much of the politicization of science has been aimed at one specific impact: provoking fear, whether about killer pathogens or about climate change leading to human extinction. Science has been hijacked to get the public to accept false narratives and inhumane policies.
In addition to ignoring the politicization of science and the capture of media, the author purports to have found the solution to an old and difficult problem:
“The dichotomy of basic versus applied research is becoming inadequate.”
The argument is that, for example, “discoveries that aid marketable applications,” or “insights that guide policymaking” are highly impactful, “as evidenced by high citation rates.”
Comment: Shouldn’t “impact” be defined much more broadly than how often something gets cited? What about potentially negative impacts of research, such as a pandemic that disrupts the life of virtually everyone, at all levels (other than the very few who benefit)?
“By engaging more with use-inspired research, scientists can produce insights that both advance basic understanding and address societal needs.”
Comment: Is it because we now have a new term, “use-inspired,” that the long-debated discussion contrasting basic versus applied research is suddenly resolved?
I remember this discussion decades ago, when I was still doing “basic” research in number theory. No, initially there were no applications. We researched to understand, I mean really understand, the ins and outs of large numbers, to comprehend their characteristics and patterns. This is when and how I refined my skill to “grasp” things. Nobody has yet been able to describe how AI is supposed to do just that. The argument that “tomorrow it will be able to do it” has long been recognized as a mere marketing slogan. And, by the way, it was almost accidental that this “basic” research, which initially seemed purely theoretical, suddenly led to tangible “real-world” applications.
From my own experience, I can therefore offer a strong counter-example to the promised “use-inspired” concept as a savior of the basic-versus-applied research debate. One of the oldest problems in number theory is how to recognize whether a large number is a “prime number.” Numerous important insights have been developed by “merely” tackling this problem. Then, suddenly, when information technology and online communication came around, it was recognized that this problem would shape the very underpinnings of what has become known as “public-key cryptography.”
In short, without extensive research on this basic problem in mathematics, the Internet and online communication as we have known them since the mid-80s would never have existed. Since then, researchers have tried to find better mathematical problems; the race is still on. Some argue that the prime-number recognition problem, and those related to it, have been unprecedented, playing a unique role in shaping everything in online communication for decades. While alternative approaches have been developed, they have been shaped by those early insights, which may previously have been belittled as “just” basic research.
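To make concrete what “recognizing a prime” involves, here is a minimal sketch, in Python, of the Miller-Rabin probabilistic primality test, one of the celebrated algorithms that grew out of this “basic” research and that public-key systems still rely on when generating keys. The function name and parameter choices below are mine, for illustration only:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: False means n is certainly composite;
    True means n is prime with error probability below 4**(-rounds)."""
    if n < 2:
        return False
    # Dispose of small primes and their multiples directly.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random base
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True

# Testing a candidate of the size used in public-key cryptography:
candidate = random.getrandbits(1024) | 1  # random odd 1024-bit number
print(is_probable_prime(candidate))
```

Generating an RSA key pair essentially loops over random candidates like this until two of them pass such a test, which is why a fast answer to this “basic” question underpins every encrypted connection.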
Conclusion
The title of the article raises an important question. It then proceeds to indicate the answer:
“Massive data sets storing details of these outputs can be scoured by AI algorithms to better understand how science and technology progress and to identify gaps and bottlenecks that hinder breakthroughs.”
Yet, everything described in the article points to something else. No evidence is provided, and there are no reasons to believe that AI could resolve long-standing questions such as how to make science most “impactful.”
Indeed, the word “impact” is used in a very narrow sense: maximizing monetary return and financial gain.
Even through the narrow lens of commercialization and monetary gains, it is not hard to imagine how AI technology could be misused to benefit some at the expense of many others.
The problems that AI is proposed to resolve do not include the real issues many of us have experienced in recent years. AI will further increase disparity and can easily be misused for propaganda and for spreading mis-, dis-, and malinformation, at the expense of science that is independent, openly debatable, and free to be scrutinized and questioned.
“What will help the world tomorrow” likely has little to do with how heavily funded and patented that research is! One may argue the opposite: scientific impact will be greatest when politics is removed from science as much as possible, so that scientists can do what they have traditionally always done:
Be curious, humble, open-minded, willing to acknowledge unknowns, errors, and gaps, learn from mistakes, and be inspired by the loftiest goals, which, even while seemingly at a remote distance, beckon to be discovered.
I believe this is the essence of true science, to which we are all entitled, and the only way science can flourish and help the world the most. This is emphasized by numerous ancient practices and traditions, as articulated, for example, by Baird T. Spalding:
“[A]ll creative substance is yours to use, pressed out to you in fullest measure.”
Vitalistic biosociology in policy (i.e., humanistic management) is not the solution. It is the starting point for repeating the pattern of the 20th century. If it turns into political monism, the fight between the good, the bad, and the ugly, then eugenic thinking is here again.
Eugenic thinking was always humanistic thinking. That connection is not well known. It is very unfortunate that Germany and Austria are not aware of it; the British, however, are.
There is really nothing new about the phenomenon of vitalism and evolutionary spirituality: transhumanism, the noosphere, super-intelligence, the singularity, ... the Übermensch.
If societal pressure keeps rising, it can lead to barbarism again.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1103847/full
Boosting social capital with behavioral science into human brain capital, as a resource and investment object for economic growth in the 21st century, is deeply frightening.
And they do it.
The only question is: will it be surveillance (algorithmic) capitalism or socialism? Or both: capitalism for the 1%, profiting from the futures, and socialism for the 99%?
Neuroeconomics and behavioral science have already used AI in modelling the pandemic and its measures. In 2018 there was a warning about the loss of dignity and welfare. I think this has happened in many societies since 2020.
The process of self-constitution is the decisive existential task with which every human being is confronted (Korsgaard 2009). However, if our own active decisions are outsourced systematically through excessive delegation to politicians, companies or algorithms that are supposed to nudge us in all situations in life, our personality threatens to fragment. In the end, we no longer know which of our consumption and life decisions we actually made of our own free will and which were dictated to us by external authorities. One serious consequence of this could be a loss of respect for our own self, which Waldron (2014) warns of in his review of Sunstein's work. Waldron warns that too many external decision-making aids can lead to us no longer knowing the value of our own decision-making power. The resulting uncertainty - Waldron speaks of the “loss of dignity” - can lead to a considerable impairment of life satisfaction because we no longer have the feeling that we can lead our lives in a self-determined way (Deci and Ryan 2000). This type of loss of welfare, combined with the question of how to delegate everyday decisions in a meaningful way, has not yet played the role it deserves in the debates on libertarian paternalism in general and nudges in particular.
https://elibrary.duncker-humblot.com/pdfjs/web/viewer.html?file=article/9839/10_3790_vjh_87_1_29_01723907124.pdf