Another attempted takeover of the entire academic system by covert elites? SCITILITY
A short note to raise awareness
This is not one of my typically long posts. I have gotten stuck on several preprints that I am working on simultaneously, and I am also thrilled about a major change in my work/research environment - more on that in another post. This morning, a Nature Briefing article caused me substantial unease, and I thought it worth writing a short post about it.
It’s about a new startup and another AI application, ostensibly meant to help scientific publishers and the scientific community.
The start-up, called Scitility, aims to help publishers spot potentially problematic papers and suspicious work.
Source: https://scitility.com/aboutus
Scitility’s tool ‘Argos’ purports to offer a solution to improve research integrity and combat academic misconduct.
Their catchy headline, mission statement, and the name of the new service on offer are one and the same: “Research Integrity Made Easy”
How do they want to do this? In their own words, they are offering:
“A comprehensive retraction service that empowers researchers, institutions, societies, funders, and publishers to uphold the highest standards of scientific integrity. With daily updates, personalized retraction alerts, and seamless integration into workflows via an API, Argos swiftly identifies retracted articles and flags areas of concern. Fostering a culture of accountability and trust.“
To me, this raises more than just simple questions:
Argos states that it is a “self-funded company.” So someone is funding it, but we are not told who.
In the statement above, the use of the word “empower” is insulting. Science is an art of learning and exchange. It cannot be “empowered” by dictates from unaccountable covert elites who are funding this.
It’s possible that the founders are doing this with good intentions. Yet how do we know that this large-scale operation does what they intend it to do? We have no proof that AI is suitable for this at all.
Which globalists will jump on this idea and push the widespread implementation of such programs without our knowledge?
Trust hinges upon openness and free discussion, not on censorship.
Accountability - to whom and about what?! Have the criteria for “accountability” been independently established by unbiased scientists?
Research integrity is a matter of ethics, not of AI censorship and intimidation!
As described by Nature (doi: https://doi.org/10.1038/d41586-024-03427-w):
“The science-integrity website Argos, which was launched in September by Scitility, a technology firm headquartered in Sparks, Nevada, gives papers a risk score…. A paper categorized as ‘high risk’ might have multiple authors whose other studies have been retracted…”
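Purely to make concrete how crude such a guilt-by-association heuristic can be, here is a minimal sketch of a naive co-author-based “risk score.” This is my own illustrative assumption based on the Nature description, not Argos’s actual method; the author names and retraction counts are hypothetical.

```python
# Illustrative sketch only: a naive co-author-based "risk score" of the kind
# the Nature description hints at. NOT Argos's actual method.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    authors: list[str]


# Hypothetical lookup: number of retracted papers previously attributed to each author.
retraction_counts = {
    "Author A": 3,
    "Author B": 0,
    "Author C": 1,
}


def naive_risk_score(paper: Paper) -> str:
    """Flag a paper based solely on its authors' past retractions."""
    flagged_authors = [a for a in paper.authors if retraction_counts.get(a, 0) > 0]
    if len(flagged_authors) >= 2:
        return "high risk"
    if len(flagged_authors) == 1:
        return "medium risk"
    return "low risk"


paper = Paper("Some new study", ["Author A", "Author B", "Author C"])
print(naive_risk_score(paper))  # "high risk" purely by association, regardless of the paper's own content
```

Note that in a heuristic like this the paper itself is never examined: the label follows entirely from who the co-authors are and what happened to their other work.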
Will this lead to a hopeless downward spiral - once one of your papers is retracted, e.g. for unwanted content, will things just escalate from there?!
How could you contend with an AI to justify yourself? This is a mass operation. How can we ever expect that those affected will receive a fair analysis?
And what if a previous retraction was not justified, even to the point that this is later acknowledged? Given that it is technically extremely difficult to re-train an AI system, how can scientific progress be reflected and errors ever be corrected?
IMO, Argos does not “support the scientific community,” as they claim, even though it is “organized as a Public Benefit Corporation.”