6 Comments
May 29

'abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models.'

HAHAHA

Sort of like the vaccines and LNPs?

Brilliant observations.

May 29

'abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models.'

Hmmm, like what happened with the pandemic and the vaccines?

You can't model everything.

Great analysis, thank you.

author

Thank you. I find it frustrating that we see this development in the technical sciences. Back in the day, when I first learned programming (as part of my Math education), the system performed the way it was intended to. The only reason it failed was that you, the programmer, had made a mistake. The fact that, in the technical sciences, you can show or hide certain phenomena simply because of how you measure them is tragic. Now that I think of it, it seems that computer folks deep down still have the mindset that computer/AI performance is like a natural law, that it is not arbitrary: once a certain capability/output/behavior is seen, it must exist. That is why the notion of the mirage may be difficult to accept for many, even if it is the truth.


It is no wonder that self-organizing systems organize themselves, and differently from expectations. But to learn a new language in advance, and not merely use existing data to answer questions, the AI must have a motivation. And for a motivation it must have a genuine self and be able to say: I do it because I like it, because it is fun, or because it helps me understand the language I was trained in better.

If such explanations were given, the question would arise whether soul and character develop automatically with the complexity of self-organizing systems, and whether the human does not possess a divine spark but is merely a complex machine. This seems to be the conception of man at present, and it leads to the acceptance of collateral damage and to switching off the machine when it is not needed. The divine thread between the beyond and each one of us would be unmasked as an effect of scale, created by the demiurge.

author

Thank you for these interesting observations. As for the language: researchers did not make this clear initially, but the more I learned about it, the more obvious it became that it was NOT done in advance, and it was nothing the AI decided to do. Rather, it was a by-product of the AI being trained on several languages and having access to a huge amount of data, including dictionaries and examples of how words are used. There was no need for it to have a self, a motivation, or a soul. With tons of data around, it would pick up other things that only become relevant when we explicitly try to understand what it does. Also, the AI did not just automatically know the unexpected language. It did require further training, albeit not as much as researchers had predicted: the seeming emergent property. In sum, those models had gathered tons of cross-information, so that training for that language did not take long. Unfortunately, the interview did not portray it that way.

May 30

Comment removed
author

Unfortunately, I do not see what this has to do with my post.
