Taking away some complexity comes at a price, and for some people, it’s hard to see that it outweighs the practicality.
"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".
Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability. The edges are essentially models of things impossible to test. In fact, relativity was only recently fully backed up with experimental data.
I think also what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is that there's more race to run.
But they do. Paradigm shifts happen because the new paradigm explains the unexplained and importantly also covers the old model. If prior data is unexplained with a paradigm shift, the shift will never be adopted.
> Perhaps we're truly at the End of Science
Who said that? Just because the core of our current models seems pretty rock steady doesn't mean there's no more science. It simply means that we can mostly expect refinement rather than radical discovery.
There will be sub-paradigm shifts, but there's likely not going to be major "relativity" moments from here on out.
The practical issue is whether there will be enough funds for mere "refining" rather than "paradigm shifts", which I understand as new and "exciting" discoveries. I'm not a scientist, of course; this is just my layman's understanding.
Empirically it seems that paradigm shifts are more driven by deaths and retirement rather than improved fit to the data. Moreover the way that you reconcile old data with the new model can be contestable; it's not like everyone all at once says "oh this new model is clearly a strict superset of the previous one, time to adopt it". With all that said I think one could argue that this stuff is basically noise and that the process still 'trends toward progress' (and I'd agree). But I would say that the scale of noise can also be quite large relative to things a human might experience in their life. I was sort of imagining social-disruption (like a dark-age type regression) as the 'backwards paradigm shift'.
> but there's likely not going to be major "relativity" moments from here on out
I cannot understand how anyone treats this as something that can be objectively concluded; by definition these kinds of radical paradigm shifts are basically unforeseeable up until they happen. I called it the "End of Science" to draw a parallel to "End of History"-type thinking, because both (IMO) take the view that "there will be no more revolutions, only incremental adjustments on an unshakeable core into infinity", which personally feels to me like a 'vibes-based' assessment. It's not even that I disagree with it so much as I feel the statement is basically (and will always be) a pure guess, one which many people have made and been wrong about in the past.
Indeed, Kuhn's own work acknowledged this.
I think important paradigm shifts can often look like this - there's not necessarily a reason to expect them to be instantly optimal. Deep Learning vs 'good old-fashioned AI' is another example of this dichotomy; it took a long time for deep learning to establish itself.
I'm also a little skeptical about the practical value of the bleeding edge of both experimental and theoretical physics. Interesting? Sure.
And the closer you get to physics, the less likely any sort of major paradigm shift will be discovered (though the article focuses pretty heavily on physics which is why I do as well).
But even in those fields, there are core parts that aren't likely to ever see any sort of paradigm shift. For example, in biology, I doubt we'll see a shift away from evolution, as it would be nearly impossible for a new model to also explain everything evolution already explains.
I agree that at the edges you'll possibly see more paradigm shifts and discovery, but those will all be built on top of things that will not see paradigm shifts. For example, biology can't escape things like single-celled organisms made up of atoms and chemical compounds.
But ultimately, what I disagree with in the article is the notion that discovery won't ultimately be a process of hypernormalization. In medicine, we are unlikely to see a new paradigm that isn't germ theory. When it comes to the research, it'll mostly be focused on finding new compounds and delivery mechanisms for treatment rather than finding a new paradigm for how to treat a disease.
The softer sciences are the only place where you might find new paradigms, but that's simply because the data itself is so squishy and poor anyways that it's easy to shift around. There it's less a question of the science and more of the utility of the model (regardless of whether or not it aligns with reality).
Alternatively: there's plenty of mainstream, accepted science that's plain, flat out, provably wrong. Yet, it is against good taste (job security, people's feelings, status quo bias, etc.) to point this out.
Hence, it can actually be tricky to catch wind of, or get a grasp on, such issues to begin with, much less pursue such issues toward meaningful, published, recognized change in understanding (that is to say: paradigm shift).
I'd name some examples, but you wouldn't believe me.
With respect to the article, it seems the current LLMs can (though, obviously, do not necessarily have to) return text that appears to reason (pretty reasonably!) about paradigm shifts, when given the context required and nudged quite forcefully toward particular directions. But, as the article seems to indicate, the LLMs seem to not tend toward finding, investigating, and reporting on paradigm shifts all on their own very much. (But maybe part of that is intrinsic to how they are programmed and/or their context?)
I highly doubt that.
There are a lot of people who think they're proving the mainstream wrong. But more often than not, it's cranks using bad, non-repeated tests. These bad tests are propped up, ironically, by people's feelings and job security more than by a built-up body of evidence.
They also almost always have to ignore the mainstream body of evidence and just say it's wrong and bad because of a conspiracy.
For example, plenty of creationists believe they have irrefutable evidence that evolution is provably wrong. It's usually a few cherry-picked or poorly interpreted results, or sometimes just flat-out lying. And often they simply lie about the existing body of evidence that supports evolution.
Another example is the antivaxx movement. Wakefield and RFK both built careers that made them a lot of money talking about how the mainstream was wrong. Even when the industry adopted some of the recommendations (abandoning Thimerosal), they simply ignored the fact that further data didn't support their claims.
I probably would not. You would probably be wrong.
Gravitational deflection (General relativity) received pretty important confirmation in 1919, only 8 years after Einstein first proposed it.
Time dilation (Special relativity) was experimentally confirmed in 1932.
Can you elaborate on the assertion you made here? In addition to the important points @elbasti made about tests performed approximately a century ago, what does it even mean for a scientific theory to be "fully backed up"? Such theories can be tested and the tests either passed or the theory disproven but it's not possible to _prove_ such a theory. And to some extent we already know that relativity cannot be the final answer because it doesn't mesh well with quantum mechanics (which has been experimentally tested substantially, arguably even more than relativity has).
That would be pretty hopeless for launching satellites and the like.
If I had a nickel for every AI-poisoned "researcher" I'd seen with a preprint full of nonsense buzzwords like "quantum fractal holographic resonance matrix"... well, I wouldn't be rich, but I'd probably at least have enough to buy a coffee.
which contains Heathrow Terminals 1, 2, 3, 4 & 5 on the Piccadilly line. For about 15 seconds I imagined a world where Heathrow has had 5 terminals since 1933, then I read the map itself: "Recreated by Arthurs D". Phew.
Awesome example of improving information conveyance through abstractions though!
Worsen. LLMs discard, lose, and mix data in the statistical "compression" used to build their vector-space model. Over time, successive feedback will be like creating a JPEG from a JPEG that was itself created from another JPEG, over and over through this lossy loop.
Those faster (but worse) results will degrade genuinely valuable data and science, at a rate that will statistically, systematically discard well-done science on a regular basis.
IMHO.
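The JPEG-of-a-JPEG loop described above can be sketched as a toy simulation (pure Python; the quantizer and all numbers here are illustrative, not anything from the article): repeatedly re-"compressing" noisy copies of the previous generation's output both collapses the diversity of the data and drifts it away from the original.

```python
import random

def lossy_compress(samples, levels=16):
    # Quantize each sample to one of `levels` evenly spaced values
    # in [0, 1] -- a stand-in for any lossy encoding step (JPEG,
    # or an LLM's statistical compression of its training corpus).
    step = 1.0 / levels
    return [round(s / step) * step for s in samples]

random.seed(0)
original = [random.random() for _ in range(1000)]

# Each "generation" is built from a slightly noisy copy of the
# previous generation's output, then lossily compressed again --
# the JPEG-of-a-JPEG loop.
generation = original
for _ in range(5):
    noisy = [min(1.0, max(0.0, s + random.gauss(0, 0.05))) for s in generation]
    generation = lossy_compress(noisy)

mean_drift = sum(abs(a - b) for a, b in zip(original, generation)) / len(original)
distinct = len(set(generation))

print(f"distinct values after 5 generations: {distinct}")
print(f"mean drift from the original data: {mean_drift:.3f}")
```

The ~1000 distinct original values collapse to at most 17 quantized ones, and the per-sample error compounds across generations rather than staying at the one-pass quantization floor.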
https://search.worldcat.org/title/369632
The author completely missed the point Borges (and Korzybski) made about the utility of maps. Maps (according to both) are abstractions which allow the user to ignore irrelevant aspects of reality so other, more interesting facets come into sharper focus. This might be why Beck's London Tube map is so well regarded. It allows the user to easily ignore aspects that are not germane to the task of deciding where and when to get on and off the tube.
But is a scientific paradigm like a map? Certainly it is an abstraction, if we take Kuhn's definition. If you're interested, I can recommend both "The Structure of Scientific Revolutions" and "The Essential Tension : Selected Studies in Scientific Tradition and Change" by Kuhn.
https://search.worldcat.org/title/4660423077
https://search.worldcat.org/title/3034084
Calling scientific paradigms maps isn't wrong, per se, but it does create more of a meta-metaphor, and a weak one at that.
Also. No. Maxwell did not replace a patchwork of equations with four short ones. That was Heaviside.
https://en.wikipedia.org/wiki/Oliver_Heaviside
Something we don't mention in polite society these days is that Maxwell proposed electromagnetic waves as propagating through an aether:
https://en.wikisource.org/wiki/A_Treatise_on_Electricity_and...
If you're going to talk about new paradigms, Maxwell is a great example, but his story is not complete without mentioning Heaviside, Michelson and Morley.
Also... I bristle at the phrase "Hypernormal Science." It's introduced without definition or reference. Collins et al. describe it as distinct from (though seemingly related to) "hypernormal" as coined by Yurchak in "Everything Was Forever, Until It Was No More."
https://direct.mit.edu/posc/article-abstract/31/2/262/112751...
https://search.worldcat.org/title/1572419463
Or if you're short on time, you can get an entertaining (though not as enlightening) description from Adam Curtis' 2016 documentary HyperNormalization. You won't come away from it with a better understanding of AI, General Semantics or Popperian falsifiability, but it has a striking visual style and a very good soundtrack. And may lead to a better understanding of "hypernormal science."
And getting back to the Michelson-Morley experiment. The author talks about how their results did not cause the scientific establishment to abandon the concept of luminiferous aether. Certainly there is conservatism in science. Gigging science-monkeys tend to want to see interesting results replicated.
And this was one of the issues with the MM experiment. It took a while to replicate. We're MUCH better at replicating it these days and I would guess that hundreds (maybe thousands) of physics undergrads did this very task last year. But we've had over a century of pedagogical experience w/ this experiment. We know how to structure it to get the results we want. This was not the case in the late 1800s and in fact, several early attempts to replicate the experiment suggested the existence of an aether which was drifting slowly towards Cleveland.
And what does it say that heat flow, fluid flow, diffusion and electrostatics share equations? Does it say there's something fundamental in reality? Or does it say there's something fundamental in the way we model reality?
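For concreteness, the shared structure being gestured at is a standard textbook identification (not something from the article): in the steady state, heat conduction (Fourier), diffusion (Fick), incompressible potential flow, and electrostatics (Gauss) all pair a linear flux law with a conservation law, and so all reduce to the same elliptic equation:

```latex
% A gradient-driven flux plus a conservation law...
\mathbf{J} = -k\,\nabla\phi, \qquad \nabla\cdot\mathbf{J} = 0
% ...yields Laplace's equation in every case:
\nabla^2 \phi = 0
% phi = temperature (heat), concentration (diffusion),
%       velocity potential (fluid flow), electric potential (electrostatics)
```

Which is exactly the question: is that a fact about nature, or about our habit of modeling with gradient-driven fluxes?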
That being said... I think the author has hit upon something here... people are often wary of evidence which contradicts experience, even when that evidence (and not experience) is more correct.
But each of the examples he provides glosses over the process by which new paradigms overrode the old.
I deeply appreciate the author avoiding slavish fealty to fashionable AI trends. He probably could have gone further to describe more of the representational weaknesses of ESM3 and GNoME.
I fear, however, he has missed the point. It's less interesting to describe the messy ways in which AI fails than to describe the messy ways in which humans succeed. The process by which paradigms shift is messy, social and fundamentally human. It often has more to do with qualitative explanations than quantitative science. Science, as a human endeavor, is very much a story-telling exercise.
The process of judgement and resource allocation will still be human for quite a while, but it's quite likely some humans will outsource their responsibility to AI to cut corners.