A few days ago, I had one of those conversations that rearranges something inside you. My girlfriend and I were talking about spirituality. About African spiritual traditions, about the taboo around practices the West casually labels witchcraft, about the collective unconscious. She pointed out something that stuck with me: across cultures that had no contact with each other, you find the same archetypes, the same rituals, the same spiritual principles. Not vague similarities, but strikingly specific convergences. Jung spent his career documenting this. He called it the collective unconscious: a shared layer of human experience that surfaces independently across civilizations, not transmitted culturally but arising from something deeper.
What struck me wasn't the content itself, but my instinctive, and ultimately unfounded, skepticism toward it. Faced with this uncertainty, I fell back on empiricism: does humanity's current scientific knowledge allow for these phenomena? Is there empirical data that captures or displays them? But as I tried to reason through this, the limitations of science became very apparent. Science gave me no valid arguments for or against spiritualism, so why did I fall back on science?
I'm a data science student and AI entrepreneur. My instincts are to model, measure, and optimize. And yet here was an entire dimension of human experience, coherent, ancient, shared across civilizations, that my intellectual framework had quietly filed under irrelevant. Not because I had examined it and found it wanting, but because the paradigm I operate in doesn't have a place for it.
That conversation cost me several sleepless yet fruitful nights. What I found was that the case against the categorical exclusion of the spiritual is not made by mystics alone. It is made by mathematicians, physicists, and philosophers of science. And, most unexpectedly, by the very AI systems I work with every day.
I. The Disenchanted World
Modern Western society rests on an assumption so pervasive it's invisible: that reality is, in principle, fully knowable through the scientific method, and that any current gap in understanding is temporary. This is not a scientific conclusion. It is a philosophical position, scientism, that presents itself as objectivity. Max Weber called the result the Entzauberung der Welt: the disenchantment of the world. Nature became a mechanism. Measurable, manipulable, drained of spirit or mystery.
This disenchantment has a double historical root. First, Christianity systematically demonized indigenous spiritual traditions across Europe, Africa, and the Americas, renaming coherent spiritual systems as devil worship. Second, the Enlightenment went further. Descartes split reality into mind and matter. Everything that wasn't empirically verifiable was relegated to superstition. The result is an epistemological hierarchy that still dominates today: science on top, philosophy in the middle, spirituality at the bottom as a relic of a pre-modern past.
What was lost is not trivial. Entire civilizations carried sophisticated worldviews built on principles that the West discarded without examining: animated universes, porous boundaries between life and death, relational ontologies, holistic systems where medicine, ethics and cosmology were inseparable. They were written off as primitive precursors to science, but were actually parallel approaches to the same unanswerable questions.
Jung's claim was much bolder than simply saying that cultures share spiritual themes: these themes arise independently, from within, across populations with no contact. The archetypes of the collective unconscious (the shadow, the hero, the great mother, death and rebirth) surface in African, European, Asian, and Indigenous American traditions with striking specificity. Jung's position was essentially proto-instrumentalist: there are patterns in human experience that are real and recurrent but not mechanistically explicable. That maps directly onto what I will argue about neural networks: distributed patterns that produce accurate outputs without interpretable explanations.
Whether the collective unconscious reflects a metaphysical reality or a deep cognitive architecture is exactly the kind of question that scientism cannot answer but also cannot afford to ignore.
II. Science Against Itself
The irony is that science itself, from within, has been quietly dismantling the foundations of scientism for the better part of a century. The cracks are not in the details. They are in the architecture.
Mathematics proved its own incompleteness. In 1931, Kurt Gödel demonstrated that any sufficiently powerful formal system contains true statements that cannot be proven within that system. There are always truths that escape the system's grasp. No framework of knowledge, however powerful, can be complete. Gödel himself was a Platonist; he believed mathematical truths exist in a non-physical reality and are discovered, not invented. A strictly rational position that simultaneously transcends materialism.
Physics discovered fundamental indeterminacy. Heisenberg's uncertainty principle is not a measurement problem but a property of nature itself. The two best theories in physics, general relativity and quantum mechanics, are mathematically incompatible. About 95% of the universe consists of dark matter and dark energy that we barely understand. This is humbling, and it suggests we are barely scratching the surface. Our scientific description of reality is spectacularly successful, yet spectacularly incomplete.
Philosophy of science showed that the method itself is historical and contingent. Thomas Kuhn demonstrated that science does not accumulate linearly but operates in paradigms that periodically collapse and are replaced. What counts as "rational" is itself paradigm-dependent. Paul Feyerabend went further: there is no universal scientific method, and the claim that science is the only valid source of knowledge is itself an unproven philosophical position.
Consciousness remains unexplained. David Chalmers formulated the hard problem of consciousness: why there is subjective experience at all, why it feels like something to be you. Neuroscience has not come remotely close to solving it. Thomas Nagel argued in Mind and Cosmos that the materialist neo-Darwinian worldview is "almost certainly false" because it cannot account for consciousness, objective moral truths, or the capacity of reason itself. These are not marginal figures. This is mainstream analytic philosophy.
The collective conclusion of these thinkers is not that spirituality is true. The conclusion is that the categorical exclusion of the spiritual is not rationally grounded. The intellectually honest position is a form of epistemological humility: recognizing that reality may be larger than what our current instruments, or perhaps any instruments, can capture.
One of my best friends studies physics and will be pursuing a PhD in particle physics in Tokyo. He put it simply: our theories are approximation models, not descriptions of reality. Richard Feynman said something similar about quantum mechanics: we can calculate with it, but nobody understands what it means.
III. The Tipping Point: Neural Networks and the Expressiveness-Interpretability Tradeoff
This is where the argument becomes contemporary, and where I believe the synthesis is new.
The history of scientific methods can be read as an expressiveness-interpretability tradeoff. Each generation of tools approximates reality more accurately but becomes harder to understand. Algebra and classical mathematics are maximally interpretable but limited in reach. Differential equations extend this to dynamic systems but quickly become analytically unsolvable. Probability theory adds uncertainty as a fundamental element. Chaos theory acknowledges that deterministic systems can become unpredictable. Quantum mechanics introduces indeterminacy at the deepest level.
Deep learning and neural networks are the tipping point where this tradeoff becomes explicit and undeniable.
AlphaFold predicts protein structures more accurately than decades of experimental work. GenCast forecasts weather better than the best physics-based simulations. These are not merely incremental improvements but fundamental leaps. Yet when you ask why a protein folds a certain way, or why it will rain tomorrow, the neural network gives you no answer. There is no equation you can read, no mechanism you can visualize. There are millions of weights forming a pattern that works but that no human can interpret. The model approximates reality better than our best theories, yet it explains nothing.
This is a paradigm shift in the Kuhnian sense. The classical scientific promise since the Enlightenment was: predict and understand. Newton didn't just predict that the apple falls, he also explained why. Einstein didn't just predict that light bends around mass, but gave you an intuitive picture of spacetime as a flexible fabric. Prediction and understanding were two sides of the same coin.
Neural networks break that coin in half. The best approximation of reality we can produce is fundamentally uninterpretable. Prediction and understanding are decoupled.
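The decoupling is visible even at toy scale. The sketch below (my illustration, not from the essay: pure Python, no ML libraries, with an arbitrary architecture and arbitrary hyperparameters) trains a tiny network by gradient descent on XOR, the simplest function no linear rule can express. The loss falls and the predictions improve, yet what the network has "learned" is just a handful of numbers with no readable rule inside:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

# XOR truth table: the simplest function no linear rule can express
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 4  # hidden units: an arbitrary illustrative choice

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# random initial weights for a 2 -> H -> 1 network
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

initial_loss = mse()
lr = 1.0
for _ in range(20000):  # plain stochastic gradient descent
    for x, t in DATA:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)  # gradient at the output
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = mse()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
print("predictions:", [round(forward(x)[1]) for x, _ in DATA])
print("output weights:", [round(w, 2) for w in w2])  # just numbers, no rule
```

Scale those dozen weights up to the billions in AlphaFold-class models and the point holds a fortiori: the artifact predicts, but it does not explain.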
Demis Hassabis, the CEO of Google DeepMind, articulated this in his conversation with Lex Fridman: neural network-based models can approximate reality more accurately than classical approaches, but they come with an inherent opacity. We are entering an era where our most powerful tools for understanding the world are themselves beyond our understanding.
The promise of Explainable AI (XAI) to restore interpretability is, I believe, a rearguard action: an attempt to pull the new paradigm back into the old one, for two reasons. First, neural networks derive their power from distributed representations that cannot be translated into human conceptual categories without information loss, just as human-fathomable models of physical reality are at best approximations. Second, as deep learning scales in complexity, XAI will structurally lag behind. Expressiveness outruns interpretability.
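A toy version of that information loss (again my illustration, not a claim about any specific XAI method): force XOR into the most interpretable model class there is, a linear rule with one weight per feature. The closed-form least-squares fit, computed below via the normal equations, comes out flat at 0.5 everywhere, because the essential interaction between the inputs has no expression in that vocabulary:

```python
# XOR data with an intercept column: rows are (1, x1, x2)
X = [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
y = [0, 1, 1, 0]

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * 3
    for r in (2, 1, 0):
        beta[r] = (M[r][3] - sum(M[r][c] * beta[c]
                                 for c in range(r + 1, 3))) / M[r][r]
    return beta

# normal equations: (X^T X) beta = X^T y
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
beta = solve3(XtX, Xty)

preds = [sum(b * f for b, f in zip(beta, row)) for row in X]
print("coefficients:", [round(b, 6) for b in beta])   # -> [0.5, 0.0, 0.0]
print("predictions:", [round(p, 6) for p in preds])   # -> [0.5, 0.5, 0.5, 0.5]
```

Real attribution methods are of course far more sophisticated than a single global linear fit, but the structural problem is the same: a summary in human categories can only retain what those categories can express.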
There may even be a Gödelian limit at play: the system that best approximates reality cannot be fully understood by the system that produced it.
IV. The Fifth Decentering
What does this mean for how we see ourselves?
The old story was Promethean: humanity seizes reality, dissects it, understands it, masters it. Knowledge is power. Nature has no secrets we will not eventually penetrate. The new story is fundamentally humbler: we build instruments that approximate reality better than we will ever understand.
This fits into a sequence of decenterings that have progressively dismantled the human claim to exceptionalism. Copernicus decentered the Earth. Darwin decentered the human species biologically. Freud decentered the conscious ego. Luciano Floridi calls AI the fourth revolution. But the argument here goes further: AI does not just decenter humans as knowledge producers, it redefines the nature of knowledge itself, from interpretable explanation to opaque approximation.
Alfred North Whitehead warned against the fallacy of misplaced concreteness: confusing your abstractions with reality itself. Max Weber distinguished instrumental rationality (which tool works best?) from substantive rationality (what is true?). Despite its rhetoric of pursuing truth, science is in practice becoming ever more instrumentally rational. We are hitting the limits of direct human understanding, and to advance we are forced to optimize our tools rather than our understanding. Stanisław Lem described this prophetically in His Master's Voice: humanity receives a signal from space, extracts useful technology from it, but never understands what the message says. We are approaching that condition in our relationship with our own creations.
V. The Vacuum
Let me be precise about the societal implication, because this is where the argument is most easily misread.
The claim is not that the new paradigm actively creates space for spirituality. The claim is that it undermines the intellectual foundation of scientism. That is a negative movement — it removes something. And what happens in the resulting vacuum is an empirical question, not a logical necessity.
There are two plausible scenarios. The first is nihilistic pragmatism: a society that says "we don't understand it, it works, so optimize and don't ask questions." Heidegger feared this with his concept of Gestell — the technological drive to reduce all of reality to exploitable resource. Much of Silicon Valley already operates in this mode. The second is what you might call an existential or spiritual renaissance: in the vacuum left by scientism, other meaning-making frameworks rise.
The historical and sociological evidence tilts toward the second scenario, at least partially. Peter Berger, one of the most important sociologists of religion of the 20th century, retracted his own secularization thesis and argued in The Desecularization of the World that modernization does not lead to the disappearance of religion but to its restructuring. Charles Taylor showed in A Secular Age that secularization transforms belief from a default into a choice within a marketplace of meaning-making frameworks. In that marketplace, scientism has long held a dominant position by claiming it is not a framework at all but simply the truth. Once that claim collapses, it becomes just one option among others, and not necessarily the most attractive one, given that it offers notoriously little in the way of meaning, comfort, and community.
Historically, periods of great technological and epistemological upheaval coincide with spiritual surges. The Industrial Revolution produced spiritism and theosophy. The atomic bomb produced the New Age movement. The AI revolution may produce a similar wave, but with one crucial difference: the intellectual underpinning comes this time not from outside technology as a reaction against it, but from within technology as a consequence of it. That makes the position much harder to dismiss as irrational or nostalgic.
Camus' absurdism offers one more lens. The absurd is the tension between the human need for meaning and a reality that refuses to provide it. We build instruments that approximate reality better than we can understand, and we must live with that tension. The question is whether humanity can sustain the absurdist position. Historically, the answer is no. Societies reach for meaning-making frameworks. Given that the previous dominant framework, scientism, has been undermined by its own most powerful creations, the direction of that reach may well be toward the spiritual and the philosophical rather than the purely technical.
VI. The Honest Position
None of this proves that spirits are real, that ancestors intervene, or that the collective unconscious has a metaphysical basis. That is not the argument. The argument is that the confident materialist position "there is nothing but matter and energy, and everything is in principle scientifically explicable" is not the product of rigorous thinking. It is a paradigm masquerading as a conclusion.
Mathematics proved its own incompleteness. Physics discovered that its two best theories contradict each other and that 95% of the universe is dark. Philosophy of science showed that what counts as rational is historically contingent. The philosophy of mind demonstrated that the most basic feature of our existence — that we are conscious — is unexplained by materialism. And now, the most powerful instruments we have ever built approximate reality better than we can understand, completing a long arc from the Enlightenment's promise of total comprehension to the contemporary reality of productive opacity.
I predict that the tools that approximate, compute, and shape tomorrow's reality will only grow more accurate and less understandable. The expressiveness-interpretability tradeoff that runs through the history of scientific tools has reached a tipping point. Deep learning decouples prediction from understanding. XAI's attempt to restore interpretability is a rearguard action from the previous paradigm. The human species is building systems that surpass its own comprehension, and must now reckon with what that means.
What it means, I think, is this: the age of scientism, the age in which the only respectable relationship to reality was one of measurement, explanation, and control, is ending. Not because science failed, but because science succeeded so spectacularly that it outran its own philosophical foundations. What comes next is genuinely open. It could be nihilistic pragmatism. It could be a deeper, more honest engagement with the dimensions of reality that the old paradigm excluded.
That late-night conversation about African spirituality, the taboo of witchcraft, the strange universality of spiritual intuitions across cultures — it wasn't a departure from my world of data science and AI. It was, I now think, the other end of the same thread. The tools I build every day are quietly making the case that the world is stranger, deeper, and less comprehensible than the Enlightenment promised. The honest response is not to double down on a paradigm that is cracking, but to sit with the mystery, and to take seriously the possibility that those who have been sitting with it for millennia might know something we forgot.