My advisor used to say that the most important properties of any material are the ones that emerge at scale. A single atom of carbon is unremarkable. Arrange 10²³ of them in a specific lattice and you get diamond — or graphite, or graphene, each with radically different properties. The arrangement is everything.
I spent years at the nanoscale, simulating how quantum interactions between atoms produce the macroscale properties of materials — hardness, conductivity, thermal behavior. That work taught me something that has followed me from the lab into every AI conversation I have had since: the substrate determines everything. You cannot understand the system without understanding what it runs on.
This is why I cannot stop thinking about neuromorphic computing.
The Problem With How We Talk About AI Hardware
When enterprise leaders talk about AI infrastructure, the conversation almost always collapses into a single question: how much compute? How many GPUs? Which cloud provider? H100s or B200s?
This framing is understandable. It is also dangerously incomplete.
It treats compute as a commodity — interchangeable, infinitely scalable, and controlled by whoever has the deepest pockets. It ignores the physics. And in AI, as in materials science, the physics always wins eventually.
Here is the number almost nobody in enterprise AI is discussing: on sparse, event-driven workloads, a spiking neural network running on Intel's Loihi 2 neuromorphic chip can consume up to 1,000 times less energy than a GPU performing equivalent inference. Not 10 percent less. Not twice as efficient. One thousand times.
That is not an incremental improvement. That is a phase transition — the kind of discontinuity that, in materials science, signals that you are working with a fundamentally different structure, not just a better version of the same one.
What Biology Figured Out 500 Million Years Ago
To understand why neuromorphic computing matters, you need to go back — not to the 1980s when Caltech's Carver Mead coined the term, but much further. To the Cambrian explosion, when the first complex nervous systems appeared in the fossil record.
The biological brain did not evolve to be fast. It evolved to be efficient. The human brain runs on approximately 20 watts — less than a dim light bulb — while outperforming any silicon architecture ever built on tasks requiring flexible, contextual reasoning. Evolution had no access to data centers. It had to solve intelligence under severe energy constraints, and it arrived at a radically different architecture than the one we built into GPUs.
The key insight is the spike. Biological neurons do not fire continuously. They fire only when their input crosses a threshold — discrete, asynchronous electrical events separated by silence. Most neurons are silent most of the time. Computation is event-driven, sparse, and massively parallel. Information is encoded not in the magnitude of a signal but in the timing of spikes relative to each other.
This is the architecture neuromorphic computing attempts to replicate in silicon. Spiking Neural Networks — SNNs — process information through discrete spike events rather than the continuous activation functions used in standard deep learning. Because a neuron draws significant power only when it spikes, and because most neurons are silent most of the time, the overall energy draw drops dramatically. Intel's Loihi 2 supports up to a million neurons and 120 million synapses on a single die. Its Hala Point system scales to billions. IBM's TrueNorth, BrainChip's Akida, and SpiNNaker 2 (begun at Manchester, now developed with TU Dresden) are pushing the same frontier.
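To make the contrast with continuous activations concrete, here is a minimal leaky integrate-and-fire neuron in plain Python. It is an illustrative sketch of the event-driven principle only, not the programming model of Loihi 2 or any other chip; the threshold, leak, and input values are arbitrary.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron. The membrane potential
    integrates input and leaks over time; the neuron emits a discrete
    spike only when the potential crosses threshold, then resets."""
    v = 0.0           # membrane potential
    spike_times = []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t         # leaky integration
        if v >= threshold:
            spike_times.append(t)  # discrete, asynchronous event
            v = 0.0                # reset after firing
    return spike_times

# Sparse input: rare events, long silences. The neuron does almost
# nothing most of the time, which is the property neuromorphic
# hardware exploits to save energy.
rng = np.random.default_rng(0)
current = np.where(rng.random(200) < 0.05, 1.2, 0.0)
print(lif_neuron(current))  # the information lives in *when* spikes occur
```

The output is a list of spike times, not a vector of activations: silence costs nothing, and timing carries the signal.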
My materials science background gave me a particular lens on this. The brain's efficiency is not just an algorithmic achievement — it is a materials achievement. Myelin sheaths that insulate axons, ion channels that gate with molecular precision, synaptic vesicles that release neurotransmitters in response to calcium influx. The substrate is the computation. You cannot separate the two.
When I hear enterprise leaders say "we will just add more GPUs," I hear the same category error I used to hear from colleagues who thought you could model quantum effects with classical force fields. You cannot get there from here by scaling the wrong architecture.
Three Things Neuromorphic Changes — That Nobody Is Governing
The hardware story is well-covered in technical circles. What is almost entirely absent from the conversation is what neuromorphic computing means for governance. And here is where this materials scientist turned AI governance practitioner needs to say something uncomfortable.
First: it breaks the latency excuse.
One of the primary arguments for centralized cloud AI is latency — real-time inference requires data center proximity and scale. Neuromorphic chips shatter this argument. Autonomous vehicles are beginning to use neuromorphic vision processors for microsecond-level response times that would be impossible with a cloud round-trip. Industrial robots use them for adaptive, reflex-like control. Edge IoT devices achieve multi-week battery life through event-driven inference.
When inference moves to the edge — embedded in physical environments, running on ambient power — the governance frameworks built on the assumption of centralized control stop working. Who audits a decision made by a chip running on a solar cell in a rural Kenyan clinic? What does regulatory oversight look like when the AI has no cloud connection to monitor?
Second: it creates a new sovereignty calculus.
This is the part that connects neuromorphic computing directly to the Global Majority.
Today's AI infrastructure is profoundly centralized. Your AI strategy runs through a handful of cloud providers — AWS, Azure, GCP — headquartered in the United States, subject to US export controls, interruptible by US policy decisions. This is not hypothetical: the US has already used export controls to restrict access to advanced AI chips for specific countries. The infrastructure layer is a geopolitical layer.
Neuromorphic chips running at the edge change this calculus fundamentally. AI that runs locally, on low-power hardware, without requiring a round-trip to a foreign data center, is AI that cannot be switched off by a foreign policy decision. It is AI that processes data where the data is generated, without routing it through servers in another jurisdiction. It is AI that a government, a hospital, a university, or a farmer can actually own in a meaningful sense.
I call this edge sovereignty: the capacity of an organization or nation to run AI systems independently of centralized foreign infrastructure. It is not just a technical property — it is a governance property. And neuromorphic hardware is what makes it physically possible.
Third: it introduces a learning architecture that has no supervisor.
Standard deep learning uses backpropagation — a global error signal that adjusts weights throughout the network based on centralized supervision. Someone decides the loss function. Someone curates the training data. The alignment mechanism is explicit and traceable.
Neuromorphic SNNs use a fundamentally different learning rule: Spike-Timing-Dependent Plasticity (STDP). Synaptic weights update based on the relative timing of pre- and post-synaptic spikes. This is local, unsupervised, decentralized learning. No global loss function. No central supervisor. Each synapse learns from local information only.
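For readers who want the mechanism concrete, below is the textbook pair-based form of the STDP rule in Python. The learning rates and time constant are illustrative defaults, not the configuration of any particular chip. Notice what is absent: no loss function, no gradient, no global signal of any kind.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: the weight change depends only on the relative
    timing of one pre- and one post-synaptic spike. Purely local
    information; no global loss, no supervisor. Parameters illustrative."""
    dt = t_post - t_pre
    if dt > 0:
        # pre fired before post: the synapse plausibly helped cause
        # the spike, so strengthen it (potentiation)
        return w + a_plus * math.exp(-dt / tau)
    # post fired before (or with) pre: weaken it (depression)
    return w - a_minus * math.exp(dt / tau)

w = 0.5
print(stdp_update(w, t_pre=10.0, t_post=15.0))  # pre -> post: weight grows
print(stdp_update(w, t_pre=15.0, t_post=10.0))  # post -> pre: weight shrinks
```

Every synapse in the network runs this rule independently and asynchronously, which is precisely why there is no single place for an auditor to look.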
This is how the brain adapts. It is also, from a governance perspective, a system that modifies itself through interaction in ways that may not be fully traceable or auditable. The alignment frameworks built for supervised learning — RLHF, fine-tuning, model cards — do not translate to systems that learn this way.
We have a narrow window in which to build governance frameworks for these systems: the period before neuromorphic hardware reaches enterprise deployment at scale. That window is closing faster than most people realize.
The Materials Gap Nobody Is Measuring
There is a concept in technology transfer that I think about constantly in this context: the valley of death — the gap between a working laboratory prototype and a commercially deployable system. For neuromorphic computing, the valley of death is not primarily algorithmic. It is materials.
Training neuromorphic systems currently requires techniques that have not scaled the way transformer training scaled. The programming models are different. The toolchains are immature. The talent pool is thin. And the physical manufacturing of neuromorphic chips — the memristors, the phase-change materials, the spintronic devices that some researchers are pursuing — requires materials precision that is orders of magnitude more demanding than standard CMOS fabrication.
I spent my PhD working on exactly these kinds of materials challenges — using high-performance computing to simulate how material properties emerge from quantum-level interactions. That work gave me a deep respect for how long it takes to go from "this works in simulation" to "this works in a fab."
The organizations that will lead on neuromorphic AI are not necessarily the ones with the best algorithms. They are the ones that invest now in the materials science, the manufacturing partnerships, and the governance frameworks — before the technology reaches the deployment inflection point that transforms it from a laboratory curiosity into critical infrastructure.
What This Means for You — Right Now
Neuromorphic computing is not a 2030 problem. Intel's Hala Point is deployed. BrainChip's Akida is in production edge devices today. The transition is not coming — it has started.
Here is what enterprise leaders and policymakers need to be doing now:
Map your inference workloads against edge viability. Not every AI task needs a data center. Anomaly detection, sensor processing, real-time classification, predictive maintenance — these are workloads that neuromorphic edge deployment can handle today, with orders-of-magnitude better energy efficiency.
Start the governance conversation before the deployment. If you deploy systems that adapt through local learning rules without centralized supervision, you need audit mechanisms designed for that architecture — not ones ported from supervised learning. This is governance work that needs to happen in parallel with technical evaluation, not after deployment reveals unexpected behavior.
Take edge sovereignty seriously as a strategic asset. For organizations in the Global Majority — governments, universities, hospitals, development organizations — neuromorphic hardware is not just a cost and efficiency story. It is a strategic independence story. The ability to run AI locally, on data generated locally, without routing it through foreign infrastructure, is a form of sovereignty worth investing in deliberately.
The substrate determines everything. Evolution proved it over 500 million years. The question is whether we will build our AI governance frameworks with enough respect for the physics — or whether we will keep treating infrastructure as a commodity until the physics makes the decision for us.
Key Concepts
Edge sovereignty — The capacity of an organization or nation to run AI systems independently of centralized foreign infrastructure. Made possible by low-power neuromorphic hardware that can perform inference locally, without cloud connectivity. A governance property as much as a technical one. Introduced in this context by Dr. Jean-Leah Njoroge.
Spike-Timing-Dependent Plasticity (STDP) — A local, unsupervised learning rule used in spiking neural networks in which synaptic weights update based on the relative timing of pre- and post-synaptic spikes. Requires governance frameworks fundamentally different from supervised learning — there is no global loss function and no central supervisor.
The materials gap — The distance between neuromorphic architectures that work in simulation or laboratory conditions and ones that can be manufactured at scale and deployed in production environments. Determined primarily by advances in materials science, not algorithms.
About the Author
Dr. Jean-Leah Njoroge is an Engineer and AI Systems Architect who writes about AI governance. She built production AI systems at Dell and Lowe’s, worked on catalyst regeneration at Caterpillar, evaluates AI technologies for patent eligibility and commercial viability in university technology transfer, and runs AI54 Lab, a GPU training platform for sovereign AI deployment. Her named frameworks — edge sovereignty, governance debt, the 7 AI Categories system, and the CLEAR framework — originate from engineering observation, not policy commentary. She is recognized among VentureBeat’s leading women in AI and publishes in Business Daily Africa.
