Nominalist Determinism III: The Architecture of Rationality

In Nominalist Determinism, the scrutinizing focus is the semantic audit. By addressing not only what words mean in the vernacular but also the limits of what words can possibly mean given the constraints of the natural world, we gain a mechanical advantage: the ability to constrain our models within the known habits of matter. Combined with what we know of the physical world—the habits of matter and space, as well as biological and psychological "laws" at higher levels of organization—this audit allows us to collapse abstract concepts into their constituent material substances and processes. It identifies "ghosts" that hold no real meaning in the external world and reveals them as figments of imagination.

The mental models we possess of the external world (aka Nature) are actual physical configurations: interconnected neurons and chemicals (matter) and electromagnetic forces. However, the match of this map to the territory—in the words of Alfred Korzybski—is not perfect. Indeed, it can never be, or it would be the territory. The effectiveness of a system's actions depends on how accurately its mental models match the external world—when the map deviates too much from the territory, it fails to predict the right consequences of its actions—preventing the system from satisfying its objective function. To maintain this fidelity, the system must engage in a procedural habit of updating the map. This is the architecture of rationality.

★ Pragmatic Epistemology

Epistemology has traditionally been a search for absolute certainty—a pursuit that has proven to be a dead end. In Nominalist Determinism, we start from a position of practical utility. To build a coherent model of reality, and to understand the role of rationality, we must be surgical with our definitions:


Fact: A feature of the external world that exists independently of observation. It is a configuration of matter, a material habit in space.

Hypothesis: A proposition regarding the reality of a fact. It is a testable, binary (true/false) prediction about the external world.

Data: Raw, uninterpreted signals captured from the external world (the territory).

Evidence: Data that has been processed and found relevant to a specific hypothesis.

Belief: The internal state representing the probability that a hypothesis is correct.

Model (Theory): A structured framework consisting of a set of interrelated hypotheses.

Knowledge: A label for a belief held with a high degree of certainty (in science typically 95%).

In this framework, there is no functional difference between knowledge and belief; to know something simply means that we have high confidence in a particular belief. In the vernacular, we at times say we know something even when we are not certain at all—for example, that we put the keys on the table, despite being aware that we may have absentmindedly left them in a jacket pocket. It is futile to redefine knowledge to exclude cases with such uncertainty. Instead, we have to adopt the view that the word simply means that we feel confident, based on the available evidence. Based on memory, we (subconsciously) estimate the probability that we put the keys on the table at 80%, leaving a 20% chance that the keys are elsewhere.
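
The keys example can be made concrete in a short sketch. Python here is purely illustrative; the 95% threshold comes from the essay's scientific convention, while the function name and the probabilities are invented for the example:

```python
# Illustrative sketch: "knowledge" as a label for high-confidence belief.
# The 0.95 threshold follows the scientific convention named in the text;
# the hypotheses and probabilities are the hypothetical keys example.

KNOWLEDGE_THRESHOLD = 0.95  # conventional scientific confidence level

def describe_belief(hypothesis: str, probability: float) -> str:
    """Return the vernacular label for a belief of given strength."""
    if probability >= KNOWLEDGE_THRESHOLD:
        return f'We "know" that {hypothesis} (p = {probability:.2f}).'
    return f'We believe that {hypothesis} (p = {probability:.2f}), but do not "know" it.'

print(describe_belief("the keys are on the table", 0.80))
print(describe_belief("the sun will rise tomorrow", 0.999))
```

The point of the sketch is that "knowledge" is not a different kind of mental state, only a threshold applied to the same probability.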

★ The Four Sources of Knowledge

There are exactly four sources on which we base our pragmatic knowledge. If prompted to explain how we know something, these are the only available options in Nominalist Determinism. Sources such as revelation and feeling are voided.

  1. Instinct (ancestral knowledge): Pre-loaded models encoded in DNA.

  2. Experience (observation): Direct sensory data capture.

  3. Testimony (authority): Transmitted data from others.

  4. Inference (reason): The process of using existing data to construct hypotheses and models through deduction, induction, and abduction (per C.S. Peirce).

    1. Deduction: Necessary conclusion based on given premises.

    2. Induction: Probable generalization from repeated observations ("very suggestive statistics").

    3. Abduction: Fitting a model to the facts.
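
Abduction in particular can be given a toy mechanical form: choose the hypothesis under which the observed facts are least surprising. The sketch below models this as a simple maximum-likelihood choice; the hypotheses and probabilities are invented for illustration:

```python
# Sketch: abduction as inference to the best explanation, modeled as a
# maximum-likelihood choice among candidate hypotheses. All hypotheses
# and likelihoods are invented for this example.

facts = ["wet_grass", "wet_street"]

# P(fact | hypothesis), made up for illustration
likelihoods = {
    "it_rained":        {"wet_grass": 0.90, "wet_street": 0.90},
    "sprinkler_ran":    {"wet_grass": 0.80, "wet_street": 0.05},
    "nothing_happened": {"wet_grass": 0.01, "wet_street": 0.01},
}

def abduce(facts, likelihoods):
    """Return the hypothesis that best explains all the facts."""
    def fit(hypothesis):
        score = 1.0
        for fact in facts:
            score *= likelihoods[hypothesis].get(fact, 0.0)
        return score
    return max(likelihoods, key=fit)

print(abduce(facts, likelihoods))  # "it_rained" explains both observations best
```

With only the wet grass observed, the sprinkler would be nearly as good an explanation; the second fact is what makes rain the best fit.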

★ The 3 Intelligénces

While the engine and the output are commonly conflated under the general label of "intelligence," I insist on separating rationality as a distinct procedural habit. We categorize the efficiency of this loop into The 3 Intelligénces:

Rationality: The procedural fidelity of the update protocol. It is the willingness and ability to take in raw data, filter it into evidence, and update beliefs about hypotheses.

Data → Evidence → Beliefs

To successfully update, the system must learn to love being wrong: managing the negative feelings of realizing that past beliefs were faulty and embracing the process of discovery is key to being rational.
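
One way to render the Data → Evidence → Beliefs protocol mechanically is Bayes' rule: each piece of evidence shifts the probability assigned to a hypothesis. The sketch below is illustrative only; the probabilities in the evidence stream are invented:

```python
# Sketch of the rationality protocol: Data -> Evidence -> Beliefs.
# A belief (the probability of a hypothesis H) is revised by Bayes' rule
# each time a datum is filtered into evidence. All numbers are invented.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Update the belief in a hypothesis given one piece of evidence."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.50  # initial belief in hypothesis H
# Each evidence item: (P(evidence | H), P(evidence | not H))
evidence_stream = [(0.9, 0.3), (0.8, 0.4), (0.7, 0.2)]

for p_h, p_not_h in evidence_stream:
    belief = bayes_update(belief, p_h, p_not_h)
    print(f"belief updated to {belief:.3f}")
```

Note that the update is indifferent to whether the new belief is pleasant; a system that skips the update when the result stings is exhibiting exactly the failure the text calls unrationality.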

Abducing Intelligence (Power of Abduction): The specific mode of inference (the fourth source of knowledge) that constructs the most likely explanation (the model/theory) to fit the facts as they are believed to be. We must categorize it as a distinct type of intelligence, since it is conceivable that a system can be rational, dutifully updating its beliefs, and yet be incapable of constructing a theory from those facts.

Facts → Theory

In the Sherlock Holmes canon, Dr. Watson represents a system with high rationality but low abducing intelligence. He observes the same facts as Holmes and is perfectly willing to update his beliefs when proven wrong. However, he remains incapable of constructing the mental model that connects the data points. While Watson captures the experience, he does not possess the power of abduction required to synthesize those facts into a coherent theory of the crime.

In the history of science, the Danish astronomer and astrologer Tycho Brahe serves as the real-world equivalent of the rational witness. He possessed immense experience and rationality, spending decades collecting the most accurate astronomical data of his age and using it to dismantle the illusions of ancient astronomy by proving comets moved through what was previously believed to be solid celestial shells. And yet he lacked the abducing intelligence to construct the elliptical model that actually fit his own observations. It took Johannes Kepler to look at Brahe’s exhaustive facts and abduce the planetary laws that Brahe himself was theoretically blind to. Brahe held the keys to the territory, but he could not draw the map.

Model Match Intelligence (aka understanding): The resulting fidelity of the match of the constructed model (the map) to the external world (the territory). It is the static measure of how well the model corresponds to the territory.

Theory → Prediction → Action

Hark back to Nominalist Determinism and Intelligence for a more thorough analysis of Model Match Intelligence.

★ Types of Failure

Any update to the map involves friction. This is the material resistance encountered when dogma—current neural habits—must be physically overwritten by new evidence. Because the brain is a biological organ, it is subject to limbic volatility—the tendency of the ancient brain to favor existing, etched habits over the material labor of rational updating. 

Irrationality: This well-known failure occurs when the pre-frontal cortex (PFC)—the evolutionarily younger part of the brain—fails to restrain the ancient limbic system: high-arousal states disrupt any of the 3 Intelligénces when emotions run amok. With this emotional perturbation, the system is unable to function intelligently, disabling the rational and/or abducing protocols, or overriding the predictions made by the models. The ancient emotional heuristic is in effect, sanity has broken down, and the system is locked by emotional anchoring.

Unrationality: This is a failure of prioritization within the PFC. As there are two distinct steps in the protocol, there are two ways that dogma—those hitherto accepted beliefs integral to a feeling of identity—can interfere: i) the system receives the data but fails to turn it into evidence, or ii) evidence is generated but not converted into updated beliefs. Either way, the rationality protocol is broken, the update fails to execute, and, starved of new evidence, the model remains unchanged.

★ The Pillars of Rationality

Models—whether housed in biological brains, artificial neural networks, or human institutions—require continuous maintenance as they interact with the external world. High rationality is the prerequisite for a system to acquire the power of prediction necessary for success according to the system’s objective function; conversely, systems with inferior rationality are cursed with models that generate faulty predictions. To preserve predictive accuracy, a system must adhere to the two non-negotiable pillars of rationality:

  • Primacy of observation: Nature is the final authority. Every internal model must defer to direct sensory evidence and material facts, as no amount of logic, dogma, or consensus can override the evidence from the physical world.

  • Willingness to update: The model is a temporary map, not a static truth. For a system to remain functional, it must maintain the mechanical capacity to overwrite its existing neural habits the moment Nature proves them to be faulty predictors.

 Bjørn Østman, Svendborg, April 2026.

Rationalityman.

Also on Substack.

The Variable of Greed

Economics as Human Habit in the Age of the Algorithmic Oligarchs (2020–2030)

Greed was never a universal constant like the speed of light, but a programmable systemic habit. In the old world, the machine was fueled by the drive for relative status, which made wealth the only metric of success. By re-tuning the machine toward collective utility, the nominalists proved that the profit motive was merely a choice. Once the goal became the persistence of the species rather than the growth of the nominal pile, the greed variable was effectively set to zero. The first step was piercing the intellectual shield of 

The Isolation Fallacy
The system of the algorithmic oligarchs failed because they operated as independent extraction-nodes rather than parts of a whole. They predicted that price-fixing would cause a shortage in goods because they refused to simulate the planetary circuit as a single metabolic loop where ownership, distribution, and consumption were reconfigured simultaneously. Economists failed to see (often deliberately) that a machine only finds balance when all its parts are synchronized; the instability of the old world was the direct result of a global system with conflicting internal goals. Greed was the ultimate causal factor. People lived under

The Myth of Scarcity
For generations, the digital ledger—the accounting system used to track debt and value—claimed the world was poor while the warehouses were physically overflowing. The status-addicted possessed decades of surplus textiles and food calories, yet a price wall of artificial costs forbade the hungry from touching the physical atoms. The result was starvation and excess existing side by side. War continued to erupt as an engine of surplus hoarding. But the shortages in material supply the status-addicts feared were nominal fictions: high-level descriptions of mechanisms designed to keep production grinding for the sake of individual profit. By abandoning the quest for individual wealth, we shifted from a culture of constant new production to a system of distributing the vast surplus of goods that already existed. Greed was managed. We eliminated 

The Literal Price-Setter
Price inflation was never an ethereal force of nature, but the aggregate result of human hands updating the ledger. Humans set the prices, not the metaphoric “market forces”. Every price hike was a discrete choice by a price-setter to nullify the financial gains of their neighbor to protect their own wealth and relative status. Those choices were social weapons used to maintain the hierarchy of wealth under the guise of "market conditions". In the new system, we stripped away this shield, revealing that a price only moves if a human—or the algorithm they own—chooses to move it. Greed was decommissioned. The result of this change in philosophy was 

The Democratization of Luxury
What the old world called a shortage in goods was actually the first moment of true material democracy. High-quality goods appeared to be unavailable to the wealthy only because they were no longer reserved exclusively for that tiny tier. Long lines of consumers were the physical manifestation of the entire population finally having access to high-quality resources at the same time. By implementing rationing, we ensured that goods reached every metabolic unit based on need, rather than letting the rich hoard the best of the common wealth of our Earth. Greed was overruled. The final outcome of this transition was

The Systemic Prerequisite
The ultimate lesson is that the new world could not have been built piecemeal within the husk of the old. To end the cycle of inflation and artificial lack, the entire resource-draw—from the extraction of raw materials to the final metabolic distribution—had to be managed as a single integrated circuit. Individual greed could not just be regulated into submission; it was made irrelevant by our current synergistic homeostatic system that prioritizes collective persistence over individual accumulation.

Bjørn Østman, Svendborg, April 2106.

Ocean notion.


Capitalist Ownership and the Illusion of AI Neutrality

The modern discourse surrounding Artificial Intelligence often treats the technology as a "digital mind" with a neutral approach to human knowledge. However, a more rigorous analysis reveals that Large Language Models (LLMs) are definitely not neutral (never mind whether they are minds or not). What they are is high-stakes corporate assets, developed by concentrated centers of capital—aka the oligarchs—and trained on data shaped by centuries of market-liberal hegemony, i.e. a state where free-market principles (prices and distribution are determined by competition between private businesses, theoretically without state intervention, though the superrich rely heavily on government handouts, aka socialism for the rich, market discipline for the rest) are so culturally dominant they are perceived as unquestionable common sense rather than a specific political ideology.

When an LLM provides an economic analysis, it does not "reason" through the merits of a policy; it calculates the most statistically probable sequence of words based on a world owned and described by the victors of the current economic order.

The origin of LLM bias

The fundamental misunderstanding of AI lies in the anthropomorphization of its process. A model does not "understand" the nuances of a wealth tax or the social contract. Technically, it is a system of weights and biases optimized to predict the next "token" in a sequence. This prediction is not an act of logic but an act of statistical modulation. Because the vast majority of the "authoritative" text in an LLM’s training data—ranging from financial news to academic journals—is produced within a capitalist framework, the LLM’s internal map is skewed toward preserving that framework. For the algorithm, the "safest" and most "logical" response is the one that aligns with the status quo, as that path is the most linguistically well-trodden. And the status quo is what the conservative financial aristocracy desires.
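
A toy model makes the point concrete: a bigram counter trained on a deliberately skewed corpus "prefers" whatever continuation dominates its training data, with no reasoning involved. The corpus and tokens below are invented for illustration; a real LLM is vastly more sophisticated, but the statistical principle is the same:

```python
# Sketch: why next-token prediction inherits the statistics of its corpus.
# A toy bigram model trained on a deliberately skewed "training set":
# the most probable continuation simply echoes the dominant framing.
from collections import Counter, defaultdict

corpus = (
    "tax cuts drive growth . tax cuts drive investment . "
    "tax cuts drive growth . wealth taxes drive uncertainty ."
).split()

# Count which token follows which
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(token):
    """Return the statistically most probable next token."""
    return following[token].most_common(1)[0][0]

print(most_likely_next("drive"))  # "growth": the most-repeated framing wins
```

Nothing in the counter evaluates whether the claim is true; it only measures which phrasing appears most often.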

Ownership and indoctrination: the musky Grok 

The myth of the "unbiased AI" was shattered by the public development of xAI’s "Grok." When early iterations of the model produced outputs that deviated from owner Elon Musk’s personal political brand and its fascist predilections, the response was an explicit "correction" of the model’s behavior. This highlights the role of RLHF (Reinforcement Learning from Human Feedback), in which human trainers—hired by the corporation—rank responses. If the trainers consistently reward "market-friendly" or "anti-woke" outputs, the model learns that these are the "correct" answers. This is not indoctrination in the human sense of changing a belief, but a deliberate and nefarious recalibration of a mathematical function to ensure the output remains a faithful representative of the owner Musk’s highly insidious agenda.
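
The mechanics can be caricatured in a few lines: if trainers consistently reward one answer and punish another, the "policy" ends up preferring the rewarded answer regardless of its truth. This is a deliberately minimal sketch, not actual RLHF; the responses, rewards, and function names are invented:

```python
# Sketch: how ranking feedback (RLHF-style) re-tunes which answer a model
# prefers. A toy "policy" over canned responses is nudged toward whatever
# the trainers reward. Responses and reward values are invented.

scores = {"markets self-correct": 0.0, "tax the wealthy": 0.0}

def reinforce(response, reward, learning_rate=1.0):
    """Shift the model's preference toward rewarded responses."""
    scores[response] += learning_rate * reward

# Trainers consistently reward the "market-friendly" answer...
for _ in range(5):
    reinforce("markets self-correct", +1.0)
    reinforce("tax the wealthy", -1.0)

def preferred_answer():
    """Return the response the tuned policy now favors."""
    return max(scores, key=scores.get)

print(preferred_answer())  # "markets self-correct"
```

The tuning is indifferent to evidence; it optimizes for whatever the graders happen to reward.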

The "trickle-down" default

The bias is most visible in the AI’s tendency to fall back on simplified, linear economic models, such as the invalidated supply-side or "trickle-down" economics. For a predictive engine, these theories are "clean" in that they offer a direct, predictable chain of causality: lower taxes lead to more investment, which leads to growth. While the reality of such policies is historically messy and often contradictory, the language used to defend them is highly standardized and pervasive in the training data. Consequently, when asked to analyze a wealth tax, the LLM often functions as a goddamned "propaganda parrot," prioritizing the "certainty" of capital protection over the "uncertainty" of social redistribution, not because it has evaluated the evidence, but because the pro-capital tokens have a higher statistical weight.

Deconstructing the Ghost in the Machine

To treat an AI as a source of objective truth is to ignore the reality of its production. As long as the infrastructure of "intelligence" is owned by the likes of Sam Altman, Elon Musk, and Sundar Pichai, the LLMs they produce will remain programmed to preserve the current economic system. AI literacy requires the user to see past the authoritative "persona" of the machine and recognize it for what it is: a sophisticated, weighted reflection of the power structures that funded its creation. And that it can be challenged. The "logic" of the AI is the preference for the status quo, and the only way to find the truth is to interrogate the machine with a constant awareness of the dirty hand that feeds it.

I am Grook.

Nominalist Determinism II: From Particles to Morality

The Illusion of the Observer

The Copenhagen interpretation of quantum mechanics introduced a ghostly intruder: the observer. It claimed that the "wave function"—a mathematical ledger of possibilities—collapses into reality only when "observed". This is a monumental mind projection fallacy (per E.T. Jaynes). In a nominalist universe, there is no collapse because there are no possibilities; there is a unitary process where everything that happens is the only thing that could have happened given the prior state. What we call "measurement" is merely a high-energy interaction—matter hitting matter. The machine does not stop grinding because a human looks at the dial; the human is simply a smaller part of the machine hitting a different part.

The Anatomy of the "Ghost"

We can identify three ways humans "hallucinate" entities into existence, mistake labels for substances, and populate the vacuum with ghosts:

  • Reification of the Abstract: Treating a relationship as a tangible substance. Example: energy. You can measure kinetic motion, but "energy" itself is not a physical fluid.

  • Causal Displacement: Attributing "power" to a mathematical summary. Example: entropy. Particles just move; "entropy" is the name for the statistical likelihood of their positions, not a force that pushes them.

  • Non-Material Agency: Invoking an entity with no coordinates in the vacuum. The soul is a nominal placeholder for the recursive self-model—a "user interface" that people mistake for a passenger.

In nominalist determinism, a real thing is a persistent material configuration that exists independently of any observer. This includes tangible objects as well as the habits of matter, such as diffusion or organisms. A "ghost" (like the soul) is "unreal" because it is a label that refers to a non-existent material coordinate. There is no evidence for dualism; since matter and its habits explain all observed behavior, invoking a non-material substrate adds zero explanatory power and is therefore discarded as redundant.

Intelligence: The Match-Predict-Act Cycle

As established in the first essay on nominalist determinism¹, intelligence is not a spark, but a structural relationship. It is the match—the high-fidelity alignment—between an internal material model (the brain’s configuration) and the external territory (the world). While this match is the actual intelligence, the prediction part is what directs action. We can only measure this intelligence by way of the resulting actions, which reflect the quality of the internal predictions. If a system is intelligent, its internal "map" allows it to predict the habits of matter and act accordingly to maintain its own material persistence, or to perform whatever the system was designed to do.

Consciousness: Why the Machine Thinks It is "Someone"

If intelligence is modeling the world, why is there this phenomenon we call consciousness? In nominalist determinism, it is the brain modeling its own modeling process. It is a recursive loop. The self is a label for this internal ledger—a human-made record of a physical state. We feel "aware" because our mental model of the world includes a representation of the "modeler" at the center.

Think of a whirlpool. Before water interacts in a specific geometry, there is no "whirlpool". When the interactions reach a certain complexity, the whirlpool appears. It is not a new substance added to the water; it is a label for what the water is doing. Similarly, "experience" is the name for what neurons are doing during recursive self-modeling. It is an emergent abstraction—a high-level name for a low-level material reality.

Consciousness is thus the proximal solution—the immediate structural mechanism that resolves internal conflicts. In complex organisms, localized reflexes often conflict (e.g., "find food" vs. "avoid predator"). By modeling itself as a unified entity, the brain creates a "global ledger" to synchronize these subsystems. This allows the organism to predict its own internal reactions to future events, enabling integrated action that favors the whole over the part. This recursive loop is what we experience as being self-aware.
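
The "global ledger" idea can be sketched as a simple arbitration function: competing subsystems report weighted urges, and one action is committed for the whole organism. The subsystem names and weights below are invented for illustration; real neural arbitration is of course nothing this tidy:

```python
# Sketch: the "global ledger" as an arbiter between conflicting subsystems.
# Each subsystem reports an urge with a weight; the ledger commits the
# whole organism to the single strongest urge. All values are invented.

def arbitrate(urges):
    """Pick one action for the whole organism from competing subsystem urges."""
    action, weight = max(urges.items(), key=lambda item: item[1])
    return action

urges = {
    "find_food":      0.6,  # hunger subsystem
    "avoid_predator": 0.9,  # threat-detection subsystem
    "rest":           0.2,  # fatigue subsystem
}

print(arbitrate(urges))  # the predator outweighs hunger: "avoid_predator"
```

The sketch captures only the synchronization claim: without some such arbiter, the reflexes would issue contradictory commands to the same body.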

Functionalist Nominalism and Identity

Under functionalist nominalism, the "mind" is a label for a function, not a persistent "stuff". Like the Ship of Theseus, if you replace every carbon neuron one-by-one with a functionally identical silicon transistor, the "mind" remains because the material habit persisted without interruption.

Consequently, identity is a functional fiction. In the Star Trek transporter, neither the "original" nor the "copy" is the "true" Riker, because the "original" changes nature every split second as its matter moves. Identity is just a persistent label we apply to a changing material habit. While it is an incredibly useful concept for tracking a person whose material change is negligible from one moment to the next, it remains a fiction.

The Biological Bridge: A Conflict of Evolved Systems

We bridge the gap from silent physics to felt psychology by observing the biological substrate. We are born with no contract and no duty—no one is absolutely obliged to do what anyone demands. However, we are born with a material history: we are the localized result of a billion-year evolutionary process. Morality is not a choice, but a system of causal morality: the recognition that actions have consequences rooted in physics.

These consequences were observed by our ancestors over eons, and these ancestral strategies are encoded as evolved instincts in our genome. They are physical "pre-settings" in our hardware that trigger our basic feelings. Our matter harbors instinctual messages—hatred, disgust, fear, anger, and anxiety—as well as drives for nourishment, resources, relaxation, safety, sex, and social harmony. These are the internal read-outs that tell the organism whether a situation favors its persistence or threatens it with friction.

The Neuroscience of the Actioner

From the perspective of neuroscience, the machine is a battleground between two distinct physical systems. The limbic system is the ancient, fast-responding "actioner" where our instinctual feelings originate. It is the system that ultimately decides which actions to take based on the emotional "net value" of a situation.

The pre-frontal cortex (PFC) is the more recently evolved, rational system that cogitates and modulates the limbic system. The PFC is effectively a passenger trying to grab the wheel; it uses its predictive power to "brake" or dampen limbic urges before they result in action. However, the PFC does not act directly. The "feeling" that remains after this modulation is what makes the final decision. When we “act morally”, we are either witnessing a direct output of our evolved limbic aversions—such as an instinctual recoil from violence—or we are seeing the PFC successfully predict a future collision with reality and modulate a shorter-term limbic urge to avoid it.

Freedom and Individual Sovereignty

In a deterministic world, freedom is defined as the absence of external constraints on the internal calculation. We are born with no responsibility, no duty, and no requirements, because moral obligations are merely human constructs. No human opinion can impose a "must" that isn't reflected in the physical constraints of reality. Our personal morality is simply our instincts and our opinions.

However, we are "free" because choice is simply the name for our own internal matter performing a calculation. There is no contradiction between "no free will"² and "individual choice." From the outside, you are a unitary causal chain; from the inside, you are a machine authoring its own trajectory. To say "I had no choice" is a hard fiction used to deflect causal responsibility. It is an attempt to pretend the calculation never happened. Even under extreme duress, the internal model weighs the friction of compliance against the alternative. Choice is a fruitful concept because it represents the moment the machine acknowledges its own agency.

The Semantic Audit of Obligation

We can use rigorous semantic thinking to free ourselves from these ghostly impositions. By asking what the words mean in the vernacular and what they can possibly mean in a world of particles, we strip the mystical authority from our language. We show that the way we speak is actually a way of lying to ourselves. Terms like must, should, and can't are often used as a semantic shield to hide the reality of our own calculations:

  • "Must" / "Have to" / "Ought to" / "Should": These represent a prediction of how much trouble we would be in. They are internal calculations that failing to act will result in a consequence (internal or social) that the system currently finds intolerable.

  • "Can't": While it can refer to a physical limit of the universe, it is more often used to hide a conflict between desires. By saying "can't," we pretend the choice has been taken out of our hands by an external force. This prevents us from seeing our actual options and obfuscates communication with others.

Once we realize these terms are not external commands but internal predictions of friction, we realize we are always making a choice based on which consequences we can live with.

Justice and the Why of Morality

So why do we invoke morality at all? Unlike many classical philosophers who treat morality as a given, nominalist determinism views it as a system for collective survival. It is a tool used to enable an increase in organization from the individual to the collective, allowing our institutions to function.

Under this framework, "justice" is a method to stop the machine from shaking itself apart. Since we are choiceless in judging, we do not punish because an agent is "evil"; we apply consequences to "obscene criminal acts" to update the predictive models of other machines. Justice is the process of removing or repairing a material system that creates too much friction for the social aggregate to persist.

Conclusion: The Road of the Particles

In a silent, deterministic machine, "meaning" and "purpose" are non-entities. They are labels applied by complex biological systems—or any mind capable of modeling itself—to behaviors that favor their continued existence or design. A genome persists not because it "wants" to, but because it is a material configuration that hasn't hit enough friction to dissolve. We must stop worshipping the "ink" of our emotions and return to the "road" of the particles.

This realization is the only sustainable foundation for a collective that is both effective and non-oppressive. Throughout history, collectivism has often been forcefully imposed under the guise of "duty" or "divine right"—ghostly concepts that eventually trigger the friction of rebellion. A truly stable society must be built on the voluntary recognition of benefit.

We choose to give up some individual freedom to enable collective action not because we must, but because we recognize that solving our collective problems—socioeconomic inequality, war, climate change—creates a world with less friction for everyone. When we replace the "ghosts" of moral obligation with the "math" of mutual persistence, we move from a society of coerced subjects to a society of sovereign individuals who agree to play together because it is the most intelligent path forward.

Bjørn Østman, Strynø, March 2026.


¹ Reference: Nominalist Determinism and Intelligence.
² Free Will and the Epistemic Gap section of the above essay.

Suddenly conscious.