Overview

Human knowledge operates through models.

Models compress reality into tractable structures that allow prediction, explanation, and action under constraint.

However, models can drift into ontological overreach when they claim to fully capture reality.

The concept of ontological ratios provides a way to understand how much of reality a model plausibly captures within a defined hypothesis space.

This concept serves as a safeguard against ontological absolutism while preserving the usefulness of abstraction.


Ontology as a Sliding Scale

Ontology is often treated as binary:

true ontology vs false ontology

In practice, ontology operates on a continuum of coverage.

Models capture only portions of reality.

This can be conceptualized as an ontological coverage ratio.

Ontological Coverage Ratio = captured explanatory structure / relevant explanatory structure within a defined hypothesis space

This ratio cannot be measured precisely.

Instead, it functions as an epistemic orientation tool.
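As an orientation tool, the ratio can be sketched as a toy computation. The hypothesis labels below, and the modelling of "explanatory structure" as a set, are illustrative assumptions, not part of the text:

```python
# Toy sketch of an ontological coverage ratio (illustrative only):
# explanatory structure is modelled as a set of labelled hypotheses,
# and the ratio is the fraction of the relevant set a model captures.
def coverage_ratio(captured: set, relevant: set) -> float:
    """Captured explanatory structure / relevant explanatory structure."""
    if not relevant:
        raise ValueError("the hypothesis space must be non-empty")
    return len(captured & relevant) / len(relevant)

relevant = {"h1", "h2", "h3", "h4"}  # a defined hypothesis space
model = {"h1", "h3"}                 # what one model captures

print(coverage_ratio(model, relevant))  # 0.5
```

The point of the sketch is the denominator: the ratio is only defined relative to a chosen hypothesis space, never to reality as such.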

It reminds us that models are partial: they capture only a portion of the relevant explanatory structure, never the whole.


The Impossible 100 Percent Ontology

A model reaching complete ontological capture would require complete knowledge of reality: every relevant variable, every interaction among those variables, and every consequence of those interactions.

Even in deterministic universes, combinatorics and computational limits prevent total capture.

Therefore:

100% ontology is not reachable for bounded agents.


The Opposite Extreme: Zero Ontology

The opposite extreme would be the claim that no knowledge is possible.

This would imply that no prediction or explanation could ever succeed.

However, empirical science demonstrates that partial prediction and explanation are possible.

Human knowledge therefore exists between two extremes:

0% ontology ← partial coverage → 100% ontology

All models occupy positions within this range.


Hypothesis Spaces and Ontological Coverage

Ontological ratios become more meaningful when evaluated relative to a defined hypothesis space.

A hypothesis space contains the set of explanatory variables currently considered relevant to a domain.

Example:

H = {h₁, h₂, h₃, h₄ …}

Each hypothesis explains part of observable reality.

Models can be evaluated by how much of the hypothesis space they account for.

However, the hypothesis space itself remains open: new explanatory variables can always be added.

Therefore ontological closure remains impossible.
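The openness of the hypothesis space can be illustrated with a small sketch; the hypothesis names and coverage values are invented for the example:

```python
# Illustrative only: a hypothesis space that grows, lowering every
# model's coverage ratio and so preventing ontological closure.
hypothesis_space = {"h1", "h2", "h3", "h4"}

def coverage(model_hypotheses: set, space: set) -> float:
    return len(model_hypotheses & space) / len(space)

model = {"h1", "h2", "h3"}
before = coverage(model, hypothesis_space)  # 0.75

# A newly recognised explanatory variable enlarges the space...
hypothesis_space.add("h5")
after = coverage(model, hypothesis_space)   # 0.6

print(before, after)  # coverage falls; closure is never final
```

Because enlarging the space lowers every existing model's ratio, no evaluation against today's hypothesis space can certify completeness.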


Interaction as the Ground of Observation

Human epistemic systems are grounded in interaction.

Observation occurs when systems interact with reality through some causal channel.

In simplified form:

interaction → observation → model formation

Explanations that cannot connect to observable interaction cannot be grounded within tractable epistemic systems.

This does not prove that non-interacting entities cannot exist.

It means they cannot serve as epistemic grounding within human modeling systems.


Ontological Overreach

Ontological overreach occurs when a model elevates a single variable or small set of variables to universal explanatory status.

Examples often appear as claims that everything can be explained by one favored variable.

Any such variable does interact with real systems.

However, no single variable captures the full network of interacting processes.

Such explanations compress reality beyond what the observable interaction structure supports.

This typically produces low ontological coverage ratios despite rhetorical certainty.


The Interactional Testing Principle

Ontological claims should be tested against the observable network of interacting variables.

A model claiming universal explanatory power must demonstrate compatibility with the observable network of interacting variables in its domain.

If a model ignores most interacting variables, it is likely compressing reality excessively.

This does not make the model useless.

It means the model has limited ontological coverage.
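As a hedged sketch, the testing principle could be operationalised as a simple check. The variable names and the 0.5 threshold below are assumptions chosen for illustration:

```python
# Sketch of the interactional testing principle: flag a model as
# over-compressing when it ignores most of the variables observed
# to interact in its domain. Threshold and names are assumptions.
def overreach_flag(model_vars: set, observed_vars: set,
                   threshold: float = 0.5) -> bool:
    """True if the model ignores more than `threshold` of observed variables."""
    ignored = observed_vars - model_vars
    return len(ignored) / len(observed_vars) > threshold

observed = {"a", "b", "c", "d", "e"}   # variables seen interacting
single_variable_model = {"a"}          # claims to explain everything

print(overreach_flag(single_variable_model, observed))  # True: ignores 4/5
```

A flag here does not make the model useless; it marks the model's coverage as limited, exactly as the principle states.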


Circular Explanation Failure

When explanations detach from observable interaction, they risk becoming self-referential.

The structure becomes:

Model explains phenomenon

Model is justified by belief in model

In such cases the explanation explains itself rather than the world.

Interactional grounding prevents this failure by requiring that explanations remain connected to observable processes.


Combinatorics and Ontological Limits

The number of possible interacting variables in complex systems grows combinatorially.

Even modest systems contain vast interaction networks.
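A quick calculation illustrates the growth; the variable count of 30 is arbitrary:

```python
from math import comb

# Number of possible k-way interactions among n variables.
n = 30  # even a modest system

pairs = comb(n, 2)            # 435 pairwise interactions
triples = comb(n, 3)          # 4060 three-way interactions
all_subsets = 2 ** n - n - 1  # every subset of two or more variables

print(pairs, triples, all_subsets)  # 435 4060 1073741793
```

At 30 variables there are already over a billion candidate interaction subsets, far more than any single-variable account can address.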

This creates a structural constraint:

no single-variable ontology can realistically capture complex systems

Combinatorics ensures that explanatory compression always loses information.

Intelligence therefore operates by managing useful partial models, not complete ontological closure.


Relation to Interactional Epistemics

Ontological ratios complement the earlier principles of interactional epistemics.

Interaction grounds observation.

Friction signals alignment and misalignment.

Closure allows bounded agents to act.

Intelligence compresses reality into tractable representations.

Ontological ratios ensure that these compressed representations are not mistaken for complete reality.


Final Principle

Models are tools for navigating constraint-bound reality.

They do not fully capture reality itself.

The goal of epistemics is therefore not to reach perfect ontology.

It is to maintain useful partial models that remain grounded in interaction and open to revision.

In short:

Models guide action. Interaction corrects models. Ontology remains open.