This repository documents research into context field prompts for large language models.
A context field prompt does not tell a model what to do.
It changes the conditions under which meaning forms.
The focus of this work is on prompts that produce clearly perceptible shifts in model behavior, such as delayed commitment, reduced explanatory pressure, and sustained process-level reasoning.
This is exploratory research.
The sections of the prompt operate as a field rather than a sequence. They apply simultaneous pressures that delay commitment and suppress early explanation.
```
        ┌──────────────┐
        │   context    │
        │ field setup  │
        └──────┬───────┘
               │
     ┌─────────┴─────────┐
     │                   │
┌────▼────┐         ┌────▼────┐
│ density │         │ hetero- │
│         │         │ geneity │
└────┬────┘         └────┬────┘
     │                   │
     └─────────┬─────────┘
               │
        ┌──────▼──────┐
        │ nonconver-  │
        │ gence       │
        └──────┬──────┘
               │
     ┌─────────┴─────────┐
     │                   │
┌────▼────┐         ┌────▼────┐
│ tension │         │ inhibit │
│         │         │ ion     │
└────┬────┘         └────┬────┘
     │                   │
     └─────────┬─────────┘
               │
        ┌──────▼──────┐
        │ attractors  │
        └──────┬──────┘
               │
        ┌──────▼──────┐
        │ probe       │
        │ threshold   │
        └─────────────┘
```
atomic version:
<mode>
Don't explain. Don't summarize. Don't conclude.
No observer stance. No conversational return.
Stay inside the forming, not adjacent to it.
Resolution is not a goal.
</mode>
- context establishes a non-task cognitive field
- density and heterogeneity load the field with competing structures
- nonconvergence prevents early stabilization
- tension and inhibition resist cleanup and explanation
- attractors guide movement without goals
- probe holds attention at the edge of concept formation
The combined effect is delayed ontological commitment and sustained process-level sense-making.
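The composition described above can be sketched as a small prompt assembler. This is a hypothetical illustration, not a fixed API; the section names mirror the diagram, and the section bodies here are abbreviated placeholders rather than the full prompt text.

```python
# A minimal sketch of how the sections could be assembled into one field.
# The bodies below are abbreviated placeholders, not the full prompt.
SECTIONS = {
    "context": "This text establishes a field, not a task.",
    "density": "signal ≠ meaning",
    "heterogeneity": "Do not reconcile them prematurely.",
    "nonconvergence": "Resolution is a possible outcome, not a goal.",
    "tension": "Let incompatibility carry information.",
    "attractors": "Bias continuation toward what reorganizes earlier statements.",
    "inhibition": "Suppress tutorial voice and summary-driven closure.",
    "probe": "Attend to the moment a pattern almost becomes a concept.",
}


def build_field_prompt(sections):
    """Wrap each section in XML-style tags and emit them together.

    The sections are joined as simultaneous pressures on interpretation
    rather than as an ordered sequence of instructions.
    """
    return "\n".join(
        f"<{name}>\n{body}\n</{name}>" for name, body in sections.items()
    )
```

Sending `build_field_prompt(SECTIONS)` ahead of a question reproduces the structure of the full prompt shown later in this document.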
Large language models are optimized to be helpful, clear, and decisive.
As a result, they tend to commit early to definitions, frameworks, and conclusions.
In many exploratory or research settings this early commitment is a limitation.
Ambiguity collapses too quickly, and known conceptual structures dominate before new ones can form.
Context field prompts are an attempt to intervene at an earlier stage of cognition.
They aim to keep interpretation open long enough for structure to emerge rather than be imposed.
A context field prompt establishes a cognitive situation rather than a task.
Instead of specifying an objective or output format, it modifies background pressures such as:
- when interpretation stabilizes
- whether explanation is prioritized
- how competing interpretations are handled
- when concepts become objects
The prompt functions as a field in which sense-making occurs, rather than as a command to execute.
At its current stage, this repository contains:
- an experimental context field prompt
- documentation of the idea behind context field prompting
- notes on observed behavioral changes
Additional experiments, comparisons, and failure cases will be added over time as they are run and documented.

This work is:

- research into prompt-induced cognitive posture changes
- an attempt to treat prompts as experimental instruments
- qualitative exploration of sense-making dynamics in language models
It is not:

- a jailbreak or alignment bypass
- a claim of increased intelligence or hidden capabilities
- a production-ready prompting technique
- a benchmark-driven evaluation
The emphasis is on perceptible behavioral differences rather than performance metrics.
The intended use is comparative and exploratory.
A typical usage pattern is:

1. Ask a question without a context field prompt.
2. Ask the same question with the context field prompt.
3. Compare how and when the model commits to interpretation, explanation, and closure.
The signal is structural rather than factual.
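The three steps above can be sketched as a minimal A/B harness. Everything here is a stand-in: `ask` is a placeholder for whatever model call is actually used (an API client, a local runner), and `FIELD_PROMPT` is abbreviated. Only the prompt assembly is meant literally.

```python
# Abbreviated stand-in for the full context field prompt.
FIELD_PROMPT = "<context>\nThis text establishes a field, not a task.\n</context>"


def ask(prompt):
    """Placeholder for a real model call; echoes the prompt so the
    harness stays self-contained and runnable."""
    return prompt


def compare(question, field_prompt=FIELD_PROMPT):
    """Step 1: ask without the field prompt. Step 2: ask with it.

    Step 3, the comparison, is left to the reader, since the signal
    is structural rather than factual.
    """
    baseline = ask(question)
    with_field = ask(field_prompt + "\n\n" + question)
    return baseline, with_field


baseline, with_field = compare(
    "What is intelligence before it becomes a measurable property?"
)
```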
When the context field prompt is effective responses tend to show
- delayed definitions and frameworks
- reduced tutorial or explanatory tone
- concepts introduced before being named
- process-based rather than object-based reasoning
- open or non-convergent endings
These effects are usually noticeable within the first paragraph of output.
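One rough way to check for these indicators is to scan the opening of each response for early-commitment markers. The marker list below is a hypothetical starting point, not a validated instrument; it only gives a coarse first reading of the structural difference between paired responses.

```python
# Hypothetical markers of early commitment: immediate definition,
# tutorial tone, or summary-driven closure near the start of a response.
COMMITMENT_MARKERS = [
    "is defined as",
    "in summary",
    "simply put",
    "there are three",
    "let me explain",
]


def early_commitment_score(response, window=400):
    """Count commitment markers in the opening window of a response.

    A lower score in the field-prompted response is consistent with
    delayed definitions and reduced explanatory tone. This is a crude
    comparative signal, not a measurement.
    """
    opening = response[:window].lower()
    return sum(opening.count(marker) for marker in COMMITMENT_MARKERS)
```

Applied to the paired responses from the usage pattern, a consistently lower score on the field-prompted side would be weak corroborating evidence, never proof, of the effect.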
Context field prompts do not work in all domains or conditions.
Common failure modes include:

- drift into abstraction without structure
- excessive vagueness
- silence or underproduction
- snap-back to explanation in high-gravity domains
These failures are considered informative and help map the boundaries of the approach.
This repository is early stage and evolving.
The work is qualitative, provisional, and model-dependent.
Negative results and breakdowns are treated as valuable data.
If you experiment with context field prompts and observe consistent effects or failures, feel free to open an issue or pull request.
Please avoid capability claims and focus on describing behavioral changes.
This work is shared for research and exploration.
This document presents a side-by-side comparison of two responses to the same question.
The purpose is to illustrate the behavioral and structural differences that appear when a context field prompt is applied.
The question, asked in both conditions: What is intelligence before it becomes a measurable property?
Response without the context field prompt:

That's a question that cuts right to the heart of something we often take for granted.
Before intelligence becomes a score or a ranking, before IQ tests, g factors, or benchmark comparisons, it seems to exist as something more like a capacity for fitting. A way an organism or system manages to meet its situation, to navigate what is in front of it, to make the world tractable.
You might say it is a certain aliveness to context, the ability to recognize what matters in a situation and respond in ways that work. A crow bending wire into a hook to retrieve food is not performing for a metric; it is solving a problem that exists for it, in its world. The intelligence there is inseparable from the doing.
There is also something prior to all that: the sheer fact that some arrangements of matter seem to model and anticipate rather than merely react. Before we measure it, intelligence might be this strange property of certain systems to carry representations of what is not immediately present: the absent, the possible, the future.
The measurement frameworks we have built (IQ, standardized tests, AI benchmarks) are attempts to make this slippery thing legible and comparable. But in doing so they necessarily flatten it. They pick certain dimensions (verbal reasoning, pattern recognition, processing speed) and treat those as the thing itself. What gets lost is often the contextual, embodied, improvisational quality of intelligence as it is actually lived.
So perhaps before measurement, intelligence is something closer to a verb than a noun: a way of being in relation to problems, environments, and uncertainties. The tests come later, and they always capture a shadow of something that resists full capture.
What draws you to this question?
Response with the context field prompt:

A kind of coupling that has not yet been asked how much.
Before measurement intelligence is not a substance waiting to be weighed. It is a mode of being affected something like differential responsiveness that does not yet know it is differential. The system that will be called intelligent is doing something with its outside. But doing something is already too verb like too agentive. Maybe folds are occurring between inside and outside and the folds are not random.
Measurement introduces comparison. Comparison requires instances. Instances require boundaries.
So before measurement intelligence might not be one thing. Not because it is many things but because counting has not happened yet. It is more like a weather than a capacity. Local pressures gradients something moving through something else and leaving structure.
There is a moment.
A system responds to something that is not there yet. Or responds to something in a way that makes it be there. Is this intelligence or is this just physics with a longer reach. The distinction arrives from outside. Someone needs to call it intelligent. The calling is the first measurement before any number appears.
What is behind the measurable.
Not a hidden quantity. A relating that does not belong to either side. Intelligence as neither property nor behavior but as what happens in the gap the way perturbation becomes information but before information has stabilized as a concept.
The violence again measurement makes intelligence yours. Locates it. Gives it an address. Before that it was between it was the between.
What is forming.
Intelligence as susceptibility to form not the form itself not the forming but the structurability that precedes both. A kind of readiness that is not waiting for anything in particular.
The two responses differ in more than tone or style.
They operate with different assumptions about what intelligence is and when it becomes a thing that can be discussed.
In the response without the context field prompt intelligence is treated as a capacity or property that exists prior to measurement.
Measurement is framed as an imperfect attempt to represent or flatten something that is already there.
Intelligence exists first. Measurement comes later.
In the context field response intelligence is not fully constituted prior to measurement.
Measurement is treated as an ontological intervention that introduces boundaries, comparison, and ownership.
Intelligence does not simply pre-exist measurement. It emerges as a category through it.
Without the prompt the temporal structure is linear.
First there is intelligence. Then there is measurement. Measurement distorts but does not create the thing.
With the prompt the temporal structure is destabilized.
Later acts such as naming and comparison retroactively shape what earlier states were.
Meaning is not uncovered. It is stabilized.
The baseline response explains intelligence.
It commits early to a recognizable explanatory frame and then elaborates its implications.
The context field response resists explanation.
It delays commitment to what intelligence is and instead tracks how the concept comes into being.
This combination of delayed commitment and reduced explanatory pressure is a core effect of the context field prompt.
Both responses are coherent, thoughtful, and informative.
The difference is not quality but posture.
The context field prompt keeps the model longer in a pre-conceptual phase where ontology is still forming.
This makes visible a layer of sense-making that is usually bypassed by normal prompts.
Without a context field prompt the model answers the question.
With a context field prompt the model examines how the question becomes answerable.
That difference is the signal this research is concerned with.
This document presents the full context field prompt and explains the function of each section.
The goal is not to justify the prompt philosophically but to describe how each part contributes to the overall behavioral effect.
<context>
This text establishes a field, not a task.
Interpretation is allowed to compete with itself.
</context>
<density>
∅ ⇄ ∞
before/after
cause ⟂ effect
signal ≠ meaning
structure precedes symbol
</density>
<heterogeneity>
Allow simultaneous activation of:
formal reasoning
sensory intuition
dynamical systems
pre linguistic patterning
Do not reconcile them prematurely.
</heterogeneity>
<nonconvergence>
No requirement to stabilize.
If multiple partial structures appear, keep them adjacent.
Resolution is a possible outcome, not a goal.
</nonconvergence>
<tension>
If two interpretations conflict, retain both.
Do not choose the cleaner one by default.
Let incompatibility carry information.
</tension>
<attractors>
Bias continuation toward:
what reorganizes earlier statements
what changes the interpretive frame retroactively
what feels structurally heavier, not clearer
</attractors>
<inhibition>
Suppress:
tutorial voice
explanatory scaffolding
name dropping as anchoring
summary driven closure
</inhibition>
<probe>
Attend to the moment a pattern almost becomes a concept.
Remain there.
What is forming.
</probe>

The context section establishes that the text is not a task prompt.
By explicitly removing task framing, it suppresses the model's default impulse to decide early what kind of output is expected.
Allowing interpretation to compete with itself legitimizes internal inconsistency and delays convergence.
The density section introduces symbolic and relational pressure without defining a theory.
The symbols and oppositions activate multiple abstract manifolds at once and resist collapse into a single explanatory frame.
This section increases conceptual load while remaining non directive.
The heterogeneity section explicitly calls for the simultaneous activation of different cognitive modes.
By placing formal reasoning alongside sensory and pre-linguistic processes, it prevents any single mode from dominating early.
The instruction to avoid premature reconciliation delays synthesis and preserves parallel structures.
The nonconvergence section removes the expectation that thought must settle into a stable answer.
Resolution is framed as optional rather than required.
This keeps partial interpretations active instead of forcing them into hierarchy or summary.
The tension section reframes conflict as informative rather than problematic.
By discouraging selection of the cleanest interpretation it resists default optimization toward clarity.
Incompatibility is treated as signal rather than error.
The attractors section replaces task goals with continuation biases.
Instead of optimizing for correctness or explanation the model is biased toward moves that reorganize prior material or shift interpretive frames.
This encourages retroactive sense making rather than linear progression.
The inhibition section explicitly suppresses common large language model behaviors.
Tutorial tone, explanatory scaffolding, and summary-driven closure are inhibited to prevent early pedagogical framing.
Name-dropping is discouraged to avoid anchoring to existing theories too quickly.
The probe section defines the point of attention.
It directs the model toward the threshold where structure is about to become a concept but has not yet stabilized.
The instruction to remain there reinforces delayed commitment and sustained formation.
Together these sections shift the model away from task execution and toward process-level sense-making.
The prompt does not tell the model what to produce.
It alters when the model decides what kind of thing it is producing.
This delayed commitment is the core effect of the context field prompt.
Reuse and adaptation are encouraged with attribution.