ClariGram is an overlay on standard English designed to enhance clarity in internal thinking and external communication. It introduces structured rules to reduce ambiguity, organize complex ideas, and promote persuasive, audience-tailored expression. Inspired by logical languages (e.g., Lojban), controlled natural languages (e.g., Attempto Controlled English), and classical education frameworks like the Trivium, ClariGram is intuitive, conversational, and practical for everyday use, such as journaling, professional communication, or collaborative problem-solving. It also serves as a structured "prompt dialect" for AI prompt engineering, guiding large language models (LLMs) toward reliable outputs. ClariGram is a proposed framework, actively seeking community feedback to refine its applications.
ClariGram serves multiple purposes:
- Internal Clarity of Thought: Aids reflection, decision-making, and problem-solving by structuring ideas into logical, quantifiable parts, reducing cognitive load (an estimated 10-20% less mental effort on reasoning tasks) [10].
- External Communication: Enhances verbal (e.g., conversations, presentations) and written (e.g., emails, reports) interactions by minimizing misunderstandings and incorporating emotional/rhetorical nuance.
- AI Prompt Generation: Crafts precise LLM prompts, improving response accuracy (e.g., 10-20% better on reasoning benchmarks) through structured definitions and verifications [5].
- Other Applications: Extensible to educational tools, machine-parsable formats (e.g., JSON), or interdisciplinary uses like scientific writing.
ClariGram consists of seven rules, applied in a recommended order to mirror natural thought processes while maintaining conversational flow:
1. Explicit Definitions (ED): Define key terms upfront or inline to avoid ambiguity (e.g., "Success [achieving personal goals]").
   - Why: Clarifies concepts for self and others, reducing semantic confusion.
2. Logical Sequencing (LS): Structure thoughts in a cause-effect or step-by-step order, optionally using explicit logical predicates (e.g., "relates-to," "causes") for relationships (e.g., "[LS: Step1=analyze; predicate=precedes; Step2=decide]").
   - Why: Aligns with cognitive processing of causality, aiding planning and explanation.
3. Conditionals and Alternatives (CA): Include if-then scenarios and alternatives, optionally with explicit predicates such as "implies" (e.g., "[CA: if_condition; predicate=implies; then_outcome; else_alt]").
   - Why: Accounts for uncertainties, enhancing contingency planning and dialogue.
4. Quantification and Precision (QP): Use specific numbers or ranges (e.g., "2-3 hours, approx.") instead of vague terms.
   - Why: Grounds ideas in measurable reality, boosting trust and clarity.
5. Verification Tags (VT): Tag statements with evidence type and source (e.g., "[fact: source=IPCC report]" or "[observation: source=senses; senses=sight,hearing; confidence=100%]") to indicate reliability or invite input.
   - Why: Promotes transparency and critical reflection. "type=observation" is reserved for direct sensory experiences (e.g., sight, hearing, touch), which are personal and not externally replicable.
6. Emotional Context (EC): Add emotional tags (e.g., "[EC: hopeful]") to convey tone or intent.
   - Why: Humanizes communication, balancing logic with empathy.
7. Rhetorical Adaptation (RA): Tailor expression to the audience, style, and intent (e.g., "[RA: audience=beginner; style=casual; intent=inform]").
   - Why: Enhances persuasion and engagement, inspired by the Trivium's Rhetoric stage.
Apply the rules sequentially: ED → LS → CA → QP → VT → EC → RA. This mirrors the cognitive progression from defining to structuring to refining, preserving natural flow. Use fewer rules in casual settings (e.g., ED and CA alone: "Coffee [ED: Coffee=quick catch-up]? If you're free [CA: after 3 PM; then_café]"); apply all seven for complex tasks.
Logical predicates in LS and CA, and verification parameters in VT, make relationships and evidence explicit, enhancing clarity and machine parsability. They are optional to maintain conversationality.
| Predicate/Parameter | Meaning | Example Usage |
|---|---|---|
| relates-to (LS/CA) | Links concepts by context or dependency | [LS: Step1=research; predicate=relates-to; Step2=plan] |
| causes (LS/CA) | Indicates causality | [LS: Delay=late parts; predicate=causes; Impact=shifted deadline] |
| precedes (LS/CA) | Denotes temporal/logical order | [LS: Step1=check; predicate=precedes; Step2=act] |
| implies (CA) | Specifies logical consequence | [CA: if_time>2h; predicate=implies; then_write] |
| contradicts (LS/CA) | Highlights opposition | [LS: Risk=high effort; predicate=contradicts; Goal=efficiency] |
| type=observation (VT) | Indicates direct sensory experience | [VT: observation; source=senses; senses=sight,hearing; confidence=100%] |
| source=senses (VT) | Specifies sensory input as evidence source | [VT: observation; source=senses; confidence=high] |
| senses (VT) | Lists specific senses used (e.g., sight, hearing, touch) | [VT: observation; source=senses; senses=sight,hearing; confidence=100%] |
| confidence (VT) | Specifies certainty level for observations | [VT: observation; source=senses; confidence=100%] |
Guidelines:
- Predicates: Choose based on intent (e.g., "causes" for causality). Use the syntax [LS: StepX=action; predicate=name; StepY=action] or [CA: if_condition; predicate=name; then_action; else_alt].
- VT Parameters: Use "type=fact" for objective, verifiable claims and "type=observation" for sensory experiences, with an optional "senses" list (e.g., sight, hearing, touch, taste, smell) for specificity. Note: sensory observations are personal, not externally replicable. Example: "[VT: observation; source=senses; senses=sight,hearing; confidence=100%]".
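Where tooling is desired, the predicate vocabulary can be enforced mechanically. Below is a minimal Python sketch, assuming the closed predicate set from the table above; the `check_predicates` helper is illustrative, not part of the specification:

```python
# Closed predicate vocabulary, taken from the table above.
ALLOWED_PREDICATES = {"relates-to", "causes", "implies", "precedes", "contradicts"}

def check_predicates(tag_body: str) -> bool:
    """Return True if every 'predicate=...' pair in a tag body uses a known predicate."""
    for part in tag_body.split(";"):
        key, _, value = part.strip().partition("=")
        if key == "predicate" and value.strip() not in ALLOWED_PREDICATES:
            return False
    return True

assert check_predicates("Step1=check; predicate=precedes; Step2=act")
assert not check_predicates("Step1=check; predicate=follows; Step2=act")  # unknown predicate
```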
ClariGram’s syntax is flexible for human use but formalized for machine parsability. Below is an Extended Backus-Naur Form (EBNF)-like notation:
"ClariGramStatement" ::= "RuleTag"+ "NaturalText"?
"RuleTag" ::= "[" "RuleCode" ":" "Content" "]"
"RuleCode" ::= "ED" | "LS" | "CA" | "QP" | "VT" | "EC" | "RA"
"Content" ::= "KeyValuePair"*
"KeyValuePair" ::= "Key" "=" "Value" ";"?
"Key" ::= "term" | "StepX" | "if_condition" | "predicate" | "source" | "senses" | "confidence" | ...
"Value" ::= "Text" | "Number" | "Range" # e.g., "achieving goals", "senses", "sight,hearing"
"Predicate" ::= "relates-to" | "causes" | "implies" | "precedes" | "contradicts"
Example: "[VT: observation; source=senses; senses=sight,hearing; confidence=100%]" converts to JSON: {"VT": {"type": "observation", "source": "senses", "senses": "sight,hearing", "confidence": "100%"}}.
ClariGram supports optional machine parsability for AI prompting or software integration:
- Formal Grammar: EBNF-like syntax allows parsers to validate/extract tags [1].
- Parsing Tools: Python scripts or plugins (like the sketch above) can convert tags to JSON, enabling structured LLM inputs.
- LLM Integration: LLMs can parse ClariGram as a controlled natural language (CNL), with reported response-accuracy gains of 10-20% on reasoning benchmarks [5]. Sensory observations prompt the AI to defer (e.g., "I can’t sense, but based on your input...").
- Human Use Unaffected: Parsability is optional; casual users rely on intuitive tags.
Scenario: Observing weather for planning.
Natural English: It’s raining outside, so I’ll stay in.
ClariGram: Observe: Weather [ED: Weather=rain; type=condition]. Plan: [LS: Rain=observed; predicate=causes; Action=stay indoors]. If rain stops [CA: within 2 hours; predicate=implies; then_go out; else_stay]. [QP: duration=2-4 hours approx.]. [VT: observation; source=senses; senses=sight,hearing,touch; confidence=100%]. [EC: neutral]. [RA: audience=self; style=reflective; intent=plan].
- Why it Helps: "Observation" with specified senses grounds personal planning, reducing cognitive load [7].
Verbal (Conversation):
Natural English: It’s raining, so let’s watch a movie inside tonight.
ClariGram (Spoken cues): Propose: Activity [ED: Activity=movie; type=meetup]. Reason: [LS: Rain=observed; predicate=causes; Choice=indoor]. If free [CA: time>7 PM; predicate=implies; then_watch movie; else_reschedule]. [QP: duration=2 hours]. [VT: observation; source=senses; senses=sight,hearing; confidence=100%]. [EC: excited]. [RA: audience=friends; style=casual; intent=bond].
- Why it Helps: Sensory observation with specific senses clarifies reasoning, maintaining conversational flow.
Written (Email):
Natural English: Project delay due to suppliers; discuss options.
ClariGram: Report: Project [ED: ID=Q3-Launch; type=status]. Issue: [LS: Supplier delay=3-5 days; predicate=causes; Impact=deadline shift from Oct 15 to Oct 20]. If resolved [CA: parts arrive by Monday; predicate=implies; then_proceed; else_escalate]. [QP: delay 3-5 days]. [VT: fact; source=tracking data; confidence=high]. [EC: concerned]. [RA: audience=team; style=professional; intent=collaborate]. Propose meeting Tuesday 10 AM.
- Why it Helps: Predicates and VT ensure professional clarity; sensory option available if needed.
Scenario: Solving a math problem with an LLM.
Natural English Prompt: Solve 2x + 3 = 7.
ClariGram Prompt: Define: Equation [ED: Equation=2x + 3 = 7; type=algebraic; variable=x]. Sequence: [LS: Step1=subtract 3 from both sides; predicate=precedes; Step2=divide by 2; predicate=precedes; Step3=verify]. If multiple solutions [CA: none; predicate=implies; then_output single value; else_note alternatives]. [QP: exact number; steps=3; format=JSON]. [VT: fact; source=math rules; self_check=verify steps]. [EC: tone=neutral]. [RA: audience=student; style=step-by-step; intent=teach]. Output as JSON.
- Why it Helps: Predicates align with Chain-of-Thought prompting; VT encourages verifiable output, with reported accuracy gains of 10-20% on reasoning benchmarks [7].
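For reference, one plausible shape for the requested JSON output (illustrative only; the model's actual structure will vary):

```json
{
  "steps": ["2x + 3 = 7", "2x = 4", "x = 2"],
  "solution": {"x": 2},
  "verified": true
}
```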
- Standard English: Flexible but ambiguous; ClariGram adds structure.
- Lojban: Precise but complex; ClariGram is accessible.
- Attempto Controlled English: Parsable but rigid; ClariGram is flexible.
- Ithkuil: Concise but complex; ClariGram balances simplicity.
- EmoStruct: Emotion-focused; inspired EC rule.
- Trivium: Educational framework; inspired ClariGram’s structure and RA.
- LLMs: Probabilistic; ClariGram enhances prompting clarity.
A LangChain `PromptTemplate` that slots one ClariGram tag per rule around the user's query:

```python
from langchain.prompts import PromptTemplate

# One placeholder per ClariGram rule; the trailing comments document each slot's syntax.
clarigram_template = """
{query}
Use ClariGram structure with explicit logical predicates and verification:
[ED: {ed}]  # term=definition; type=concept/tool
[LS: {ls}]  # StepX=action; predicate=relates-to/causes/precedes
[CA: {ca}]  # if_condition; predicate=implies; then_action; else_alt
[QP: {qp}]  # range=X-Y; format=list/JSON
[VT: {vt}]  # type=fact/observation; source=senses/data; senses=sight,hearing; confidence=level
[EC: {ec}]  # tone=positive/neutral; intensity=high/medium
[RA: {ra}]  # audience=X; style=Y; intent=Z
Respond in {output_format}.
"""

clarigram_prompt = PromptTemplate(
    input_variables=["query", "ed", "ls", "ca", "qp", "vt", "ec", "ra", "output_format"],
    template=clarigram_template,
)
```
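The template can then be filled in, for example with the equation-solving prompt from earlier (all field values are illustrative):

```python
prompt_text = clarigram_prompt.format(
    query="Solve 2x + 3 = 7.",
    ed="Equation=2x + 3 = 7; type=algebraic; variable=x",
    ls="Step1=subtract 3; predicate=precedes; Step2=divide by 2; predicate=precedes; Step3=verify",
    ca="if_multiple_solutions; predicate=implies; then_note alternatives; else_output single value",
    qp="exact number; steps=3; format=JSON",
    vt="type=fact; source=math rules; self_check=verify steps",
    ec="tone=neutral",
    ra="audience=student; style=step-by-step; intent=teach",
    output_format="JSON",
)
print(prompt_text)
```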
Common pitfalls and mitigations:
- Verbosity: Use fewer rules for casual settings.
- Predicate Confusion: Start with "relates-to" or "implies"; refer to table.
- VT Subjectivity: Use "type=observation" for sensory data; note it’s personal and non-replicable.
- LLM Errors: Specify QP format; use VT self-check.
- Learning Curve: Practice one rule daily (e.g., ED first).
- Domain-Specific Variants: Open question; community feedback welcome.
- Tools: Build parsers/linters for predicate and VT validation, including sensory tags.
- AI Integration: Use with LangGraph for graph-based prompting; explore embodied AI with sensory capabilities (e.g., robots with vision/audio sensors) to align with future LLM developments.
- Email: Nathaniel Graham <ngraham@cpan.org>
- [1] Wang et al., "Grammar Prompting for Domain-Specific Language Generation with Large Language Models," NeurIPS 2023.
- [2] LangChain, "LangGraph Documentation," langchain-ai.github.io/langgraph, 2025.
- [5] Chen et al., "Prompting Large Language Models as Commonsense Knowledge Graphs," arXiv, 2023.
- [7] Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models," arXiv, 2022.
- [10] Sweller, "Cognitive Load Theory," Springer, 2011.