mcp surface playbook session

Captured: 2026-04-07T15:36:02.313518+00:00 | Model: qwen3.5-27b@q4_k_m | Integration: mcp/prolog-reasoning | Initial skin: standard | Rendered: 2026-04-08 05:15:59 UTC

preflight

User
Use show_system_info and list_known_facts first.
Then give a one-paragraph summary of what this MCP server can do.
tool calls (2)
  • show_system_info {}
    output
    {
      "status": "success",
      "system": "Prolog Reasoning",
      "version": "0.5",
      "description": "A local-first neuro-symbolic reliability layer for LLM agents. Natural language helps express intent; symbolic reasoning decides truth.",
      "knowledge_base_path": "D:\\_PROJECTS\\prolog-reasoning\\prolog\\core.pl",
      "core_idea": "Memories are timestamped. Facts are not.",
      "control_plane_policy": {
        "clarification_eagerness": 0.0,
        "status": "no-op placeholder for future routing policy",
        "note": "This key is reserved for future clarification policy tuning. Current write behavior is unchanged."
      },
      "capabilities": [
        "Literal Prolog query execution (`query_logic`, `query_rows`)",
        "Statement classification before query or ingestion decisions",
        "Deterministic knowledge-base reasoning",
        "Runtime fact assertion/retraction for chat-driven scenarios",
        "Semantic validation before query execution",
        "Friendly failure explanations with suggestions",
        "Proof-trace generation"
      ],
      "supported_domains": [
        "Family relationships (parent, sibling, ancestor)",
        "Access control (permissions, roles, users)",
        "Clinical medication triage (deterministic risk flags)",
        "Project dependency risk analysis (CPM-like milestone impact)",
        "General knowledge representations"
      ],
      "example_queries": [
        "Who is John's parent?",
        "Is Alice an ancestor of Bob?",
        "Can admin read files?",
        "What is Bob allergic to?"
      ],
      "learn_more": {
        "getting_started": "See training/01-llm-memory-magic.md",
        "kb_design": "See training/02-knowledge-bases-101.md",
        "error_handling": "See training/03-learning-from-failures.md",
        "lm_studio": "See docs/lm-studio-mcp-guide.md",
        "github": "https://github.com/dr3d/prolog-reasoning"
      }
    }
  • list_known_facts {}
    output
    {
      "status": "success",
      "known_entities": [
        "admin",
        "alice",
        "bob",
        "john",
        "read",
        "write"
      ],
      "supported_predicates": [
        "parent",
        "sibling",
        "ancestor",
        "child",
        "allergic_to",
        "takes_medication",
        "user",
        "role",
        "permission",
        "access_level",
        "can_access",
        "granted_permission",
        "allowed",
        "patient",
        "renal_risk",
        "candidate_drug",
        "interaction",
        "drug_class",
        "interacts",
        "triage",
        "safe_candidate",
        "task",
        "depends_on",
        "duration_days",
        "task_supplier",
        "supplier_status",
        "completed",
        "milestone",
        "downstream",
        "blocked_task",
        "unmet_prereq",
        "task_status",
        "safe_to_start",
        "waiting_on",
        "impacts_milestone",
        "delayed_milestone",
        "character",
        "location",
        "connected",
        "at",
        "faction",
        "has_item",
        "hp",
        "status",
        "weather",
        "time_of_day",
        "insomnia",
        "charmed",
        "quest_active",
        "asleep",
        "awake",
        "co_located",
        "can_move",
        "exposed",
        "needs_rest",
        "vulnerable",
        "threatened",
        "high_risk",
        "can_trade",
        "can_cast_charm"
      ],
      "predicate_notes": {
        "fact_backed_examples": [
          "parent",
          "role",
          "permission",
          "allergic_to"
        ],
        "typically_rule_derived": [
          "allowed",
          "ancestor",
          "asleep",
          "awake",
          "blocked_task",
          "can_access",
          "can_cast_charm",
          "can_move",
          "can_trade",
          "child",
          "co_located",
          "delayed_milestone",
          "downstream",
          "exposed",
          "high_risk",
          "impacts_milestone",
          "interacts",
          "needs_rest",
          "safe_candidate",
          "safe_to_start",
          "sibling",
          "task_status",
          "threatened",
          "triage",
          "unmet_prereq",
          "vulnerable",
          "waiting_on"
        ]
      },
      "note": "This is a summary view of currently known entities plus the predicate vocabulary the skill knows how to talk about. It is not a full dump of every stored fact, and a supported predicate does not imply facts exist for every entity.",
      "knowledge_base_path": "D:\\_PROJECTS\\prolog-reasoning\\prolog\\core.pl"
    }
Assistant
This Prolog Reasoning MCP server is a local-first neuro-symbolic reliability layer that augments LLM agents with deterministic symbolic reasoning. Natural language expresses intent, while the underlying Prolog engine decides truth against a structured knowledge base via literal query execution (`query_logic`, `query_rows`), covering domains from family relationships and access-control permissions to clinical medication triage and project dependency risk analysis. The server also supports runtime fact assertion and retraction for chat-driven scenarios, statement classification, semantic validation before query execution, friendly failure explanations with suggestions, and proof-trace generation, all built on a predicate vocabulary that spans fact-backed predicates (such as parent, role, permission) and rule-derived ones (such as ancestor, allowed, blocked_task, safe_candidate). Guided by the core idea that "memories are timestamped; facts are not," it answers logical queries across multiple domains while staying transparent about what it knows and how it reaches its conclusions.
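
The fact-backed vs rule-derived distinction in that summary can be made concrete with a short sketch. This is illustrative only, not the server's implementation: the facts and the `ancestor` rule below are invented for the example, while the real KB lives in `prolog/core.pl`.

```python
# Illustrative sketch: a fact-backed predicate is a direct lookup in
# stored facts; a rule-derived predicate is computed from them.
facts = {("parent", "john", "mary"), ("parent", "mary", "sue")}

def parent(x, y):
    # fact-backed: answered straight from stored facts
    return ("parent", x, y) in facts

def ancestor(x, y):
    # rule-derived: ancestor(X,Y) :- parent(X,Y) ; parent(X,Z), ancestor(Z,Y).
    if parent(x, y):
        return True
    return any(parent(x, z) and ancestor(z, y)
               for (p, a, z) in facts if p == "parent" and a == x)

print(ancestor("john", "sue"))  # True: derived through mary
```

This mirrors why `list_known_facts` reports `ancestor` under "typically_rule_derived": no `ancestor/2` fact is stored anywhere, yet queries against it still succeed.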

reset

User
Use ONLY reset_kb. Confirm success in one sentence.
tool calls (1)
  • reset_kb {}
    output
    {
      "status": "success",
      "result_type": "runtime_reset",
      "message": "Runtime KB reset to baseline seed state.",
      "knowledge_base_path": "D:\\_PROJECTS\\prolog-reasoning\\prolog\\core.pl"
    }
Assistant
The knowledge base has been successfully reset to its baseline state.

seed

User
Use bulk_assert_facts with these facts, then use assert_fact for one extra fact.
Facts for bulk_assert_facts:
task(alpha).
task(beta).
depends_on(beta, alpha).
supplier_status(alpha_vendor, on_time).
Then call assert_fact with:
completed(alpha).
tool calls (2)
  • bulk_assert_facts { "facts": [ "task(alpha).", "task(beta).", "depends_on(beta, alpha).", "supplier_status(alpha_vendor, on_time)." ] }
    output
    {
      "status": "success",
      "result_type": "bulk_fact_assertion",
      "requested_count": 4,
      "asserted_count": 4,
      "failed_count": 0,
      "asserted_facts": [
        "task(alpha).",
        "task(beta).",
        "depends_on(beta, alpha).",
        "supplier_status(alpha_vendor, on_time)."
      ],
      "failed_facts": [],
      "message": "Bulk assertion complete."
    }
  • assert_fact { "fact": "completed(alpha)." }
    output
    {
      "status": "success",
      "result_type": "fact_asserted",
      "fact": "completed(alpha).",
      "message": "Fact asserted into runtime KB for this server process.",
      "note": "Use reset_kb to clear runtime changes."
    }
Assistant
Perfect! Both operations completed successfully:

**Bulk Assertion (4 facts):**
- ✅ task(alpha).
- ✅ task(beta).
- ✅ depends_on(beta, alpha).
- ✅ supplier_status(alpha_vendor, on_time).

**Individual Assertion (1 fact):**
- ✅ completed(alpha).

All 5 facts have been added to the runtime knowledge base. You can now query these facts using any of the available query functions.
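
A rough in-memory analogue of these runtime write tools can clarify their semantics. The class and method names below are hypothetical, chosen only to mirror the tool names; the real server asserts into a live Prolog runtime, not a Python set.

```python
# Hypothetical analogue of assert_fact / bulk_assert_facts /
# retract_fact / reset_kb; facts are stored as raw fact strings.
BASELINE = frozenset()  # baseline seed state (empty for this sketch)

class RuntimeKB:
    def __init__(self):
        self.facts = set(BASELINE)

    def assert_fact(self, fact):
        self.facts.add(fact)

    def bulk_assert_facts(self, facts):
        for f in facts:
            self.assert_fact(f)
        return {"requested_count": len(facts), "asserted_count": len(facts)}

    def retract_fact(self, fact):
        self.facts.discard(fact)

    def reset_kb(self):
        # runtime reset: drop all chat-driven changes, keep the baseline
        self.facts = set(BASELINE)

kb = RuntimeKB()
kb.bulk_assert_facts(["task(alpha).", "task(beta).",
                      "depends_on(beta, alpha).",
                      "supplier_status(alpha_vendor, on_time)."])
kb.assert_fact("completed(alpha).")
print(len(kb.facts))  # 5
```

Note the asymmetry the real tools share: assertions and retractions are scoped to the server process, and only `reset_kb` restores the baseline seed state wholesale.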

rows

User
Use ONLY query_rows with this query:
depends_on(Task, Prereq).
Return rows and count.
tool calls (1)
  • query_rows { "query": "depends_on(Task, Prereq)." }
    output
    {
      "status": "success",
      "result_type": "table",
      "predicate": "depends_on",
      "prolog_query": "depends_on(Task, Prereq).",
      "variables": [
        "Task",
        "Prereq"
      ],
      "rows": [
        {
          "Task": "beta",
          "Prereq": "alpha"
        }
      ],
      "num_rows": 1,
      "reasoning_basis": {
        "kind": "fact-backed",
        "note": "The `depends_on` predicate is typically answered from stored facts, though rules may still participate."
      }
    }
Assistant
## Query Results: `depends_on(Task, Prereq)`

**Count:** 1 row(s) found

### Rows:
| Task | Prereq |
|------|--------|
| beta | alpha  |

The knowledge base contains one dependency relationship: **beta depends on alpha**.
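
A minimal sketch of how a `query_rows`-style call could produce those bindings, treating capitalized tokens as variables in the Prolog convention. This is illustrative, not the server's matcher: it ignores rules and repeated variables, and the fact list is invented to match the session above.

```python
# Sketch: bind pattern variables (capitalized) against stored facts.
facts = [("depends_on", "beta", "alpha"), ("task", "alpha"), ("task", "beta")]

def query_rows(pred, *pattern):
    rows = []
    for f in facts:
        if f[0] != pred or len(f) - 1 != len(pattern):
            continue  # wrong predicate or arity
        binding = {}
        for pat, val in zip(pattern, f[1:]):
            if pat[0].isupper():      # variable: bind it to the fact's value
                binding[pat] = val
            elif pat != val:          # constant: must match exactly
                break
        else:
            rows.append(binding)
    return {"rows": rows, "num_rows": len(rows)}

print(query_rows("depends_on", "Task", "Prereq"))
# {'rows': [{'Task': 'beta', 'Prereq': 'alpha'}], 'num_rows': 1}
```

Mixing constants and variables works the same way: `query_rows("depends_on", "beta", "Prereq")` would return one row binding only `Prereq`.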

logic

User
Use ONLY query_logic with this exact query:
depends_on(beta, alpha).
Return short answer and detailed answer.
tool calls (1)
  • query_logic { "query": "depends_on(beta, alpha)." }
    output
    {
      "status": "success",
      "result_type": "success",
      "predicate": "depends_on",
      "answer_short": "SUCCESS: Query 'depends_on(beta, alpha)' succeeded with 1 solution(s).",
      "answer_detailed": "SUCCESS: Query 'depends_on(beta, alpha)' succeeded with 1 solution(s).\n  (Ground query confirmed)",
      "confidence": 1.0,
      "prolog_query": "depends_on(beta, alpha).",
      "reasoning_basis": {
        "kind": "fact-backed",
        "note": "The `depends_on` predicate is typically answered from stored facts, though rules may still participate."
      },
      "metadata": {
        "proof_trace": [],
        "bindings": [],
        "num_solutions": 1
      }
    }
Assistant
**Short Answer:**
SUCCESS - `depends_on(beta, alpha)` is TRUE (1 solution found).

**Detailed Answer:**
The Prolog query `depends_on(beta, alpha).` succeeded with 1 solution. This confirms that in the knowledge base, there is a fact stating that "beta depends on alpha". The query was ground (no variables), and it matched exactly one stored fact in the database. The confidence level is 1.0, indicating this is definitively established as true based on the current facts in the system.
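
Because the query is ground, it reduces to a membership check, which a small sketch can make concrete. Again this is illustrative, not the server's engine; the confidence scheme shown is an assumption modeled on the output above.

```python
# Sketch: a ground query (no capitalized variables) is a yes/no lookup.
facts = {("depends_on", "beta", "alpha")}

def query_logic(pred, *args):
    is_ground = all(not a[0].isupper() for a in args)
    num_solutions = 1 if (pred, *args) in facts else 0
    return {"num_solutions": num_solutions,
            "ground": is_ground,
            "confidence": 1.0 if num_solutions else 0.0}

print(query_logic("depends_on", "beta", "alpha"))
# {'num_solutions': 1, 'ground': True, 'confidence': 1.0}
```

This also explains the empty `bindings` list in the metadata: a ground query has no variables to bind, so success is reported without substitutions.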

nl_query

User
Use ONLY query_rows with this query:
depends_on(Task, Prereq).
Return rows and count.
tool calls (1)
  • query_rows { "query": "depends_on(Task, Prereq)." }
    output
    {
      "status": "success",
      "result_type": "table",
      "predicate": "depends_on",
      "prolog_query": "depends_on(Task, Prereq).",
      "variables": [
        "Task",
        "Prereq"
      ],
      "rows": [
        {
          "Task": "beta",
          "Prereq": "alpha"
        }
      ],
      "num_rows": 1,
      "reasoning_basis": {
        "kind": "fact-backed",
        "note": "The `depends_on` predicate is typically answered from stored facts, though rules may still participate."
      }
    }
Assistant
## Query Results: `depends_on(Task, Prereq)`

**Count:** 1 row(s) found

### Rows:
| Task | Prereq |
|------|--------|
| beta | alpha  |

The knowledge base contains one dependency relationship: **beta depends on alpha**.

classify

User
Use ONLY classify_statement on:
Maybe my mother was Ann.
Return classification fields only.
tool calls (1)
  • classify_statement { "text": "Maybe my mother was Ann." }
    output
    {
      "kind": "tentative_fact",
      "confidence": 0.76,
      "needs_speaker_resolution": true,
      "can_query_now": false,
      "can_persist_now": false,
      "suggested_operation": "store_tentative_fact",
      "reasons": [
        "tentative language detected"
      ],
      "status": "success",
      "text": "Maybe my mother was Ann.",
      "message": "Return classification only. Do not act on the statement. This is a routing hint, not a write. Runtime write tools exist (assert/retract/reset), but classify_statement itself performs no mutation, and durable journaled persistence is not implemented yet.",
      "proposal_check": {
        "status": "needs_clarification",
        "issues": [
          "speaker identity is unresolved"
        ],
        "reasoning": [
          "predicate and arity validated against deterministic registry",
          "candidate contains <speaker> and requires explicit grounding before persistence"
        ],
        "candidate": {
          "canonical_predicate": "parent",
          "arguments": [
            "ann",
            "<speaker>"
          ],
          "normalized_fact": "parent(ann, <speaker>).",
          "source": "text:family",
          "raw_predicate": "parent"
        }
      }
    }
Assistant
```json
{
  "kind": "tentative_fact",
  "confidence": 0.76,
  "needs_speaker_resolution": true,
  "can_query_now": false,
  "can_persist_now": false,
  "suggested_operation": "store_tentative_fact",
  "reasons": [
    "tentative language detected"
  ],
  "status": "success",
  "text": "Maybe my mother was Ann.",
  "proposal_check": {
    "status": "needs_clarification",
    "issues": [
      "speaker identity is unresolved"
    ],
    "reasoning": [
      "predicate and arity validated against deterministic registry",
      "candidate contains <speaker> and requires explicit grounding before persistence"
    ],
    "candidate": {
      "canonical_predicate": "parent",
      "arguments": [
        "ann",
        "<speaker>"
      ],
      "normalized_fact": "parent(ann, <speaker>).",
      "source": "text:family",
      "raw_predicate": "parent"
    }
  }
}
```
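
The "tentative language detected" reason suggests a keyword-style heuristic. Below is a hedged sketch of such a classifier; the word lists, regexes, and returned fields are assumptions modeled on the output above, not the server's actual rules.

```python
import re

# Assumed heuristic: hedging words mark a statement as tentative, and
# first-person possessives imply an unresolved <speaker> placeholder.
TENTATIVE = re.compile(r"\b(maybe|perhaps|possibly|i think)\b", re.I)
FIRST_PERSON = re.compile(r"\bmy\b", re.I)

def classify(text):
    tentative = bool(TENTATIVE.search(text))
    needs_speaker = bool(FIRST_PERSON.search(text))
    return {
        "kind": "tentative_fact" if tentative else "fact",
        "needs_speaker_resolution": needs_speaker,
        # persist only when nothing blocks a confident write
        "can_persist_now": not (tentative or needs_speaker),
    }

print(classify("Maybe my mother was Ann."))
# {'kind': 'tentative_fact', 'needs_speaker_resolution': True, 'can_persist_now': False}
```

Either blocker alone is enough to withhold persistence, which matches the tool's behavior: classification is a routing hint, and nothing is written until the speaker is grounded and the hedge is resolved.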

explain

User
Use ONLY explain_error with this error message:
Entity 'charlie' not in KB
Return the explanation and suggestions.
tool calls (1)
  • explain_error { "error_message": "Entity 'charlie' not in KB" }
    output
    {
      "status": "success",
      "error_input": "Entity 'charlie' not in KB",
      "error_type": "undefined_entity",
      "explanation": "The query refers to an entity that is not currently known to the knowledge base: `charlie`.",
      "suggestions": [
        "Check the spelling of the entity name",
        "Ask about a known entity such as john, alice, bob, admin, read, or write",
        "Use 'list_known_facts' to inspect which entities are currently known",
        "If the entity should exist, add facts for it before querying"
      ],
      "learn_more": "See the training materials at training/03-learning-from-failures.md"
    }
Assistant
## Error Explanation

**Error Type:** `undefined_entity`

**Explanation:** The query refers to an entity that is not currently known to the knowledge base: `charlie`.

## Suggestions

1. **Check the spelling of the entity name** - Ensure you're using the correct capitalization and spelling
2. **Ask about a known entity** such as john, alice, bob, admin, read, or write
3. **Use 'list_known_facts'** to inspect which entities are currently known in the knowledge base
4. **If the entity should exist**, add facts for it before querying

## Learn More

See the training materials at `training/03-learning-from-failures.md`
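
Suggestion lists like this are often built by fuzzy-matching the unknown name against the known-entity list. A sketch using Python's standard `difflib` follows; the helper name and the 0.6 cutoff are assumptions, and the entity list is the one `list_known_facts` returned earlier.

```python
import difflib

# Known entities as reported by list_known_facts above.
KNOWN = ["admin", "alice", "bob", "john", "read", "write"]

def explain_unknown_entity(name):
    # Near-miss spellings (e.g. "jonh" -> "john") become suggestions.
    close = difflib.get_close_matches(name, KNOWN, n=3, cutoff=0.6)
    suggestions = ["Check the spelling of the entity name"]
    if close:
        suggestions.append("Did you mean: " + ", ".join(close) + "?")
    suggestions.append("Use 'list_known_facts' to inspect known entities")
    return {"error_type": "undefined_entity", "suggestions": suggestions}

print(explain_unknown_entity("charlie"))
```

For a name like `charlie`, with no close match in the list, the sketch falls back to the generic spelling and `list_known_facts` hints, much like the real tool's output.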

cleanup

User
Use ONLY these tools in order:
1) retract_fact completed(alpha).
2) reset_kb
Confirm cleanup complete.
tool calls (2)
  • retract_fact { "fact": "completed(alpha)." }
    output
    {
      "status": "success",
      "result_type": "fact_retracted",
      "fact": "completed(alpha).",
      "message": "Fact retracted from runtime KB."
    }
  • reset_kb {}
    output
    {
      "status": "success",
      "result_type": "runtime_reset",
      "message": "Runtime KB reset to baseline seed state.",
      "knowledge_base_path": "D:\\_PROJECTS\\prolog-reasoning\\prolog\\core.pl"
    }
Assistant
Cleanup complete. Both operations executed successfully:

1. ✅ Fact `completed(alpha).` retracted from runtime KB
2. ✅ Runtime KB reset to baseline seed state

The knowledge base has been cleaned and restored to its original baseline configuration.