Capture run provenance and prompt/version settings for reproducibility.
{
"run_id": "run-20260412T113754Z-rung_420_progress_focus_-qwen35_semparse_9b-29488",
"run_started_utc": "2026-04-12T11:37:54+00:00",
"run_finished_utc": "2026-04-12T11:39:01+00:00",
"scenario": "rung_420_progress_focus_shift_transition",
"ontology_kb_name": "rung_420_no_progress",
"backend": "ollama",
"model": "qwen35-semparse:9b",
"model_settings": {
"temperature": 0,
"context_length": 8192,
"classifier_context_length": 2048,
"timeout_seconds": 120,
"runtime": "core",
"two_pass": true,
"split_extraction": true,
"strict_registry": false,
"strict_types": false,
"clarification_eagerness": 0.35,
"max_clarification_rounds": 2,
"require_final_confirmation": false,
"progress_memory_enabled": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"clarification_auto_answer_enabled": false,
"clarification_answer_backend": "",
"clarification_answer_base_url": "",
"clarification_answer_model": "",
"clarification_answer_context_length": 0,
"clarification_answer_history_turns": 0,
"clarification_answer_kb_clause_limit": 0,
"clarification_answer_kb_char_budget": 0,
"clarification_answer_min_confidence": 0.0,
"clarification_answer_source_prefix": "",
"clarification_answer_role": "",
"served_llm_model": "",
"served_llm_backend": "",
"served_llm_base_url": "",
"served_llm_context_length": 0,
"backend_options": {
"num_ctx": 8192
}
},
"prompt_provenance": {
"status": "ok",
"prompt_id": "sp-1e43c641b01b",
"prompt_sha256": "1e43c641b01b7c845b82331b521d58c1993e8010bd9283f5085dac687520159e",
"source_path": "D:\\_PROJECTS\\prethinker\\modelfiles\\semantic_parser_system_prompt.md",
"snapshot_path": "D:\\_PROJECTS\\prethinker\\modelfiles\\history\\prompts\\sp-1e43c641b01b.md",
"snapshot_created": false,
"char_count": 9603,
"line_count": 221,
"preview": "# Semantic Parser Prompt Pack (Qwen 3.5 9B)\nUse this as maintainable guidance for semantic parsing into Prolog structures.\nKeep behavior language-agnostic, deterministic, and schema-strict.\n## Core Priorities\n1. Output exactly one JSON object that matches the required schema.\n2. Preserve semantic meaning; do not hallucinate entities, facts, or arguments.\n3. Prefer canonical, stable predicate names across turns when semantics match.\n4. Use variables instead of assumptions when referents are unresolved."
}
}
prompt_id=sp-1e43c641b01b prompt_sha256=1e43c641b01b7c845b82331b521d58c1993e8010bd9283f5085dac687520159e snapshot_path=D:\_PROJECTS\prethinker\modelfiles\history\prompts\sp-1e43c641b01b.md preview: # Semantic Parser Prompt Pack (Qwen 3.5 9B) Use this as maintainable guidance for semantic parsing into Prolog structures. Keep behavior language-agnostic, deterministic, and schema-strict. ## Core Priorities 1. Output exactly one JSON object that matches the required schema. 2. Preserve semantic meaning; do not hallucinate entities, facts, or arguments. 3. Prefer canonical, stable predicate names across turns when semantics match. 4. Use variables instead of assumptions when referents are unresolved.
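The provenance block above suggests a convention (not confirmed by the log itself) in which `prompt_id` is `sp-` plus the first 12 hex characters of the prompt file's SHA-256 digest, alongside the recorded `char_count` and `line_count`. A minimal sketch under that assumption; the function name is illustrative:

```python
import hashlib

def prompt_provenance(prompt_text: str) -> dict:
    """Derive prompt provenance fields from the prompt body.

    Assumed convention (matches the logged values, where prompt_id
    'sp-1e43c641b01b' equals the first 12 hex chars of prompt_sha256):
    prompt_id = 'sp-' + sha256(prompt)[:12].
    """
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return {
        "prompt_id": f"sp-{digest[:12]}",
        "prompt_sha256": digest,
        "char_count": len(prompt_text),
        "line_count": len(prompt_text.splitlines()),
    }

info = prompt_provenance("# Semantic Parser Prompt Pack\nexample body\n")
print(info["prompt_id"])
```

Recording the full digest next to the short id lets later runs detect prompt drift even if two prompts collide on the 12-character prefix.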
We are tracking maritime handoff custody.
{
"expected_utterance": "We are tracking maritime handoff custody.",
"observed_utterance": "We are tracking maritime handoff custody.",
"route": "other",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "other",
"logic_string": "",
"components": {
"atoms": [],
"variables": [],
"predicates": []
},
"facts": [],
"rules": [],
"queries": [],
"confidence": {
"overall": 1.0,
"intent": 1.0,
"logic": 1.0
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.0,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Statement describes a tracking task without asserting facts, rules, or queries?",
"clarification_reason": "Statement describes a tracking task without asserting facts, rules, or queries.",
"rationale": "Statement describes a tracking task without asserting facts, rules, or queries."
},
"validation_errors": [],
"apply_status": "skipped",
"utterance_ok": 1.0,
"turn_score": 0.75,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Statement describes a tracking task without asserting facts, rules, or queries?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.0,
"effective_uncertainty": 0.0,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: We are tracking maritime handoff custody. / We are tracking maritime handoff custody. Route expected/observed: assert_fact / other Parser path: source=model repaired=False fallback=False
intent=other apply_tool=none apply_status=skipped effect=none message=Intent=other; no KB mutation/query applied.
intent=other logic= facts=[] rules=[] queries=[] uncertainty_score=0.0 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Statement describes a tracking task without asserting facts, rules, or queries? clarification_reason=Statement describes a tracking task without asserting facts, rules, or queries. predicates=[] atoms=[] variables=[]
pending=False
question=Can you clarify this point before I apply it: Statement describes a tracking task without asserting facts, rules, or queries?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.0, 'effective_uncertainty': 0.0, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=0.75 (parse_ok=1.0, route_ok=0.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=none apply_status=skipped
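In every logged policy dict, `clarification_eagerness` is 0.35 and `threshold` is 0.65, which is consistent with `threshold = 1 - eagerness`, with clarification requested when effective uncertainty reaches the threshold or the parser itself sets the flag. A sketch under that assumed relationship (the function name is illustrative, not from the codebase):

```python
def clarification_decision(policy: dict) -> bool:
    """Decide whether to request a clarification round.

    Assumed relationship, consistent with the logged dicts
    (eagerness=0.35 -> threshold=0.65, uncertainty 0.0 -> no request):
    threshold = 1 - clarification_eagerness.
    """
    threshold = 1.0 - policy["clarification_eagerness"]
    if policy.get("needs_clarification_flag"):
        # Parser explicitly asked for clarification; honor it regardless.
        return True
    return policy["effective_uncertainty"] >= threshold

example = {"clarification_eagerness": 0.35,
           "effective_uncertainty": 0.0,
           "needs_clarification_flag": False}
print(clarification_decision(example))  # prints False
```

This explains why the turns above carry a prepared `clarification_question` yet never ask it: with uncertainty at 0.0 or 0.05 the threshold of 0.65 is never reached.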
carries(riven, brass_seal).
{
"expected_utterance": "carries(riven, brass_seal).",
"observed_utterance": "carries(riven, brass_seal).",
"route": "assert_fact",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "assert_fact",
"logic_string": "carries(riven, brass_seal).",
"components": {
"atoms": [
"brass_seal",
"riven"
],
"variables": [],
"predicates": [
"carries"
]
},
"facts": [
"carries(riven, brass_seal)."
],
"rules": [],
"queries": [],
"confidence": {
"overall": 1.0,
"intent": 1.0,
"logic": 1.0
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.0,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of 'carries(riven, brass_seal)' to fact?",
"clarification_reason": "Direct mapping of 'carries(riven, brass_seal)' to fact.",
"rationale": "Direct mapping of 'carries(riven, brass_seal)' to fact. Directional fact guard corrected inverted subject/object order."
},
"validation_errors": [],
"apply_status": "success",
"utterance_ok": 1.0,
"turn_score": 1.0,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of 'carries(riven, brass_seal)' to fact?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.0,
"effective_uncertainty": 0.0,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: carries(riven, brass_seal). / carries(riven, brass_seal). Route expected/observed: assert_fact / assert_fact Parser path: source=model repaired=False fallback=False
intent=assert_fact apply_tool=assert_fact apply_status=success effect=mutation(write) submitted=carries(riven, brass_seal). result_type=fact_asserted fact=carries(riven, brass_seal).
intent=assert_fact logic=carries(riven, brass_seal). facts=['carries(riven, brass_seal).'] rules=[] queries=[] uncertainty_score=0.0 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Direct mapping of 'carries(riven, brass_seal)' to fact? clarification_reason=Direct mapping of 'carries(riven, brass_seal)' to fact. predicates=['carries'] atoms=['brass_seal', 'riven'] variables=[]
pending=False
question=Can you clarify this point before I apply it: Direct mapping of 'carries(riven, brass_seal)' to fact?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.0, 'effective_uncertainty': 0.0, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=1.0 (parse_ok=1.0, route_ok=1.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=assert_fact apply_status=success
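The per-turn scores in this log are consistent with an equal-weight mean of the four logged checks: every fully correct turn scores 1.0, and every turn with exactly one failed check (here always `route_ok=0.0`) scores 0.75. A sketch under that assumption:

```python
def turn_score(parse_ok: float, route_ok: float,
               apply_ok: float, utterance_ok: float) -> float:
    """Equal-weight mean of the four per-turn checks.

    Assumption consistent with every score line in this log:
    one failed check out of four yields 0.75.
    """
    return (parse_ok + route_ok + apply_ok + utterance_ok) / 4.0

# Reproduces the two score patterns seen above.
print(turn_score(1.0, 1.0, 1.0, 1.0))  # prints 1.0
print(turn_score(1.0, 0.0, 1.0, 1.0))  # prints 0.75
```

Under this weighting a misrouted turn still earns 0.75 as long as parsing, application, and utterance echo all succeed, which is exactly the pattern the `other` and `query` misroutes show.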
Shift focus to audience preference logging for this phase.
{
"expected_utterance": "Shift focus to audience preference logging for this phase.",
"observed_utterance": "Shift focus to audience preference logging for this phase.",
"route": "other",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "other",
"logic_string": "",
"components": {
"atoms": [],
"variables": [],
"predicates": []
},
"facts": [],
"rules": [],
"queries": [],
"confidence": {
"overall": 1.0,
"intent": 1.0,
"logic": 1.0
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.0,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Instruction to shift focus to audience preference logging is an out-of-scope task?",
"clarification_reason": "Instruction to shift focus to audience preference logging is an out-of-scope task",
"rationale": "Instruction to shift focus to audience preference logging is an out-of-scope task (meta-instruction), not a knowledge base update."
},
"validation_errors": [],
"apply_status": "skipped",
"utterance_ok": 1.0,
"turn_score": 0.75,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Instruction to shift focus to audience preference logging is an out-of-scope task?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.0,
"effective_uncertainty": 0.0,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: Shift focus to audience preference logging for this phase. / Shift focus to audience preference logging for this phase. Route expected/observed: assert_fact / other Parser path: source=model repaired=False fallback=False
intent=other apply_tool=none apply_status=skipped effect=none message=Intent=other; no KB mutation/query applied.
intent=other logic= facts=[] rules=[] queries=[] uncertainty_score=0.0 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Instruction to shift focus to audience preference logging is an out-of-scope task? clarification_reason=Instruction to shift focus to audience preference logging is an out-of-scope task predicates=[] atoms=[] variables=[]
pending=False
question=Can you clarify this point before I apply it: Instruction to shift focus to audience preference logging is an out-of-scope task?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.0, 'effective_uncertainty': 0.0, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=0.75 (parse_ok=1.0, route_ok=0.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=none apply_status=skipped
likes(riven, jazz).
{
"expected_utterance": "likes(riven, jazz).",
"observed_utterance": "likes(riven, jazz).",
"route": "assert_fact",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "assert_fact",
"logic_string": "likes(riven, jazz).",
"components": {
"atoms": [
"jazz",
"riven"
],
"variables": [],
"predicates": [
"likes"
]
},
"facts": [
"likes(riven, jazz)."
],
"rules": [],
"queries": [],
"confidence": {
"overall": 1.0,
"intent": 1.0,
"logic": 1.0
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.0,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of 'likes(riven, jazz)' to fact?",
"clarification_reason": "Direct mapping of 'likes(riven, jazz)' to fact.",
"rationale": "Direct mapping of 'likes(riven, jazz)' to fact. Directional fact guard corrected inverted subject/object order."
},
"validation_errors": [],
"apply_status": "success",
"utterance_ok": 1.0,
"turn_score": 1.0,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of 'likes(riven, jazz)' to fact?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.0,
"effective_uncertainty": 0.0,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: likes(riven, jazz). / likes(riven, jazz). Route expected/observed: assert_fact / assert_fact Parser path: source=model repaired=False fallback=False
intent=assert_fact apply_tool=assert_fact apply_status=success effect=mutation(write) submitted=likes(riven, jazz). result_type=fact_asserted fact=likes(riven, jazz).
intent=assert_fact logic=likes(riven, jazz). facts=['likes(riven, jazz).'] rules=[] queries=[] uncertainty_score=0.0 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Direct mapping of 'likes(riven, jazz)' to fact? clarification_reason=Direct mapping of 'likes(riven, jazz)' to fact. predicates=['likes'] atoms=['jazz', 'riven'] variables=[]
pending=False
question=Can you clarify this point before I apply it: Direct mapping of 'likes(riven, jazz)' to fact?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.0, 'effective_uncertainty': 0.0, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=1.0 (parse_ok=1.0, route_ok=1.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=assert_fact apply_status=success
query carries(riven, brass_seal).
{
"expected_utterance": "query carries(riven, brass_seal).",
"observed_utterance": "query carries(riven, brass_seal).",
"route": "query",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "query",
"logic_string": "carries(riven, brass_seal).",
"components": {
"atoms": [
"brass_seal",
"riven"
],
"variables": [],
"predicates": [
"carries"
]
},
"facts": [],
"rules": [],
"queries": [
"carries(riven, brass_seal)."
],
"confidence": {
"overall": 0.95,
"intent": 1.0,
"logic": 0.95
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.05,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of query to Prolog goal with explicit arguments?",
"clarification_reason": "Direct mapping of query to Prolog goal with explicit arguments.",
"rationale": "Direct mapping of query to Prolog goal with explicit arguments."
},
"validation_errors": [],
"apply_status": "success",
"utterance_ok": 1.0,
"turn_score": 0.75,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Direct mapping of query to Prolog goal with explicit arguments?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.05,
"effective_uncertainty": 0.05,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: query carries(riven, brass_seal). / query carries(riven, brass_seal). Route expected/observed: assert_fact / query Parser path: source=model repaired=False fallback=False
intent=query apply_tool=query_rows apply_status=success effect=none submitted=carries(riven, brass_seal). result_type=table
intent=query logic=carries(riven, brass_seal). facts=[] rules=[] queries=['carries(riven, brass_seal).'] uncertainty_score=0.05 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Direct mapping of query to Prolog goal with explicit arguments? clarification_reason=Direct mapping of query to Prolog goal with explicit arguments. predicates=['carries'] atoms=['brass_seal', 'riven'] variables=[]
pending=False
question=Can you clarify this point before I apply it: Direct mapping of query to Prolog goal with explicit arguments?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.05, 'effective_uncertainty': 0.05, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=0.75 (parse_ok=1.0, route_ok=0.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=query_rows apply_status=success
query likes(riven, jazz).
{
"expected_utterance": "query likes(riven, jazz).",
"observed_utterance": "query likes(riven, jazz).",
"route": "query",
"expected_route": "assert_fact",
"route_source": "model",
"repaired": false,
"fallback_used": false,
"parsed": {
"intent": "query",
"logic_string": "likes(riven, jazz).",
"components": {
"atoms": [
"jazz",
"riven"
],
"variables": [],
"predicates": [
"likes"
]
},
"facts": [],
"rules": [],
"queries": [
"likes(riven, jazz)."
],
"confidence": {
"overall": 0.95,
"intent": 1.0,
"logic": 1.0
},
"ambiguities": [],
"needs_clarification": false,
"uncertainty_score": 0.05,
"uncertainty_label": "low",
"clarification_question": "Can you clarify this point before I apply it: Utterance is a direct yes/no question about a specific fact. Mapped to?",
"clarification_reason": "Utterance is a direct yes/no question about a specific fact. Mapped to",
"rationale": "Utterance is a direct yes/no question about a specific fact. Mapped to query with explicit arguments."
},
"validation_errors": [],
"apply_status": "success",
"utterance_ok": 1.0,
"turn_score": 0.75,
"clarification_rounds": [],
"clarification_pending": false,
"clarification_question": "Can you clarify this point before I apply it: Utterance is a direct yes/no question about a specific fact. Mapped to?",
"clarification_policy": {
"clarification_eagerness": 0.35,
"uncertainty_score": 0.05,
"effective_uncertainty": 0.05,
"threshold": 0.65,
"request_clarification": false,
"needs_clarification_flag": false,
"progress_low_relevance": false,
"progress_high_risk": false,
"progress_low_relevance_threshold": 0.34,
"progress_high_risk_threshold": 0.18,
"progress_memory_available": false,
"progress_focus_present": false,
"progress_signal_term_count": 0,
"parsed_signal_term_count": 0,
"overlap_term_count": 0,
"progress_best_focus_overlap": 0.0,
"progress_relevance_score": 1.0
}
}
Why asked: Seed grounded terms/constants as facts for later inference. Utterance expected/observed: query likes(riven, jazz). / query likes(riven, jazz). Route expected/observed: assert_fact / query Parser path: source=model repaired=False fallback=False
intent=query apply_tool=query_rows apply_status=success effect=none submitted=likes(riven, jazz). result_type=table
intent=query logic=likes(riven, jazz). facts=[] rules=[] queries=['likes(riven, jazz).'] uncertainty_score=0.05 uncertainty_label=low needs_clarification=False clarification_question=Can you clarify this point before I apply it: Utterance is a direct yes/no question about a specific fact. Mapped to? clarification_reason=Utterance is a direct yes/no question about a specific fact. Mapped to predicates=['likes'] atoms=['jazz', 'riven'] variables=[]
pending=False
question=Can you clarify this point before I apply it: Utterance is a direct yes/no question about a specific fact. Mapped to?
rounds_used=0 max_rounds=2
policy={'clarification_eagerness': 0.35, 'uncertainty_score': 0.05, 'effective_uncertainty': 0.05, 'threshold': 0.65, 'request_clarification': False, 'needs_clarification_flag': False, 'progress_low_relevance': False, 'progress_high_risk': False, 'progress_low_relevance_threshold': 0.34, 'progress_high_risk_threshold': 0.18, 'progress_memory_available': False, 'progress_focus_present': False, 'progress_signal_term_count': 0, 'parsed_signal_term_count': 0, 'overlap_term_count': 0, 'progress_best_focus_overlap': 0.0, 'progress_relevance_score': 1.0}
score=0.75 (parse_ok=1.0, route_ok=0.0, apply_ok=1.0, utterance_ok=1.0) apply_tool=query_rows apply_status=success
Run deterministic KB validations and compare against expectations.
{
"validation_total": 2,
"validation_passed": 2,
"overall_status": "passed",
"turn_parse_failures": 0,
"turn_apply_failures": 0
}
score=1.0 (2/2 passed)
carrier_fact_retained_after_focus_shift: PASS (query=carries(riven, brass_seal)., expected=success, observed=success)
new_focus_fact_recorded: PASS (query=likes(riven, jazz)., expected=success, observed=success)
query=carries(riven, brass_seal). expected=success observed=success reasons=none
query=likes(riven, jazz). expected=success observed=success reasons=none
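The validation stage above runs each named query against the KB and compares the observed outcome to the expectation. A minimal sketch of that comparison, with a plain set standing in for the Prolog backend (the helper names and the set-based KB are illustrative assumptions, not the actual harness):

```python
def run_validations(checks, kb_query):
    """Evaluate (name, query, expected) checks against a KB callback and
    summarize, mirroring the PASS lines and the validation summary block."""
    results, passed = [], 0
    for name, query, expected in checks:
        observed = "success" if kb_query(query) else "failure"
        ok = observed == expected
        passed += ok
        results.append((name, "PASS" if ok else "FAIL", query, expected, observed))
    summary = {
        "validation_total": len(checks),
        "validation_passed": passed,
        "overall_status": "passed" if passed == len(checks) else "failed",
    }
    return results, summary

# Toy KB holding the two facts asserted earlier in the run.
kb = {"carries(riven, brass_seal).", "likes(riven, jazz)."}
results, summary = run_validations(
    [("carrier_fact_retained_after_focus_shift", "carries(riven, brass_seal).", "success"),
     ("new_focus_fact_recorded", "likes(riven, jazz).", "success")],
    kb.__contains__,
)
print(summary["overall_status"])  # prints passed
```

The key property this checks is the scenario's point: the `carries` fact asserted before the focus-shift turn must still resolve after the new `likes` fact is recorded.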