STRATON-LLM
Architecture
STRATON-LLM is organized as a layered negotiation system so each stage of semantic conflict handling is explicit. Instead of hiding mismatch resolution inside one opaque model call, the architecture exposes where triggering, proposal planning, protocol control, trust evaluation, and learning each take place.
Full Pipeline Data Flow
Layered negotiation pipeline
Negotiation Trigger & Context Initialization
Interprets incoming messages, checks the mapping store, invokes ontology matching when needed, and decides whether negotiation should start.
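The trigger decision can be sketched as a simple lookup-then-threshold check. This is an illustrative sketch only: the names `MappingStore`, `should_negotiate`, and the 0.8 confidence threshold are assumptions, not the project's actual API.

```python
from dataclasses import dataclass, field

# Assumed threshold: below this, a cached mapping is not trusted for local resolution.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class MappingStore:
    # term -> (mapped term, confidence score); hypothetical shape for illustration
    mappings: dict = field(default_factory=dict)

    def lookup(self, term):
        return self.mappings.get(term)

def should_negotiate(message_terms, store):
    """Return the terms that need negotiation: unknown or low-confidence."""
    unresolved = []
    for term in message_terms:
        hit = store.lookup(term)
        if hit is None or hit[1] < CONFIDENCE_THRESHOLD:
            unresolved.append(term)
    return unresolved
```

Only the terms returned by `should_negotiate` would activate the downstream layers; everything else resolves locally from the cache.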
Strategy Selection
Chooses the negotiation mode using confidence, context, urgency, and goal alignment. In the final design, selection converges on structured argumentation modes rather than dispatching to a grab-bag of unrelated strategies.
Proposal Generation
Transforms the selected mode into concrete dialogue proposals such as claims, supports, attacks, defenses, and evidence requests.
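The five dialogue acts named above lend themselves to a small typed vocabulary. A minimal sketch, assuming hypothetical `DialogueAct` and `Proposal` types rather than the system's real message schema:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DialogueAct(Enum):
    CLAIM = "claim"
    SUPPORT = "support"
    ATTACK = "attack"
    DEFENSE = "defense"
    EVIDENCE_REQUEST = "evidence_request"

@dataclass(frozen=True)
class Proposal:
    act: DialogueAct
    content: str
    target: Optional[str] = None  # id of the utterance being supported/attacked, if any

def open_claim(source_term, candidate):
    """Turn a selected mapping candidate into an opening CLAIM proposal."""
    return Proposal(DialogueAct.CLAIM, f"'{source_term}' maps to '{candidate}'")
```

Typing the acts up front means the protocol engine can validate every proposal against the current state instead of parsing free-form text.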
Protocol Engine
Runs the finite-state dialogue, validates acts, manages turns, enforces legal transitions, and terminates sessions cleanly.
Agreement Evaluation & Outcome Handling
Assesses whether the produced agreement is trustworthy using dialogue-aware heuristics instead of accepting the latest answer at face value.
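A dialogue-aware heuristic of this kind can be sketched as a weighted checklist over the negotiation trace. The feature names, weights, and acceptance threshold below are assumptions for illustration, not the system's actual scoring rules:

```python
def trust_score(trace):
    """Score an agreement from dialogue-level features of its trace (assumed weights)."""
    score = 0.0
    # Agreements that survived at least one attack/defense round are stronger.
    if trace.get("attacks_survived", 0) > 0:
        score += 0.4
    # Evidence actually supplied (not merely requested) raises confidence.
    if trace.get("evidence_supplied", False):
        score += 0.3
    # Instant capitulation with no argumentation is suspicious, not reassuring.
    if trace.get("turns", 0) >= 4:
        score += 0.3
    return round(score, 2)

def accept(trace, threshold=0.6):
    """Accept the agreement only if the dialogue itself earns enough trust."""
    return trust_score(trace) >= threshold
```

The point of the sketch is the inversion: acceptance depends on how the agreement was reached, not on who spoke last.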
Persistence, Logging & Learning
Stores mappings, preserves negotiation traces, updates confidence, and feeds learning signals back into future negotiations.
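One concrete learning signal is nudging a stored mapping's confidence after each reuse. A minimal sketch, assuming a simple exponential update rule that is not claimed to be the system's actual learning method:

```python
def reinforce(mappings, term, success, lr=0.2):
    """Nudge a stored mapping's confidence toward 1.0 on success, 0.0 on failure."""
    target_term, conf = mappings[term]
    target = 1.0 if success else 0.0
    # Exponential moving update: repeated successes asymptotically approach 1.0.
    mappings[term] = (target_term, conf + lr * (target - conf))
```

Under this rule, mappings that keep working drift toward full confidence and stop triggering negotiation, while mappings that fail decay back below the trigger threshold.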
Courtroom FSM State Machine
The finite-state machine controls all valid dialogue transitions. Each state is explicit, transitions are validated, and the session terminates cleanly at VERDICT — preventing unbounded chat loops.
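The FSM's loop-prevention guarantee can be made concrete with a transition table whose terminal state has no outgoing edges, plus a hard turn budget. The state names other than VERDICT and the budget of 12 turns are illustrative assumptions:

```python
# Legal transitions; VERDICT is terminal: no edges out, so no unbounded loops.
LEGAL = {
    "OPENING": {"CLAIM"},
    "CLAIM":   {"SUPPORT", "ATTACK", "VERDICT"},
    "SUPPORT": {"ATTACK", "VERDICT"},
    "ATTACK":  {"DEFENSE", "VERDICT"},
    "DEFENSE": {"ATTACK", "VERDICT"},
    "VERDICT": set(),
}

class ProtocolError(Exception):
    pass

class CourtroomFSM:
    def __init__(self, max_turns=12):
        self.state = "OPENING"
        self.turns = 0
        self.max_turns = max_turns

    def step(self, next_state):
        """Advance one turn, rejecting any move the protocol does not allow."""
        if next_state not in LEGAL[self.state]:
            raise ProtocolError(f"illegal transition {self.state} -> {next_state}")
        self.turns += 1
        # Hard turn budget forces termination even without a voluntary VERDICT.
        self.state = "VERDICT" if self.turns >= self.max_turns else next_state
        return self.state
```

Because the table is data rather than prompt text, every session is replayable: feeding the same sequence of acts through `step` reproduces the same path or fails at the same illegal move.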
Module Dependency Map
How the layers connect
1. Receive request: Agent A sends a message that uses terms from its own ontology.
2. Check semantic compatibility: Agent B attempts local resolution through ontology lookup and cached mappings.
3. Trigger negotiation when needed: low-confidence or unknown terms activate the negotiation pipeline.
4. Run bounded dialogue: the protocol engine controls a multi-turn argumentation session.
5. Evaluate trust: the agreement is checked against dialogue-level trust heuristics.
6. Persist and reuse: accepted mappings and traces become reusable system knowledge.
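The six steps above can be tied together in a few lines of glue code. Every name here (`run_dialogue`, `trusted`, `persist`) is a hypothetical stand-in for the corresponding layer, passed in as a callable to keep the sketch self-contained:

```python
def handle_message(message_terms, store, run_dialogue, trusted, persist):
    """End-to-end flow: resolve locally, negotiate the rest, keep what's trusted."""
    # Steps 1-2: terms with a high-confidence cached mapping resolve locally.
    unresolved = [t for t in message_terms if store.get(t, (None, 0.0))[1] < 0.8]
    for term in unresolved:
        # Steps 3-4: trigger negotiation and run the bounded dialogue.
        agreement, trace = run_dialogue(term)
        # Step 5: only dialogue-level trust admits the agreement.
        if trusted(trace):
            # Step 6: persist the mapping and its trace for future reuse.
            persist(term, agreement, trace)
            store[term] = (agreement, 0.8)
    return {t: store[t][0] for t in message_terms if t in store}
```

The control flow lives entirely in this function and the layers it calls; no step is hidden inside a model prompt, which is the property the walkthrough is meant to demonstrate.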
Major architectural boundaries
Bounded LLM role
LLMs assist matching, proposal generation, and argument reasoning. They do not own protocol transitions, persistence, or trust acceptance.
Separate learning boundary
Accepted mappings, traces, and ontology updates are handled in dedicated persistence and learning modules, keeping the negotiation loop auditable.
Explicit protocol ownership
The protocol engine owns turn-taking, message legality, termination, and replayability. This prevents the dialogue from devolving into an unbounded chat loop.
Controlled ontology evolution
Negotiation outcomes are stored as mappings first. Evolution is deferred to a controlled and versioned process.
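Deferring evolution can be sketched as an append-only, versioned mapping store: negotiation never mutates the ontology, it only appends a new mapping version for a later controlled merge. The class and method names are illustrative assumptions:

```python
class VersionedMappings:
    """Append-only mapping history; the ontology itself is never edited in place."""

    def __init__(self):
        self._history = {}  # term -> list of (version, mapping, confidence)

    def record(self, term, mapping, confidence):
        """Append a new version for a term and return its version number."""
        versions = self._history.setdefault(term, [])
        versions.append((len(versions) + 1, mapping, confidence))
        return len(versions)

    def current(self, term):
        """Latest accepted mapping, or None if the term was never negotiated."""
        versions = self._history.get(term)
        return versions[-1] if versions else None
```

Because older versions are retained, a later evolution pass can audit how a mapping changed over time before promoting it into the ontology proper.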
Why this architecture works
- Each layer has a clear responsibility and can be tested in isolation.
- Failure points become visible because control logic is not hidden inside one prompt.
- The same negotiation core can be reused inside a broader agent framework.
- Persistence turns one successful negotiation into future system knowledge.
Capability signal
This page demonstrates architectural thinking: boundaries, ownership, control flow, stateful learning, and a concrete method for making agent negotiation inspectable.