STRATON-LLM

Architecture

STRATON-LLM is organized as a layered negotiation system so each stage of semantic conflict handling is explicit. Instead of hiding mismatch resolution inside one opaque model call, the architecture exposes where triggering, proposal planning, protocol control, trust evaluation, and learning each take place.

Full Pipeline Data Flow

[Pipeline data flow] Incoming message (Agent A → B) → Trigger classifier (semantic/context) → Strategy selector (mode choice) → Proposal generator (LLM-assisted) → Protocol engine (FSM turns) → Evaluation layer (trust score) → Persistence layer (store + learn); learned mappings feed back into the trigger layer.

Layered negotiation pipeline

1. Negotiation Trigger & Context Initialization

Interprets incoming messages, checks the mapping store, invokes ontology matching when needed, and decides whether negotiation should start.

Message interceptor · Ontology lookup · Mapping-store lookup · LLM4OM matcher · Trigger classifier
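The trigger decision can be sketched as a small classifier over the mapping store. This is an illustrative sketch, not the project's actual API: the class names, the dict-based store (`term -> (mapped_term, confidence)`), and the threshold value are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class TriggerDecision:
    negotiate: bool
    reason: str

class TriggerClassifier:
    """Decide whether an incoming term needs negotiation, using a
    cached-mapping table modeled here as a plain dict."""
    def __init__(self, mapping_store, confidence_threshold=0.7):
        self.mapping_store = mapping_store            # term -> (mapped_term, confidence)
        self.confidence_threshold = confidence_threshold

    def classify(self, term):
        cached = self.mapping_store.get(term)
        if cached is None:
            # Unknown term: nothing local to resolve with, so negotiate.
            return TriggerDecision(True, "unknown-term")
        _, confidence = cached
        if confidence < self.confidence_threshold:
            # A weak cached mapping is re-negotiated, not silently trusted.
            return TriggerDecision(True, "low-confidence")
        return TriggerDecision(False, "cache-hit")
```

A confident cached mapping short-circuits negotiation entirely, which is what makes the feedback loop from the persistence layer pay off.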
2. Strategy Selection

Chooses the negotiation mode using confidence, context, urgency, and goal alignment. In the final design, selection converges on a family of structured argumentation modes rather than a grab-bag of unrelated strategies.

Context analyzer · Confidence policy · Mode selector · Guardrails
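A minimal policy for this layer might map the context signals to a mode label. The mode names and thresholds below are assumptions for illustration, not the project's actual policy:

```python
def select_mode(confidence, urgency, goal_aligned):
    """Map context signals to an argumentation mode (illustrative policy)."""
    if confidence >= 0.8 and goal_aligned:
        return "direct-proposal"        # near-agreement: skip heavy dialogue
    if urgency >= 0.8:
        return "bounded-argumentation"  # argue, but under a tight turn cap
    return "full-argumentation"         # full claim/attack/defend dialogue
```

Keeping this a pure function of the context signals is what lets guardrails veto or override a mode before any proposal is generated.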
3. Proposal Generation

Transforms the selected mode into concrete dialogue proposals such as claims, supports, attacks, defenses, and evidence requests.

Proposal generator · Argumentation planner · Ontology context manager · Dialogue plan builder
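The dialogue acts named above (claims, supports, attacks, defenses, evidence requests) can be modeled as a small validated data type. The representation is a hypothetical sketch; the act labels follow the text, everything else is assumed:

```python
from dataclasses import dataclass, field

ACT_TYPES = {"claim", "support", "attack", "defend", "request-evidence"}

@dataclass
class Proposal:
    act: str                                      # one of ACT_TYPES
    content: str                                  # e.g. "invoice maps to bill"
    grounds: list = field(default_factory=list)   # supporting evidence items

    def __post_init__(self):
        # Reject malformed acts at construction time, not mid-dialogue.
        if self.act not in ACT_TYPES:
            raise ValueError(f"illegal dialogue act: {self.act!r}")

def plan_opening(term_a, term_b):
    """Build a minimal opening plan: one claim plus one support act."""
    return [
        Proposal("claim", f"{term_a} maps to {term_b}"),
        Proposal("support", "both terms denote the same concept",
                 grounds=["shared-superclass"]),
    ]
```

Typed acts give the protocol engine something concrete to validate, rather than free-form LLM text.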
4. Protocol Engine

Runs the finite-state dialogue, validates acts, manages turns, enforces legal transitions, and terminates sessions cleanly.

Session manager · Turn controller · Message validator · Rule engine · Termination handler
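Bounded turn-taking is the simplest of these responsibilities to sketch. The agent labels and the turn cap below are assumptions; the point is that termination is enforced by the controller, not left to the agents:

```python
class TurnController:
    """Alternate speakers and force clean termination at a turn cap."""
    def __init__(self, agents=("A", "B"), max_turns=10):
        self.agents = agents
        self.max_turns = max_turns
        self.turn = 0

    def next_speaker(self):
        if self.turn >= self.max_turns:
            return None                  # cap reached: session must terminate
        speaker = self.agents[self.turn % len(self.agents)]
        self.turn += 1
        return speaker
```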
5. Agreement Evaluation & Outcome Handling

Assesses whether the produced agreement is trustworthy using dialogue-aware heuristics instead of accepting the latest answer at face value.

Heuristic scorer · Confidence aggregator · Outcome decider · Refinement trigger
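One way to read "dialogue-aware heuristics" is a weighted checklist over the negotiation trace. The heuristic names, weights, and thresholds below are illustrative assumptions, not the project's scoring rules:

```python
HEURISTIC_WEIGHTS = {
    "survived_attacks": 0.4,   # the agreed claim withstood attack acts
    "evidence_given":   0.3,   # support acts carried concrete evidence
    "converged_early":  0.2,   # agreement was not forced by the turn cap
    "consistent_terms": 0.1,   # no contradictory acts appear in the trace
}

def trust_score(agreement):
    """Aggregate dialogue-level heuristics into a score in [0, 1]."""
    return round(sum(w for name, w in HEURISTIC_WEIGHTS.items()
                     if agreement.get(name)), 2)

def decide(agreement, accept_at=0.6):
    """Three-way outcome: accept, trigger refinement, or reject."""
    score = trust_score(agreement)
    if score >= accept_at:
        return "accept"
    return "refine" if score >= 0.3 else "reject"
```

The middle "refine" band is what drives the refinement trigger instead of forcing a binary accept/reject.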
6. Persistence, Logging & Learning

Stores mappings, preserves negotiation traces, updates confidence, and feeds learning signals back into future negotiations.

Mapping store · Trace logger · Learning updater · Metrics collector · Ontology updater
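A toy version of the mapping store shows the learning loop: accepted mappings are stored with a confidence that is nudged upward on successful reuse. Field names and the increment size are assumptions:

```python
class MappingStore:
    """Toy persistence layer for accepted term mappings."""
    def __init__(self):
        self._rows = {}

    def save(self, term, mapped_to, confidence):
        self._rows[term] = {"mapped_to": mapped_to,
                            "confidence": confidence,
                            "uses": 0}

    def reuse(self, term):
        row = self._rows.get(term)
        if row is None:
            return None
        row["uses"] += 1
        # Learning signal: each successful reuse slowly raises confidence,
        # which feeds back into the trigger layer's threshold check.
        row["confidence"] = min(1.0, round(row["confidence"] + 0.05, 2))
        return row["mapped_to"]
```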

Courtroom FSM State Machine

[Courtroom FSM state machine] States: IDLE → OPENING → ARGUMENTATION (propose · support · attack) → CHALLENGE → EVALUATION → VERDICT. Transition acts: start, propose, attack, defend, resolve, assess, decide; a cache hit moves directly to VERDICT.

The finite-state machine controls all valid dialogue transitions. Each state is explicit, transitions are validated, and the session terminates cleanly at VERDICT — preventing unbounded chat loops.
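The FSM can be expressed as an explicit transition table, where every act is either a legal edge or an error. The state and act labels follow the diagram above; the exact edge assignments are a plausible reading of it, not a verified spec:

```python
TRANSITIONS = {
    ("IDLE", "start"):            "OPENING",
    ("IDLE", "cache-hit"):        "VERDICT",        # cached mapping skips the trial
    ("OPENING", "propose"):       "ARGUMENTATION",
    ("ARGUMENTATION", "attack"):  "CHALLENGE",
    ("ARGUMENTATION", "assess"):  "EVALUATION",
    ("CHALLENGE", "defend"):      "ARGUMENTATION",
    ("CHALLENGE", "resolve"):     "EVALUATION",
    ("EVALUATION", "decide"):     "VERDICT",
}

class ProtocolError(Exception):
    pass

def step(state, act):
    """Validate one dialogue act; an illegal transition raises
    instead of letting the dialogue drift."""
    nxt = TRANSITIONS.get((state, act))
    if nxt is None:
        raise ProtocolError(f"illegal act {act!r} in state {state}")
    return nxt
```

Because VERDICT has no outgoing edges, any act after termination raises, which is precisely what rules out unbounded chat loops.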

Module Dependency Map

[Module dependency map]
core/ — models, exceptions, logging
detection/ — TriggerClassifier, StrategySelector
protocol/ — FSM, Session, TurnController, Validator
generation/ — TermIndexer, LLMClient, ProposalGenerator
evaluation/ — HeuristicScorer, OutcomeDecider
persistence/ — MappingStore, TraceLogger, LearningUpdater
pipeline.py — orchestrator

How the layers connect

1. Receive request

Agent A sends a message that uses terms from its own ontology.

2. Check semantic compatibility

Agent B attempts local resolution through ontology lookup and cached mappings.

3. Trigger negotiation when needed

Low-confidence or unknown terms activate the negotiation pipeline.

4. Run bounded dialogue

The protocol engine controls a multi-turn argumentation session.

5. Evaluate trust

The agreement is checked against dialogue-level trust heuristics.

6. Persist and reuse

Accepted mappings and traces become reusable system knowledge.
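The six steps above can be stitched together in one orchestration sketch. The callables `run_dialogue` and `evaluate` are stand-ins for the protocol engine and evaluation layer, and the dict-based store and 0.7 threshold are assumptions:

```python
def negotiate(term, mapping_store, run_dialogue, evaluate):
    """End-to-end sketch of the six pipeline steps."""
    cached = mapping_store.get(term)                 # steps 1-2: local lookup
    if cached and cached["confidence"] >= 0.7:
        return cached["mapped_to"]
    agreement = run_dialogue(term)                   # steps 3-4: bounded dialogue
    if evaluate(agreement):                          # step 5: trust check
        mapping_store[term] = {                      # step 6: persist for reuse
            "mapped_to": agreement["mapped_to"],
            "confidence": agreement["confidence"],
        }
        return agreement["mapped_to"]
    return None                                      # rejected: nothing stored
```

On a second call with the same term, the cached mapping resolves the request without any dialogue at all, which is the payoff of step 6.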

Major architectural boundaries

Bounded LLM role

LLMs assist matching, proposal generation, and argument reasoning. They do not own protocol transitions, persistence, or trust acceptance.

Separate learning boundary

Accepted mappings, traces, and ontology updates are handled in dedicated persistence and learning modules, keeping the negotiation loop auditable.

Explicit protocol ownership

The protocol engine owns turn-taking, message legality, termination, and replayability. This prevents the dialogue from devolving into an unbounded chat loop.

Controlled ontology evolution

Negotiation outcomes are stored as mappings first. Evolution is deferred to a controlled and versioned process.
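The mapping-first, evolution-later split can be made concrete with an immutable record whose promotion into the ontology is an explicit versioned step. All names here are hypothetical illustrations of the boundary, not the project's schema:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MappingRecord:
    source_term: str
    target_term: str
    confidence: float
    ontology_version: str     # ontology untouched until a versioned release
    promoted: bool = False    # set only by the controlled promotion step

def promote(record, new_version):
    """Ontology evolution as an explicit, versioned transition: the stored
    mapping is immutable, so promotion returns a new record."""
    return replace(record, ontology_version=new_version, promoted=True)
```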

Why this architecture works

  • Each layer has a clear responsibility and can be tested in isolation.
  • Failure points become visible because control logic is not hidden inside one prompt.
  • The same negotiation core can be reused inside a broader agent framework.
  • Persistence turns one successful negotiation into future system knowledge.

Capability signal

This page demonstrates architectural thinking: boundaries, ownership, control flow, stateful learning, and a concrete method for making agent negotiation inspectable.