Flagship Project · Final Year Research
STRATON-LLM — Strategy-Oriented Negotiation Framework with LLM Mediation
STRATON-LLM is a research-driven system for semantic communication, negotiation, and agreement evaluation in multi-agent environments. It is designed for cases where agents do not share the same ontology, yet still need to discover capabilities, interpret requests, and reach trustworthy agreements.
System Context Overview
What the project is trying to solve
Many agent systems assume communication succeeds once a message is delivered. In practice, the real problem starts when agents use different ontology terms for related concepts: one agent asks for a price while the other reasons about a rate. Without a structured mechanism for resolving such mismatches, the interaction becomes brittle, ambiguous, or silently wrong.
Core problem
Semantic mismatch between heterogeneous agents
Response
Protocol-governed negotiation instead of ad-hoc matching
Project objectives
Enable heterogeneous agents to communicate when their local ontologies do not align directly.
Move from brittle ontology matching to explicit semantic negotiation with explainable reasoning.
Use protocol-governed multi-turn dialogue rather than unconstrained prompt exchanges.
Persist successful mappings and negotiation evidence so the system improves over time.
End-to-end system flow
A complete negotiation loop from mismatch detection to reusable learned outcomes.
Step 1
Receive request
Agent A sends a message that uses terms from its own ontology.
Step 2
Check semantic compatibility
Agent B attempts local resolution through ontology lookup and cached mappings.
Step 3
Trigger negotiation when needed
Low-confidence or unknown terms activate the negotiation pipeline.
Step 4
Run bounded dialogue
The protocol engine controls a multi-turn argumentation session.
Step 5
Evaluate trust
The agreement is checked against dialogue-level trust heuristics.
Step 6
Persist and reuse
Accepted mappings and traces become reusable system knowledge.
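The six steps above can be sketched as a single loop. Everything here is an illustrative assumption, not the project's actual API: the `MappingStore` class, the `handle_request` signature, and the 0.7 confidence cutoff are all hypothetical placeholders for the real components.

```python
# Sketch of the six-step negotiation loop. All names and the threshold
# below are illustrative assumptions, not STRATON-LLM's actual API.

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for "low-confidence" terms (Step 3)

class MappingStore:
    """Cache of previously accepted term mappings (Step 6 feeds this)."""
    def __init__(self):
        self._mappings = {}  # (source_term, sender) -> (mapped_term, confidence)

    def lookup(self, term, sender):
        return self._mappings.get((term, sender))

    def persist(self, term, sender, mapped_term, confidence):
        self._mappings[(term, sender)] = (mapped_term, confidence)

def handle_request(term, sender, store, negotiate, evaluate_trust):
    # Step 2: attempt local resolution through cached mappings first.
    cached = store.lookup(term, sender)
    if cached and cached[1] >= CONFIDENCE_THRESHOLD:
        return cached[0]
    # Steps 3-4: unknown or low-confidence term -> bounded negotiation dialogue.
    mapped_term, trace = negotiate(term, sender)
    # Step 5: accept the agreement only if the dialogue passes trust heuristics.
    trust = evaluate_trust(trace)
    if trust >= CONFIDENCE_THRESHOLD:
        # Step 6: persist the accepted mapping so future requests skip negotiation.
        store.persist(term, sender, mapped_term, trust)
        return mapped_term
    return None  # no trustworthy agreement reached
```

The key design point the sketch preserves is that persistence happens only after the trust check, so the mapping store never accumulates unvetted agreements.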
Why this project matters
→ Ontology-based communication between heterogeneous agents
→ Protocol-governed multi-turn structured argumentation
→ Mapping persistence instead of uncontrolled ontology mutation
→ Trust-aware agreement evaluation with dialogue heuristics
→ Designed as a modular negotiation system, not a prompt chain
Project story
Problem framing: heterogeneous agents fail when terms such as “price” and “rate” carry different ontology meanings.
Core insight: ontology alignment alone is insufficient when context, goals, and confidence remain ambiguous.
System direction: negotiation must be explicit, explainable, and governed by protocol.
Engineering result: STRATON-LLM becomes a layered framework connecting trigger detection, argumentation, evaluation, and persistent learning.
STRATON-LLM pages
Overview
Problem framing, project goals, research contribution, and end-to-end system story.
Architecture
Six-layer architecture, module boundaries, and system walkthrough.
Protocol & Negotiation
Triggering, dialogue acts, FSM control, and a worked negotiation scenario.
Design Decisions
The major choices, trade-offs, and constraints that shaped STRATON-LLM.
Evaluation & Learning
Agreement trust, heuristic scoring, persistence, and evolution roadmap.
Quick facts
Project type
Research-grade intelligent agent negotiation system
Main contribution
Makes semantic conflict handling explicit, protocol-bound, and inspectable
Implementation focus
End-to-end negotiation pipeline with evaluation and persistence
Architecture snapshot
Layer 1
Negotiation Trigger & Context Initialization
Interprets incoming messages, checks the mapping store, invokes ontology matching when needed, and decides whether negotiation should start.
Layer 2
Strategy Selection
Chooses the negotiation mode using confidence, context, urgency, and goal alignment. In the final design, selection converges on structured argumentation modes rather than unrelated strategies.
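A minimal sketch of how context features could drive mode selection. The mode names, feature names, and thresholds are all assumptions made for illustration; the framework's actual selection logic is not specified here.

```python
# Hypothetical Layer 2 sketch: mode names and thresholds are assumptions.

def select_mode(confidence, urgency, goal_alignment):
    """Pick an argumentation mode from context features, each in [0, 1]."""
    if confidence < 0.3:
        # Almost no shared ground: gather evidence before arguing.
        return "evidence-seeking"
    if urgency > 0.8:
        # Little time: converge quickly on the closest candidate mapping.
        return "concession-oriented"
    if goal_alignment < 0.5:
        # Conflicting goals: expect attacks and defenses.
        return "adversarial-argumentation"
    return "cooperative-argumentation"
```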
Layer 3
Proposal Generation
Transforms the selected mode into concrete dialogue proposals such as claims, supports, attacks, defenses, and evidence requests.
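The dialogue acts named above (claims, supports, attacks, defenses, evidence requests) could be represented as a small typed structure. The field names and the `propose_mapping` helper are hypothetical, chosen only to make the act vocabulary concrete.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of the dialogue acts named above;
# field names and the helper below are assumptions for illustration.

ACT_TYPES = {"claim", "support", "attack", "defense", "evidence_request"}

@dataclass(frozen=True)
class DialogueAct:
    act_type: str            # one of ACT_TYPES
    speaker: str             # agent identifier
    content: str             # e.g. a proposed mapping or a justification
    target: Optional[int] = None  # index of the act being supported/attacked

    def __post_init__(self):
        if self.act_type not in ACT_TYPES:
            raise ValueError(f"unknown dialogue act: {self.act_type}")

def propose_mapping(speaker, source_term, target_term):
    """Turn a candidate mapping into an opening claim."""
    return DialogueAct("claim", speaker, f"{source_term} maps to {target_term}")
```

Restricting acts to a closed vocabulary is what lets the protocol engine in the next layer validate them mechanically.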
Layer 4
Protocol Engine
Runs the finite-state dialogue, validates acts, manages turns, enforces legal transitions, and terminates sessions cleanly.
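A finite-state protocol engine of this kind can be sketched with a transition table and a turn budget. The states, the legal-transition map, and the `MAX_TURNS` bound are illustrative assumptions, not the framework's actual protocol.

```python
# Minimal FSM sketch of the protocol engine. States, transitions, and the
# turn budget are assumptions for illustration.

LEGAL = {
    # state -> {incoming act type -> next state}
    "open":  {"claim": "argue"},
    "argue": {"support": "argue", "attack": "argue", "defense": "argue",
              "evidence_request": "argue",
              "accept": "closed", "reject": "closed"},
}

MAX_TURNS = 10  # assumed bound enforcing clean termination

class ProtocolEngine:
    def __init__(self):
        self.state = "open"
        self.turns = 0

    def submit(self, act_type):
        """Validate an act against the current state; advance or reject it."""
        if self.state == "closed":
            raise RuntimeError("session already terminated")
        next_state = LEGAL.get(self.state, {}).get(act_type)
        if next_state is None:
            raise ValueError(f"illegal act {act_type!r} in state {self.state!r}")
        self.turns += 1
        # Force termination once the turn budget is exhausted.
        self.state = "closed" if self.turns >= MAX_TURNS else next_state
        return self.state
```

Because every act is checked against the table before it takes effect, an agent (or an LLM backing one) cannot push the dialogue into an undefined state.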
Layer 5
Agreement Evaluation & Outcome Handling
Assesses whether the produced agreement is trustworthy using dialogue-aware heuristics instead of accepting the latest answer at face value.
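One way to make "dialogue-aware heuristics" concrete is to score the trace rather than the final answer. The specific signals below (evidence count, unanswered attacks, explicit acceptance) and their weights are plausible examples chosen for illustration, not the framework's actual metrics.

```python
# Illustrative trust-scoring sketch; heuristics and weights are assumptions.

def trust_score(trace):
    """Score a finished dialogue trace (list of act-type strings) in [0, 1]."""
    if not trace:
        return 0.0
    attacks = trace.count("attack")
    defenses = trace.count("defense")
    evidence = trace.count("support") + trace.count("evidence_request")
    # Penalise attacks that were never answered by a defense.
    unanswered = max(attacks - defenses, 0)
    score = 0.5
    score += 0.1 * min(evidence, 3)   # grounded claims raise trust, capped
    score -= 0.2 * unanswered         # ignored attacks lower it
    if trace[-1] == "accept":
        score += 0.1                  # explicit acceptance, not a timeout
    return max(0.0, min(1.0, score))
```

The point of scoring the trace is exactly what the layer description states: an agreement reached by ignoring attacks scores lower than one reached through evidence, even if both end in the same answer.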
Layer 6
Persistence, Logging & Learning
Stores mappings, preserves negotiation traces, updates confidence, and feeds learning signals back into future negotiations.
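A sketch of how stored mappings could be updated as later negotiations confirm or contradict them. The exponential-moving-average update rule, the `ALPHA` rate, and all record fields are assumptions made for illustration.

```python
import json

# Hypothetical Layer 6 sketch: the EMA update and field names are assumptions.

ALPHA = 0.3  # assumed learning rate for the confidence update

class LearningStore:
    def __init__(self):
        self.records = {}  # (source_term, target_term) -> record dict

    def record_outcome(self, source_term, target_term, trust, trace):
        """Store or update a mapping; keep the dialogue trace as evidence."""
        key = (source_term, target_term)
        rec = self.records.get(key)
        if rec is None:
            rec = {"confidence": trust, "uses": 0, "traces": []}
            self.records[key] = rec
        else:
            # Exponential moving average: new evidence nudges old confidence.
            rec["confidence"] += ALPHA * (trust - rec["confidence"])
        rec["uses"] += 1
        rec["traces"].append(trace)
        return rec["confidence"]

    def export(self):
        """Serialise mappings for inspection -- negotiations stay auditable."""
        return json.dumps(
            {f"{s}->{t}": {"confidence": round(r["confidence"], 3),
                           "uses": r["uses"]}
             for (s, t), r in self.records.items()},
            sort_keys=True)
```

Keeping the full trace alongside the confidence value is what makes the learned mapping explainable later: the system can show why it believes a mapping, not just how much.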
Planned article series
→ Designing semantic negotiation for heterogeneous intelligent agents
→ Why ontology alignment alone is not enough for real agent interoperability
→ How protocol-governed dialogue makes LLM-supported negotiation inspectable
→ Evaluating agent agreements with dialogue-aware trust heuristics