Flagship Project · Final Year Research

STRATON-LLM — Strategy-Oriented Negotiation Framework with LLM Mediation

STRATON-LLM is a research-driven system for semantic communication, negotiation, and agreement evaluation in multi-agent environments. It is designed for cases where agents do not share the same ontology, yet still need to discover capabilities, interpret requests, and reach trustworthy agreements.

Multi-Agent Systems · Ontology Negotiation · Structured Argumentation · LLM Mediation

System Context Overview

[System context diagram: Agent A (Travel Domain, OWL Ontology A) and Agent B (Finance Domain, OWL Ontology B) exchange request, mapping, respond, and agreement messages through the STRATON-LLM Core (Trigger Detection, Strategy Selection, Protocol Engine + Generation, Evaluation + Trust Scoring, Persistence + Learning), backed by an LLM Reasoner, a Mapping Store of learned alignments, and a Trace Logger of dialogue history.]

What the project is trying to solve

Many agent systems assume communication works once a message is delivered. The real problem starts when agents use different ontology terms for related concepts. A simple example is one agent asking for a price while another reasons about a rate. Without a structured mechanism, the interaction becomes brittle, ambiguous, or silently wrong.

Core problem: semantic mismatch between heterogeneous agents.

Response: protocol-governed negotiation instead of ad-hoc matching.

Project objectives

1. Enable heterogeneous agents to communicate when their local ontologies do not align directly.
2. Move from brittle ontology matching to explicit semantic negotiation with explainable reasoning.
3. Use protocol-governed multi-turn dialogue rather than unconstrained prompt exchanges.
4. Persist successful mappings and negotiation evidence so the system improves over time.

End-to-end system flow

A complete negotiation loop from mismatch detection to reusable learned outcomes.


Step 1: Receive request. Agent A sends a message that uses terms from its own ontology.

Step 2: Check semantic compatibility. Agent B attempts local resolution through ontology lookup and cached mappings.

Step 3: Trigger negotiation when needed. Low-confidence or unknown terms activate the negotiation pipeline.

Step 4: Run bounded dialogue. The protocol engine controls a multi-turn argumentation session.

Step 5: Evaluate trust. The agreement is checked against dialogue-level trust heuristics.

Step 6: Persist and reuse. Accepted mappings and traces become reusable system knowledge.
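The six steps above can be pictured as one resolution loop. The following Python sketch is purely illustrative: the function names, the confidence threshold, the result schema, and the stub dialogue and trust functions are assumptions, not the actual STRATON-LLM implementation.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for trusting a cached mapping

@dataclass
class NegotiationResult:
    mapping: dict                 # agreed term alignments
    trusted: bool                 # outcome of trust evaluation
    trace: list = field(default_factory=list)

def run_bounded_dialogue(terms):
    # Stand-in for the protocol engine (Step 4): pretend each term was
    # aligned in a short claim/support/accept exchange.
    return NegotiationResult(mapping={t: t + "_aligned" for t in terms},
                             trusted=False,
                             trace=["claim", "support", "accept"])

def evaluate_trust(result):
    # Stand-in for Step 5: trust only dialogues that closed with an accept.
    return bool(result.trace) and result.trace[-1] == "accept"

def handle_request(request_terms, local_ontology, mapping_store):
    """Steps 1-6: resolve locally, negotiate on mismatch, persist on success."""
    unresolved = [t for t in request_terms
                  if t not in local_ontology
                  and mapping_store.get(t, {}).get("confidence", 0.0)
                      < CONFIDENCE_THRESHOLD]
    if not unresolved:                              # Step 2: local resolution
        return NegotiationResult(mapping={t: t for t in request_terms},
                                 trusted=True)
    result = run_bounded_dialogue(unresolved)       # Steps 3-4
    if evaluate_trust(result):                      # Step 5
        for src, tgt in result.mapping.items():     # Step 6: persist and reuse
            mapping_store[src] = {"target": tgt, "confidence": 0.8}
        result.trusted = True
    return result
```

Once a mapping is persisted with sufficient confidence, a later request with the same term resolves locally and never re-enters the dialogue, which is the loop's learning effect.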

Why this project matters

  • Ontology-based communication between heterogeneous agents
  • Protocol-governed multi-turn structured argumentation
  • Mapping persistence instead of uncontrolled ontology mutation
  • Trust-aware agreement evaluation with dialogue heuristics
  • Designed as a modular negotiation system, not a prompt chain

Project story

1. Problem framing: heterogeneous agents fail when terms such as “price” and “rate” carry different ontology meanings.
2. Core insight: ontology alignment alone is insufficient when context, goals, and confidence remain ambiguous.
3. System direction: negotiation must be explicit, explainable, and governed by protocol.
4. Engineering result: STRATON-LLM becomes a layered framework connecting trigger detection, argumentation, evaluation, and persistent learning.

Quick facts

Project type: research-grade intelligent agent negotiation system.

Main contribution: makes semantic conflict handling explicit, protocol-bound, and inspectable.

Implementation focus: end-to-end negotiation pipeline with evaluation and persistence.

Architecture snapshot

Layer 1: Negotiation Trigger & Context Initialization. Interprets incoming messages, checks the mapping store, invokes ontology matching when needed, and decides whether negotiation should start.
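The trigger decision in this layer can be sketched as a simple guard; the confidence threshold and mapping-store schema below are assumptions for illustration, not the project's actual values.

```python
def should_negotiate(term: str, ontology: set, mapping_store: dict,
                     threshold: float = 0.7) -> bool:
    """Start negotiation only when neither the local ontology nor a
    sufficiently confident cached mapping resolves the term."""
    if term in ontology:
        return False                                  # resolved locally
    cached = mapping_store.get(term)
    if cached and cached["confidence"] >= threshold:
        return False                                  # reuse learned alignment
    return True                                       # unknown or low-confidence
```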

Layer 2: Strategy Selection. Chooses the negotiation mode using confidence, context, urgency, and goal alignment. In the final design, selection converges on structured argumentation modes rather than unrelated strategies.
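A minimal sketch of confidence- and urgency-driven mode selection; the thresholds and mode names here are illustrative assumptions, chosen only to show that every branch stays within the structured-argumentation family.

```python
def select_strategy(confidence: float, urgency: float) -> str:
    """Pick an argumentation mode from signal values in [0, 1]."""
    if confidence < 0.3:
        # Very low confidence: ground the dialogue in evidence first.
        return "evidence-first argumentation"
    if urgency > 0.8:
        # Time pressure: prefer a shorter bounded dialogue.
        return "compressed argumentation"
    return "standard structured argumentation"
```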

Layer 3: Proposal Generation. Transforms the selected mode into concrete dialogue proposals such as claims, supports, attacks, defenses, and evidence requests.

Layer 4: Protocol Engine. Runs the finite-state dialogue, validates acts, manages turns, enforces legal transitions, and terminates sessions cleanly.
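The finite-state dialogue can be illustrated with a transition table over dialogue acts. The states, legal-act sets, and turn bound below are assumptions made for the sketch, not the project's actual protocol definition.

```python
# Legal acts per state, and the state each act leads to.
LEGAL = {
    "open":     {"claim"},
    "claimed":  {"support", "attack", "accept"},
    "attacked": {"defend", "concede"},
    "defended": {"attack", "accept"},
}
NEXT_STATE = {
    "claim": "claimed", "support": "claimed", "attack": "attacked",
    "defend": "defended", "accept": "closed", "concede": "closed",
}
MAX_TURNS = 10  # bound on dialogue length

def run_session(acts):
    """Validate a sequence of dialogue acts; return the final state,
    raising on any act that is illegal in the current state."""
    state = "open"
    for turn, act in enumerate(acts):
        if turn >= MAX_TURNS:
            return "terminated"       # clean termination at the bound
        if act not in LEGAL.get(state, set()):
            raise ValueError(f"illegal act {act!r} in state {state!r}")
        state = NEXT_STATE[act]
    return state
```

Because `LEGAL` has no entry for `"closed"`, any act after acceptance or concession is rejected, which is how a table like this keeps sessions from reopening silently.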

Layer 5: Agreement Evaluation & Outcome Handling. Assesses whether the produced agreement is trustworthy using dialogue-aware heuristics instead of accepting the latest answer at face value.
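One way to picture a dialogue-aware heuristic is as a weighted score over trace features. The features, weights, and threshold below are purely illustrative assumptions, not the evaluation rules STRATON-LLM actually uses.

```python
def trust_score(trace: list, evidence_count: int, concessions: int) -> float:
    """Score an agreement from its dialogue, not just its final answer:
    reward evidence actually exchanged, penalise forced concessions,
    and require an explicit closing accept."""
    score = 0.5
    score += 0.1 * min(evidence_count, 3)   # evidence raises trust, capped
    score -= 0.2 * concessions              # one-sided concessions lower it
    if trace and trace[-1] == "accept":
        score += 0.2                        # explicit agreement bonus
    return max(0.0, min(1.0, score))

def is_trustworthy(trace, evidence_count, concessions, threshold=0.7):
    return trust_score(trace, evidence_count, concessions) >= threshold
```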

Layer 6: Persistence, Logging & Learning. Stores mappings, preserves negotiation traces, updates confidence, and feeds learning signals back into future negotiations.
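Confidence updating on stored mappings might be sketched as a simple reinforce-or-decay rule; the entry schema and update rates are assumptions for illustration only.

```python
def record_outcome(store: dict, source: str, target: str, accepted: bool) -> dict:
    """Reinforce a mapping's confidence on acceptance, decay it on rejection,
    so future trigger checks can reuse or revisit the alignment."""
    entry = store.setdefault(source,
                             {"target": target, "confidence": 0.5, "uses": 0})
    entry["uses"] += 1
    if accepted:
        entry["confidence"] = min(1.0, entry["confidence"] + 0.1)
    else:
        entry["confidence"] = max(0.0, entry["confidence"] - 0.2)
    return entry
```

Asymmetric rates (slow gain, fast decay) are one conventional choice here: a single rejected agreement should cast more doubt than a single accepted one restores.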

Planned article series

  • Designing semantic negotiation for heterogeneous intelligent agents
  • Why ontology alignment alone is not enough for real agent interoperability
  • How protocol-governed dialogue makes LLM-supported negotiation inspectable
  • Evaluating agent agreements with dialogue-aware trust heuristics