Structured Knowledge meets GenAI: A Framework for Logic-Driven Language Models

Paper Authors

Farida Eldessouky¹, Nourhan Ehab¹, Carolin Schindler², Mervat Abuelkheir¹, Wolfgang Minker²

  • ¹German University in Cairo
  • ²Ulm University

Abstract

“Large Language Models (LLMs) excel at generating fluent text but struggle with context sensitivity, logical reasoning, and personalization without extensive fine-tuning. This paper presents a logical modulator: an adaptable communication layer between Knowledge Graphs (KGs) and LLMs as a way to address these limitations. Unlike direct KG-LLM integrations, our modulator is domain-agnostic and incorporates logical dependencies and commonsense reasoning to achieve contextual personalization. By enhancing KG interaction, this method will produce linguistically coherent and logically sound outputs, increasing interpretability and reliability in generative AI.”


Why exactly is fine-tuning extensive? Is it always much more expensive?

Fine-tuning requires large datasets, significant compute resources, and careful optimization to prevent overfitting or catastrophic forgetting. It is often expensive because training large models, especially with domain-specific data, demands GPUs/TPUs, storage, and expertise. However, lightweight techniques like LoRA and adapters can reduce costs.
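To see why adapter methods like LoRA are so much cheaper, a rough back-of-the-envelope sketch helps. The dimensions and rank below are illustrative assumptions (a typical transformer hidden size and a common LoRA rank), not numbers from the paper:

```python
# Rough comparison of trainable parameters: full fine-tuning vs. a LoRA
# adapter on a single weight matrix. Dimensions are illustrative.

def full_params(d_in: int, d_out: int) -> int:
    """Full fine-tuning: every weight of the dense layer is trainable."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """LoRA trains only two low-rank factors, A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

d_in = d_out = 4096   # assumed transformer hidden size
rank = 8              # assumed LoRA rank

full = full_params(d_in, d_out)        # 16,777,216 weights
lora = lora_params(d_in, d_out, rank)  # 65,536 weights
print(f"trainable params: full={full:,} lora={lora:,} "
      f"({100 * lora / full:.2f}% of full)")
```

For this single matrix, the adapter trains well under 1% of the weights, which is the main source of the cost reduction the answer above mentions.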

What is a logical modulator?

It is an intermediary mechanism that refines and adapts LLM outputs by incorporating logical dependencies and commonsense reasoning, improving coherence, personalization, and interpretability WITHOUT modifying the core model.

What is domain-agnostic?

Domain-agnostic means that a method, model, or system is NOT limited to a specific field or subject area and can be applied across different domains. In the context of the logical modulator, it means that the system can work with various Knowledge Graphs and LLMs regardless of the specific industry (e.g., healthcare, finance, or education) without requiring major modifications.

What do they mean by adaptable communication layer?

This refers to a flexible interface between Knowledge Graphs (KGs) and LLMs that dynamically adjusts how information flows, ensuring that the LLM receives structured, relevant knowledge while maintaining domain-agnostic applicability.

What are Knowledge Graphs (KGs), examples?

KGs are structured representations of entities and their relationships, used to model real-world knowledge. Examples include Google’s Knowledge Graph, Wikidata, ConceptNet, and DBpedia.
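At their core, KGs of this kind are collections of subject-predicate-object triples. A minimal sketch with a naive pattern query (entities and relations are illustrative, in the spirit of Wikidata/ConceptNet-style facts):

```python
# A minimal Knowledge Graph as subject-predicate-object triples,
# plus a naive pattern-matching query. Facts are illustrative.

triples = [
    ("Cairo", "capital_of", "Egypt"),
    ("Egypt", "located_in", "Africa"),
    ("Cairo", "instance_of", "city"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="Cairo"))        # every stored fact about Cairo
print(query(p="capital_of"))   # -> [("Cairo", "capital_of", "Egypt")]
```

Real KGs add typed entities, ontologies, and query languages like SPARQL, but the triple structure is the same.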

How do a modulator and "direct integration", as they put it, differ?

Direct integration means feeding KG data directly into an LLM, which can be rigid and inefficient. A modulator, instead, acts as an intermediary, selectively incorporating logical dependencies and commonsense reasoning, leading to more adaptable, personalized, and logically consistent responses.

1. Introduction

The paper highlights the need for LLMs to be more explainable and context-aware, especially in critical domains like healthcare. Instead of just using KGs as external data sources, it proposes a bidirectional mediator that dynamically exchanges information between LLMs and KGs. This ensures responses are both accurate and fluent, improving trust and transparency.

Bidirectional in this context suggests that the mediator facilitates two-way communication between the LLM and the Knowledge Graph (KG), rather than just a one-time retrieval of facts. This likely means:

  • Dynamic querying – The LLM can request specific knowledge from the KG, and the KG can refine or adjust its response based on the LLM’s context.
  • Iterative refinement – Instead of a single query-response cycle, the LLM and KG can exchange multiple rounds of information, improving the accuracy and coherence of responses.
  • Context-aware updates – The mediator ensures that the KG’s structured knowledge is contextually relevant, allowing the LLM to modify its request based on intermediate results.

This approach is different from direct KG-LLM integration, where the KG typically provides static facts without iterative refinement.
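The paper gives no pseudocode for this loop, so the following is a purely hypothetical sketch of the iterative query-refine cycle described above; the LLM, KG, and stopping criterion are all stand-in stubs:

```python
# Hypothetical sketch of a bidirectional mediator loop: the LLM proposes a
# query, the KG answers, and the mediator decides whether another round of
# refinement is needed. Every component here is a stand-in stub.

def llm_propose_query(context):
    """Stub: a real LLM would turn the running context into a KG query."""
    return {"topic": context["question"], "round": context["round"]}

def kg_answer(query):
    """Stub: a real KG would return structured facts for the query."""
    return [f"fact about {query['topic']} (round {query['round']})"]

def mediator(question, max_rounds=3):
    context = {"question": question, "round": 0, "facts": []}
    while context["round"] < max_rounds:
        context["round"] += 1
        q = llm_propose_query(context)   # LLM -> mediator -> KG
        facts = kg_answer(q)             # KG -> mediator -> LLM
        context["facts"].extend(facts)
        if len(context["facts"]) >= 2:   # stub stopping criterion
            break
    return context["facts"]

print(mediator("career advice"))
```

The point of the sketch is the control flow: unlike one-shot retrieval, the mediator can loop, letting each round's results shape the next query.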

Existing LLMs rely on statistical patterns rather than true reasoning, making them error-prone in critical fields like medicine and law. RAG improves factual accuracy by retrieving external knowledge, but it mainly relies on surface-level matching. Graph-based approaches like GraphRAG integrate structured knowledge but lose logical dependencies when converting KGs into embeddings. Some methods embed KG data into LLM prompts to enhance reasoning, but they struggle to retain full relational structures, limiting multi-step reasoning and contextual consistency. This reinforces the need for a bidirectional mediator that preserves logical dependencies while ensuring fluent, explainable responses.

3. A Framework for KG-Enhanced LLM Reasoning

The authors propose a domain-independent end-to-end framework that enhances reasoning and personalization in LLMs using an independent mediator. This mediator acts as the retrieval mechanism in RAG but improves on it by structuring the reasoning process more effectively.

Key features:

  • Decompositional querying – The mediator breaks down user queries into smaller sub-queries for the Knowledge Graph (KG).
  • Interpretation layer – It extracts relevant nodes and relationships from the KG, making structured data more understandable for the LLM.
  • Bidirectional interaction – The mediator manages the exchange between the LLM and KG, ensuring multiple rounds of refinement for a more complete response.

This approach optimizes structured knowledge use, making LLM outputs more logical, contextual, and explainable.
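The three features above can be sketched as a toy pipeline. The splitting rule, the KG contents, and the query mapping are all assumptions for illustration; the paper does not specify them:

```python
# Illustrative sketch of decompositional querying: a compound question is
# split into sub-queries, each is answered against a toy KG, and the
# results are conjoined. All contents and mappings are assumptions.

KG = {
    ("data scientist", "requires_skill"): ["Python", "statistics"],
    ("Python", "taught_in"): ["CS101"],
}

def decompose(question):
    """Stub: map each ' and '-separated clause to a (subject, relation)
    sub-query; a real mediator would use the LLM or an NLU step here."""
    mapping = {
        "what skills does a data scientist need":
            ("data scientist", "requires_skill"),
        "where is Python taught": ("Python", "taught_in"),
    }
    return [mapping[part.strip()] for part in question.split(" and ")]

def kg_lookup(subject, relation):
    return KG.get((subject, relation), [])

question = "what skills does a data scientist need and where is Python taught"
retrieved = [kg_lookup(s, r) for s, r in decompose(question)]
conjunction = [fact for facts in retrieved for fact in facts]
print(conjunction)   # ['Python', 'statistics', 'CS101']
```

Each stage maps onto one bullet: `decompose` is the decompositional querying, `kg_lookup` plus the flattening is the interpretation layer, and the surrounding loop would be driven by the bidirectional interaction.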

```mermaid
graph TD;
    A["**Input Layer:**
    Generic prompt that
    requires reasoning"] --> B["**Reasoning Layer:**
    Mediator module managing KG
    queries, dependencies, and
    rule applications"];
    B --> C["Prompt Decomposition"];
    C --> D["Multiple KG Querying:
    Retrieve relevant data"];
    D --> E["Logical conjunction of
    the retrieved data"];
    E -->|Communicates with| F["**Knowledge Base Layer:**
    KG layer, representing
    stored entities, relationships,
    and rules"];
    E -->|Communicates with| G["**Language Generation Layer:**
    LLM module handling
    response generation"];
    style B stroke-dasharray: 5,5;
    style C stroke-dasharray: 5,5;
    style D stroke-dasharray: 5,5;
    style E stroke-dasharray: 5,5;
    classDef dashed fill:none,stroke:#fff,stroke-dasharray:5 5;
    class B,C,D,E dashed;
```
Modular architecture of the proposed end-to-end framework for KG-enhanced LLM reasoning - Eldessouky et al., 2025

The authors argue that their mediator-driven framework improves reasoning and explainability by enabling the Knowledge Graph (KG) to be the primary reasoning source, while the LLM handles fluent text generation. Unlike retrieval-based methods like GraphRAG, which reduce KG data to embeddings, their approach directly interacts with the KG through a dynamic reasoning layer, preserving logical depth and personalization.

Key advantages:

  • Flexibility & Interpretability – Separating LLM and KG modules with a mediator makes the system more adaptable and explainable.
  • Enhanced Explainability – Symbolic AI integration allows responses to be traced back to KG elements and rules, improving trust.
  • Logical Depth & Accuracy – The structured knowledge from KGs is leveraged more effectively, preventing the loss of relational details seen in simpler KG-LLM integrations.

This hybrid approach bridges structured knowledge and generative AI, improving factual accuracy, reasoning, and user-specific adaptability.

Symbolic AI refers to an approach in artificial intelligence that represents knowledge using explicit symbols, rules, and logic rather than learned statistical patterns. It relies on symbol manipulation, such as if-then rules, ontologies, and logical reasoning, to process and infer knowledge.

In contrast to neural networks, which extract patterns from data, symbolic AI systems operate on structured knowledge (e.g., Knowledge Graphs, rule-based systems) and provide transparent, traceable reasoning, making them useful for explainability and logic-based decision-making.
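The "if-then rules" style of symbol manipulation described here can be made concrete with a tiny forward-chaining engine. The facts, rules, and single-condition matching below are illustrative simplifications, not the paper's formalism:

```python
# A tiny forward-chaining if-then rule engine: the kind of explicit symbol
# manipulation symbolic AI relies on. Facts and rules are illustrative.
# Each rule is (condition, conclusion); "?x" binds the matching subject.

facts = {("Ada", "is", "programmer")}
rules = [
    (("?x", "is", "programmer"), ("?x", "knows", "logic")),
    (("?x", "knows", "logic"), ("?x", "can", "debug")),
]

def apply_rules(facts, rules):
    """Repeatedly fire rules until no new fact can be derived."""
    changed = True
    while changed:
        changed = False
        for (_, cp, co), (_, hp, ho) in rules:
            for (s, p, o) in list(facts):
                if p == cp and o == co:      # condition matches, s binds ?x
                    new = (s, hp, ho)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

derived = apply_rules(set(facts), rules)
print(sorted(derived))
```

Because every derived fact comes from a named rule firing on a named fact, the inference chain can be traced step by step, which is exactly the transparency property the paragraph attributes to symbolic systems.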

4. Conclusion and Future Work

The paper proposes a decompositional reasoning layer that enhances LLM reasoning by structuring interactions with Knowledge Graphs (KGs). This approach ensures responses are both contextually grounded and fluent, making it particularly useful in fields like career development, healthcare, and legal advising. Future work will explore how the mediator can dynamically update KGs, allowing knowledge to evolve without manual intervention.

Limitations and Ethical Considerations

  • Latency – Complex KG queries and multi-step reasoning can slow response times.
  • Entity Linking – Ambiguities and synonyms make accurate entity mapping difficult.
  • Scalability – Expanding KGs increases storage and complexity, requiring efficient management.
  • Evaluation & Bias – Assessing reasoning quality and ensuring fair, transparent responses remain key concerns.
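The entity-linking limitation above is easy to illustrate: the same KG entity hides behind many surface forms, and some mentions are genuinely ambiguous. The alias table below is an illustrative toy, not a real linker:

```python
# Minimal sketch of the entity-linking problem: mapping surface mentions
# (synonyms, nicknames, ambiguous names) onto canonical KG entities.
# The alias table is illustrative.

ALIASES = {
    "nyc": "New_York_City",
    "new york city": "New_York_City",
    "big apple": "New_York_City",
    "new york state": "New_York_State",
    # "new york" alone is ambiguous (city or state?) and is left unmapped
}

def link_entity(mention: str):
    """Return the canonical entity for a mention, or None if unresolved."""
    return ALIASES.get(mention.strip().lower())

print(link_entity("NYC"))         # New_York_City
print(link_entity("Big Apple"))   # New_York_City
print(link_entity("New York"))    # None: needs disambiguation from context
```

Real linkers add context-based disambiguation and fuzzy matching, but the core difficulty, many mentions to one entity and one mention to many entities, is the one named in the bullet above.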

Addressing these challenges is crucial for scalable, explainable, and ethically responsible KG-augmented LLM systems.