Building useful Knowledge Graphs will long be a Humans + AI endeavor. A recent paper, "From human experts to machines: An LLM supported approach to ontology and knowledge graph construction", lays out how best to implement automation, the specific human roles, and how the two are combined. Its lessons include:

🔍 Automate KG construction with targeted human oversight: Use LLMs to automate repetitive tasks like entity extraction and relationship mapping. Human experts should step in at two key points: early, to define scope and competency questions (CQs), and later, to review and fine-tune LLM outputs, focusing on complex areas where LLMs may misinterpret data. Combining automation with a human-in-the-loop ensures accuracy while saving time.

❓ Guide ontology development with well-crafted Competency Questions (CQs): CQs define what the Knowledge Graph (KG) must answer, like "What preprocessing techniques were used?" Experts should create CQs to ensure domain relevance and review LLM-generated CQs for completeness. Once validated, these CQs guide the ontology's structure, reducing errors in later stages.

🧑‍⚖️ Use LLMs to evaluate outputs, with humans as quality gatekeepers: LLMs can assess KG accuracy by comparing answers to ground-truth data, with humans reviewing outputs that score below a set threshold (e.g., 6/10). This setup lets LLMs handle initial quality control while humans focus only on edge cases, improving efficiency and ensuring quality (a minimal sketch of such a gate follows this list).

🌱 Leverage reusable ontologies and refine with human expertise: Start with pre-built ontologies like PROV-O to structure the KG, then refine it with domain-specific details. Humans should guide this refinement, ensuring the KG stays accurate and relevant to the domain's nuances, particularly for specialized terms and relationships.

⚙️ Optimize prompt engineering with iterative feedback: Prompts for LLMs should be carefully structured, starting simple and iterating based on feedback. Use in-context examples to reduce variability and improve consistency. Human experts should refine these prompts to ensure accurate entity and relationship extraction, combining automation with expert oversight for best results.

These lessons provide a solid foundation for optimally applying human and machine capabilities to the very important task of building robust and useful ontologies.
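The LLM-as-judge gate described above is easy to picture in code. Below is a minimal sketch, assuming an OpenAI-style chat client; the model name, prompt wording, and the `judge_answer`/`triage` helpers are illustrative assumptions, not the paper's actual implementation. Only the idea of a numeric score with a human-review threshold (e.g., 6/10) comes from the post.

```python
# Minimal sketch of an LLM-as-judge quality gate with a human review queue.
# Assumptions: an OpenAI-style client; model name, prompt wording, and helper
# names are illustrative, not taken from the paper.
from openai import OpenAI

client = OpenAI()
REVIEW_THRESHOLD = 6  # answers scoring below this go to a human expert

def judge_answer(question: str, kg_answer: str, ground_truth: str) -> int:
    """Ask an LLM to score a KG-derived answer against ground truth (0-10)."""
    prompt = (
        "Score the candidate answer against the ground truth on a 0-10 scale.\n"
        f"Question: {question}\n"
        f"Candidate answer: {kg_answer}\n"
        f"Ground truth: {ground_truth}\n"
        "Reply with a single integer."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.choices[0].message.content.strip())

def triage(results: list[dict]) -> list[dict]:
    """Collect low-scoring answers for human review; auto-accept the rest."""
    review_queue = []
    for item in results:
        score = judge_answer(item["question"], item["kg_answer"], item["ground_truth"])
        if score < REVIEW_THRESHOLD:
            review_queue.append({**item, "judge_score": score})
    return review_queue
```

The point of the design is the division of labor: the LLM scores everything, and humans only see the items the gate flags.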
Comparing LLMs and In-House Ontology Expertise
Summary
Comparing large language models (LLMs) and in-house ontology expertise means evaluating how AI tools process information versus how human experts structure and connect business knowledge. Ontology, in simple terms, is a system that organizes concepts and relationships so that both people and machines can understand and use them to get meaningful answers.
- Build semantic bridges: Connect your internal data with knowledge graphs and ontologies to give your LLMs the context they need to truly understand your business.
- Combine AI and human review: Use LLMs for automating repetitive knowledge tasks, but rely on experts to refine, guide, and validate outputs for accuracy and relevance.
- Focus on structured context: Make sure your organization curates business-specific ontologies so AI systems can reason over your unique processes and relationships, not just generic information.
Your internal LLM is failing because your knowledge is in prison.

I keep hearing the same story: "ChatGPT works great outside, but our internal deployment is useless."

The diagnosis: context starvation.

External LLMs feast on a buffet of connected knowledge:
- Millions of documents cross-referenced
- Knowledge graphs linking concepts
- Ontologies providing meaning
- Years of training on relationships between ideas

Your internal IP? It sits in solitary confinement:
- Documents in silos, in PDF jail
- Data in closed systems and obscure APIs
- No semantic connections
- No knowledge graphs
- No context

Then you wonder why your internal LLM can't connect the dots.

Here's what nobody tells you about enterprise LLM deployment: the model isn't the product. The context layer is.

Your proprietary insights about drug-target interactions, clinical trial patterns, and competitive intelligence are all meaningless to an LLM without the semantic scaffolding that explains HOW these pieces relate. External models know that "KRAS G12C" connects to "lung cancer" connects to "Sotorasib" connects to "Amgen" because millions of documents taught them these relationships. Your internal model knows nothing about your code names, your project relationships, your institutional knowledge. Unless you build the bridges.

This is why data foundation work matters (see the sketch after this post):
- Knowledge graphs that map your internal concepts
- Ontologies that explain your terminology
- Clean, structured data with proper relationships
- Semantic layers that translate internal context into external context

Without this, you're asking your LLM to be a genius in solitary confinement: brilliant but utterly disconnected, not only from the outside world but between your own internal departments as well.

The companies getting value from internal LLMs? They spent 18 months building the context layer first: the boring work of creating connections, mapping relationships, building semantic bridges.

Your IP isn't worthless. It's just context-starved. Feed it properly, and watch your internal LLM suddenly get smart.

What internal knowledge is trapped in your organization's silos, waiting for context to set it free?
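To make the "semantic bridge" idea concrete, here is a minimal sketch using rdflib (a tooling assumption; the post names no specific library). The predicates, namespace, and the internal code name "PROJ-X17" are invented for illustration; only the KRAS G12C → lung cancer → Sotorasib → Amgen chain comes from the post.

```python
# Minimal sketch: public biomedical relationships plus one internal "bridge"
# triple, so a proprietary code name inherits the public context.
# rdflib is an assumed choice; predicates and PROJ-X17 are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/biotech#")  # hypothetical internal namespace
g = Graph()
g.bind("ex", EX)

# Relationships an external LLM already "knows" from its training data.
g.add((EX.KRAS_G12C, EX.associatedWith, EX.LungCancer))
g.add((EX.Sotorasib, EX.targets, EX.KRAS_G12C))
g.add((EX.Sotorasib, EX.developedBy, EX.Amgen))

# The internal bridge: map a proprietary code name onto the public concepts,
# so documents mentioning "PROJ-X17" connect to the wider graph.
g.add((EX.PROJ_X17, RDFS.label, Literal("PROJ-X17")))
g.add((EX.PROJ_X17, EX.competesWith, EX.Sotorasib))

# A retrieval layer can now expand an internal query into connected context.
for s, p, o in g.triples((None, None, EX.Sotorasib)):
    print(s.n3(g.namespace_manager), p.n3(g.namespace_manager), o.n3(g.namespace_manager))
```

The graph itself is trivial; the value is that a retrieval layer can hand these connections to the LLM instead of leaving it to guess what a code name means.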
Your LLM can write emails. But can it reason over your Q3 pipeline? The difference is context.

I've been observing something interesting in the enterprise AI landscape: "All of the value in the market is going to go to CHIPS and what we call the ONTOLOGY." Most people focus on the chips part. They miss the ontology revolution.

What am I saying? Follow me here.

Ontology isn't just a philosophical concept. It's a mechanism for retrieving data relationships, one that structures, connects, and contextualizes your data so AI can actually make sense of it and reason over it.

Think about it this way: models are only as smart as the world they understand. Most LLM systems are trained on generic web-scale data. They're incredibly intelligent. And completely clueless about your business.

The breakthrough comes when you ground AI in a domain-specific ontology, when you transform business knowledge into structured, machine-understandable intelligence. This is the missing architecture layer.

What an ontology actually looks like:
- A curated, evolving graph of business concepts.
- Accounts, pipeline stages, personas, intent signals, forecast dimensions.
- Not just data points. Relationships and meanings.

Why it matters technically: it acts as a semantic engine enabling deeper reasoning, causality, and traceability. Not just prediction, but explainability and control. Raw signals become interpretable actions.

The architecture stack (sketched below): raw data inputs flow into the ontology layer; the ontology layer feeds structured context to reasoning systems; reasoning systems power coordinated agent actions. Most enterprises are missing this middle layer. They're connecting raw data directly to AI models. No wonder their agents make decisions that ignore core business logic.

The companies getting this right understand something fundamental:
- Entity Relationships: AI that knows how deals, reps, products, and timelines actually connect.
- Business Rules Integration: AI that respects ownership hierarchies, escalation paths, and approval flows.
- Action-Agent Mapping: AI that understands which specialist should handle which situation.

This isn't about making AI smarter. It's about making AI business-aware.

When agents operate from unified business context:
- Decisions become coordinated across systems
- Silos disappear between teams
- Accuracy increases while blind spots shrink

The result? AI that doesn't just automate tasks. AI that understands the business it's operating within.

At Aviso AI, we've embedded this ontology layer at the core of our agentic architecture. It's what enables our agents to reason, act, and collaborate across GTM ecosystems.

Because if compute is the fuel, ontology is the map. Context isn't just king. When it's structured as business ontology, it's transformational.
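As a rough illustration of the "raw data → ontology layer → reasoning system" stack, here is a minimal sketch. All of the class names, entities, and the `context_for` helper are illustrative assumptions, not Aviso AI's actual implementation; it only shows the shape of a middle layer that turns raw records into structured context an agent can reason over.

```python
# Minimal sketch of an "ontology as middle layer" between raw data and agents.
# Class names, entities, and helpers are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                      # e.g. "Account", "Deal", "Rep"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: str
    predicate: str                 # e.g. "owned_by", "escalates_to"
    target: str

@dataclass
class OntologyLayer:
    """Middle layer: raw data in, structured context out."""
    entities: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    def add_entity(self, e: Entity) -> None:
        self.entities[e.name] = e

    def relate(self, source: str, predicate: str, target: str) -> None:
        self.relationships.append(Relationship(source, predicate, target))

    def context_for(self, entity_name: str) -> str:
        """Render an entity's neighborhood as text a reasoning system can consume."""
        lines = [
            f"{r.source} --{r.predicate}--> {r.target}"
            for r in self.relationships
            if entity_name in (r.source, r.target)
        ]
        return "\n".join(lines)

# Raw data -> ontology layer
ont = OntologyLayer()
ont.add_entity(Entity("Acme Q3 renewal", "Deal", {"stage": "negotiation"}))
ont.add_entity(Entity("Jordan", "Rep"))
ont.relate("Acme Q3 renewal", "owned_by", "Jordan")
ont.relate("Jordan", "escalates_to", "VP Sales")

# Ontology layer -> reasoning system: structured context goes into the prompt,
# instead of raw rows being dumped straight into the model.
print(ont.context_for("Acme Q3 renewal"))
```

The design choice this illustrates is the post's core claim: agents consume the ontology's rendered context, not the raw data, so ownership hierarchies and escalation paths travel with every query.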