

An operational intelligence layer, the ontology models the relationships between entities and actions, enabling a “digital twin” of the organization.
Philosophically, ontology means the study of being. Does ‘nothing’ exist? Does ‘God’ exist? Do ‘I’ exist? To articulate metaphysical questions like these in a formal way, philosophers developed structures of rules and categories into which entities can be grouped.
In data science and AI, ontologies are similarly formal and explicit specifications of concepts (like a vocabulary) and their relationships within a specific domain (like a network or blueprint).
Specifically, to ensure logical consistency for AI agents, an ontology should nail down three things: its entities, the relationships between them, and the rules (axioms) that constrain them.
These definitions help algorithms understand the real-world environment they operate in, and make multi-agent communication more effective.
What makes an ontology powerful compared to a traditional database is how it formally models the relationships between entities and the actions they can perform. This unified model creates a “digital twin” of an organization that can provide insights, execute tasks, then update itself based on the results.
The main utility is in replacing siloed systems with a single, intelligent operational layer on top of an organization.
Let's take two examples from different industries, a General Hospital and a Supply Chain scenario:
The main goal of an ontology is to become the central nervous system of an enterprise, providing:
These capabilities are also the main value proposition of the ontology for any organization.
In practice, this means that:
We need a concrete, machine-readable set of files and databases. Not a diagram on a whiteboard. Not just a system prompt.
Formal language
It's defined using standard languages from the Semantic Web stack, primarily OWL (Web Ontology Language) and RDF (Resource Description Framework). These are W3C standards designed for this exact purpose.
Implementation as a knowledge graph
In practice, the ontology (the schema/model) and the instance data (the actual things in your world, like "Car_789") are stored together in a knowledge graph. This is most often housed in a specialized database called a graph database (e.g. Neo4j, Amazon Neptune, Stardog, AllegroGraph). These databases are optimized for storing and querying the complex relationships defined by the ontology.
The agents interact with the ontology through APIs and tools. They don't get a 50-page PDF explaining it.
This is the primary mechanism. Each agent in a multi-agent system is given a set of "tools" it can use. Several of these tools are specifically designed to interact with the knowledge graph.
Querying the graph
The agent can call a tool like query_kg(query). The agent's task is to formulate the right query in a language like SPARQL (for RDF databases) or Cypher (for Neo4j).
Example: An "Inventory Management Agent" needs to know which parts are running low. It would formulate a SPARQL query like:
PREFIX ont: <http://mycompany.com/ontology#>
SELECT ?part ?stockLevel
WHERE {
    ?part a ont:Component .
    ?part ont:hasStockLevel ?stockLevel .
    FILTER(?stockLevel < 10)
}
The agent's LLM brain is instructed to generate this query based on the natural language request, "Find all components with stock levels below 10."
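To make the mechanics concrete, here is a minimal sketch of what a `query_kg`-style tool does under the hood. It uses a toy in-memory list of triples instead of a real graph database, and all part names and stock levels are invented for illustration; in production the agent would send the SPARQL query above to a SPARQL endpoint.

```python
# Toy in-memory triple store standing in for a real graph database.
# A production agent would send SPARQL to e.g. Neptune or Stardog;
# here we filter (subject, predicate, object) tuples directly.
TRIPLES = [
    ("Part_ABC", "a", "Component"),
    ("Part_ABC", "hasStockLevel", 4),
    ("Part_XYZ", "a", "Component"),
    ("Part_XYZ", "hasStockLevel", 120),
    ("Warehouse_1", "a", "Warehouse"),
]

def query_kg(threshold: int) -> list[tuple[str, int]]:
    """Return (part, stockLevel) for every Component below the threshold."""
    components = {s for s, p, o in TRIPLES if p == "a" and o == "Component"}
    return [
        (s, o)
        for s, p, o in TRIPLES
        if p == "hasStockLevel" and s in components and o < threshold
    ]

print(query_kg(10))  # [('Part_ABC', 4)]
```

The agent never sees this implementation; it only knows the tool's name and signature, and that the tool answers questions phrased against the ontology's vocabulary.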
Updating the graph (the closed loop)
The agent can also use a tool to write back to the ontology. This is essential for a dynamic system where actions and results are recorded in real time.
Example: A "Procurement Agent" is tasked with ordering more of a specific part. After successfully placing an order via an API call to a purchasing system, it uses a tool like update_kg(update_statement) to add new information:
PREFIX ont: <http://mycompany.com/ontology#>
INSERT DATA {
    ont:PurchaseOrder_987 a ont:PurchaseOrder .
    ont:PurchaseOrder_987 ont:forComponent ont:Part_ABC .
    ont:PurchaseOrder_987 ont:hasStatus "Placed" .
}
Now, all other agents can see that this part is on order.
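A sketch of this closed loop, again with a toy shared triple store rather than real SPARQL updates (the `update_kg` name and all identifiers are hypothetical):

```python
# Toy knowledge graph shared by all agents; a real system would issue
# SPARQL INSERT DATA against the graph database instead.
KG: list[tuple[str, str, str]] = [
    ("Part_ABC", "a", "Component"),
]

def update_kg(triples: list[tuple[str, str, str]]) -> None:
    """Write new facts back to the graph so other agents can see them."""
    KG.extend(triples)

# The Procurement Agent records the order it just placed, closing the loop:
update_kg([
    ("PurchaseOrder_987", "a", "PurchaseOrder"),
    ("PurchaseOrder_987", "forComponent", "Part_ABC"),
    ("PurchaseOrder_987", "hasStatus", "Placed"),
])

# Any other agent can now check whether the part is already on order:
on_order = any(p == "forComponent" and o == "Part_ABC" for s, p, o in KG)
print(on_order)  # True
```

The write-back is what turns the graph from a passive record into a coordination surface: one agent's action becomes every other agent's context.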
While the entire ontology isn't stuffed into the prompt, a relevant subset of it is.
Agent role and capabilities
An agent's system prompt will absolutely define its role in the context of the ontology. For example, the "Logistics Agent" prompt would state:
"You are a Logistics Orchestration Agent. Your goal is to ensure the timely delivery of Shipments. You can interact with objects like Shipment, Warehouse, Carrier, and Route. Your available tools are get_shipment_status(shipment_id), calculate_optimal_route(origin, destination), and update_shipment_eta(shipment_id, new_eta)."
Grounding
The ontology provides the "nouns" and "verbs" the agent can use. When a user asks, "What's the status of the shipment to Chicago?", the agent's LLM knows that "shipment" is a formal Shipment object in its world model and that it can use its tools to query information about it.
For very large ontologies, the relevant parts of the schema (the definitions of objects and relationships) can be embedded into vector space. When an agent has a query, it first does a similarity search on the ontology's documentation to find the relevant classes and properties. This retrieved context is then added to its prompt to help it formulate the correct API call or graph query. This is how it connects to a "knowledge base" representing the whole structure.
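The retrieval step can be sketched with a deliberately simple stand-in for a real embedding model: bag-of-words vectors and cosine similarity over one-line class descriptions (all class docs here are invented). A production system would use a proper embedding model and vector store, but the shape of the lookup is the same.

```python
import math
import re

# Toy "embedding": bag-of-words vectors stand in for a real embedding model.
def embed(text: str) -> dict[str, int]:
    vec: dict[str, int] = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical one-line docs for each ontology class.
SCHEMA_DOCS = {
    "Shipment": "A shipment of goods moving along a route to a destination.",
    "Warehouse": "A facility where inventory is stored.",
    "Carrier": "A company that transports shipments.",
}

def retrieve_schema(question: str, k: int = 1) -> list[str]:
    """Return the k ontology classes most relevant to the question."""
    q = embed(question)
    ranked = sorted(
        SCHEMA_DOCS,
        key=lambda c: cosine(q, embed(SCHEMA_DOCS[c])),
        reverse=True,
    )
    return ranked[:k]

print(retrieve_schema("What's the status of the shipment to Chicago?"))  # ['Shipment']
```

The retrieved class definitions are then pasted into the agent's prompt, so it only ever reasons over the slice of the ontology relevant to the current request.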
The ontology itself is the map. An agent understands its position by knowing its:
Yes, the operational layer needs to be fed by a reliable source of data. Ontologies are highly complementary to modern data-modeling methodologies like the Data Vault, so it's worth a quick look at where each sits in the architecture:
| | Data Vault | Ontology |
| --- | --- | --- |
| Primary goal | Store and integrate historical data. | Activate data for real-time operations. |
| Layer | System of Record / Integration Layer. | System of Engagement / Decision Layer. |
| Time focus | Historical, point-in-time, auditable. | Real-time, current state, dynamic. |
| Core question | "What was true?" | "What is true now, and what should we do?" |
| Technology | Typically relational databases (Snowflake, BigQuery). | Typically graph databases (Neo4j, Neptune). |
| Represents | Raw, integrated facts with history. | High-level, semantic business concepts. |
| Use case | Business Intelligence (BI), reporting, analytics. | AI-driven automation, operational apps, semantic search. |
So, while the Data Vault archives the complete historical truth, the ontology can be used to activate the relevant parts of that truth for immediate action.
While many companies using these systems are secretive, there are some tangible examples.
Airbus Skywise
How it works
Skywise integrates data from thousands of aircraft, maintenance logs, and supply chain systems. The ontology defines Aircraft, Engine, ComponentPart, MaintenanceRecord, FlightPlan, and FaultCode.
Agent interaction
A "Predictive Maintenance Agent" could continuously monitor streaming data from an Engine's sensors. When it detects a pattern that matches a known FaultCode profile (defined in the ontology), it can:
Anti-money laundering in banking
How it works
The ontology defines Customer, Account, Transaction, Beneficiary, and RiskIndicator. It models complex ownership structures (e.g., shell companies).
Agent interaction
An "Alert Triage Agent" sees a new high-value transaction. It queries the knowledge graph to trace the funds' origin and destination, traversing multiple layers of Customer and Account relationships. If it finds a link to a sanctioned entity (another object in the graph), it escalates the transaction by creating a SuspiciousActivityReport object, which is then assigned to a human compliance officer.
Paper title: KROMA: Ontology Matching with Knowledge Retrieval and Large Language Models
arXiv link: https://arxiv.org/pdf/2507.14032

So, what if you have clashing definitions within your processes? What if your main ontology calls it a ‘car’, but one of your databases uses the term ‘automobile’?
Traditionally, you’d need handcrafted rules to sort this out, but those do not generalize well.
That’s why automatic ontology matching is such a useful technique: it resolves these discrepancies without handcrafted rules, making ontologies more robust and practical to apply.
The implementation is a tiered system:
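To give a feel for the tiered idea, here is a toy matcher, not KROMA's actual pipeline: cheap checks run first, and only unresolved pairs escalate to an expensive LLM call (stubbed out here; the synonym lexicon and all terms are invented for illustration).

```python
# Toy tiered concept matcher (NOT the KROMA implementation): cheap
# deterministic checks first, escalating to an LLM only when they fail.
SYNONYMS = {"automobile": "car", "auto": "car", "lorry": "truck"}

def llm_match(a: str, b: str) -> bool:
    """Stub for the final tier: in KROMA-style systems an LLM with
    retrieved context decides whether two concepts are equivalent."""
    raise NotImplementedError("call your LLM of choice here")

def concepts_match(a: str, b: str) -> bool:
    a, b = a.strip().lower(), b.strip().lower()
    if a == b:                                    # tier 1: exact match
        return True
    if SYNONYMS.get(a, a) == SYNONYMS.get(b, b):  # tier 2: synonym lexicon
        return True
    return llm_match(a, b)                        # tier 3: LLM fallback

print(concepts_match("Car", "automobile"))  # True (resolved at tier 2)
```

The tiering matters economically: most pairs are settled by the deterministic tiers, so the LLM is reserved for the genuinely ambiguous cases.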
The ambition is to move beyond siloed data and applications to a truly integrated, intelligent, and responsive operational environment. The success of this model across diverse domains, from healthcare to energy to manufacturing, shows that automated actions become far more powerful when they are grounded in deep semantic understanding.