Knowledge Graphs for AI
Extract entities and relationships from conversations to build a knowledge graph that powers smarter AI interactions with Memoid.
Prerequisites
- Python 3.8+
- Memoid API key (sign up free, get your key from the dashboard)
- OpenAI API key (for chat responses only)
What You’ll Build
A knowledge-graph-enhanced AI system that:
- Extracts entities (people, places, organizations) from conversations
- Identifies relationships between entities
- Queries the graph for connected information
- Combines vector search with graph traversal
Why Knowledge Graphs?
Vector search finds semantically similar content, but knowledge graphs capture structured relationships:
| Approach | Query | Result |
|---|---|---|
| Vector Search | “Who does John work with?” | Finds memories mentioning “John” and “work” |
| Graph Query | “Who does John work with?” | Traverses relationships to find colleagues |
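To make the distinction concrete, here is a toy in-memory sketch (illustrative only, not Memoid's implementation) of how a graph query answers "Who does John work with?" by following works_at edges instead of matching keywords:

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# In the real system, Memoid builds and stores these server-side.
TRIPLES = [
    ("John", "works_at", "Acme"),
    ("Sarah", "works_at", "Acme"),
    ("Priya", "works_at", "Globex"),
    ("John", "reports_to", "Sarah"),
]

def colleagues(person, triples):
    """Find people who share a works_at edge with `person`."""
    employers = {o for s, p, o in triples if s == person and p == "works_at"}
    return sorted(
        s for s, p, o in triples
        if p == "works_at" and o in employers and s != person
    )

print(colleagues("John", TRIPLES))  # -> ['Sarah']
```

A keyword search over the same facts would only surface texts mentioning "John" and "work"; the traversal returns the actual colleague.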
Note: Memoid handles entity extraction, graph storage, and search server-side. You don't need your own LLM for knowledge graph operations; just pass extract_graph: true when adding memories.
Setup
pip install openai requests
Implementation
import os
import requests
from openai import OpenAI

MEMOID_API_KEY = os.environ["MEMOID_API_KEY"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
MEMOID_BASE_URL = "https://api.memoid.dev/v1"

class KnowledgeGraphAssistant:
    def __init__(self):
        self.openai = OpenAI(api_key=OPENAI_API_KEY)
        self.headers = {
            "Authorization": f"Bearer {MEMOID_API_KEY}",
            "Content-Type": "application/json",
        }

    def extract_knowledge(self, text):
        """Extract entities and relationships from text (Memoid handles this server-side)."""
        response = requests.post(
            f"{MEMOID_BASE_URL}/graph/extract",
            headers=self.headers,
            json={"text": text},
        )
        return response.json()

    def query_graph(self, entity, depth=2):
        """Query the knowledge graph starting from an entity."""
        response = requests.post(
            f"{MEMOID_BASE_URL}/graph/query",
            headers=self.headers,
            json={"entity": entity, "depth": depth},
        )
        return response.json()

    def search_memories(self, query, user_id, limit=5):
        """Search vector memories."""
        response = requests.post(
            f"{MEMOID_BASE_URL}/search",
            headers=self.headers,
            json={"query": query, "user_id": user_id, "limit": limit},
        )
        return response.json().get("results", [])

    def recall(self, query, user_id):
        """Get memories + graph context in a single call."""
        response = requests.post(
            f"{MEMOID_BASE_URL}/recall",
            headers=self.headers,
            json={
                "query": query,
                "user_id": user_id,
                "include_graph": True,
                "memory_limit": 10,
            },
        )
        return response.json()

    def chat(self, user_id, message):
        """Chat with hybrid memory and graph context."""
        context = self.recall(message, user_id)

        memory_text = "\n".join(
            f"- {m['memory']}" for m in context.get("memories", [])
        ) or "No relevant memories."

        graph_text = ""
        for e in context.get("entities", []):
            graph_text += f"Entity: {e['name']} ({e['type']})\n"
        for r in context.get("relationships", []):
            graph_text += f"  {r['subject']} --{r['predicate']}--> {r['object']}\n"

        system_prompt = f"""You are an assistant with memories and a knowledge graph.
Memories:
{memory_text}
Knowledge Graph:
{graph_text or "No graph connections."}
Use both sources for comprehensive answers."""

        response = self.openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": message},
            ],
        )
        answer = response.choices[0].message.content

        # Store the exchange and extract knowledge (Memoid handles extraction server-side)
        requests.post(
            f"{MEMOID_BASE_URL}/memories",
            headers=self.headers,
            json={
                "messages": [
                    {"role": "user", "content": message},
                    {"role": "assistant", "content": answer},
                ],
                "user_id": user_id,
                "extract_graph": True,
            },
        )
        return answer

def main():
    assistant = KnowledgeGraphAssistant()
    user_id = "graph_user"
    print("Knowledge Graph Assistant")
    print("Type 'quit' to exit\n")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "quit":
            break
        if user_input:
            response = assistant.chat(user_id, user_input)
            print(f"Assistant: {response}\n")

if __name__ == "__main__":
    main()

Key Concepts
Automatic Entity Extraction
Pass extract_graph: true when adding memories and Memoid extracts entities server-side:
- People: Names, roles, titles
- Organizations: Companies, teams, departments
- Locations: Cities, countries, addresses
- Concepts: Products, technologies, topics
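As a sketch of working with extraction results, the snippet below groups entities by type. The response shape shown here is an assumption based on the name/type fields the chat() method reads; the real /graph/extract response may include more:

```python
# Hypothetical extraction response, mirroring the entity fields
# (name, type) used elsewhere in this guide.
extraction = {
    "entities": [
        {"name": "John", "type": "person"},
        {"name": "Acme", "type": "organization"},
        {"name": "New York", "type": "location"},
    ],
}

def group_by_type(entities):
    """Bucket extracted entities by their type label."""
    grouped = {}
    for e in entities:
        grouped.setdefault(e["type"], []).append(e["name"])
    return grouped

print(group_by_type(extraction["entities"]))
```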
Relationship Detection
Memoid uses domain-flexible predicates — not a fixed set:
| Relationship | Example |
|---|---|
| works_at | “John works at Acme” |
| reports_to | “John reports to Sarah” |
| located_in | “Acme is in New York” |
| married_to | “John is married to Priya” |
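Because predicates are open-ended strings, application code typically filters triples by predicate. A minimal sketch, assuming the relationship dict shape (subject/predicate/object) used in chat() above:

```python
def objects_of(relationships, subject, predicate):
    """Pick the objects of matching triples, e.g. who a person reports_to."""
    return [
        r["object"] for r in relationships
        if r["subject"] == subject and r["predicate"] == predicate
    ]

rels = [
    {"subject": "John", "predicate": "works_at", "object": "Acme"},
    {"subject": "John", "predicate": "reports_to", "object": "Sarah"},
]
print(objects_of(rels, "John", "reports_to"))  # -> ['Sarah']
```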
Unified Recall
The /v1/recall endpoint returns memories, entities, and relationships in a single call, with no extra round-trips.
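The chat() method above turns a recall response into prompt context; factored out as a pure helper (using the same assumed response fields), that step looks like:

```python
def format_graph_context(recall_response):
    """Render entities and relationships from a recall response as text."""
    lines = []
    for e in recall_response.get("entities", []):
        lines.append(f"Entity: {e['name']} ({e['type']})")
    for r in recall_response.get("relationships", []):
        lines.append(f"  {r['subject']} --{r['predicate']}--> {r['object']}")
    return "\n".join(lines) or "No graph connections."

sample = {
    "entities": [{"name": "John", "type": "person"}],
    "relationships": [
        {"subject": "John", "predicate": "works_at", "object": "Acme"}
    ],
}
print(format_graph_context(sample))
```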
Use Cases
- Enterprise assistants: Track org structure and projects
- Research tools: Map connections between concepts
- CRM systems: Understand customer relationships
- Personal assistants: Remember people and connections
Next Steps
- Add custom entity types for your domain
- Build a graph visualization UI using /v1/context
- Implement relationship inference rules
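As a starting point for the visualization step, relationships can be exported to Graphviz DOT. The relationship fields here mirror those used in chat(); whether /v1/context returns the same shape is an assumption to verify against the API reference:

```python
def to_dot(relationships):
    """Convert subject/predicate/object relationship dicts to Graphviz DOT."""
    lines = ["digraph memory {"]
    for r in relationships:
        # One labeled edge per relationship triple.
        lines.append(f'  "{r["subject"]}" -> "{r["object"]}" [label="{r["predicate"]}"];')
    lines.append("}")
    return "\n".join(lines)

rels = [{"subject": "John", "predicate": "works_at", "object": "Acme"}]
print(to_dot(rels))
```

The resulting string can be rendered with any DOT-compatible tool (e.g. the graphviz CLI) or fed to a browser-side renderer.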