NodePrime builds institutional memory for every client — so consultants never
re-learn context, meetings compound into insight, and the system
gets better on its own with every engagement.
Problem
Each new engagement starts from zero. Client context lives scattered across Drive, Slack, and the CRM. Consultants burn 30-40% of billable hours on discovery.
Solution
Three memory layers per client (semantic · episodic · procedural) fed from real-world sources. Claude agents plug in, execute, and feed results back.
Differentiator
Self-improving loop. When an answer is wrong → the underlying Skill, document, or memory is fixed automatically. Same stack, better every week.
Business outcome
Deliver Advisory-tier insight at Blueprint-tier price. Scale from 1 client to 20 without hiring 20 consultants. Margin > 90%.
Cloudflare Pages (frontend · free) · Cloudflare Tunnel (secure proxy) · Hetzner €5/mo (backend + DB) · Postgres + pgvector · Graphiti + FalkorDB · Skills · Claude API via CF AI Gateway · HillClimbLoop
flowchart TD
subgraph EXT["📥 Client Data Sources · via Composio"]
direction LR
D1["🗂 Google Drive\nSOPs · Contracts · Docs"]
D2["💬 Slack\nThreads · Decisions"]
D3["📧 Gmail · Calendar\nEmails · Meetings"]
D4["🏢 CRM · ERP\nDeals · Operations"]
D5["🎙 Meeting Transcripts\nZoom · Meet · live"]
end
subgraph MEM["🧠 Memory Layer · per client, isolated"]
direction TB
subgraph SEM["Semantic Memory"]
LR["LightRAG\nKnowledge Graph + Vector\n❓ why did this happen?"]
PG[("pgvector\nPostgreSQL")]
LR -.- PG
end
subgraph EPI["Episodic Memory"]
GR["Graphiti\nTemporal Knowledge Graph\n📖 what do we know about this client?"]
FK[("FalkorDB\nRedis-based graph")]
GR -.- FK
end
subgraph PRO["Procedural Memory"]
SKL["Skills\nhow-to · checklists · runbooks\n⚙️ how to do the task"]
end
end
subgraph AGT["⚙️ Agent Runtime"]
SR["Skill Runner\norchestrator\nassembles context"]
CL["Claude\nSonnet · Opus\nexecution"]
SR -->|context assembled| CL
end
subgraph OUT["📤 Outputs"]
O1["📄 Analysis Report"]
O2["🤖 Automated Action\nSlack · Drive · CRM"]
O3["💡 Insight / Recommendation"]
end
subgraph LOOP["🔄 Self-Improvement Loop"]
direction LR
FB["Feedback\nConsultant · Client"]
SC["Scorer\nquality · accuracy · relevance"]
HL["HillClimb Loop\nauto-optimization"]
FB --> SC --> HL
end
D1 & D2 & D3 & D4 -->|"batch ingestion\nonboarding + daily sync"| LR
D5 -->|"real-time\nafter each session"| GR
SKL -->|"load on demand · lazy"| SR
LR -->|"inject at start · what exists"| SR
GR -->|"inject at start · what we know"| SR
CL --> O1 & O2 & O3
O1 & O2 & O3 --> FB
HL -->|"skill was wrong → improve skill"| SKL
HL -->|"document was wrong → fix + reindex"| LR
HL -->|"new experience → write to memory"| GR
classDef ext fill:#0f2744,stroke:#1d4ed8,color:#93c5fd
classDef sem fill:#0a1a35,stroke:#1d4ed8,color:#60a5fa
classDef epi fill:#1a0a35,stroke:#7c3aed,color:#c084fc
classDef pro fill:#0a2010,stroke:#16a34a,color:#4ade80
classDef agt fill:#1a1a0a,stroke:#ca8a04,color:#fde68a
classDef out fill:#0a1a20,stroke:#0e7490,color:#67e8f9
classDef loop fill:#2a0a0a,stroke:#dc2626,color:#fca5a5
class D1,D2,D3,D4,D5 ext
class LR,PG sem
class GR,FK epi
class SKL pro
class SR,CL agt
class O1,O2,O3 out
class FB,SC,HL loop
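The three HL arrows in the diagram boil down to a routing decision: score the answer, then send the fix to the Skill, the document, or episodic memory. A minimal Python sketch of that routing (every name here, `Feedback`, `route_fix`, the cause labels, is illustrative, not the actual HillClimbLoop API):

```python
from dataclasses import dataclass

# Hypothetical failure categories the Scorer might emit; the real
# HillClimbLoop interface is not specified in this document.
SKILL_ERROR, STALE_DOCUMENT, MISSING_CONTEXT = "skill", "document", "memory"

@dataclass
class Feedback:
    answer_id: str
    score: float   # 0.0-1.0 from the Scorer (quality · accuracy · relevance)
    cause: str     # one of the categories above

def route_fix(fb: Feedback, threshold: float = 0.7) -> str:
    """Mirror the three HL arrows: skill was wrong -> improve skill,
    document was wrong -> fix + reindex, otherwise -> write to memory."""
    if fb.score >= threshold:
        return "write_episode"      # good answer: record as new experience
    if fb.cause == SKILL_ERROR:
        return "improve_skill"      # patch the procedural Skill
    if fb.cause == STALE_DOCUMENT:
        return "fix_and_reindex"    # correct the doc, rebuild LightRAG index
    return "write_episode"          # default: log the episode to Graphiti

print(route_fix(Feedback("a1", 0.4, SKILL_ERROR)))  # improve_skill
```

The point of the router shape: the loop never retrains a model, it edits the artifact that caused the miss, so the same stack gets better every week.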
Semantic Memory · LightRAG
Knowledge Graph + Vector Search
Answers "why did this happen?"
Batch ingestion: Drive, Slack, Gmail, CRM
Storage: PostgreSQL + pgvector
Query modes: naive / local / global / hybrid / mix
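A rough intuition for the query modes, as a toy Python sketch (this is not LightRAG's actual implementation, which involves entity extraction and graph communities): `local` pulls entity-level facts, `global` pulls graph-level summaries, `hybrid`/`mix` merge both.

```python
# Toy stand-ins for the two retrieval granularities.
local_index = {   # entity-level facts (vector-chunk stand-in)
    "acme deal": ["Oct 3: proposal sent", "Oct 7: competitor chosen"],
}
global_index = {  # graph-level summaries
    "acme deal": ["Acme relationship cooled in Q3 over budget"],
}

def query(topic: str, mode: str) -> list[str]:
    if mode == "local":
        return local_index.get(topic, [])
    if mode == "global":
        return global_index.get(topic, [])
    if mode in ("hybrid", "mix"):   # merge both granularities
        return local_index.get(topic, []) + global_index.get(topic, [])
    return []                       # "naive" plain vector search omitted here

print(len(query("acme deal", "hybrid")))  # 3
```

Hybrid is the default worth reaching for: "why" questions need both the specific events and the bigger-picture summary.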
🚀
Client Onboarding
Flow
Composio connects sources →
Batch ingest → LightRAG builds KG →
Historical emails/Slack → Graphiti seeds timeline →
Consultant asks: "Who are the key people? What are the current priorities?"
Output: Full client briefing with org chart, active projects, recent decisions, open risks — generated in hours.
Discovery drops from 2-4 weeks to 2-4 days. Start billable work on Day 3.
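The onboarding chain above, sketched with stub functions (the connector and ingest calls are placeholders, not the Composio/LightRAG/Graphiti client APIs):

```python
# Stubbed onboarding pipeline: sources -> batch ingest -> ready for Q&A.
def pull_documents(source: str) -> list[str]:
    """Placeholder for a Composio connector pull."""
    samples = {
        "drive": ["SOP: incident response", "MSA contract"],
        "slack": ["decision: freeze Q4 hiring"],
    }
    return samples.get(source, [])

def onboard_client(sources: list[str]) -> dict:
    docs = [d for s in sources for d in pull_documents(s)]
    return {
        "ingested": len(docs),          # -> LightRAG batch ingest
        "timeline_seeded": bool(docs),  # -> Graphiti episodes from history
        "ready_for_qa": len(docs) > 0,  # consultant can start asking
    }

state = onboard_client(["drive", "slack"])
print(state["ingested"])  # 3
```

Everything downstream (the briefing, the Q&A) reads from the state built here; no per-engagement re-learning.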
🎙️
Meeting Intelligence — Zero-Effort Documentation
👤 Consultant in Zoom/Meet with client
Trigger: Meeting ends. Transcript available.
Flow
Transcript → Graphiti (new episode, temporal) →
Entities/decisions extracted → knowledge graph updated →
Action items → Composio creates tasks in CRM/Asana →
Summary emailed to attendees
Output: Post-meeting summary + tasks + updated client state, all automatic. Nothing falls through the cracks.
Consultant saves 30-60 min per meeting. Client gets faster follow-through.
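The extraction step in that flow, reduced to a toy: pull action items out of a transcript before handing them to task creation. The `AI:` line convention and the regex are assumptions for illustration, not the actual entity-extraction pipeline.

```python
import re

# Naive action-item extraction from a transcript; a stand-in for the
# Graphiti episode + Composio task-creation step described above.
def extract_action_items(transcript: str) -> list[str]:
    # Hypothetical convention: lines starting with "AI:" mark action items.
    return [m.group(1).strip()
            for m in re.finditer(r"^AI:\s*(.+)$", transcript, re.MULTILINE)]

transcript = """\
Alice: we should ship the pilot by Friday.
AI: Bob to send revised SOW by Thursday
AI: Schedule CTO follow-up call
"""
items = extract_action_items(transcript)
print(items)  # ['Bob to send revised SOW by Thursday', 'Schedule CTO follow-up call']
```

In the real flow this role is played by LLM-based entity and decision extraction, which is why "nothing falls through" even without a marker convention.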
❓
Strategic "Why?" Q&A with Evidence
👤 Consultant preparing QBR or strategy review
Trigger: "Why did we lose the Acme deal in Q3?"
Flow
Graphiti temporal query → timeline of events → LightRAG → related emails, Slack threads, docs →
Claude reasons across both → answer with citations
Output: "Sept 15: CTO mentioned Q4 budget freeze. Oct 3: our proposal arrived. Oct 7: competitor (cheaper) was chosen. Root cause = late + over-priced."
Institutional memory that scales across consultants — nobody re-learns the story.
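The temporal half of that answer is just an ordered query over an event timeline. A toy version (the events mirror the example output above; the timeline structure is illustrative, not Graphiti's data model):

```python
from datetime import date

# Toy Graphiti-style timeline: (when, what) pairs per client.
timeline = [
    (date(2024, 9, 15), "CTO mentioned Q4 budget freeze"),
    (date(2024, 10, 3), "our proposal arrived"),
    (date(2024, 10, 7), "competitor (cheaper) was chosen"),
]

def events_before(cutoff: date) -> list[str]:
    """Everything known before a decision point, in order."""
    return [what for when, what in sorted(timeline) if when < cutoff]

# Context Claude would reason over for "why did we lose on Oct 7?"
context = events_before(date(2024, 10, 7))
print(context[0])  # CTO mentioned Q4 budget freeze
```

LightRAG then attaches the evidence (emails, threads, docs) to each event, and Claude reasons across both to produce the cited root cause.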
🚨
Proactive Risk Detection
👤 NodePrime Ongoing Advisory service
Trigger: Daily scheduled skill run at 06:00.
Flow
Skill: client-health-check →
Signals from Slack (CEO quiet 7d), email (procurement delays),
CRM (deal stalled) → compare to Graphiti baseline →
Score drop > 15% → alert consultant
Output: "Client Acme: health dropped 18% this week — 3 warning signals. Recommended action: schedule call with CTO."
Catch churn before the client complains. Ongoing Advisory becomes proactive, not reactive.
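The scoring arithmetic behind the alert, sketched in Python. The signal names, weights, and baseline are assumptions chosen to reproduce the example's "3 signals, 18% drop"; they are not NodePrime's actual health model.

```python
# Illustrative health scoring for the daily client-health-check skill.
BASELINE = 0.82  # rolling baseline from Graphiti (assumed value)

def health_score(signals: dict[str, bool]) -> float:
    # Hypothetical penalty weights per warning signal.
    weights = {"ceo_quiet_7d": 0.07, "procurement_delay": 0.04,
               "deal_stalled": 0.04}
    penalty = sum(w for name, w in weights.items() if signals.get(name))
    return round(BASELINE - penalty, 2)

def should_alert(score: float, baseline: float = BASELINE,
                 threshold: float = 0.15) -> bool:
    """Alert when the relative drop vs. baseline exceeds 15%."""
    return (baseline - score) / baseline > threshold

score = health_score({"ceo_quiet_7d": True, "procurement_delay": True,
                      "deal_stalled": True})
print(score, should_alert(score))  # 0.67 True  -> ~18% drop, alert fires
```

The threshold comparison is the whole trick: a fixed 15% relative-drop gate keeps the daily run quiet until several signals stack up, which is what makes Ongoing Advisory proactive instead of noisy.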