
Core Concepts: The Building Blocks of Context Engineering

Understanding MSP's core concepts transforms how you think about developer productivity. These aren't just features—they're fundamental primitives for a new way of working.

The Session: Your Atomic Unit of Work

What is a Session?

A session is a continuous period of focused development work with clear start and end points. Unlike arbitrary time blocks or task tickets, sessions capture the full context of your work.

# A session has:
- Identity: Unique ID (msp-2025-07-20-093042)
- Lifecycle: Start → Updates → End
- Context: All decisions, progress, and blockers
- Purpose: Clear goal or focus area
- Continuity: Links to previous/next sessions

Session Anatomy

Session:
  id: msp-2025-07-20-093042
  started: 2025-07-20 09:30:42
  user: developer@team.com
  project: e-commerce-api
  
  state:
    progress: 67%
    phase: payment-integration
    last_update: "Fixed webhook validation"
    
  contains:
    updates: 12
    decisions: 3
    blockers: 1 (resolved)
    code_refs: 8
    
  links:
    previous: msp-2025-07-19-141558
    epic: LIN-234
    branch: feature/stripe-payments
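
To make this anatomy concrete, here is a rough sketch of how such a session could land in the Neo4j graph. The Session and Developer labels and the PARTICIPATED_IN relationship follow the patterns shown later on this page; the FOLLOWED_BY link and the exact property names are illustrative assumptions, not MSP's documented schema.

// Illustrative only: merging the session above into the graph
MERGE (s:Session {id: 'msp-2025-07-20-093042'})
  SET s.started = datetime('2025-07-20T09:30:42'),
      s.project = 'e-commerce-api',
      s.progress = 67,
      s.phase = 'payment-integration'
MERGE (prev:Session {id: 'msp-2025-07-19-141558'})
MERGE (prev)-[:FOLLOWED_BY]->(s)   // assumed name for the previous/next link
MERGE (dev:Developer {email: 'developer@team.com'})
MERGE (dev)-[:PARTICIPATED_IN]->(s)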

The R³ Protocol: Route-Recall-Record

Understanding R³

The R³ Protocol is MSP's core loop, inspired by GPS navigation:

  1. ROUTE: Where am I going?
  2. RECALL: Where have I been?
  3. RECORD: What happened here?

This creates a continuous, memory-augmented development cycle.

Route: Defining Your Destination

# Route phase happens at session start
.\msp.ps1 start --goal "Implement payment webhooks"

# MSP helps you route by showing:
- Current epic/milestone progress
- Upcoming tasks from Linear
- Architecture decisions that guide you
- Team dependencies

Routing Principles:

  • Clear destination improves focus
  • Waypoints (milestones) track progress
  • Rerouting is normal and tracked
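
The milestone progress and guiding decisions surfaced during routing could come from queries along these lines. This is a sketch against the labels used elsewhere on this page; the m.name property and the exact filters are assumptions.

// Milestone progress at a glance (illustrative)
MATCH (s:Session)-[:ADVANCES]->(m:Milestone)
RETURN m.name AS milestone, max(s.progress) AS latestProgress
ORDER BY latestProgress DESC

// Recent architecture decisions that should shape today's route (illustrative)
MATCH (d:Decision)
WHERE d.category = 'architecture'
RETURN d.content
ORDER BY d.timestamp DESC
LIMIT 5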

Recall: Loading Your Context

# Recall happens automatically
.\msp.ps1 start

🧠 RECALL Phase:
- Last session: Yesterday, 3.5 hours
- Previous work: JWT refresh token implementation  
- Key decision: 24-hour token expiry
- Active blocker: None
- Progress trend: +15% this week

Recall Powers:

  • Instant context restoration
  • Decision rationale at fingertips
  • Pattern recognition across sessions
  • Team knowledge aggregation
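
Behind that output, recall is essentially a graph lookup. A minimal sketch, assuming the documented CONTAINS and PARTICIPATED_IN relationships and illustrative property names:

// Load the most recent session and its decisions for one developer
MATCH (dev:Developer {email: 'developer@team.com'})-[:PARTICIPATED_IN]->(s:Session)
WITH s
ORDER BY s.started DESC
LIMIT 1
OPTIONAL MATCH (s)-[:CONTAINS]->(d:Decision)
RETURN s.id AS lastSession, s.last_update AS lastUpdate,
       collect(d.content) AS keyDecisions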

Record: Capturing Your Journey

# Recording happens continuously
.\msp.ps1 update "Implemented webhook endpoint" 68
.\msp.ps1 decide "Using synchronous processing for reliability"
.\msp.ps1 blocker "Stripe signature validation failing"
.\msp.ps1 resolve "Fixed: needed raw request body"

Recording Best Practices:

  • Record as you work, not after
  • Capture the "why" in decisions
  • Small updates are valuable
  • Blockers help future you
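
Each of those commands ultimately becomes a small write to the graph. As a hedged illustration (the property names are assumptions; the CONTAINS relationship is documented below), a decide call might translate to something like:

// Hypothetical translation of the decide command into a graph write
MATCH (s:Session {id: 'msp-2025-07-20-093042'})
CREATE (d:Decision {
  content: 'Using synchronous processing for reliability',
  timestamp: datetime()
})
CREATE (s)-[:CONTAINS]->(d)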

The Knowledge Graph: Your Project's Memory

What is a Knowledge Graph?

MSP builds a Neo4j knowledge graph that represents all relationships in your project:

[Figure: Neo4j knowledge graph visualization]

The knowledge graph captures relationships between sessions, decisions, features, and team members - creating a living memory of your project's evolution.

// Your project as a living graph
(Session)-[:CONTAINS]->(Decision)
(Decision)-[:RESOLVES]->(Blocker)
(Session)-[:ADVANCES]->(Milestone)
(Developer)-[:PARTICIPATED_IN]->(Session)
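
These patterns are directly queryable. For instance, a single traversal over the relationships above (with an assumed description property on Blocker) answers "which decisions resolved which blockers, and in which sessions":

// Walk the documented relationships in one query
MATCH (s:Session)-[:CONTAINS]->(d:Decision)-[:RESOLVES]->(b:Blocker)
RETURN s.id AS session, d.content AS decision, b.description AS blocker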

Why Graph, Not Database?

Traditional databases store data. Graphs store relationships.

Graph databases are often preferred over relational databases for Generative AI (GenAI) applications because they model and query relationships between data points efficiently. That strength matters wherever understanding connections and context is crucial: knowledge representation (the present case), as well as recommendation systems and fraud detection.

Graph databases store relationships directly as part of the data model, unlike relational databases that use JOIN operations to infer relationships. This direct storage allows for faster and more efficient traversal of relationships, which is essential for tasks like finding paths, detecting patterns, and understanding context in a network.

Graph databases also adapt more easily to changing data models and evolving requirements. Unlike the rigid schemas of relational databases, they allow new nodes, relationships, and properties to be added without disrupting existing data or requiring extensive schema changes. This agility is crucial in GenAI development, where experimentation and iteration are common, and new data types or relationships may need to be added as the AI model learns and evolves.

Graph databases are optimized for complex relationship-based queries, such as finding the shortest path between two nodes or detecting cycles in a graph. These types of queries can be computationally expensive in relational databases but are straightforward and efficient in graph query languages like Cypher, used by Neo4j.
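
For example, a shortest-path question is a one-liner in Cypher. The decision contents below are placeholders; only the LED_TO relationship appears elsewhere on this page.

// Shortest chain of decisions connecting two recorded choices (illustrative)
MATCH (a:Decision {content: 'Using PostgreSQL for main database'}),
      (b:Decision {content: 'Adopting microservices'})
MATCH p = shortestPath((a)-[:LED_TO*]-(b))
RETURN p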

Most importantly for context engineering, graph databases are well suited to building and managing knowledge graphs: structured representations of facts, relationships, and concepts that give data context and meaning, and that improve the accuracy and explainability of the context you store. This is the same reason knowledge graphs are used to give chatbots context, helping them understand user intent and provide more accurate, informative responses.

Examples of data retrieval using the Cypher query language:

// Find all decisions that led to current architecture
MATCH path = (d1:Decision)-[:LED_TO*]->(d2:Decision)
WHERE d2.content CONTAINS 'microservices'
RETURN path

// Discover who knows about specific features
MATCH (dev:Developer)-[:WORKED_ON]->(s:Session)-[:TOUCHED]->(f:Feature)
WHERE f.name = 'Authentication'
RETURN dev.name, count(s) as expertise

Graph Growth Over Time

Day 1:   (Session)
Day 7:   (Session)-->(Decision)-->(Session)
Day 30:  Complex web of interconnected knowledge
Day 90:  Full project archaeology available
Day 365: Institutional memory preserved

Progress: Beyond Binary States

The Progress Spectrum

Traditional tools: Open → Closed

MSP: 0% → 1% → 2% → ... → 100%

This granularity changes everything:

# Traditional
"Working on authentication" (for 2 weeks)

# MSP
Day 1: "Auth research" (5%)
Day 2: "Database schema" (12%)
Day 3: "JWT implementation" (25%)
Day 4: "Refresh tokens" (40%)
...
Day 10: "Production ready" (100%)

Progress Psychology

Visible progress is motivating:

  • 1% is better than 0%
  • Daily progress compounds
  • Trends reveal velocity
  • Completion is satisfying

Progress Queries

// Weekly velocity
MATCH (s:Session)
WHERE s.timestamp > datetime() - duration('P7D')
RETURN avg(s.progressDelta) as weeklyVelocity

// Progress breakdown by feature
MATCH (s:Session)-[:ADVANCED]->(f:Feature)
RETURN f.name, sum(s.progressDelta) as totalProgress
ORDER BY totalProgress DESC

Decisions: Architectural Memory

What is a Decision?

A decision is a recorded choice with rationale:

.\msp.ps1 decide @"
Using PostgreSQL instead of MongoDB because:
1. Need ACID transactions for payments
2. Relational model fits our domain
3. Team expertise with PostgreSQL
Considered: MongoDB (eventual consistency), DynamoDB (vendor lock-in)
"@

Decision Attributes

Decision:
  id: dec-2025-07-20-094532
  session: msp-2025-07-20-093042
  timestamp: 2025-07-20 09:45:32
  
  content: "Using PostgreSQL for main database"
  rationale: 
    - "ACID transactions required"
    - "Relational model fits domain"
    - "Team expertise"
    
  alternatives:
    - name: MongoDB
      reason_rejected: "Eventual consistency issues"
    - name: DynamoDB  
      reason_rejected: "Vendor lock-in concerns"
      
  impact: high
  reversible: false
  category: architecture
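
One way those attributes could map onto the graph, sketched with an assumed Alternative label and CONSIDERED relationship (neither is part of MSP's documented schema):

// Illustrative: a decision node with its rejected alternatives
MATCH (s:Session {id: 'msp-2025-07-20-093042'})
CREATE (d:Decision {
  id: 'dec-2025-07-20-094532',
  content: 'Using PostgreSQL for main database',
  category: 'architecture',
  impact: 'high',
  reversible: false,
  timestamp: datetime('2025-07-20T09:45:32')
})
CREATE (s)-[:CONTAINS]->(d)
CREATE (d)-[:CONSIDERED]->(:Alternative {name: 'MongoDB', reason_rejected: 'Eventual consistency issues'})
CREATE (d)-[:CONSIDERED]->(:Alternative {name: 'DynamoDB', reason_rejected: 'Vendor lock-in concerns'})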

Decision Graph Power

// Find all database-related decisions
MATCH (d:Decision)
WHERE d.content =~ '.*database.*'
RETURN d ORDER BY d.timestamp

// Trace decision evolution
MATCH path = (d1:Decision)-[:SUPERSEDED_BY*]->(d2:Decision)
RETURN path

// Find decisions by impact
MATCH (d:Decision)-[:AFFECTED]->(f:Feature)
RETURN d, collect(f.name) as impactedFeatures
ORDER BY size(impactedFeatures) DESC

Blockers: Productive Problem Tracking

Blockers vs Issues

Traditional issue tracking is binary. MSP blockers have a lifecycle:

# Encounter blocker
.\msp.ps1 blocker "OAuth redirect failing in production"

# Investigation updates
.\msp.ps1 update "Found: CORS policy blocking redirect"
.\msp.ps1 update "Trying: Whitelist origin in API Gateway"

# Resolution
.\msp.ps1 resolve "Fixed: Added production URL to CORS whitelist"
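
In the graph, that lifecycle can be modeled as state on the blocker node. A sketch, with the status value taken from the queries below and the other properties assumed:

// Marking a blocker resolved (illustrative property names)
MATCH (b:Blocker {description: 'OAuth redirect failing in production'})
SET b.status = 'resolved',
    b.resolution = 'Added production URL to CORS whitelist',
    b.resolvedAt = datetime()
RETURN b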

Blocker Intelligence

// Most common blockers
MATCH (b:Blocker)
RETURN b.category, count(*) as frequency
ORDER BY frequency DESC

// Average resolution time
MATCH (b:Blocker)
WHERE b.status = 'resolved'
RETURN avg(b.resolutionTime) as avgHours

// Blocker patterns by developer
MATCH (d:Developer)-[:ENCOUNTERED]->(b:Blocker)
RETURN d.name, collect(DISTINCT b.category) as blockerTypes

Integration Points: The Ecosystem

Tool Philosophy

MSP enhances, not replaces:

Neo4j: Stores the knowledge graph
Obsidian: Renders human-readable notes
Linear: Syncs with team planning
Git: Enriches commit context
AI: Receives complete context

Integration Flow

User Action → MSP Core → Integration Layer
                ↓              ↓
          Local State    External Tools
                ↓              ↓
          Next Session ← Synchronized
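
Because sessions carry their epic and branch links (see the session anatomy above), cross-tool questions become graph queries. A sketch, assuming epic and branch are stored as simple session properties:

// Everything recorded against one Linear epic (illustrative)
MATCH (s:Session {epic: 'LIN-234'})
RETURN s.branch AS branch, s.progress AS progress, s.last_update AS lastUpdate
ORDER BY s.started DESC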

The Meta-Concept: Context Engineering

What is Context Engineering?

Context Engineering is the practice of systematically capturing, structuring, and leveraging development context to enhance productivity and decision-making.

Core Principles

  1. Context is Sacred: Never lose it
  2. Progress is Continuous: Not binary
  3. Decisions Have Rationale: Always document why
  4. Knowledge Compounds: Today's context helps tomorrow
  5. Sessions Have a Lifecycle: Respect the rhythm

The Paradigm Shift

Old Way (Vibe Coding):
- Start fresh each time
- Rely on memory
- Hope for consistency
- Context in your head

New Way (Context Engineering):
- Start with full context
- Rely on the graph
- Guarantee consistency  
- Context in the system

Putting It All Together

These concepts work synergistically:

  1. Sessions provide the container
  2. R³ Protocol provides the workflow
  3. Knowledge Graph provides the memory
  4. Progress provides the momentum
  5. Decisions provide the rationale
  6. Context Export provides the amplification
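
A single query can already touch several of these pieces at once. As a rough sketch using the labels from this page (m.name is an assumed property):

// Milestones, the sessions that advanced them, and the decisions made along the way
MATCH (m:Milestone)<-[:ADVANCES]-(s:Session)
OPTIONAL MATCH (s)-[:CONTAINS]->(d:Decision)
RETURN m.name AS milestone,
       count(DISTINCT s) AS sessions,
       collect(DISTINCT d.content)[..5] AS decisions
ORDER BY sessions DESC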

The result: You never lose context, make better decisions, and work with superhuman memory.


Next Steps

Now that you understand the concepts:

  1. Try it: Quick Start Guide
  2. See it: Real Examples
  3. Deep dive: Protocol Specification
  4. Implement: Setup Guide

Ready to engineer your context? The future of development awaits.