MCP Server

Setup guide and tool reference for the TEA Techniques MCP server

What is MCP?

The Model Context Protocol (MCP) is an open standard that lets AI assistants like Claude connect to external data sources. The TEA Techniques MCP server gives Claude direct access to our knowledge graph—enabling conversational discovery of all TEA techniques.

Why Use the MCP Server?

Instead of manually browsing pages or writing API queries, you can ask an AI agent questions in natural language:

| Task | Manual Approach | With MCP |
| --- | --- | --- |
| Find fairness techniques for neural networks | Browse categories, filter manually, read each page | "Find fairness techniques that work with neural networks" |
| Compare SHAP vs LIME | Open two tabs, compare side by side | "Compare SHAP and LIME and help me evaluate which is the best one for my use case" |
| Find evidence for an assurance claim | Search docs, map techniques to claims | "What techniques support the claim that our model treats all groups fairly?" |
| Explore related techniques | Follow links manually | "What techniques are related to SHAP?" |

How It Works

[Architecture diagram: Claude connects to the MCP server, which fetches from the knowledge graph.]

The MCP server runs locally on your machine and provides 10 tools that Claude can call. When you ask a question, Claude selects the appropriate tools, the server queries the knowledge graph, and Claude synthesises the results into a helpful response.
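Under the hood, each tool invocation is a JSON-RPC `tools/call` message sent to the server over stdio. A minimal request frame might look like this (the tool name is real; the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_techniques",
    "arguments": { "goals": ["explainability"], "limit": 5 }
  }
}
```

Claude constructs these frames automatically; you never write them by hand.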

Quick Start

No installation needed — run directly with npx:

```shell
npx @chrisdburr/tea-techniques-mcp
```

Add to your project's .mcp.json:

```json
{
  "mcpServers": {
    "tea-techniques": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@chrisdburr/tea-techniques-mcp"]
    }
  }
}
```

After configuring, restart Claude. You should see 10 tools prefixed with mcp__tea-techniques__.

First-use setup

On first run, the server fetches the knowledge graph from GitHub Pages and caches it locally for 24 hours. Semantic search also downloads a lightweight ONNX model (~30MB) on first use.

Local development (from source)

If you want to contribute or run from the monorepo:

```shell
git clone https://github.com/alan-turing-institute/tea-techniques.git
cd tea-techniques/mcp-server
pnpm install
```

Then use npx tsx ./mcp-server/src/index.ts as the command in your MCP config, with --local to load data from project files instead of fetching remotely.
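For example, a local-development entry in .mcp.json might look like this (the path assumes your config sits at the repository root; adjust it for your checkout):

```json
{
  "mcpServers": {
    "tea-techniques": {
      "type": "stdio",
      "command": "npx",
      "args": ["tsx", "./mcp-server/src/index.ts", "--local"]
    }
  }
}
```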


Workflow Examples

Discovering Techniques for a Use Case

Scenario: Building a credit scoring model that needs explainable decisions

You: I'm building a credit scoring model using gradient boosting. What techniques can help me explain individual decisions to applicants?

Claude uses find_techniques with goals=["explainability"], tags=["tree-based"]

Claude: Based on your needs, here are relevant techniques:

  • SHAP — Quantifies each feature's contribution using game-theoretic principles.
  • LIME — Creates simple local explanations by approximating the model.
  • Permutation Importance — Measures feature importance by shuffling values.

You: How do SHAP and LIME compare?

Claude uses compare_techniques with slugs=["shapley-additive-explanations", "local-interpretable-model-agnostic-explanations"]

Claude: SHAP provides theoretically grounded explanations with consistency guarantees but is more computationally expensive. LIME is simpler and faster but may produce less stable explanations.

Building Assurance Evidence

Scenario: Demonstrating your model doesn't discriminate

You: What types of evidence can fairness techniques produce?

Claude uses find_evidence_types

Claude: Fairness techniques can produce:

  • Fairness metrics (16 techniques) — Demographic parity, equalised odds
  • Quantitative metrics (98 techniques) — Statistical measures of behaviour
  • Qualitative reports (15 techniques) — Human review and documentation

You: Show me techniques that produce fairness metrics for neural networks.

Claude uses find_techniques with goals=["fairness"], tags=["neural-network", "quantitative-metric"]

Research and Exploration

Scenario: Understanding the AI assurance landscape

You: Give me an overview of the TEA Techniques knowledge graph.

Claude uses get_knowledge_graph_summary

Claude: The knowledge graph contains 121 techniques across 7 assurance goals, with 456 academic resources and 194 tags in a hierarchical taxonomy.

You: Which goal has the most techniques?

Claude uses coverage_statistics with dimension="goals"

Claude: Reliability has the most coverage (75 techniques), followed by Transparency (69) and Fairness (61). Privacy has the least (11).


Tool Reference

Discovery Tools

Tools for finding and exploring techniques.

find_techniques

Primary Search

Search and filter techniques by query text, assurance goals, and tags.

| Parameter | Type | Description |
| --- | --- | --- |
| query | string | Free-text search across names and descriptions |
| goals | string[] | Filter by goal slugs (e.g. ["explainability", "fairness"]) |
| tags | string[] | Filter by tag fragments (e.g. ["model-agnostic", "tabular"]) |
| limit | number | Max results (default 20) |
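A complete arguments payload combining these parameters might look like this (the parameter names come from the table above; the values are illustrative):

```json
{
  "query": "feature importance",
  "goals": ["explainability"],
  "tags": ["model-agnostic", "tabular"],
  "limit": 10
}
```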

get_technique

Detail View

Get full details of a technique: goals, tags, use cases, limitations, related techniques, and resources.

| Parameter | Type | Description |
| --- | --- | --- |
| slug | string | Technique slug (e.g. shapley-additive-explanations) |

compare_techniques

Side-by-Side

Compare 2-5 techniques across goals, tags, and resources.

| Parameter | Type | Description |
| --- | --- | --- |
| slugs | string[] | Array of 2-5 technique slugs |

Exploration

Find techniques related via explicit links, shared goals, or shared tags.

| Parameter | Type | Description |
| --- | --- | --- |
| slug | string | Technique slug to find relatives for |
| depth | number | 1 = explicit + same-goal, 2 = also same-tag (default 1) |
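The depth semantics above can be sketched as follows. This is an illustration of the documented behaviour, not the server's actual code; the `Technique` shape and helper names are assumptions:

```typescript
// Illustrative sketch of the depth parameter: depth 1 unions explicit links
// with techniques sharing a goal; depth 2 additionally unions techniques
// sharing a tag. The data shape here is an assumption, not the real schema.

interface Technique {
  slug: string;
  related: string[]; // explicit links to other technique slugs
  goals: string[];
  tags: string[];
}

function relatedSlugs(all: Technique[], slug: string, depth: 1 | 2 = 1): string[] {
  const self = all.find((t) => t.slug === slug);
  if (!self) return [];
  const overlap = (a: string[], b: string[]) => a.some((x) => b.includes(x));
  const out = new Set<string>(self.related); // always include explicit links
  for (const t of all) {
    if (t.slug === slug) continue;
    if (overlap(t.goals, self.goals)) out.add(t.slug); // depth 1: same goal
    if (depth === 2 && overlap(t.tags, self.tags)) out.add(t.slug); // depth 2: same tag
  }
  return [...out];
}

// Tiny demo dataset: "dp" shares only a tag with "shap", not a goal.
const demo: Technique[] = [
  { slug: "shap", related: ["lime"], goals: ["explainability"], tags: ["model-agnostic"] },
  { slug: "lime", related: [], goals: ["explainability"], tags: ["model-agnostic"] },
  { slug: "dp", related: [], goals: ["privacy"], tags: ["model-agnostic"] },
];
const depth1 = relatedSlugs(demo, "shap", 1); // lime only
const depth2 = relatedSlugs(demo, "shap", 2); // lime and dp
```

At depth 1 the tag-only neighbour is excluded; widening to depth 2 pulls it in.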

Platform Integration Tools

Tools for connecting techniques to assurance arguments.

suggest_techniques_for_claim

Claim Matching

Given an assurance claim, suggest relevant techniques using embedding-based semantic search with hybrid RRF ranking.

| Parameter | Type | Description |
| --- | --- | --- |
| claim | string | Assurance claim text (e.g. "the model treats all groups fairly") |
| modelType | string | Model type context (e.g. neural-network) |
| dataType | string | Data type context (e.g. tabular) |
| lifecycleStage | string | Lifecycle stage (e.g. model-development) |
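Reciprocal Rank Fusion (RRF), the "hybrid" part of the ranking, merges ranked lists from different retrievers (e.g. keyword and embedding search) by summing reciprocal ranks. A minimal sketch of the general technique, not the server's actual implementation:

```typescript
// RRF: score(d) = sum over lists of 1 / (k + rank_i(d)), with the
// conventional k = 60. Documents ranked well in several lists win.
// Illustrative sketch only -- not the server's implementation.

function rrfFuse(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // ranks are 1-based: the top hit in a list contributes 1 / (k + 1)
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return scores;
}

// "shap" appears in both lists (ranks 2 and 1), so it outscores
// "anchors", which tops only the keyword list.
const fused = rrfFuse([
  ["anchors", "shap", "lime"],         // keyword ranking (illustrative)
  ["shap", "lime", "counterfactuals"], // embedding ranking (illustrative)
]);
const best = [...fused.entries()].sort((a, b) => b[1] - a[1])[0][0];
```

Summing reciprocal ranks rather than raw scores means the two retrievers never need comparable score scales, which is why RRF is a common choice for hybrid search.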

find_evidence_types

Evidence Discovery

Explore what types of evidence techniques can produce, with example techniques for each type.

No parameters required.


Research Tools

Tools for analysing the dataset and finding resources.

explore_taxonomy

Navigation

Navigate the hierarchical tag taxonomy. Call with no path to see top-level categories.

| Parameter | Type | Description |
| --- | --- | --- |
| path | string | Tag path to explore (e.g. applicable-models) |
| includeTechniques | boolean | Include technique names for each tag |
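For example, to drill into one category and list its techniques, the arguments payload would be (parameter names from the table above; the path value is the documented example):

```json
{
  "path": "applicable-models",
  "includeTechniques": true
}
```

Omitting path entirely returns the top-level categories instead.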

coverage_statistics

Analysis

Analyse dataset coverage across a chosen dimension.

| Parameter | Type | Description |
| --- | --- | --- |
| dimension | enum | One of: goals, tags, lifecycle, evidence, model-types, complexity, cross-goal |

search_resources

Literature

Search academic resources (papers, software, documentation).

| Parameter | Type | Description |
| --- | --- | --- |
| query | string | Free-text search across titles, abstracts, and authors |
| type | enum | Filter by type: technical_paper, software_package, documentation, tutorial |
| technique | string | Filter by technique slug to get its cited resources |
| limit | number | Max results (default 20) |

get_knowledge_graph_summary

Overview

High-level statistics: entity counts and relationship counts across the entire graph.

No parameters required.


Data Source

The server fetches from the public knowledge graph:

https://alan-turing-institute.github.io/tea-techniques/data/ld/graph.jsonld

Data is cached locally in ~/.cache/tea-techniques-mcp/ for 24 hours. Use --local to load from project files during development:

```shell
npx tsx mcp-server/src/index.ts --local
```
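The 24-hour freshness rule can be sketched as a simple timestamp comparison. This is an illustration of the documented behaviour, not the server's code; the function and variable names are assumptions:

```typescript
// Illustrative sketch of the 24-hour cache TTL described above.
// A cached graph is reused while it is younger than TTL_MS,
// and re-fetched from GitHub Pages once it goes stale.

const TTL_MS = 24 * 60 * 60 * 1000; // 24 hours in milliseconds

function isCacheFresh(fetchedAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - fetchedAtMs < TTL_MS;
}

const now = Date.now();
const freshEntry = isCacheFresh(now - 1 * 60 * 60 * 1000, now);  // fetched 1 hour ago
const staleEntry = isCacheFresh(now - 25 * 60 * 60 * 1000, now); // fetched 25 hours ago
```

A fresh entry is served from `~/.cache/tea-techniques-mcp/`; a stale one triggers a re-fetch.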