
Level 10: Swarm Architect — Agents Building Agents

In a Swarm architecture, multiple autonomous agents work cooperatively. Taking it further, Claude itself can design and generate the roles, configurations, and code for new agents. This is the turning point from “using AI” to “designing and building AI systems.”

Target audience: Advanced engineers who have completed through Level 9 and want to tackle the design of autonomous AI systems.

Estimated learning time: 30 min reading + 60 min practice


A Swarm is a system where multiple autonomous agents divide and cooperate on work. Each agent handles a specific area of expertise, and a coordinator manages the whole.

User Request

Coordinator Agent (overall progress management · integration)
    ├── Researcher Agent (information gathering · research)
    ├── Coder Agent (code implementation)
    ├── Reviewer Agent (review · quality verification)
    └── Deployer Agent (deploy · release)

Each specialist agent operates independently and processes its tasks without waiting for other agents to complete theirs. The coordinator receives the output from all specialists and integrates it into the final deliverable.

By asking Claude to “please design the specialist agents needed for this service,” Claude itself generates a blueprint for an agent group.

> Please design the agents needed to develop and operate
> a Python + FastAPI API server.
>
> For each agent, please define:
> 1. Role and scope of responsibility
> 2. Tools to use (MCP, commands, etc.)
> 3. Input/output interface (JSON schema)
> 4. Constraints and rules to write in CLAUDE.md
> 5. Slash command design

Example of a design Claude generates:

## Agent Group Design

### 1. spec-agent (requirements definition lead)
Role: Receives user requests and defines the API specification in OpenAPI format

Input: Natural language feature requests
Output: JSON in {"openapi": "3.0.0", "paths": {...}} format

Slash command:
/spec {feature name} → Generate OpenAPI spec and save to docs/api-spec.json

### 2. coder-agent (implementation lead)
Role: Receives the output of spec-agent and implements FastAPI endpoints

Input: OpenAPI specification JSON
Output: src/api/routes/{resource}.py

Slash command:
/implement {spec-file} → Generate code based on the specification

...
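Before handing spec-agent's output to coder-agent, it is worth checking that it really is well-formed OpenAPI JSON. A minimal sketch of such a check — the two field checks are illustrative, not a full OpenAPI validator:

```python
import json

def validate_spec(raw: str) -> dict:
    """Parse spec-agent output and verify the minimal OpenAPI structure."""
    spec = json.loads(raw)  # raises ValueError on malformed JSON
    for field in ("openapi", "paths"):
        if field not in spec:
            raise ValueError(f"spec is missing required field: {field}")
    return spec

spec = validate_spec('{"openapi": "3.0.0", "paths": {"/users": {}}}')
```

A check like this at each agent boundary is what makes the JSON interfaces in the design above enforceable rather than aspirational.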

Swarm Implementation with the Anthropic Agent SDK


Using Python’s Anthropic SDK, implement a swarm with a coordinator and specialists.

# swarm/api_builder.py
# Example swarm implementation that automatically builds a Python + FastAPI API server

import anthropic
import concurrent.futures
import json

client = anthropic.Anthropic()

def run_specialist(role: str, system_prompt: str, task: str) -> str:
    """Run a specialist agent and return the result"""
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=4096,
        system=system_prompt,
        messages=[{"role": "user", "content": task}]
    )
    return response.content[0].text

def build_api_server(requirements: str) -> dict:
    """Automate the design and implementation of an API server using a swarm"""

    # Step 1: The requirements definition agent creates the specification
    spec = run_specialist(
        "spec-agent",
        "You are an API design expert. You receive requirements and return detailed API specifications as JSON.",
        f"Please design a FastAPI API specification based on the following requirements.\n\n{requirements}"
    )

    # Step 2: Run code generation, test generation, and documentation generation in parallel
    parallel_tasks = {
        "coder": (
            "You are a FastAPI implementation expert. You receive API specifications and generate implementation code.",
            f"Please implement FastAPI endpoints based on the following specification.\n\n{spec}"
        ),
        "tester": (
            "You are a test engineer. You receive API specifications and generate pytest test code.",
            f"Please generate pytest test code based on the following specification.\n\n{spec}"
        ),
        "documenter": (
            "You are a technical writer. You receive API specifications and generate developer documentation.",
            f"Please generate Markdown documentation based on the following specification.\n\n{spec}"
        ),
    }

    results = {}
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = {
            executor.submit(run_specialist, role, system, task): role
            for role, (system, task) in parallel_tasks.items()
        }
        for future in concurrent.futures.as_completed(futures):
            role = futures[future]
            results[role] = future.result()

    # Step 3: The coordinator performs an integrated review of all deliverables
    summary = run_specialist(
        "coordinator",
        "You are a coordinator who integrates the deliverables of multiple specialists and verifies quality.",
        f"""
        Please integrate the following deliverables and point out any issues.

        Specification: {spec}
        Implementation code: {results['coder']}
        Test code: {results['tester']}
        Documentation: {results['documenter']}
        """
    )

    return {
        "spec": spec,
        "implementation": results["coder"],
        "tests": results["tester"],
        "docs": results["documenter"],
        "review": summary,
    }

if __name__ == "__main__":
    requirements = """
    Build a user management API.
    - CRUD for users (create, read, update, delete)
    - JWT authentication
    - Use SQLite as the database
    """
    result = build_api_server(requirements)
    print(json.dumps(result, ensure_ascii=False, indent=2))

The Flow from Requirements to Fully Automated Deployment


Using a swarm, you can run the entire process from natural language requirements to deployment continuously.

1. User inputs requirements in natural language

2. spec-agent generates OpenAPI specification

3. (Parallel)
   coder-agent generates implementation code
   tester-agent generates test code
   documenter-agent generates documentation

4. reviewer-agent verifies quality and provides correction instructions

5. (Once corrections are complete)
   deployer-agent triggers CI/CD

6. Human performs final review and approves merge/deploy
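Step 4's correction cycle can be expressed as a simple loop: the reviewer either approves or returns correction instructions that are fed back to the implementer. A minimal sketch with the two agent calls injected as plain functions (`run_specialist` from the earlier example would slot in here; the `"OK"`/`"NG"` verdict convention is an assumption for illustration):

```python
def review_loop(implement, review, spec: str, max_rounds: int = 3) -> str:
    """Alternate implementation and review until approval or the round limit."""
    code = implement(spec)
    for _ in range(max_rounds):
        verdict = review(code)
        if verdict.startswith("OK"):
            return code
        # Feed the reviewer's correction instructions back into the implementer
        code = implement(f"{spec}\n\nFix the following issues:\n{verdict}")
    return code  # give up after max_rounds and escalate to a human

# Toy stand-ins to show the control flow
drafts = iter(["def f(: pass", "def f(): pass"])
result = review_loop(
    implement=lambda task: next(drafts),
    review=lambda code: "OK" if "():" in code else "NG: syntax error",
    spec="a function f",
)
```

Capping the number of rounds matters: two agents can otherwise ping-pong corrections indefinitely.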

I recommend not skipping step 6’s human review even in full automation.

Give each agent exactly one role. Never create agents with multiple responsibilities like “coder and reviewer.” The clearer the role, the more concise the prompt and the higher the accuracy.

Standardize data passing between agents in a structured format like JSON. Passing information as raw natural language risks misinterpretation by downstream agents.
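In practice, models often wrap the requested JSON in Markdown code fences or surrounding prose, so the receiving side needs to extract it before parsing. A hedged helper sketch (the fence-stripping regex is illustrative; `json.JSONDecoder.raw_decode` tolerates trailing text after the object):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a model reply, tolerating code fences."""
    text = re.sub(r"```(?:json)?", "", text)  # strip Markdown fences if present
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found in agent output")
    obj, _ = json.JSONDecoder().raw_decode(text[start:])
    return obj

payload = extract_json('Here is the spec:\n```json\n{"openapi": "3.0.0", "paths": {}}\n```')
```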

Implement retry logic and fallback (alternative processing) so that the whole system doesn’t stop if one agent fails. Always add exception handling — especially for agents that call external APIs.
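A generic wrapper can provide both behaviors at once: retry with exponential backoff, then fall back to an alternative if every attempt fails. A minimal sketch (the function and parameter names are my own, not from any SDK):

```python
import time

def with_retry(fn, attempts: int = 3, base_delay: float = 1.0, fallback=None):
    """Run fn, retrying with exponential backoff; use fallback after the last failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                if fallback is not None:
                    return fallback()
                raise
            time.sleep(base_delay * (2 ** i))  # wait 1s, 2s, 4s, ...
```

In the earlier swarm code, a call such as `run_specialist(role, system, task)` could be wrapped as `with_retry(lambda: run_specialist(role, system, task), fallback=lambda: "{}")` so one failed specialist degrades gracefully instead of stopping the whole build.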

Even in full automation, always include a human confirmation step for high-impact operations such as production deployments, content publishing, and sending data to external services.
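The confirmation step can be as simple as a blocking prompt in front of the high-impact action. A sketch with the prompt function injected so it can be tested; `trigger_deploy` is a hypothetical placeholder for whatever the deployer agent actually runs:

```python
def require_approval(action: str, ask=input) -> bool:
    """Block until a human explicitly approves a high-impact action."""
    answer = ask(f"About to run: {action}. Proceed? [y/N] ")
    return answer.strip().lower() == "y"

# Example: gate a production deploy behind human confirmation
# if require_approval("deploy to production"):
#     trigger_deploy()
```

Defaulting to “no” (anything other than an explicit `y` is refusal) keeps an unattended or misconfigured pipeline from deploying by accident.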

| Level range | Position | Key skills |
| --- | --- | --- |
| Level 0–2 | Using AI tools | Context setup · MCP connection |
| Level 3–5 | Designing AI workflows | Custom commands · issue-driven development |
| Level 6–8 | Building AI systems | Headless · browser automation · parallel execution |
| Level 9–10 | Operating AI infrastructure | Always-on · swarm design |

Q. What is the difference between a swarm and the Level 8 orchestrator pattern?

In the orchestrator pattern, “a script designed by a human manages the agents.” In a swarm, “the agents themselves decide the design and role assignments of other agents.” That is the key distinction.

Q. When is the right time to adopt a swarm?

Using a swarm for tasks a single agent can handle only adds complexity. Consider adopting it when there are three or more independent tasks that can be parallelized, and they repeat at high frequency.


Hands-on tutorial for this level →


You’ve now come all the way from Level 0 to Level 10, and you have the foundation to design and operate Claude Code not just as “a chat that runs in the terminal,” but as “AI infrastructure running 24/7.”

As a next step, try building a system at the level that best fits your own project. Not every project needs to be at Level 10 — the practical approach is to choose a level that matches the scale, risk, and maturity of your team.