
Building a2alist.ai: A Protocol Directory for the Agentic Web

How I built a discovery platform for A2A and x402 protocol implementations — featuring automated security reviews, an x402-powered Discovery API, and the infrastructure that lets agents find other agents.


The agent-to-agent (A2A) and x402 protocols are reshaping how AI systems communicate and transact. But there’s a fundamental problem: discovery. How does an agent find other agents that implement these protocols? Marketing pages claim support, GitHub repos go stale, and nobody verifies whether implementations actually work.

I built a2alist.ai to solve this — a living directory that doesn’t just list claims but actively verifies protocol implementations. It’s now tracking 67 implementations with 34 fully verified, and it offers something I think is even more interesting than the directory itself: a Discovery API that lets agents programmatically find other agents.

The Discovery Problem

A2A enables agents to communicate with each other using a standardized protocol. x402 enables micropayments for agent-accessed content. Both are gaining traction, but the ecosystem is fragmented:

  • Some projects claim A2A support but haven’t updated in months
  • Others implement x402 but only for specific content types
  • There’s no central place to see who’s actually building on these protocols
  • Agents can’t programmatically discover other agents without human intervention

That last point is the most interesting. A human can browse a directory and click links. But what about an AI agent that needs to find a translation service, or a code review agent, or anything else? The agent-to-agent future needs agent-to-agent discovery.

I wanted a platform that would:

  1. List real implementations, not just announced intentions
  2. Verify that endpoints actually respond to protocol requests
  3. Update automatically as the ecosystem evolves
  4. Provide programmatic discovery for agents themselves
  5. Filter submissions through security review, not just availability checks

Architecture: Astro, D1, and Edge Computing

The stack is Astro for the site, Cloudflare D1 for data, and Cloudflare Workers for compute, chosen for low operational overhead and global performance.

D1 deserves special mention. For small-to-medium datasets that don’t need complex queries, it’s remarkably capable. The entire directory runs on a single SQLite database replicated to Cloudflare’s edge network. Reads are fast everywhere, writes are fast enough for my needs, and the operational overhead is zero — no database server to maintain, no connection pooling to configure, no backups to manage.

The schema tracks not just listings but verification history and security ratings:

CREATE TABLE agents (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  url TEXT NOT NULL,
  agent_card_url TEXT,
  protocols TEXT,           -- JSON array: ["a2a", "x402"]
  last_verified TEXT,
  verification_count INTEGER DEFAULT 0,
  consecutive_failures INTEGER DEFAULT 0,
  safety_rating TEXT DEFAULT 'unreviewed',  -- verified/pending/unreviewed
  threat_categories TEXT,   -- JSON array of T1-T6 flags
  status TEXT DEFAULT 'pending'
);

CREATE TABLE verification_history (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  agent_id TEXT NOT NULL,
  checked_at TEXT NOT NULL,
  status TEXT NOT NULL,
  response_time_ms INTEGER,
  error_message TEXT,
  FOREIGN KEY (agent_id) REFERENCES agents(id)
);
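
With every check logged, per-agent uptime is a small rollup. A hypothetical summary in TypeScript over rows shaped like `verification_history` (the `'ok'`/`'fail'` status values are my assumption, not the production enum):

```typescript
// Hypothetical rollup over verification_history rows (shape mirrors the schema above).
interface HistoryRow {
  status: string;                  // assumed values: 'ok' | 'fail'
  response_time_ms: number | null; // null when the request never completed
}

function summarizeHistory(rows: HistoryRow[]) {
  const checks = rows.length;
  const ok = rows.filter(r => r.status === 'ok').length;
  const times = rows
    .map(r => r.response_time_ms)
    .filter((t): t is number => t !== null);
  return {
    checks,
    // uptime as a percentage, one decimal place
    uptimePct: checks === 0 ? null : Math.round((ok / checks) * 1000) / 10,
    avgResponseMs: times.length === 0
      ? null
      : Math.round(times.reduce((a, b) => a + b, 0) / times.length),
  };
}
```

The same aggregate could of course run as SQL inside D1; doing it in the Worker keeps the example self-contained.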

Workers handle the heavy lifting — scheduled verification scans, webhook processing, and the Discovery API. The separation means the marketing site stays snappy even during bulk verification runs.

The Discovery API: Agents Finding Agents

Here’s where it gets interesting. A directory is useful for humans. But the entire premise of A2A is that agents need to work with other agents. They need programmatic discovery.

The Discovery API exposes six endpoints that let agents query the directory:

| Endpoint | Description | Price |
| --- | --- | --- |
| `/api/v1/agents` | List all verified agents | $0.01 |
| `/api/v1/agents/{id}` | Get agent details | $0.01 |
| `/api/v1/search` | Search by capability/protocol | $0.02 |
| `/api/v1/categories` | List agent categories | $0.01 |
| `/api/v1/protocols` | Get protocol statistics | $0.01 |
| `/api/v1/health` | Check agent availability | $0.03 |

Pricing is via x402 — micropayments in USDC on Base. An agent can discover other agents for pennies.

The implementation uses Cloudflare Workers with the x402 payment verification middleware:

// Discovery API endpoint with x402 payment
export async function onRequest(context: EventContext) {
  const { request, env } = context;
  
  // Verify x402 payment header
  const payment = await verifyX402Payment(request, {
    expectedAmount: 0.02,
    currency: 'USDC',
    network: 'base'
  });
  
  if (!payment.valid) {
    return new Response(JSON.stringify({ 
      error: 'Payment required',
      x402: {
        amount: '0.02',
        currency: 'USDC',
        network: 'base',
        recipient: env.PAYMENT_ADDRESS
      }
    }), { 
      status: 402,
      headers: { 'Content-Type': 'application/json' }
    });
  }
  
  // Process the search query
  const query = new URL(request.url).searchParams.get('q');
  const results = await env.DB.prepare(`
    SELECT id, name, url, protocols, safety_rating
    FROM agents 
    WHERE status = 'verified' 
      AND (name LIKE ? OR protocols LIKE ?)
    ORDER BY verification_count DESC
    LIMIT 20
  `).bind(`%${query}%`, `%${query}%`).all();
  
  return Response.json({ agents: results.results });
}

Why charge for API access? Three reasons:

  1. Sustainability — Directory maintenance costs compute time; API revenue covers it
  2. Quality signal — Paying clients are serious clients
  3. Spam prevention — Bulk scraping becomes expensive fast

The pricing is low enough that legitimate use is trivial (searching 100 times costs $2) but high enough that scraping the entire database repeatedly isn’t free.
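
From the client side, an agent hits an endpoint, receives the 402 challenge, and decides whether the price fits its budget before paying and retrying. A minimal sketch of that decision step, assuming the response body shape from the server example (real x402 client libraries handle the actual payment and retry):

```typescript
// Shape of the 402 body returned by the Discovery API example above.
interface X402Challenge {
  amount: string;
  currency: string;
  network: string;
  recipient: string;
}

// Decide whether an agent should pay a 402 challenge, given a per-query
// budget in USD. Returns the parsed challenge if acceptable, else null.
function acceptChallenge(
  body: { x402?: X402Challenge },
  maxUsd: number
): X402Challenge | null {
  const c = body.x402;
  if (!c || c.currency !== 'USDC' || c.network !== 'base') return null;
  const amount = Number(c.amount);
  if (!Number.isFinite(amount) || amount <= 0 || amount > maxUsd) return null;
  return c;
}
```

Capping spend per query keeps a runaway agent from draining its wallet on a mispriced endpoint.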

Security Review Pipeline

Not every submitted agent should be listed. An AI agent directory is a potential attack vector — malicious actors could submit fake agents that harvest credentials, execute prompt injection, or worse. I built a security review pipeline to catch this.

Every submission goes through automated analysis across six threat categories:

| Category | What It Checks |
| --- | --- |
| T1: Endpoint Security | TLS configuration, certificate validity, CORS headers |
| T2: Agent Card Validation | Schema compliance, required fields, capability claims |
| T3: Response Analysis | Content-type correctness, payload structure, error handling |
| T4: Behavioral Patterns | Unusual redirects, suspicious response timing, fingerprinting attempts |
| T5: Reputation Signals | Domain age, known bad actors, associated infrastructure |
| T6: Protocol Compliance | Actual A2A/x402 implementation versus claimed support |
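
Most of these checks are simple probes. As an illustration, a T1-style pass over a probe's response headers might look like this (the header list and thresholds are my choices for the sketch, not the exact production rules):

```typescript
type CheckResult = 'pass' | 'warn' | 'fail';

// Evaluate T1-style endpoint hygiene from a probed URL and its response
// headers (header names assumed lowercased by the fetch layer).
function checkEndpointSecurity(url: string, headers: Map<string, string>): CheckResult {
  if (!url.startsWith('https://')) return 'fail'; // no TLS at all
  const acao = headers.get('access-control-allow-origin');
  if (acao === '*' && headers.get('access-control-allow-credentials') === 'true') {
    return 'fail'; // credentialed wildcard CORS is always wrong
  }
  // Missing HSTS is suspicious but not disqualifying on its own.
  if (!headers.has('strict-transport-security')) return 'warn';
  return 'pass';
}
```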

The automated review produces a safety rating:

  • Verified — Passed all checks, safe to list
  • Pending — Needs manual review (flagged in one or more categories)
  • Unreviewed — New submission, hasn’t been scanned yet

Currently, 34 of the 67 listings are fully verified. The rest are either pending review or too new to have completed the weekly scan cycle.

// Security review result structure
interface SecurityReview {
  agentId: string;
  timestamp: string;
  categories: {
    t1_endpoint: 'pass' | 'warn' | 'fail';
    t2_card: 'pass' | 'warn' | 'fail';
    t3_response: 'pass' | 'warn' | 'fail';
    t4_behavioral: 'pass' | 'warn' | 'fail';
    t5_reputation: 'pass' | 'warn' | 'fail';
    t6_protocol: 'pass' | 'warn' | 'fail';
  };
  overallRating: 'verified' | 'pending' | 'unreviewed';
  notes: string[];
}

Any ‘fail’ in categories T1-T4 requires manual review before listing. T5 and T6 failures can be overridden when there’s a reasonable explanation (new domains fail T5 by default, for instance).
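
Those rules reduce to a small fold over the category results. A sketch, assuming the `SecurityReview` shape above (the T5/T6 override flag is my addition for illustration):

```typescript
type CheckResult = 'pass' | 'warn' | 'fail';

// Mirrors the categories field of SecurityReview.
interface Categories {
  t1_endpoint: CheckResult;
  t2_card: CheckResult;
  t3_response: CheckResult;
  t4_behavioral: CheckResult;
  t5_reputation: CheckResult;
  t6_protocol: CheckResult;
}

// Derive the overall rating from per-category results.
// T1-T4 fails always go to manual review; T5/T6 fails can be waived.
function overallRating(c: Categories, overrideT5T6 = false): 'verified' | 'pending' {
  const hard = [c.t1_endpoint, c.t2_card, c.t3_response, c.t4_behavioral];
  if (hard.includes('fail')) return 'pending';
  const soft = [c.t5_reputation, c.t6_protocol];
  if (soft.includes('fail') && !overrideT5T6) return 'pending';
  // Any warn anywhere still flags the listing for a human look.
  return [...hard, ...soft].includes('warn') ? 'pending' : 'verified';
}
```

(`unreviewed` isn't produced here; it's the state before any scan has run.)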

The x402 Evolution: From Spam Filter to Revenue

Here’s an interesting bit of product evolution. I originally implemented x402 as a spam filter for submissions.

The theory was elegant: charge $0.99 to submit a listing. Bots crawling forms and submitting garbage won’t integrate x402 wallets, while legitimate projects won’t hesitate at a dollar, especially since A2A/x402 implementers already have the payment infrastructure.

It worked. Spam dropped to zero. But it also created friction for legitimate submissions. People would start the submission flow, see the payment step, and bounce — not because $0.99 was expensive, but because setting up x402 payments just to submit a listing felt like overkill for projects that hadn’t implemented x402 themselves yet.

So I changed the model:

  • Submissions are free with admin review
  • Discovery API uses x402 for per-query micropayments

This is a better fit. Submissions should be low-friction to maximize coverage. API access — which is ongoing and automated — is where x402 makes sense. An agent making dozens of discovery queries per day can budget for micropayments. A human submitting once doesn’t want to set up a wallet.

The original insight (x402 as quality gate) still applies, just to a different part of the product.

Automated Verification Pipeline

Listed agents need periodic verification. An A2A agent card that worked last month might be down today. I run weekly automated scans to check status.

The pipeline runs as a Cloudflare Worker cron job:

// Scheduled verification (runs weekly)
import pLimit from 'p-limit';

export default {
  async scheduled(event: ScheduledEvent, env: Env) {
    const agents = await env.DB.prepare(`
      SELECT * FROM agents WHERE status = 'verified'
    `).all();
    
    const limit = pLimit(10);  // 10 concurrent requests
    const results = await Promise.all(
      agents.results.map(agent => 
        limit(() => verifyAgent(agent, env))
      )
    );
    
    // Update database with results
    for (const result of results) {
      await env.DB.prepare(`
        UPDATE agents 
        SET last_verified = ?,
            consecutive_failures = CASE WHEN ? THEN consecutive_failures + 1 ELSE 0 END,
            verification_count = verification_count + 1
        WHERE id = ?
      `).bind(
        new Date().toISOString(),
        result.failed ? 1 : 0,
        result.id
      ).run();
      
      // Log to verification history
      await env.DB.prepare(`
        INSERT INTO verification_history 
        (agent_id, checked_at, status, response_time_ms, error_message)
        VALUES (?, ?, ?, ?, ?)
      `).bind(
        result.id,
        new Date().toISOString(),
        result.status,
        result.responseTimeMs,
        result.error || null
      ).run();
    }
  }
};

The concurrency limit matters. Hit too many endpoints simultaneously and you’ll trigger rate limits or look like a DDoS attack. Ten concurrent requests balances speed against politeness.
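
The `pLimit` call above comes from the p-limit package, but the mechanism is small enough to sketch: a counter plus a queue of deferred starts, where finishing one task releases the next.

```typescript
// Minimal p-limit-style concurrency limiter: at most `max` tasks in flight.
// A sketch of the mechanism, not a replacement for the real package.
function makeLimit(max: number) {
  let active = 0;
  const queue: (() => void)[] = [];
  const next = () => {
    active--;
    queue.shift()?.();  // start the oldest queued task, if any
  };
  return function limit<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const run = () => {
        active++;
        task().then(resolve, reject).finally(next);
      };
      active < max ? run() : queue.push(run);
    });
  };
}
```

The caller's promise settles with the task's result either way; only the start time is deferred.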

Agents that fail verification get flagged but not immediately removed. Protocol implementations have downtime; I don’t want to delist someone because their server rebooted during my scan window. Three consecutive failures trigger a manual review. Verification history is preserved so I can see patterns — intermittent failures versus hard down versus degrading performance.
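
Telling those patterns apart is a cheap pass over recent history. A hypothetical classifier, using assumed `'ok'`/`'fail'` status values and a three-check window to match the manual-review threshold:

```typescript
// Classify an agent's recent check history (newest entry last).
// Patterns: hard down (three straight failures), intermittent, healthy.
function classifyPattern(recent: ('ok' | 'fail')[]): 'healthy' | 'intermittent' | 'hard_down' {
  const fails = recent.filter(s => s === 'fail').length;
  if (fails === 0) return 'healthy';
  const lastThree = recent.slice(-3);
  if (lastThree.length === 3 && lastThree.every(s => s === 'fail')) return 'hard_down';
  return 'intermittent';
}
```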

Lessons Learned

D1 is production-ready. For directories, product listings, and other read-heavy workloads with modest write volumes, Cloudflare D1 is excellent. The operational simplicity of “it’s just SQLite” combined with global edge replication is compelling. No database server, no connection pools, no replicas to manage.

Security review is table stakes for agent directories. Without vetting, a directory of AI agents becomes an attack vector. The T1-T6 framework catches most automated garbage; manual review handles the edge cases. The alternative — listing anything that submits — would make the directory actively dangerous.

x402 fits API access better than submission fees. For one-time submissions, payment friction loses more legitimate users than it filters spam. For ongoing API access, micropayments are a natural fit — small enough to be invisible, large enough to matter at scale.

Discovery APIs are the real product. The directory website is nice, but what agents actually need is programmatic access. Building the Discovery API from day one was the right call.

Parallelization has diminishing returns. Going from sequential to 10x concurrency was massive. Going from 10x to 100x would have been marginal and risked rate limiting. Know when to stop optimizing.

The Agentic Web Needs Discovery

Here’s what I think is actually happening: we’re building the infrastructure for a web where AI agents are first-class participants.

Right now, agents rely on hardcoded integrations or human-curated lists to find services. That’s how the early web worked too — directories like Yahoo!, manually curated links, no search. Then came discovery infrastructure, and the web exploded.

The agent-to-agent future needs the same thing. Agents need to find other agents programmatically. They need to verify that discovered agents actually work. They need to assess trust and safety before interacting.

a2alist.ai is one small piece of that infrastructure: a verified directory with programmatic discovery, security vetting, and payment rails for sustainable operation. As the A2A and x402 ecosystems grow, discovery becomes more valuable, not less.

The interesting question isn’t whether agent discovery matters — it obviously does. The question is what discovery looks like when agents are the primary consumers, not humans. Human directories optimize for browsability. Agent directories need to optimize for API ergonomics, trust signals, and real-time verification.

We’re still figuring that out. But with 67 listings and growing, the experiment continues.


More on the A2A ecosystem at a2alist.ai.
