Guide · April 2026 · The new “open” lives one layer up

Open source AI recruiting tools in 2026: the protocol is the new repo.

Most lists for this topic are honest about half the question. They point at the open-source applicant tracking systems that have existed since the mid-2000s (OpenCATS, CandidATS, Eazyrecruit) and then tack on a few closed AI recruiting suites at the bottom. They never ask the thing the buyer actually cares about in 2026: is the agent layer reachable from any client you want to bring, or does it live behind a vendor UI you cannot drive any other way? This guide answers that question with the receipts, including the MCP server URL you can wire an open-source agent client to today.

Nhat Nguyen · 14 min read
4.9 rating, from Series A to mid-B TA teams

  • MCP server on Starter ($0). Same toolset on every plan.
  • Goose, Continue.dev, mcp-agent, Claude, and ChatGPT all drive it.
  • Writes return a queue position, not a side effect.

The phrase is doing three different jobs at once

When somebody types “open source AI recruiting tools” into a browser they could be asking three different questions and the existing roundups conflate them. The first question is whether the applicant tracking system is self-hostable (OpenCATS, CandidATS, Eazyrecruit). The second is whether the agent runtime they want to use is open-source (Goose, Continue.dev, mcp-agent, OpenInterpreter, Cline). The third, the new one, is whether the agent layer of the recruiting workspace speaks an open protocol so any of those clients can drive it.

The first and second questions have been answered for years. The third only became a real question in late 2024 when Anthropic shipped the Model Context Protocol and OpenAI started speaking it back. By April 2026 every general-purpose agent client speaks MCP, and so does every recruiting workspace that takes “open” seriously. The roundups have not caught up.

The rest of this guide separates the three lanes, names the tools in each, and shows what the third lane looks like with a real MCP server URL and a 20-line Python script that drives it from open-source code.

Three lanes that get put on the same list

Most pages on this topic mash these together and call it a roundup. They are not substitutes. They sit at different layers.

Lane 1 · Open-source ATS

OpenCATS, CandidATS, Eazyrecruit. Self-hostable trackers, mid-2000s lineage. Resume parsing optional. No sourcing agent, no scheduling agent, no Match Rating model. You bring the agent layer.

Lane 2 · Open-source agent runtimes

Goose (Block), Continue.dev, mcp-agent (Anthropic), OpenInterpreter, Cline. General-purpose. Speak MCP. None of them ship a recruiting workspace; they need one to drive.

Lane 3 · Open-protocol agent workspace

Closed-source SaaS, open-protocol surface. Chosen HQ ships an MCP server (https://mcp.10xats.com/v1) on $0 Starter. Recruiter-intent tools, queue-by-default writes, claim-attached scoring.

Where the legacy roundups go wrong

They list lane 1 and stop. The buyer ends up with OpenCATS, an LLM, glue code, and no audit log. Or they list closed AI suites and call them ‘open’ because the company has a blog. The actual 2026 axis (lane 3) is missing from almost every list.

What lane 3 unlocks

Bring any client. Swap clients without changing the workspace. Approval queue and audit ledger enforced server-side. Same tools whether the call comes from a closed UI or a Python script.

The clients that drive a Chosen tenant

Open and closed sit side by side. Same MCP server. Same tools. Same approval queue. The vendor of the chat surface is the recruiter’s choice, not the workspace’s lock-in.

Goose (Block) · Continue.dev · mcp-agent (Anthropic) · MCP Python SDK · MCP TypeScript SDK · Claude Desktop · Claude Projects · ChatGPT custom GPT · your in-house Slack bot

How the protocol-shaped stack actually wires up

Open-source clients on the left. Chosen MCP at the hub. Named recruiter agents on the right. The hub never lets a write reach a candidate without a queue stop, regardless of which client called it.

Open-source clients → Chosen MCP → recruiter agents

Goose · Continue.dev · mcp-agent · Python SDK → Chosen MCP → Sourcing Agent · Match Rating · Scheduling Agent · Approval Queue

Wiring an open-source client in 12 lines of YAML

Goose is Block’s open-source agent runtime, Apache 2.0. Same shape works for Continue.dev and the official MCP Inspector. Paste the block, run the OAuth handshake against your tenant once, and the recruiter-intent tool list shows up.

~/.config/goose/config.yaml
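The guide references this config file without reproducing it, so here is a sketch of what the Goose extension entry could look like. The server URL comes from this guide; the key names (`type`, `uri`, `timeout`) follow recent Goose releases but vary across versions, so treat them as assumptions and confirm against `goose configure` output for your install.

```yaml
# ~/.config/goose/config.yaml -- sketch only; key names vary across Goose
# releases, so check the Goose docs for your version.
extensions:
  chosen:
    enabled: true
    name: chosen
    type: streamable_http          # remote MCP over HTTP; older builds use "sse"
    uri: https://mcp.10xats.com/v1
    envs: {}
    timeout: 300                   # seconds to wait on long-running tool calls
```

After the one-time OAuth handshake against your tenant, the recruiter-intent tools appear in the Goose tool list alongside any other extensions you have configured.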

Or 20 lines of Python against the official MCP SDK

For the team that wants to write its own recruiter loop on top of an open-source LLM client, the SDK call shape is the same one every closed client uses. The write tool returns a queue position because the safety boundary is at the server.

recruiter_loop.py
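The `recruiter_loop.py` script is not reproduced on this page. The official MCP Python SDK wraps the wire protocol; the stand-in below speaks the same JSON-RPC 2.0 shape with only the standard library so the loop's logic stays visible. The tool names come from this guide; the `CHOSEN_TOKEN` variable and the `queue_position` response field are illustrative assumptions, not documented API.

```python
# recruiter_loop.py -- minimal sketch of a recruiter loop against an MCP-style
# JSON-RPC endpoint. Stdlib only; the official MCP SDK wraps the same shape.
# ASSUMPTIONS: the CHOSEN_TOKEN env var and the `queue_position` response
# field are illustrative, not documented API.
import json
import os
import urllib.request

MCP_URL = "https://mcp.10xats.com/v1"

def rpc(method: str, params: dict, *, rpc_id: int = 1) -> dict:
    """POST one JSON-RPC 2.0 request and return the parsed response."""
    body = json.dumps({"jsonrpc": "2.0", "id": rpc_id,
                       "method": method, "params": params}).encode()
    req = urllib.request.Request(
        MCP_URL, data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['CHOSEN_TOKEN']}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def queue_summary(result: dict) -> str:
    """Render the queue-shaped write result the server returns."""
    pos = result.get("queue_position")
    if pos is None:
        return "no queue position returned -- check the server response"
    return f"queued at position {pos}; approve it from Gmail or the web view"

def main() -> None:
    # Requires CHOSEN_TOKEN and network access; a real client would also run
    # the MCP `initialize` handshake first, which this sketch skips.
    tools = rpc("tools/list", {})                 # same list Goose or Claude sees
    rpc("tools/call", {"name": "sourcing.search_persona",
                       "arguments": {"persona": "staff SRE, K8s, NYC"}})
    draft = rpc("tools/call", {"name": "scheduling.draft_outreach",
                               "arguments": {"candidate_id": "cand_123"}})
    print(queue_summary(draft.get("result", {})))
```

To run it for real, export `CHOSEN_TOKEN` (your tenant's OAuth bearer token) and call `main()`. The point of the sketch is the last line: the write returns a queue position, because the safety boundary lives at the server, not in this script.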

Ten recruiter-intent tools, exposed to whichever client

Generic ATS MCP wrappers expose one tool per API endpoint. The Ashby community wrapper has 33 of them. The Chosen surface is shaped around recruiter intent, not REST verbs. Ten tools, every client, every plan.

goose · chosen mcp · /tools

Every write tool (`scheduling.draft_outreach`, `scheduling.reschedule_panel`, `match.override_weight`, `queue.approve`) returns a queue position, not a confirmation. The recruiter approves from Gmail or the Chosen web view, and only then does the side effect fire.
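Concretely, a write-tool result might look something like the following. Only the queue-position-not-confirmation shape is documented in this guide; every field name here is illustrative.

```json
{
  "status": "queued",
  "queue_position": 4,
  "tool": "scheduling.draft_outreach",
  "approve_via": ["gmail", "chosen_web"],
  "note": "No email has been sent. A recruiter must approve this draft."
}
```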

A real session against the MCP server, from Goose

No closed UI in the loop. The recruiter is in a terminal, the tool surface is the same one Claude sees, and the queue position comes back the same way.

goose · chosen mcp · session

The honest 2026 grading sheet

The five questions that separate a list of names from a buying decision. An open repo on GitHub is one of them; it is not the only one, and on its own it is not enough.

Feature | Open-source ATS roundup average (OpenCATS / CandidATS / Eazyrecruit) | Chosen HQ (open-protocol)
Source code public | Yes (PHP, MIT/AGPL, GitHub). | Closed-source SaaS. Honest about it.
Agent layer included | No. You assemble one out of an LLM API. | Sourcing, Scheduling, Match Rating, Analytics, all named.
Public MCP server | None. You write the wrapper if you want one. | https://mcp.10xats.com/v1, OAuth-gated per tenant.
Open-source client support | N/A. There is no agent layer to drive. | Goose, Continue.dev, mcp-agent, MCP SDKs, plus closed clients.
Approval queue / audit ledger | Build your own. The ATS audit log does not include the LLM step. | Queue-by-default writes. Claim-by-claim ledger with overrides logged.
Compliance shape (LL144 / HB 3773 / CAIA / EU AI Act) | On the buyer to design. | Ledger feeds the bias-audit artifact. Per-jurisdiction toggles on Enterprise.
Total cost of ownership at 12 reqs | VPS ~$10-$40/mo, plus your engineer time, plus model spend. | $99/mo Growth (founding) / $399 after. $0 Starter for 3 reqs, no card.

Facts as of April 2026 from public repos, vendor pricing pages, and the Chosen MCP tool surface.

The closed AI suites, graded the same way

The other half of every roundup. The agents exist. The protocol does not, or it is gated behind an enterprise tier. Bring-your-own client is not on the menu.

Feature | Closed AI recruiting suites (Gem, Ashby AI, Eightfold, Paradox, SeekOut) | Chosen HQ (open-protocol)
Public MCP server, GA | None as of April 2026. Some have community wrappers around the public REST API. | First-party MCP. Same surface for every client.
Pricing posture | Demo-required floor pricing. Enterprise gating around the agentic tier (SeekOut). Workday-only QSR enterprise (Paradox). | Starter $0, Growth $99 founding / $399 after. One published price.
Agent shape | AI features bolted onto the existing ATS (Ashby AI, Gem). Or recruitment marketing layer. Or chat-only. | Four named agents that own recruiter workflows. Approval queue between every agent and the candidate.
Match scoring shape | Often opaque similarity scores. Vendor grade. The reasoning is not exposed. | 5 to 15 testable claims per JD with weights and resume evidence spans. Override is an MCP tool call.
Bring-your-own client | Not supported. The vendor UI is the only surface. | Any MCP client. Swap clients without changing the workspace.
Compliance artifact | Vendor-published audits, where they exist. Often not bound to your tenant’s actual decisions. | Per-tenant ledger of every claim override, weight change, and queue approval. The ledger is the artifact.

Vendor positioning summarized from public 2026 pricing/marketing pages. None of the listed vendors had a GA first-party MCP server at the time of writing.

$0

MCP server, Claude integration, ChatGPT integration, Gmail integration. All of them ship on the $0 Starter plan, no credit card. Same toolset on every plan.

Chosen HQ pricing page, Starter tier, April 2026

The numbers that matter when “open” is the question

Not all of these favor us. We list them because the buyer should see the math.

3 · Lanes inside one phrase. Most lists name two.
10 · Recruiter-intent tools, every plan, every client.
33 · Tools in the Ashby community MCP wrapper, read-only by default.
$0 · Starter plan, no credit card, MCP unlocked.

Tool counts: Chosen pricing page and the Ashby community MCP listing on Composio/Truto. The Starter line item is a direct quote from Chosen’s pricing page.

Three reasons protocol beats repo for AI recruiting in 2026

The 2019 case for “open source ATS” was data residency and exit cost. Both of those still matter. The 2026 case is harder to make for the repo and easier to make for the protocol. Three reasons.

Lock-in moved up

1 protocol, every client

The thing the buyer cares about (the agent layer) lives one layer above the database now. If the database is open but the agent is siloed, you bought lock-in at the layer that matters. MCP is the open protocol at the agent layer.

Compliance is per-decision

4 jurisdictions, one ledger

NYC LL144, IL HB 3773, CO CAIA, EU AI Act each demand a defensible reason for every automated hiring decision. A claim-override logged through an MCP tool call is that reason. A self-hosted database with no agent layer cannot produce one. A closed AI suite with an opaque score also cannot.

Inbound volume broke


AI-fabricated resumes are routine. Sourcing, scoring, and the fraud signal that comes from claims that cannot be sourced to a resume span are not features you bolt onto OpenCATS over a weekend. They are the workspace.

How to plug Chosen into an open-source agent client today

Five steps from a fresh laptop to a working recruiter loop driven by Goose, Continue.dev, or your own SDK script. Total time is roughly fifteen minutes if Goose is already installed.

1. Sign up at $0

Starter plan, no credit card, three open reqs. Your tenant exists the moment you log in.

2. Install Goose (or pick another open-source MCP client)

Goose is Apache 2.0 from Block. Continue.dev, mcp-agent, the Python and TypeScript MCP SDKs, and the official MCP Inspector all work.

3. Drop the MCP server URL into the client config

https://mcp.10xats.com/v1. Run the OAuth handshake. The tool list (sourcing.search_persona, match.list_claims, queue.approve, ...) shows up immediately.

4. Connect Gmail and Cal.com once

Two OAuth flows. After this, scheduling drafts land in Gmail under your address and the Cal.com invites go out from your calendar.

5. Run a real recruiter loop from the client of your choice

Write a persona in plain English, watch the dossiers come back with claims and evidence spans, advance two to panel, approve the drafts in Gmail. The audit ledger updates regardless of which client you used.

The buyer’s checklist for “is this stack actually open?”

Take this list into any vendor pitch labeled “open source” or “open AI recruiting.” The answers place a product on the right shelf in about ten minutes.

Ten minutes against any vendor

  • Show me the public MCP server URL. If there is no first-party MCP, you are buying a single-client UI, not a workspace.
  • Drive the workspace from an open-source client (Goose or Continue.dev) on the call. If the demo only works inside the vendor's own UI, the openness is marketing.
  • Call a write tool. Show me what reaches the candidate. If the write fires directly, the human-in-the-loop layer does not exist.
  • Show me the audit log for the override the agent just proposed. Actor, timestamp, prior weight, new weight. A chat transcript is not an audit log.
  • Tell me, on the spot, what the MCP integration costs. If it is gated behind Enterprise, the buying conversation is going to be six months long.
  • Show me the Match Rating output the agent reads. Claim-by-claim with evidence spans. Not a single similarity number.
  • Walk me through the data posture. No training on customer data, contractually. Per-tenant data residency. OAuth scoped to my tenant only.

The ledger Goose sees, the same one Claude sees

The thing that makes the protocol work is that the safety surface lives at the server, not at the client. Whichever open-source client the recruiter is in, the same two artifacts come back: claim-attached scoring, and queue-shaped writes.

What the client reads

A claim, not a score

Each req has 5 to 15 testable claims, classified must-have / nice-to-have / red flag, with weights and the resume span that supports each one. The agent quotes the span back to the recruiter in a sentence. There is no opaque similarity number to defend.

What the client writes

A queue position, not a side effect

Every write tool returns a queue position rather than firing the action. The recruiter approves from Gmail or the Chosen web view, and the audit ledger captures actor, timestamp, prior, and new value. That ledger is the bias-audit artifact a regulator can read.

The one-paragraph version

The 2026 list of “open source AI recruiting tools” is honest only if it splits into three lanes. Lane one is the self-hostable applicant tracking systems (OpenCATS, CandidATS, Eazyrecruit), useful as a system of record, missing the agent layer entirely. Lane two is the open-source agent runtimes (Goose, Continue.dev, mcp-agent), general-purpose, with no recruiting workspace to drive. Lane three is the workspace whose agents are reachable over an open protocol so any of those clients can drive them. Chosen HQ ships lane three on the $0 Starter plan, with a public MCP server, recruiter-intent tools, queue-shaped writes, and a claim-attached audit ledger.

If the page that brought you here did not make that split, this one did. Bring whichever client you want.

Drive Chosen from your open-source agent on the call

30 minutes with the team. Bring Goose, Continue.dev, or your own MCP script. We paste the MCP URL, run the tool list live, and approve the first scheduling draft from your Gmail.

Questions buyers actually ask about ‘open’

What does 'open source AI recruiting tool' actually mean in 2026?

The phrase has split into three things readers conflate. First, an open-source applicant tracking system: code you self-host, OpenCATS / CandidATS / Eazyrecruit being the canonical examples, with no agentic layer at all. Second, an open-source AI agent runtime: things like Goose from Block, Continue.dev, OpenInterpreter, the Anthropic mcp-agent SDK, all general-purpose, none recruiting-specific. Third, a recruiting workspace whose agent graph is reachable over an open protocol (MCP), where any of those clients can drive named recruiter agents but the writes are still gated by a human approval queue. Most articles list the first two lanes and call the question answered. The buyer's actual 2026 question is whether the third lane exists for the workspace they want.

Are OpenCATS, CandidATS, and Eazyrecruit AI recruiting tools?

They are open-source applicant tracking systems with optional resume parsing. Useful as a system of record, particularly if data residency is the buying constraint. They do not ship a sourcing agent, a scheduling agent, or a Match Rating model. The agentic layer, if you want one, is something you assemble yourself out of an LLM API and your own glue code. None of them ship a public MCP server, so an external agent client cannot drive them as tools without you writing the wrapper. They belong on the list, but on the 'storage' shelf, not the 'agents' shelf.

Why does the MCP angle matter more than whether the source code is on GitHub?

Because in 2026 the lock-in pattern moved up the stack. The thing the buyer cares about, the agent that drafts outreach, scores resumes, schedules panels, lives one layer above the database. If that layer is reachable over an open protocol (Model Context Protocol), you can swap or pair the client (Claude Desktop, Goose, ChatGPT, your own script) without rewriting the workspace. If the agent only lives behind a vendor UI, you bought a silo regardless of whether the database underneath was open-source. The repo being public matters less than the agent being reachable.

Can I run an open-source MCP client against the Chosen MCP server?

Yes. The MCP server URL is `https://mcp.10xats.com/v1`. It works with Goose (Block's open-source agent), Continue.dev, the Anthropic `mcp-agent` Python SDK, the official MCP SDKs in TypeScript and Python, plus the closed clients (Claude Desktop, ChatGPT custom GPT). The same recruiter-intent tools (`sourcing.search_persona`, `match.list_claims`, `match.override_weight`, `queue.approve`) are exposed regardless of which client you use. Writes always return a queue position rather than firing the side effect, so the safety boundary is enforced at the server, not the client.

Is the MCP server on the free plan a marketing trick? What is actually unlocked at $0?

The MCP server is unlocked the moment a tenant exists. Starter is $0 with no credit card, capped at three open reqs and unlimited candidate profiles. The same toolset shipped to Growth and Enterprise is exposed on Starter. The plan limits reqs and seats, not the protocol surface. There is no 'agentic tier' gated behind a sales call (a pattern visible at SeekOut and Eightfold) and no minimum seat count.

How is this different from gluing an open-source LLM to OpenCATS myself?

You can do it. You will write three things you do not have today: (1) a tool layer that the LLM can call against the OpenCATS REST endpoints, (2) a queue that intercepts every candidate-facing write so the model cannot send a bad email, and (3) an audit log shaped to NYC LL144 / IL HB 3773 / CO CAIA / EU AI Act high-risk hiring rules. Chosen ships those three as the core of the product. Match Rating extracts 5 to 15 testable claims per JD with weights and resume evidence spans, and every override is logged with actor, timestamp, prior, and new weight. That ledger is the bias-audit artifact. A chat transcript with your model is not.

Which closed AI recruiting tools could I plug an open client into instead?

As of April 2026, no general-availability MCP server from Gem, Ashby, Greenhouse, Eightfold, Paradox, or SeekOut. Community wrappers exist for Ashby and Greenhouse on the Anthropic and Smithery MCP marketplaces; the canonical Ashby community MCP exposes about 33 tools mapped one-to-one to the Ashby API and is recommended read-only by default because the underlying ATS has no human-in-the-loop queue. Lever has a partner Claude integration. None of them ship a first-party MCP server with writes routed through an approval queue out of the box.

Does a public MCP server mean my candidate data leaks across clients?

No. The MCP server runs inside your Chosen tenant, OAuth-gated. A client is a session, not a tenant. Your data is reachable only to clients that completed the OAuth handshake against your tenant. The Chosen privacy policy is contractual: no training on customer data, ever. Anthropic and OpenAI both default to no training on paid API tiers. Adding an open-source client on top does not change that posture.

Why is 'human in the loop' the differentiator and not 'fully autonomous'?

Because the 2026 regulatory shape pushed it. NYC DCWP Local Law 144 demands a published bias audit and a candidate notice. Illinois HB 3773 holds employers liable for AI-driven discrimination. Colorado CAIA requires risk management for high-risk hiring decisions. The EU AI Act classifies hiring as high-risk and requires human oversight by definition. A queue where every candidate-facing write becomes a draft a recruiter can read, edit, and approve is the cheapest way to be compliant in all four jurisdictions at once. A fully autonomous agent that emails candidates without a tap is not.

Where does Chosen sit on the 'is it open source' question itself?

The product is closed-source SaaS. The agent graph is open-protocol: MCP server, tool documentation, OAuth, no client lock-in. The bet is that 'open' has moved one layer up. The buyer who picks an ATS in 2026 because the source is on GitHub but the agent layer is missing is solving the 2019 problem. The buyer who picks a workspace whose agents are reachable from any client they bring is solving the 2026 one. We try to be honest about which lane we are in.