Guide · The recruiter’s Claude stack, end to end
Anthropic, Claude, and your hiring ATS: the missing piece is the approval queue.
Every other guide for plugging Claude into hiring shows the same shape: install a third-party MCP server, point it at Greenhouse or Ashby, leave it in read-only mode, and copy answers back into the ATS UI by hand. That is a query interface, not a workspace. This guide describes the other shape, the one Chosen HQ shipped on day one: an agent graph that owns the work, with Claude as a first-class front door and every write routed through a one-tap human approval queue.
Two patterns for connecting Claude to hiring
Pattern one, the one most articles describe, is the read-layer pattern. A recruiter installs a community MCP server (Ashby, Greenhouse, Lever, or a unified wrapper), pastes an API key into Claude Desktop, and gets a chat surface that can answer questions about pipeline state. Almost every public guide for this setup recommends restricting the integration to read-only tools first, because the underlying ATS has no human-in-the-loop spine for writes. Method filtering, tool tags, scope clamps: whatever the wrapper exposes, the advice is the same. Read first, write later, and you build the audit on your own time.
Pattern two is the workspace pattern. The agent graph is the product. Claude is one of several front doors, and the moment Claude proposes an action that touches a candidate, the action does not fire. It produces a draft in a queue you tap from Gmail or the web. Reads still flow through Claude exactly as they would in pattern one. Writes flow through Claude too, but they land in front of a human first. The audit log is built in.
The two patterns share a brand and a protocol. They do not share a shape. The rest of this guide describes pattern two, with the receipts.
The two shapes, side by side
Claude Desktop talks to a community MCP wrapping Greenhouse, Ashby, or Lever. The recruiter asks questions, copies answers into the ATS, and performs the write inside the ATS UI. Writes that the MCP supports are usually disabled by default. The audit is whatever the recruiter writes in a Notion doc afterward.
- Read-only by default. Writes need a custom audit you build.
- The recruiter still opens the ATS UI to perform every action.
- No queue. No claim-attached scoring. Two systems of record.
- Bias audit cannot reference the chat. The chat is not the ledger.
The five Claude surfaces in a recruiter’s day
One agent graph, multiple Anthropic-shaped front doors. Each one hits the same approval queue.
How the wiring actually flows
Claude is the client. The Chosen MCP server fans out across the named agents. The approval queue collects every candidate-facing write before it reaches Gmail.
Claude → Chosen MCP → named agents → approval queue → Gmail
The MCP config, in 8 lines
Paste this into Claude Desktop’s MCP settings, sign in once with your tenant, and the named-agent toolset shows up under the attachment menu. No API keys to rotate, no scope toggles to argue about.
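A sketch of what that config entry could look like in `claude_desktop_config.json`, assuming a hosted MCP endpoint. The server name, URL, and transport value here are illustrative placeholders, not the documented values; the real URL comes from your tenant settings.

```json
{
  "mcpServers": {
    "chosen": {
      "url": "https://mcp.chosen.example/v1",
      "transport": "streamable-http"
    }
  }
}
```

The OAuth handshake fires on first use, so no credential lives in the file itself.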
The Chosen tool surface, ten of them
A community Ashby MCP exposes 33 tools mapped one-to-one to the Ashby API; the canonical Greenhouse one is read-only by default. The Chosen surface is shaped around recruiter intent, not API endpoints. Ten of the tools, abridged:
Every write tool returns a queue position, not a confirmation. The recruiter approves the queue, the queue triggers the side effect. The agent never reaches a candidate directly.
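As a sketch of that contract, assuming a response shape roughly like the one below (the field names and the `propose_advance` helper are illustrative, not the documented API):

```python
from dataclasses import dataclass

@dataclass
class QueuedWrite:
    """Illustrative shape of a write-tool response: a queue position, not a send receipt."""
    queue_id: str
    position: int
    action: str        # e.g. "pipeline.advance" or "scheduling.draft"
    candidate_id: str
    status: str        # stays "pending_approval" until a human taps approve

def propose_advance(candidate_id: str) -> QueuedWrite:
    # Hypothetical client-side view: the tool call returns a queue entry,
    # and the side effect only fires after the recruiter approves it.
    return QueuedWrite(
        queue_id="q_123",
        position=2,
        action="pipeline.advance",
        candidate_id=candidate_id,
        status="pending_approval",
    )

result = propose_advance("cand_42")
assert result.status == "pending_approval"  # nothing has reached the candidate yet
```

The design point is the return type: a confirmation would mean the agent already acted; a queue position means a human still stands between the agent and the candidate.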
A real Claude session, top to bottom
Tuesday morning. The recruiter has 12 open reqs, an Infra SWE panel to pull together, and a board metric the VP needs by standup. Here is the session that lands all three before 10am.
The full request path: Claude turn → Gmail send
Twelve seconds of wall-clock time. Two systems involved. One human tap.
Anthropic client → Chosen MCP → agents → queue → Gmail
The audit ledger Claude reads from, and writes to
When Claude asks for a match score, it does not get a number. It gets the claim ledger: a list of weighted claims with the resume span that supports each one. When Claude proposes an override, the override goes through `match.override_weight`, which logs actor, timestamp, prior weight, new weight. That ledger is the artifact a regulator can read. A chat transcript is not.
What Claude reads
A claim, not a score
5 to 15 testable claims per req, classified must-have / nice-to-have / red flag. Each claim has a weight and a resume span attached. Claude can quote the span back to the recruiter in a sentence. There is no opaque similarity number to defend.
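A hedged sketch of what one ledger could look like. The field names and example claims are illustrative, not Chosen's actual schema; the point is that every claim carries a weight and the resume span that backs it.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str           # the testable claim itself
    kind: str           # "must_have" | "nice_to_have" | "red_flag"
    weight: float
    evidence_span: str  # exact resume text supporting the claim; empty if unsourced

ledger = [
    Claim("3+ years on-call for production infra", "must_have", 1.0,
          "Carried the pager for the payments fleet, 2021-2024"),
    Claim("Has operated Kubernetes at scale", "nice_to_have", 0.5,
          "Ran a 40-node EKS cluster across two regions"),
]

# A claim with no evidence span is the earliest fraud signal the guide describes.
unsourced = [c for c in ledger if not c.evidence_span]
assert unsourced == []
```

Because each claim is a quotable span rather than a component of a similarity score, Claude can cite it verbatim when the recruiter asks why a candidate rated the way they did.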
What Claude writes
A logged override, not a side effect
Every weight flip, claim reclassification, or rerun is a tool call with actor, timestamp, prior, and new value. The ledger feeds the bias-audit artifact on Enterprise; the same ledger drives the why-this-score notice required under NYC Local Law 144 and the EU AI Act high-risk hiring rules.
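A sketch of the logged record, under the assumption that the row carries the four fields named above plus a claim id (the `override_weight` helper and its field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    claim_id: str
    actor: str
    timestamp: str      # ISO 8601, UTC
    prior_weight: float
    new_weight: float

def override_weight(claim_id: str, actor: str, prior: float, new: float) -> OverrideRecord:
    # Illustrative: each override tool call appends one immutable ledger row.
    return OverrideRecord(
        claim_id=claim_id,
        actor=actor,
        timestamp=datetime.now(timezone.utc).isoformat(),
        prior_weight=prior,
        new_weight=new,
    )

row = override_weight("claim_7", "recruiter@acme.example", 0.5, 1.0)
assert row.prior_weight == 0.5 and row.new_weight == 1.0
```

An append-only row per override is what makes the same ledger usable both as the workflow state and as the audit artifact.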
“MCP server, Claude integration, ChatGPT integration, Gmail integration. All of them ship on the $0 Starter plan, no credit card. Same toolset on every plan.”
Chosen HQ pricing page (Starter tier line item, April 2026)
Connect Claude to Chosen in five steps
The whole setup is a single config paste plus three OAuth pops. Most teams are running queries against pipeline data inside ten minutes.
Sign up at $0
Starter, no credit card, up to 3 open reqs. The MCP server is unlocked the moment your tenant exists.
Paste the MCP config into Claude Desktop
One server entry. OAuth handshake against your Chosen tenant. The 30-plus tools show up under Claude's attachment menu.
Connect Gmail and Cal.com
Two OAuth flows. After this, scheduling drafts land in the recruiter's Gmail and the panel re-asks happen on the recruiter's Cal.com.
Write one persona, run the Sourcing Agent
A paragraph, not a boolean query. Twelve dossiers come back with the claims already matched to evidence spans. Claude can read all of it.
Approve the first batch from Gmail
Drafts arrive in your inbox, labeled by the queue. Tap approve. Cal.com invites go out under your address. The audit log is updated.
Chosen MCP vs. a community ATS MCP, side by side
Read every public Ashby/Greenhouse Claude MCP guide as of April 2026 and the recurring shape is the same: read-only by default, custom audit on you, no queue, no claim ledger. Here is what a like-for-like comparison looks like.
| Feature | Community ATS MCP (Ashby / Greenhouse / Lever wrapper) | Chosen MCP server |
|---|---|---|
| Default mode | Read-only. Writes recommended off until you build an audit. | Read and write. Writes land in the approval queue by default. |
| Tool shape | 1-to-1 mapping of the underlying ATS API. ~33 tools for Ashby. | Recruiter-intent tools (`sourcing.search_persona`, `queue.approve`, `match.override_weight`). |
| Approval surface | None. The agent fires writes directly. You wrap your own queue. | First-class queue, surfaced in Gmail and the web view. One-tap approve. |
| Match scoring shape | Whatever the ATS exposes. Often opaque similarity or vendor grade. | Claim-by-claim ledger with resume evidence spans, returned to Claude. |
| Audit artifact | Chat transcript plus the ATS audit log, reconciled by hand. | Single ledger of claim overrides with actor/timestamp/prior/new. |
| Pricing | Free wrapper, but you pay the underlying ATS at $400-$2k/seat/yr. | Ships on $0 Starter (3 reqs). Growth $99 founding / $399 after. |
| Data training | Depends on the wrapper. ATS DPA may not address Claude inputs. | No training on customer data. Contractual and architectural. |
Facts as of April 2026, drawn from public MCP marketplace listings and vendor pricing pages.
The recruiter day, in numbers
The Ashby tool count is from the Truto Composio MCP listing. The 14-emails-per-interview baseline and Starter pricing are from Chosen’s own how-it-works and pricing pages. The intent-tool count above is the abridged surface used in this guide.
Three pressures forcing this shape in 2026
The MCP-as-query-layer pattern was fine in 2025 because writes were a stretch goal. Three things changed.
Protocol maturity
One standard, every Anthropic client
Claude Desktop, Claude Projects, the Anthropic API SDK, and the ChatGPT GPT builder all speak MCP. One server, every client. The buyer who picks an ATS without a first-party agent surface is buying a silo to rewire in 18 months.
Regulation
Four jurisdictions, one ledger
NYC LL144, IL HB 3773, CO CAIA, EU AI Act each demand a defensible reason for every automated decision. A claim override logged through an MCP tool call is that reason. A Claude transcript is not.
Resume fraud
Fabricated resumes in every inbound pile
AI-fabricated resumes are a normal input now. Claims that cannot be sourced to a specific resume span are the earliest fraud signal. Surfacing that signal to Claude as text means the recruiter sees it before the phone screen, not after.
The 10-minute due-diligence script
Take this list into any conversation with a vendor selling a Claude integration for hiring. The answers separate the query-layer products from the workspace ones quickly.
Ten minutes with the sales engineer
- Open Claude with your MCP server attached. Ask a write tool to advance two candidates. Show me what reaches the candidate. If the write fires directly to the candidate, the queue does not exist.
- Show me the audit log for the override Claude just proposed. It should have actor, timestamp, prior weight, new weight, claim id. A chat transcript is not an audit log.
- Show me the Match Rating output Claude reads. It should be a list of testable claims with evidence spans, not a single similarity number.
- Tell me, on the spot, what the MCP server costs. If it is gated behind Enterprise, the buying conversation is going to be six months long.
- Tell me whether Claude inputs and outputs from this session are used to train any model. Get it in the DPA.
- Tell me what happens when a recruiter reschedules one panelist. The reschedule should re-ask the other three on its own. If it does not, every panel is a chase.
- Show me the same workflow from inside ChatGPT and from Slack. If the MCP server only works inside Claude Desktop, you are buying a single-client integration, not a workspace.
The one-paragraph version
The Anthropic Claude hiring stack worth buying in 2026 is not a wrapper around your existing ATS that lets Claude read pipeline state and recommends you keep writes turned off. It is an agent graph that owns the work, with Claude as one of several Anthropic-shaped front doors, and a one-tap approval queue between every Claude-initiated write and the candidate who sees it. Match scores are claim ledgers, not numbers. Overrides are tool calls, not chat suggestions. The same ledger that drives the workflow drives the bias audit.
Chosen HQ ships exactly that shape, and the MCP server is on the $0 Starter plan from day one.
Wire Claude to your real pipeline on the call
30 minutes with the team. Bring your Claude Desktop. We paste the MCP config, run the first three questions live, and approve the first scheduling draft from your Gmail.
Questions buyers actually ask
What does an Anthropic Claude hiring ATS actually look like in 2026?
Two patterns dominate. Pattern one is the read-layer pattern: a recruiter installs a third-party MCP server in Claude Desktop that wraps Greenhouse, Ashby, or Lever, asks questions of pipeline data, and copies answers back into the ATS UI to take action. Almost every public Ashby and Greenhouse Claude MCP guide explicitly recommends starting that integration in read-only mode, since the ATS itself has no human-in-the-loop spine for the writes. Pattern two is the workspace pattern: the agent graph is the recruiter's primary surface, Claude is one of several front doors, and any write action Claude proposes surfaces as a draft in a one-tap approval queue tied to a claim-by-claim audit log. Chosen HQ ships pattern two on day one, on the $0 Starter plan.
Why is the Ashby Claude MCP usually configured read-only?
The Ashby Claude MCP exposes 33 distinct tools mapped to the Ashby API, including writes for stage moves, notes, and outreach. The community guidance is to use method filtering or tool tags to restrict the agent to the read tools first. The reason is honest: a generic MCP write can reach a candidate without any approval surface in between. Adding that approval surface is a project the customer is told to do themselves. Chosen's MCP server is the inverse. Writes are first-class tools, but each write produces a draft in the Gmail or web approval queue. The queue, not the agent, decides what reaches the candidate.
Does Chosen require Claude, or is the web UI the primary surface?
Chosen ships five surfaces against one agent graph: ChatGPT, Claude, Gmail, Slack, and the Chosen web queue. Claude is the front door for the talent leaders who already live inside an Anthropic Project for board prep, persona writing, or comp benchmarking. The web queue is the front door for most coordinators. Both are equal-class. Pick whichever is closest to your hand.
What is in the Chosen MCP toolset that a generic ATS MCP does not have?
Three things you cannot get by wrapping Greenhouse or Ashby. First, named-agent tools: ask the Sourcing Agent for dossiers against a paragraph persona, not a boolean query. Second, queue-aware writes: every advance, draft, reschedule, or override returns the queue position rather than firing the action. Third, claim-attached scoring: every match score Claude reads is a list of 5 to 15 testable claims with weights and resume evidence spans, and the override is a tool call. Generic ATS MCPs return whatever the underlying API returns, and the audit and approval are someone else's problem.
Can a recruiter run their entire week from inside Claude?
For most reads and most drafts, yes. A reasonable Tuesday looks like: ask Claude which Infra SWE candidates have on-call experience in the last three years, advance two of them to panel, watch the two scheduling drafts land in Gmail, approve them with one tap, and then ask Claude for the offer-accept rate by source for the last quarter for board prep. The recruiter never opened the ATS UI. They opened Claude, Gmail, and the Cal.com link. The Chosen web queue is there as a backstop for high-volume sweeps, but the primary surface for that day was Claude.
What happens with bias-audit and Local Law 144 if Claude is making the calls?
Claude is not making the calls. The Match Rating agent extracts 5 to 15 testable claims from each job description, classifies each as must-have, nice-to-have, or red flag, and sources each claim to an evidence span in the resume. Claude reads that claim ledger and proposes overrides; the recruiter approves or edits each one. Every override is logged with actor, timestamp, prior weight, and new weight. That ledger is the artifact NYC DCWP Local Law 144, Illinois HB 3773, Colorado CAIA, and the EU AI Act high-risk hiring rules require. A black-box similarity score from a generic ATS plus a chat transcript with Claude is not.
Is the MCP server gated behind Enterprise pricing?
No. The MCP server, the Claude integration, and the ChatGPT integration ship on the $0 Starter plan, no credit card, up to three open reqs. Growth is $99 per month as a founding-member price ($399 after) for unlimited reqs and the rest of the surfaces. Enterprise is custom for teams above 250 and adds SSO, SCIM, the per-jurisdiction toggles, and a published bias-audit artifact. Every agent ships on every plan.
Will Anthropic train on our candidate data through the MCP server?
No. Two layers protect this. The Chosen privacy policy is contractual: no training on customer data, ever. Anthropic's Claude API, which the MCP server uses on the consumer side of the recruiter's session, also does not train on inputs or outputs by default for paid tiers. Resumes, personas, and org criteria stay inside your tenant.
How does this fit when the head of TA is the buyer and uses Claude for board prep?
That is the crispest fit pattern. A VP of Talent already has a Claude Project open for board narrative, comp band justifications, headcount planning. Connecting Chosen to that Project takes a single MCP server URL paste. After that, the VP can ask their Project for time-to-fill by level, last four quarters, broken out by source, and get a chart that matches the data the recruiters are operating against, not a CSV the analyst rebuilt last Wednesday. Same agents, same ledger, no spreadsheet handoff.