Guide · 2026 field manual
AI-powered applicant tracking system: the approval queue is the product.
Most guides on this topic describe AI as something an applicant tracking system does to candidates in the background. This one argues the opposite. The feature worth buying in 2026 is the surface where a recruiter sees every draft, every score, every claim, and clicks approve. Here is what that actually looks like inside a product, and what to ask any vendor before you sign.
The phrase is doing two different jobs
When someone searches for an AI-powered applicant tracking system, they are asking one of two questions. The first: I want a faster version of my current ATS. Resume summarisation, interview-note autofill, a chat box on top of Greenhouse. The second: I want named agents that own recruiter work, with a human watching every step. The market conflates these. The products do not.
Most pages that currently cover this topic list bullet points: auto-screen, auto-rank, auto-schedule, auto-reject. The author never chooses between the two questions above. They hedge, then publish. A buyer walks away unable to tell whether the product is a slightly better Greenhouse or an autonomous agent pipeline that will send cold emails without them.
Chosen HQ picks the second. Four agents, each with a name and a job. Every action that would reach a candidate sits in an approval queue first. Every score a recruiter uses to advance or reject carries the claims and weights behind it. The AI does the heavy lifting. The recruiter owns the decision.
The anatomy of the approval queue
Four named agents feed one queue. The queue is the product surface a recruiter spends their morning inside.
Four agents → one approval queue → candidate-facing actions
The numbers that justify the shape of the product
The 14-emails-per-interview and 1,680-emails-per-quarter figures are drawn from the hiring retro that prompted Chosen to exist. The claim-count range is the design target of Match Rating: enough testable claims to defend a score, few enough to review in under a minute.
What a single Match Rating score looks like, opened up
The black box is the thing that fails a Local Law 144 audit and loses you a good candidate who looked wrong on the first pass. This is the alternative.
5 to 15 testable claims extracted per JD
Parsed from the job description and your org's criteria. Not a vibe score. Not a resume-embedding cosine. A list of short, checkable statements a person could argue with.
Must / nice / red flag
Every claim is classified with a visible weight. The recruiter can change the weight before or after the score runs.
Evidence span linked
The supporting line or paragraph from the resume is cited next to each claim. If the AI cannot find evidence, it is marked not supported, not guessed.
Override and re-run
Flip a claim, change a weight, re-run the score. The previous version is logged. That log is the bias-audit artifact regulators ask for under NYC DCWP, Illinois HB 3773, Colorado CAIA, and the EU AI Act high-risk hiring rules.
No training on your data
Candidate resumes and org criteria never enter a shared training corpus. Contractual and architectural.
One tap to advance
Score approved, the candidate moves stage. Score rejected, it stops there with the override reason logged.
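The pieces above compose naturally into a small data model. Here is a minimal sketch of what an opened-up score could look like, assuming a simple weighted-sum rule normalised against the maximum possible score. The names `Claim`, `MatchRating`, the weights, and the scoring formula are all illustrative assumptions, not Chosen HQ's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative weights for the three claim classes; a supported red flag subtracts.
WEIGHTS = {"must": 3.0, "nice": 1.0, "red_flag": -3.0}

@dataclass
class Claim:
    text: str                       # short, checkable statement parsed from the JD
    kind: str                       # "must" | "nice" | "red_flag"
    evidence: Optional[str] = None  # cited resume span, or None => "not supported"

    @property
    def supported(self) -> bool:
        return self.evidence is not None

@dataclass
class MatchRating:
    claims: list
    history: list = field(default_factory=list)  # prior versions: the audit trail

    def score(self) -> float:
        # Supported claims contribute their weight; unsupported ones contribute 0.
        # Normalised against the best achievable score, floored at 0.
        earned = sum(WEIGHTS[c.kind] for c in self.claims if c.supported)
        possible = sum(WEIGHTS[c.kind] for c in self.claims if WEIGHTS[c.kind] > 0)
        return max(0.0, earned / possible) if possible else 0.0

    def reclassify(self, claim_text: str, new_kind: str, reason: str) -> float:
        # Log the previous version with the override reason, then re-run.
        self.history.append((claim_text, reason, self.score()))
        for c in self.claims:
            if c.text == claim_text:
                c.kind = new_kind
        return self.score()
```

Flipping a claim from must to nice and re-running is then one call, and the `history` list is the shape of log the bias-audit requirements point at: who changed which weight, why, and what the score was before.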
A Monday morning with the approval queue open
This is the actual rhythm. Not a rendered demo, the work.
07:40 — 23 drafts waiting
Overnight, the Sourcing Agent added 12 dossiers to the review queue for the Infra SWE req. The Scheduling Agent drafted 11 replies: three reschedules, six confirms, two follow-ups.
07:45 — sweep the scheduling stack
Each draft opens inline in the queue. The recruiter edits tone on one, approves the rest in a sequence. Ten of eleven go out before coffee. The industry-baseline version of this is 14 emails per interview times the open interviews on the board.
08:10 — review three Match Rating scores
Each score lists 9 to 12 claims. The recruiter agrees with the must-haves but downgrades a claim about Kubernetes seniority from must to nice, and re-runs. Candidate #2 flips from reject to advance. The override and reasoning are logged.
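The flip at 08:10 is easy to reproduce with toy numbers. Assuming a weighted-sum score with must-have weight 3, nice-to-have weight 1, and an advance threshold of 0.75 (all of which are illustrative assumptions, not Chosen HQ's actual rule), downgrading one unsupported must-have lowers its cost against the maximum possible score enough to cross the line:

```python
# Illustrative weights and threshold; not the product's actual scoring rule.
WEIGHTS = {"must": 3.0, "nice": 1.0}
THRESHOLD = 0.75  # advance at or above this normalised score

def normalised_score(claims):
    # claims: list of (kind, supported) pairs
    earned = sum(WEIGHTS[kind] for kind, supported in claims if supported)
    possible = sum(WEIGHTS[kind] for kind, _ in claims)
    return earned / possible

# Candidate #2: ten claims, one unsupported must-have about Kubernetes
# seniority. Weighted 3, that single gap drags the score below threshold.
before = normalised_score(
    [("must", True)] * 3 + [("must", False)]
    + [("nice", True)] * 4 + [("nice", False)] * 2
)
# Downgraded to a nice-to-have (weight 1), the same gap costs far less.
after = normalised_score(
    [("must", True)] * 3 + [("nice", False)]
    + [("nice", True)] * 4 + [("nice", False)] * 2
)

print(f"before: {before:.2f} -> {'advance' if before >= THRESHOLD else 'reject'}")
print(f"after:  {after:.2f} -> {'advance' if after >= THRESHOLD else 'reject'}")
```

Nothing about the candidate changed; only the recruiter's judgment about what the role actually requires did, and the override is what gets logged.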
08:30 — dossier review for the outbound sprint
Sourcing Agent returned 12 dossiers for the new Series B infra persona. Each dossier shows which criteria the evidence supports and which it does not. Five get approved into outbound, three get persona feedback, four are rejected.
08:55 — analytics question for the board
VP of Talent asks for offer-accept by source, by level, last four quarters. Recruiter types the sentence into Analytics. The chart is back before the 9am standup.
“The recruiter sent 1,680 scheduling emails that quarter. The number showed up in a retro and became the reason Chosen exists.”
Chosen HQ founding story
How this differs from the other products in the category
Every comparison below is drawn from public material as of April 2026. Vendors ship weekly and the facts move; check the current plans before you sign anything.
| Feature | Typical AI-bolted-on ATS | Chosen HQ |
|---|---|---|
| Primary surface | ATS record with an AI sidekick button | The approval queue. Every AI draft passes through it. |
| Pricing | Gated behind a demo, often per-seat plus AI add-ons | $0 / $99 founding / $399 after / custom enterprise. Published. |
| Match scoring | Opaque resume-to-JD similarity or black-box grade | 5 to 15 testable claims, visible weights, evidence spans, overridable |
| Scheduling | Automatic send with AI drafting | Drafts in Gmail, one-tap approve, reschedule re-asks the panel |
| Compliance posture | Policy page and a bias-audit PDF from the vendor | Claim log is the audit trail, per-jurisdiction toggles on Enterprise |
| Data training | Consent-based or unclear opt-out | Contractual and architectural: no training on customer data |
| Migration off the current ATS | Implementation project, quarter or more | Self-serve on Starter/Growth, day-or-two on Greenhouse Essential |
Facts as of April 2026. Verify with the vendors before signing.
The questions to ask any vendor before the signature
2026 AI ATS buyer checklist
- Name the approval surface. Where exactly does a human click approve before an email, a score, or a rejection leaves the product? If the answer is nowhere, the product is a different shape than what the term AI-powered now has to mean.
- Open a Match Rating score in front of the sales engineer. Ask to see the claims, the weights, and the evidence span for each. If the score opens as a single number, ask how you audit it.
- Ask for the current published price for a 10-recruiter team. If the answer requires a demo, treat the demo as a cost and the price as a range.
- Ask whether the AI trains on your candidate data. Get the answer in writing in the DPA.
- Ask how a recruiter turns AI off per jurisdiction, per role, and per stage. If the only switch is global, your Colorado and New York reqs will not work the same way as your remote-US reqs.
- Ask for the migration path off Greenhouse, Lever, or Workday Recruiting, priced, with timelines. A renewal conversation is more honest than a new-customer pitch.
Why the approval queue holds up in 2026 specifically
Three regulatory deadlines and one technology shift are converging on the same answer.
Regulatory
Four jurisdictions, one audit trail
NYC LL144, IL HB 3773, CO CAIA, EU AI Act each demand a defensible reason for every automated hiring decision. Claim-by-claim scoring is that reason in the shape the regulators ask for.
Fraud
AI-scale inbound applicant volume per role
AI-fabricated resumes are a normal input now. Claim spans that cannot be sourced to specific resume text are the earliest red flag, surfaced before the phone screen.
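That red-flag pass is mechanically simple once every claim carries an evidence span. A minimal sketch, assuming claims are records with an optional `evidence_span` field (the field names and sample data are illustrative):

```python
# Each extracted claim either cites a resume span or could not be anchored.
# Claims with no anchored span surface first, before the phone screen.
def unsupported_claims(claims):
    """Return the claims that could not be sourced to specific resume text."""
    return [c for c in claims if c.get("evidence_span") is None]

dossier = [
    {"claim": "Led a 6-person platform team", "evidence_span": "resume lines 12-14"},
    {"claim": "Scaled ingest to 2M events/s", "evidence_span": None},  # no anchor
    {"claim": "Production Kubernetes since 2019", "evidence_span": "resume line 7"},
]

for flagged in unsupported_claims(dossier):
    print("RED FLAG (no sourced evidence):", flagged["claim"])
```

The point is that fraud detection falls out of the scoring data model rather than being a separate classifier: a fabricated resume tends to produce claims nothing in the document can anchor.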
Market
One flat, published price
The mid-market ATS category is in forced reconsideration after the Workday-Paradox acquisition and the SAP SmartRecruiters deepening. A published price is the only way a buyer can do the math in public.
The short version
An AI-powered applicant tracking system in 2026 is not a product with a chat box. It is a product where the recruiter’s attention is the most expensive resource, and every AI action either respects that attention or wastes it.
The approval queue is how you respect it. Claim-by-claim scoring is how you defend the decisions that come out of it. A flat published price is how you make sure the vendor does not price the AI as a separate SKU next year.
If you want to see this running against a live req, book a 30-minute call and we will open the queue on screen.
Walk through your current ATS renewal, line by line
30 minutes with the team. We open the approval queue against a real req from your pipeline.
Book a call →
Questions buyers actually ask
What does it mean for an applicant tracking system to be AI-powered?
In 2026 the term covers three very different things. Some systems bolt a resume-summariser onto a classic ATS and call it AI. Some run autonomous chatbot agents that message candidates without any human in the loop. Chosen HQ is a third shape: named agents own recruiter workflows end-to-end (sourcing dossiers, scheduling drafts, match scoring, analytics), but every action that touches a candidate sits in a one-tap human approval queue. The AI does the work. The recruiter makes the calls.
What exactly is the one-tap approval queue?
It is the primary surface of the product. When the Sourcing Agent finishes a persona search, dossiers arrive in a review queue, not an outbound campaign. When the Scheduling Agent drafts outreach, a reschedule, or a follow-up, every message sits as a draft the recruiter can edit and approve with one click before Gmail sends it. When Match Rating proposes a score, the claims and weights behind it are visible and overridable before the candidate advances. Nothing leaves the building without a human touch.
What is claim-by-claim Match Rating and why does it matter for compliance?
Match Rating extracts 5 to 15 testable claims from each job description and your organisation's criteria, classifies each one must-have, nice-to-have, or red flag with a visible weight, and sources the supporting evidence to a specific span in the resume. Every claim, every weight, every piece of evidence is visible, overridable, and re-runnable. That audit trail is the review flow NYC DCWP Local Law 144, Illinois HB 3773, Colorado CAIA, and the EU AI Act high-risk hiring obligations were written for. A black-box score cannot satisfy a why-this-score notice requirement. A list of checked claims with citations can.
How does this compare with Ashby, Gem, Paradox, Eightfold, or SeekOut?
Ashby and Gem are ATS or sourcing platforms with AI features bolted onto a pre-existing product; both start at a higher price floor and require a demo to see that floor. Paradox is excellent at QSR frontline hiring and, after Workday acquired it in October 2025, is now positioned for enterprise QSR use cases. Eightfold is enterprise-first and does not source outbound. SeekOut is a sourcing platform with an agentic tier gated to larger accounts. Chosen HQ is the only one of these products that publishes a flat per-plan price, ships every agent on every plan, and makes the approval queue the primary surface instead of an optional setting.
What is the pricing, published plainly?
Starter is $0 for up to three open reqs, no credit card. Growth is $99 per month as a founding-member price, $399 per month after. Enterprise is custom for teams above 250 and comes with SSO, SCIM, audit log, per-jurisdiction automation toggles, and a published bias-audit artifact. Every agent, every feature, on every plan. The Growth price is on the pricing page, not gated behind a demo.
Does the AI train on our candidate data?
No. The privacy policy is explicit: no AI training on customer data. It is a contractual and architectural commitment, not a best-effort posture. Candidate resumes, personas, and org criteria stay inside your tenant.
What about AI-fabricated resumes and deepfake candidates?
The claim-by-claim evidence trail is a fraud-resistant review flow. Every claim is sourced to a specific span in the resume, so contradictions, missing evidence, and statements the AI could not anchor surface as red flags before a recruiter spends 30 minutes on a phone screen. The evidence spine that already lives in the scoring system is the fraud-detection layer; you do not buy it as an add-on.
Who should not buy this?
CHROs at enterprises running millions of interviews per year. QSR frontline hiring at 10,000 seats. Staffing agencies reselling a platform. Teams who want a course on how to recruit. Chosen HQ is built for Series A to mid-B tech teams, 40 to 250 people, one or two recruiters carrying 12 to 25 concurrent reqs, VPs of Talent accountable to a board for defensible data.