Guide · Two stacks, one buying decision

Your ATS already has SOC 2. The audit you fail in 2026 is a different one.

Every guide on applicant tracking system security compliance reads the same way. Encryption at rest. Encryption in transit. SSO, SCIM, RBAC. SOC 2 Type 2 once a year. ISO 27001 if the buyer is European. A retention window per the GDPR. That stack matters and every credible ATS has shipped it for a decade. None of it is what fails a 2026 hiring audit. The audit that fails is per-decision, per-candidate, and asks a question the perimeter stack cannot answer: why was this specific candidate rejected, which claims drove the rejection, what did the human reviewer change, and where is the row that proves it. This page is about the second stack, the one almost no ATS buying guide names.

Matthew Diakonov
13 min read

The perimeter stack is necessary and uninteresting

Walk into any procurement security review at a Series B tech company and the questionnaire is the same. Encryption at rest, AES-256. Encryption in transit, TLS 1.2 or higher. Single sign-on through SAML 2.0 or OIDC. SCIM 2.0 for user provisioning. Role-based access controls with a documented permission matrix. Audit logs of administrator actions, kept for at least one year. Independent third-party penetration testing on an annual cadence. A SOC 2 Type 2 report under AICPA SSAE 18, scoped to the security trust service criterion at minimum and ideally to availability and confidentiality too. ISO 27001 certification if the buyer has European customers. A signed data processing addendum that maps to GDPR Article 28. A documented retention policy with the ability to purge candidate records on request.

Every entry on that list is real. Every entry is also a solved problem. Greenhouse has been ISO 27001 and SOC 2 Type 2 audited for years. Ashby went through Vanta and shipped a SOC 2 attestation by 2021. Workday Recruiting has the entire enterprise security stack as a baseline. Lever has SOC 2 Type 2. Every credible ATS in the consideration set for a 40 to 250 person tech team has shipped this layer. The differentiator is not whether the vendor has it. The differentiator is what the vendor has on top of it.

Treat the perimeter stack as a gate. If the vendor cannot produce a current SOC 2 Type 2 report under NDA within five business days, fail them at the gate. If they produce the report and the controls are in scope, move to the part of the conversation that actually decides whether the product survives 2026.

The 2026 shift: per-decision compliance, not perimeter

Four anchors define the 2026 AI hiring law cluster, and they all judge the ATS on the same axis: what the system does on a per-decision basis, not what it does at the perimeter. New York City Local Law 144 entered enforcement in July 2023 and has been the template the rest of the regulatory pile copies from. It requires an annual independent bias audit of any automated employment decision tool that substantially assists or replaces a hiring or promotion decision, a public posting of the audit summary, and candidate notice at least ten business days before the tool is used.

Illinois HB 3773 amended the Illinois Human Rights Act effective January 1, 2026 to make it a civil rights violation for an employer to use AI in hiring, promotion, discharge, training, or discipline in a way that produces a discriminatory effect, and to require notice when AI is used. Colorado SB 24-205, the Colorado AI Act, was originally scheduled for February 1, 2026 with deployer obligations on high-risk AI systems including hiring, was amended during the 2025 session, and lands in 2026 with documented impact assessments and consumer notice obligations. The EU AI Act lists hiring and worker management systems as high-risk under Annex III, with conformity assessment, transparency, and post-market monitoring obligations phasing in through 2026 and 2027.

Read those four together and the same shape falls out. The regulator does not ask whether the resume was encrypted at rest. The regulator asks why a particular candidate was rejected, which AI inferences contributed to the rejection, which spans of the candidate's record were the evidence, what the human reviewer changed, and whether the candidate received notice that an automated decision tool was in use. Every one of those questions is per-decision. None of them is answerable from a SOC 2 control catalog.

The two stacks, in one table

The left column is what every ATS guide on this topic describes. The right column is what 2026 hiring law actually demands. The first is necessary. The second is the thing that decides whether you survive a candidate complaint.

| Feature | Perimeter security stack | Per-decision audit stack |
| --- | --- | --- |
| Unit of audit | The data store and the access controls around it. | A single AI-influenced decision, scoped to one candidate and one job. |
| Question it answers | Is the data confidential, available, and access-restricted? | Why was this candidate rejected, and which claims and overrides drove it? |
| Primary artifact | SOC 2 Type 2 report, ISO 27001 certificate, signed DPA. | Per-candidate claim ledger plus override log, queryable on request. |
| Evidence shape | Control narratives, sample tests, vendor attestations. | Row-level claims with weights, classifications, and resume span pointers. |
| Human reviewer signal | Administrator audit logs of who logged in and when. | Approval queue row showing what the reviewer changed and why. |
| Failure mode | Data breach, insider threat, unauthorized access. | Black-box rejection with no per-claim trail to defend it. |
| Refresh cadence | Annual third-party audit; report renewed each year. | Continuous; written every time the ATS scores or a recruiter overrides. |
| Regulator that cares | Customer procurement, infosec teams, AICPA auditors. | NYC DCWP, Illinois Department of Human Rights, Colorado AG, EU national authorities. |

Both stacks are required in 2026. The right column is what most ATS pages still treat as someone else's problem.

The claim ledger, in code

The data structure that answers the regulator's per-decision question is the claim ledger. One ledger per (candidate, job) pair. Each row is one claim extracted from the job description and the org's hiring criteria. The shape below is the minimum viable schema. Anything smaller fails the audit. Anything that compresses the row down to a single similarity score fails harder.

claim_ledger.ts
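The body of that file, as a minimal TypeScript sketch. The field names are illustrative; they track the fields discussed in the next paragraph, not a published 10xats schema.

```ts
// One ledger per (candidate, job) pair. Each claim gets an AI-produced row;
// each human override is appended as a separate row tied back to it.

type Classification = "must_have" | "nice_to_have" | "red_flag";

interface EvidenceSpan {
  source: "resume" | "cover_letter" | "application_answer";
  start: number;             // character offsets into the source text
  end: number;
  span_text?: string;        // optional cached copy; purged with the resume
  span_hash: string;         // e.g. SHA-256 of the span text; survives resume deletion
}

interface HumanOverride {
  reviewer_id: string;       // the field a regulator asks for first
  reviewer_role: string;
  timestamp: string;         // ISO 8601
  new_weight?: number;       // set only if the reviewer adjusted the weight
  new_classification?: Classification;
  note?: string;
}

interface ClaimLedgerRow {
  claim_id: string;          // stable across the AI row and its override rows
  candidate_id: string;
  job_id: string;
  claim_text: string;        // one testable claim from the JD and hiring criteria
  classification: Classification;
  weight: number;            // visible and overridable per req
  ai_score: number;          // the model's verdict on this claim alone
  evidence: EvidenceSpan;
  model_version: string;     // the AEDT version that produced the score
  override: HumanOverride | null; // null on the AI row; set on appended override rows
  created_at: string;        // ISO 8601
}
```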

Five things are doing work in that schema. The classification field encodes the hiring rubric (must-have, nice-to-have, red flag) so a reviewer can argue with the rubric, not just the score. The weight is visible and overridable so the recruiter can adjust the rubric per req instead of pretending the model picked the right weights. The evidence span hash survives resume deletion: if the candidate exercises a GDPR Article 17 right to erasure on the resume itself, the claim ledger row keeps the proof of what the AI saw without retaining the source text. The human override block is a separate row, not an edit to the AI row, so the original AI decision and the human decision both stay in the ledger; the row that ships is the most recent non-null one. The reviewer ID and timestamp are the fields a regulator will ask for first.
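The most-recent-non-null rule is mechanical enough to sketch. Assuming the ClaimLedgerRow shape above and ISO 8601 timestamps, which sort lexicographically:

```ts
// Resolve the row that ships for one claim: the latest override row wins;
// with no override rows, the original AI row stands.
function effectiveRow(
  ledger: ClaimLedgerRow[],
  claimId: string
): ClaimLedgerRow | undefined {
  const rows = ledger
    .filter((r) => r.claim_id === claimId)
    .sort((a, b) => a.created_at.localeCompare(b.created_at));
  const overridden = rows.filter((r) => r.override !== null);
  return overridden.at(-1) ?? rows.at(-1);
}
```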

Most ATS systems today produce a single floating-point match score per candidate. That score is a perimeter-stack artifact pretending to be a per-decision audit artifact. When a candidate complaint arrives and the regulator asks which claims drove the rejection, the score has no answer. The claim ledger has eleven.

The audit that fails in 2026 is the one where the regulator asks for the row that explains the rejection and the ATS hands over a confidence score and a SOC 2 report.

Pattern in candidate complaints under NYC LL144 and emerging Illinois HB 3773 cases

What human in the loop actually means at the data layer

The phrase human in the loop has been doing too much work in vendor decks for two years. Three of the four 2026 laws care less about whether a human is anywhere in the workflow and more about whether the human's input is captured as data. Colorado CAIA's deployer obligations require documented human review of consequential decisions. The EU AI Act distinguishes human oversight from human approval and requires that the reviewer have the information and authority to override, in a form that is reconstructable after the fact. NYC Local Law 144 distinguishes between AI that makes a decision and AI that substantially assists a decision; the audit obligation tracks both, and the substantially-assists definition turns on whether the human meaningfully changed the AI's output.

The data shape that satisfies all three is identical. Every AI-produced claim and every AI-produced score is a row. The row carries a column for human override with a timestamp, a reviewer ID, a reviewer role, and an optional note. The override is not a soft delete or a field replacement; it is a separate logged entry tied back to the AI row. The row that ships into the next step of the hiring funnel is the override, not the AI's first answer. If you cannot point at a row in your ATS that proves the human override happened, the human is not in the loop in any way the law recognizes.

A second, quieter test: the reviewer has to have authority to disagree. If the override field exists but the workflow gates the next step on the AI score, the reviewer is advisory, not in the loop. The law looks at whether the override actually controls the outgoing decision. The approval queue pattern (every AI write is a draft, the draft becomes the action only after a recruiter tap) survives that test. A dashboard where the AI fires the action and the reviewer audits afterward does not.
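A minimal sketch of that gate, with hypothetical names (Draft, approve, ship are illustrative, not a real 10xats API). The point is structural: nothing leaves the queue without a logged reviewer decision.

```ts
// Approval queue: every AI write is a draft; the action fires only on an
// explicit, logged reviewer decision. The AI score alone can never gate it.

interface Draft<T> {
  draft_id: string;
  payload: T;                          // what the AI proposes to do
  status: "pending" | "approved" | "rejected";
  decision?: {
    reviewer_id: string;
    timestamp: string;                 // ISO 8601
    edits?: Partial<T>;                // what the reviewer changed
    note?: string;
  };
}

function approve<T>(draft: Draft<T>, reviewerId: string, edits?: Partial<T>): Draft<T> {
  return {
    ...draft,
    status: "approved",
    decision: { reviewer_id: reviewerId, timestamp: new Date().toISOString(), edits },
  };
}

function ship<T>(draft: Draft<T>, send: (payload: T) => void): void {
  if (draft.status !== "approved") return; // the fire-then-audit dashboard fails here
  send({ ...draft.payload, ...(draft.decision?.edits ?? {}) });
}
```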

The regulator request, walked through

The shape of the actual workflow when one candidate complains. This is the test the per-decision audit stack is built for. A perimeter-only ATS gets stuck at step three.

One candidate complaint, end to end

1. Candidate → Regulator: files complaint, rejected by an automated tool.
2. Regulator → Employer: notice of investigation, scoped to the candidate.
3. Employer → ATS: pull the AEDT decision record for the candidate_id.
4. ATS → Employer: claim ledger + override log + notice receipt.
5. Employer → Regulator: 11 claims, 2 reviewer overrides, notice acknowledgment.
6. Regulator closes: the decision was substantively human-reviewed.

What the perimeter-only ATS hands over at step three

A perimeter-only ATS at step three of that flow has two artifacts to hand over: a similarity score (often a floating-point number to three decimal places) and an administrator audit log showing that a recruiter logged in and marked the candidate rejected. That is not the answer to the regulator's question. The regulator asked which claims drove the rejection. The ATS handed over a score and a login time. A reasonable regulator infers that the AI substantially assisted the decision (because there is no other input visible) and routes the case under whichever AEDT statute applies. The employer is now defending the model, which the employer did not build, with no internal visibility into how the model arrived at the score.

A claim-ledger ATS at the same step hands over the eleven-row ledger, the two override entries (one weight adjustment, one classification flip), the notice receipt showing the candidate was informed an AEDT was in use, the bias audit summary for the AEDT version that scored the candidate, and the retention status for the resume itself. The conversation moves from the AI to the rubric and the reviewer, which is where the law expects it to be.
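One illustrative shape for that handover, assuming the ClaimLedgerRow above. The AuditStore interface and its method names are hypothetical stand-ins for whatever the vendor's store actually exposes:

```ts
// One query against the candidate ID assembles the full regulator packet.

interface AuditPacket {
  ledger: ClaimLedgerRow[];            // AI rows and override rows together
  noticeReceipt: { delivered_at: string; channel: string } | null;
  biasAuditSummaryUrl: string;         // for the AEDT version that scored the candidate
  resumeRetention: { deleted: boolean; window_days: number };
}

interface AuditStore {
  claimRows(candidateId: string, jobId: string): Promise<ClaimLedgerRow[]>;
  noticeReceiptFor(candidateId: string): Promise<AuditPacket["noticeReceipt"]>;
  biasAuditUrlFor(modelVersion: string): Promise<string>;
  retentionStatus(candidateId: string): Promise<AuditPacket["resumeRetention"]>;
}

async function buildAuditPacket(
  db: AuditStore,
  candidateId: string,
  jobId: string
): Promise<AuditPacket> {
  const ledger = await db.claimRows(candidateId, jobId);
  return {
    ledger,
    noticeReceipt: await db.noticeReceiptFor(candidateId),
    biasAuditSummaryUrl: await db.biasAuditUrlFor(ledger[0]?.model_version ?? "unknown"),
    resumeRetention: await db.retentionStatus(candidateId),
  };
}
```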

The eight-question security review every ATS buyer should run in 2026

  • Show me a SOC 2 Type 2 report under NDA, scoped to the security trust service criterion at minimum, dated within the last twelve months.
  • Show me one candidate's per-decision audit trail. I want the claim ledger, the override log, and the notice receipt, queried by candidate_id.
  • Show me where the recruiter override lives in your data model. I want to see that it is a row, not a field, and that the timestamp and reviewer_id columns are present.
  • Walk me through how you keep the per-decision audit row beyond the resume retention window without retaining the resume itself.
  • Show me the bias audit summary you publish under NYC Local Law 144. Tell me which AEDT version it covers and how often it is refreshed.
  • Show me what your candidate notice looks like in production, where it is delivered, and where the receipt of the notice is logged.
  • Tell me what happens in your product if a recruiter approves the AI's score without changing anything. Is that captured as a distinct row from the no-review case? It must be.
  • Tell me which fields in your audit log a regulator can request without forcing a full data export. The answer should be at least: claim_id, override.timestamp, override.reviewer_id, evidence.span_hash; a sketch of that projection follows this list.
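Questions four and eight in code, as a sketch assuming Node's built-in crypto module and the ClaimLedgerRow shape from earlier:

```ts
import { createHash } from "node:crypto";

// Question four: hash the evidence span at scoring time. The hash proves
// what the AI saw and can be kept after the resume text is purged.
function spanHash(spanText: string): string {
  return createHash("sha256").update(spanText, "utf8").digest("hex");
}

// Question eight: the regulator-facing projection is four fields per row,
// not a full data export.
function regulatorFields(row: ClaimLedgerRow) {
  return {
    claim_id: row.claim_id,
    override_timestamp: row.override?.timestamp ?? null,
    override_reviewer_id: row.override?.reviewer_id ?? null,
    evidence_span_hash: row.evidence.span_hash,
  };
}
```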

Where the perimeter stack still wins, and what it cannot do alone

None of this argues against the perimeter stack. The perimeter stack stops the breach that ends up in TechCrunch. It stops the contractor exfiltrating the candidate database on their last day. It stops the GDPR Article 33 seventy-two hour notification clock from starting. Those are the failures that lose the customer base. Skipping the per-decision audit stack loses the customer to a regulator instead of a hacker; both losses count.

The honest counterargument to everything above is that Local Law 144 enforcement has been thin, Illinois HB 3773 only kicks in this year, the Colorado AI Act has been amended and partially delayed, and the EU AI Act has a phased timeline running through 2027. A reasonable procurement officer can argue that the per-decision audit stack is a leading indicator, not a 2026 failing-grade question. That argument is correct on enforcement volume today and wrong on procurement direction. Boards accountable for hiring data and customers running procurement security reviews are already asking the per-decision questions because the laws are public, the shape is set, and a vendor that cannot produce the artifact today is a vendor that has to ship a roadmap to do so by renewal.

The buy decision in 2026 is not whether to take the perimeter stack seriously. It is whether the vendor on the other side of the table can answer the per-candidate question when one candidate complains. If the answer is yes, you have a vendor who can defend their work. If the answer is a SOC 2 report and a confidence score, you do not.

Perimeter stack

Necessary, solved, uninteresting

AES-256 at rest, TLS 1.2+ in transit, SAML 2.0 SSO, SCIM 2.0 user provisioning, role-based access, administrator audit logs, SOC 2 Type 2, ISO 27001, signed DPA, documented retention windows under GDPR Article 5(1)(e) and CCPA. Treat as a gate. If a vendor in your consideration set fails any of these, end the conversation. Do not negotiate on the perimeter stack; it is the price of being on the list.

Per-decision audit stack

The 2026 differentiator

Per-candidate claim ledger with classification and weights. Row-level recruiter override with timestamp, reviewer ID, and note. Evidence span hash that survives resume deletion. Candidate AEDT notice with logged receipt. Independent bias audit summary tied to the AEDT version that produced the score. Queryable archive scoped per candidate. Negotiate here. The right answer is a live query into a real candidate's record, not a slide.

Where 10xats sits in this lens

10xats is in development. Waitlist members get the first invite and the first published price. The product was designed against the per-decision audit stack first. Match Rating produces a claim ledger of 5 to 15 rows per (candidate, job) pair, each row carrying a visible weight, a must-have / nice-to-have / red flag classification, the resume span the evidence came from, and a logged recruiter override. The recruiter approval queue is the override surface; every approve, edit, or reject on a candidate-facing draft is a logged row. The privacy policy commits, in writing, to no AI training on customer data. The terms commit to a Human Oversight Required posture as a contractual obligation, not a slogan.

None of that excuses the perimeter stack. The perimeter stack ships when the product ships, and the build is aligned to the same SOC 2 Type 2 controls every credible buyer will ask about at procurement. The argument is not that the perimeter stack is optional. The argument is that the per-decision audit stack is what should decide the buy, and the perimeter stack is what should decide whether the vendor is in the room at all.

If you are evaluating an ATS in 2026 and the per-decision audit stack is not on the questionnaire, add it. Ask the vendor for one candidate's audit trail. The quality of the answer is the buy decision.

The one-paragraph version

Two stacks. The perimeter is the gate. The per-decision audit is the buy.

Applicant tracking system security compliance in 2026 is two stacks running in parallel. The perimeter stack is encryption, access control, retention, and certification; every credible vendor has it, and a vendor without it should not be in the room. The per-decision audit stack is the per-candidate claim ledger, the row-level override log, the evidence span hash, the candidate notice receipt, and the queryable archive that survives a regulator request. The 2026 AI hiring law cluster (NYC Local Law 144, Illinois HB 3773, Colorado CAIA SB 24-205, EU AI Act Annex III) all judge the second stack. Most ATS vendors still ship only the first. Buy the one that ships both.

Want the per-decision audit walkthrough?

Join the waitlist; we will send the first invite, the first published price, and the per-candidate audit-trail demo.

Questions ATS buyers actually ask

What does applicant tracking system security compliance actually cover in 2026?

Two stacks. The perimeter stack is the legacy meaning: encryption at rest (AES-256), encryption in transit (TLS 1.2+), SSO, SCIM provisioning, role-based access, audit logs of administrator actions, SOC 2 Type 2, ISO 27001, GDPR data processing addendum, and a documented retention policy. Every credible ATS has had this stack for years and an auditor will check it in an afternoon. The per-decision AI audit stack is newer: a claim ledger per AI-influenced decision, a logged recruiter override on every claim and every score, evidence spans tying every claim to a piece of source text, candidate notice that an automated employment decision tool was used, and a queryable archive that lets you reconstruct any decision when a regulator or candidate asks. The 2026 AI hiring law cluster (NYC Local Law 144, Illinois HB 3773, Colorado CAIA SB 24-205, EU AI Act Annex III) lives in the second stack, not the first.

Why does a SOC 2 Type 2 report not satisfy NYC Local Law 144 or the EU AI Act?

Because they are answering different questions. SOC 2 asks whether the vendor's controls are designed and operating to protect customer data. NYC Local Law 144 (the New York City Department of Consumer and Worker Protection bias audit rule) asks whether a specific automated employment decision tool was independently audited for disparate impact, whether candidates received a notice that an AEDT was used, and whether a summary of the audit was posted publicly. The EU AI Act Annex III asks whether a high-risk hiring system maintains transparency obligations, conformity assessments, and post-market monitoring. None of those three questions are answerable from a SOC 2 report. They are per-decision and per-deployment questions. The data structure that answers them is the claim ledger plus the override log, not the SOC 2 control catalog.

What is a claim ledger and why does it matter for compliance?

A claim ledger is the data structure an AI-driven ATS produces every time it scores a candidate against a job. It is a list of 5 to 15 testable claims extracted from the job description and the org's hiring criteria. Each row carries the claim text, a numeric weight, a classification (must-have, nice-to-have, or red flag), the resume span the evidence came from, and any recruiter override. It matters because it is the only data shape that answers a regulator's actual question after a candidate complaint: why was this candidate rejected, which claims drove the rejection, which spans of the resume were the evidence, and what did the human reviewer change. A black-box similarity score has no answer to any of those. A claim ledger has the answer per row.

What is the difference between the perimeter security stack and the per-decision audit stack?

The perimeter stack protects the data store. The per-decision audit stack explains the inference. The first stack is what every ATS has had since the early 2010s: encryption, access control, retention windows, certifications. The second stack is what 2026 hiring laws require and almost no traditional ATS surfaces: a per-candidate claim ledger, override logs at the row level, evidence spans, candidate notice records, post-deployment bias monitoring, and a queryable archive scoped per candidate per decision. An ATS can be SOC 2 Type 2 certified, ISO 27001 audited, and GDPR compliant on retention, and still fail a Local Law 144 review or an EU AI Act Annex III assessment because none of those frameworks ask the questions the new laws ask.

Which laws make up the 2026 AI hiring compliance cluster?

Four anchors and a thickening regulatory pile around them. NYC Local Law 144 (effective enforcement July 2023, ongoing) requires an annual independent bias audit of any automated employment decision tool used to substantially assist or replace a hiring or promotion decision, public posting of the audit summary, and candidate notice 10 business days before the tool is used. Illinois HB 3773 (effective January 1, 2026) amends the Illinois Human Rights Act to make it a civil rights violation to use AI in hiring or promotion in a way that produces a discriminatory effect, with a candidate notice obligation. Colorado CAIA (SB 24-205, originally effective February 2026, amended in 2025 with portions delayed into 2026) imposes deployer obligations on high-risk AI systems including impact assessments and consumer notice. EU AI Act Annex III lists hiring systems as high-risk, triggering conformity assessments, transparency obligations, and post-market monitoring. Each of those laws cares about the same data shape: a per-decision audit trail.

Does a human in the loop actually count as compliance, or is it just a slogan?

It only counts as compliance if it is a row in a table. Most ATS marketing calls any reviewer presence in the workflow a human in the loop. That is not what the laws require. NYC Local Law 144 distinguishes between a tool that makes a decision and a tool that substantially assists a decision; the audit obligation tracks both. The Colorado CAIA deployer rules require documented human review of consequential decisions. The EU AI Act distinguishes human oversight from human approval and requires that the human reviewer have the information and authority to override. The data structure that satisfies all three is the same: each AI-produced claim and each AI-produced score is a row, the row carries a column for human override (with timestamp and reviewer ID), and that override is the row that ships, not the AI's first answer. If the override is not a logged row, the human in the loop is a slogan.

What does a regulator request for one candidate actually look like?

A candidate files a complaint with the Equal Employment Opportunity Commission, the New York City Commission on Human Rights, or an EU national authority. The complaint says the candidate believes they were rejected by an automated tool. The ATS gets a request, scoped to that candidate. The request asks for: the AI tool name and version that scored the candidate, the specific score and the data the score was computed from, the human review record (who, when, what was changed), the candidate notice that an AEDT was used, the bias audit summary, and any retention or deletion records. A perimeter-only ATS answers none of those questions natively. A claim-ledger ATS answers all of them with one query against the candidate ID.

How does 10xats handle the per-decision audit stack?

Match Rating extracts 5 to 15 testable claims from each job description and the org's hiring criteria. Each row carries the claim text, a visible numeric weight, a must-have / nice-to-have / red flag classification, the resume span the evidence came from, and any recruiter override. The recruiter approval queue is the human override surface; every approve, reject, or edit on a draft is a logged row. The result is a per-candidate audit trail that survives the request shape above. The product is in development; the data shape is what we are building. The privacy policy already contractually commits to no AI training on customer data, the terms commit to a Human Oversight Required posture, and the published voice keeps the approval queue at the center.

Does the perimeter stack stop mattering?

No. It is necessary, just not sufficient. An ATS that ships the per-decision audit stack but skimps on encryption, RBAC, or SOC 2 will fail the procurement security review at any 250-person tech company before the AI compliance question even comes up. The order of operations for a 2026 buyer is: perimeter stack as a gate, per-decision audit stack as the differentiator. A vendor that has both is in the consideration set. A vendor that has only one is not. Most vendors today have only the perimeter stack, which is why the per-decision audit stack is where the buying conversation actually lands.

Can I get an ATS to pass NYC LL144 with a separate bias audit vendor?

You can. A bolt-on bias audit run by a third party against your hiring funnel produces the report Local Law 144 requires you to post. What it does not produce is the per-candidate audit trail you need when one specific candidate complains. The two are different artifacts. The bias audit summary is a public-facing aggregate document. The per-candidate audit trail is a private, queryable record scoped to a single decision. A 2026 ATS security compliance posture needs both. The aggregate audit is the front-of-store sign; the per-candidate trail is what survives discovery.

What about candidate data retention under GDPR and CCPA when AI scoring is involved?

The retention question gets harder, not easier. Under GDPR Article 22 and the EU AI Act, a candidate has the right to request information about how an automated decision was made and the right to object. That right has to survive the deletion of the candidate's resume under your standard retention window. The pattern that holds is: keep the claim ledger row plus the recruiter override beyond the resume retention window, but keep them in a form that does not reproduce the resume content (only the claim text and the evidence span hash). That way you can answer the regulator's question about why the candidate was rejected without retaining the original resume past your stated retention period. Most traditional ATS retention models do not split the resume from the decision artifact this way.
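A sketch of that split, reusing the ClaimLedgerRow shape from earlier on this page; the purge drops the cached span text while the decision artifact survives:

```ts
// End of the resume retention window: delete the source text everywhere,
// keep the decision artifact (claim text, weight, classification, span
// hash, override) so GDPR Article 22 questions stay answerable.
function purgeResumeKeepDecision(row: ClaimLedgerRow): ClaimLedgerRow {
  const { span_text, ...evidence } = row.evidence; // drop cached resume text
  return { ...row, evidence };                     // hash and offsets remain
}
```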