Open-Book Tech Interviews in the AI Era: What to Allow, What to Flag, What to Measure

Explore the rise of open-book tech interviews in the AI Era and what teams should consider to keep assessments fair and effective.

Published By

Abhishek Kaushik

Published On

Dec 2, 2025

TL;DR

Modern engineering teams work in open-book environments.
Developers use:

  • Stack Overflow

  • GitHub repos

  • Documentation

  • AI assistants like ChatGPT and Copilot

So banning tools entirely in interviews is unrealistic and can feel unfair.
But allowing unrestricted AI breaks the signal of real skill.

The goal is not to prevent tool usage.
The goal is to evaluate reasoning and ownership while the candidate uses tools.

The Problem

If interviews only test:

  • Memory

  • Syntax recall

  • Algorithm trivia

Then candidates who are highly capable in real-world engineering environments appear weaker than they are.

But if interviews allow:

  • AI-generated full solutions

  • Pre-written script reading

  • Proxy or coached guidance

Then the company risks hiring people who cannot perform independently.

To balance fairness and integrity, we must define what open-book actually means in 2025.

What to Allow

These forms of assistance match real professional workflows and do not meaningfully distort skill signals:

  • Reading official docs: models real-world engineering

  • Checking syntax or library usage: avoids trivia bias

  • Using a scratchpad or notes: encourages organized thinking

  • Asking AI to generate clarifying questions: shows thinking scaffolding

  • Using AI to brainstorm alternative approaches: reveals reasoning critique ability

The important rule:

Candidates may reference resources, but explanations must come from their own understanding.

What to Flag

These behaviors interfere with evaluating real reasoning:

  • Copy-paste of complete code blocks: hides construction ability

  • Scripted story responses to behavioral prompts: masks ownership

  • AI-generated architecture diagrams or answers: avoids tradeoff thinking

  • Rehearsed or coaching-fed reasoning: prevents authenticity signals

Red flags are not accusations.
They are prompts for deeper probing.

What to Disallow Entirely

These practices directly compromise interview integrity:

  • Live whisper coaching: identity substitution

  • Proxy interviewers: fraudulent representation

  • Deepfake face or voice masking: identity misrepresentation

  • Continuous AI answer generation: no evaluation of reasoning ability

This is where Sherlock AI is critical.
It detects:

  • Background coaching patterns

  • Identity mismatches

  • Code authorship irregularities

  • Scripted narrative cadence

Sherlock AI enforces fair open-book, not no-book.

The Open-Book Interview Format (Copy This Workflow)

Step 1: Introduce Expectations Clearly

You may use documentation or notes. You may not use AI to generate or dictate full solutions or answers. You must explain your reasoning as you work.

Step 2: Use a Micro Problem to Observe Thinking

Small enough to finish live.
Complex enough to require decomposition.
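For illustration, here is one hypothetical micro problem at roughly the right size (this example is mine, not from the original post): deduplicate a list while preserving first-seen order. It is small enough to finish live, yet it naturally invites decomposition and a tradeoff discussion about membership tracking versus ordering.

```python
def dedupe_preserving_order(items):
    """Return items with duplicates removed, keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        # The membership check is where a candidate can narrate tradeoffs:
        # set lookup is O(1), versus O(n) if they scanned `result` instead.
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

A good constraint shift for Step 3 follows directly: "What changes if the input is a stream too large to hold in memory?"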

Step 3: Ask the Constraint Shift Question

If one assumption changed, what would you do differently?

This reveals real understanding.

Step 4: Score on Reasoning, Not Output

Scorecard categories:

  • Clarity of explanation

  • Awareness of tradeoffs

  • Ability to adapt

Not:

  • Syntax correctness

  • Memorized patterns

  • Speed
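The scorecard categories above can be sketched as a weighted rubric. The category names, weights, and 1-5 scale below are illustrative assumptions, not a standard instrument; the point is simply that reasoning dimensions, not output metrics, carry all the weight.

```python
# Hypothetical reasoning-focused rubric; weights are illustrative.
REASONING_RUBRIC = {
    "clarity_of_explanation": 0.40,
    "awareness_of_tradeoffs": 0.35,
    "ability_to_adapt": 0.25,
}

def score_candidate(ratings):
    """Combine 1-5 ratings per category into a weighted score (1.0-5.0)."""
    return round(
        sum(REASONING_RUBRIC[cat] * ratings[cat] for cat in REASONING_RUBRIC), 2
    )

# Example: strong explanations, solid tradeoff awareness, good adaptability.
print(score_candidate({
    "clarity_of_explanation": 5,
    "awareness_of_tradeoffs": 4,
    "ability_to_adapt": 4,
}))  # 4.4
```

Note that syntax correctness, memorized patterns, and speed do not appear in the rubric at all, which keeps interviewers from scoring them by habit.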

What to Measure (This is the Key)

Do not measure:

  • Lines of code written

  • How fast they answer

  • Whether they remembered details

Measure:

  • How they decide

  • How they adapt

  • Whether they understand why

  • Explains alternatives: real engineering maturity

  • Narrates tradeoffs: seniority indicator

  • Adjusts solution under change: authentic understanding

  • Can explain output in plain language: true ownership

Example Scorecard Language (Copy-Paste)

Candidate demonstrated reasoning behind decisions and could explain tradeoffs clearly. Adapted the solution when constraints changed. Used external resources appropriately without outsourcing problem-solving.

If concerns arise:

Candidate relied heavily on external resources and was unable to explain their rationale or adapt the solution under changing constraints.

This keeps scoring neutral, specific, and audit-safe.

Conclusion

Open-book interviewing is not only possible.
It is the future of fair technical hiring.

The goal is not to test what candidates remember.
The goal is to test how they think, decide, and adjust.

When combined with:

  • Consistent reasoning-based prompts

  • Clear AI-use boundaries

  • Sherlock AI’s identity and authenticity verification

You get:

  • Stronger hiring accuracy

  • Faster interview cycles

  • Lower false negatives

  • Lower fraud exposure

This is hiring for the real world.

© 2025 Spottable AI Inc. All rights reserved.
