TLDR – Hiring engineers in 2026 and beyond
- Build hiring to filter at scale (volume is normal) with gates, not more sourcing.
- Add lightweight authenticity checks early to reduce remote fraud.
- Assume AI interview cheating and test for understanding + repeatability, not answers.
- Replace vibes with structured interviews + scorecards (anchored, consistent scoring).
- Use small, job-relevant work samples instead of puzzles/trivia, scored with a rubric to stay humane + hard to game.
- Use a simple system test to surface fundamentals, tradeoffs, and integrity fast.
- Publish an AI-use policy so candidates and interviewers share the same expectations.
- Treat vibe coding as prototype-only unless backed by tests, review, and ownership.
- Mentor regularly to prevent the junior gap from becoming a senior shortage.
Here be monsters: The new remote hiring reality
You open your ATS on a Tuesday morning. It’s not a pipeline so much as a flood. Where you used to have problems getting enough candidates to apply, now you have the reverse problem: 5,000+ applications to sift through, in order to find the few truly qualified candidates for the role.
You scroll and skim and start forming opinions off patterns and vibes. But what if half of these resumes are AI-generated? What if the candidate’s work isn’t actually theirs? In fact, this might sound absurd, but what if the person you interview isn’t the person who shows up to work on Day 1?
It happens. Software engineering veteran John Szeder put it bluntly when we talked:
“Fully remote? ‘Here be monsters.’ That’s what hiring looks like today.” — John Szeder
I’ve personally watched modern hiring drift into two equally bad extremes:
- The trust fall process: Quick chats and gut checks, before shipping an offer. (As a prospective employee, I love this — but it’s probably not the best idea for a hiring manager.)
- The hazing gauntlet process: Endless rounds of interviews, puzzle theater, 8-hour “skill assessments,” and stress tests that punish healthy humans. (As a prospective employee, I hate this — and I have to assume it’s burdensome for the hiring team, as well.)
John highlighted that neither one works for engineering hiring in 2026. One gets you burned and the other makes great candidates walk.
The nightmare scenario you can’t ignore anymore
Hiring the wrong person for the job is not a minor inconvenience. It can burn months of roadmap, drain team morale, and erode trust in the entire hiring loop. And, John notes, in a post-LLM, remote-first market, there are more threats, too:
- Remote identity can be faked.
- Real-time assistance can be hidden.
- Smooth talk can cover weak fundamentals.
- Vibe-coded demos can impress while hiding inadequacies.
When I asked John about vibe coding, he gave a great analogy:
Building fast with AI can feel like “trying to build a skyscraper out of drywall.” — John Szeder
That describes hiring, too. Your hiring process might have been effective at one point, but if you don’t fully understand the remote-first, post-LLM climate hiring now takes place in, and how to adapt to it, it can all come crumbling down like drywall.
Things have changed and hiring teams must adapt
If you’re a CTO, VP of Engineering, Engineering Director, Engineering Manager, tech lead, or a recruiter embedded with engineering teams — you can’t keep doing things the same way you have been.
Especially if any of these feel familiar:
- You’re seeing extreme applicant volume.
- You’re worried about AI interview cheating.
- You’ve felt the risk of remote hiring fraud.
- You’ve watched vibes beat standards.
- You’re trying to prevent the junior gap.
You’re not imagining it: the environment is different now. We all need to adapt.
The state of engineering in 2026: Speed is up, risk is up
Software feels faster right now. But, in some ways, it’s also thinner. Remember what John said about AI and drywall? The fact is that AI makes output feel cheap. However, delivery is still expensive.
In controlled settings, AI coding tools can make developers finish tasks much faster. One GitHub Copilot experiment found the Copilot group completed a coding task 55.8% faster than the control group.
John offers anecdotal evidence as well: a personal project that would normally have taken three months and $5,000 took him just 10 hours with vibe coding.
So yes — speed gains are real. But code produced is not the same thing as software delivered. At least, it shouldn’t be. John himself notes that while his 10-hour vibe-coding project was solid, he also knows where the “skeletons are buried” in it.
Research from DORA paints the same picture. In an analysis, they found that AI adoption correlates with improvements in things like documentation quality, code quality, and code review speed. However, the same analysis also reports AI adoption is associated with worse delivery performance — with an estimated 7.2% reduction in delivery stability and 1.5% reduction in delivery output for every 25% increase in AI adoption.
That’s a strong warning signal. We can generate more code at faster speed, but if we’re not careful, the cost might be more things breaking.
Vibe coding risks are real (but predictable)
When AI gives you a clean-looking answer instantly, it nudges teams toward a dangerous lie: “If it runs, it’s ready.”
That’s how drywall becomes load-bearing. Because production software isn’t just about whether something works. It’s about whether it’s robust, has redundancies, can handle edge cases, and is secure.
Security research backs this up. A peer-reviewed study of an AI code assistant found participants with access to the assistant wrote significantly less secure code, while also feeling more confident that it was secure.
That combo — more speed + more confidence + less safety — is exactly how teams ship risk without realizing it.
The other productivity trap: Most work isn’t coding
Even if AI makes coding faster, coding is not the whole job. Atlassian’s 2025 DevEx research puts it plainly: developers only spend 16% of their time coding, and many still lose major time to non-coding friction and organizational inefficiencies.
So if your team is faster but still stuck, it’s not because AI failed. It’s because the bottleneck moved.
Practical takeaways for using AI in engineering
AI can be a controlled advantage. But only if you keep engineering discipline strong:
- Smaller batch sizes.
- Strong tests.
- Real code review.
- Clear ownership.
- Better docs.
- Slower thinking where it matters.
That’s what keeps quick code from becoming crumbling drywall.
Hiring engineers in 2026: Signal collapse, fraud, and AI cheating
When John said “here be monsters,” he wasn’t being dramatic. He was naming the new baseline. In 2026, engineering hiring isn’t just hard. It’s noisy, gameable, and happening at scale.
“You get 5,000 resumes, and most of them you don’t want to spend any time looking at.” — John Szeder
That quote is the whole problem in one sentence.
Signal collapsed because the cost of applying hit zero
A resume used to be a little expensive. It took time, effort, specificity, and even a little risk.
Now the economics have flipped:
- AI makes it easy to tailor a resume in minutes.
- Auto-apply tools blast out applications at scale.
- Remote roles attract global volume instantly.
So you get more candidates. But you also get less truth per candidate. More screening isn’t necessarily the right fix either. That just burns out your team.
Remote hiring fraud moved from rare to operational
John described the nightmare scenario every hiring leader fears:
“The person who interviewed was not the same person who showed up to work.” — John Szeder
As crazy as it sounds, this is a growing problem.
Gartner reported that in a survey of 3,000 candidates, 6% admitted to interview fraud (posing as someone else, or having someone else pose for them). Gartner also predicts that by 2028, one in four candidate profiles worldwide will be fake.
That prediction matches what a lot of security and identity folks are warning about, too — especially as deepfake tools get easier.
AI interview cheating is a real category now
Engineering teams can no longer assess coding capability alone.
They must assess:
- can a candidate code with tools?
- can a candidate explain what they did?
- can a candidate replicate their work consistently?
The Week reported cases of deepfake-style deception in interviews. It also notes the rise of tools that give real-time answers during technical interviews — something John also highlighted in our conversation.
Even outside tech, the pattern is clear. The Guardian reported universities dealing with deepfake applicants in automated interview flows, including attempts to manipulate identity signals.
Same playbook. Different target.
“Camera off” isn’t a preference, it’s a risk signal
John said it plainly during our conversation:
“Camera off? For me, that’s a hard no.” — John Szeder
That’s because remote work removes physical anchors:
- you can’t verify presence naturally
- you can’t sanity-check identity cues
- you can’t rely on engagement to deter fraud
So teams need lightweight, effective controls. John offers a couple of pointers here:
- If a candidate won’t do an on-camera interview, you should walk away.
- Screenshot on-camera interviews so you can confirm the person who shows up to work is the person you interviewed.
That’s not paranoia in an age of deepfakes and remote work. It’s a sensible approach to our modern reality.
Incentives vs. morals in engineering hiring
Most candidates are still honest. I believe that. But when applying gets cheap, and cheating tools get common, a small percentage can create huge damage.
This is why John is so adamant about building hiring systems that protect individuals and companies in the current job climate. And he notes that you can (and should) do that without punishing good people.
The 2026 hiring North Star: A hard-to-game system that still feels humane
If you’re hiring engineers right now, you’re probably carrying two conflicting truths at once.
On one hand, you want to be fair. You want to treat candidates like humans. You want to believe good intent is the norm. On the other hand, you’ve felt the market shift. Volume has exploded. AI is powerful. And remote work makes identity verification more difficult.
So here’s the north star engineering leaders need to align on for 2026 and beyond: Build a hiring system that’s hard to game, easy to run, and still respectful to honest people.
It shouldn’t be harder or meaner. Nor does it need more rounds. It really just needs to be reliable and consistent from the bottom up.
Protect your signal and you’ll catch bad actors
Across our conversation, John flagged a few things that usually signal a broken hiring system:
- rewarding performance over competence
- confusing confidence with capability
- turning unstructured conversation into “evidence”
- creating loopholes big enough to drive a proxy candidate through
If your system can be gamed, it will be. Not by everyone. But by enough people to wreck your week.
But every anti-fraud step you add has a cost. It can create stress and feel invasive. Plus, it can punish the wrong people.
Defensible hiring means knowing specifics
John notes that hiring should be sensible and everyone involved should be able to answer specific questions at the end of the day.
If a job candidate asks, “Why did I fail?” you should be able to answer.
If your CEO asks, “Why did we hire this person?” you should be able to answer.
If your team asks, “Is this fair?” you should be able to answer.
Defensible hiring has receipts:
- clear criteria (what “good” means for the role)
- consistent steps (same process for comparable candidates)
- anchored scoring (what a 1 vs 3 vs 5 actually looks like)
- role relevance (with testing to predict work output)
- documentation (so the system survives turnover)
That’s what separates liking a candidate from actually hiring well.
Humane hiring is precise and accounts for diversity in human behavior
A humane process isn’t one where everybody passes.
A humane process is one where:
- candidates understand what you’re evaluating
- the interview doesn’t reward performative behavior
- the work sample respects time and boundaries
- you don’t treat people like suspects by default
- the system reduces bias by reducing randomness
However, when John looks at the hiring culture we’ve normalized, he sees hiring teams inadvertently filtering out some of the best candidates for the job because they rely on vibes and apply bad controls.
“A lot of people want to give you a real-time coding project while you’re on camera. But you’ll get a false negative. You’re going to get rid of that candidate because they’re going to choke on the spot and you’re going to lose an opportunity at a very good programmer. The vast majority of engineers are deeply introverted, just as a heads up. Trying to solve a problem while somebody is staring at you, that’s very hard. Not very many people do that.” — John Szeder
A quick note about introverts in the workplace
It’s worth noting that introverts excel at high-cognition, high-precision work in low-distraction conditions — a real edge in software and product engineering where sustained focus drives better debugging, systems thinking, testing rigor, and careful design decisions.
Classic brainstorming research shows that interactive groups often underperform individuals in both idea quantity and quality due to process losses like production blocking and evaluation apprehension, which supports engineering practices like async “write-first” RFCs and solo prototyping before group critique.
Optimizing for extroversion through always-on collaboration can systematically undervalue these strengths — solitude, deep thought, and measured contribution — even though they’re foundational to innovation and sound decisions.
Modern office design can make this mismatch worse: when organizations shift to open-plan layouts, empirical field research found face-to-face interaction dropped by about 70% while electronic communication increased, undermining the premise that constant proximity automatically improves collaboration.
Finally, in leadership contexts common to engineering (where proactive, self-directed contributors matter), research finds the “extroverted leadership advantage” can reverse when employees are proactive, because extroverted leaders may be less receptive — suggesting quieter leaders often outperform by creating space for initiative and letting the best ideas surface.
Ironically, optimizing for extroversion — like so many hiring teams do with real-time task assessments — can inadvertently help the exact candidates you’re often trying to filter out. That’s because smooth talkers can coast on good vibes and performative actions. This isn’t to say you should optimize for introversion, but simply that you should have robust, clear practices and controls for hiring that account for a variety of personality types.
How to effectively hire engineers in 2026 and beyond: Start building gates
In older markets, you could wing it more often. A few interviews, a gut check, maybe a code screen. But John’s stories point to why that collapses now. With 5,000 resumes, proxy interviewing, deepfakes, AI cheating and more, everything changes.
High volume means you need filters that scale. Fraud risk means you need identity integrity. So instead of an endless funnel of interviews, John suggests gates.
A gate is a step that does three things:
- It raises signal (you learn something real).
- It reduces gamesmanship (it’s hard to fake).
- It stays humane (it doesn’t waste time or dignity).
The principles to staple to every hiring loop
These principles are what make an engineering hiring process work in 2026.
- Signal over noise: If a step doesn’t predict job performance, it’s theater.
- Consistency over vibes: Same questions. Same rubric. Same scoring anchors. Less randomness.
- Job relevance over cleverness: No riddles, trivia, or gotchas. Simulate the work.
- Auditability over memory: Write it down. Make it reviewable. Don’t rely on how you felt.
- Smart friction over security theater: Only add checks that catch real risk. Avoid “culture fit” rituals.
- Respect for time: Provide reasonable time frames. Communicate clearly. Don’t steal labor.
- Tool realism: People use AI at work. So assess how they use it — with disclosure and boundaries.
- Candidate dignity: Treat the process like a product. Your best candidates are judging you.
What this looks like in practice
At a high level, you’re moving away from one big interview experience and toward a system you can run every week without losing your mind.
That’s why the mechanism John kept circling back to is so practical:
- Gate 1: Lightweight authenticity checks
- Gate 2: A reasonable work sample + rubric
- Gate 3: A structured interview scorecard
Notice what’s not here:
- 9 rounds
- puzzle marathons
- culture fit as a vibe test
- hiring based on charisma
This is defensible hiring in practice.
John’s 3-gate engineer-hiring playbook
If you’re hiring in 2026, you don’t need a perfect hiring process. You need a process you can run every week, under pressure, and at scale.
Throughout our conversation, John kept returning to the same simple premise:
- Gate 1: Remote authenticity checks (lightweight, humane, documented)
- Gate 2: A reasonable work sample (rubric-scored, role-relevant)
- Gate 3: A structured interview scorecard (consistent, anchored, fair)
This system is designed to do three things:
- Cut remote hiring fraud risk.
- Raise signal despite 2026’s extreme applicant volume.
- Make AI a controlled advantage, not a hidden liability.
Gate 1: Remote authenticity checks (keep it small, keep it respectful)
John shared the reality that forces this gate to exist: the horror story of a new employee who showed up for work and was not the same person interviewed during screening.
What Gate 1 does well:
- Adds basic friction against impersonation.
- Sets expectations early.
- Creates a documented process you can defend.
What Gate 1 should avoid:
- Creepy surveillance vibes.
- “Gotcha” behavior.
- One-size-fits-all steps with no accommodations.
Gate 2: Reasonable work sample + rubric (signal over performance art)
John’s example of how he executes this gate is beautifully simple: He gives job candidates a small take-home test to build a simple tic-tac-toe game.
To be clear, the point isn’t the game. The point is what the work sample forces into the open:
- How they think.
- How they structure code.
- How they handle edge cases.
- How they explain tradeoffs.
- Whether the work is actually theirs.
This allows you to get high-quality signal without a five-hour gauntlet.
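To make this gate concrete, here’s a minimal sketch of the kind of submission a tic-tac-toe sample might produce — a hypothetical candidate solution in Python, not John’s actual prompt or grading code. The value is in what it lets a reviewer probe: structure, edge cases (a full board, a game still in progress), and the reasoning behind design choices.

```python
# Hypothetical sketch of a tiny tic-tac-toe core. A reviewer can probe
# its structure, edge-case handling, and the tradeoffs behind it.
from typing import Optional

Board = list[list[str]]  # 3x3 grid of "X", "O", or "" (empty)

# Precompute the 8 winning lines: 3 rows, 3 columns, 2 diagonals.
LINES = (
    [[(r, c) for c in range(3)] for r in range(3)]
    + [[(r, c) for r in range(3)] for c in range(3)]
    + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
)

def winner(board: Board) -> Optional[str]:
    """Return "X" or "O" for a completed line, "draw" if full, else None."""
    for line in LINES:
        marks = {board[r][c] for r, c in line}
        if len(marks) == 1 and "" not in marks:
            return marks.pop()  # all three cells hold the same player
    if all(cell for row in board for cell in row):
        return "draw"  # board full, no winning line
    return None  # game still in progress
```

In review, you might ask why the candidate precomputed `LINES`, how they’d extend it to an NxN board, and what tests they’d write — questions that expose real understanding even if AI drafted the first pass.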
Gate 3: Structured interview scorecard (stop rewarding vibes)
Structured interviews with scorecards force you to stop asking if you simply like a candidate and start asking if they meet the actual criteria for the job you’re trying to fill.
- Same core questions per role.
- Same scoring scale.
- Same definitions for what “good” means.
That becomes your structured interview scorecard. It’s how you cut variance across interviewers and it’s also how you can stay fairer to introverts and other personality types.
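As a sketch of what “anchored” means in practice, a scorecard can be a small, boring data structure. The competency names and anchor text below are illustrative placeholders, not John’s actual criteria — the point is that every candidate for a role gets the same scale and written evidence, not a vibe.

```python
# Illustrative structured-scorecard sketch: same competencies, same
# anchored scale, and written evidence for every candidate in a role.
from dataclasses import dataclass, field

# Anchors define what each score means BEFORE anyone interviews.
ANCHORS = {
    1: "Could not explain approach or tradeoffs, even with prompting.",
    3: "Reasonable approach; explained main tradeoffs when asked.",
    5: "Clear approach; volunteered tradeoffs, edge cases, alternatives.",
}

@dataclass
class Scorecard:
    candidate: str
    scores: dict[str, int] = field(default_factory=dict)
    notes: dict[str, str] = field(default_factory=dict)

    def rate(self, competency: str, score: int, note: str) -> None:
        """Record an anchored score plus the written evidence behind it."""
        if score not in ANCHORS:
            raise ValueError(f"Score must be one of {sorted(ANCHORS)}")
        self.scores[competency] = score
        self.notes[competency] = note  # evidence, not a feeling

    def average(self) -> float:
        return sum(self.scores.values()) / len(self.scores)
```

Forcing a note per score is the auditability piece: when someone later asks “why did this candidate fail?”, the answer is already written down.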
How the three gates work together
Each gate covers a different failure mode.
Gate 1 catches:
- identity mismatch
- proxy candidates
- obvious inconsistencies
Gate 2 catches:
- weak fundamentals hidden by charisma or AI
- AI one-shotting tasks with no understanding
- brittle thinking that won’t scale
Gate 3 catches:
- role mismatch
- teamwork gaps
- shaky decision-making
None of these gates is perfect alone. Together, they form a system that’s hard to game.
The candidate experience matters more than ever
This is where most teams accidentally self-sabotage. A strong engineer can handle standards. So design your hiring process like a product:
- Tell candidates what the steps are.
- Tell them how they’ll be scored.
- Tell them what AI use is allowed.
- Be reasonable with everything.
- Give fast feedback when you can.
That’s how you keep the best people for the job and reduce drop-off.
Concerns about inequity and exploitation
It’s important to note that camera-on requirements can feel inequitable, and take-home tests raise valid concerns about exploitation.
The answer isn’t to ignore these, but to design around them:
- give accommodations
- keep work samples small
- pay when it’s longer than a trivial exercise
- communicate the why
- document consent and boundaries
- offer alternatives when needed
A humane, secure process is possible. You don’t have to choose between trust and rigor.
The Mentorship Standard: Prevent the junior gap from becoming a system outage
When John started talking about the “Silver Tsunami,” I took note. There’s a very real concern that AI absorbing entry-level work creates a “junior gap” that eventually collapses the talent pipeline.
Essentially, if we don’t build mentorship as a standard, the entire workforce pipeline will eventually collapse: A lack of entry-level workers means no junior employees over time. And no junior employees ultimately means no seniors and executives, as they age out of the workforce (the aforementioned “Silver Tsunami”).
Yet remote work, high shipping pressure, and AI-assisted coding have all nudged teams toward the same failure mode: We’ve stopped investing in beginners because it feels too expensive.
But we will pay for it later — through churn, burnout, and a permanent talent shortage.
The “junior gap” isn’t a theory. It’s a delayed incident.
John notes that when skilled workers disappear, you don’t see the full damage in week one. You see it in year two or five:
- Seniors carry more load.
- Code review becomes triage.
- Nobody can own the system.
- You lose people who wanted to grow.
- The team stops taking real bets.
It’s a serious operational risk — and one that’s completely avoidable.
Why mentorship got harder (and why it matters more now)
The forces pushing mentorship out are real:
- Speed pressure: a focus on shipping leaves no time for teaching.
- Remote + async: learning by osmosis disappears.
- AI tools: juniors can produce faster, but may understand less.
- Volume hiring: managers get buried in interview loops.
But that’s exactly why mentorship matters more now. AI can help someone code. But it can’t automatically teach:
- judgment
- debugging instincts
- system design thinking
- tradeoffs and priorities
- why we do things certain ways
Those are learned socially, with feedback, over time.
John’s mentorship standard that we should all operationalize
When highlighting the coming Silver Tsunami, John mentioned one thing that would prevent our workforce pipeline from dropping out from underneath us: Dedicating 30 minutes each week to mentoring someone.
Here’s what that could look like, operationalized:
- Junior and entry-level individuals get an ongoing, consistent mentor
- Mentors provide a weekly 30-minute 1:1 focused on growth and blockers
- Mentors and mentees get one structured 60-minute pairing block per week
- Managers and mentors align monthly on training progress
- Everyone has a written growth plan with 3–5 skills and evidence markers
This way we can better ensure proper knowledge transfer from one generation to the next — securing our collective future in the process.
Mentorship is also a hiring advantage
John also pointed out that mentorship is a real competitive advantage now. Teams underestimate this, but when you can truthfully say:
- “We have a real mentorship standard.”
- “We invest in junior growth.”
- “We don’t throw people into the deep end.”
You attract candidates who want to build, not just cash checks. That means longer retention and stronger growth — which always provides a competitive edge.
If you’re hiring engineers in 2026, then here’s your action plan
If my conversation with John had one meta-lesson, it’s this: Hiring is now an engineering system. It needs gates, controls, and feedback loops — because the environment has changed. Remote work has scaled. Applicant volume has exploded. AI has raised both leverage and deception.
If you do one thing this week: Sign up for John’s upcoming webinar. You’ll get a complete hiring framework — and you can bring your real hiring cases to the discussion.
If you want to talk to John directly
- Email: johnszeder@gmail.com
- LinkedIn: John Szeder
FAQ: Hiring software and product engineers in 2026
Why does it feel like every role gets 5,000+ applications now?
Because applying is cheap (AI resume tailoring + auto-apply tools) and remote roles attract global volume — so you get more candidates but less truth per candidate.
What does “signal collapse” mean in practical hiring terms?
It means traditional indicators (resume polish, confident interview answers, shiny demos) are easier to manufacture — so they predict job performance less reliably.
Is remote interview fraud actually real, or just hype?
It’s real: impersonation, proxy interviewing, and profile manipulation exist — remote removes natural identity anchors, so basic verification matters more.
What’s the simplest anti-fraud step that doesn’t feel creepy?
Run a short camera-on conversation, ask a few consistency questions about their work, document the interaction, and keep it respectful and lightweight.
Why is “camera off” considered a risk signal in this framework?
Because it reduces your ability to verify presence and identity in a remote context; it’s not about preference — it’s about integrity controls.
How do we assess engineers fairly if AI tools are part of real work now?
Make AI usage explicit: allow it with disclosure and boundaries, then evaluate the candidate’s judgment, reasoning, and ability to explain tradeoffs.
What should an AI-use policy for interviews include?
What tools are allowed, what must be disclosed, what’s disallowed, and the consequence for hidden use (e.g., disqualification for non-disclosure).
Why not just add more interview rounds to increase certainty?
Because extra rounds often increase bias, burnout, and candidate drop-off. A better approach uses fewer steps that produce higher signal, backed by clear rubrics.
What’s wrong with “hiring on vibes,” especially for senior roles?
Vibes reward charisma and performance over competence, increase variance between interviewers, and make decisions hard to defend later.
What does a “structured interview scorecard” actually look like?
A set of role-specific competencies, the same core questions for each candidate, anchored scoring definitions (e.g. what a 1, 3, or 5 means), and written notes.
Why are work samples better than puzzles or whiteboard trivia?
Because they simulate real work: code structure, edge cases, communication, and decision-making — plus they’re easier to rubric-score.
What makes a work sample “humane”?
It’s a reasonable time commitment, clearly scoped, scored transparently, respects candidate time, and avoids unpaid labor disguised as an assessment.
What’s the point of the tic-tac-toe test specifically?
It provides signal. It showcases a candidate’s understanding of fundamentals, tradeoffs, testability, and clarity — as well as highlighting creative thinking and project enthusiasm.
How do we make work samples harder to “one-shot” with AI?
Require brief rationale (why this approach), request small modifications, ask about edge cases, and review the code like a real PR.
What are the “three gates” in this article’s hiring playbook?
Gate 1: authenticity checks; Gate 2: a reasonable work sample + rubric; Gate 3: structured interview scorecard.
What failure modes does each gate catch?
Gate 1: identity mismatch/proxy; Gate 2: weak fundamentals/brittle thinking; Gate 3: role fit/collaboration/decision quality.
How do we keep gates from becoming security theater?
Only add friction that catches real risk, keep steps minimal, document why they exist, and remove steps that don’t improve prediction.
How do we prevent false negatives for introverts or anxious candidates?
Avoid on-camera live coding; favor async, reasonable work samples + structured discussion so you measure skill, not performance under surveillance.
Is “vibe coding” always bad?
No. It can be great for prototypes and exploration, but dangerous when fast code gets treated as production-ready without tests, review, and ownership.
What engineering disciplines keep AI acceleration from becoming “drywall skyscrapers”?
Smaller batch sizes, strong tests, real code review, clear ownership, better docs, and slowing down where correctness matters.
Why does the article emphasize mentorship so heavily?
Because if teams stop training juniors, the talent pipeline collapses over time — today’s junior gap becomes tomorrow’s senior shortage.
What’s a simple mentorship standard a team can operationalize?
Weekly 30-minute mentoring, a structured pairing block, a written growth plan with evidence markers, and monthly manager alignment.
What should we do this week if we want to apply what’s in this article to our hiring process immediately?
Define role criteria, build a scorecard, design a 30–60 minute work sample with a rubric, and add a lightweight authenticity gate.
What’s the “north star” for hiring engineers in 2026?
A system that’s hard to game, easy to run repeatedly, and still respectful to honest people — reliable, consistent, and humane.