
Recruiters Are Going Analog to Fight the AI Application Overload

It’s not uncommon for tech roles to now receive hundreds or thousands of applicants. Round after round of layoffs since late 2022 have sent a mass of skilled tech workers job hunting, and the wide adoption of generative AI has also upended the recruitment process, allowing people to bulk apply to roles. All of those eager for work are hitting a wall: overwhelmed recruiters and hiring managers.

WIRED spoke with seven recruiters and hiring managers across tech and other industries, who expressed trepidation about the new tech. For now, much is still unknown about how and why AI makes the choices it does, and it has a history of making biased decisions. Before embracing it, they want to understand why the AI decides the way it does, and to have more room for nuance: not all qualified applicants are going to fit a role perfectly, one recruiter tells WIRED.

Recruiters say they are met with droves of résumés sent through tools like LinkedIn’s Easy Apply feature, which allows people to apply for jobs quickly within the site’s platform. Then there are third-party tools to write résumés or cover letters, and there’s generative AI built into tools on sites of major players like LinkedIn and Indeed—some for job seekers, some for recruiters. These come alongside a growing number of tools to automate the recruiting process, leaving some workers wondering if a person or bot is looking at their résumé.

Meet Nvidia CEO Jensen Huang, the man behind the $2 trillion company powering today’s artificial intelligence

AI that can predict the weather 3,000 times faster than a supercomputer, and a program that turns a text prompt into a virtual movie set: these are just two applications of AI powered by Nvidia's technology.

Jensen Huang leads Nvidia – a tech company with a skyrocketing stock and the most advanced technology for artificial intelligence.

DeepMind Researchers Propose Naturalized Execution Tuning (NExT): A Self-Training Machine Learning Method that Drastically Improves the LLM’s Ability to Reason about Code Execution

Understanding and reasoning about program execution is a critical skill for developers, often applied during tasks like debugging and code repair. Traditionally, developers simulate code execution mentally or through debugging tools to identify and fix errors. Despite their sophistication, large language models (LLMs) trained on code have struggled to grasp the deeper, semantic aspects of program execution beyond the superficial textual representation of code. This limitation often affects their performance in complex software engineering tasks, such as program repair, where understanding the execution flow of a program is essential.

Existing research in AI-driven software development includes several frameworks and models focused on enhancing code execution reasoning. Notable examples include CrossBeam, which leverages execution states in sequence-to-sequence models, and specialized neural architectures like the instruction pointer attention graph neural networks. Other approaches, such as the differentiable Forth interpreter and Scratchpad, integrate execution traces directly into model training to improve program synthesis and debugging capabilities. These methods pave the way for advanced reasoning about code, focusing on both the process and the dynamic states of execution within programming environments.

Researchers from Google DeepMind, Yale University, and the University of Illinois have proposed NExT, which introduces a novel approach by teaching LLMs to interpret and utilize execution traces, enabling more nuanced reasoning about program behavior during runtime. This method stands apart due to its incorporation of detailed runtime data directly into model training, fostering a deeper semantic understanding of code. By embedding execution traces as inline comments, NExT allows models to access crucial contexts that traditional training methods often overlook, making the generated rationales for code fixes more accurate and grounded in actual code execution.
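To make the idea of "execution traces as inline comments" concrete, here is a minimal sketch in Python. It is not DeepMind's implementation; it is an illustrative tracer (built on the standard `sys.settrace` hook) that runs a function and emits its source annotated with per-line variable states, which is the kind of runtime context NExT exposes to the model. The function names (`trace_annotate`, `buggy_mean`) and the exact comment format are assumptions for this example, not the paper's format.

```python
import inspect
import sys

def trace_annotate(func, *args):
    """Run `func` on `args` and return its source code annotated with
    inline comments recording local-variable states at each line, in the
    spirit of NExT's trace-augmented code representation (simplified)."""
    source_lines, start = inspect.getsourcelines(func)
    states = {}  # line number -> latest locals snapshot seen at that line

    def tracer(frame, event, arg):
        # The 'line' event fires just before a line executes, so the
        # snapshot reflects state on entry to that line.
        if frame.f_code is func.__code__ and event == "line":
            states[frame.f_lineno] = dict(frame.f_locals)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)

    annotated = []
    for offset, raw in enumerate(source_lines):
        lineno = start + offset
        line = raw.rstrip("\n")
        if states.get(lineno):
            vals = ", ".join(f"{k}={v!r}" for k, v in states[lineno].items())
            line += f"  # trace: {vals}"
        annotated.append(line)
    return "\n".join(annotated)

def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)  # bug: off-by-one denominator

print(trace_annotate(buggy_mean, [2, 4, 6]))
```

Fed a trace like this alongside the buggy source, a model can ground its repair rationale in observed runtime values (e.g., seeing that `total` reaches 12 but the final division uses the wrong denominator) rather than reasoning from the code text alone.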

ETH Zurich’s wheeled-legged robot masters urban terrain

ETH Zurich researchers have developed a locomotion controller that enables wheeled-legged robots to autonomously navigate various urban environments.

The robot was equipped with sophisticated navigational abilities thanks to a combination of machine learning algorithms. It was tested in the cities of Seville, Spain, and Zurich, Switzerland.

With little assistance from humans, the team’s ANYmal wheeled-legged robot accomplished autonomous operations in urban settings at the kilometer scale.
