
Zhou Yi was terrible at math. He risked never getting into college. Then a company called Squirrel AI came to his middle school in Hangzhou, China, promising personalized tutoring. He had tried tutoring services before, but this one was different: instead of a human teacher, an AI algorithm would curate his lessons. The 13-year-old decided to give it a try. By the end of the semester, his test scores had risen from 50% to 62.5%. Two years later, he scored an 85% on his final middle school exam.
“I used to think math was terrifying,” he says. “But through tutoring, I realized it really isn’t that hard. It helped me take the first step down a different path.”
High-quality data is the fuel that powers AI algorithms. Without a continual flow of labeled data, development bottlenecks form, model performance slowly degrades, and risk accumulates in the system.
It’s why labeled data is so critical for companies like Zoox, Cruise and Waymo, which use it to train machine learning models to develop and deploy autonomous vehicles. That need is what led to the creation of Scale AI, a startup that uses software and people to process and label image, lidar and map data for companies building machine learning algorithms. Companies working on autonomous vehicle technology make up a large swath of Scale’s customer base, although its platform is also used by Airbnb, Pinterest and OpenAI, among others.
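The article doesn't show what Scale's labeled output looks like, but the idea of keeping low-quality labels out of a training pipeline can be sketched with a hypothetical annotation record and a simple quality gate. The field names and thresholds below are illustrative assumptions, not Scale AI's actual schema:

```python
# Hypothetical example: what a labeled-data record for autonomous-vehicle
# perception might look like, and a simple quality gate applied before the
# data reaches model training. Field names are illustrative, not Scale's.

def validate_annotation(record):
    """Reject records that would degrade a training set:
    missing fields, empty labels, or degenerate bounding boxes."""
    required = {"image_id", "label", "bbox"}
    if not required <= record.keys():
        return False
    x_min, y_min, x_max, y_max = record["bbox"]
    return x_max > x_min and y_max > y_min and bool(record["label"])

annotations = [
    {"image_id": "frame_0001", "label": "pedestrian", "bbox": (34, 50, 60, 120)},
    {"image_id": "frame_0002", "label": "", "bbox": (10, 10, 40, 40)},         # unlabeled
    {"image_id": "frame_0003", "label": "cyclist", "bbox": (80, 80, 80, 140)},  # zero-width box
]

clean = [a for a in annotations if validate_annotation(a)]
print(len(clean))  # 1
```

This kind of filtering is one reason a steady flow of fresh, well-labeled data matters: the fraction of records that survive validation directly limits how much usable training data reaches the model.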
The COVID-19 pandemic has slowed, or even halted, that flow of data as AV companies suspended testing on public roads — the means of collecting billions of images. Scale is hoping to turn the tap back on, and for free.
Any time you log on to Twitter and look at a popular post, you’re likely to find bot accounts liking or commenting on it. Click through and you can see they’ve tweeted many times, often in a short time span. Sometimes their posts are selling junk or spreading digital viruses. Other accounts, especially the bots that post garbled vitriol in response to particular news articles or official statements, are entirely political.
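The bursty posting pattern described above ("tweeted many times, often in a short time span") can be checked with a simple sliding-window count. This is a crude illustrative heuristic with arbitrary thresholds, not a real bot-detection criterion:

```python
# Crude heuristic for the bursty posting pattern the article describes:
# flag an account if any 5-minute window contains more than 10 posts.
# The window size and threshold are arbitrary assumptions for illustration.
from datetime import datetime, timedelta

def looks_bursty(timestamps, max_posts=10, window=timedelta(minutes=5)):
    """Return True if any sliding window of length `window`
    contains more than `max_posts` posts."""
    ts = sorted(timestamps)
    start = 0
    for end in range(len(ts)):
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False

base = datetime(2020, 1, 1)
burst = [base + timedelta(seconds=5 * i) for i in range(20)]  # 20 posts in ~95 seconds
human = [base + timedelta(hours=i) for i in range(20)]        # 20 posts spread over hours
print(looks_bursty(burst), looks_bursty(human))  # True False
```

Real detection systems weigh many more signals than posting rate, but as the next paragraphs argue, even the real bots are usually far less sophisticated than people assume.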
It’s easy to assume this entire phenomenon is powered by advanced computer science. Indeed, I’ve talked to many people who think algorithms driven by machine learning or artificial intelligence are giving political bots the ability to learn from their surroundings and interact with people in a sophisticated way.
For events in which researchers now believe political bots and disinformation played a key role (the Brexit referendum, the 2016 Trump-Clinton contest, the Crimea crisis), there is a widespread belief that smart AI tools allowed computers to pose as humans and helped manipulate the public conversation.
In a paper published last week in Nature, researchers from Hong Kong University of Science and Technology devised a way to build photosensors directly into a hemispherical artificial retina. This enabled them to create a device that can mimic the wide field of view, responsiveness, and resolution of the human eye.
“The structural mimicry of Gu and colleagues’ artificial eye is certainly impressive, but what makes it truly stand out from previously reported devices is that many of its sensory capabilities compare favorably with those of its natural counterpart,” writes Hongrui Jiang, an engineer at the University of Wisconsin-Madison, in a perspective in Nature.
Key to the breakthrough was an ingenious way of implanting photosensors into a dome-shaped artificial retina. The team created a hemisphere of aluminum oxide peppered with densely packed nanoscale pores. They then used vapor deposition to grow nanowires inside these pores, made from perovskite, a type of photosensitive compound used in solar cells.