
Do you agree?


Elon Musk may be a strong proponent of all things tech. But he’s far from positive on its implications for the jobs market.

In fact, the Tesla CEO says one of tech’s great developments — artificial intelligence — could spell the end of many jobs altogether.

“AI will make jobs kind of pointless,” Musk said Thursday, speaking alongside Alibaba’s founder Jack Ma at the World Artificial Intelligence Conference in Shanghai.

An AI algorithm is capable of automatically generating realistic-looking images from just small fragments of pixels.

Why it matters: The achievement is the latest evidence that AI is increasingly able to learn from and copy the real world in ways that may eventually allow algorithms to create fictional images that are indistinguishable from reality.

What’s new: In a paper presented at this week’s International Conference on Machine Learning, researchers from OpenAI showed they could train the organization’s GPT-2 algorithm on images.
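The reported approach treats an image’s pixels like a sequence of tokens. As a rough, hypothetical sketch of that idea (not OpenAI’s actual iGPT code; the tiny Transformer, 8-bit pixel vocabulary, and random stand-in images below are illustrative assumptions), a PyTorch model can be trained autoregressively on flattened pixel sequences much as a language model is trained on text:

```python
# Hypothetical sketch of the idea, not OpenAI's iGPT code: flatten an image
# into a 1-D sequence of discrete pixel values and train a small autoregressive
# Transformer to predict each pixel from the ones before it. Model size,
# vocabulary, and the random stand-in "images" are toy placeholders.
import torch
import torch.nn as nn

VOCAB = 256          # one token per 8-bit grayscale intensity
SEQ_LEN = 16 * 16    # a 16x16 image flattened to 256 pixels

class PixelTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, x):                       # x: (batch, seq) of pixel tokens
        positions = torch.arange(x.size(1), device=x.device)
        h = self.tok(x) + self.pos(positions)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.blocks(h, mask=causal)         # each pixel only sees earlier pixels
        return self.head(h)                     # logits over the next pixel value

model = PixelTransformer()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
images = torch.randint(0, VOCAB, (8, SEQ_LEN))  # stand-in for a real image batch

opt.zero_grad()
logits = model(images[:, :-1])                  # predict pixel t from pixels < t
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                    images[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(f"toy training step, loss = {loss.item():.3f}")
```

Generation then works the same way as text generation: sample one pixel at a time and feed each prediction back in as context.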

In order to see and then grasp objects, robots typically utilize depth-sensing cameras like the Microsoft Kinect. And while such cameras may be thwarted by transparent or shiny objects, scientists at Carnegie Mellon University have developed a work-around.

Depth-sensing cameras function by shining infrared laser beams onto an object, then measuring how long the light takes to reflect off the contours of that object and return to sensors on the camera.

While this system works well enough on relatively dull opaque objects, it has problems with transparent items that much of the light passes through, or shiny objects that scatter the reflected light. That’s where the Carnegie Mellon system comes in, using a color optical camera that also functions as a depth-sensing camera.
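As a back-of-the-envelope illustration of the time-of-flight principle described above (not any specific camera’s firmware), the measured round-trip time converts to depth by halving the distance light covers in that time:

```python
# Illustrative only, not any camera's actual firmware: converting a measured
# time-of-flight into depth. The light travels to the object and back, so the
# one-way distance is c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Return the object's distance in metres for a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection returning after ~6.67 nanoseconds puts the surface about 1 metre away.
print(f"{depth_from_round_trip(6.67e-9):.3f} m")
```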

The snake bites its tail

Google AI can independently discover AI methods, then optimize them. It evolves algorithms from scratch, using only basic mathematical operations, rediscovering fundamental ML techniques and showing the potential to discover novel algorithms.

AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. arXiv: https://arxiv.org/abs/2003.


Machine learning (ML) has seen tremendous successes recently, which were made possible by ML algorithms like deep neural networks that were discovered through years of expert research. The difficulty involved in this research fueled AutoML, a field that aims to automate the design of ML algorithms. So far, AutoML has focused on constructing solutions by combining sophisticated hand-designed components. A typical example is neural architecture search, a subfield in which one builds neural networks automatically out of complex layers (e.g., convolutions, batch-norm, and dropout), and which has been the topic of much research.
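AutoML-Zero drops those hand-designed components and evolves entire algorithms out of basic mathematical operations. The snippet below is a heavily simplified, hypothetical sketch in that spirit rather than Google’s implementation: candidate programs are short sequences of add/subtract/multiply instructions over a few registers, and a regularized-evolution-style loop keeps whichever mutants best fit a toy regression target (the operation set, program length, register layout, and target function are all illustrative choices):

```python
# Heavily simplified, hypothetical sketch in the spirit of AutoML-Zero, not
# Google's implementation: evolve tiny programs built only from basic math
# operations and keep the variants that best fit a toy regression task.
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
N_REGS = 4      # r0 holds the input x; r3 is read out as the prediction
PROG_LEN = 5    # number of instructions per candidate program

def random_instruction():
    # An instruction is (operation, source register 1, source register 2, destination).
    return (random.choice(list(OPS)), random.randrange(N_REGS),
            random.randrange(N_REGS), random.randrange(N_REGS))

def run(program, x):
    regs = [x, 1.0, 2.0, 0.0]                   # fixed register initialisation
    for op, a, b, dst in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[3]

def fitness(program, samples):
    # Lower is better: squared error against the toy target 3x^2 + 1.
    return sum((run(program, x) - (3 * x * x + 1)) ** 2 for x in samples)

def mutate(program):
    child = list(program)
    child[random.randrange(PROG_LEN)] = random_instruction()
    return child

samples = [i / 4 for i in range(-8, 9)]
population = [[random_instruction() for _ in range(PROG_LEN)] for _ in range(64)]

for _ in range(300):                            # regularised-evolution-style loop
    tournament = random.sample(population, 8)
    parent = min(tournament, key=lambda p: fitness(p, samples))
    population.append(mutate(parent))           # add the mutated child...
    population.pop(0)                           # ...and retire the oldest individual

best = min(population, key=lambda p: fitness(p, samples))
print("best squared error:", fitness(best, samples))
print("best program:", best)
```

The real system searches far larger spaces, evolves separate setup/predict/learn functions, and evaluates candidates on actual ML tasks rather than a fixed polynomial.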

Do you agree with these predictions?


The first few months of 2020 have radically reshaped the way we work and how the world gets things done. While the wide use of robotaxis or self-driving freight trucks isn’t yet in place, the Covid-19 pandemic has hastened the introduction of artificial intelligence across all industries. Whether through outbreak tracing or contactless customer pay interactions, the impact has been immediate, but it also provides a window into what’s to come. The second annual Forbes AI 50, which highlights the most promising U.S.-based artificial intelligence companies, features a group of founders who are already pondering what their space will look like in the future, though all agree that Covid-19 has permanently accelerated or altered the spread of AI.

“We have seen two years of digital transformation in the course of the last two months,” Abnormal Security CEO Evan Reiser told Forbes in May. As more parts of a company are forced to move online, Reiser expects to see AI being put to use to help businesses analyze the newly available data or to increase efficiency.

With artificial intelligence becoming ubiquitous in our daily lives, DeepMap CEO James Wu believes people will abandon the common misconception that AI is a threat to humanity. “We will see a shift in public sentiment from ‘AI is dangerous’ to ‘AI makes the world safer,’” he says. “AI will become associated with safety while human contact will become associated with danger.”

Google’s Arts and Culture vertical has been known to release fun apps and tools to help people engage with art and history. In 2018, it launched a feature to let you find your fine art doppelganger by taking a selfie, and more recently it added ways for you to apply filters to your photos to take on the style of masters like Van Gogh or Da Vinci. Now, the company is launching a web-based AI tool to let users interact with ancient Egyptian hieroglyphs and also help researchers decode the symbols with machine learning. It’s called Fabricius, named after the “father of epigraphy, the study of ancient inscriptions,” according to Google, and will let you send roughly translated messages in hieroglyphs to your friends.

Fabricius has three sections: Learn, Play and Work. In the first part, you go through a quick six-stage course that introduces you to the history and study of hieroglyphs. There are activities here that include tracing and drawing a symbol, with machine learning analyzing your drawings to see how accurate you were. For example, my drawing of an Ankh symbol after having seen it for five seconds was determined to be 100 percent correct, while my attempt at a sceptre was deemed 98 percent accurate.
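Google hasn’t said exactly how Fabricius scores a trace, so the snippet below is purely hypothetical: it swaps in a simple pixel-overlap (intersection-over-union) heuristic in place of whatever learned model the tool actually uses, just to show how a drawing could be graded against a reference glyph:

```python
# Purely hypothetical scoring heuristic; Google has not published how Fabricius
# grades a trace. Here a drawing is compared to a reference glyph by the
# intersection-over-union of their inked pixels.
import numpy as np

def tracing_score(user_drawing: np.ndarray, reference: np.ndarray) -> float:
    """Return a 0-100 score for how closely a binary drawing matches the reference."""
    user_ink = user_drawing > 0
    ref_ink = reference > 0
    overlap = np.logical_and(user_ink, ref_ink).sum()
    union = np.logical_or(user_ink, ref_ink).sum()
    return 100.0 * overlap / union if union else 0.0

# Toy 5x5 "glyphs": a perfect trace scores 100, a shifted one scores lower.
reference = np.array([[0, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 1, 0, 0],
                      [0, 0, 1, 0, 0],
                      [0, 1, 1, 1, 0]])
print(tracing_score(reference, reference))                       # 100.0
print(tracing_score(np.roll(reference, 1, axis=1), reference))   # lower
```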