
2021 saw massive growth in the demand for edge computing — driven by the pandemic, the need for more efficient business processes, and key advances in the Internet of Things, 5G and AI.

In a study published by IBM in May, for example, 94 percent of surveyed executives said their organizations will implement edge computing in the next five years.

From smart hospitals and cities to cashierless shops to self-driving cars, edge AI — the combination of edge computing and AI — is needed more than ever.

Kindly see my latest Forbes article on technology predictions for the next decade:

Thanks and have a great weekend! Chuck Brooks.


We are approaching 2022, and rather than ponder the immediate future, I want to explore what may beckon in the ecosystem of disruptive technologies a decade from now. We are in the initial stages of an era of rapid technological change that will witness regeneration of body parts, new cures for diseases, augmented reality, artificial intelligence, human-computer interfaces, autonomous vehicles, advanced robotics, flying cars, quantum computing, and connected smart cities. Exciting times may be ahead.

By 2032, it will be logical to assume that the world will be amid a digital and physical transformation beyond our expectations. It is no exaggeration to say we are on the cusp of scientific and technological advancements that will change how we live and interact.

What should we expect in the coming decade as we begin 2022? While there are many potential paradigm-changing technologies that will shape the future, let us explore three specific categories of future transformation: cognitive computing, health and medicine, and autonomous everything.

A new study claims machine learning is starting to look a lot like human cognition.

In 2019, The MIT Press Reader published a pair of interviews with Noam Chomsky and Steven Pinker, two of the world’s foremost linguistic and cognitive scientists. The conversations, like the men themselves, vary in their framing and treatment of key issues surrounding their areas of expertise. When asked about machine learning and its contributions to cognitive science, however, their opinions gather under the banner of skepticism and something approaching disappointment.

“In just about every relevant respect it is hard to see how [machine learning] makes any kind of contribution to science,” Chomsky laments, “specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed.”

While Pinker adopts a slightly softer tone, he echoes Chomsky’s lack of enthusiasm for how AI has advanced our understanding of the brain:

“Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition — mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence.”


Rather than engineering robotic solutions from scratch, some of our most impressive advances have come from copying what nature has already come up with.

New research shows how we can extend that approach to robot ‘minds’, in this case by getting a robot to learn the best route out of a maze all by itself – even down to keeping a sort-of memory of particular turns.

A team of engineers coded a Lego robot to find its way through a hexagonal labyrinth: by default it turned right at every junction, until it hit a point it had previously visited or came to a dead end, at which point it had to start again.
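The right-turn strategy described above can be sketched in a few lines of Python. Everything below is an illustrative assumption, not the Lego robot or the hexagonal labyrinth from the study: the sketch uses a square-grid maze and only shows the general pattern of preferring right turns at each step while keeping a simple memory of visited cells (the restart-on-revisit behavior of the real robot is omitted).

```python
# Hypothetical square-grid maze; '#' is a wall, 'S' the start, 'E' the exit.
MAZE = [
    "#####",
    "#S..#",
    "###.#",
    "#E..#",
    "#####",
]

DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # east, south, west, north

def find(ch):
    """Locate a character in the maze, returned as (row, col)."""
    for r, row in enumerate(MAZE):
        c = row.find(ch)
        if c != -1:
            return (r, c)

def open_cell(pos):
    r, c = pos
    return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] != "#"

def solve(max_steps=500):
    pos, heading = find("S"), 0   # start facing east
    goal = find("E")
    path = [pos]                  # simple "memory" of cells visited
    for _ in range(max_steps):
        if pos == goal:
            return path
        # Right-turn preference: try right, then straight, left, and back.
        for turn in (1, 0, -1, 2):
            d = (heading + turn) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if open_cell(nxt):
                heading, pos = d, nxt
                path.append(pos)
                break
    return None  # no route found within the step budget
```

On this toy maze, `solve()` walks from `S` to `E` by always taking the rightmost open passage, which is the same wall-following instinct the article describes.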

Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not. A new machine-learning system can generate a 3D scene from an image about 15,000 times faster than other methods.


The hunt is on for leptoquarks, particles beyond the limits of the standard model of particle physics — the best description we have so far of the physics that governs the forces of the Universe and its particles. These hypothetical particles could prove useful in explaining experimental and theoretical anomalies observed at particle accelerators such as the Large Hadron Collider (LHC) and could help to unify theories of physics beyond the standard model, if researchers could just spot them.

We’ve fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser. Our prototype copies how humans research answers to questions online – it submits search queries, follows links, and scrolls up and down web pages. It is trained to cite its sources, which makes it easier to give feedback to improve factual accuracy. We’re excited about developing more truthful AI, but challenges remain, such as coping with unfamiliar types of questions.


Language models like GPT-3 are useful for many different tasks, but have a tendency to “hallucinate” information when performing tasks requiring obscure real-world knowledge. To address this, we taught GPT-3 to use a text-based web browser. The model is provided with an open-ended question and a summary of the browser state, and must issue commands such as “Search …”, “Find in page: …” or “Quote: …”. In this way, the model collects passages from web pages, and then uses these to compose an answer.
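The command loop described above can be sketched as a simple dispatcher. Everything below is a hypothetical stub: `fake_model`, its scripted commands, and the `run_episode` controller are illustrative assumptions, not OpenAI's actual interface. Only the shape of the commands (“Search”, “Quote”, plus an answer action) mirrors what the text mentions; a real controller would also handle commands like “Find in page: …” and query a live search backend.

```python
def fake_model(state):
    """Stand-in policy: emits a fixed script of browser commands.

    A real system would call a fine-tuned language model here,
    conditioned on the question and a summary of the browser state.
    """
    script = [
        "Search: how do glaciers form",
        "Quote: Glaciers form where snow accumulates faster than it melts.",
        "Answer: Glaciers form from compacted snow that persists year over year.",
    ]
    return script[state["step"]]

def run_episode():
    """Loop: ask the model for a command, dispatch it, stop on an answer."""
    state = {"step": 0, "quotes": []}
    while True:
        command = fake_model(state)
        state["step"] += 1
        if command.startswith("Search:"):
            pass  # a real controller would fetch search results here
        elif command.startswith("Quote:"):
            # Collected passages become the citable sources for the answer.
            state["quotes"].append(command[len("Quote:"):].strip())
        elif command.startswith("Answer:"):
            answer = command[len("Answer:"):].strip()
            return answer, state["quotes"]
```

The point of the pattern is that the final answer is composed only from explicitly quoted passages, which is what makes the cited sources easy to check and give feedback on.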