Cross-party report suggests the education system must be adapted to “focus on things that machines will be less good at for longer”

Artificial neural networks — systems patterned after the arrangement and operation of neurons in the human brain — excel at tasks that require pattern recognition, but are woefully limited when it comes to carrying out instructions that require basic logic and reasoning. This is a problem for scientists working toward the creation of Artificial Intelligence (AI) systems capable of performing complex tasks with minimal human supervision.
In a step toward overcoming this hurdle, researchers at Google’s DeepMind — the company that developed the Go-playing computer program AlphaGo — announced earlier this week the creation of a neural network that can not only learn, but can also use data stored in its memory to “logically reason” and make inferences to answer questions.
DeepMind’s new system — called a Differentiable Neural Computer (DNC) — combines deep learning, wherein it can learn from examples and make sense of complex input it has never received before, with an external memory, which, as the DeepMind researchers Alexander Graves and Greg Wayne explain in a blog post, allows it to “store knowledge quickly and reason about it flexibly.”
For decades the efficient coding hypothesis has been a guiding principle in determining how neural systems can most efficiently represent their inputs. However, conclusions about whether neural circuits are performing optimally depend on assumptions about the noise sources encountered by neural signals as they are transmitted. Here, we provide a coherent picture of how optimal encoding strategies depend on noise strength, type, location, and correlations. Our results reveal that nonlinearities that are efficient if noise enters the circuit in one location may be inefficient if noise actually enters in a different location. This offers new explanations for why different sensory circuits, or even a given circuit under different environmental conditions, might have different encoding properties.
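The abstract's central point, that the same nonlinearity can be efficient or inefficient depending on where noise enters the circuit, can be illustrated with a toy numerical sketch. This is not the paper's actual model; the saturating `tanh` nonlinearity and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A saturating nonlinearity, standing in for a neural response function
    return np.tanh(x)

s = 2.0                              # strong stimulus drives f into saturation
noise = rng.normal(0.0, 0.2, 10_000)  # identical noise samples in both cases

# Noise injected BEFORE the nonlinearity is squashed by saturation...
var_upstream = np.var(f(s + noise))

# ...while noise injected AFTER the nonlinearity passes through at full strength.
var_downstream = np.var(f(s) + noise)

print(var_upstream < var_downstream)  # True: same noise, very different effect
```

The same saturating response that suppresses upstream noise does nothing against downstream noise, which is why conclusions about optimal encoding hinge on assumptions about the noise's point of entry.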
Citation: Brinkman BAW, Weber AI, Rieke F, Shea-Brown E (2016) How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits? PLoS Comput Biol 12(10): e1005150. doi:10.1371/journal.pcbi.1005150
The DeepMind artificial intelligence (AI) being developed by Google’s parent company, Alphabet, can now intelligently build on what’s already inside its memory, the system’s programmers have announced.
Their new hybrid system – called a Differentiable Neural Computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.
What the DNC is doing is effectively combining external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, where a massive number of interconnected nodes work dynamically to simulate a brain.
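The mechanism that makes this pairing trainable is differentiable, content-based addressing: instead of fetching one memory row by index, the network reads a soft, similarity-weighted blend of all rows, so gradients can flow through the read. Below is a minimal sketch of that idea; the function names, shapes, and `sharpness` parameter are illustrative assumptions, not DeepMind's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_read(memory, key, sharpness=10.0):
    """Read from external memory by cosine similarity to a query key.

    memory: (rows, width) external memory matrix
    key:    (width,) query vector emitted by the controller network
    Returns a weighted blend of memory rows (fully differentiable).
    """
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms           # cosine similarity per row
    weights = softmax(sharpness * similarity)   # soft attention over rows
    return weights @ memory                     # blended read vector

# Toy memory holding three stored "facts" (one per row)
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

# A query key close to row 1 retrieves (almost entirely) row 1
read = content_read(memory, np.array([0.1, 0.9, 0.0]))
print(int(np.argmax(read)))  # prints 1
```

Because the read is a smooth function of the key, the controller can learn by gradient descent what to store and how to query it, which is what lets the DNC "reason about" its memory rather than merely hold it.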
The American Dream is ending, and it's automated software and hardware technology that's ending it.
Now that machines can diagnose cancer, trade stocks, and write symphonies, they're no longer just making humans more efficient, as they have in the past; they're replacing them entirely and wrecking the economy along the way.