CS230 | Deep Learning
https://www.newworldai.com/cs230-deep-learning-stanford-engineering/
CS221 | Artificial Intelligence
https://www.newworldai.com/cs221-artificial-intelligence-pri…niversity/
Circa 2017
Chess isn’t an easy game, by human standards. But for an artificial intelligence powered by a formidable, almost alien mindset, the trivial diversion can be mastered in a few spare hours.
In a new paper, Google researchers detail how their latest AI evolution, AlphaZero, developed “superhuman performance” in chess: starting from nothing but the rules, it needed just four hours of self-play training before obliterating the world champion chess program, Stockfish.
In other words, all of humanity’s chess knowledge – and beyond – was absorbed and surpassed by an AI in about as long as it takes to drive from New York City to Washington, DC.
Circa 2017
Thousands of years of human knowledge has been learned and surpassed by the world’s smartest computer in just 40 days, a breakthrough hailed as one of the greatest advances ever in artificial intelligence.
Google DeepMind amazed the world last year when its AI programme AlphaGo beat world champion Lee Sedol at Go, an ancient and complex game of strategy and intuition which many believed could never be cracked by a machine.
AlphaGo was so effective because it had been programmed with millions of moves of past masters, and could predict its own chances of winning, adjusting its game-plan accordingly.
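To make that description a little more concrete, here is a minimal, hypothetical sketch of the two ingredients the article mentions: a policy learned from recorded master moves and a value estimate of the chance of winning. The network shape, layer sizes, and board encoding below are illustrative assumptions only, not DeepMind’s actual architecture.

import torch
import torch.nn as nn

# Simplified, illustrative two-headed network: a policy over moves and a
# value predicting the chance of winning. Sizes are arbitrary assumptions.
class TinyPolicyValueNet(nn.Module):
    def __init__(self, board_planes=17, board_size=19):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(board_planes, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        flat = 64 * board_size * board_size
        # Policy head: one logit for each board point, plus "pass".
        self.policy = nn.Linear(flat, board_size * board_size + 1)
        # Value head: a single number in [-1, 1], the predicted game outcome.
        self.value = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(),
                                   nn.Linear(64, 1), nn.Tanh())

    def forward(self, board):
        h = self.trunk(board).flatten(1)
        return torch.log_softmax(self.policy(h), dim=1), self.value(h)

net = TinyPolicyValueNet()
log_policy, value = net(torch.zeros(1, 17, 19, 19))
# In the approach the article describes, the policy head would be trained on
# millions of recorded master moves (supervised), while the value head learns
# to predict the winner, letting the program adjust its game-plan accordingly.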
As others have pointed out, voxel-based games have been around for a long time; a recent example is the whimsical “3D Dot Game Heroes” for PS3, which uses the low-res nature of the voxel world as a fun design element.
Voxel-based approaches have huge advantages (“infinite” detail, background details that are deformable at the pixel level, simpler simulation of particle-based phenomena like flowing water, etc.) but they’ll only win once computing power reaches an important crossover point. That point is where rendering an organic world a voxel at a time looks better than rendering zillions of polygons to approximate an organic world.
Furthermore, much of the effort that’s gone into visually simulating real-world phenomena (read the last 30 years of Siggraph conference proceedings) will mostly have to be reapplied to voxel rendering. Simply put: lighting, caustics, organic elements like human faces and hair, etc. will have to be “figured out all over again” for the new era of voxel engines. It will therefore likely take a while for voxel approaches to produce results that look as good, even once the crossover point of level of detail is reached.
I don’t mean to take anything away from the hard and impressive coding work this team has done, but if they had more academic background, they’d know that much of what they’ve “pioneered” has been studied in tremendous detail for two decades. Hanan Samet’s treatise on the subject tells you absolutely everything you need to know, and more: (http://www.amazon.com/Foundations-Multidimensional-Structure…sr=8-1) and even goes into detail about the application of these spatial data structures to other areas like machine learning. Ultimately, Samet’s book is all about the “curse of dimensionality” and how (and how much) data structures can help address it.
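For readers who haven’t run into the data structures Samet catalogues, the core idea behind most voxel engines is a sparse hierarchy that only spends memory where there is geometry. Below is a minimal, illustrative octree sketch in Python; the leaf threshold, minimum cell size, and point-based API are assumptions made for the example, not anything taken from the book or from a particular engine.

# Minimal sparse octree: empty space never allocates children, which is
# where the memory savings over a dense voxel grid come from.
class OctreeNode:
    def __init__(self, center, half_size):
        self.center = center          # (x, y, z) of the cube's center
        self.half_size = half_size    # half the cube's edge length
        self.points = []              # points stored at this leaf
        self.children = None          # 8 children once subdivided

    def _child_index(self, p):
        # Which octant does p fall into? One bit per axis: x -> 1, y -> 2, z -> 4.
        return ((p[0] > self.center[0])
                | (p[1] > self.center[1]) << 1
                | (p[2] > self.center[2]) << 2)

    def insert(self, p, max_points=8, min_half=1e-3):
        if self.children is None:
            self.points.append(p)
            # Subdivide only when the leaf overflows and is still large enough.
            if len(self.points) > max_points and self.half_size > min_half:
                self._subdivide()
            return
        self.children[self._child_index(p)].insert(p, max_points, min_half)

    def _subdivide(self):
        h = self.half_size / 2
        self.children = [
            OctreeNode((self.center[0] + (h if i & 1 else -h),
                        self.center[1] + (h if i & 2 else -h),
                        self.center[2] + (h if i & 4 else -h)), h)
            for i in range(8)
        ]
        old, self.points = self.points, []
        for q in old:
            self.children[self._child_index(q)].insert(q)

root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=1.0)
root.insert((0.25, -0.5, 0.75))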
Circa 2017: we could eventually have Euclidean geometry of infinite size that is also infinitely small, like an entire infinity computer in one byte.
Amazon’s Alexa heads for a future that looks a lot like the Starship Enterprise.
Drug companies are searching for ways to discover new blockbuster drug treatments faster, and AI is beginning to answer the call.
If robots are to help out in places like hospitals and phone repair shops, they’re going to need a light touch. And what’s lighter than not touching at all? Researchers have created a gripper that uses ultrasonics to suspend an object in midair, potentially making it suitable for the most delicate tasks.
It’s done with an array of tiny speakers that emit sound at very carefully controlled frequencies and volumes. These produce a sort of standing pressure wave that can hold an object up or, if the pressure is coming from multiple directions, hold it in place or move it around.
This kind of “acoustic levitation,” as it’s called, is not exactly new — we see it being used as a trick here and there, but so far there have been no obvious practical applications. Marcel Schuck and his team at ETH Zürich, however, show that such a portable device could easily find a place in processes where tiny objects must be very lightly held.
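A rough back-of-the-envelope sketch of the standing-wave geometry described above: assuming a typical 40 kHz ultrasonic transducer (the article does not state the device’s actual frequency), the pressure nodes that can trap an object sit a few millimetres apart.

# Back-of-the-envelope standing-wave geometry for acoustic levitation.
# The 40 kHz drive frequency is an assumed, typical ultrasonic value.
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
frequency_hz = 40_000.0  # assumed ultrasonic drive frequency

wavelength = SPEED_OF_SOUND / frequency_hz   # about 8.6 mm
node_spacing = wavelength / 2                # pressure nodes every ~4.3 mm

print(f"wavelength   : {wavelength * 1000:.1f} mm")
print(f"node spacing : {node_spacing * 1000:.1f} mm")
# Small, light objects can be held near the pressure nodes, which is why the
# technique suits millimetre-scale parts that must not be touched directly.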
Human skin is a fascinating multifunctional organ with unique properties originating from its flexible and compliant nature. It allows for interfacing with the external physical environment through numerous receptors interconnected with the nervous system. Scientists have long been trying to transfer these features to artificial skin, with robotic applications in mind.
Robotic systems rely heavily on electronic and magnetic field sensing functionalities for positioning and orientation in space. Much research has been devoted to implementing these functionalities in a flexible, compliant form. Recent advancements in flexible sensors and organic electronics have provided important prerequisites: these devices can operate on soft and elastic surfaces, while the sensors perceive various physical properties and transmit them via readout circuits.
To closely replicate natural skin, it is necessary to interconnect a large number of individual sensors, and this challenging task became a major obstacle to realizing electronic skin. The first demonstrations were based on arrays of individual sensors addressed separately, which unavoidably resulted in a tremendous number of electronic connections. To reduce the necessary wiring, key technology had to be developed: complex electronic circuits, current sources, and switches had to be combined with individual magnetic sensors to achieve fully integrated devices.
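A small numeric illustration of why that integration matters: wiring every sensor in an N-by-M array individually needs N times M connections, while row/column multiplexing with on-skin switches needs only N plus M. The 16-by-16 array size below is an arbitrary example, not a figure from the research.

# Illustrative wiring comparison for a sensor array on electronic skin.
rows, cols = 16, 16

individually_wired = rows * cols   # one dedicated line per sensor: 256
row_column_muxed = rows + cols     # shared row/column lines: 32

print(f"individually wired sensors need {individually_wired} connections")
print(f"row/column multiplexing needs   {row_column_muxed} connections")
# Reading a multiplexed array means activating one row at a time and sampling
# each column, which is why on-skin switches and current sources (integrated
# circuits sitting next to the sensors) are so important.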
“The thing I find rewarding about coding: You’re literally creating something out of nothing. You’re kind of like a wizard.”
When the smiley-faced robot tells two boys to pick out the drawing of an ear from three choices, one of the boys, about 5, touches his nose. “No. Ear,” his teacher says, a note of frustration in her voice. The child picks up the drawing of an ear and hands it to the other boy, who shows it to the robot. “Yes, that is the ear,” the ever-patient robot says. “Good job.” The boys smile as the teacher pats the first boy in congratulations.
The robot is powered by technology created by Movia Robotics, founded by Tim Gifford in 2010 and headquartered in Bristol, Connecticut. Unlike other companies that have made robots intended to work with children with autism spectrum disorder (ASD), such as Beatbots, Movia focuses on building and integrating software that can work with a number of humanoid robots, such as the Nao. Movia has robots in three school districts in Connecticut. Through a U.S. Department of Defense contract, they’re being added to 60 schools for the children of military personnel worldwide.
It’s Gifford’s former computer science graduate student, Christian Wanamaker, who programs the robots. Before graduate school at the University of Connecticut, Wanamaker used his computer science degree to program commercial kitchen fryolators. He enjoys a crispy fry as much as anyone, but his work coding for robot-assisted therapy is much more challenging, interesting and rewarding, he says.