
Self-supervised machine learning adds depth, breadth and speed to sky surveys

Sky surveys are invaluable for exploring the universe, allowing celestial objects to be catalogued and analyzed without the need for lengthy observations. But in providing a general map or image of a region of the sky, they are also among the largest data generators in science, with individual surveys now imaging tens of millions to billions of galaxies over their lifetimes. In the near future, for example, the Vera C. Rubin Observatory in Chile will produce 20 TB of data per night, generate about 10 million alerts daily, and accumulate a final data set 60 PB in size.

As a result, sky surveys have become increasingly labor-intensive when it comes to sifting through the gathered datasets to find the most relevant information or new discoveries. In recent years, machine learning has added a welcome twist to the process, primarily in the form of supervised and unsupervised algorithms used to train the computer models that mine the data. But these approaches present their own challenges. Supervised learning, for example, requires image labels that must be manually assigned, a task that is not only time-consuming but restrictive in scope; at present, only about 1% of all known galaxies have been assigned such labels.

To address these limitations, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) is exploring a new tack: self-supervised representation learning. Like unsupervised learning, self-supervised learning eliminates the need for training labels, instead attempting to learn by comparison. By introducing certain data augmentations, self-supervised algorithms can be used to build “representations”—low-dimensional versions of images that preserve their inherent information—and have recently been demonstrated to outperform supervised learning on industry-standard image datasets.
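To make this concrete, here is a minimal sketch of one common contrastive recipe (in the style of SimCLR), not the Berkeley Lab team’s actual pipeline: two randomly augmented views of each unlabeled image are pulled together in representation space while all other images in the batch are pushed apart. The tiny encoder, toy augmentations, and random “galaxy cutouts” below are illustrative stand-ins.

```python
# Minimal contrastive self-supervised sketch (SimCLR-style) in PyTorch.
# Illustrative only: the encoder, augmentations, and data are stand-ins,
# not the Berkeley Lab pipeline described in the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image to a low-dimensional, unit-norm representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.proj(self.features(x)), dim=1)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Two views of the same image attract; everything else repels."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                  # (2n, dim)
    sim = z @ z.t() / temperature                   # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # Row i's positive is its other view: i+n for the first half, i-n after.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def augment(x):
    """Toy augmentations; a survey pipeline would use rotations,
    PSF blurring, extinction, realistic noise models, etc."""
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[-1])                # random horizontal flip
    return x + 0.05 * torch.randn_like(x)           # small pixel noise

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
images = torch.randn(16, 3, 64, 64)                 # placeholder image cutouts
opt.zero_grad()
loss = nt_xent_loss(encoder(augment(images)), encoder(augment(images)))
loss.backward()
opt.step()
```

After training, the encoder’s outputs serve as the low-dimensional representations described above, which can then be clustered, searched for similar objects, or fine-tuned with the small fraction of labeled galaxies.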

DARPA Announces Researchers to Exploit Infrared Spectrum for Understanding 3D Scenes

DARPA has selected four industry and university research teams for the Invisible Headlights program, which seeks to determine if it’s possible for autonomous vehicles to navigate in complete darkness using only passive sensors.

Amazing Seeing Eye Shoes With Camera-Based AI Image Recognition to Assist the Visually Impaired

Austrian shoe company Tec-Innovation has partnered with students at the Graz University of Technology in Austria to implement camera-based AI image recognition into their line of shoes that are specifically made to help those who are visually impaired.

The original version of these “seeing eye” shoes features ultrasonic sensors, which warn the wearer of obstacles in their way through haptic or auditory signals. AI image recognition that constantly learns allows the shoes to provide more specific information to the wearer.
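As a rough illustration of how such a fusion might work (the names read_ultrasonic_cm, classify_frame, and vibrate are hypothetical stand-ins; Tec-Innovation’s actual firmware and model are not public), the sensing loop could combine the original ultrasonic distance reading with a camera-based class label before deciding how to warn the wearer:

```python
# Hypothetical sketch of an obstacle-warning loop: fuse an ultrasonic
# distance reading with a camera-based classifier and emit a haptic pulse.
# All device functions here are stand-ins for illustration.
import random
import time

OBSTACLE_CLASSES = {"wall", "stairs_down", "curb", "person"}

def read_ultrasonic_cm():
    # Stand-in for the shoe's ultrasonic sensor driver.
    return random.uniform(20, 300)

def classify_frame():
    # Stand-in for the toe-mounted camera plus image-recognition model.
    return random.choice(sorted(OBSTACLE_CLASSES) + ["clear"])

def vibrate(intensity):
    # Stand-in for the haptic actuator; print instead of buzzing.
    print(f"haptic pulse, intensity={intensity:.2f}")

for _ in range(50):                      # a few iterations of the sensing loop
    distance_cm = read_ultrasonic_cm()
    label = classify_frame()
    if distance_cm < 120 and label in OBSTACLE_CLASSES:
        # Closer obstacles trigger stronger pulses; the class label is what
        # lets the camera version say more than "something is ahead".
        vibrate(1.0 - distance_cm / 120)
    time.sleep(0.1)
```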

This Magical AI Makes Your Photos Move! 🤳

❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd.

📝 The paper “Endless Loops: Detecting and Animating Periodic Patterns in Still Images” and the app are available here:
https://pub.res.lightricks.com/endless-loops/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O’Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers.

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

Károly Zsolnai-Fehér’s links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers.
Web: https://cg.tuwien.ac.at/~zsolnai/

Face recognition is just the tip of the AI Computer Vision iceberg

These flaws in AI training give the technology a bad name, and so do regular media reports suggesting that intelligent machines are poised to decimate the human workforce. These themes, for many people, have obscured AI’s genuine usefulness in data analysis and conversational platforms. And while computer vision does indeed have its flaws, it is more than just a reflection of societal biases: it is potentially an essential tool for both society and business.

Computer vision, or CV, gives machines the power of visual recognition in a way that emulates human sight. Whether a machine is detecting dangers on the road or, more controversially, recognising faces in a crowd, the ultimate aim is to make decisions based on image interpretation.

The tech is an advanced form of pattern recognition, made through statistical comparison of data sets. This means that while machines can “see”, they have no real understanding of what they are looking at. They can distinguish one object from another, true, but can’t explain what this difference means.
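A toy example makes the distinction concrete. The nearest-neighbor classifier below (illustrative only; real systems use deep networks, but the principle is the same) reliably separates two clusters of feature vectors by statistical proximity, yet nothing in it represents what the two classes mean:

```python
# Classification as statistical comparison: a toy nearest-neighbor model.
# It distinguishes two classes without any notion of what they are.
import numpy as np

rng = np.random.default_rng(0)
# Two "object" classes as clusters of 16-dimensional feature vectors.
class_a = rng.normal(loc=0.0, scale=1.0, size=(100, 16))
class_b = rng.normal(loc=3.0, scale=1.0, size=(100, 16))
train_x = np.vstack([class_a, class_b])
train_y = np.array([0] * 100 + [1] * 100)

def predict(x):
    # Label of the statistically closest training example.
    distances = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(distances)]

query = rng.normal(loc=3.0, scale=1.0, size=16)
print(predict(query))   # 1: it "sees" the difference but cannot explain it
```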


Backflipping MIT Mini Cheetah

Circa 2019


MIT’s new mini cheetah robot is the first four-legged robot to do a backflip. At only 20 pounds, the limber quadruped can bend and swing its legs wide, enabling it to walk either right side up or upside down. The robot can also trot over uneven terrain at about twice the average person’s walking speed. (Learn more: http://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-…kflip-0304)

Watch more videos from MIT: https://www.youtube.com/user/MITNewsOffice?sub_confirmation=1

The Massachusetts Institute of Technology is an independent, coeducational, privately endowed university in Cambridge, Massachusetts. Our mission is to advance knowledge; to educate students in science, engineering, and technology; and to tackle the most pressing problems facing the world today. We are a community of hands-on problem-solvers in love with fundamental science and eager to make the world a better place.

The MIT YouTube channel features videos about all types of MIT research, including the robot cheetah, LIGO, gravitational waves, mathematics, and bombardier beetles, as well as videos on origami, time capsules, and other aspects of life and culture on the MIT campus. Our goal is to open the doors of MIT and bring the Institute to the world through video.

Drone swarms are coming to the Middle East and Israel is leading the way

Drone swarms are a new concept tied to advances in artificial intelligence and networked military units, a futuristic battlefield application of the latest technology.


The use of this kind of technology in conflict has raised concerns for years as human-rights groups decried the advent of “killer robots.” Evidence shows that what is actually happening is not the creation of “killer robots,” but rather the use of technology to enable drones and other autonomous or unmanned systems to work together.

This matters because other countries in the region are working on new technologies as well. Iran used drones and cruise missiles to attack Saudi Arabia in September 2019. Turkey has built a drone that reportedly “hunted down” people in Libya, although much remains shrouded in mystery regarding how autonomous the drone was and whether it really hunted down adversaries using artificial intelligence.

Regardless of how Turkey’s Kargu-2 autonomous drone worked, media headlines suggested it may represent the first use of “AI-armed drones” and that a “new era” of robot war may be upon us.

Johns Hopkins startup aims to shake up AI with a research-first approach

The formula for launching a machine learning company in health care looks something like this: Build a model, test it on historical patient data in a computer lab, and then start selling it to hospitals nationwide.

Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University, is taking a different approach. Her company, Bayesian Health, is coming out of stealth mode on Monday by publishing a prospective study on how one of its lead products — an early warning system for sepsis — impacted the care of current patients in real hospitals.
