Archive for the ‘robotics/AI’ category: Page 1861

Jun 27, 2019

Researchers grow active mini-brain-networks

Posted by in categories: biotech/medical, robotics/AI

Cerebral organoids are artificially grown, 3D tissue cultures that resemble the human brain. Now, researchers from Japan report functional neural networks derived from these organoids in a study publishing June 27 in the journal Stem Cell Reports. Although the organoids aren’t actually “thinking,” the researchers’ new tool—which detects neural activity using organoids—could provide a method for understanding human brain function.

“Because they can mimic cerebral development, cerebral organoids can be used as a substitute for the human brain to study complex developmental and neurological disorders,” says corresponding author Jun Takahashi, a professor at Kyoto University.

However, these studies are challenging, because current cerebral organoids lack desirable supporting structures, such as blood vessels and surrounding tissues, Takahashi says. Since researchers have a limited ability to assess the organoids’ neural activities, it has also been difficult to comprehensively evaluate the function of neuronal networks.

Jun 27, 2019

Google Is Giving Away AI That Can Build Your Genome Sequence

Posted by in categories: biotech/medical, robotics/AI

Circa 2017


The deep learning tool can identify all the small mutations that make you unique, more accurately than any existing method.

Jun 26, 2019

The Rise of a New Generation of AI Avatars

Posted by in categories: innovation, robotics/AI

I recently discovered it’s possible for someone in their 20s to feel old—just mention Microsoft’s Clippy to anyone born after the late 90s. Weirdly, there is an entire generation of people who never experienced that dancing, wide-eyed paperclip interrupting their work on a Word document.

For readers who never knew him, Clippy was an interactive virtual assistant that took the form of an animated paperclip, designed to help guide users through Microsoft Word. An iconic symbol of its decade, Clippy was also famously terrible. The worldwide consensus was that Clippy was annoying and intrusive, and Time magazine even named it among the 50 worst inventions of all time (squeezed between ‘New Coke’ and Agent Orange. Not a fun list).

Though Clippy was intended to help users navigate their software lives, it may have been 20 or so years ahead of its time.

Jun 26, 2019

Robots ‘to replace 20 million factory jobs’

Posted by in categories: economics, employment, robotics/AI

A huge acceleration in the use of robots will affect jobs around the world, Oxford Economics says.

Jun 26, 2019

The first AI universe sim is fast and accurate—and its creators don’t know how it works

Posted by in categories: robotics/AI, space

For the first time, astrophysicists have used artificial intelligence techniques to generate complex 3D simulations of the universe. The results are so fast, accurate and robust that even the creators aren’t sure how it all works.

Jun 26, 2019

What Could Possibly Be Cooler Than RoboBee? RoboBee X-Wing

Posted by in categories: robotics/AI, solar power, space, sustainability, transportation

They used to call it RoboBee—a flying machine half the size of a paperclip that could flap its pair of wings 120 times a second. It was always tethered to a power source, limiting its freedom. Now, though, RoboBee becomes RoboBee X-Wing, as Harvard researchers have added solar cells and an extra pair of wings, freeing the robot to blast off to a galaxy far, far away. Or at least partway across the room, as it can sustain flight for only half a second, and only indoors.

But hey, baby steps. The teeniest of quadrotors measure a few inches across and weigh a third of an ounce. RoboBee X-Wing is about the same size as those untethered fliers, but weighs a hundredth of an ounce, which earns it the distinction of being the lightest aerial vehicle to manage sustained untethered flight. One day that could make it ideal for navigating tight, sensitive spaces in a galaxy very, very near.


Jun 25, 2019

This camera app uses AI to erase people from your photographs

Posted by in categories: humor, robotics/AI

Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence.

“One joke we always make about it is: ‘finally, you can take a selfie without yourself,’” Damjanski tells The Verge.

The app costs $2.99 from the App Store, and, fair warning here, it’s not very good — or at least, it’s not flawless. The app is slow and removes people with a great deal of mess, leaving behind a smear of pixels like an AI hit man sending a message. If you’re looking to edit out political opponents from your Instagram, you’d be better off using Photoshop. But if you want to mess around with machine learning magic, Bye Bye Camera is good fun.
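The article only says the app uses AI to cut people out and paint over the gap, so as a rough illustration, here is a minimal Python sketch of that general two-step pipeline: a person-segmentation mask followed by inpainting. The mask and file names below are placeholders, not Bye Bye Camera’s actual internals; a real app would get the mask from a learned segmentation model and use learned inpainting, which leaves fewer smears.

    # Illustrative sketch only: segment people, then inpaint the masked region.
    # The mask below is hard-coded; a real pipeline would get it from a
    # person-segmentation model such as Mask R-CNN or DeepLab.
    import cv2
    import numpy as np

    def remove_people(image_bgr: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
        """Paint over pixels where person_mask is 255 (person) vs. 0 (background)."""
        kernel = np.ones((15, 15), np.uint8)
        mask = cv2.dilate(person_mask, kernel)  # grow the mask so edge pixels are covered too
        return cv2.inpaint(image_bgr, mask, inpaintRadius=7, flags=cv2.INPAINT_TELEA)

    img = cv2.imread("street.jpg")            # placeholder input file
    mask = np.zeros(img.shape[:2], np.uint8)
    mask[100:400, 200:300] = 255              # pretend a detector found a person here
    cv2.imwrite("street_no_people.jpg", remove_people(img, mask))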

Jun 25, 2019

Russia reveals bulletproof ‘soldier suit’ with claws to be ‘walking army robot’

Posted by in category: robotics/AI

Kalashnikov — famous for making the AK-47 — has revealed a machine monster that looks straight out of Avatar or RoboCop, as a ‘demonstration of what is to come’ at Moscow’s Army 2018 Fair.

Jun 25, 2019

PizzaGAN gets the picture on how to make a pizza

Posted by in category: robotics/AI

Is nothing sacred? Who would dare to even attempt to talk about a machine-learning experiment that results in the perfect (gasp) pizza? It is difficult to contemplate, but a research quintet did not shy away from trying, and they worked to teach a machine how to make a great pie.

Say hello to PizzaGAN, a compositional, layer-based generative model that aims to mirror the step-by-step procedure of pizza-making.

Their goal was to teach the machine by building a generative model that mirrors an ordered set of instructions. How they proceeded: “Each operator is designed as a Generative Adversarial Network (GAN). Given only weak image-level supervision, the operators are trained to generate a visual layer that needs to be added to or removed from the existing image. The proposed model is able to decompose an image into an ordered sequence of layers by applying sequentially in the right order the corresponding removing modules.”
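To make the quoted idea a bit more concrete, here is a minimal PyTorch sketch of what one such operator could look like: a small generator that predicts a topping layer plus an alpha mask and composites it onto the current image, with operators chained in order. The layer/mask design, network sizes, and names are illustrative assumptions, not the authors’ actual architecture, and the adversarial training against discriminators under weak image-level supervision is omitted here.

    # Sketch of the "each operator is a GAN generator" idea: predict a visual layer
    # and a mask, then alpha-composite that layer onto the current pizza image.
    # Sizes and names are illustrative, not the authors' architecture.
    import torch
    import torch.nn as nn

    class LayerOperator(nn.Module):
        """One add/remove operator: image in, composited image out."""
        def __init__(self, channels: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 4, 3, padding=1),   # 3 layer channels + 1 mask channel
            )

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            out = self.net(image)
            layer = torch.tanh(out[:, :3])               # predicted topping layer
            mask = torch.sigmoid(out[:, 3:4])            # where that layer applies
            return mask * layer + (1 - mask) * image     # alpha-composite onto the input

    # Building a pizza is then applying operators in order (dough -> sauce -> cheese ...);
    # in training, each operator would also face a discriminator (the adversarial part).
    operators = [LayerOperator(), LayerOperator(), LayerOperator()]
    image = torch.zeros(1, 3, 128, 128)                  # blank canvas stand-in
    for op in operators:
        image = op(image)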

Jun 24, 2019

Hate speech on Twitter predicts frequency of real-life hate crimes

Posted by in categories: health, robotics/AI

According to a first-of-its-kind study, cities with a higher incidence of a certain kind of racist tweets reported more actual hate crimes related to race, ethnicity, and national origin.

A New York University research team analyzed the location and linguistic features of 532 million tweets published between 2011 and 2016. They trained a machine learning model—one form of artificial intelligence—to identify and analyze two types of tweets: those that are targeted—directly espousing discriminatory views—and those that are self-narrative—describing or commenting upon discriminatory remarks or acts. The team compared the prevalence of each type of discriminatory tweet to the number of actual hate crimes reported during that same time period in those same cities.
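As a rough, hedged sketch of that kind of analysis (not the NYU team’s actual code, data, or model), the pipeline amounts to: train a text classifier on labeled tweets, aggregate the predicted rate of targeted tweets by city, and correlate those rates with reported hate-crime counts. Every dataset, label, and city below is a made-up placeholder.

    # Toy stand-in for "classify tweets, aggregate by city, correlate with hate crimes".
    # All data below is fabricated for illustration only.
    import pandas as pd
    from scipy.stats import pearsonr
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder labeled tweets: 1 = targeted (espousing discrimination),
    # 0 = self-narrative (describing or commenting on it).
    train = pd.DataFrame({
        "text": ["<targeted example 1>", "<targeted example 2>",
                 "<self-narrative example 1>", "<self-narrative example 2>"],
        "label": [1, 1, 0, 0],
    })
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(train["text"], train["label"])

    # Placeholder geolocated tweets and city-level hate-crime counts.
    tweets = pd.DataFrame({
        "city": ["A", "A", "B", "B", "C", "C"],
        "text": ["<targeted example 1>", "<targeted example 2>",
                 "<self-narrative example 1>", "<targeted example 1>",
                 "<self-narrative example 2>", "<self-narrative example 1>"],
    })
    crimes = pd.Series({"A": 12, "B": 3, "C": 7})

    tweets["targeted"] = clf.predict(tweets["text"])            # per-tweet label
    rate_by_city = tweets.groupby("city")["targeted"].mean()    # share of targeted tweets
    r, p = pearsonr(rate_by_city.reindex(crimes.index), crimes)
    print(f"targeted-tweet rate vs. hate crimes: r={r:.2f} (p={p:.2f})")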

The research was led by Rumi Chunara, an assistant professor of computer science and engineering at the NYU Tandon School of Engineering and biostatistics at the NYU College of Global Public Health, and Stephanie Cook, an assistant professor of biostatistics and social and behavioral sciences at the NYU College of Global Public Health.