
Meet the researcher using AI to build children’s animations

We’ve all seen children draw quirky, awesome characters, and even heard them talk about their illustrations as if they were real! How cool would it be to actually bring the characters to life?

Researchers at Meta AI have developed a way to do just that. We’re announcing a first-of-its-kind AI-powered animation tool that can automatically animate children’s drawings of human figures within minutes.

Boeing Says It’s Going to Build a New Airplane Model. In the Metaverse?

Welcome to Web 3.0.

It’s happening.

Major human-focused industries are injecting virtual interactions into the very design of next-generation vehicles. According to an initial report from Reuters, Boeing's 3D engineering designs will have digital twins that speak to each other via conversational "robots," while human mechanics at factories throughout the world will be linked via $3,500 HoloLens headsets developed by Microsoft.

In other words, Boeing just took a major step into Web 3.0, with airline service operations and production becoming unified within a single digital ecosystem. And it could happen in just two years.

Boeing wants to enter 2022 fighting for engineering dominance

Critics note that Boeing has promised an imminent digital revolution before. But insiders familiar with the announcement say the company's broad aims of improving safety and quality have taken on new urgency and significance amid its recent struggles. Boeing plans to fly into 2022 fighting for its engineering dominance in the industry following the 737 MAX crisis, while also preparing for a future aircraft program in the coming decade. Make no mistake, though: this is a $15 billion gamble. To make good on its pledge, Boeing will also have to develop a means of preventing manufacturing issues, like the structural flaws that delayed its 787 Dreamliner in 2021.


Giving Bug-Like Bots a Boost: New Artificial Muscles Improve the Performance of Flying Microrobots

A new fabrication technique produces low-voltage, power-dense artificial muscles that improve the performance of flying microrobots.

When it comes to robots, bigger isn’t always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building.

Source: MIT.

Orbital Insight to build AI for intelligence community based on artificial data

WASHINGTON – The National Geospatial-Intelligence Agency has selected a team of commercial and academic partners to build an artificial intelligence system trained on synthetic data, an effort that will help the agency determine how it builds machine learning algorithms moving forward.

Orbital Insight was issued a Phase II Small Business Innovation Research contract by the NGA, the company announced Dec. 16. It will collaborate with Rendered.ai and the University of California, Berkeley, to develop a computer vision model.

As the organization charged with analyzing satellite imagery for the intelligence community, NGA has put increased emphasis on using AI for its mission. The agency sees human-machine pairing as critical to its success, with machine learning algorithms taking over the rote task of processing the torrent of satellite data to find potential intelligence, freeing up human operators for higher-level analysis and tasks.

How Wearable AI Will Amplify Human Intelligence

Imagine that your team is meeting to decide whether to continue an expensive marketing campaign. After a few minutes, it becomes clear that nobody has the metrics on-hand to make the decision. You chime in with a solution and ask Amazon’s virtual assistant Alexa to back you up with information: “Alexa, how many users did we convert to customers last month with Campaign A?” and Alexa responds with the answer. You just amplified your team’s intelligence with AI. But this is just the tip of the iceberg.

Intelligence amplification is the use of technology to augment human intelligence. And a paradigm shift is on the horizon, where new devices will offer less intrusive, more intuitive ways to amplify our intelligence.

Hearables, or wireless in-ear computational earpieces, are an example of intelligence amplification devices that have been rapidly adopted in recent years. One example is Apple's AirPods, smart earbuds that connect to Apple devices and integrate with Siri via voice commands. Apple has also filed a patent for earbuds equipped with biometric sensors that could record data such as a user's temperature, heart rate, and movement. Similarly, Google's Pixel Buds give users direct access to the Google Assistant and its powerful knowledge graph. Google Assistant seamlessly connects users to information stored in Google platforms, such as email and calendars. It also provides highly personalized recommendations, helps automate personal communication, and offloads monotonous tasks like setting timers, managing lists, and controlling IoT devices.

CurveLight: A high-performance indoor positioning system

In recent years, engineers have been trying to develop more effective sensors and tools to monitor indoor environments. Serving as the foundation of these tools, indoor positioning systems automatically determine the position of objects with high accuracy and low latency, enabling emerging Internet-of-Things (IoT) applications such as robotics, autonomous driving, and VR/AR.

A team of researchers recently created CurveLight, an accurate and efficient positioning system. Their technology, described in a paper presented at ACM’s SenSys 2021 Conference on Embedded Networked Sensor Systems, could be used to enhance the performance of autonomous vehicles, robots and other advanced technologies.

“In CurveLight, the signal transmitter includes an infrared LED, covered by a hemispherical and rotatable shade,” Zhimeng Yin, one of the researchers who developed the system at City University of Hong Kong, told TechXplore. “The receiver detects the light signals with a photosensitive diode. When the shade is rotating, the transmitter generates a unique sequence of light signals for each point in the covered space.”
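The mechanism Yin describes can be sketched in miniature: as the shade rotates, each point in the covered space sees its own distinct on/off light sequence, so a receiver can recover its position by matching the sequence it observes against a known mapping. The toy Python sketch below illustrates only this lookup idea; the codebook, sequences, and positions are invented for illustration and are not from the paper.

```python
# Toy sketch of CurveLight's core idea (not the authors' implementation):
# the rotating shade gives every point in space a unique light on/off
# sequence, so localization reduces to matching an observed sequence
# against a precomputed codebook. All values here are made up.

# Hypothetical codebook: light sequence observed at each 2-D grid position.
CODEBOOK = {
    (1, 0, 0, 1): (0.0, 0.0),
    (0, 1, 0, 1): (0.0, 1.0),
    (1, 1, 0, 0): (1.0, 0.0),
    (0, 0, 1, 1): (1.0, 1.0),
}

def localize(observed_sequence):
    """Return the position whose codebook sequence matches the observation,
    or None if the sequence is unknown."""
    return CODEBOOK.get(tuple(observed_sequence))

print(localize([0, 1, 0, 1]))  # -> (0.0, 1.0)
```

A real system would, of course, derive positions geometrically from signal timing rather than from an enumerated table; the sketch only shows why a unique per-point signal makes localization possible.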

Why deep-learning methods confidently recognize images that are nonsense

For all that neural networks can accomplish, we still don’t really understand how they operate. Sure, we can program them to learn, but making sense of a machine’s decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.

If a model was trying to classify an image of said puzzle, for example, it could encounter well-known, but annoying adversarial attacks, or even more run-of-the-mill data or processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: “overinterpretation,” where algorithms make confident predictions based on details that don’t make sense to humans, like random patterns or image borders.

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars and medical diagnoses for diseases that require immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand their surroundings and then make quick, safe decisions. In the MIT study, a network classified traffic lights and street signs using specific backgrounds, edges, or particular patterns of the sky, irrespective of what else was in the image.
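The overinterpretation failure mode can be illustrated with a toy sketch: if a model's decision depends only on border pixels, erasing everything else in the image leaves its prediction (and confidence) unchanged. The "classifier" below is invented purely for demonstration and has nothing to do with the models MIT actually studied.

```python
# Toy illustration of overinterpretation (not MIT's experiment): a model
# that has latched onto border pixels keeps the same prediction even when
# the semantically meaningful interior of the image is blanked out.

def border_pixels(image):
    """Collect the outermost ring of pixels from a 2-D image (list of lists)."""
    top, bottom = image[0], image[-1]
    sides = [row[0] for row in image[1:-1]] + [row[-1] for row in image[1:-1]]
    return tuple(top + bottom + sides)

def toy_classifier(image):
    """Pretend model: its decision is a function of the border alone."""
    return "traffic_light" if sum(border_pixels(image)) % 2 == 0 else "street_sign"

full = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # original image
masked = [[1, 2, 3], [4, 0, 6], [7, 8, 9]]  # interior erased

# Same prediction with and without the interior: the border alone decides.
print(toy_classifier(full) == toy_classifier(masked))  # -> True
```

Masking experiments of this shape, where inputs are reduced to small, humanly meaningless pixel subsets while predictions stay confident, are the kind of probe used to expose overinterpretation.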