Archive for the ‘information science’ category: Page 164

Oct 9, 2021

Liquid Neural Networks

Posted by in categories: information science, robotics/AI

Oct 8, 2021

“Abstract: In this talk, we will discuss the nuts and bolts of the novel continuous-time neural network models: Liquid Time-Constant (LTC) Networks. Instead of declaring a learning system’s dynamics by implicit nonlinearities, LTCs construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. LTCs represent dynamical systems with varying (i.e., liquid) time-constants, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks compared to advanced recurrent network models.”
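The update rule behind this idea can be sketched numerically. Below is a minimal, hypothetical single-step LTC cell in Python, loosely following the fused semi-implicit Euler solver described for LTC networks; the parameter names (`W`, `b`, `tau`, `A`) and the sigmoid gate are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def ltc_step(x, I, dt, W, b, tau, A):
    """One fused-Euler step of a Liquid Time-Constant cell (illustrative).

    x: hidden state, I: input, tau: base time constants, A: target biases.
    The gate f modulates both the effective time constant and the target
    state, giving each neuron a "liquid" (input-dependent) time constant.
    """
    z = W @ np.concatenate([x, I]) + b
    f = 1.0 / (1.0 + np.exp(-z))  # nonnegative gate keeps the update stable
    # semi-implicit (fused) update: bounded for any dt > 0
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))
```

Because the denominator always exceeds one, the state stays bounded regardless of step size, which is the stability property the abstract highlights.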


Ramin Hasani, MIT — intro by Daniela Rus, MIT

Continue reading “Liquid Neural Networks” »

Oct 9, 2021

AI Weekly: EU facial recognition ban highlights need for U.S. legislation

Posted by in categories: food, government, information science, law enforcement, privacy, robotics/AI, security, terrorism

This week, The European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification unless it’s to fight “serious” crime, such as kidnapping and terrorism.

The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition even in light of studies showing the potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 federal agencies, including the Departments of Agriculture, Commerce, Defense, and Homeland Security, plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.

Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. The technology and techniques — everything from sepia-tinged film to low-contrast digital cameras — often favor lighter skin, encoding racial bias in algorithms. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.

Oct 9, 2021

New Virtual Obstacle Courses Are Teaching Real Robots How to Walk

Posted by in categories: information science, robotics/AI

A virtual army of 4,000 doglike robots was used to train an algorithm capable of enhancing the legwork of real-world robots, according to an initial report from Wired. And new tricks learned in the simulation could soon see execution in a neighborhood near you.

While undergoing training, the robots mastered walking up and down stairs without too much struggle, but slopes threw them for a loop: few could grasp the essentials of sliding down one. Once the final algorithm was transferred to a real-world version of ANYmal, a four-legged doglike robot with sensors in its head and a detachable robot arm, it successfully navigated blocks and stairs but had issues working at higher speeds.

Continue reading “New Virtual Obstacle Courses Are Teaching Real Robots How to Walk” »

Oct 8, 2021

Researchers create ‘self-aware’ algorithm to ward off hacking attempts

Posted by in categories: biotech/medical, cybercrime/malcode, information science, nuclear energy, robotics/AI

It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results.

The scenario isn’t fictional; it happened in 2010 when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks around the world increase, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the computer models and data analytics—based on artificial intelligence—that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves.

Purdue University’s Hany Abdel-Khalik has come up with a powerful response: making the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data will be immediately detected and rejected by the system itself, requiring no human response.
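As a rough illustration of the idea (not Abdel-Khalik's actual method), one can hide a keyed, one-time pseudorandom signal beneath a sensor stream's noise floor and check for its presence on receipt; injected data from an attacker who lacks the current key fails the check. All names, scales, and thresholds below are assumptions for the sketch.

```python
import numpy as np

MARK_SCALE = 1e-3  # keep the hidden signal below the sensors' noise floor

def watermark(shape, key, t):
    # one-time-use signal: a fresh keyed pseudorandom pattern per timestep
    return np.random.default_rng(key + t).normal(0.0, MARK_SCALE, size=shape)

def embed(readings, key, t):
    """Add the current timestep's invisible watermark to the readings."""
    return readings + watermark(readings.shape, key, t)

def is_authentic(received, baseline, key, t, threshold=0.5):
    """Correlate the residual against the expected one-time signal."""
    mark = watermark(received.shape, key, t)
    residual = received - baseline
    denom = np.linalg.norm(residual) * np.linalg.norm(mark) + 1e-12
    return float(residual @ mark) / denom > threshold
```

Spoofed data that merely looks plausible lacks the current signal, so its residual fails to correlate with the expected watermark; even replaying an old watermarked stream fails, since each timestep's signal is used only once.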

Oct 7, 2021

Enabling AI-driven health advances without sacrificing patient privacy

Posted by in categories: biotech/medical, encryption, health, information science, robotics/AI

There’s a lot of excitement at the intersection of artificial intelligence and health care. AI has already been used to improve disease treatment and detection, discover promising new drugs, identify links between genes and diseases, and more.

By analyzing large datasets and finding patterns, virtually any new algorithm has the potential to help patients — AI researchers just need access to the right data to train and test those algorithms. Hospitals, understandably, are hesitant to share sensitive patient information with research teams. When they do share data, it’s difficult to verify that researchers are only using the data they need and deleting it after they’re done.

Secure AI Labs (SAIL) is addressing those problems with a technology that lets AI algorithms run on encrypted datasets that never leave the data owner’s system. Health care organizations can control how their datasets are used, while researchers can protect the confidentiality of their models and search queries. Neither party needs to see the data or the model to collaborate.

Oct 6, 2021

The Facebook whistleblower says its algorithms are dangerous. Here’s why

Posted by in category: information science

Frances Haugen’s testimony at the Senate hearing today raised serious questions about how Facebook’s algorithms work—and echoes many findings from our previous investigation.

Oct 3, 2021

The Music of Proteins Is Made Audible Through a Computer Program That Learns From Chopin

Posted by in categories: chemistry, computing, information science, media & arts

Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet. A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation.

Protein chains can also fold into wavy and curved patterns with ups, downs, turns, and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs.
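The letter-to-note idea can be made concrete. Below is a deliberately simplified, hypothetical mapping in Python: the actual program learns its musical style from Chopin, whereas this sketch just assigns each of the 20 amino-acid letters a MIDI pitch in a rising scale from middle C.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino-acid letters

# hypothetical mapping: residue i -> MIDI pitch, middle C (60) upward
AA_TO_PITCH = {aa: 60 + i for i, aa in enumerate(AMINO_ACIDS)}

def protein_to_pitches(sequence):
    """Turn a protein string into a list of MIDI pitches, skipping unknowns."""
    return [AA_TO_PITCH[aa] for aa in sequence.upper() if aa in AA_TO_PITCH]
```

Feeding such a pitch list to any MIDI synthesizer would render the "melody" of a protein chain, in the same spirit as the alphabetic-notation analogy above.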

Continue reading “The Music of Proteins Is Made Audible Through a Computer Program That Learns From Chopin” »

Oct 2, 2021

How Machine Learning Is Identifying New, Better Drugs

Posted by in categories: biotech/medical, chemistry, information science, robotics/AI

When Dr. Robert Murphy first started researching biochemistry and drug development in the late 1970s, creating a pharmaceutical compound that was effective and safe to market followed a strict experimental pipeline that was beginning to be enhanced by large-scale data collection and analysis on a computer.

Now head of the Murphy Lab for computational biology at Carnegie Mellon University (CMU), Murphy has watched over the years as data collection and artificial intelligence have revolutionized this process, making the drug creation pipeline faster, more efficient, and more effective.

Recently, that’s been thanks to the application of machine learning—computer systems that learn and adapt by using algorithms and statistical models to analyze patterns in datasets—to the drug development process. This has been notably key to reducing the presence of side effects, Murphy says.

Oct 2, 2021

Top Three Trends In Robotics: The Cambrian Explosion Is Happening

Posted by in categories: information science, robotics/AI, space

About six years ago, the CEO of Toyota Research Institute published a seminal paper about whether a Cambrian explosion was coming for robotics. The term “Cambrian explosion” refers to an important event approximately half a billion years ago in which there was a rapid expansion of different forms of life on Earth. There are parallels with the field of robotics, as modern technological advancements are fueling an analogous explosion in the diversification and applicability of robots. Today, we’re seeing this Cambrian explosion of robotics unfolding, and consequently, many distinct patterns are emerging. I’ll outline the top three trends that are rapidly evolving in the robotics space and that are most likely to dominate for years to come.

1. The Democratization Of AI And The Convergence Of Technologies.

The birth and proliferation of AI-powered robots are happening because of the democratization of AI. For example, open-source machine learning frameworks are now broadly accessible; AI algorithms are now in the open domain in cloud-based repositories like GitHub; and influential publications on deep learning from top schools can now be downloaded. We now have access to more computing power (e.g., Nvidia GPUs, Omniverse, etc.), data, cloud-computing platforms (e.g., Amazon AWS), new hardware and advanced engineering. Many robotics startup companies are capitalizing on this “super evolution” of technology to build more intelligent and more capable machines.

Oct 2, 2021

Machine learning algorithm could provide Soldiers feedback

Posted by in categories: information science, military, robotics/AI

November 12, 2020


RESEARCH TRIANGLE PARK, N.C. — A new machine learning algorithm, developed with Army funding, can isolate patterns in brain signals that relate to a specific behavior and then decode it, potentially providing Soldiers with behavioral-based feedback.

“The impact of this work is of great importance to Army and DOD in general, as it pursues a framework for decoding behaviors from brain signals that generate them,” said Dr. Hamid Krim, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory. “As an example future application, the algorithms could provide Soldiers with needed feedback to take corrective action as a result of fatigue or stress.”
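A decoding framework of this kind can be caricatured with a toy pipeline: extract band-power features from windowed signals, then classify each window with a nearest-centroid rule. This is a generic illustration, not the Army-funded algorithm; the window size, feature bands, and classifier are all made-up simplifications.

```python
import numpy as np

def band_power(window):
    # crude spectral feature: mean power in low/mid/high thirds of the FFT bins
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    thirds = np.array_split(spectrum, 3)
    return np.array([t.mean() for t in thirds])

def fit_centroids(windows, labels):
    """Learn one average feature vector (centroid) per behavior label."""
    feats = np.array([band_power(w) for w in windows])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def decode(window, centroids):
    """Assign a new signal window to the nearest behavior centroid."""
    f = band_power(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

On synthetic "brain signals" whose classes differ in dominant frequency, this separates the behaviors cleanly; real EEG decoding needs far richer features and models, but the isolate-patterns-then-decode structure is the same.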

Continue reading “Machine learning algorithm could provide Soldiers feedback” »