
New AI algorithm promises defense against cyberattacks on robots

The researchers tested their algorithm on a replica of a US Army combat ground vehicle and found it was 99% effective in preventing a malicious attack.

Australian researchers have developed an artificial intelligence algorithm to detect and stop a cyberattack on a military robot in seconds.


The research was conducted by Professor Anthony Finn from the University of South Australia (UniSA) and Dr Fendy Santoso from Charles Sturt University in collaboration with the US Army Futures Command. They simulated a man-in-the-middle (MitM) attack on a GVT-BOT ground vehicle and trained its operating system to respond to it, according to the press release.

According to Professor Finn, an autonomous systems researcher at UniSA, the robot operating system (ROS) is prone to cyberattacks because it is highly networked. He explained that Industry 4.0, characterized by advances in robotics, automation, and the Internet of Things, requires robots to work collaboratively, with sensors, actuators, and controllers communicating and sharing information via cloud services, which makes them highly vulnerable to cyberattacks. He added that computing power is growing exponentially every few years, enabling researchers to develop and implement sophisticated AI algorithms to protect systems from digital threats.
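The press release does not describe the algorithm's internals, but the general idea of spotting injected traffic by its statistical signature can be sketched crudely. The detector below, purely illustrative with invented thresholds and numbers, flags a window of message inter-arrival times that deviates sharply from a learned baseline:

```python
import statistics

def detect_anomaly(intervals, baseline_mean, baseline_stdev, z_threshold=3.0):
    """Flag a window of message inter-arrival times (seconds) whose mean
    deviates strongly from the baseline -- a crude stand-in for the kind
    of learned traffic model the researchers describe."""
    window_mean = statistics.mean(intervals)
    z = abs(window_mean - baseline_mean) / baseline_stdev
    return z > z_threshold

# Normal traffic arrives roughly every 10 ms; a MitM injector floods faster.
normal = [0.010, 0.011, 0.009, 0.010, 0.010]
flooded = [0.002, 0.001, 0.002, 0.001, 0.002]
baseline_mean, baseline_stdev = 0.010, 0.001

print(detect_anomaly(normal, baseline_mean, baseline_stdev))   # → False
print(detect_anomaly(flooded, baseline_mean, baseline_stdev))  # → True
```

A real ROS intrusion detector would inspect far richer features (topic names, message contents, node identities) than timing alone; this only shows the flag-on-deviation pattern.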

This AI tool can predict virus mutations before they occur

It can be used in the development of vaccines and treatments.

What if we could predict virus mutations before they actually took place? We could prepare for their arrival and perhaps even conceive of vaccines in time to protect populations.

Now, scientists at Harvard Medical School and the University of Oxford have produced an AI tool, called EVEscape, that can do just that, according to a press release published by the institutions on Wednesday.

Google’s Green Light: AI for smarter and greener traffic lights

Google’s Green Light initiative uses AI and Google Maps to optimize traffic lights and reduce emissions.

Traffic jams are not only frustrating but also harmful to the environment. According to a study, road transportation accounts for a large share of global and urban greenhouse gas emissions, and the situation is worse at city intersections, where pollution can be 29 times higher than on open roads. The main reason is vehicles’ frequent stopping and starting, which consumes more fuel and emits more carbon dioxide.

But what if we could use artificial intelligence (AI) to optimize traffic lights and reduce these emissions? That is the idea behind Green Light, a Google Research initiative that uses AI and Google Maps data to improve signal timing at city intersections.
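Green Light's actual optimization draws on Google Maps driving trends and is not public in detail. As a hand-rolled illustration of why coordinated timing reduces stop-and-go, the classic "green wave" offset below times a downstream light so a platoon released on green arrives just as it turns green (all numbers are invented):

```python
def green_wave_offset(distance_m, speed_mps, cycle_s):
    """Offset (seconds) at which the downstream signal should turn green,
    relative to the upstream green, so a platoon traveling at a steady
    speed arrives on green. Folded into one signal cycle with modulo."""
    travel_time = distance_m / speed_mps
    return travel_time % cycle_s

# Intersections 600 m apart, traffic at 15 m/s (~54 km/h), 90 s cycle:
print(green_wave_offset(600, 15, 90))  # → 40.0
```

Fewer stops mean less idling and acceleration, which is exactly the fuel-and-emissions mechanism the article describes.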



Meta is paying $5 million to celebrities for their AI likeness

Celebrities like Tom Brady, Kendall Jenner, and Snoop Dogg have inked deals with Meta.

Meta announced last month that it is adding artificial intelligence characters based on celebrities' likenesses to its platforms, according to The Information.

The compensation will be paid over two years, said a person close to the matter. As per the report, Meta's initial budget was $1 million for the rights to a celebrity's likeness, but it ended up paying some creators as much as $5 million. The insider did not name who received the $5 million deals.

Scientists begin building AI for scientific discovery using tech behind ChatGPT

An international team of scientists, including researchers from the University of Cambridge, has launched a new research collaboration that will leverage the same technology behind ChatGPT to build an AI-powered tool for scientific discovery.

While ChatGPT deals in words and sentences, the team’s AI will learn from numerical data and physics simulations from across scientific fields to aid scientists in modeling everything from supergiant stars to the Earth’s climate.

The team launched the initiative, called Polymathic AI, earlier this week, alongside the publication of a series of related papers on the arXiv open access repository.

New AI-driven tool streamlines experiments

Researchers at the Department of Energy's SLAC National Accelerator Laboratory have demonstrated a new approach to peer deeper into the complex behavior of materials. The team harnessed the power of machine learning to interpret collective excitations: the coordinated swinging of atomic spins within a system.

This groundbreaking research, published recently in Nature Communications, could make experiments more efficient by providing real-time guidance to researchers during data collection. It is part of a project led by Howard University, including researchers at SLAC and Northeastern University, that uses machine learning to accelerate materials research.

The team created this new data-driven tool using "neural implicit representations," a machine learning development used in computer vision and in scientific fields such as medical imaging, particle physics, and cryo-electron microscopy. The tool can swiftly and accurately derive unknown parameters from measured data, automating a procedure that, until now, required significant human intervention.
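The core idea of an implicit representation is to store a signal as the parameters of a model that maps coordinates to values, which can then be queried anywhere. The toy below illustrates this with a random-feature fit to a 1D signal; the actual SLAC tool uses deeper neural networks trained on spectroscopy data, and everything here is invented for illustration:

```python
import numpy as np

# A minimal "implicit representation": map a coordinate x to random
# cosine features, then fit a linear readout so the weights themselves
# store the signal. Real neural implicit representations use MLPs.
rng = np.random.default_rng(0)
freqs = rng.normal(scale=10.0, size=64)
phases = rng.uniform(0, 2 * np.pi, size=64)

def features(x):
    return np.cos(np.outer(x, freqs) + phases)

x_train = np.linspace(0, 1, 100)
y_train = np.sin(2 * np.pi * x_train)          # the "measured" signal
w, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_query = np.array([0.25])                      # query at any coordinate
print(float(features(x_query) @ w))             # ≈ sin(pi/2) ≈ 1.0
```

Once fitted, the representation is continuous: it can be evaluated between measured points, which is one reason such models are useful for guiding experiments in real time.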

AI researchers expose critical vulnerabilities within major large language models

Large Language Models (LLMs) such as ChatGPT and Bard have taken the world by storm this year, with companies investing millions to develop these AI tools, and some leading AI chatbots being valued in the billions.

These LLMs, which are increasingly used within AI chatbots, are trained on vast amounts of text scraped from the Internet, which they use to learn and to inform the answers they provide to user-specified requests, known as "prompts."

However, computer scientists from the AI security start-up Mindgard and Lancaster University in the UK have demonstrated that chunks of these LLMs can be copied in less than a week for as little as $50, and the information gained can be used to launch targeted attacks.
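The researchers' attack targets components of commercial LLMs, but the underlying idea, model extraction, can be sketched on a toy classifier: query a black box, record its answers, and train a surrogate that mimics it. Everything below (the secret rule, the training loop) is invented for illustration:

```python
import random

def black_box(x):
    # Secret decision rule the attacker can only observe through queries.
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

random.seed(0)
queries = [(random.random(), random.random()) for _ in range(500)]
labels = [black_box(q) for q in queries]          # attacker's query budget

# Fit a linear surrogate with a simple perceptron-style update rule.
w = [0.0, 0.0]
b = 0.0
for _ in range(200):
    for (x1, x2), y in zip(queries, labels):
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

agreement = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == black_box((x1, x2))
    for x1, x2 in queries
) / len(queries)
print(f"surrogate agrees with black box on {agreement:.0%} of queries")
```

The stolen surrogate can then be probed offline at no cost to find inputs that fool the original, which is the "launch targeted attacks" step the article refers to.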
