UK Ministry of Defence Employed Rafael’s Drone Dome to Defend G7 Summit from Drone Threats

Earlier this year, in June 2021, the British Ministry of Defence employed Rafael's DRONE DOME counter-UAV system to protect world leaders at the G7 Summit in Cornwall, England, from unmanned aerial threats. Three years ago, Britain's Defence Ministry purchased several DRONE DOME systems, which it has since employed successfully in a multitude of operational scenarios, including protecting both the physical site and the participants of this year's G7 Summit. Rafael's DRONE DOME is an innovative, end-to-end, combat-proven counter-Unmanned Aerial System (C-UAS) providing all-weather, 360-degree rapid defence against hostile drones. Fully operational and globally deployed, DRONE DOME offers a modular, robust infrastructure comprising electronic jammers, sensors, and unique artificial intelligence algorithms to effectively secure threatened airspace.

Meir Ben Shaya, Rafael's EVP for Marketing and Business Development of Air Defence Systems, said: "Rafael today recognizes two new and key trends in the field of counter-UAVs, both of which DRONE DOME can successfully defend against. The first trend is the number of drones employed during an attack, and the operational need to counter multiple, simultaneous attacks; this is a significant, practical challenge that any successful system must be able to overcome. The second trend is the type of tool being employed. Previously, air defense systems were developed to seek out conventional aircraft, large unmanned aerial vehicles, and missiles, but today these defense systems must also tackle smaller, slower, low-flying threats which are becoming more and more autonomous."

Google AI Introduces Two New Families of Neural Networks Called ‘EfficientNetV2’ and ‘CoAtNet’ For Image Recognition

Training efficiency has become a significant factor in deep learning as neural network models and training datasets grow. GPT-3 is an excellent example of how critical training efficiency can be: it takes weeks of training on thousands of GPUs to demonstrate its remarkable few-shot learning capabilities.

To address this problem, the Google AI team introduces two families of neural networks for image recognition. The first is EfficientNetV2, a family of convolutional neural networks (CNNs) designed for faster training on small-scale datasets such as ImageNet1k (1.28 million images). The second is a hybrid family called CoAtNet, which combines convolution and self-attention to achieve higher accuracy on large-scale datasets such as ImageNet21k (13 million images) and JFT (billions of images). According to Google's research report, EfficientNetV2 and CoAtNet are both 4 to 10 times faster than prior models while achieving state-of-the-art 90.88% top-1 accuracy on the well-established ImageNet dataset.
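The two ingredients CoAtNet combines can be illustrated in miniature. The sketch below is not the actual CoAtNet architecture; it is a toy NumPy illustration of the difference between a convolution (each position mixes only with its neighbours) and self-attention (every position attends to every other position), with all shapes and names invented for the example.

```python
import numpy as np

def conv1d_local(x, kernel):
    """Local mixing: each position sees only a small neighbourhood."""
    pad = len(kernel) // 2
    xp = np.pad(x, (pad, pad), mode="edge")
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def self_attention(x):
    """Global mixing: each row of x attends to all rows (Q = K = V = x)."""
    scores = x @ x.T / np.sqrt(x.shape[1])      # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ x
```

A hybrid model in the CoAtNet spirit applies convolution-style local mixing in early stages, where spatial locality matters most, and attention-style global mixing in later stages.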

This AI Makes Digital Copies of Humans! 👤

❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd.

📝 The paper “The Relightables: Volumetric Performance Capture of Humans with Realistic Relighting” is available here:
https://augmentedperception.github.io/therelightables/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O’Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers.

Thumbnail background design: Felícia Zsolnai-Fehér — http://felicia.hu.

Károly Zsolnai-Fehér’s links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers.
Web: https://cg.tuwien.ac.at/~zsolnai/

#vr

Are you being automated out of work?

Aggregate of labor displacement from AI (spoiler: literally everything).


OECD experts have calculated the probability that a job will be automated, based on how feasible it is for technology to perform the tasks that comprise that job.

Jobs are grouped into occupation categories according to the ISCO-08 standard. The mean probability of automation of each occupational category is displayed, along with an example of a typical job in that category.

This is a very broad picture: the automatability of jobs within each occupation category can vary widely. The tasks that make up each job can also vary from country to country, but the mean probabilities displayed are averaged across OECD countries.
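The aggregation described above is a simple group-by-mean. The sketch below illustrates it with made-up category names and probabilities (not OECD data): per-job automation probabilities tagged with an ISCO-08-style category are averaged within each category.

```python
from collections import defaultdict

# Hypothetical per-job automation probabilities, tagged with an
# ISCO-08-style occupation category. Values are invented for illustration.
jobs = [
    ("Clerical support workers", 0.60),
    ("Clerical support workers", 0.50),
    ("Managers", 0.20),
    ("Managers", 0.30),
]

def mean_by_category(records):
    """Return the mean probability for each occupation category."""
    sums = defaultdict(lambda: [0.0, 0])
    for category, prob in records:
        sums[category][0] += prob
        sums[category][1] += 1
    return {cat: total / n for cat, (total, n) in sums.items()}
```

For example, `mean_by_category(jobs)` reduces the four job records above to one mean per category.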

Nedelkoska, L. and G. Quintini (2018), "Automation, skills use and training", OECD Social, Employment and Migration Working Papers, No. 202, OECD Publishing, Paris.

Tesla will open controversial FSD beta software to owners with a good driving record

Get ready.


Tesla CEO Elon Musk said the company will use personal driving data to determine whether owners who have paid for its controversial “Full Self-Driving” software can access the latest beta version that promises more automated driving functions.

Musk tweeted late Thursday night that the FSD Beta v10.0.1 software update, which has already been pushed out to a group of select owners, will become more widely available starting September 24.

Owners who have paid for FSD, which currently costs $10,000, will be offered access to the beta software through a "beta request button." Drivers who request the beta will be asked for permission to let Tesla assess their driving behavior using its insurance calculator, Musk wrote in a tweet.
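The gating flow described above can be sketched as a simple eligibility check: an owner must have purchased FSD, must consent to sharing driving data, and must have a driving-behavior score above some bar. The field names and threshold below are invented for illustration; Tesla has not published the exact criteria.

```python
def eligible_for_beta(has_paid_for_fsd: bool,
                      consented_to_data: bool,
                      safety_score: float,
                      threshold: float = 90.0) -> bool:
    """Hypothetical FSD-beta gate: purchase + consent + good driving record.

    `threshold` is an assumed cutoff, not a figure Tesla has disclosed.
    """
    if not (has_paid_for_fsd and consented_to_data):
        return False
    return safety_score >= threshold
```

The point of the sketch is the ordering: consent to data collection is a hard prerequisite, and only then does the behavioral score matter.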

Boston Dynamics’ Spot robot is securing its position in a niche market

Improved autonomy

One of the main features of Spot is Autowalk, a system that enables the robot to record and repeat paths. An operator takes the robot through the path using the remote controller interface. The robot memorizes the path and can repeat it when commanded to do so. Autowalk can be used for inspection missions in industrial facilities, mines, factories, and construction sites.

The new update improves Autowalk, reducing the need for human guidance and intervention. Robot operators can now edit Autowalk missions and add actions such as capturing images, reading indicators, or running third-party code. Spot has also been given better planning capabilities and can find the best path to perform target actions. Its pathfinding has also been improved to adapt to changes in its inspection routes, such as new obstacles. And it can be scheduled to carry out inspections without human supervision during off-hours.
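The record-and-repeat idea behind Autowalk can be sketched in a vendor-neutral way: an operator's walkthrough is stored as a sequence of waypoints, each optionally carrying an action (capture an image, read a gauge), and the mission is later replayed. This is not the Boston Dynamics Spot SDK; all names and structure here are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Waypoint:
    position: Tuple[float, float]                 # e.g. (x, y) in metres
    action: Optional[Callable[[], str]] = None    # optional task at this stop

@dataclass
class Mission:
    waypoints: List[Waypoint] = field(default_factory=list)

    def record(self, position, action=None):
        """Called during the operator's walkthrough to store each stop."""
        self.waypoints.append(Waypoint(position, action))

    def replay(self):
        """Re-drive the recorded path, running each stop's action."""
        log = []
        for wp in self.waypoints:
            log.append(f"move to {wp.position}")
            if wp.action:
                log.append(wp.action())
        return log
```

In a real system the replay step would also handle the improvements the article describes, such as re-planning around new obstacles between waypoints.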
