
Responsible AI in a Global Context

CSIS will host a public event on responsible AI in a global context, featuring a moderated discussion with Julie Sweet, Chair and CEO of Accenture, and Brad Smith, President and Vice Chair of the Microsoft Corporation, on the business perspective, followed by a conversation among a panel of experts on the best way forward for AI regulation. Dr. John J. Hamre, President and CEO of CSIS, will provide welcoming remarks.

Keynote Speakers:
Brad Smith, President and Vice Chair, Microsoft Corporation
Julie Sweet, Chair and Chief Executive Officer, Accenture

Featured Speakers:
Gregory C. Allen, Director, Project on AI Governance and Senior Fellow, Strategic Technologies Program, CSIS
Mignon Clyburn, Former Commissioner, U.S. Federal Communications Commission
Karine Perset, Head of AI Unit and OECD.AI, Digital Economy Policy Division, Organisation for Economic Co-operation and Development (OECD)
Helen Toner, Director of Strategy, Center for Security and Emerging Technology, Georgetown University

This event is made possible through general support to CSIS.

A nonpartisan institution, CSIS is the top national security think tank in the world.
Visit www.csis.org to find more of our work as we bring bipartisan solutions to the world’s greatest challenges.


Warning for Samsung users as pre-installed app could let hacker control phone

Millions of owners of Samsung Galaxy smartphones face a security threat.

Those running Android versions 9 through 12 are at risk.

Researchers at Kryptowire published a report detailing how they discovered a serious vulnerability in the pre-installed Phone app across multiple models that could enable a hacker to take control of someone’s phone, Forbes reported.

Artificial intelligence is already upending geopolitics

The TechCrunch Global Affairs Project examines the increasingly intertwined relationship between the tech sector and global politics.

Geopolitical actors have always used technology to further their goals. Unlike other technologies, artificial intelligence (AI) is far more than a mere tool. We do not want to anthropomorphize AI or suggest that it has intentions of its own. It is not — yet — a moral agent. But it is fast becoming a primary determinant of our collective destiny. We believe that because of AI’s unique characteristics — and its impact on other fields, from biotechnologies to nanotechnologies — it is already threatening the foundations of global peace and security.

The rapid pace of AI development, paired with the breadth of new applications (the global AI market size is expected to grow more than ninefold from 2020 to 2028), means AI systems are being widely deployed without sufficient legal oversight or full consideration of their ethical impacts. This gap, often referred to as the pacing problem, has left legislatures and executive branches simply unable to cope.

New Linux Bug in Netfilter Firewall Module Lets Attackers Gain Root Access

A newly disclosed security flaw in the Linux kernel could be leveraged by a local adversary to gain elevated privileges on vulnerable systems and then execute arbitrary code, escape containers, or induce a kernel panic.

Tracked as CVE-2022-25636 (CVSS score: 7.8), the vulnerability impacts Linux kernel versions 5.4 through 5.6.10 and stems from a heap out-of-bounds write in the kernel's netfilter subcomponent. The issue was discovered by Nick Gregory, a senior threat researcher at Sophos.
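As a rough illustration, here is a minimal Python sketch that checks whether the running kernel's version string falls inside the affected range the report cites (5.4 through 5.6.10). This is only a heuristic: the version constants mirror the article, and distribution kernels often backport fixes without changing the upstream version number, so a version match does not prove exploitability.

import platform
import re

# Version range reported as affected by CVE-2022-25636 (per the article).
AFFECTED_MIN = (5, 4, 0)
AFFECTED_MAX = (5, 6, 10)

def kernel_version():
    """Parse (major, minor, patch) from a release string like '5.4.0-104-generic'."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", platform.release())
    if not m:
        raise ValueError("unrecognized kernel release: %r" % platform.release())
    major, minor, patch = (int(g or 0) for g in m.groups())
    return (major, minor, patch)

if __name__ == "__main__":
    v = kernel_version()
    status = "inside" if AFFECTED_MIN <= v <= AFFECTED_MAX else "outside"
    print("kernel %d.%d.%d is %s the reported affected range" % (v + (status,)))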

Warning: Objects in driverless car sensors may be closer than they appear

Researchers at Duke University have demonstrated the first attack strategy that can fool industry-standard autonomous vehicle sensors into believing nearby objects are closer (or farther) than they appear, without the attack being detected.

The research suggests that adding optical 3D capabilities or the ability to share data with nearby cars may be necessary to fully protect from attacks.
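To illustrate the kind of cross-checking the researchers suggest, here is a toy Python sketch (not the Duke team's code) that compares a LiDAR depth reading against an independent camera-derived estimate and flags large disagreement between the two modalities; the threshold and input values are illustrative assumptions.

# Toy sketch: two independent depth estimates for the same detected object
# should roughly agree; a large mismatch is treated as possible spoofing.
DISAGREEMENT_THRESHOLD_M = 2.0  # assumed tolerance, in meters

def depths_consistent(lidar_depth_m, camera_depth_m,
                      threshold_m=DISAGREEMENT_THRESHOLD_M):
    """Return True when the LiDAR and camera depth estimates roughly agree."""
    return abs(lidar_depth_m - camera_depth_m) <= threshold_m

# Example: a spoofed LiDAR return places an obstacle at 8 m while a
# stereo-camera estimate says 25 m; the mismatch gets flagged.
if not depths_consistent(lidar_depth_m=8.0, camera_depth_m=25.0):
    print("depth estimates disagree; treat this detection as suspect")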

The results will be presented Aug. 10–12 at the 2022 USENIX Security Symposium, a top venue in the field.

How GitHub Uses Machine Learning to Extend Vulnerability Code Scanning

By applying machine learning techniques to its rule-based security code scanning capabilities, GitHub hopes to extend coverage to less common vulnerability patterns by automatically inferring new rules from the existing ones.

GitHub Code Scanning uses carefully defined CodeQL analysis rules to identify potential security vulnerabilities lurking in source code.
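As a loose analogy only (not GitHub's implementation), the toy Python sketch below contrasts a hand-written rule with a crude "learned" check that flags code resembling previously confirmed findings. The token-similarity scoring is a stand-in assumption; the idea it illustrates is that examples labeled by precise rules can seed detection of patterns the rules themselves miss.

import difflib

KNOWN_SINKS = ("eval(", "exec(")  # hand-written rules: known dangerous calls

# Findings previously confirmed as vulnerable; the "learned" check
# generalizes from these. A real system would train a model instead.
CONFIRMED_VULNERABLE = ['os.system("ping " + user_input)']

def rule_based_flag(line):
    """Exact rule match, analogous to a precisely defined analysis rule."""
    return any(sink in line for sink in KNOWN_SINKS)

def learned_flag(line, threshold=0.5):
    """Similarity to confirmed findings as a crude stand-in for an ML model."""
    return any(
        difflib.SequenceMatcher(None, line, known).ratio() >= threshold
        for known in CONFIRMED_VULNERABLE
    )

candidate = 'subprocess.call("ping " + user_input, shell=True)'
print(rule_based_flag(candidate))  # False: no hand-written rule covers this sink
print(learned_flag(candidate))     # True: it resembles a confirmed finding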

1 out of 3 WordPress plugins does not receive security updates; millions of websites at risk

A report from a firm specializing in WordPress security points to a 150% increase in reported flaws in 2021 compared to the previous year, and finds that almost 30% of the vulnerabilities detected in WordPress plugins never receive security updates.

Since WordPress is the most widely used content management system (CMS) in the world, this should be a worrisome finding for tens of millions of website administrators.

According to Patchstack specialists, only 0.58% of all the flaws reported in 2021 resided in the WordPress core, while the rest affected themes and plugins created by dozens of developers. In addition, about 92% of these flaws were in free plugins, while paid plugins accounted for 8.6% of the flaws reported last year.
