Analysis shows that most security risk sits in long-tail open-source images: 98% of CVEs are found outside the top projects, and critical flaws are fixed in under 20 hours.
In future high-tech industries, such as high-speed optical computing for massive AI, quantum cryptographic communication, and ultra-high-resolution augmented reality (AR) displays, nanolasers—which process information using light—are gaining significant attention as core components for next-generation semiconductors.
A research team has proposed a new manufacturing technology capable of placing nanolasers at high density on semiconductor chips, processing information in spaces thinner than a human hair.
A joint research team led by Professor Ji Tae Kim of the Department of Mechanical Engineering and Professor Junsuk Rho of POSTECH has developed an ultra-fine 3D printing technology capable of creating “vertical nanolasers,” a key component for ultra-high-density optical integrated circuits.
Security researchers found two Chrome extensions with 900,000 installs secretly collecting ChatGPT and DeepSeek chats and browsing data.
Attack Surface Management (ASM) tools promise reduced risk. What they usually deliver is more information.
Security teams deploy ASM, asset inventories grow, alerts start flowing, and dashboards fill up. There is visible activity and measurable output. But when leadership asks a simple question, “Is this reducing incidents?” the answer is often unclear.
This gap between effort and outcome is the core ROI problem in attack surface management, especially when ROI is measured primarily through asset counts instead of risk reduction.
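To make the measurement gap concrete, here is a minimal sketch contrasting the two metrics. All field names and figures are illustrative assumptions, not data from any ASM product:

```python
# Hypothetical sketch: measuring ASM ROI by risk reduction rather than
# by asset counts. Field names and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Quarter:
    assets_discovered: int      # what dashboards usually report (activity)
    exploitable_exposures: int  # internet-facing, exploitable issues
    incidents: int              # confirmed security incidents

def risk_reduction(before: Quarter, after: Quarter) -> float:
    """Fractional drop in exploitable exposures -- the outcome metric."""
    if before.exploitable_exposures == 0:
        return 0.0
    return 1 - after.exploitable_exposures / before.exploitable_exposures

q1 = Quarter(assets_discovered=4_000, exploitable_exposures=120, incidents=9)
q2 = Quarter(assets_discovered=9_500, exploitable_exposures=48, incidents=4)

# Asset counts more than doubled (visible activity), but the signal
# leadership asks about is the 60% drop in exploitable exposures.
print(f"{risk_reduction(q1, q2):.0%}")
```

The design point: asset counts can rise while risk falls, so the two metrics answer different questions, and only the second one addresses “is this reducing incidents?”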
Over 10,000 Fortinet firewalls are still exposed online and vulnerable to ongoing attacks exploiting a five-year-old critical two-factor authentication (2FA) bypass vulnerability.
Fortinet released FortiOS versions 6.4.1, 6.2.4, and 6.0.10 in July 2020 to address this flaw (tracked as CVE-2020-12812) and advised admins who couldn’t immediately patch to turn off username case sensitivity to block 2FA bypass attempts targeting their devices.
This improper-authentication flaw (rated 9.8/10 in severity) was found in FortiGate SSL VPN and allows attackers to log in to unpatched firewalls without being prompted for the second authentication factor (FortiToken) when the username’s case is changed.
This change will revolutionize leadership, governance, and workforce development. Successful firms will invest in technology and human capital by reskilling personnel, redefining roles, and fostering a culture of human-machine collaboration.
The Imperative of Strategy
Artificial intelligence is not preordained; it is a tool shaped by human choices. How we execute, regulate, and protect AI will determine its impact on industries, economies, and society. I emphasized in Inside Cyber that technology convergence—particularly the amalgamation of AI with 5G, IoT, distributed architectures, and ultimately quantum computing—will augment both potential and hazards.
The issue at hand is not if AI will transform industries—it has already done so. The essential question is whether we can guide this change to enhance security, resilience, and human well-being. Individuals who interact with AI strategically, ethically, and with a long-term perspective will gain a competitive advantage and foster the advancement of a more innovative and secure future.
IBM has disclosed details of a critical security flaw in API Connect that could allow attackers to gain remote access to the application.
The vulnerability, tracked as CVE-2025-13915, is rated 9.8 out of a maximum of 10.0 on the CVSS scoring system. It has been described as an authentication bypass flaw.
“IBM API Connect could allow a remote attacker to bypass authentication mechanisms and gain unauthorized access to the application,” the tech giant said in a bulletin.
In December 2024, the popular Ultralytics AI library was compromised, installing malicious code that hijacked system resources for cryptocurrency mining. In August 2025, malicious Nx packages leaked 2,349 GitHub, cloud, and AI credentials. Throughout 2024, ChatGPT vulnerabilities allowed unauthorized extraction of user data from AI memory.
The result: 23.77 million secrets were leaked through AI systems in 2024 alone, a 25% increase from the previous year.
Here’s what these incidents have in common: The compromised organizations had comprehensive security programs. They passed audits. They met compliance requirements. Their security frameworks simply weren’t built for AI threats.
Tiny tubes of carbon that emit single photons from just one point along their length have been made in a deterministic manner by RIKEN researchers. Such carbon nanotubes could form the basis of future quantum technologies based on light.
Light is currently used to ferry data over long distances via optical fibers. But harnessing its quantum nature could offer several benefits, including unprecedented security, since any interception by a third party can be detected.
Such quantum communication technology requires light sources that emit one photon at a time. Several systems can realize this, and of these, carbon nanotubes are among the most promising.