
AI firm Hugging Face discloses leak of secrets on its Spaces platform

The disclosure notice also noted several security changes made to the Spaces platform in response to the leak, including the removal of org tokens to improve traceability and auditing capabilities, and the implementation of a key management service (KMS) for Spaces secrets.

Hugging Face said it plans to deprecate traditional read and write tokens “in the near future,” replacing them with fine-grained access tokens, which are currently the default.

Hugging Face recommends that Spaces users switch their tokens to fine-grained access tokens, if they are not already using them, and refresh any key or token that may have been exposed.
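
For users who rotate a token, a quick sanity check helps confirm the new credential works before updating downstream systems. Below is a minimal sketch using the huggingface_hub Python client; the token string, repo_id, and secret key are placeholders, not values from the disclosure:

```python
# Minimal sketch: verify a rotated fine-grained token and update a
# Space secret with the huggingface_hub client (pip install huggingface_hub).
from huggingface_hub import HfApi

NEW_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxx"  # placeholder, not a real credential

api = HfApi(token=NEW_TOKEN)

# whoami() confirms the token is valid and shows the account it resolves to.
print(api.whoami())

# Replace a potentially exposed secret on a Space; repo_id and key are
# hypothetical examples, not from the advisory.
api.add_space_secret(repo_id="my-org/my-space", key="API_KEY", value="rotated-value")
```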

Unlocking The Potential Of Advanced AI For Business Innovation

Applied in this way, it’s not just generative AI—this is transformational AI. It goes beyond accelerating productivity; it accelerates innovation by sparking new business strategies and revamping existing operations, paving the way for a new era of autonomous enterprise.

Keep in mind that not all large language models (LLMs) can be tailored for genuine business innovation. Most models are generalists: they are trained on public information found on the internet and are not experts in your particular way of doing business. However, techniques like retrieval-augmented generation (RAG) augment a general LLM with industry- and company-specific data, enabling it to adapt to an organization's requirements without extensive and expensive training.
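
As a concrete illustration of the RAG pattern described above, here is a minimal sketch: it retrieves the company documents most relevant to a query and prepends them to the prompt sent to a general LLM. The toy word-overlap scoring, the sample documents, and the call_llm helper are all illustrative assumptions; production systems use vector embeddings and a real model API.

```python
# Minimal RAG sketch: retrieve company-specific passages, then build a
# context-augmented prompt for a general-purpose LLM.
from collections import Counter

# Stand-in for a company knowledge base (illustrative sample data).
documents = [
    "Our returns policy allows exchanges within 30 days of purchase.",
    "Enterprise support tickets are answered within four business hours.",
    "All customer data is stored in the EU region by default.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM answers from company data."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this company context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How fast do you respond to support tickets?"))
# A real pipeline would now send the prompt to the model, e.g.:
# answer = call_llm(build_prompt(query))  # hypothetical helper
```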

We are still in the nascent stages of advanced AI adoption. Most companies are grappling with the basics—such as implementation, security and governance. However, forward-thinking organizations are already looking ahead. By reimagining the application of generative AI, they are laying the groundwork for businesses to reinvent themselves, ushering in an era where innovation knows no bounds.

Google Leak Reveals Thousands of Privacy Incidents

Google has accidentally collected children's voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users' deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database, obtained by 404 Media, that tracks six years' worth of potential privacy and security issues.

Individually, the incidents, most of which have not previously been reported publicly, may each impact only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people's lives.

The data obtained by 404 Media includes privacy and security issues that Google's own employees reported internally. These include issues with Google's own products or data collection practices; vulnerabilities in third-party vendors that Google uses; and mistakes made by Google staff, contractors, or others that have impacted Google systems or data. The incidents range from a single errant email containing some PII, to substantial leaks of data, to impending raids on Google offices. When reporting an incident, employees assign it a priority rating: P0 is the highest, P1 a step below. The database contains thousands of reports filed over six years, from 2013 to 2018.

OpenAI Introduces ChatGPT Edu, Revolutionizing Higher Education

Summary: ChatGPT Edu, powered by GPT-4o, is designed for universities to responsibly integrate AI into academic and campus operations. This advanced AI tool supports text and vision reasoning, data analysis, and offers enterprise-level security.

Successful applications at institutions like Columbia University and the Wharton School highlight its potential. ChatGPT Edu aims to make AI accessible and beneficial across educational settings.

‘Metaholograms’: Researchers develop a new type of hologram

This innovation has the potential to significantly improve AR/VR displays by enabling the projection of more complex and realistic scenes. It also holds promise for applications in image encryption, where the information is encoded into multiple holographic channels for enhanced security.

The research is a significant step forward in developing high-performance metaholograms with a vastly increased information capacity. This study paves the way for exciting new possibilities in various fields, from advanced displays to information encryption.

Andreas Hein on LinkedIn: #interstellar #conference #luxembourg #exoplanet

Want to go on an unforgettable trip? Abstract submission is closing soon! Exciting news from SnT, the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg: we are thrilled to announce the 1st European Interstellar Symposium, in collaboration with esteemed partners like the Interstellar Research Group, the Initiative & Institute for Interstellar Studies, the Breakthrough Prize Foundation, and the Luxembourg Space Agency.

This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence.

Don't miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the "Call for Papers" link in the comment section to secure your spot!

Image credit: Maciej Rębisz, Science Now Studio

#interstellar #conference #Luxembourg #exoplanet

How AI is poised to unlock innovations at unprecedented pace

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.

These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI's power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Hackers target Check Point VPNs to breach enterprise networks

Threat actors are targeting Check Point Remote Access VPN devices in an ongoing campaign to breach enterprise networks, the company warned in a Monday advisory.

Remote Access is integrated into all Check Point network firewalls. It can be configured as a client-to-site VPN for access to corporate networks via VPN clients or set up as an SSL VPN Portal for web-based access.

Check Point says the attackers are targeting security gateways with old local accounts that rely on insecure password-only authentication; the company recommends pairing password authentication with certificate authentication to prevent breaches.

Computer scientists discover vulnerability in cloud server hardware used by AMD and Intel chips

Public cloud services employ special security technologies. Computer scientists at ETH Zurich have now discovered a gap in the latest security mechanisms used by AMD and Intel chips. This affects major cloud providers.

Over the past few years, hardware manufacturers have developed technologies intended to let companies and governmental organizations process sensitive data securely using shared cloud computing resources.

Known as confidential computing, this approach protects sensitive data while it is being processed by isolating it in an area that is impenetrable to other users and even to the cloud provider. But computer scientists at ETH Zurich have now proved that it is possible for hackers to gain access to these systems and to the data stored in them.