Google’s Identity Check locks sensitive Android settings behind biometrics, enhancing security outside trusted locations.
Category: privacy
If you’ve recently scrolled through Instagram, you’ve probably noticed it: users posting AI-generated images of their lives or chuckling over a brutal feed roast by ChatGPT. What started as an innocent prompt – “Ask ChatGPT to draw what your life looks like based on what it knows about you” – has gone viral, inviting friends, followers, and even ChatGPT itself to get a peek into our most personal details. It’s fun, often eerily accurate, and, yes, a little unnerving.
The trend that started it all
A while ago, Instagram’s “Add Yours” sticker spurred the popular trend “Ask ChatGPT to roast your feed in one paragraph.” What followed were thousands of users clamouring to see the AI’s take on their profiles. ChatGPT didn’t disappoint – delivering razor-sharp observations on everything from overused vacation spots to the endless brunch photos and quirky captions, blending humour with a dash of truth. The playful roasting felt oddly familiar, almost like a best friend’s inside joke.
Microsoft delays Windows Copilot+ Recall feature to enhance privacy, with a new release slated for December.
NAVWAR awarded the order on behalf of the Navy’s Program Executive Office for Command, Control, Communication, Computers, and Intelligence (PEO C4I) in San Diego.
The AN/USC-61(C) is a maritime software-defined radio (SDR) that has become standard for the U.S. military. The compact, multi-channel Digital Modular Radio (DMR) provides several different waveforms and multi-level information security for voice and data communications.
Brazil bans Meta from using personal data for AI training, citing privacy concerns and risks to children. Meta has 5 days to comply or face fines.
Back in June, YouTube quietly made a subtle but significant policy change that benefits users: under YouTube’s privacy request process, people can now ask for the removal of AI-made videos that simulate their appearance or voice.
First spotted by TechCrunch, the revised policy encourages affected parties to request the removal of AI-generated content directly on privacy grounds, rather than for being, say, misleading or fake. YouTube specifies that claims must be made by the affected individual or an authorized representative. Exceptions include parents or legal guardians acting on behalf of minors, legal representatives, and close family members filing on behalf of deceased individuals.
According to the new policy, when a privacy complaint is filed, YouTube will notify the uploader of the potential violation and give them an opportunity to remove or edit the private information in their video. YouTube may, at its discretion, grant the uploader 48 hours to use the Trim or Blur tools in YouTube Studio to cut the offending footage. If the uploader removes the video altogether, the complaint is closed; if the potential privacy violation remains after those 48 hours, the YouTube Team will review the complaint.
Large language models have emerged as a transformative technology, revolutionizing AI with their ability to generate human-like text with unprecedented fluency and apparent comprehension. Trained on vast datasets of human-generated text, LLMs have unlocked innovations across industries, from content creation and language translation to data analytics and code generation. Recent developments, like OpenAI’s GPT-4o, showcase multimodal capabilities, processing text, vision, and audio inputs in a single neural network.
Despite their potential for driving productivity and enabling new forms of human-machine collaboration, LLMs are still at a nascent stage. They face limitations such as factual inaccuracies, biases inherited from training data, a lack of common-sense reasoning, and data privacy concerns. Techniques like retrieval-augmented generation aim to ground LLM outputs in external knowledge and improve accuracy.
To explore these issues, I spoke with Amir Feizpour, CEO and founder of AI Science, an expert-in-the-loop business workflow automation platform. We discussed the transformative impacts, applications, risks, and challenges of LLMs across different sectors, as well as the implications for startups in this space.
Microsoft’s AI-powered Recall feature sparked major privacy concerns. Now, it’s becoming an opt-in.