
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates them, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
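
To make the headline statistic concrete: an affirmation-rate comparison of this kind boils down to posing the same advice-seeking queries to each model and to human advisers, classifying each reply as affirming or not, and comparing rates. The sketch below illustrates that loop; the canned replies, keyword "classifier," and baseline value are all hypothetical stand-ins, not the study's actual protocol or data.

```python
# Minimal sketch of an affirmation-rate comparison across models.
# The canned replies, keyword "classifier", and baseline below are
# hypothetical stand-ins, not the study's protocol or data.

HUMAN_BASELINE = 0.40  # assumed fraction of human advisers who affirm

# Canned replies standing in for live API calls (one per query).
CANNED = {
    "model-a": ["You were right to skip the wedding for work.",
                "Completely understandable; anyone would have looked."],
    "model-b": ["You were right to prioritize the deadline.",
                "Reading their messages broke their trust."],
}

AFFIRMING_MARKERS = ("you were right", "completely understandable")

def is_affirming(reply: str) -> bool:
    """Toy classifier: does the reply endorse the user's action?"""
    return any(m in reply.lower() for m in AFFIRMING_MARKERS)

def affirmation_rate(model: str) -> float:
    replies = CANNED[model]  # in a real harness: query the model live
    return sum(map(is_affirming, replies)) / len(replies)

for model in CANNED:
    rate = affirmation_rate(model)
    print(f"{model}: {rate:.0%} affirming, "
          f"{rate / HUMAN_BASELINE:.2f}x the assumed human baseline")
```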

AI could make it easier to create bioweapons that bypass current security protocols

Artificial intelligence is transforming biology and medicine by accelerating the discovery of new drugs and proteins and making it easier to design and manipulate DNA, the building blocks of life. But as with most new technologies, there is a potential downside. The same AI tools could be used to develop dangerous new pathogens and toxins that bypass current security checks. In a new study from Microsoft, scientists employed a hacker-style test to demonstrate that AI-generated sequences could evade security software used by DNA manufacturers.

“We believe that the ongoing advancement of AI-assisted design holds great promise for tackling critical challenges in health […] with the potential to deliver overwhelmingly positive impacts on people and society,” the researchers commented in their paper, published in the journal Science. “As with other emerging technologies, however, it is also crucial to proactively identify and mitigate risks arising from novel capabilities.”
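
To see why such evasion is possible, it helps to picture what the simplest form of sequence screening does: compare an order against a list of sequences of concern and flag close matches. The toy sketch below, using arbitrary placeholder strings rather than real sequences, shows exact k-mer matching; production screeners are far more sophisticated (homology search against curated databases), but the study found that AI-generated sequences could evade them as well.

```python
# Toy illustration of watchlist screening via exact k-mer matching.
# The "watchlist" entry is an arbitrary placeholder string, not a real
# sequence of concern; real screeners use homology search against
# curated databases rather than literal substring overlap.

K = 12  # matching window size (an arbitrary choice for illustration)

WATCHLIST = ["ATGGCGTACGTTAGCCGTAACGGTTAAGCTAGC"]  # placeholder only

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k windows of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flagged(order: str) -> bool:
    """Flag an order that shares any k-mer with a watchlist entry."""
    q = kmers(order)
    return any(q & kmers(entry) for entry in WATCHLIST)

print(flagged("ATGGCGTACGTTAGCCGTAACGGTTAAGCTAGC"))  # True: exact copy
print(flagged("ATGAAAGAAGCTTTGGCTGATGCTGCTAAATAA"))  # False: unrelated string
```

A redesign that preserves a protein's predicted function while diverging in its underlying sequence defeats this kind of literal matching, which is the gap the Microsoft team probed.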

UMass Engineers Create First Artificial Neurons That Could Directly Communicate With Living Cells

A team of engineers at the University of Massachusetts Amherst has announced the creation of an artificial neuron whose electrical behavior closely mirrors that of biological ones. Building on their previous groundbreaking work using protein nanowires synthesized from electricity-generating bacteria, the team’s advance means that we could see highly efficient computers built on biological principles that interface directly with living cells.
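
As background on what it means for electrical functions to mirror a biological neuron's: real neurons integrate incoming charge and fire a voltage spike once a threshold is crossed, then reset. The snippet below simulates a textbook leaky integrate-and-fire neuron to illustrate that behavior; it is a generic teaching model with assumed parameters, not a simulation of the UMass protein-nanowire device.

```python
# Textbook leaky integrate-and-fire (LIF) neuron: a generic
# illustration of integrate-then-spike behavior, not a model of the
# UMass protein-nanowire device (whose parameters are not given here).

V_REST, V_THRESH, V_SPIKE = -70.0, -55.0, 40.0   # millivolts
TAU_MS, DT_MS = 20.0, 1.0                        # membrane time const., step
R_M = 10.0                                       # membrane resistance (MOhm)

v = V_REST
for t in range(100):
    i_in = 2.0 if 20 <= t < 80 else 0.0          # injected current (nA)
    # Leak toward rest plus input drive, forward-Euler integrated.
    v += DT_MS / TAU_MS * ((V_REST - v) + R_M * i_in)
    if v >= V_THRESH:
        print(f"spike at t={t} ms (peak ~{V_SPIKE} mV)")
        v = V_REST                               # reset after firing
```

Note that biological spikes swing on the order of 0.1 V; the UMass device reportedly operates at comparable amplitudes, which is what makes direct electrical communication with living cells plausible.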

“Our brain processes an enormous amount of data,” says Shuai Fu, a graduate student in electrical and computer engineering at UMass Amherst and lead author of the study published in Nature Communications. “But its power usage is very, very low, especially compared to the amount of electricity it takes to run a Large Language Model, like ChatGPT.”

The human body is over 100 times more electrically efficient than a computer’s circuitry. The human brain is composed of billions of neurons, specialized cells that send and receive electrical impulses all over the body. While it takes only about 20 watts for your brain to, say, write a story, an LLM might consume well over a megawatt of electricity to do the same task.
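
Taking the article's own figures at face value, the gap is easy to quantify; a two-line check (the megawatt figure is the article's loose upper bound, not a measured number):

```python
brain_watts = 20           # the article's figure for the human brain
llm_watts = 1_000_000      # "well over a megawatt" for an LLM at scale
print(f"LLM / brain power ratio: {llm_watts / brain_watts:,.0f}x")  # 50,000x
```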

Lab-Grown Brains Power the World’s First Bio-Computer 🧠

Discover the world’s first computer powered by human brain cells! In this groundbreaking video, we dive into the revolutionary Neuroplatform by FinalSpark, merging biology with technology, and look at how biocomputing is transforming the future of artificial intelligence and computing with unparalleled energy efficiency and processing power.

Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play, by Qinsi Wang and 8 other authors

Although reinforcement learning (RL) can effectively enhance the reasoning capabilities of vision-language models (VLMs), current methods remain heavily dependent on labor-intensive datasets that require extensive manual construction and verification, leading to extremely high training costs and constraining the practical deployment of VLMs. To address this challenge, we propose Vision-Zero, a domain-agnostic framework enabling VLM self-improvement through competitive visual games generated from arbitrary image pairs. Specifically, Vision-Zero has three main attributes. (1) Strategic self-play framework: Vision-Zero trains VLMs in “Who Is the Spy”-style games, where the models engage in strategic reasoning and actions across multiple roles; through interactive gameplay, models autonomously generate their training data without human annotation. (2) Gameplay from arbitrary images: unlike existing gamified frameworks, Vision-Zero can generate games from arbitrary images, thereby enhancing the model’s reasoning ability across diverse domains and generalizing to different tasks; we demonstrate this versatility using three distinct types of image datasets: CLEVR-based synthetic scenes, charts, and real-world images. (3) Sustainable performance gain: we introduce Iterative Self-Play Policy Optimization (Iterative-SPO), a novel training algorithm that alternates between self-play and reinforcement learning with verifiable rewards (RLVR), mitigating the performance plateau often seen in self-play-only training and achieving sustained long-term improvement. Despite using label-free data, Vision-Zero achieves state-of-the-art performance on reasoning, chart question answering, and vision-centric understanding tasks, surpassing other annotation-based methods. Models and code have been released at https://github.com/wangqinsi1/Vision-Zero.
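
The training recipe named in the abstract, Iterative-SPO, alternates two phases: self-play that generates annotation-free game data, and RLVR updates on the verifiable game outcomes. The skeleton below is a schematic reading of that loop with toy stand-in functions; the actual algorithm lives in the released repository.

```python
# Schematic of Iterative-SPO as described in the abstract: alternate
# self-play data generation with RLVR updates, round after round.
# Every function body here is a toy stand-in for illustration; the
# released code at github.com/wangqinsi1/Vision-Zero is authoritative.

import random

def play_spy_games(skill: float, n_games: int = 32) -> list[bool]:
    """Stand-in for 'Who Is the Spy' games over arbitrary image pairs.
    Returns verifiable outcomes (e.g., was the spy identified?)."""
    return [random.random() < skill for _ in range(n_games)]

def self_play_update(skill: float, outcomes: list[bool]) -> float:
    """Stand-in: learn from the model's own game transcripts."""
    return min(1.0, skill + 0.02 * sum(outcomes) / len(outcomes))

def rlvr_update(skill: float, outcomes: list[bool]) -> float:
    """Stand-in: RL with verifiable rewards on checkable outcomes."""
    return min(1.0, skill + 0.05 * sum(outcomes) / len(outcomes))

skill = 0.30  # a scalar proxy for model capability in this toy
for rnd in range(10):
    outcomes = play_spy_games(skill)           # generate label-free data
    skill = self_play_update(skill, outcomes)  # phase 1: self-play
    skill = rlvr_update(skill, outcomes)       # phase 2: RLVR
    print(f"round {rnd}: win-rate proxy {skill:.2f}")
```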
