Study shows we can be convinced an AI chatbot is trustworthy

Participants individually interacted with a conversational AI mental health chatbot for about 30 minutes to determine if they would recommend it to a friend.

As human beings, we rely on recommendations and warnings from our friends and family. They give us an added perspective on what to expect from a particular service, a product, or another person. According to a recent study, the same holds for how we trust and perceive an AI chatbot.

Researchers from Massachusetts Institute of Technology (MIT) and Arizona State University conducted a study in which they found that even though every person in their sample size of 310 people interacted with the exact same chatbot, their interactions with it were influenced by what they had been told before.

An Introduction to the Problems of AI Consciousness

Once considered a forbidden topic in the AI community, discussions around the concept of AI consciousness are now taking center stage, marking a significant shift since the current AI resurgence began over a decade ago. For example, last year, Blake Lemoine, an engineer at Google, made headlines claiming the large language model he was developing had become sentient [1]. CEOs of tech companies are now openly asked in media interviews whether they think their AI systems will ever become conscious [2,3].

Unfortunately, missing from much of the public discussion is a clear understanding of prior work on consciousness. In particular, in media interviews, engineers, AI researchers, and tech executives often implicitly define consciousness in different ways and do not have a clear sense of the philosophical difficulties surrounding consciousness or their relevance for the AI consciousness debate. Others have a hard time understanding why the possibility of AI consciousness is at all interesting relative to other problems, like the AI alignment issue.

This brief introduction is aimed at those working within the AI community who are interested in AI consciousness, but may not know much about the philosophical and scientific work behind consciousness generally or the topic of AI consciousness in particular. The aim here is to highlight key definitions and ideas from philosophy and science relevant for the debates on AI consciousness in a concise way with minimal jargon.

Google’s Plan to Give YOU a Quantum Computer By 2029

While the quantum computer race is heating up, with companies such as Atlantic Quantum Innovations joining in, Google has published a plan to make quantum computers usable for everyday consumers by 2029. The hope is to revolutionize healthcare, find room-temperature superconductors, enable advances like artificial general intelligence through quantum AI, and increase supercomputer performance a million-fold. In this video, we're exploring all of these secret projects and other quantum computing companies.

TIMESTAMPS:
00:00 CPUs, GPUs and now QPUs.
01:14 Google’s Secret Project.
04:36 Other Quantum Computer Companies.
07:17 Fastest Quantum Computer today.

#google #quantum #future

Why this AGI Leaker Disappeared — AGI Achieved Internally!?

If you're in the know, you might've heard of the AGI leaker Jimmy Apples recently. After having made several correct leaks in the past and having taken the AI community by storm by announcing AGI, he disappeared from Twitter. In this video I'll describe what happened, how credible this person is, and whether it's OpenAI or DeepMind that is in possession of AGI.
The Jimmy Apples leak document: https://docs.google.com/document/d/1K–sU97pa54xFfKggTABU9Kh…gN3Rk/edit.

TIMESTAMPS:
00:00 News from Jimmy Apples.
00:30 The AGI Leak Recap.
01:58 Why this AGI leak is real.
04:15 Why this leak is scary.
06:13 What speaks against the leak.

#neuralink #ai #elonmusk

Sam Altman Is the Oppenheimer of Our Age

This article was featured in One Great Story, New York's reading recommendation newsletter.

This past spring, Sam Altman, the 38-year-old CEO of OpenAI, sat down with Silicon Valley’s favorite Buddhist monk, Jack Kornfield. This was at Wisdom 2.0, a low-stakes event at San Francisco’s Yerba Buena Center for the Arts, a forum dedicated to merging wisdom and “the great technologies of our age.” The two men occupied huge white upholstered chairs on a dark mandala-backed stage. Even the moderator seemed confused by Altman’s presence.

“What brought you here?” he asked.

AI Identifies Brain Signals Associated With Recovering From Depression

It could soon be possible to measure changes in depression levels like we can measure blood pressure or heart rate.

In a new study, 10 patients with treatment-resistant depression were enrolled in a six-month course of deep brain stimulation (DBS) therapy. Previous results from DBS have been mixed, but help from artificial intelligence could soon change that.

Success with DBS relies on stimulating the right tissue, which means getting accurate feedback. Currently, this is based on patients reporting their mood, which can be affected by stressful life events as much as it can be the result of neurological wiring.

AI Can Predict Future Heart Attacks By Analyzing CT Scans

An artificial intelligence platform developed by an Israeli startup can reveal whether a patient is at risk of a heart attack by analyzing their routine chest CT scans.

Results from a new study testing Nanox.AI's HealthCCSng algorithm on such scans found that 58 percent of patients unknowingly had moderate to severe levels of coronary artery calcium (CAC), or plaque.

CAC is the strongest predictor of future cardiac events, and measuring it typically subjects patients to an additional costly scan that is not normally covered by insurance companies.

/* */