Unlike us, an algorithm selecting the most likely next word, or a program calculating the best move in a game of chess, isn’t choosing and can’t feel regret.
A key aspect of how we think separates us from even the most advanced AI.
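To make the contrast concrete, here is a minimal, hypothetical sketch of what “selecting the most likely next word” amounts to mechanically. The four-word vocabulary and the scores are invented for illustration; a real language model produces scores like these over tens of thousands of tokens at every step. Either way, the selection itself is a softmax followed by an argmax, with no deliberation anywhere in the loop:

```python
import numpy as np

# Hypothetical toy vocabulary and raw scores (logits), invented for
# illustration; a real model emits these over a vocabulary of tens of
# thousands of tokens at every step.
vocab = ["cat", "dog", "regret", "chess"]
logits = np.array([2.1, 0.3, -1.0, 1.5])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Selecting the most likely next word" is a pure argmax: deterministic
# arithmetic, with nothing chosen and nothing to regret afterward.
next_word = vocab[int(np.argmax(probs))]
print(next_word)  # -> "cat"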
“The human brain has 100 billion neurons, each neuron connected to 10,000 other neurons. Sitting on your shoulders is the most complicated object in the known universe.” — Michio Kaku, PhD.
Because most brain-inspired silicon chips are built on digital electronic principles, their capacity to fully imitate brain function is limited. Self-organizing brain organoids connected to microelectrode arrays (MEAs), by contrast, can be functionally adapted to form neural networks. These networks, called organoid neural networks (ONNs), show a capacity for unsupervised learning, a capability that underpins much of artificial intelligence (AI). When connected to the right hardware, these mini-organs can even be trained to recognize speech.
This brain-inspired computing hardware, or “Brainoware,” could overcome existing shortcomings in AI technologies, offering natural solutions to the time, energy-consumption, and heat-production challenges of current AI hardware. These ONNs may also have the complexity and diversity needed to mimic a human brain, which could inspire the development of more sophisticated, human-like AI systems.
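Brainoware has been described in terms of reservoir computing: the organoid acts as a fixed, high-dimensional “reservoir” that nonlinearly transforms inputs, and only a simple readout layer is trained on its recorded activity. As a rough software analogy (the simulated reservoir, the toy data, and every parameter value below are invented stand-ins, not the biological system), here is a minimal reservoir-computing sketch:

```python
import numpy as np

# Minimal reservoir-computing sketch. In a Brainoware-style setup the
# organoid itself plays the role of this fixed random "reservoir";
# only the linear readout at the end is trained.
rng = np.random.default_rng(0)
n_inputs, n_reservoir, n_classes = 8, 200, 4

W_in = rng.normal(scale=0.5, size=(n_reservoir, n_inputs))   # fixed, untrained
W_res = rng.normal(size=(n_reservoir, n_reservoir))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))            # scale for stability

def reservoir_states(X):
    """Run each input sequence through the fixed reservoir; return final states."""
    states = []
    for seq in X:                      # seq: (timesteps, n_inputs)
        h = np.zeros(n_reservoir)
        for x_t in seq:
            h = np.tanh(W_in @ x_t + W_res @ h)
        states.append(h)
    return np.array(states)

# Toy stand-in data: 100 random sequences with random class labels.
X = rng.normal(size=(100, 10, n_inputs))
y = rng.integers(0, n_classes, size=100)

H = reservoir_states(X)                # (100, n_reservoir) reservoir responses
Y = np.eye(n_classes)[y]               # one-hot targets
# Train ONLY the readout, via closed-form ridge regression.
W_out = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_reservoir), H.T @ Y)

preds = (H @ W_out).argmax(axis=1)
print("train accuracy:", (preds == y).mean())
```

The point of the analogy is where the training cost lands: all learning happens in the tiny linear readout, while the expensive nonlinear transformation is delegated to the reservoir, which in Brainoware is living tissue rather than silicon.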
Bad things can happen when you hallucinate. If you are human, you can end up doing things like putting your underwear in the oven. If you happen to be a chatbot or some other type of artificial intelligence (AI) tool, you can spew out false and misleading information, which—depending on the info—could affect many, many people in a bad-for-your-health-and-well-being type of way. And this latter type of hallucinating has become increasingly common in 2023 with the continuing proliferation of AI. That’s why Dictionary.com has an AI-specific definition of “hallucinate” and has named the word as its 2023 Word of the Year.
Dictionary.com noticed a 46% jump in dictionary lookups for the word “hallucinate” from 2022 to 2023, with a comparable increase in searches for “hallucination.” Meanwhile, there was a 62% jump in searches for AI-related words like “chatbot,” “GPT,” “generative AI,” and “LLM.” So the increase in searches for “hallucinate” is likely due more to the following AI-specific definition of the word from Dictionary.com than to the traditional human definition:
hallucinate [ huh-loo-suh-neyt ] verb. (of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
Here’s a non-AI-generated news flash: AI can lie, just like humans. Not all AI, of course. But AI tools can be programmed to act like little political animals or snake oil salespeople, generating false information while making it seem like it’s all about facts. The difference is that AI can churn out such misinformation and disinformation at far greater speed. For example, a study published in JAMA Internal Medicine last month showed how OpenAI’s GPT Playground could generate 102 different blog articles “that contained more than 17,000 words of disinformation related to vaccines and vaping” within just 65 minutes. Yes, just 65 minutes. That’s about how long it takes to watch the TV show 60 Minutes and then make a quick, uncomplicated bathroom trip that doesn’t involve texting on the toilet. Moreover, the study demonstrated how “additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes.”
WASHINGTON, Dec 14 (Reuters) — Twenty-eight healthcare companies, including CVS Health (CVS.N), are signing U.S. President Joe Biden’s voluntary commitments aimed at ensuring the safe development of artificial intelligence (AI), a White House official said on Thursday.
The commitments by healthcare providers and payers follow those of 15 leading AI companies, including Google, OpenAI and OpenAI partner Microsoft (MSFT.O), to develop AI models responsibly.
The Biden administration is pushing to set parameters around AI as it makes rapid gains in capability and popularity while regulation remains limited.