May 16, 2023
Can We Stop Runaway A.I.?
Posted by Shubham Ghosh Roy in categories: robotics/AI, singularity
Technologists warn about the dangers of the so-called singularity. But can anything actually be done to prevent it?
Artificial intelligence could potentially replace 80% of jobs “in the next few years,” according to AI expert Ben Goertzel.
Goertzel, the founder and chief executive officer of SingularityNET, told France’s AFP news agency at a summit in Brazil last week that a future like that could come to fruition with the introduction of systems like OpenAI’s ChatGPT.
“I don’t think it’s a threat. I think it’s a benefit. People can find better things to do with their life than work for a living… Pretty much every job involving paperwork should be automatable,” he said.
Within a year, Karl Schwarzschild, who was “a lieutenant in the German army, by conscription, but a theoretical astronomer by profession,” as Mann puts it, heard of Einstein’s theory. He was the first person to work out a solution to Einstein’s equations, which showed that a singularity could form, and that nothing, once it got too close, could move fast enough to escape its pull.
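Schwarzschild’s solution implies a critical radius for any mass: compress the mass inside that radius and nothing, not even light, can escape. A minimal sketch of that calculation (standard textbook constants, not from the article) for the Sun:

```python
# Schwarzschild radius: r_s = 2 * G * M / c^2
# Any mass compressed inside this radius forms a black hole.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (in metres) below which escape velocity exceeds c."""
    return 2 * G * mass_kg / C**2

r_s = schwarzschild_radius(M_SUN)
print(f"Schwarzschild radius of the Sun: {r_s / 1000:.2f} km")  # ~2.95 km
```

In other words, the Sun would become a black hole only if squeezed into a sphere about three kilometres across.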
Then, in 1939, physicists Robert Oppenheimer (of Manhattan Project fame, or infamy) and Hartland Snyder tried to find out whether a star could create Schwarzschild’s impossible-sounding object. Their calculations showed that, given a big enough sphere of dust, gravity would cause the mass to collapse and form a singularity. But once World War II broke out, progress in this field stalled until the late 1950s, when people started trying to test Einstein’s theories again.
Physicist John Wheeler, thinking about the implications of a black hole, asked one of his graduate students, Jacob Bekenstein, a deceptively simple question that stumped scientists. As Mann paraphrased it: “What happens if you pour hot tea into a black hole?”
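The puzzle is that hot tea carries entropy, which seems to vanish behind the horizon. Bekenstein’s eventual answer was that a black hole must itself carry entropy, proportional to its horizon area. A rough illustration (the standard Bekenstein–Hawking formula, not from the article) for a solar-mass black hole:

```python
import math

# Bekenstein-Hawking entropy: S = k_B * c^3 * A / (4 * G * hbar),
# where A is the area of the event horizon.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J s
K_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def bh_entropy(mass_kg: float) -> float:
    """Entropy (J/K) of a Schwarzschild black hole of the given mass."""
    r_s = 2 * G * mass_kg / C**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2        # horizon area
    return K_B * C**3 * area / (4 * G * HBAR)

s = bh_entropy(M_SUN)
print(f"Entropy of a solar-mass black hole: {s / K_B:.2e} k_B")  # ~1e77 k_B
```

That enormous figure, roughly 10^77 in units of Boltzmann’s constant, dwarfs the entropy of any cup of tea you could pour in.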
New AI systems released in 2023 demonstrate remarkable properties that have taken most observers by surprise. The potential for both positive and negative AI outcomes seems to have accelerated. This leads to five responses:
1.) “Yawn” — AI has been overhyped before, and is being overhyped again now. Let’s keep our attention on more tangible issues.
2.) “Full speed ahead with more capabilities” — Let’s get to the wonderful positive outcomes of AI as soon as possible, sidestepping those small-minded would-be regulators who would stifle all the innovation out of the industry.
The Big Bang Theory is widely accepted as the explanation for the origin of the universe, but it doesn’t tell us what came before it. The idea of a universe before the Big Bang may seem impossible, but recent scientific discoveries suggest otherwise. In this article, we’ll explore the strongest evidence for a universe before the Big Bang.
According to the theory, the universe began as a singularity: a point of infinite density and temperature. But what caused the Big Bang? And what came before it? These questions have puzzled scientists and philosophers for centuries.
There are a lot of reasons why we think the technological singularity will happen sooner than 2045. With technology advancing at a rapid pace, an abundance of data, increased investment, collaboration, and potential breakthroughs, we might just wake up one day and realize that the robots have taken over. But hey, at least they’ll do our laundry.
Do you think singularity will happen sooner than 2045? Why or why not? Answer in the comment section below.
Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full “general intelligence”. And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.
These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.
The Singularity is a technological event horizon beyond which we cannot see – a moment in future history when exponential progress makes the impossible possible. This video discusses the concept of the Singularity, related technologies including AI, synthetic biology, cybernetics and quantum computing, and their potential implications.
My previous video “AI, Robots & the Future” is here:
https://www.youtube.com/watch?v=iaGIo_Viazs
The GPT phenomenon and the future of humanity in the face of advances in Artificial Intelligence.
Artificial intelligence is an increasingly present reality in our daily lives. With the rise of technologies such as Natural Language Processing (NLP) and Artificial Neural Networks (ANNs), machines can now perform tasks that were previously exclusive to humans.
One of these technologies is the Generative Pre-trained Transformer, better known as GPT, a Large Language Model (LLM) developed by OpenAI.
OpenAI was founded in San Francisco, California in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel, among others, who collectively pledged $1 billion. Musk resigned from the board in 2018, but continued to be a donor to the project.