Archive for the ‘existential risks’ category: Page 11

May 13, 2023

Advanced Life Should Have Already Peaked Billions of Years Ago

Posted in categories: alien life, existential risks, information science

Did humanity miss the party? Are SETI, the Drake Equation, and the Fermi Paradox all just artifacts of our ignorance about Advanced Life in the Universe? And if we are wrong, how would we know?

A new study focusing on black holes and their powerful effect on star formation suggests that we, as advanced life, might be relics from a bygone age in the Universe.

Universe Today readers are familiar with SETI, the Drake Equation, and the Fermi Paradox. All three are different ways that humanity grapples with its situation. They’re all related to the Great Question: Are We Alone? We ask these questions as if humanity woke up on this planet, looked around the neighbourhood, and wondered where everyone else was. Which is kind of what has happened.
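For readers who want to put numbers to the “Are We Alone?” question, here is a minimal sketch of the Drake Equation in Python. The equation’s structure (N = R* · fp · ne · fl · fi · fc · L) is standard, but every parameter value below is an illustrative placeholder, not a figure from the article or the study it describes.

```python
# Minimal sketch of the Drake Equation: N = R* * fp * ne * fl * fi * fc * L.
# Every value below is an illustrative placeholder, not a measured quantity.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.0,    # average rate of star formation (stars per year)
    f_p=0.5,       # fraction of stars that host planets
    n_e=1.0,       # potentially habitable planets per planet-hosting star
    f_l=0.1,       # fraction of habitable planets that develop life
    f_i=0.01,      # fraction of life-bearing planets that develop intelligence
    f_c=0.01,      # fraction of intelligent species that become detectable
    lifetime=1e4,  # years a civilization remains detectable
)
print(f"Estimated detectable civilizations: {n:.4f}")
```

Because several of these factors are unknown to within many orders of magnitude, plausible inputs yield anything from a galaxy teeming with neighbours to one in which we are effectively alone, which is exactly the tension the Fermi Paradox captures.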

Apr 30, 2023

The ‘Don’t Look Up’ Thinking That Could Doom Us With AI

Posted in categories: asteroid/comet impacts, existential risks, robotics/AI

Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least a 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.

Sadly, I now feel that we’re living the movie Don’t Look Up for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least a 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

Apr 29, 2023

We Solved The Fermi Paradox

Posted in category: existential risks

It’s possibly the most famous question in all of science — where is everyone? Join us today for a deep dive into the Fermi Paradox.

Apr 28, 2023

Huge cache of mammal genomes offers fresh insights on human evolution

Posted in categories: asteroid/comet impacts, biotech/medical, evolution, existential risks, genetics

Using Zoonomia’s data, researchers have also constructed a phylogenetic tree that estimates when each mammalian species diverged from its ancestors [5]. This analysis lends support to the hypothesis that mammals had already started evolutionarily diverging before Earth was struck by the asteroid that killed the dinosaurs about 66 million years ago — but that they diverged much more rapidly afterwards.

Only the beginning

The Zoonomia Project is just one of dozens of efforts to sequence animal genomes. Another large effort is the Vertebrate Genomes Project (VGP), which aims to generate genomes for all of the roughly 71,000 living vertebrate species, including mammals, reptiles, fish, birds and amphibians. Although the two projects are independent of one another, many researchers are part of both, says Haussler, who is a trustee of the VGP.

Apr 28, 2023

There Is No A.I.

Posted in categories: existential risks, robotics/AI

As a computer scientist, I don’t like the term “A.I.” In fact, I think it’s misleading—maybe even a little dangerous. Everybody’s already using the term, and it might seem a little late in the day to be arguing about it. But we’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.

The term artificial intelligence has a long history—it was coined in the nineteen-fifties, in the early days of computers. More recently, computer scientists have grown up on movies like The Terminator and The Matrix, and on characters like Commander Data, from Star Trek: The Next Generation. These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.

What’s striking, though, is that many of the people who are pursuing the A.I. dream also worry that it might mean doomsday for mankind. It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon. In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I.

Apr 27, 2023

Genomes from 240 mammalian species reveal what makes the human genome unique

Posted in categories: biotech/medical, evolution, existential risks, genetics, health

Over the past 100 million years, mammals have adapted to nearly every environment on Earth. Scientists with the Zoonomia Project have been cataloging the diversity in mammalian genomes by comparing DNA sequences from 240 species that exist today, from the aardvark and the African savanna elephant to the yellow-spotted rock hyrax and the zebu.

This week, in several papers in a special issue of Science, the Zoonomia team has demonstrated how comparative genomics can not only shed light on how certain species achieve extraordinary feats, but also help scientists better understand the parts of our genome that are functional and how they might influence health and disease.

In the new studies, the researchers identified regions of the genomes, sometimes just single letters of DNA, that are most conserved, or unchanged, across mammalian species and millions of years of evolution—regions that are likely biologically important. They also found part of the genetic basis for uncommon mammalian traits such as the ability to hibernate or sniff out faint scents from miles away. And they pinpointed species that may be particularly susceptible to extinction, as well as genetic variants that are more likely to play causal roles in rare and common human diseases.
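To make the idea of “most conserved” positions concrete, here is a toy sketch that scores each column of a small multi-species alignment by the fraction of species sharing the most common base. The sequences, the species chosen, and the 90% cutoff are invented for illustration only; this is not the Zoonomia Project’s actual pipeline or data.

```python
from collections import Counter

# Toy multi-species alignment (invented sequences, for illustration only).
alignment = {
    "human":    "ACGTTACGA",
    "mouse":    "ACGTTACGT",
    "elephant": "ACGTTTCGA",
    "zebu":     "ACGTTACGA",
}

def conservation_scores(aln):
    """Fraction of species sharing the most common base at each aligned column."""
    seqs = list(aln.values())
    n_species = len(seqs)
    scores = []
    for column in zip(*seqs):  # iterate over alignment columns
        most_common_count = Counter(column).most_common(1)[0][1]
        scores.append(most_common_count / n_species)
    return scores

scores = conservation_scores(alignment)
highly_conserved = [i for i, s in enumerate(scores) if s >= 0.9]  # assumed cutoff
print("Per-column conservation:", [round(s, 2) for s in scores])
print("Highly conserved positions:", highly_conserved)
```

Real comparative-genomics pipelines use phylogeny-aware scores such as phyloP rather than simple column counts, but the intuition is the same: positions that almost never change across 240 species and roughly 100 million years of evolution are probably doing something biologically important.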

Apr 26, 2023

Researchers Took The First Pics Of DEATH — It Is Actually PALE BLUE And Looks Nice

Posted in categories: biological, existential risks

Even in today’s well-researched world, death remains one of the great unknowns. British scientists set out to investigate it… The color of death, they found, is a faint blue.

British scientists got a firsthand look at what dying looks like: in the experiment, they closely observed a worm. As death progresses, cells perish, setting off a chain reaction that destroys cell connections and leads to the organism’s demise.

Apr 24, 2023

The biggest fear with AI is fear itself | De Kai | TEDxSanMigueldeAllende

Posted in categories: ethics, existential risks, media & arts, robotics/AI

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it.

De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of the AI ethics think tank The Future Society. De Kai is also the creator of one of Hong Kong’s best-known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Apr 21, 2023

Chandra X-ray Observatory identifies new stellar danger to planets

Posted in categories: cosmology, existential risks

Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.

This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure may trigger an extinction event on the planet.

A new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath—mostly from NASA’s Chandra X-ray Observatory, Swift and NuSTAR missions, and ESA’s XMM-Newton—which show that planets can be subjected to lethal doses of radiation from supernovae located as much as about 160 light-years away. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.
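For a rough sense of why the roughly 160-light-year figure matters, the back-of-the-envelope sketch below spreads an assumed X-ray energy output over a sphere and evaluates the resulting fluence at several distances. The 10^47 erg energy value is an assumption chosen for illustration, not a number taken from the study.

```python
import math

# Back-of-the-envelope inverse-square estimate of X-ray fluence versus distance.
# The total X-ray energy below is an assumed, illustrative value.
LIGHT_YEAR_CM = 9.461e17     # one light-year in centimetres
XRAY_ENERGY_ERG = 1e47       # assumed total X-ray output of the interaction (erg)

def xray_fluence(distance_ly):
    """X-ray fluence in erg/cm^2 at a given distance, assuming isotropic emission."""
    r_cm = distance_ly * LIGHT_YEAR_CM
    return XRAY_ENERGY_ERG / (4.0 * math.pi * r_cm ** 2)

for d in (10, 50, 100, 160):
    print(f"{d:>4} light-years: {xray_fluence(d):.2e} erg/cm^2")
```

Because fluence falls off with the square of distance, a planet half as far away receives four times the dose, which is why only relatively nearby events of this kind would threaten an Earth-like biosphere.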

Apr 20, 2023

Why do some AI researchers dismiss the potential risks to humanity?

Posted in categories: existential risks, robotics/AI

Existential risk from AI is admittedly more speculative than pressing concerns such as its bias, but the basic solution is the same. A robust public discussion is long overdue, says David Krueger

By David Krueger
