In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of AI ethics think tank The Future Society. De Kai is also creator of one of Hong Kong’s best known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.

This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure may trigger an extinction event on the planet.

A new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath, mostly from NASA’s Chandra X-ray Observatory, Swift, and NuSTAR missions, along with ESA’s XMM-Newton. The observations show that planets can be subjected to lethal doses of X-rays from supernovae located as much as about 160 light-years away. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.
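The roughly 160-light-year horizon follows from simple inverse-square dilution of the supernova’s X-ray output. Below is a minimal sketch in Python of that scaling; the total X-ray energy and the lethal-fluence threshold are illustrative assumptions for this sketch, not values taken from the study.

```python
import math

# Illustrative assumptions -- NOT values from the study.
TOTAL_XRAY_ENERGY_ERG = 3e47    # assumed total X-ray output of the blast-wave interaction
LETHAL_FLUENCE_ERG_CM2 = 1e6    # assumed fluence causing severe atmospheric damage
CM_PER_LIGHT_YEAR = 9.461e17

def fluence(distance_ly: float) -> float:
    """Inverse-square estimate of X-ray fluence (erg/cm^2) at a given distance."""
    r_cm = distance_ly * CM_PER_LIGHT_YEAR
    return TOTAL_XRAY_ENERGY_ERG / (4.0 * math.pi * r_cm**2)

# Distance at which the fluence falls to the assumed lethal threshold.
r_max_cm = math.sqrt(TOTAL_XRAY_ENERGY_ERG / (4.0 * math.pi * LETHAL_FLUENCE_ERG_CM2))
print(f"Potentially lethal out to ~{r_max_cm / CM_PER_LIGHT_YEAR:.0f} light-years")

for d in (50, 100, 160, 300):
    f = fluence(d)
    status = "potentially lethal" if f >= LETHAL_FLUENCE_ERG_CM2 else "below threshold"
    print(f"{d:4d} ly: {f:.2e} erg/cm^2 ({status})")
```

A real dose estimate would also fold in absorption by interstellar gas and the planet’s atmosphere; the point of the sketch is only that inverse-square dilution naturally produces a distance cutoff of the order the study reports.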

Some of Daniel Schmachtenberger’s friends say you can be “Schmachtenberged”. It means realising that we are on our way to self-destruction as a civilisation, on a global level. This is a topic often addressed by the American philosopher and strategist, in a world with powerful weapons and technologies but a lack of effective governance. But now that the catastrophic script has already started to be written, is there still hope? And how do we start reversing the scenario?

After lightning struck a tree in New Port Richey, Florida, a team of scientists from the University of South Florida (USF) discovered that the strike led to the formation of a new phosphorus material in a rock. This is the first time such a material has been found in solid form on Earth, and it could represent a member of a new mineral group.

“We have never seen this material occur naturally on Earth – minerals similar to it can be found in meteorites and space, but we’ve never seen this exact material anywhere,” said study lead author Matthew Pasek, a geoscientist at USF.

According to the researchers, high-energy events such as lightning can sometimes cause unique chemical reactions which, in this particular case, have led to the formation of a new material that seems to be transitional between space minerals and minerals found on Earth.

We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.

Nick Bostrom, a professor at Oxford University and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility.

Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of an unaligned superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty.

The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted that as many as 300 million jobs could be exposed to automation, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).

Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, with the sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who recently said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.

As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?

In 1942, the Manhattan Project was established by the United States as a top-secret research and development (R&D) program to produce the first nuclear weapons. It involved thousands of scientists, engineers, and other personnel who worked on different aspects of the effort, including the development of nuclear reactors, the enrichment of uranium, and the design and construction of the bomb itself. The goal: to develop an atomic bomb before Germany did.

The Manhattan Project set a precedent for large-scale government-funded R&D programs. It also marked the beginning of the nuclear age and ushered in a new era of technological and military competition between the world’s superpowers.

Today we’re entering the age of Artificial Intelligence (AI)—an era arguably just as important as, if not more important than, the nuclear age. While the last few months might have been the first you’ve heard about it, many in the field would argue we’ve been headed in this direction for at least the last decade, if not longer. For those new to the topic: welcome to the future, you’re late.

Several asteroids are set to dash past Earth in the coming days, according to a list released by NASA’s Jet Propulsion Laboratory. The close encounters are almost certain to pass harmlessly, and they come days after the White House announced new plans to defend the planet against threats from space.

Two asteroids, one bus-sized and the other the size of a house, will make relatively close approaches to Earth on Wednesday, according to NASA’s Asteroid Watch Dashboard.

Three more, all approximately airplane-sized, are also set to whizz past Earth on Thursday, the agency said.
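The close-approach list behind these reports is public. As a minimal sketch (the date range below is a placeholder, and DEMO_KEY is NASA’s rate-limited sample key; free keys are issued at https://api.nasa.gov), here is how one might pull upcoming near-Earth objects from NASA’s NeoWs feed API:

```python
import requests  # third-party: pip install requests

# NASA's public NeoWs feed of near-Earth-object close approaches.
API_URL = "https://api.nasa.gov/neo/rest/v1/feed"
params = {
    "start_date": "2023-02-01",  # placeholder range; the feed allows at most 7 days
    "end_date": "2023-02-02",
    "api_key": "DEMO_KEY",       # rate-limited sample key
}

resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()
feed = resp.json()

for date, objects in sorted(feed["near_earth_objects"].items()):
    for neo in objects:
        size_m = neo["estimated_diameter"]["meters"]["estimated_diameter_max"]
        miss_km = float(neo["close_approach_data"][0]["miss_distance"]["kilometers"])
        flag = " (potentially hazardous)" if neo["is_potentially_hazardous_asteroid"] else ""
        print(f"{date}: {neo['name']} up to ~{size_m:,.0f} m, misses by {miss_km:,.0f} km{flag}")
```

“Close” in these reports is relative: even flagged flybys typically miss Earth by hundreds of thousands to millions of kilometers.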

