
Discovering a system’s causal relationships and structure is a crucial yet challenging problem in scientific disciplines ranging from medicine and biology to economics. While researchers typically adopt the graphical formalism of causal Bayesian networks (CBNs) to induce a graph structure that best describes these relationships, such unsupervised score-based approaches can quickly lead to prohibitively heavy computational burdens.

A research team from DeepMind, Mila – University of Montreal and Google Brain challenges the conventional causal induction approach in their new paper Learning to Induce Causal Structure, proposing a neural network architecture that learns the graph structure of observational and/or interventional data via supervised training on synthetic graphs. The team’s proposed Causal Structure Induction via Attention (CSIvA) method effectively makes causal induction a black-box problem and generalizes favourably to new synthetic and naturalistic graphs.
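The core idea, treating causal induction as supervised learning on synthetic graphs, can be illustrated with a toy sketch. The code below is not the paper's CSIvA architecture (which uses attention over dataset samples); it is a minimal stand-in that samples random DAGs, simulates linear-Gaussian data from them, and trains a simple logistic-regression model to map observed correlations to edge labels. All names and the featurization are illustrative assumptions.

```python
# Toy supervised causal structure induction: learn to predict DAG edges
# from data statistics, trained on synthetic graphs. This is a sketch of
# the general recipe, not the CSIvA model itself.
import numpy as np

rng = np.random.default_rng(0)
D = 3          # variables per graph
N = 200        # samples per synthetic dataset
GRAPHS = 400   # number of synthetic training graphs

def sample_graph():
    """Random DAG adjacency: edges only go from lower to higher index."""
    return np.triu(rng.integers(0, 2, (D, D)), k=1)

def simulate(A):
    """Linear-Gaussian data respecting the DAG A."""
    X = np.zeros((N, D))
    for j in range(D):
        w = A[:, j] * rng.uniform(0.5, 1.5, D)   # random edge weights
        X[:, j] = X @ w + rng.normal(size=N)     # parents + noise
    return X

def featurize(X):
    """Upper-triangular entries of the empirical correlation matrix."""
    return np.corrcoef(X, rowvar=False)[np.triu_indices(D, k=1)]

K = D * (D - 1) // 2  # number of candidate edges
feats = np.zeros((GRAPHS, K))
labels = np.zeros((GRAPHS, K))
for g in range(GRAPHS):
    A = sample_graph()
    feats[g] = featurize(simulate(A))
    labels[g] = A[np.triu_indices(D, k=1)]

# Multi-label logistic regression trained by gradient descent.
Xb = np.hstack([feats, np.ones((GRAPHS, 1))])  # add bias column
W = np.zeros((K + 1, K))
for _ in range(2000):
    P = 1 / (1 + np.exp(-Xb @ W))
    W -= 0.5 * Xb.T @ (P - labels) / GRAPHS

acc = ((1 / (1 + np.exp(-Xb @ W)) > 0.5) == labels).mean()
```

Even this crude featurization learns to beat chance on held-in graphs, which conveys the "black-box" framing: the structure-learning rule is learned from labeled synthetic examples rather than derived from a hand-designed score.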

The paper details the team’s main contributions.

And an AI could generate a picture of a person from scratch if it wanted or needed to. It’s only a matter of time before someone puts it all together. 1. AI writes a script. 2. AI generates pictures of a cast (face/body). 3. AI animates pictures of the cast into scenes. 4. It can’t create voices from scratch yet, but a 10-second audio sample of a voice is enough for it to make that voice say anything; AI voices all the dialog. And, voilà, you’ve reduced TV and movie production costs by 99.99%. Will take place by 2030.


Google’s PHORUM AI shows how impressive 3D avatars can be created just from a single photo.

Until now, however, such models have relied on complex automatic scanning by a multi-camera system, manual creation by artists, or a combination of both. Even the best camera systems still produce artifacts that must be cleaned up manually.

As the early universe cooled shortly after the Big Bang, bubbles formed in its hot plasma, triggering gravitational waves that could be detectable even today, a new study suggests.

For some time, physicists have speculated that a phase transition took place in the early universe shortly after the Big Bang. A phase transition is a change in the form and properties of matter that usually accompanies a temperature change, such as water evaporating into vapor or metal melting. Something similar likely took place in the young, fast-expanding universe as the plasma that filled space at the time cooled down.

With the massive degree of progress in AI over the last decade or so, it’s natural to wonder about its future – particularly the timeline to achieving human (and superhuman) levels of general intelligence. Ajeya Cotra, a senior researcher at Open Philanthropy, recently (in 2020) put together a comprehensive report seeking to answer this question (actually, it answers the slightly different question of when transformative AI will appear, mainly because an exact definition of impact is easier than one of intelligence level), and over 169 pages she lays out a multi-step methodology to arrive at her answer. The report has generated a significant amount of discussion (for example, see this Astral Codex Ten review), and seems to have become an important anchor for many people’s views on AI timelines. On the whole, I found the report added useful structure around the AI timeline question, though I’m not sure its conclusions are particularly informative (due to the wide range of timelines across different methodologies). This post will provide a general overview of her approach (readers who are already familiar can skip the next section), and will then focus on one part of the overall methodology – specifically, the upper bound she chooses – and will seek to show that this bound may be vastly understated.

Part 1: Overview of the Report

In her report, Ajeya takes a multi-step approach to estimating transformative AI timelines.

Aulos Biosciences is now recruiting cancer patients in Australian medical centers for a trial of the world’s first antibody drug designed by a computer.

The computationally designed antibody, known as AU-007, was planned by the artificial intelligence platform of Israeli biotech company Biolojic Design from Rehovot, in a way that would target a protein in the human body known as interleukin-2 (IL-2).

The goal is for the IL-2 pathway to activate the body’s immune system and attack the tumors.

A comet discovered last July is fast approaching our part of the solar system and might reach binocular visibility (at least) by May 2022. It’s designated C/2021 O3 (PanSTARRS). The comet will be emerging into our western evening sky at least by early May. The comet is currently passing close to the sun, and it might not survive that passage … but if it does, get ready! Charts below.

Comet PanSTARRS will come close to the sun, closer than the planet Mercury. Its closest point, called its perihelion, will come on April 21, 2022. It’ll sweep 0.29 astronomical units (AU) from our star (1 AU = 1 average Earth-sun unit of distance). So – given that Mercury’s sunny side reaches temperatures of around 750 to 800 degrees Fahrenheit (up to about 430 degrees Celsius) – you can see that Comet C/2021 O3 (PanSTARRS) will really feel the sun’s heat.
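The "feel the sun's heat" claim can be quantified with the inverse-square law: solar flux scales as 1/d². The snippet below compares the comet's 0.29 AU perihelion with Earth (1 AU) and with Mercury's average distance of roughly 0.39 AU (a standard figure, not stated in the article).

```python
# Back-of-the-envelope solar irradiance via the inverse-square law.
def relative_flux(d_au):
    """Solar flux at d_au astronomical units, relative to Earth (1 AU)."""
    return 1.0 / d_au ** 2

comet = relative_flux(0.29)    # roughly 12x the sunlight Earth receives
mercury = relative_flux(0.39)  # Mercury averages ~0.39 AU: roughly 6.6x
```

So at perihelion the comet receives nearly twice the flux that roasts Mercury's sunny side, which is why disintegration is a real possibility.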

That’s why Comet C/2021 O3 might disintegrate, as some comets do, when nearest our star.

The 3D-printed containers keep a log of all break-in attempts, meaning your snail mail just got way safer.


Suppose you want to mail a court document to someone across the country—you don’t want anyone to see the secure information inside, of course. So, you seal it into a container that has special sensors built into its walls, and electronics that monitor the shield of sensors. Now, the container is armed and monitoring.

On the way to its intended recipient, let’s say the container is hacked. When the intended recipient later opens the container, they pull out the court document, along with an SD card (just like the ones you might use to store digital photos). They plug the card into a computer and look at the file. They see an encrypted historical record of the container’s experiences, from the time you put that document into the container and sealed it, up until the time they opened it. In the list of messages is a notification about a tampering attack, along with the date and time of the incident. The message also specifies the type of breach detected, such as the container being opened or cut.
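The article doesn't describe the container's actual log format, but one standard way to build such a tamper-evident record is a hash-chained, authenticated log: each entry carries an HMAC over its contents plus the previous entry's MAC, so altering, deleting, or reordering any record breaks verification. The key, entry fields, and function names below are illustrative assumptions, not the device's real design.

```python
# Sketch of a tamper-evident event log: entries are HMAC-chained, so a
# verifier holding the key can detect any modification to the history.
import hashlib
import hmac
import json

KEY = b"device-secret"  # would live in the container's secure hardware

def append(log, event):
    """Append an event, chaining it to the previous entry's MAC."""
    prev = log[-1]["mac"] if log else "genesis"
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    mac = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"body": body, "mac": mac})

def verify(log):
    """Recompute the chain; any edit or deletion makes this fail."""
    prev = "genesis"
    for entry in log:
        if json.loads(entry["body"])["prev"] != prev:
            return False
        mac = hmac.new(KEY, entry["body"].encode(),
                       hashlib.sha256).hexdigest()
        if mac != entry["mac"]:
            return False
        prev = entry["mac"]
    return True

log = []
append(log, {"type": "sealed", "time": "2022-04-01T09:00Z"})
append(log, {"type": "opened", "time": "2022-04-03T17:42Z"})
```

An attacker who opens the container can read or even rewrite the SD card, but without the key they cannot forge valid MACs, so the recipient's verification exposes the tampering.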
