By Leah Crane

Break out the censor’s black bars for naked singularities. Quantum effects could be obscuring these impossible predictions of general relativity, new calculations show.

Albert Einstein’s classical equations of general relativity do a fairly good job of describing gravity and space-time. But when it comes to the most extreme objects, such as black holes, general relativity runs into problems.

In a previous essay, I suggested how we might better handle the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or to provide a capacity for learning some fixed set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, it can act with modest confidence after working to discover goals, developing an understanding of its discovery processes that lets it strike an equilibrium between the risk of doing something wrong and the cost of the work needed to uncover more stakeholders and their goals. This approach promotes moderation, since any particular action may contradict goals not yet discovered. In short, we’d like a superintelligence that applies the non-parametric intuition: we cannot know all the factors, but we can partially discover them through well-motivated trade-offs.
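The equilibrium between the risk of acting wrongly and the cost of further discovery can be sketched as a simple stopping rule. This is a hypothetical illustration, not a mechanism from the essay: assume each discovery step removes a fixed fraction of the remaining risk and costs a fixed amount, and stop when the marginal risk reduction no longer pays for the step.

```python
def discover_until_equilibrium(initial_risk, decay, step_cost, max_steps=100):
    """Keep discovering stakeholders and goals while the expected risk
    removed by one more discovery step exceeds the cost of that step.
    All quantities (risk, decay fraction, cost) are hypothetical stand-ins."""
    risk, steps = initial_risk, 0
    while steps < max_steps:
        marginal_reduction = risk * decay  # risk removed by the next step
        if marginal_reduction <= step_cost:
            break  # further discovery costs more than it is expected to save
        risk -= marginal_reduction
        steps += 1
    return steps, risk

# With risk 100, each step removing half the remaining risk at cost 10,
# discovery stops after three steps with residual risk 12.5.
print(discover_until_equilibrium(100.0, 0.5, 10.0))
```

The point of the sketch is only that "modest confidence" has a shape: a residual risk one accepts because uncovering more stakeholders stopped being worth the effort.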

However, I’ve come to the view that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn’t guarantee an appropriate outcome: it is possible that none of the apparently relevant sources reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes absent from all apparently relevant information sources? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to notice when important design information is missing and to seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” is one example of a common failure to live in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Suppose there were a superintelligence whose individual agents so exceed our capacities that we are to them as mice are to us. What might we reasonably hope for from such agents? My hope is that they would be ecologists who wish for us to flourish in our natural lifeways. This does not mean leaving us entirely to our own preserves, though hopefully they would see the advantage of keeping some unaltered wilderness in which to observe how we choose to live when left to our own devices. Instead, we could be participants in patterned arrangements that aim to satisfy our needs in return for our engaged participation in larger systems of resource management. By this standard, our own human systems might be found wanting by many living creatures today.

Given this, a productive approach to developing superintelligence would be concerned not only with its technical creation but also with being in a position to demonstrate how all can flourish through good stewardship, setting a proper example for when these systems emerge and are trying to understand what goals should look like. We would also want the facts of its material conditions, and ours, to be readily apparent, so that it does not start from a disconnected and disembodied basis.

Overall, this means that in addition to the capacity to discover more goals, it would be instructive to supply such a superintelligence with a schema describing the relationships and conditions under which current participants flourish, along with the goal of promoting such flourishing whenever the means are clear and circumstances indicate it will not emerge of its own accord. This kind of information technology for ecological engineering might also be useful for our own purposes.
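As a hypothetical sketch of what such a schema might record (the field names and the example entry are invented for illustration, not drawn from the essay), one entry could capture a participant, the system that supports it, and the flows and conditions between them:

```python
from dataclasses import dataclass

@dataclass
class FlourishingRelationship:
    """Hypothetical schema entry: one participant's dependence on a
    supporting system, mirroring the inputs/wastes/conditions framing above."""
    participant: str            # e.g. "honeybee colony"
    supporting_system: str      # e.g. "wildflower meadow"
    inputs_supplied: list       # resources the system provides the participant
    wastes_processed: list      # outputs of the participant the system absorbs
    conditions_required: list   # environmental conditions that must hold

# An example (invented) entry:
relationship = FlourishingRelationship(
    participant="honeybee colony",
    supporting_system="wildflower meadow",
    inputs_supplied=["nectar", "pollen"],
    wastes_processed=[],
    conditions_required=["pesticide-free forage", "seasonal bloom succession"],
)
print(relationship.participant, "depends on", relationship.supporting_system)
```

A catalog of such entries would give an agent something concrete to consult when deciding whether an action threatens the conditions some participant's flourishing depends on.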

What will a superintelligence take as its own flourishing? It is hard to say. Hopefully, though, it will find sustaining, extending, and promoting the flourishing of the ecology that allowed its emergence to be an inspiring, challenging, and creative goal.

This week, researchers announced a promising new approach to Parkinson’s disease using cellular reprogramming. The team, led by Ernest Arenas, used a cocktail of four transcription factors to reprogram support cells inside the brain.

The research team packaged the reprogramming factors into a harmless type of lentivirus and injected the viruses en masse into mice modeling Parkinson’s disease. The viruses infected astrocytes, support cells that regulate the transmission of electrical impulses within the brain and are present in large numbers. The lentiviruses delivered their four-factor payload to the target cells, converting them from astrocytes into dopamine-producing neurons.

Within three weeks the first reprogrammed cells could be detected, and after fifteen weeks dopamine-producing neurons were present in abundant numbers. This is good news indeed, as it also confirms that once reprogrammed, the cells remain changed and stable and do not revert into astrocytes.

In its new budget, the government of Prime Minister Justin Trudeau pledged $93 million ($125 million Canadian) to support A.I. research centers in Toronto, Montreal and Edmonton, which will be public-private collaborations.


Today’s striking advances in artificial intelligence owe a lot to research in Canada over the years. But the country has so far failed to cash in.

Despite the popular belief that artificial intelligence is coming to take their jobs away, accountants would love some robotic help to get them through the day. That is according to a new report by Sage, which found that 96 percent of accountants are confident about the future of accountancy and their role in it.

While welcoming change, more than two-thirds of respondents (68 percent) expect automation to alter their roles in the future.

Here’s what accountants expect from automation: almost four in ten (38 percent) see number-crunching as their number-one frustration. Thirty-two percent still use manual methods for this work: a quarter (25 percent) use Excel, while seven percent still use handwritten notes.

Nowadays, the buzzword drawing the most attention is “artificial intelligence” and its immediate impact on our advertising sector. As the CEO of Gravity4, I thought it only appropriate to help dissect this new evolutionary phase of our industry as we apply it. There is no doubt that deep learning is our future, and it is on course to have a huge impact on the lives of everyday consumers and on business sectors. In the scientific world, deep learning refers to “deep neural networks.” These belong to the family of techniques known as artificial intelligence, or AI, a term coined back in 1955, and one that Facebook, Google, and Microsoft are now all pushing with Herculean force. In fact, the International Data Corporation estimates that, globally, the artificial intelligence market could reach close to $50 billion by 2020.

Getting to Grips With the Terminology

AI refers to a collection of tools and technologies, some relatively new and some time-tested, that allow computers to imitate aspects of human intelligence. These include machine learning techniques such as deep learning, as well as decision trees, if-then rules, and logic.
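As a toy illustration of the if-then-rule style just mentioned (the rules, features, and labels here are invented for illustration, not drawn from any vendor's system), a rule-based classifier can be nothing more than nested conditionals over named features:

```python
def classify_animal(features):
    """A toy if-then rule system: each rule inspects a named feature
    and either returns a label or falls through to the next rule.
    This is the hand-written counterpart of a learned decision tree."""
    if features.get("lays_eggs"):
        if features.get("can_fly"):
            return "bird"
        return "reptile"
    if features.get("has_fur"):
        return "mammal"
    return "unknown"

print(classify_animal({"lays_eggs": True, "can_fly": True}))   # bird
print(classify_animal({"lays_eggs": False, "has_fur": True}))  # mammal
```

A decision-tree learner automates exactly this: it chooses which feature to test at each branch from data rather than having a human write the rules.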

The humans never had a chance.

As expected, the latest poker-playing bot, powered by an artificial intelligence designed by a duo from Carnegie Mellon University, beat a team of some of the best poker players in China.

Lengpudashi, the AI developed by Professor Tuomas Sandholm and Noam Brown, a graduate student at CMU, finished five days of Heads-Up No-Limit Texas Hold’em with nearly $800,000 in chips and walked away with $290,000.
