
A recent patent filing offers a window into Apple’s future forays into automotive design. The company is exploring artificial intelligence systems that would enable future motorists to enjoy windows that continuously adjust their characteristics as they drive.

Titled “Systems with adjustable windows,” U.S. Patent No. 10,625,580 envisions glass components that control light, reflection and heat conductance based on both user preference and sensory input.

The windows would contain multiple adjustable layers sandwiched between two panes of glass, which could perform such functions as keeping the interior cool, providing privacy to occupants, allowing viewing through haze, and blocking harmful solar radiation.
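
To make the idea concrete, here is a minimal sketch of the kind of feedback loop such a system implies: blend a stored user preference with live sensor readings to choose a tint level. Every function name, weight, and threshold below is an illustrative assumption, not a detail from the patent.

```python
# Hypothetical sketch of an adjustable-window control loop: blend a
# user preference with live sensor input to pick a tint level.
# All names and numbers are illustrative, not from the Apple patent.

def choose_tint(user_preference: float, ambient_lux: float,
                cabin_temp_c: float) -> float:
    """Return a tint level in [0, 1], where 1 is fully darkened."""
    # Normalize sensor readings to rough 0-1 scores.
    glare = min(ambient_lux / 100_000, 1.0)  # direct sunlight is ~100k lux
    heat = min(max((cabin_temp_c - 22) / 15, 0.0), 1.0)
    # Weighted blend: sensors drive the tint, the user's preference biases it.
    sensor_score = 0.7 * glare + 0.3 * heat
    return min(max(0.5 * sensor_score + 0.5 * user_preference, 0.0), 1.0)

# Example: bright sun and a warm cabin, occupant prefers a light tint.
print(choose_tint(user_preference=0.3, ambient_lux=80_000, cabin_temp_c=30))
```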

Gerd Leonhard in discussion on Humanism and Transhumanism.


This is an excerpt from my latest digital conference, held April 23, 2020: “Humanist vs Transhumanist,” featuring Calum Chace and me.

My Futures Agency colleague and fellow futurist Calum Chace disagrees with me on many of my core messages, on topics such as the singularity, (trans)-humanism, artificial intelligence, and what I call ‘man+machine futures’.

Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication.

But intuitiveness is hard to teach—especially to a machine. Looking to improve this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a method that dials us closer to more seamless human–robot collaboration. The system, called “Conduct-A-Bot,” uses human signals from wearable sensors to pilot a robot’s movement.
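
As a rough illustration, the sketch below maps invented wearable-sensor features to robot commands. The muscle-activity and wrist-rotation features, the thresholds, and the command names are all my assumptions; the real Conduct-A-Bot pipeline learns to classify gestures from data rather than relying on hand-set rules.

```python
# Hypothetical sketch of the Conduct-A-Bot idea: classify wearable
# muscle (EMG) and motion (IMU) readings into gestures, then map the
# gestures to robot commands. Features and thresholds are invented.

def classify_gesture(emg_rms: float, wrist_roll_deg: float) -> str:
    """Map raw sensor features to a coarse gesture label."""
    if emg_rms > 0.8:          # strong muscle tension, read as a fist
        return "fist"
    if wrist_roll_deg > 30:
        return "rotate_right"
    if wrist_roll_deg < -30:
        return "rotate_left"
    return "relaxed"

GESTURE_TO_COMMAND = {
    "fist": "stop",
    "rotate_right": "turn_right",
    "rotate_left": "turn_left",
    "relaxed": "continue",
}

def pilot(emg_rms: float, wrist_roll_deg: float) -> str:
    return GESTURE_TO_COMMAND[classify_gesture(emg_rms, wrist_roll_deg)]

print(pilot(emg_rms=0.9, wrist_roll_deg=0.0))  # -> "stop"
```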

“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” says Professor Daniela Rus, director of CSAIL, deputy dean of research for the MIT Stephen A. Schwarzman College of Computing, and co-author on a paper about the system.

Using machine learning, a team of Western computer scientists and biologists has identified an underlying genomic signature for 29 different COVID-19 DNA sequences.

This new data discovery tool will allow researchers to quickly and easily classify a deadly virus like COVID-19 in just minutes—a speed that is critical for strategic planning and for mobilizing medical resources during a pandemic.
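
For a sense of how fast, alignment-free genome classification can work, here is a hedged sketch using k-mer frequency features and a nearest-neighbor classifier from scikit-learn. The toy sequences, labels, and choice of k are invented; the Western team's actual tool uses its own digital signal processing pipeline.

```python
# Hedged sketch of alignment-free sequence classification: represent
# each genome by its k-mer frequencies, then classify with a standard
# learner. Sequences and labels below are toy data, not real genomes.
from collections import Counter
from itertools import product

from sklearn.neighbors import KNeighborsClassifier

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_features(seq: str) -> list:
    """Return the normalized frequency of every possible k-mer."""
    counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in KMERS]

train_seqs = ["ATGCGTACGTTAGC", "ATGCGAACGTTAGG",
              "TTTTAACCGGAAAT", "TTTAAACCGGTAAT"]
train_labels = ["sarbecovirus", "sarbecovirus", "other", "other"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit([kmer_features(s) for s in train_seqs], train_labels)
print(clf.predict([kmer_features("ATGCGTACGTTAGG")]))  # -> ['sarbecovirus']
```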

The study also supports the scientific hypothesis that COVID-19 (SARS-CoV-2) originated in bats and belongs to Sarbecovirus, a subgroup of Betacoronavirus.

Scientists believe the world will see its first working thermonuclear fusion reactor by the year 2025. That’s a tall order on such a short timeline, especially when you consider that fusion has been “almost here” for nearly a century.

Fusion reactors – not to be confused with common fission reactors – are the holiest of Grails when it comes to physics achievements. According to most experts, a successful fusion reactor would function as a near-unlimited source of energy.

In other words, if there’s a working demonstration of an actual fusion reactor by 2025, we could see an end to the global energy crisis within a few decades.

Can we study AI the same way we study lab rats? Researchers at DeepMind and Harvard University seem to think so. They built an AI-powered virtual rat that can carry out multiple complex tasks. Then, they used neuroscience techniques to understand how its artificial “brain” controls its movements.

Today’s most advanced AI is powered by artificial neural networks—machine learning algorithms made up of layers of interconnected components called “neurons” that are loosely inspired by the structure of the brain. While they operate in very different ways, a growing number of researchers believe drawing parallels between the two could both improve our understanding of neuroscience and make smarter AI.

Now the authors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat’s movements.
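
The sketch below shows the general flavor of that kind of analysis, not the paper's code: record a toy network's hidden-unit activations over time, then apply principal component analysis, a standard way of finding low-dimensional structure in recorded neural population activity. The network, inputs, and sizes are all invented.

```python
# Minimal sketch: treat a network's hidden activations like recorded
# neural data and look for low-dimensional structure with PCA.
# The "policy layer" and its inputs are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Fake "recording": activations of 128 hidden units over 500 timesteps.
weights = rng.normal(size=(16, 128))           # toy policy layer
observations = rng.normal(size=(500, 16))      # simulated sensory input
activations = np.tanh(observations @ weights)  # shape: (timesteps, units)

# Project population activity onto its top principal components,
# as one would with recorded firing rates from a real brain.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(activations)
print(trajectory.shape)               # (500, 3)
print(pca.explained_variance_ratio_)  # variance captured per component
```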

Built in about 24 hours, this robot is undergoing in-hospital testing for coronavirus disinfection.


UV disinfection is one of the few areas where autonomous robots can be immediately and uniquely helpful during the COVID pandemic. Unfortunately, there aren’t enough of these robots to fulfill demand right now, and although companies are working hard to build them, it takes a substantial amount of time to develop the hardware, software, operational knowledge, and integration experience required to make a robotic disinfection system work in a hospital.

Conor McGinn, an assistant professor of mechanical engineering at Trinity College Dublin and co-leader of the Robotics and Innovation Lab (RAIL), has pulled together a small team of hardware and software engineers who’ve managed to get a UV disinfection robot into hospital testing in just a few weeks. They made it happen so quickly by building on previous research, collaborating directly with hospitals, and leveraging an existing development platform: the TurtleBot 2.
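
As a purely hypothetical illustration of what a TurtleBot 2-based node might look like in ROS, the sketch below creeps between dwell points and switches a stubbed UV lamp on at each stop. The topic name, speeds, dwell times, and the toggle_lamp() helper are my assumptions, not details of RAIL's robot.

```python
# Hypothetical ROS node for a TurtleBot 2 UV robot: drive between
# disinfection points and dwell at each one with the lamp on.
# Topic names and timings are assumptions, not RAIL's design.
import rospy
from geometry_msgs.msg import Twist

def toggle_lamp(on: bool) -> None:
    # Stub: a real robot would switch a relay driving the UV-C lamp.
    rospy.loginfo("UV lamp %s", "ON" if on else "OFF")

def disinfect_room(dwell_s: float = 60.0, legs: int = 4) -> None:
    pub = rospy.Publisher("/cmd_vel_mux/input/navi", Twist, queue_size=1)
    rate = rospy.Rate(10)  # publish commands at 10 Hz
    for _ in range(legs):
        cmd = Twist()
        cmd.linear.x = 0.1          # creep forward slowly, in m/s
        for _ in range(50):         # roughly 5 seconds of motion
            pub.publish(cmd)
            rate.sleep()
        pub.publish(Twist())        # stop
        toggle_lamp(True)
        rospy.sleep(dwell_s)        # irradiate this spot
        toggle_lamp(False)

if __name__ == "__main__":
    rospy.init_node("uv_disinfection_sketch")
    disinfect_room()
```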

Software bugs have been a concern for programmers for nearly 75 years, ever since programmer Grace Murray Hopper reported the cause of an error in an early Harvard Mark II computer: a moth stuck between relay contacts. Thus the term “bug” was born.

Bugs range from slight computer hiccups to catastrophes. In the 1980s, at least five patients died after a Therac-25 radiation therapy device malfunctioned due to an error by an inexperienced programmer. In 1962, NASA mission control destroyed the Mariner I space probe as it diverted from its intended path over the Atlantic Ocean; incorrectly transcribed handwritten code was blamed. In 1982, a bug later alleged to have been implanted into the software of the Soviet trans-Siberian gas pipeline by the CIA triggered one of the largest non-nuclear explosions in history.

According to data management firm Coralogix, programmers produce 70 bugs per 1,000 lines of code, and fixing a bug takes 30 times longer than writing a line of code in the first place. The firm estimates that the United States spends $113 billion a year identifying and remediating bugs.
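
Back-of-the-envelope, those figures add up quickly. The snippet below applies the 70-bugs-per-1,000-lines rate to a hypothetical 100,000-line codebase; the codebase size is my assumption, not a number from the firm.

```python
# Apply Coralogix's 70-bugs-per-1,000-lines rate to a hypothetical
# codebase. The 100,000-line size is an illustrative assumption.
lines = 100_000
bugs = lines / 1_000 * 70
print(f"{bugs:,.0f} expected bugs")  # -> 7,000 expected bugs
```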