Renowned longevity researcher David Sinclair believes aging is not inevitable but a treatable condition. In his talk at Science Unlimited 2019, he explained why we age – and how we can reverse aging to extend human healthspan and lifespan.
David Sinclair is Professor in the Department of Genetics, Blavatnik Institute and co-Director of the Paul F. Glenn Center for the Biological Mechanisms of Aging at Harvard Medical School. Science Unlimited is held in Montreux, Switzerland, as part of the annual Frontiers Forum. See all speakers: https://forum.frontiersin.org
With that basic research, scientists found the first major clue to the origins of aging and death: some cells in our bodies may never die. These “immortal cells” and the philosophical shift in thinking they engendered will likely change medicine as we know it.
Phase transitions occur when a substance changes from a solid, liquid or gaseous state to a different state—like ice melting or vapor condensing. During these phase transitions, there is a point at which the system can display properties of both states of matter simultaneously. A similar effect occurs when normal metals transition into superconductors—characteristics fluctuate and properties expected to belong to one state carry into the other.
Scientists at Harvard have developed a bismuth-based, two-dimensional superconductor that is only one nanometer thick. By studying fluctuations in this ultra-thin material as it transitions into superconductivity, the scientists gained insight into the processes that drive superconductivity more generally. Because they can carry electric current with near-zero resistance, superconducting materials could, as they are improved, find applications in virtually any technology that uses electricity.
The Harvard scientists used this new material to experimentally confirm a 23-year-old theory of superconductivity developed by Valerii Vinokur from the U.S. Department of Energy’s (DOE) Argonne National Laboratory.
With the end of the Vietnam and Cold wars, Jason members began to branch out from physics and engineering. In 1977, they did their first assessment of global climate models and later advised DOE on which atmospheric measurements were most critical for the models. Since the mid-1990s, Jason has studied biotechnologies, including techniques for detecting biological weapons.
After near-death experience, top scientists seek a long-term home in the U.S. government.
More than a half-century ago, the ‘cognitive revolution’, with the influential tenet ‘cognition is computation’, launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise.
Artificial Intelligence (AI) is an emerging field of computer science that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness, as that is the real driver behind intelligent decisions.
Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.
Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have are artificial serfs: computers that can trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, yet they possess no real intelligence. What’s needed is artificial awareness.
Elon Musk has called AI the “biggest existential threat” facing humanity and likened it to “summoning a demon,”[1] while Stephen Hawking thought it would be the “worst event” in the history of civilization and could “end with humans being replaced.”[2] Although this sounds alarmist, like something from a science fiction movie, both concerns rest on a well-established premise from biology: the principle of competitive exclusion.[3]
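As background (not part of the cited article), the principle of competitive exclusion is commonly illustrated with the textbook Lotka-Volterra competition model. The sketch below uses standard notation, with N_i for population sizes, r_i for intrinsic growth rates, K_i for carrying capacities, and alpha_ij for the competitive effect of species j on species i; it is an illustrative aside, not the authors’ formulation.

```latex
% Lotka-Volterra two-species competition (standard textbook form, illustrative only)
\[
\frac{dN_1}{dt} = r_1 N_1 \left( 1 - \frac{N_1 + \alpha_{12} N_2}{K_1} \right),
\qquad
\frac{dN_2}{dt} = r_2 N_2 \left( 1 - \frac{N_2 + \alpha_{21} N_1}{K_2} \right)
\]
% Stable coexistence requires \alpha_{12} < K_1 / K_2 and \alpha_{21} < K_2 / K_1;
% when either condition fails, the stronger competitor eventually displaces the
% weaker one from the shared niche. That displacement is the competitive
% exclusion the passage above invokes as the premise behind the AI warnings.
```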