Digital computers have served us well for decades, but the rise of artificial intelligence demands a totally new kind of computer: analog.
However, experts have pointed out that these techniques aren’t general-purpose tools: they will deliver a great leap forward in computing power only for very specialized algorithms, and even more rarely will they be able to work together on the same problem.
One such example of where they might work together is modeling one of the thorniest problems in physics: how does general relativity relate to the Standard Model?
“We were really surprised by this result because our motivation was to find an indirect route to improve performance, and we thought trust would be that—with real faces eliciting that more trustworthy feeling,” Nightingale says.
Farid noted that in order to create more controlled experiments, he and Nightingale had worked to make provenance the only substantial difference between the real and fake faces. For every synthetic image, they used a mathematical model to find a similar one, in terms of expression and ethnicity, from databases of real faces. For every synthetic photo of a young Black woman, for example, there was a real counterpart.
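To make that pairing concrete, here is a minimal sketch of how such matching could work, assuming each face has already been reduced to a feature vector by some embedding model; the cosine-similarity criterion and the `match_synthetic_to_real` helper are illustrative assumptions, not the authors’ published method:

```python
import numpy as np

def match_synthetic_to_real(synthetic_embs: np.ndarray, real_embs: np.ndarray) -> np.ndarray:
    """For each synthetic face embedding, return the index of the most
    similar real face embedding, measured by cosine similarity."""
    # Normalize rows so plain dot products equal cosine similarities.
    s = synthetic_embs / np.linalg.norm(synthetic_embs, axis=1, keepdims=True)
    r = real_embs / np.linalg.norm(real_embs, axis=1, keepdims=True)
    sims = s @ r.T              # shape: (n_synthetic, n_real)
    return sims.argmax(axis=1)  # best real counterpart per synthetic face

# Toy usage: 3 synthetic and 5 real faces, 128-dimensional embeddings.
rng = np.random.default_rng(0)
print(match_synthetic_to_real(rng.normal(size=(3, 128)),
                              rng.normal(size=(5, 128))))
```

In the study the matching criteria were expression and demographics rather than raw pixel similarity, so a real embedding model would need to encode those attributes.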
A mini brain with trillions of petaflops in your pants pocket? Sounds good!
“This is what we’re announcing today,” said Knowles. “A machine that in fact will exceed the parametric capacity of the human brain.”
That next-gen IPU, he said, would realize the vision of 1960s computer scientist Jack Good, a colleague of Alan Turing’s who conceived of an “intelligence explosion.”
Those synapses are “very similar to the parameters that are learned by an artificial neural network.” Today’s neural nets have gotten close to a trillion parameters, he noted, “so we clearly have another two or more orders of magnitude to go before we have managed to build an artificial neural network that has similar parametric capacity to a human brain.”
The company said it is working on a computer design, called The Good Computer, which will be capable of handling neural network models that employ 500 trillion parameters, making possible what it terms super-human ultra-intelligence.
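For a sense of scale, a back-of-envelope calculation; the synapse count is a common rough estimate and the bytes-per-parameter figures are illustrative assumptions, not Graphcore’s specifications:

```python
import math

human_brain_synapses = 100e12   # ~100 trillion synapses (common estimate)
todays_largest_nets = 1e12      # "close to a trillion" parameters
good_computer_params = 500e12   # Graphcore's stated target

# Gap between today's nets and the brain, in orders of magnitude.
gap = math.log10(human_brain_synapses / todays_largest_nets)
print(f"orders of magnitude to go: {gap:.0f}")   # -> 2

# Raw memory just to hold the weights of a 500-trillion-parameter model.
for bytes_per_param in (2, 4):  # e.g., FP16 vs FP32 storage
    pb = good_computer_params * bytes_per_param / 1e15
    print(f"{bytes_per_param} B/param -> {pb:.0f} PB of weights")  # 1 PB, 2 PB
```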
The Bow is the first chip to use what’s called wafer-on-wafer technology, in which two dies are bonded together. It was developed in close collaboration with contract chip manufacturing giant Taiwan Semiconductor Manufacturing.
The chip can perform 350 trillion floating-point operations per second of mixed-precision AI arithmetic, said Knowles, which he said made it the highest-performing AI processor in the world today.
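Combining that throughput with the 500-trillion-parameter target above gives a rough, purely illustrative estimate; the one-multiply-accumulate-per-parameter assumption is mine, not Graphcore’s:

```python
params = 500e12              # Good Computer target, from above
flops_per_pass = 2 * params  # assume 1 multiply + 1 add per parameter
bow_flops = 350e12           # 350 teraFLOPS, as quoted above
print(f"{flops_per_pass / bow_flops:.1f} s per pass on one chip")  # ~2.9 s
```

which is one reason machines at this scale are built from many chips working in parallel rather than a single processor.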
It’s easy to see why: as shockingly powerful mini-processors, neurons and their connections—together dubbed the connectome—hold the secret to highly efficient and flexible computation. Nestled inside the brain’s wiring diagrams are the keys to consciousness, memories, and emotion. To researchers in connectomics, mapping the brain isn’t just an academic exercise to better understand ourselves—it could lead to more efficient AI that thinks like us.
But often ignored are the brain’s supporting characters: astrocytes—brain cells shaped like stars—and microglia, specialized immune cells. Previously considered “wallflowers,” these cells nurture neurons and fine-tune their connections, ultimately shaping the connectome. Without this long-forgotten half, the brain wouldn’t be the computing wizard we strive to imitate with machines.
In a stunning new brain map published in Cell, these cells are finally having their time in the spotlight. Produced by a team led by Dr. H. Sebastian Seung at Princeton University, the original prophet of the connectome, the map captures a tiny chunk of the mouse’s visual cortex roughly 1,000 times smaller than a pea. Yet jam-packed inside the map aren’t just neurons; in a technical tour de force, the team mapped all brain cells, their connections, blood vessels, and even the compartments inside cells that house DNA and produce energy.
It should come as little surprise that pioneering work in biological robotics is as controversial as it is exciting. Take, for example, the article published in December 2021 in the Proceedings of the National Academy of Sciences by Sam Kriegman and Douglas Blackiston at Tufts University and colleagues. This article, entitled “Kinematic self-replication in reconfigurable organisms,” is the third installment of the authors’ “xenobots” series.