Beating Moore’s Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy

Moore’s Law is dead, right? Not if we can get working photonic computers.

Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It’s already much faster than traditional electronic computers at neural networks, machine learning for language processing, and AI for self-driving cars.

It’s the world’s first general-purpose photonic AI accelerator, and with light multiplexing, using up to 64 different colors of light simultaneously, there’s a long path of speed improvements ahead.
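The appeal of wavelength multiplexing is that each color of light can carry an independent data stream through the same photonic circuit, so effective throughput scales roughly with the number of wavelengths. A minimal back-of-envelope sketch (the baseline figure is a hypothetical assumption, not Lightmatter’s actual specification):

```python
# Illustrative only: assumes perfectly ideal scaling across wavelengths,
# with a made-up single-wavelength baseline of 100 TOPS.

def effective_throughput(base_tops: float, num_wavelengths: int) -> float:
    """Idealized throughput when each wavelength carries an independent stream."""
    return base_tops * num_wavelengths

print(effective_throughput(100.0, 1))   # single color: 100.0 TOPS
print(effective_throughput(100.0, 64))  # 64 colors: 6400.0 TOPS
```

Real systems lose some of this to crosstalk and conversion overhead, but the linear-in-wavelengths headroom is what makes the “long path of speed improvements” plausible.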

Links:
TechFirst transcripts: https://johnkoetsier.com/category/tech-first/
Forbes columns: https://www.forbes.com/sites/johnkoetsier/

Keep in touch: https://twitter.com/johnkoetsier

Photonic Neuromorphic Computing: The Future of AI?

Photonic computing processes information using light, whilst neuromorphic computing attempts to emulate the human brain. Bring the two together, and we may have the perfect platform for next generation AI, as this video explores.

If you like this video, you may also enjoy my previous episodes on:

Organic Computing:

Brain-Computer Interfaces:
https://www.youtube.com/watch?v=xMxJYhUg0pc

More videos on computing and related topics can be found at:
https://www.youtube.com/explainingcomputers.

You may also like my ExplainingTheFuture channel at: https://www.youtube.com/explainingthefuture.

Deep Learning Is Hitting a Wall

Brain scans of: 1. rat and 2. crow (both completed by end of 2022); 3. pig and 4. chimp (both completed by end of 2023); ending on 5. human (completed by end of 2025). All the while, we create an AI feedback loop, using the best AI to build better AIs. Aiming for AGI 2025–2029.


What would it take for artificial intelligence to make real progress?

Retina-inspired sensors for more adaptive visual perception

To monitor and navigate real-world environments, machines and robots should be able to gather images and measurements under different background lighting conditions. In recent years, engineers worldwide have thus been trying to develop increasingly advanced sensors, which could be integrated within robots, surveillance systems, or other technologies that can benefit from sensing their surroundings.

Researchers at Hong Kong Polytechnic University, Peking University, Yonsei University and Fudan University have recently created a new sensor that can collect data in various illumination conditions, employing a mechanism that artificially replicates the functioning of the retina in the human eye. This bio-inspired sensor, presented in a paper published in Nature Electronics, was fabricated using phototransistors made of molybdenum disulfide.

“Our research team started this research five years ago,” Yang Chai, one of the researchers who developed the sensor, told TechXplore. “This emerging device can output light-dependent and history-dependent signals, which enable image integration, weak signal accumulation, spectrum analysis and other complicated image processing functions, integrating the multiple functions of sensing, data storage and data processing in a single device.”
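The “history-dependent” behavior described above can be pictured as a leaky integrator: the sensor’s output reflects both the current light level and fading traces of recent exposure, which is what lets repeated weak signals accumulate into a readable one. A toy model of that idea (the function name and decay constant are illustrative assumptions, not the device’s actual physics):

```python
# Toy leaky-integrator model of a history-dependent photosensor.
# Each frame's intensity is added to a state that decays over time,
# loosely mimicking the retina-like adaptation described in the article.

def integrate_frames(frames, decay=0.8):
    """Accumulate per-frame light intensities with exponential decay."""
    state = 0.0
    for intensity in frames:
        state = decay * state + intensity  # past exposure persists, fading
    return state

# A weak signal (0.1 per frame) repeated over 20 frames accumulates to
# nearly 0.5, well above its single-frame level; this is the intuition
# behind weak-signal accumulation for low-light imaging.
print(round(integrate_frames([0.1] * 20), 3))
```

Doing this integration inside the sensor itself, rather than in downstream processing, is what the authors mean by combining sensing, storage, and processing in one device.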

U.S. eliminates human controls requirement for fully automated vehicles

WASHINGTON, March 10 (Reuters) — U.S. regulators on Thursday issued final rules eliminating the need for automated vehicle manufacturers to equip fully autonomous vehicles with manual driving controls to meet crash standards.

Automakers and tech companies have faced significant hurdles to deploying automated driving system (ADS) vehicles without human controls because of safety standards written decades ago that assume people are in control.

Last month, General Motors Co (GM.N) and its self-driving technology unit Cruise petitioned the U.S. National Highway Traffic Safety Administration (NHTSA) for permission to build and deploy a self-driving vehicle without human controls like steering wheels or brake pedals.

Amazon and Virginia Tech launch AI and ML research initiative

Amazon and Virginia Tech today announced the establishment of the Amazon – Virginia Tech Initiative for Efficient and Robust Machine Learning.

The initiative will provide an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships, and it will support research efforts led by Virginia Tech faculty members. Under the initiative, Virginia Tech will host an annual public research symposium to share knowledge with the machine learning and related research communities. And in collaboration with Amazon, Virginia Tech will co-host two annual workshops, and training and recruiting events for Virginia Tech students.

“This initiative’s emphasis will be on efficient and robust machine learning, such as ensuring algorithms and models are resistant to errors and adversaries,” said Naren Ramakrishnan, the director of the Sanghani Center and the Thomas L. Phillips Professor of Engineering. “We’re pleased to continue our work with Amazon and expand machine learning research capabilities that could address worldwide industry-focused problems.”
