
Photonic Neuromorphic Computing: The Future of AI?

Photonic computing processes information using light, whilst neuromorphic computing attempts to emulate the human brain. Bring the two together, and we may have the perfect platform for next-generation AI, as this video explores.
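Neuromorphic hardware, whether electronic or photonic, typically implements spiking neurons rather than the dense matrix arithmetic of conventional deep learning. As a rough illustration (invented for this sketch, not taken from the video), here is a minimal simulation of a leaky integrate-and-fire neuron, the standard spiking model; all parameter values are arbitrary.

```python
# A minimal leaky integrate-and-fire (LIF) neuron -- the basic spiking model
# that neuromorphic hardware (electronic or photonic) typically implements.
# All parameter values here are illustrative, not from any real device.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate input current over time; emit a spike when the membrane
    potential crosses threshold, then reset. Returns potentials and spikes."""
    v = v_rest
    potentials, spikes = [], []
    for i_in in input_current:
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset  # fire and reset
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# A constant drive produces a regular spike train.
potentials, spikes = simulate_lif(np.full(200, 60.0))
print(f"spikes emitted: {spikes.sum()}")
```

In a photonic implementation the "input current" would be an optical power level and the integration would happen in the device physics rather than in software, but the spiking behaviour being emulated is the same.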

If you like this video, you may also enjoy my previous episodes on:

Organic Computing:

Brain-Computer Interfaces:
https://www.youtube.com/watch?v=xMxJYhUg0pc.

More videos on computing and related topics can be found at:
https://www.youtube.com/explainingcomputers.

You may also like my ExplainingTheFuture channel at: https://www.youtube.com/explainingthefuture.

Deep Learning Is Hitting a Wall

Brain scans of: 1. rat and 2. crow (both completed by end of 2022); 3. pig and 4. chimp (both completed by end of 2023); ending with 5. human (completed by end of 2025). Meanwhile, we create an AI feedback loop, using the best AI to build better AIs, all at the same time. Aiming for AGI in 2025–2029.


What would it take for artificial intelligence to make real progress?

Retina-inspired sensors for more adaptive visual perception

To monitor and navigate real-world environments, machines and robots should be able to gather images and measurements under different background lighting conditions. In recent years, engineers worldwide have thus been trying to develop increasingly advanced sensors, which could be integrated within robots, surveillance systems, or other technologies that can benefit from sensing their surroundings.

Researchers at Hong Kong Polytechnic University, Peking University, Yonsei University and Fudan University have recently created a new sensor that can collect data in various illumination conditions, employing a mechanism that artificially replicates the functioning of the retina in the human eye. This bio-inspired sensor, presented in a paper published in Nature Electronics, was fabricated using phototransistors made of molybdenum disulfide.

“Our research team started this research five years ago,” Yang Chai, one of the researchers who developed the sensor, told TechXplore. “This emerging device can output light-dependent and history-dependent signals, which enables image integration, weak signal accumulation, spectrum analysis and other complicated image processing functions, integrating the multiple functions of sensing, data storage and data processing in a single device.”
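A sensor whose output depends on both the instantaneous light level and its illumination history lends itself to a simple numerical analogy. The sketch below is a toy stateful-pixel model invented purely for illustration; it is not the authors' MoS2 device physics, and the gain and retention parameters are arbitrary.

```python
# A toy model of a pixel whose output depends on both the current light
# level and its illumination history. Illustrative only -- this is NOT the
# MoS2 phototransistor physics described in the Nature Electronics paper.
import numpy as np

class HistoryDependentPixel:
    def __init__(self, gain=1.0, retention=0.9):
        self.gain = gain            # response to instantaneous light
        self.retention = retention  # fraction of stored state kept per step
        self.state = 0.0            # internal "memory" of past exposure

    def read(self, light):
        # Each exposure is accumulated into a slowly decaying internal
        # state; the output therefore mixes present and past illumination.
        self.state = self.retention * self.state + light
        return self.gain * self.state

# A weak, noisy signal becomes distinguishable once accumulated over many
# frames -- the in-sensor analogue of "weak signal accumulation".
rng = np.random.default_rng(0)
pixel = HistoryDependentPixel()
weak_signal = 0.05 + rng.normal(0, 0.02, size=50)  # barely above noise
readings = [pixel.read(max(s, 0.0)) for s in weak_signal]
print(f"single frame: {weak_signal[0]:.3f}, accumulated: {readings[-1]:.3f}")
```

The design point the toy model captures is that memory inside the pixel lets the sensor trade response time for sensitivity, which is why a single device can adapt to very different lighting conditions.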

U.S. eliminates human controls requirement for fully automated vehicles

WASHINGTON, March 10 (Reuters) — U.S. regulators on Thursday issued final rules eliminating the need for automated vehicle manufacturers to equip fully autonomous vehicles with manual driving controls to meet crash standards.

Automakers and tech companies have faced significant hurdles to deploying automated driving system (ADS) vehicles without human controls because of safety standards written decades ago that assume people are in control.

Last month, General Motors Co (GM.N) and its self-driving technology unit Cruise petitioned the U.S. National Highway Traffic Safety Administration (NHTSA) for permission to build and deploy a self-driving vehicle without human controls like steering wheels or brake pedals.

Amazon and Virginia Tech launch AI and ML research initiative

Amazon and Virginia Tech today announced the establishment of the Amazon – Virginia Tech Initiative for Efficient and Robust Machine Learning.

The initiative will provide an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships, and it will support research efforts led by Virginia Tech faculty members. Under the initiative, Virginia Tech will host an annual public research symposium to share knowledge with the machine learning and related research communities. In collaboration with Amazon, Virginia Tech will also co-host two annual workshops as well as training and recruiting events for Virginia Tech students.

“This initiative’s emphasis will be on efficient and robust machine learning, such as ensuring algorithms and models are resistant to errors and adversaries,” said Naren Ramakrishnan, the director of the Sanghani Center and the Thomas L. Phillips Professor of Engineering. “We’re pleased to continue our work with Amazon and expand machine learning research capabilities that could address worldwide industry-focused problems.”
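To make “resistant to errors and adversaries” concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) attack on a toy logistic classifier. The model, data and epsilon are invented for illustration and have no connection to the initiative's actual research agenda; a deliberately large epsilon is used so the flip is visible.

```python
# FGSM adversarial example against a toy logistic classifier: perturb the
# input in the sign of the loss gradient so a tiny change flips the
# prediction. Model weights and data are made up for this sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed "trained" linear model: p(y=1 | x) = sigmoid(w . x + b).
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([1.0, 0.5])   # an input the model classifies confidently
y = 1.0                    # its true label

# Gradient of the cross-entropy loss w.r.t. the input x is
# (sigmoid(w.x + b) - y) * w; FGSM steps epsilon in its sign direction.
epsilon = 0.8  # exaggerated for the toy example
grad_x = (sigmoid(w @ x + b) - y) * w
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean confidence:       {sigmoid(w @ x + b):.3f}")
print(f"adversarial confidence: {sigmoid(w @ x_adv + b):.3f}")
```

Robust-ML research of the kind the quote describes asks how to train models whose predictions change little under such bounded perturbations, for example via adversarial training or certified defences.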

Army Special Operations Forces use Project Origin systems in latest Soldier experiment

DUGWAY, Utah — Army Green Berets from the 1st Special Forces Group conducted two weeks of hands-on experimentation with Project Origin Unmanned Systems at Dugway Proving Ground. Engineers from the U.S. Army DEVCOM Ground Vehicle Systems Center were on site to collect data on how these elite Soldiers utilized the systems and what technology and behaviors are desired.

Project Origin vehicles are the evolution of multiple Soldier Operational Experiments. This GVSC-led rapid prototyping effort allows the Army to conduct technology and autonomous behavior integration for follow-on assessments with Soldiers in order to better understand what Soldiers need from unmanned systems.

For the two-week experiment, Soldiers with the 1st Special Forces Group attended familiarization and new equipment training in order to develop Standard Operating Procedures for Robotic Combat Vehicles. The unit utilized these SOPs to conduct numerous mission-oriented exercises including multiple live-fire missions during the day and night.
