Archive for the ‘robotics/AI’ category: Page 1259

Jun 18, 2020

Baidu Breaks Off an AI Alliance Amid Strained US-China Ties

Posted by in category: robotics/AI

The search giant was the only Chinese member of the Partnership on Artificial Intelligence, a US-led effort to foster collaboration on ethical issues.

Jun 18, 2020

Qualcomm Brings 5G And AI To Next Gen Robotics And Drones

Posted by in categories: drones, internet, robotics/AI, security

Qualcomm today announced its RB5 reference design platform for the robotics and intelligent drone ecosystem. As the field of robotics continues to evolve toward more advanced capabilities, Qualcomm’s latest platform should help drive the next step in robotics evolution with intelligence and connectivity. The company has combined its 5G connectivity and AI-focused processing with a flexible peripherals architecture based on what it calls “mezzanine” modules. The new Qualcomm RB5 platform promises to accelerate the robotics design and development process with a full suite of hardware, software and development tools. The company is making big promises for the RB5 platform, and if current levels of ecosystem engagement are any indicator, the platform will have ample opportunities to prove itself.

Targeting robot and drone designs for enterprise, industrial and professional service applications, the platform is built around Qualcomm’s QRB5165 system-on-chip (SoC) processor. The QRB5165 is derived from the Snapdragon 865 processor used in mobile devices but is customized for robotic applications, with expanded camera and image signal processor (ISP) capabilities for additional camera sensors, industrial-grade temperature and security ratings, and a non-Package-on-Package (PoP) configuration option.

To bring highly capable artificial intelligence and machine learning to bear in these applications, the chip is rated for 15 Tera Operations Per Second (TOPS) of AI performance. Additionally, because it is critical that robots and drones can “see” their surroundings, the architecture includes support for up to seven concurrent cameras and a dedicated computer vision engine meant to provide enhanced video analytics. Given the sheer amount of information the platform can generate, process and analyze, it also supports a communications module offering 4G and 5G connectivity. In particular, the addition of 5G will give robots and drones high-speed, low-latency data connectivity.

Jun 18, 2020

The Future Of Conversational AI

Posted by in categories: futurism, robotics/AI

With conversational AI, organizations can dramatically improve their customer experience. Here’s a look at the technology and where it’s headed.

Jun 18, 2020

OpenAI’s New Text Generator Writes Even More Like a Human

Posted by in categories: information science, robotics/AI

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and makes it available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011 and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data, including adding curated datasets like Wikipedia.
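
As a toy illustration only: quality filtering of web text at this scale usually comes down to simple, cheap heuristics applied page by page. The thresholds and checks below are invented for illustration and are not the actual filters OpenAI applied to Common Crawl.

```python
# Toy illustration of web-text quality filtering (heuristics invented for
# illustration; these are NOT the actual filters OpenAI used).
def looks_clean(page_text: str) -> bool:
    words = page_text.split()
    if len(words) < 50:                       # drop very short pages
        return False
    alpha_ratio = sum(w.isalpha() for w in words) / len(words)
    return alpha_ratio > 0.8                  # drop pages dominated by markup/noise

pages = [
    "<div>menu menu menu</div>",
    "A long readable article about transformer language models " * 30,
]
kept = [p for p in pages if looks_clean(p)]
print(len(kept), "of", len(pages), "pages kept")   # 1 of 2 pages kept
```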

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
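
To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation inside a transformer layer, using toy numpy arrays. The dimensions, weights, and variable names are illustrative placeholders, not OpenAI’s actual GPT implementation.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
# Each row of `x` is one word's embedding; the softmaxed "attention scores"
# say how strongly each word attends to every other word in the sentence.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (num_words, d_model); w_q/w_k/w_v: (d_model, d_head) projections."""
    q = x @ w_q                                  # queries
    k = x @ w_k                                  # keys
    v = x @ w_v                                  # values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                           # weighted mix of all word values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                     # 5 "words", 16-dim embeddings (toy sizes)
w_q, w_k, w_v = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                 # (5, 8): one context-aware vector per word
```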

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

Jun 18, 2020

Slightly Unnerving AI Produces Human Faces Out of Totally Pixelated Photos

Posted by in category: robotics/AI

Artificial intelligence networks have learnt a new trick: creating photo-realistic faces from just a few pixelated dots, adding in features such as eyelashes and wrinkles that can’t even be found in the original.

Before you freak out, it’s good to note this is not some kind of creepy reverse pixelation that can undo blurring, because the faces the AI comes up with are artificial – they don’t belong to real people. But it’s a cool technological step forward from what such networks have been able to do before.

The PULSE (Photo Upsampling via Latent Space Exploration) system can produce photos with up to 64 times greater resolution than the source images, which is 8 times more detailed than earlier methods.
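
In schematic terms, latent-space exploration of this kind searches a pretrained face generator for a high-resolution image that, when downscaled, matches the pixelated input. The toy linear “generator”, loss, and optimizer below are placeholders to show the search loop, not the PULSE authors’ code or model.

```python
# Schematic sketch of latent-space upsampling (the idea behind PULSE), with a
# toy linear "generator" standing in for a pretrained face GAN such as StyleGAN.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(64, 8))              # toy generator weights: latent (8,) -> "image" (64,)

def generate(z):
    return np.tanh(G @ z)                 # stand-in for a GAN generator

def downscale(hi):
    return hi.reshape(8, 8).mean(axis=1)  # 64 "pixels" -> 8 "pixels"

low_res = rng.uniform(-1, 1, size=8)      # the pixelated input we want to explain
z = rng.normal(size=8)                    # start from a random latent code
lr = 0.1

for _ in range(500):
    # numerical gradient of || downscale(generate(z)) - low_res ||^2 w.r.t. z
    base = np.sum((downscale(generate(z)) - low_res) ** 2)
    grad = np.zeros_like(z)
    for i in range(z.size):
        zp = z.copy(); zp[i] += 1e-4
        grad[i] = (np.sum((downscale(generate(zp)) - low_res) ** 2) - base) / 1e-4
    z -= lr * grad                        # move z so the downscaled output matches the input

hallucinated_hi_res = generate(z)         # plausible detail, not the "true" original face
print("final mismatch:", np.sum((downscale(hallucinated_hi_res) - low_res) ** 2))
```

The key point the loop illustrates is why the result is artificial: many different high-resolution faces downscale to the same few pixels, and the search simply returns one that the generator considers plausible.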

Jun 18, 2020

Teaching humanoid robots different locomotion behaviors using human demonstrations

Posted by in category: robotics/AI

In recent years, many research teams worldwide have been developing and evaluating techniques to enable different locomotion styles in legged robots. One way of training robots to walk like humans or animals is by having them analyze and emulate real-world demonstrations. This approach is known as imitation learning.

Researchers at the University of Edinburgh in Scotland have recently devised a framework for training humanoid robots to walk like humans using human demonstrations. This new framework, presented in a paper pre-published on arXiv, combines imitation learning and deep reinforcement learning techniques with theories of robotic control in order to achieve natural and dynamic locomotion in humanoid robots.

“The key question we set out to investigate was how to incorporate useful human knowledge in locomotion and human motion capture data for imitation into a deep reinforcement learning paradigm to advance the autonomous capabilities of legged robots more efficiently,” Chuanyu Yang, one of the researchers who carried out the study, told TechXplore. “We proposed two methods of introducing human prior knowledge into a DRL framework.”
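
One common way to fold demonstrations into deep RL, sketched below, is reward shaping: the agent is scored both on its task (e.g., forward velocity) and on how closely its joint pose tracks a motion-capture reference at the same time step. The reference clip, weights, and joint layout here are made up for illustration and are not the Edinburgh team’s actual formulation.

```python
# Illustrative reward shaping for imitation-guided locomotion RL (weights,
# reference trajectory, and joint layout are invented; this shows the general
# pattern of blending a task reward with an imitation reward, nothing more).
import numpy as np

# ref_motion[t] = 10 reference joint angles at timestep t (toy "mocap" clip)
ref_motion = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None] * np.ones((100, 10))

def reward(t, joint_angles, forward_velocity, w_task=0.3, w_imitate=0.7):
    """Blend a task objective with an imitation objective."""
    task_r = forward_velocity                                  # e.g., walk forward
    pose_error = np.sum((joint_angles - ref_motion[t % len(ref_motion)]) ** 2)
    imitate_r = np.exp(-2.0 * pose_error)                      # 1.0 when tracking the demo exactly
    return w_task * task_r + w_imitate * imitate_r

# Perfect tracking at t=5 while walking at 1 m/s gives 0.3*1.0 + 0.7*1.0 = 1.0
print(reward(t=5, joint_angles=ref_motion[5], forward_velocity=1.0))
```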

Jun 18, 2020

AI creates realistic faces from crude sketches

Posted by in categories: innovation, robotics/AI

https://youtube.com/watch?v=HSunooUTwKs

Back in the Sixties, one of the hottest toys in history swept America. It was called Etch-A-Sketch, and its popularity was based on a now-laughably simple feature. It was a handheld device, about the size of a small laptop, that let users create crude images by turning two control knobs to draw horizontal, vertical and diagonal lines composed of aluminum particles sealed in a plastic case. It allowed experienced artists to compose simple and sometimes recognizable portraits. And it allowed inexperienced wannabe artists who could barely draw stick figures to feel like masters of the genre by generating what, frankly, still looked pretty much like mush. But Etch-A-Sketch was fun, and it has sold some 100 million units to date.

Six decades later, researchers at the Chinese Academy of Sciences and City University of Hong Kong have come up with an invention that actually does what so many wishful enthusiasts imagined Etch-A-Sketch did all those years ago.


Jun 18, 2020

Boston Dynamics’ robotic dog could be yours for a very, very high price

Posted by in categories: government, robotics/AI

The Spot Explorer is likely not coming to a workplace near you. The high price point makes it impractical for all but a few institutions, like high-end construction firms, energy companies, and government agencies. But just seeing it out in the world, doing more than dancing to “Uptown Funk,” is surely a sign of progress.


These viral robots have racked up big YouTube numbers for years. Now, they’re about to start their day jobs.

Jun 17, 2020

Follow the road to launch for our next mission to the Red Planet, the Mars 2020 Perseverance rover

Posted by in categories: alien life, climatology, robotics/AI

NASA Administrator Jim Bridenstine, agency leadership, and a panel of scientists and engineers will preview the upcoming mission at 2 p.m. EDT on Wednesday, June 17. Submit your questions during the briefing using #AskNASA!

Perseverance is a robotic scientist that will search for signs of past microbial life on Mars and characterize the planet’s climate and geology. It will also collect rock and soil samples for future return to Earth and pave the way for human exploration of the Red Planet. The mission is scheduled to launch from Space Launch Complex 41 at NASA’s Kennedy Space Center in Florida at 9:15 a.m. EDT July 20. It will land at Mars’ Jezero Crater on Feb. 18, 2021. #CountdownToMars

Jun 17, 2020

Technological Singularity Will Be Late But Antiaging and Advanced Biotech Are Near

Posted by in categories: biotech/medical, life extension, Ray Kurzweil, robotics/AI, singularity

A rejuvenation roadmap, and some info on Rejuvenate Bio.


Ray Kurzweil predicted that the Technological Singularity will be reached in 2045. In practice, this means strong AI: something like an AGI that is a billion times more capable than the human brain in many respects.