Nov 17, 2024
Humanoid robot may fly on China’s Chang’e 8 moon mission in 2028
Posted by Shailesh Prasad in categories: robotics/AI, space
A new slide about the planned 2028 mission shows a four-wheeled lunar craft with a humanoid form.
The rigid structures of language we once clung to with certainty are cracking. Take gender, nationality or religion: these concepts no longer sit comfortably in the stiff linguistic boxes of the last century. Simultaneously, the rise of AI presses upon us the need to understand how words relate to meaning and reasoning.
A global group of philosophers, mathematicians and computer scientists has come up with a new understanding of logic that addresses these concerns, dubbed "inferentialism."
One standard intuition of logic, dating back at least to Aristotle, is that a logical consequence ought to hold by virtue of the content of the propositions involved, not simply by virtue of being “true” or “false”. Recently, the Swedish logician Dag Prawitz observed that, perhaps surprisingly, the traditional treatment of logic entirely fails to capture this intuition.
Researchers are developing atomically precise memristors for advanced neuromorphic computing systems.
The University of Kansas and University of Houston, backed by $1.8 million from the National Science Foundation’s Future of Semiconductor program (FuSe2), are collaborating to develop atomically tunable memory resistors, known as “memristors.” These advanced components are designed for brain-inspired computing applications and will support workforce development in the semiconductor industry.
Launched in 2023, the FuSe2 program addresses key challenges in semiconductor research and development, with industry partners including Micron, Intel, and Samsung.
To determine the type and severity of a cancer, pathologists typically analyze thin slices of a tumor biopsy under a microscope. But to figure out what genomic changes are driving the tumor’s growth—information that can guide how it is treated—scientists must perform genetic sequencing of the RNA isolated from the tumor, a process that can take weeks and costs thousands of dollars.
Now, Stanford Medicine researchers have developed an artificial intelligence-powered computational program that can predict the activity of thousands of genes within tumor cells based only on standard microscopy images of the biopsy.
The tool, described online in Nature Communications Nov. 14, was created using data from more than 7,000 diverse tumor samples. The team showed that it could use routinely collected biopsy images to predict genetic variations in breast cancers and to predict patient outcomes.
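The Stanford tool is a deep model trained on thousands of tumor slides, but the core task — predicting many continuous gene-expression values from image-derived features — can be illustrated with a much simpler stand-in. The sketch below uses multi-output linear regression on purely synthetic data; every name, dimension, and number here is invented for illustration and is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 image "tiles", each summarized by 32 features,
# and 5 gene-expression targets generated from a hidden linear map plus noise.
n_tiles, n_features, n_genes = 200, 32, 5
X = rng.normal(size=(n_tiles, n_features))
W_true = rng.normal(size=(n_features, n_genes))
Y = X @ W_true + 0.1 * rng.normal(size=(n_tiles, n_genes))

# Fit a multi-output least-squares map from image features to expression.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict expression levels for a new, held-out tile.
x_new = rng.normal(size=(1, n_features))
y_pred = x_new @ W_hat
```

With enough tiles the fitted map recovers the hidden one closely; the real system replaces the linear map with a deep network and the synthetic features with learned histology representations.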
In a podcast on Monday, Anthropic CEO Dario Amodei warned that a future with human-level AIs is not far away. In fact, it might happen as soon as 2026.
The podcast was hosted by AI researcher and podcaster Lex Fridman, who interviewed Amodei for roughly five hours. The conversation ranged from Anthropic's upcoming projects to the timeline for superintelligent models.
Continue reading “Anthropic CEO Says Human-level AI Models Might Arrive By 2026” »
Researchers have developed a robot capable of performing surgical procedures with the same skill as human doctors by training it using videos of surgeries.
The team from Johns Hopkins and Stanford Universities harnessed imitation learning, a technique that allowed the robot to learn from a vast archive of surgical videos, eliminating the need for programming each move. This approach marks a significant step towards autonomous robotic surgeries, potentially reducing medical errors and increasing precision in operations.
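Imitation learning in its simplest form is supervised: a policy is trained to reproduce the expert's action for each observed state. The following is a minimal sketch using nearest-neighbor behavior cloning on synthetic demonstrations — not the team's actual video-based pipeline, and all data here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "demonstrations": states are 2-D positions, and the expert's
# action is a unit vector steering toward the origin. A real system would
# extract state-action pairs from surgical video instead.
states = rng.uniform(-1, 1, size=(500, 2))
expert_actions = -states / np.linalg.norm(states, axis=1, keepdims=True)

def cloned_policy(state):
    """Behavior cloning via nearest neighbor: copy the expert's action
    from the most similar demonstrated state."""
    idx = np.argmin(np.linalg.norm(states - state, axis=1))
    return expert_actions[idx]

# The cloned policy should steer a new state roughly toward the origin.
s = np.array([0.5, 0.5])
a = cloned_policy(s)
```

In practice the lookup is replaced by a trained network, but the objective is the same: match the expert's action distribution without hand-programming each move.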
Continue reading “Robot That Watched Surgical Videos Now Operates With Human-Level Skill” »
Researchers at the Max Planck Institute for the Science of Light (MPL) have proposed a novel way to entangle optical photons with phonons.
Unveiling faster and smarter reasoning in AI:

Researchers have introduced a breakthrough in AI reasoning for Large Language Models (LLMs): an interpretable contrastive Monte Carlo Tree Search method.
Continue reading “Interpretable Contrastive Monte Carlo Tree Search Reasoning” »
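The method builds on Monte Carlo Tree Search. As background only, here is a minimal plain-UCT sketch on a toy problem (choose 5 bits to maximize the fraction of 1s); this illustrates vanilla MCTS, not the paper's contrastive, interpretable variant, and the problem and constants are invented for illustration.

```python
import math, random

random.seed(0)

# Toy problem: choose 5 bits; reward is the fraction of 1s.
DEPTH = 5

class Node:
    def __init__(self, bits=()):
        self.bits = bits
        self.children = {}   # action (0 or 1) -> Node
        self.visits = 0
        self.value = 0.0     # running mean reward

def rollout(bits):
    """Finish the sequence with random bits and score it."""
    while len(bits) < DEPTH:
        bits = bits + (random.randint(0, 1),)
    return sum(bits) / DEPTH

def uct_select(node, c=1.4):
    # Pick the child maximizing mean value plus an exploration bonus.
    return max(node.children.items(),
               key=lambda kv: kv[1].value +
               c * math.sqrt(math.log(node.visits) / kv[1].visits))

def simulate(root):
    path, node = [root], root
    # Selection/expansion: descend until an unexpanded or terminal node.
    while len(node.bits) < DEPTH:
        if len(node.children) < 2:
            action = len(node.children)   # expand action 0 first, then 1
            node.children[action] = Node(node.bits + (action,))
            node = node.children[action]
            path.append(node)
            break
        _, node = uct_select(node)
        path.append(node)
    reward = rollout(node.bits)
    # Backpropagation: update running means along the visited path.
    for n in path:
        n.visits += 1
        n.value += (reward - n.value) / n.visits

root = Node()
for _ in range(2000):
    simulate(root)

# Greedy decode: follow the most-visited child at each level.
node, best = root, ()
while node.children:
    _, node = max(node.children.items(), key=lambda kv: kv[1].visits)
    best = node.bits
```

After a few thousand simulations the most-visited path converges on the high-reward sequence; LLM reasoning variants replace the random rollout with model-guided evaluation of partial reasoning chains.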
Letting AI systems argue with each other may help expose when a large language model has made mistakes.
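The underlying idea can be sketched very simply: when two independent models answer the same question, disagreement is a cheap signal that at least one has erred. The "models" below are toy stubs invented for illustration, not real LLM calls.

```python
# Toy stand-ins for two models: one is correct, one has a systematic bug.
def model_a(x, y):
    return x + y

def model_b(x, y):
    # Deliberately wrong for large inputs, mimicking a model error.
    return x + y if x < 100 else x + y + 1

def cross_examine(question):
    """Flag a question for review when the two models disagree."""
    x, y = question
    a, b = model_a(x, y), model_b(x, y)
    return {"agree": a == b, "answers": (a, b)}

results = [cross_examine(q) for q in [(3, 4), (250, 1)]]
```

Real debate setups go further — each model critiques the other's reasoning, and a judge (human or model) scores the exchange — but disagreement detection is the starting point.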