Researchers are developing atomically precise memristors for advanced neuromorphic computing systems.

The University of Kansas and University of Houston, backed by $1.8 million from the National Science Foundation’s Future of Semiconductor program (FuSe2), are collaborating to develop atomically tunable memory resistors, known as “memristors.” These advanced components are designed for brain-inspired computing applications and will support workforce development in the semiconductor industry.

Launched in 2023, the FuSe2 program addresses key challenges in semiconductor research and development, with industry partners including Micron, Intel, and Samsung.

To determine the type and severity of a cancer, pathologists typically analyze thin slices of a tumor biopsy under a microscope. But to figure out what genomic changes are driving the tumor’s growth—information that can guide how it is treated—scientists must perform genetic sequencing of the RNA isolated from the tumor, a process that can take weeks and costs thousands of dollars.

Now, Stanford Medicine researchers have developed an artificial intelligence-powered computational program that can predict the activity of thousands of genes within a tumor based only on standard microscopy images of the biopsy.

The tool, described online in Nature Communications on Nov. 14, was created using data from more than 7,000 diverse tumor samples. The team showed that it could use routinely collected biopsy images to predict genetic variations in breast cancers.
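At its core, this kind of tool is a supervised regression from image-derived features to per-gene expression levels. The sketch below illustrates that setup with synthetic data and plain ridge regression; it is not the Stanford team's model, and the feature dimensions, linear map, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real pipeline: each biopsy image is reduced to a
# feature vector (in practice by a deep vision model); the targets are
# per-gene expression levels that would come from RNA sequencing.
n_samples, n_features, n_genes = 200, 64, 10
X = rng.normal(size=(n_samples, n_features))           # image-derived features
W_true = rng.normal(size=(n_features, n_genes))
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, n_genes))  # expression

# Ridge regression: learn a linear map from image features to expression.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)
Y_pred = X @ W_hat

# Per-gene correlation between predicted and "measured" expression.
corrs = [np.corrcoef(Y[:, g], Y_pred[:, g])[0, 1] for g in range(n_genes)]
print(round(float(np.mean(corrs)), 3))
```

Once such a map is trained, predictions cost only a forward pass over an existing image, which is what lets the approach sidestep weeks of sequencing.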

In a podcast on Monday, Anthropic CEO Dario Amodei warned that a future with human-level AIs is not far away. In fact, it might happen as soon as 2026.

The interview, hosted by AI podcaster Lex Fridman, ran for roughly five hours and covered topics ranging from Anthropic's upcoming projects to the timeline for superintelligent models.

Note: "Human-level AI" refers to artificial general intelligence (AGI), a state in which an AI model is as competent as a human in every field. Many companies, including OpenAI, are already working toward it.

Researchers have developed a robot capable of performing surgical procedures with the same skill as human doctors by training it using videos of surgeries.

The team from Johns Hopkins and Stanford Universities harnessed imitation learning, a technique that allowed the robot to learn from a vast archive of surgical videos, eliminating the need for programming each move. This approach marks a significant step towards autonomous robotic surgeries, potentially reducing medical errors and increasing precision in operations.
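Imitation learning of this kind can be sketched as supervised learning on (state, action) pairs extracted from expert demonstrations: the policy learns to reproduce the expert's choices, with no move programmed by hand. The toy example below uses an invented "expert" and a nearest-neighbour cloned policy; it illustrates behavior cloning in general, not the surgical robot's actual training pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert: in a 2-D state, move toward the origin along the
# axis with the largest displacement. Actions are ids in {0, 1, 2, 3}.
def expert_policy(state):
    axis = int(np.argmax(np.abs(state)))
    return 2 * axis + (0 if state[axis] > 0 else 1)

# "Demonstrations": (state, action) pairs, like moves extracted from video.
states = rng.uniform(-1, 1, size=(500, 2))
actions = np.array([expert_policy(s) for s in states])

# Behavior cloning via 1-nearest-neighbour: act like the most similar demo.
def cloned_policy(state):
    nearest = int(np.argmin(np.linalg.norm(states - state, axis=1)))
    return actions[nearest]

# Evaluate agreement with the expert on held-out states.
test_states = rng.uniform(-1, 1, size=(200, 2))
agree = float(np.mean([cloned_policy(s) == expert_policy(s)
                       for s in test_states]))
print(round(agree, 2))
```

Real systems replace the nearest-neighbour lookup with a deep network and the toy states with video frames and instrument kinematics, but the supervised structure is the same.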

Unveiling faster and smarter reasoning in AI:

https://arxiv.org/abs/2410.

Researchers have introduced a breakthrough in AI reasoning, specifically for Large Language Models (LLMs), with a method called SC-MCTS (interpretable contrastive Monte Carlo Tree Search).
Interpretable Contrastive Monte Carlo Tree Search Reasoning — zitian-gao/SC-MCTS.
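For readers unfamiliar with the search loop such methods build on, the sketch below is a minimal Monte Carlo Tree Search with UCT on a toy binary decision tree. It shows the four classic phases (selection, expansion, simulation, backpropagation); it is plain MCTS, not the paper's SC-MCTS, and the reward table and parameters are invented:

```python
import math
import random

random.seed(0)

# Toy problem: states are bit-string paths; leaves at depth DEPTH pay a
# fixed random reward. MCTS should steer the first move toward the
# higher-value subtree.
DEPTH = 8
leaf_reward = {i: random.random() for i in range(2 ** DEPTH)}

def rollout(path):
    # Simulation: random playout to a leaf, then return its reward.
    while len(path) < DEPTH:
        path = path + [random.randint(0, 1)]
    return leaf_reward[int("".join(map(str, path)), 2)]

class Node:
    def __init__(self, path):
        self.path, self.children = path, {}
        self.visits, self.value = 0, 0.0

def uct(parent, child, c=1.4):
    # Upper Confidence bound for Trees: exploit mean value, explore rarity.
    return child.value / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def search(iterations=2000):
    root = Node([])
    for _ in range(iterations):
        node, trail = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.path) < DEPTH and len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: uct(node, ch))
            trail.append(node)
        # Expansion: add one untried child.
        if len(node.path) < DEPTH:
            move = 0 if 0 not in node.children else 1
            node.children[move] = node = Node(node.path + [move])
            trail.append(node)
        # Simulation + backpropagation.
        reward = rollout(node.path)
        for n in trail:
            n.visits += 1
            n.value += reward
    # Best first action = most visited child of the root.
    return max(root.children, key=lambda m: root.children[m].visits)

best_first_move = search()
```

LLM reasoning variants replace the toy leaves with chains of thought and the random rollout with model-scored continuations, but the select/expand/simulate/backpropagate skeleton is unchanged.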

David Sacks, former COO of PayPal, says that OpenAI recently gave investors a product roadmap update: its AI models will soon reach PhD-level reasoning and act as agents that use tools on a user's behalf, much as a human would.