
The library of two-dimensional (2D) layered materials keeps growing, from basic 2D materials to metal chalcogenides. Unlike their bulk counterparts, 2D layered materials possess novel features that offer great potential for next-generation electronic and optoelectronic devices.

Doping engineering is an important and effective way to control the peculiar properties of 2D materials for applications in logic circuits, sensors, and optoelectronic devices. However, conventional doping techniques require additional chemicals that may contaminate the materials, and they can only be applied at specific steps during material synthesis or device fabrication.

In a new paper published in eLight, a team of scientists led by Professor Han Zhang of Shenzhen University and Professor Paras N. Prasad of the University at Buffalo studied the implementation of neutron-transmutation doping to manipulate the properties of 2D materials. Their paper demonstrates this doping approach for the first time.

For the first time, researchers have demonstrated an artificial organic neuron, a nerve cell, that can be integrated with a living plant and an artificial organic synapse. Both the neuron and the synapse are made from printed organic electrochemical transistors.

On connecting to the carnivorous Venus flytrap, the electrical pulses from the artificial nerve cell can cause the plant’s leaves to close, even though no fly has entered the trap. Organic semiconductors can conduct both electrons and ions, which helps mimic the ion-based mechanism of pulse (action potential) generation in plants. In this case, a small electric pulse of less than 0.6 V can induce action potentials in the plant, which in turn cause the leaves to close.

“We chose the Venus flytrap so we could clearly show how we can steer the biological system with the artificial organic system and get them to communicate in the same language,” says Simone Fabiano, associate professor and principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Linköping University, Campus Norrköping.

As our technology needs grow and the Internet of Things increasingly connects our devices and sensors, figuring out how to provide power in remote locations has become an expanding field of research.

Professor Seokheun “Sean” Choi—a faculty member in the Department of Electrical and Computer Engineering at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science—has been working for years on biobatteries, which generate electricity through bacterial interaction.

One problem he encountered: The batteries had a lifespan limited to a few hours. That could be useful in some scenarios but not for any kind of long-term monitoring in remote locations.

Quantum computers are one of the key future technologies of the 21st century. Researchers at Paderborn University, working under Professor Thomas Zentgraf and in cooperation with colleagues from the Australian National University and Singapore University of Technology and Design, have developed a new technology for manipulating light that can be used as a basis for future optical quantum computers. The results have now been published in Nature Photonics.

New optical elements for manipulating light will allow for more advanced applications in modern information technology, particularly in quantum computers. However, a major remaining challenge is non-reciprocal light propagation through nanostructured surfaces, that is, surfaces that have been patterned at the nanoscale.

Professor Thomas Zentgraf, head of the working group for ultrafast nanophotonics at Paderborn University, explains that “in reciprocal propagation, light can take the same path forward and backward through a structure; however, non-reciprocal propagation is comparable to a one-way street where it can only spread out in one direction.”

An international team of scientists, led by the University of Leeds, has assessed how robotics and autonomous systems might facilitate or impede the delivery of the UN Sustainable Development Goals (SDGs).

Their findings identify key opportunities and key threats that need to be considered while developing, deploying and governing robotics and autonomous systems.

The key opportunities robotics and autonomous systems present are through autonomous task completion, supporting human activities, fostering innovation, and enhancing and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating , diverting resources from tried-and-tested solutions, and reducing freedom and privacy through inadequate governance.

Artificial intelligence (AI) and machine learning techniques have proved to be very promising for completing numerous tasks, including those that involve processing and generating language. Language-related machine learning models have enabled the creation of systems that can interact and converse with humans, including chatbots, smart assistants, and smart speakers.

To tackle dialog-oriented tasks, language models should be able to learn high-quality dialog representations: representations that summarize both the ideas expressed by the two conversing parties on specific topics and the way their dialogs are structured.

Researchers at Northwestern University and AWS AI Labs have recently developed a self-supervised learning model that can learn effective dialog representations for different types of dialogs. This model, introduced in a paper pre-published on arXiv, could be used to develop more versatile and better-performing dialog systems using a limited amount of training data.
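The paper itself is not reproduced here, but the general idea of self-supervised dialog representation learning can be sketched generically. The snippet below is a minimal, illustrative example of one common approach, contrastive learning with dropout-based augmentation (SimCSE-style); it is an assumption about the family of methods rather than the authors' actual model, and the encoder choice, separator format, and temperature are placeholders.

```python
# Minimal sketch of contrastive self-supervised dialog representation learning
# (SimCSE-style dropout augmentation). Illustrative stand-in, not the
# architecture or objective from the Northwestern/AWS AI Labs paper.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout active so two passes give two "views" of each dialog

dialogs = [
    "user: my package never arrived | agent: sorry, let me check the tracking",
    "user: how do I reset my password | agent: use the link on the login page",
]
batch = tokenizer(dialogs, padding=True, truncation=True, return_tensors="pt")

# Two stochastic forward passes -> two embeddings per dialog (CLS token).
z1 = encoder(**batch).last_hidden_state[:, 0]
z2 = encoder(**batch).last_hidden_state[:, 0]

# InfoNCE objective: the two views of a dialog should be closer to each other
# than to the views of any other dialog in the batch.
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
labels = torch.arange(sim.size(0))
loss = F.cross_entropy(sim, labels)
loss.backward()  # in practice, wrap in an optimizer loop over a large dialog corpus
```

In a real setup, the encoder would be trained over a large unlabeled dialog corpus, and the resulting embeddings would then be reused by downstream dialog systems with only a small amount of labeled data.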

An engineer from the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can distinguish functions of speech in transcripts of dialogs outputted by language understanding, or LU, systems in an approach that could eventually help computers “understand” spoken or written text in much the same way that humans do.

Developed by CLSP Assistant Research Scientist Piotr Zelasko, the new model identifies the intent behind words and organizes them into categories such as “Statement,” “Question,” or “Interruption” in the final transcript, a task called “dialog act recognition.” By providing other models with a more organized and segmented version of text to work with, Zelasko’s model could become a first step in making sense of a conversation, he said.

“This new method means that LU systems no longer have to deal with huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, which are saying very specific things, like a question or interruption. My model enables these systems to work where they might have otherwise failed,” said Zelasko, whose study appeared recently in Transactions of the Association for Computational Linguistics.
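As a rough illustration of what dialog act recognition does (not of Zelasko's actual model, which works on full LU transcripts with a far richer architecture), the toy sketch below labels individual utterances as “Statement,” “Question,” or “Interruption” using a simple bag-of-words classifier. The training examples and the three categories are invented for demonstration; real corpora such as Switchboard label thousands of utterances with dozens of dialog act tags.

```python
# Toy illustration of dialog act recognition: assign each utterance a dialog act label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical mini training set (order of labels matches the utterances).
utterances = [
    "I think the meeting went well",        # Statement
    "what time does the store close",       # Question
    "wait, sorry, can I jump in here",      # Interruption
    "the report is due on Friday",          # Statement
    "did you send the invoice",             # Question
    "hold on, before you continue",         # Interruption
]
acts = ["Statement", "Question", "Interruption"] * 2

# Bag-of-words features plus a linear classifier stand in for the real model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(utterances, acts)

# Segment a new dialog into labeled expressions instead of one unstructured chunk.
dialog = [
    "so the deadline moved to Monday",
    "really, who decided that",
    "sorry to interrupt, but we have a call now",
]
print(list(zip(dialog, model.predict(dialog))))
```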

As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that these robots can safely operate around humans. In recent years, they have introduced various approaches for estimating the positions and predicting the movements of robots in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil have recently created a new deep learning model to estimate the pose of robotic arms and predict their movements. This model, introduced in a paper pre-published on arXiv, is specifically designed to enhance the safety of robots while they are collaborating or interacting with humans.

“Motivated by the need to anticipate accidents during human-robot interaction (HRI), we explore a framework that improves the safety of people working in close proximity to robots,” Djamel H. Sadok, one of the researchers who carried out the study, told TechXplore. “Pose detection is seen as an important component of the overall solution. To this end, we propose a new architecture for Pose Detection based on Self-Calibrated Convolutions (SCConv) and Extreme Learning Machine (ELM).”
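For readers unfamiliar with the building block named in the quote, the sketch below implements a generic self-calibrated convolution (SCConv) layer in PyTorch, following the design published by Liu et al. (CVPR 2020). The channel split, pooling rate, and kernel sizes are illustrative defaults, not the configuration used in the UFPE architecture, and the ELM classifier stage is omitted.

```python
# Generic self-calibrated convolution (SCConv) block, after Liu et al. (CVPR 2020).
# Illustrative defaults only; not the exact pose-detection architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCConv(nn.Module):
    def __init__(self, channels, pooling_rate=4):
        super().__init__()
        half = channels // 2
        self.half = half
        self.pool = nn.AvgPool2d(kernel_size=pooling_rate, stride=pooling_rate)
        self.k2 = nn.Conv2d(half, half, 3, padding=1)  # low-resolution context
        self.k3 = nn.Conv2d(half, half, 3, padding=1)  # features to be calibrated
        self.k4 = nn.Conv2d(half, half, 3, padding=1)  # fuse after calibration
        self.k1 = nn.Conv2d(half, half, 3, padding=1)  # plain convolution branch

    def forward(self, x):
        x1, x2 = torch.split(x, self.half, dim=1)
        # Self-calibration branch: gate full-resolution features with attention
        # computed from a downsampled view of the same features.
        context = F.interpolate(self.k2(self.pool(x1)), size=x1.shape[2:],
                                mode="bilinear", align_corners=False)
        gate = torch.sigmoid(x1 + context)
        y1 = self.k4(self.k3(x1) * gate)
        # Plain branch preserves the original spatial detail.
        y2 = self.k1(x2)
        return torch.cat([y1, y2], dim=1)

# Example: a 64-channel feature map from a pose-estimation backbone.
features = torch.randn(1, 64, 56, 56)
print(SCConv(64)(features).shape)  # torch.Size([1, 64, 56, 56])
```

The self-calibration branch computes its gate from a downsampled view of the features, so each output location incorporates context from a larger receptive field without enlarging the convolution kernels, which is why SCConv is attractive for pose estimation backbones.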

An estimated one-quarter of adults in the U.S. have nonalcoholic fatty liver disease (NAFLD), an excess of fat in liver cells that can cause chronic inflammation and liver damage, increasing the risk of liver cancer. Now, UT Southwestern researchers have developed a simple blood test to predict which NAFLD patients are most likely to develop liver cancer.

“This test lets us noninvasively identify who should be followed most closely with regular ultrasounds to screen for cancer,” said Yujin Hoshida, M.D., Ph.D., Associate Professor of Internal Medicine in the Division of Digestive and Liver Diseases at UTSW, a member of the Harold C. Simmons Comprehensive Cancer Center, and senior author of the paper published in Science Translational Medicine.

NAFLD is rapidly emerging as a major cause of chronic liver disease in the United States. With rising rates of obesity and diabetes, its incidence is expected to keep growing. Studies have found that people with NAFLD have up to a seventeenfold increased risk of liver cancer. For NAFLD patients believed to be most at risk of cancer, doctors recommend a demanding screening program involving a liver ultrasound every six months. But pinpointing which patients are in this group is challenging and has typically involved invasive biopsies.

An international team of researchers has demonstrated a technique that allows them to align gold nanorods using magnetic fields, while preserving the underlying optical properties of the gold nanorods.

“Gold nanorods are of interest because they can absorb and scatter specific wavelengths of light, making them attractive for use in applications such as biomedical imaging, sensors, and other technologies,” says Joe Tracy, corresponding author of a paper on the work and a professor of materials science and engineering at North Carolina State University.

It is possible to tune the wavelengths of light absorbed and scattered by engineering the dimensions of the gold nanorods. Magnetically controlling their orientation makes it possible to further control and modulate which wavelengths the nanorods respond to.