
Live human brain tissue — generously donated by brain surgery patients with epilepsy or tumors — is yielding incredible #neuroscience insights. A study on cells…


As part of an international effort to map cell types in the brain, scientists identified increased diversity of neurons in regions of the human brain that expanded during our evolution.

New research by a City College of New York team has uncovered a novel way to combine two different states of matter. In one of the first demonstrations of its kind, topological photons—light—have been combined with lattice vibrations, also known as phonons, to manipulate their propagation in a robust and controllable way.

The study utilized topological photonics, an emergent direction in photonics that leverages ideas from the mathematical field of topology about conserved quantities—topological invariants—that remain constant when a geometric object is continuously deformed. One of the simplest examples of such an invariant is the number of holes, which, for instance, makes a donut and a mug equivalent from a topological point of view. These topological properties endow photons with helicity, meaning the photons spin as they propagate, leading to unique and unexpected characteristics, such as robustness to defects and unidirectional propagation along interfaces between topologically distinct materials. Thanks to interactions with vibrations in crystals, these helical photons can then be channeled along with those vibrations.
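
The donut-and-mug claim can be made precise with a standard result from topology (our gloss, not part of the study): for a closed orientable surface, the number of holes, the genus g, fixes the Euler characteristic.

```latex
% Euler characteristic of a closed orientable surface of genus g (number of holes):
\chi = 2 - 2g
% A donut (torus) and a mug surface each have one hole:
g = 1 \;\Rightarrow\; \chi = 2 - 2(1) = 0
% while a sphere has none:
g = 0 \;\Rightarrow\; \chi = 2
```

Since continuous deformations cannot change g or \chi, a donut can be deformed into a mug but never into a sphere.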

The implications of this work are broad, in particular allowing researchers to advance Raman spectroscopy, which is used to determine the vibrational modes of molecules. The research also holds promise for vibrational spectroscopy—also known as infrared spectroscopy—which measures the interaction of infrared radiation with matter through absorption, emission, or reflection. This can then be used to identify and characterize molecules.
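
For background (our gloss, not from the paper): Raman spectroscopy identifies vibrational modes from the shift in wavenumber between the incident and scattered light.

```latex
% Raman shift, conventionally reported in wavenumbers (cm^{-1}):
\Delta\tilde{\nu} = \frac{1}{\lambda_{\mathrm{incident}}} - \frac{1}{\lambda_{\mathrm{scattered}}}
% Each vibrational mode of a molecule produces a characteristic shift.
```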

Google AI Introduces FLAN: An Instruction-Tuned Generalizable Language (NLP) Model To Perform Zero-Shot Tasks


To generate meaningful text, a machine learning model needs a great deal of knowledge about the world and the ability to abstract over it. While language models trained to accomplish this become increasingly capable of acquiring such knowledge automatically as they scale, it remains unclear how to unlock that knowledge and apply it to specific real-world tasks.

Fine-tuning is one well-established method for doing so. It involves training a pretrained model such as BERT or T5 on a labeled dataset to adapt it to a downstream task. However, fine-tuning requires a large number of training examples and a stored set of model weights for each downstream task, which is not always feasible, especially for large models.
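
As an illustration of conventional task-specific fine-tuning (a minimal sketch, not code from the Google study; the t5-small checkpoint, toy example, and learning rate are placeholder choices), using the Hugging Face transformers library:

```python
# Minimal fine-tuning sketch: adapt a pretrained seq2seq model to one downstream task.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One labeled example standing in for an entire downstream dataset.
inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
labels = tokenizer("Hallo, Welt!", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
optimizer.zero_grad()
loss = model(input_ids=inputs.input_ids, labels=labels).loss  # seq2seq cross-entropy
loss.backward()
optimizer.step()  # the updated weights must then be stored for this task
```

Note that a full copy of the adapted weights has to be kept for every downstream task, which is exactly the storage cost flagged above.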

A recent Google study looks into a simple technique known as instruction fine-tuning, sometimes called instruction tuning. This entails fine-tuning a model to make it more amenable to performing NLP (natural language processing) tasks in general, rather than one specific task.
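
The core idea can be sketched in a few lines of Python: many tasks are rephrased as natural-language instructions and mixed into a single fine-tuning set, so the model learns to follow instructions rather than one task format. The templates below are hypothetical illustrations, not FLAN's actual prompts.

```python
# Hypothetical instruction templates in the spirit of instruction tuning:
# every task becomes a natural-language instruction, and all tasks are mixed together.
TEMPLATES = {
    "sentiment": "Is the sentiment of this review positive or negative? {text}",
    "nli": "Premise: {premise} Hypothesis: {hypothesis} Does the premise entail the hypothesis?",
    "translation": "Translate this sentence to German: {text}",
}

def to_instruction(task: str, **fields: str) -> str:
    """Render one raw example as an instruction-following training example."""
    return TEMPLATES[task].format(**fields)

mixed_dataset = [
    to_instruction("sentiment", text="A delightful film."),
    to_instruction("translation", text="Good morning."),
    to_instruction("nli", premise="A dog runs.", hypothesis="An animal moves."),
]
```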

Circa 2020


Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
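
To make "survival of the fittest" concrete, here is a toy evolutionary loop (our sketch, far simpler than the paper's program-evolving system): candidates are scored, the fittest survive, and mutated copies fill the next generation.

```python
import random

def fitness(x: float) -> float:
    # Toy objective: evolve candidates toward x = 3.
    return -(x - 3.0) ** 2

population = [random.uniform(-10.0, 10.0) for _ in range(20)]
for generation in range(50):
    # Selection: keep the fittest half ("survival of the fittest").
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor spawns a slightly perturbed child.
    children = [x + random.gauss(0.0, 0.5) for x in survivors]
    population = survivors + children

print(f"Best candidate after evolution: {max(population, key=fitness):.3f}")
```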

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance, spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
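
"Altering the strength of connections" boils down to nudging weights to reduce prediction error; a single artificial neuron trained by gradient descent (a self-contained sketch with made-up numbers, not code from the paper) shows the mechanism:

```python
import numpy as np

# One artificial neuron: prediction = sigmoid(w . x + b).
rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0
x, target = np.array([0.5, -1.2]), 1.0  # a single toy training example

for step in range(100):
    pred = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # forward pass
    grad = pred - target                        # gradient of cross-entropy w.r.t. the logit
    w -= 0.1 * grad * x                         # strengthen/weaken each connection
    b -= 0.1 * grad

print(f"Prediction after training: {pred:.3f}")  # approaches the target of 1.0
```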

As well as high-tech greenhouses, vertical farms, where food is grown indoors in vertically stacked beds without soil or natural light, are growing in popularity. NextOn operates a vertical farm in an abandoned tunnel beneath a mountain in South Korea. US company AeroFarms plans to build a 90,000-square-foot indoor vertical farm in Abu Dhabi, and Berlin-based Infarm has brought modular vertical farms directly to grocery stores, growing fresh produce in Tokyo stores.


AppHarvest says its greenhouse in Morehead, Kentucky, uses robotics and artificial intelligence to grow millions of tons of tomatoes, using 90% less water than in open fields.