
The problem of intelligence — its nature, how it is produced by the brain and how it could be replicated in machines — is a deep and fundamental problem that cuts across multiple scientific disciplines. Philosophers have studied intelligence for centuries, but it is only in the last several decades that developments in science and engineering have made questions such as these approachable: How does the mind process sensory information to produce intelligent behavior, and how can we design intelligent computer algorithms that behave similarly? What is the structure and form of human knowledge — how is it stored, represented, and organized? How do human minds arise through processes of evolution, development, and learning? How are the domains of language, perception, social cognition, planning, and motor control combined and integrated? Are there common principles of learning, prediction, decision, or planning that span across these domains?

This course explores these questions with an approach that integrates cognitive science, which studies the mind; neuroscience, which studies the brain; and computer science and artificial intelligence, which study the computations needed to develop intelligent machines. Faculty and postdoctoral associates affiliated with the Center for Brains, Minds and Machines discuss current research on these questions.

A new algorithm may make robots safer by making them more aware of human inattentiveness. In computerized simulations of packaging and assembly lines where humans and robots work together, the algorithm, developed to account for human carelessness, improved safety by up to 80% and efficiency by up to 38% compared to existing methods.

The work is reported in IEEE Transactions on Systems, Man, and Cybernetics: Systems.

“There are a large number of accidents that are happening every day due to carelessness—most of them, unfortunately, from human errors,” said lead author Mehdi Hosseinzadeh, assistant professor in Washington State University’s School of Mechanical and Materials Engineering.
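The paper's actual control scheme is not reproduced here, but the core idea of modulating a robot's behavior by an estimate of human attentiveness can be sketched. Below is a minimal, hypothetical illustration in Python; the `safe_speed` function, its parameters, and the random attention signal are placeholders of ours, not details from the study:

```python
import random

def safe_speed(nominal_speed, attention, min_scale=0.2):
    """Scale the robot's commanded speed by an estimated human
    attention level (1.0 = fully attentive, 0.0 = fully distracted)."""
    scale = min_scale + (1.0 - min_scale) * attention
    return nominal_speed * scale

# Toy loop: the robot slows whenever the (noisy) attention estimate
# drops, trading some throughput for a larger safety margin.
for step in range(5):
    attention = random.uniform(0.0, 1.0)  # stand-in for a real estimator
    print(f"attention={attention:.2f} -> speed={safe_speed(0.5, attention):.2f} m/s")
```

The design point this sketch captures is simply that safety and efficiency trade off through the attention estimate: a better estimator lets the robot run at full speed more of the time without reducing the safety margin.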

The US National Institute of Standards and Technology has released Federal Information Processing Standards (FIPS) publications for three quantum-resistant cryptographic algorithms.

In a landmark announcement, the National Institute of Standards and Technology (NIST) has published its first set of post-quantum cryptography (PQC) standards. This announcement serves as an inflection point in modern cybersecurity: as the global benchmark for cryptography, the NIST standards signal to enterprises, government agencies, and supply chain vendors that the time has come to make the world’s information security systems resistant to future cryptographically relevant quantum computers.


Abstract: In ecological systems, be it a garden or a galaxy, populations evolve from some initial value (say zero) up to a steady-state equilibrium, when the mean numbers of births and deaths per unit time are equal. This equilibrium point is a function of the birth and death rates, as well as the carrying capacity of the ecological system itself. The growth curve is S-shaped, saturating at the carrying capacity for large birth-to-death rate ratios and tending to zero at the other end. We argue that our astronomical observations appear inconsistent with a cosmos saturated with ETIs, and thus SETI optimists are left presuming that the true population is somewhere along the transitional part of this S-curve. Since the birth and death rates are a priori unbounded, we argue that this presents a fine-tuning problem. Further, we show that if the birth-to-death rate ratio is assumed to have a log-uniform prior distribution, then the probability distribution of the ecological filling fraction is bimodal, peaking at zero and unity. Indeed, the resulting distribution is formally the classic Haldane prior, conceived to describe the prior expectation of a Bernoulli experiment, such as a technological intelligence developing (or not) on a given world. Our results formally connect the Drake Equation to the birth-death formalism and the treatment of ecological carrying capacity, and relate both to the Haldane perspective.

From: David Kipping.
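The bimodality claim is easy to check numerically. The sketch below assumes the logistic steady state implied by the abstract, F = max(0, 1 - 1/R) for birth-to-death rate ratio R (from dN/dt = λN(1 - N/K) - μN at equilibrium), with arbitrary placeholder bounds on the log-uniform prior:

```python
import numpy as np

# Sample R = birth/death ratio from a log-uniform prior, then compute
# the steady-state filling fraction F = N*/K = max(0, 1 - 1/R).
rng = np.random.default_rng(0)
log10_R = rng.uniform(-3, 3, size=100_000)  # prior bounds are placeholders
R = 10.0 ** log10_R
F = np.clip(1.0 - 1.0 / R, 0.0, 1.0)

# The mass piles up at the two extremes, echoing the Haldane prior
# p(F) ~ 1/(F(1-F)): an essentially empty or essentially full cosmos.
hist, edges = np.histogram(F, bins=10, range=(0.0, 1.0))
for count, lo in zip(hist, edges[:-1]):
    print(f"F in [{lo:.1f}, {lo + 0.1:.1f}): {count}")
```

Running this shows almost all samples landing in the first and last bins: every R below 1 collapses to F = 0, while the log-uniform tail of large R pushes F toward 1, with little mass on the transitional part of the S-curve.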

“These results confirm that computerized tongue analysis is a secure, efficient, user-friendly and affordable method for disease screening that backs up modern methods with a centuries-old practice.”


This technology could be aah-mazing!

Researchers in Iraq and Australia say they have developed a computer algorithm that can analyze the color of a person’s tongue to detect medical conditions in real time, with 98% accuracy.

“Typically, people with diabetes have a yellow tongue; cancer patients a purple tongue with a thick greasy coating; and acute stroke patients present with an unusually shaped red tongue,” explained senior study author Ali Al-Naji, who teaches at Middle Technical University in Baghdad and the University of South Australia.
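To make the color-to-condition mapping concrete, here is a toy sketch, not the researchers’ algorithm: it maps the mean hue of a segmented tongue image to the coarse color classes the article mentions. The thresholds are invented for illustration; the published system trains classifiers on real imaging data.

```python
import colorsys

def classify_tongue_color(mean_rgb):
    """Map a mean RGB color to a coarse tongue-color class (toy example)."""
    r, g, b = (c / 255.0 for c in mean_rgb)
    hue_deg = colorsys.rgb_to_hsv(r, g, b)[0] * 360.0
    if 40 <= hue_deg < 80:
        return "yellowish"   # e.g. flagged for diabetes screening
    if 260 <= hue_deg < 320:
        return "purplish"    # e.g. flagged for further cancer workup
    if hue_deg < 20 or hue_deg >= 340:
        return "reddish"     # e.g. flagged for stroke assessment
    return "unclassified"

print(classify_tongue_color((210, 180, 60)))   # -> yellowish
print(classify_tongue_color((150, 60, 160)))   # -> purplish
```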

Three new cryptographic algorithms to bolster global cybersecurity against future attacks using quantum technologies were published today by the National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce. The new standards are designed for two tasks: general encryption and digital signatures.

These new standards are the culmination of an eight-year effort by the agency to tap the best minds in cybersecurity to devise the next generation of cryptography, strong enough to withstand quantum computers. Experts expect quantum computers capable of breaking current cryptographic algorithms within a decade. The new standards, the first released by NIST’s post-quantum cryptography (PQC) standardization project, are published on the department’s website. The documents contain the algorithms’ computer code, instructions for how to implement them in products and in encryption systems, and use cases for each.
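The general-encryption standard, FIPS 203 (ML-KEM), is a key-encapsulation mechanism: one party publishes a public key, the other uses it to produce a ciphertext and a shared secret, and the first party recovers the same secret from the ciphertext. A minimal sketch of that flow is shown below using the open-source liboqs-python bindings, assuming a liboqs build with ML-KEM enabled; NIST’s documents specify the algorithm itself, not this particular library or API.

```python
import oqs  # liboqs-python bindings (assumed installed with ML-KEM support)

ALG = "ML-KEM-768"  # mid-level parameter set from FIPS 203

with oqs.KeyEncapsulation(ALG) as receiver, \
     oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()               # receiver publishes this
    ciphertext, shared_secret = sender.encap_secret(public_key)
    recovered_secret = receiver.decap_secret(ciphertext)   # uses receiver's secret key
    assert recovered_secret == shared_secret               # both sides now share a key
```

In practice the shared secret would seed a symmetric cipher for the actual data; the quantum-resistant algorithm is only used to establish the key.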