We are on the cusp of a paradigm shift brought by generative AI — but it isn’t about making creativity “quick and easy.” Generative technology opens new heights of human expression and helps creators find their authentic voices.
How we create is changing. The blog you read earlier today may have been made with generative AI. Within 10 years, most creative content will be produced with generative technologies.
Researchers at QuTech, a collaboration between the Delft University of Technology and TNO, have engineered a record number of six silicon-based spin qubits in a fully interoperable array. Importantly, the qubits can be operated with a low error rate, achieved through a new chip design, an automated calibration procedure, and new methods for qubit initialization and readout. These advances will contribute to a scalable quantum computer based on silicon. The results are published in Nature today.
Different materials can be used to produce qubits, the quantum analog of the classical computer's bit, but no one knows which material will turn out to be best for building a large-scale quantum computer. To date, there have only been smaller demonstrations of silicon quantum chips with high-quality qubit operations. Now, researchers from QuTech, led by Prof. Lieven Vandersypen, have produced a six-qubit chip in silicon that operates with low error rates. This is a major step towards a fault-tolerant quantum computer using silicon.
To make the qubits, individual electrons are placed in a linear array of six "quantum dots" spaced 90 nanometers apart. The array of quantum dots is made in a silicon chip with structures that closely resemble the transistor, a common component in every computer chip. A quantum mechanical property called spin is used to define a qubit, with its orientation defining the logical 0 or 1 state. The team used finely tuned microwave radiation, magnetic fields, and electric potentials to control and measure the spin of individual electrons and make them interact with each other.
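To make the spin-as-qubit picture concrete, here is a minimal numerical sketch of a single spin qubit treated as a two-level system, where a resonant microwave pulse rotates the spin between the 0 and 1 states. This is standard textbook quantum mechanics written for illustration, not QuTech's control software, and the rotation angles are arbitrary choices.

```python
import numpy as np

# A single spin qubit as a two-level system: spin-up = |0>, spin-down = |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X operator
I2 = np.eye(2, dtype=complex)

def rx(theta):
    """Rotation about the x-axis, the effect of a resonant microwave pulse of a given duration."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

# A pi-pulse flips the spin from 0 to 1; a pi/2-pulse creates an equal superposition.
for theta in (np.pi, np.pi / 2):
    state = rx(theta) @ ket0
    probs = np.abs(state) ** 2
    print(f"theta = {theta:.2f}: P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")
```

In the experiment itself, the pulse duration and frequency set the effective rotation angle, and two-qubit interactions are switched on by tuning the electric potentials between neighboring dots.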
Have you ever been faced with a problem where you had to find an optimal solution out of many possible options, such as finding the quickest route to a certain place, considering both distance and traffic?
If so, the problem you were dealing with is what is formally known as a “combinatorial optimization problem.” While mathematically formulated, these problems are common in the real world and spring up across several fields, including logistics, network routing, machine learning, and materials science.
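As a toy illustration of what "combinatorial" means here, the sketch below brute-forces every visiting order for a handful of stops, using made-up travel times that stand in for distance plus traffic. Real solvers avoid this exhaustive search, which grows factorially with the number of stops; the location names and numbers are purely hypothetical.

```python
from itertools import permutations

# Hypothetical travel times (minutes) between locations, already reflecting
# both distance and current traffic.
cost = {
    ("home", "A"): 12, ("home", "B"): 9,  ("home", "C"): 15,
    ("A", "B"): 7,     ("A", "C"): 10,    ("B", "C"): 6,
    ("A", "work"): 11, ("B", "work"): 14, ("C", "work"): 8,
}

def travel(a, b):
    # Travel times are symmetric: look up the pair in either direction.
    return cost.get((a, b), cost.get((b, a)))

stops = ["A", "B", "C"]

# Enumerate every possible visiting order and keep the cheapest one.
best = min(
    permutations(stops),
    key=lambda order: sum(
        travel(x, y) for x, y in zip(("home",) + order, order + ("work",))
    ),
)
print("Cheapest route:", " -> ".join(("home",) + best + ("work",)))
```

With only three stops there are six orderings to check; with thirty stops there are more orderings than atoms in a human body, which is why dedicated optimization methods exist.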
Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom and bust cycles over the last 66 years. Is the current boom different?
The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven’t yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches.
A new field of science has been emerging at the intersection of neuroscience and high-performance computing — this is the takeaway from the 2022 BrainComp conference, which took place in Cetraro, Italy from the 19th to the 22nd of September. The meeting, which featured international experts in brain mapping, machine learning, simulation, research infrastructures, neuro-derived hardware, neuroethics and more, strengthened the current collaborations in this emerging field and forged new ones.
Now in its 5th edition, BrainComp first started in 2013 and is jointly organised by the Human Brain Project and the EBRAINS digital research infrastructure, the University of Calabria in Italy, the Heinrich Heine University of Düsseldorf, and Forschungszentrum Jülich in Germany. It is attended by researchers from inside and outside the Human Brain Project. This year was dedicated to the computational challenges of brain connectivity. The brain is the most complex system in the observable universe due to the tight connections between areas, down to the wiring of individual neurons: decoding this complexity through neuroscientific and computing advances benefits both fields.
Hosted by the organising committee of Katrin Amunts, Scientific Research Director of the HBP; Thomas Lippert, Leader of EBRAINS Computing Services at the Jülich Supercomputing Centre; and Lucio Grandinetti from the University of Calabria, the sessions covered a variety of topics over four days.
A well-known game studio is allegedly using AI voices for a video game. A clarification includes a commitment to human creativity. It's another footnote in the debate over the value of human labor, a debate that will only become more common in the future.
It's the very debate that has erupted so vehemently around AI-generated images in recent months. Are AI images art? If so, can they be equated with human art? Are they detrimental to art? Are they even plagiarism, because the AI examines human works during training, in the inspiration phase so to speak, and then imitates traces of them?
Open source is fertile ground for transformative software, especially in cutting-edge domains like artificial intelligence (AI) and machine learning. The open source ethos and collaboration tools make it easier for teams to share code and data and build on the success of others.
This article looks at 13 open source projects that are remaking the world of AI and machine learning. Some are elaborate software packages that support new algorithms. Others are more subtly transformative. All of them are worth a look.
Neurodegenerative diseases—like amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease), Alzheimer's, and Parkinson's—are complicated, chronic ailments that can present with a variety of symptoms, worsen at different rates, and have many underlying genetic and environmental causes, some of which are unknown. ALS, in particular, affects voluntary muscle movement and is always fatal, but while most people survive for only a few years after diagnosis, others live with the disease for decades. Manifestations of ALS can also vary significantly; slower disease development often correlates with onset in the limbs, affecting fine motor skills, while the more serious bulbar ALS impacts swallowing, speaking, breathing, and mobility. Therefore, understanding the progression of diseases like ALS is critical to enrollment in clinical trials, analysis of potential interventions, and discovery of root causes.
However, assessing disease evolution is far from straightforward. Current clinical studies typically assume that health declines along a linear downward trajectory on a symptom rating scale, and use these linear models to evaluate whether drugs are slowing disease progression. Yet data indicate that ALS often follows nonlinear trajectories, with periods where symptoms are stable alternating with periods when they are rapidly changing. Since data can be sparse, and health assessments often rely on subjective rating metrics measured at uneven time intervals, comparisons across patient populations are difficult. These heterogeneous data and progression patterns, in turn, complicate analyses of intervention effectiveness and can potentially mask disease origins.
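As a rough sketch of why the linear assumption matters, the toy example below fits both a single straight line and a simple two-segment trajectory to an unevenly sampled symptom scale. The visit times and scores are made up for illustration and this is not the researchers' model; the point is only that a change-point fit can capture a plateau-then-decline pattern that a straight line smooths over.

```python
import numpy as np

# Hypothetical symptom-scale scores (higher = healthier), sampled at uneven
# visit times: a stable plateau followed by a rapid decline.
t = np.array([0, 2, 5, 9, 12, 14, 18, 24], dtype=float)   # months since baseline
score = np.array([44, 43, 43, 42, 35, 30, 22, 15], dtype=float)

# The common clinical assumption: one linear trajectory for the whole course.
slope, intercept = np.polyfit(t, score, 1)
linear_sse = np.sum((slope * t + intercept - score) ** 2)

def two_segment_sse(split):
    """Total squared error of two separate linear fits, split at a change point."""
    sse = 0.0
    for mask in (t <= split, t > split):
        if mask.sum() >= 2:
            p = np.polyfit(t[mask], score[mask], 1)
            sse += np.sum((np.polyval(p, t[mask]) - score[mask]) ** 2)
    return sse

best_split = min(t[1:-1], key=two_segment_sse)
print(f"single-line SSE: {linear_sse:.1f}")
print(f"two-segment SSE (change point at month {best_split:.0f}): {two_segment_sse(best_split):.1f}")
```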
In a new paper published today in the journal Nature Ecology and Evolution, scientists have estimated the conservation status of nearly 1,900 palm species using artificial intelligence, and found more than 1,000 may be at risk of extinction.
The international team of researchers from the Royal Botanic Gardens, Kew, the University of Zurich, and the University of Amsterdam combined existing data from the International Union for Conservation of Nature (IUCN) Red List with novel machine learning techniques to paint a clearer picture of how palms may be threatened. Although palms are popular and well represented on the Red List, the threat status of some 70% of these plants has remained unclear until now.
The IUCN Red List of Threatened Species is widely considered to be a gold standard for evaluating the conservation status of animal, plant, and fungal species. But there are gaps in the Red List that need to be addressed, as not all species have been listed and many of the assessments are in need of an update. Conservation efforts are further complicated by inadequate funding, the sheer amount of time needed to manually assess a species, and public perception favoring certain vertebrate species over plants and fungi.
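The general recipe, learning from species that already have Red List assessments in order to estimate risk for those that do not, can be sketched as follows with synthetic data. The predictors, model choice, and every number here are illustrative assumptions, not the team's actual pipeline or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical species-level predictors (e.g., range size, human footprint in
# the range, a growth-form flag), standing in for the real study's features.
n_assessed, n_unassessed = 300, 100
X_assessed = rng.normal(size=(n_assessed, 3))
# Synthetic labels: 1 = assessed as threatened, 0 = not threatened.
y_assessed = (
    X_assessed[:, 0] + 0.5 * X_assessed[:, 1] + rng.normal(scale=0.5, size=n_assessed) < 0
).astype(int)

# Train on species with existing Red List assessments...
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_assessed, y_assessed)

# ...then estimate extinction risk for species that have never been assessed.
X_unassessed = rng.normal(size=(n_unassessed, 3))
risk = model.predict_proba(X_unassessed)[:, 1]
print(f"{(risk > 0.5).sum()} of {n_unassessed} unassessed species flagged as likely threatened")
```

The appeal of this kind of approach is speed: once trained on existing assessments, a model can screen thousands of unassessed species and point expert assessors toward the ones most likely to be at risk.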