Cerebras Systems, a California-based developer of semiconductors and AI, has announced a new system that can support models of 120 trillion parameters in a single computer.

New research has found that artificial intelligence (AI) analyzing medical scans can identify the race of patients with an astonishing degree of accuracy, something their human counterparts cannot do. With the Food and Drug Administration (FDA) approving more algorithms for medical use, the researchers are concerned that AI could end up perpetuating racial biases. They are especially concerned that they could not figure out precisely how the machine-learning models were able to identify race, even from heavily corrupted and low-resolution images.
In the study, published on the preprint server arXiv, an international team of doctors investigated how deep learning models can detect race from medical images. Using private and public chest scans and self-reported data on race and ethnicity, they first assessed how accurate the algorithms were, before investigating the mechanism.
“We hypothesized that if the model was able to identify a patient’s race, this would suggest the models had implicitly learned to recognize racial information despite not being directly trained for that task,” the team wrote in their research.
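The paper's exact pipeline is not reproduced here, but the experimental setup it describes is straightforward to sketch: fine-tune a standard image classifier on chest X-rays labelled with self-reported race, then re-evaluate it on heavily downsampled copies. The Python/PyTorch sketch below is purely illustrative; the dataset layout, label categories, model choice, and hyperparameters are assumptions rather than details taken from the study.

```python
# Illustrative sketch (not the authors' code): fine-tune an ImageNet-pretrained
# ResNet to predict self-reported race from chest X-rays, then re-score it on
# low-resolution copies. Paths, class names, and hyperparameters are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def make_loader(root, size=224, batch=32):
    # ImageFolder expects root/<label>/<image>.png; the label folders stand in
    # for the self-reported race/ethnicity categories used in the study.
    tfm = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
        transforms.ToTensor(),
    ])
    return DataLoader(datasets.ImageFolder(root, tfm), batch_size=batch, shuffle=True)

def train(root="chest_xrays/train", num_classes=3, epochs=1):
    model = models.resnet34(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in make_loader(root):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def accuracy(model, root="chest_xrays/test", size=224):
    # Re-running with a much smaller `size` (e.g. 32) mimics the heavily
    # downsampled images on which the paper reports race is still detectable.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in make_loader(root, size=size):
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total
```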
https://buff.ly/3y6P5Zu
The Boeing-owned MQ-25 Stingray test asset, T1, passed fuel to an E-2D airborne early warning and control (AEW&C) receiver aircraft flown by the US Navy’s (USN’s) Air Test and Evaluation Squadron VX-20, in a flight conducted the day before the announcement.
“During a test flight from MidAmerica St Louis Airport on 18 August, pilots from VX-20 conducted a successful wake survey behind MQ-25 T1 to ensure performance and stability before making contact with T1’s aerial refuelling drogue. The E-2D received fuel from T1’s aerial refuelling store during the flight,” Boeing said.
This first contact for the Stingray unmanned tanker with an Advanced Hawkeye receiver aircraft came nearly three months after the first aerial refuelling test was performed on 4 June with a Boeing F/A-18F Super Hornet receiver. Both the Advanced Hawkeye and Super Hornet flights were conducted at operationally relevant speeds and altitudes, with both receiver aircraft performing manoeuvres in close proximity to the Stingray.
See how perception and adaptability enable varied, high-energy behaviors like parkour. https://bit.ly/3AZWMCu
Technology Breakthroughs Enable Training of 120 Trillion Parameters on Single CS-2, Clusters of up to 163 Million Cores with Near Linear Scaling, Push Button Cluster Configuration, Unprecedented Sparsity Acceleration.
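For a sense of scale behind those headline figures, a quick back-of-the-envelope calculation helps. The sketch below assumes 16-bit (2-byte) weights and roughly 850,000 cores per WSE-2; neither figure appears in the text above, so treat the results as rough estimates only.

```python
# Rough scale of the headline numbers (assumptions: fp16 weights, ~850,000
# cores per WSE-2 -- neither figure is stated in the announcement text above).
params = 120e12           # 120 trillion parameters
cores_per_wse2 = 850_000  # assumed core count of one Wafer Scale Engine 2
cluster_cores = 163e6     # 163 million cores, the quoted cluster maximum

print(f"Weight memory at fp16: {params * 2 / 1e12:.0f} TB")                           # ~240 TB
print(f"CS-2 systems in a 163M-core cluster: {cluster_cores / cores_per_wse2:.0f}")   # ~192
```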
For more information, please visit http://cerebras.net/product/.
About Cerebras Systems.
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers more than 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now be trained in minutes on the Cerebras CS-2, powered by the WSE-2.
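Taking the figures quoted in that description at face value, the gap between the WSE-2 and the largest GPU can be expressed as simple ratios; the short calculation below is only a back-of-the-envelope check of those numbers.

```python
# Ratio check using only the figures quoted above (illustrative arithmetic).
wse2_transistors, gpu_transistors = 2.6e12, 54e9
wse2_area_mm2, gpu_area_mm2 = 46_225, 815

print(f"Transistor ratio: {wse2_transistors / gpu_transistors:.0f}x")  # ~48x
print(f"Die-area ratio:   {wse2_area_mm2 / gpu_area_mm2:.0f}x")        # ~57x
```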
Israeli synthetic media startup D-ID wants to disrupt production of Hollywood movies with artificial intelligence.
IBM has announced Telum, a new CPU chip that will allow IBM clients to leverage deep learning inference at scale. The chip features a centralised design, which lets clients apply the full power of the AI processor to AI-specific workloads, making it ideal for financial services tasks such as fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis.
A Telum-based system is planned for the first half of 2022. “Our goal is to continue improving AI hardware compute efficiency by 2.5 times every year for a decade, achieving 1,000 times better performance by 2029,” said IBM in a press release.
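IBM's 1,000-times figure is consistent with simple compounding: improving efficiency 2.5 times per year crosses 1,000x somewhere between the seventh and eighth year. The snippet below just verifies that arithmetic; it is not IBM's roadmap model.

```python
# Compound-growth check of the quoted goal: 2.5x per year vs. a 1,000x target.
rate, target = 2.5, 1_000
factor, years = 1.0, 0
while factor < target:
    factor *= rate
    years += 1
print(years, round(factor))  # 8 years -> ~1526x; 1,000x falls between years 7 and 8
```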
The chip contains eight processor cores, running with more than 5GHz clock frequency, optimised for the demands of enterprise-class workloads. The completely redesigned cache and chip-interconnection infrastructure provide 32MB cache per core. The chip also contains 22 billion transistors and 19 miles of wire on 17 metal layers.