Archive for the ‘robotics/AI’ category: Page 691

Jun 12, 2023

Mastering AI with Synthetic Data: Unlocking New Frontiers of Innovation

Posted in categories: innovation, robotics/AI

Unlock the power of synthetic data in AI innovation. Discover its benefits in privacy, scalability, and diversity for advanced machine learning models.

Jun 12, 2023

5 ways to explore the use of generative AI at work

Posted in category: robotics/AI

ChatGPT and other generative AI tools have captured the public imagination. Here’s how you can turn a much-hyped tool into a productivity boon.

Jun 12, 2023

DeepMind AI creates algorithms that sort data faster than those built by people

Posted in categories: entertainment, information science, robotics/AI

Computer scientists have, for decades, been optimizing how computers sort data to shave off crucial milliseconds in returning search results or alphabetizing contact lists. Now DeepMind, based in London, has vastly improved sorting speeds by applying the technology behind AlphaZero — its artificial-intelligence system for playing the board games chess, Go and shogi — to a game of building sorting algorithms. “This is an exciting result,” said Emma Brunskill, a computer scientist at Stanford University, California.

The system, AlphaDev, is described in a paper in Nature and has invented faster algorithms that are already part of two standard C++ coding libraries, meaning they are being used trillions of times per day by programmers around the world.
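The routines involved are the tiny fixed-size sorts (sorting, say, exactly three or five elements) that library sort implementations fall back on for short inputs; AlphaDev’s reported gains came from finding shorter instruction sequences for these small routines rather than from a new overall sorting algorithm. As a rough, hypothetical illustration of what such a routine looks like, here is a Python toy of a three-element sorting network built from a fixed sequence of compare-exchange steps (the actual library contributions are branchless low-level C++ and assembly, not this sketch):

```python
def compare_exchange(values, i, j):
    """One network step: put the smaller of values[i], values[j] at index i."""
    if values[j] < values[i]:
        values[i], values[j] = values[j], values[i]


def sort3(values):
    """Sort a 3-element list in place with a fixed sequence of compare-exchanges.

    Fixed-size networks like this are what library sorts dispatch to for very
    small inputs; the reported AlphaDev improvement was a shorter instruction
    sequence for routines of this kind (illustrative sketch only).
    """
    compare_exchange(values, 0, 1)
    compare_exchange(values, 1, 2)
    compare_exchange(values, 0, 1)
    return values


print(sort3([3, 1, 2]))  # [1, 2, 3]
```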

Jun 12, 2023

AI researcher and Stanford professor Andrew Ng: AI poses ‘no extinction risk’ for humans

Posted in categories: existential risks, robotics/AI

Unlike many of his peers in the artificial intelligence community, Andrew Ng isn’t convinced about the dangers of AI.

In a video posted to Twitter this week, Ng, a Stanford University professor and founder of several Silicon Valley AI startups, expressed doubt about the doomsday predictions of other executives and experts in the field.

Jun 12, 2023

Can AI actually increase human productivity?

Posted in category: robotics/AI

Explore how artificial intelligence (AI) is revolutionizing industries, unleashing transformative potential, and supercharging efficiency.

Jun 12, 2023

How generative AI language models are unlocking the secrets of DNA

Posted in categories: biotech/medical, robotics/AI

From gene expression to protein design, large language models are creating a suite of powerful genomic tools for DNA analysis.

Jun 11, 2023

Technology For Technology’s Sake Is The Downfall Of The CIO

Posted in categories: business, finance, robotics/AI

A unique use case for AI is enhanced transaction monitoring to help combat financial fraud. Traditional rule-based approaches to anti-money laundering (AML) use static thresholds that capture only one element of a transaction, so they deliver a high rate of false positives. Not only is this hugely inefficient, it can also be very demotivating for staff. With AI, multiple factors can be reviewed simultaneously to produce a risk score and build an intelligent understanding of what risky behavior looks like. A feedback loop based on advanced analytics means that the more data is collected, the more intelligent the solution becomes. Pinpointing financial crime becomes more efficient, and employees gain more time to focus on other priorities such as strategy and business development.
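As a toy illustration of the contrast described above (a hypothetical sketch, not any particular vendor’s AML system), the Python snippet below compares a single static-threshold rule with a score that weighs several transaction features at once. The features and weights are invented for illustration; in practice the weights would be learned from labeled outcomes and refined through the analyst feedback loop.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float            # transaction size in dollars
    country_risk: float      # 0.0 (low-risk jurisdiction) to 1.0 (high-risk)
    txns_last_24h: int       # sender's recent transaction count
    new_counterparty: bool   # first time these two parties have transacted


def rule_based_flag(txn: Transaction) -> bool:
    """Static-threshold rule: considers only one element of the transaction."""
    return txn.amount > 10_000


def risk_score(txn: Transaction) -> float:
    """Combine several factors into a single score between 0 and 1.

    Hand-set, hypothetical weights; a production system would learn them
    from analyst feedback and retrain as more data is collected.
    """
    return (
        0.4 * min(txn.amount / 10_000, 1.0)
        + 0.3 * txn.country_risk
        + 0.2 * min(txn.txns_last_24h / 20, 1.0)
        + 0.1 * float(txn.new_counterparty)
    )


txn = Transaction(amount=4_000, country_risk=0.9, txns_last_24h=15, new_counterparty=True)
print(rule_based_flag(txn))        # False: below the single static threshold
print(round(risk_score(txn), 2))   # 0.68: several factors together look risky
```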

Thanks to its ever-increasing range of applications to evolving business challenges, regulators and financial institutions can no longer turn a blind eye to the potential of AI to revolutionize the financial system. It presents unique opportunities to reduce human error, which costs highly regulated industries billions each year.

What’s clear is that some technologies will, over time, become too difficult to ignore. As we saw with the adoption of the cloud, failure to embrace innovative technologies means organizations get left behind. The cloud was once a pipe dream, but today it is a crucial part of business operations. Businesses have implemented (or are in the process of implementing) huge digital transformation projects to migrate business processes to the cloud, and new organizations now kickstart their businesses in the cloud. The lesson is that technologists must remain alert and keep their finger on the pulse when it comes to incorporating fresh solutions.

Jun 11, 2023

It’s Alive? This Billionaire Funds Startup Growing Brain Cell ‘Biocomputers’

Posted in categories: biotech/medical, finance, robotics/AI

Billionaire investor Li Ka-Shing is funding a technology that could potentially rival artificial intelligence (AI) by blending living brain cells with computers, an approach its developer calls DishBrain.

This science fiction-sounding tech comes from Australian biotech firm Cortical Labs. The company recently raised $10 million in a round led by Horizons Ventures, the investment vehicle of the 94-year-old Ka-Shing, the richest person in Hong Kong. Additional investors included Blackbird Ventures, an Australian venture capital (VC) fund; In-Q-Tel, the investment arm of the Central Intelligence Agency; U.S. firm LifeX Ventures; and others.

Jun 11, 2023

Apple Researchers Introduce ByteFormer: An AI Model That Consumes Only Bytes And Does Not Explicitly Model The Input Modality

Posted in categories: habitats, robotics/AI

Deep learning inference typically requires explicit modeling of the input modality. For instance, Vision Transformers (ViTs) directly model the 2D spatial organization of images by encoding image patches into vectors. Similarly, audio inference frequently involves computing spectral features (such as MFCCs) to feed into a network. To run inference on a file saved on disk (such as a JPEG image or an MP3 audio file), a user must first decode it into a modality-specific representation (such as an RGB tensor or MFCCs), as shown in Figure 1a. Decoding inputs into a modality-specific representation has two real downsides.

First, it requires manually creating an input representation and a model stem for each input modality. Recent projects such as PerceiverIO and UnifiedIO have demonstrated the versatility of Transformer backbones, but these techniques still need modality-specific input preprocessing. For instance, PerceiverIO decodes image files into tensors before feeding them into the network, and transforms other input modalities into different forms. The researchers posit that performing inference directly on file bytes makes it feasible to eliminate all modality-specific input preprocessing. The second downside of decoding inputs into a modality-specific representation is that it exposes the material being analyzed.

Consider a smart-home device that runs inference on RGB photos: the user’s privacy may be jeopardized if an adversary gains access to this model input. The researchers contend that inference can instead be carried out on inputs that preserve privacy. To address both shortcomings, they note that many input modalities can be stored as file bytes. They therefore feed file bytes into their model at inference time (Figure 1b) without any decoding. Given its ability to handle a range of modalities and variable-length inputs, they adopt a modified Transformer architecture for their model.
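To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a classifier that never decodes its input (an illustration of the byte-level idea, not Apple’s ByteFormer code): raw file bytes are treated as tokens from a 256-symbol vocabulary, embedded, run through a small Transformer encoder, and mean-pooled for classification. The file name, model sizes, and truncation length are placeholders, and the real model includes mechanisms for handling much longer byte sequences efficiently that this toy omits.

```python
import torch
import torch.nn as nn


class ByteClassifier(nn.Module):
    """Toy byte-level classifier: raw file bytes in, class logits out."""

    def __init__(self, num_classes: int, dim: int = 128, max_len: int = 4096):
        super().__init__()
        self.byte_embed = nn.Embedding(256, dim)        # one token per possible byte value
        self.pos_embed = nn.Embedding(max_len, dim)     # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (batch, seq_len) integers in [0, 255]
        positions = torch.arange(byte_ids.size(1), device=byte_ids.device)
        x = self.byte_embed(byte_ids) + self.pos_embed(positions)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))                 # mean-pool over the sequence, then classify


# Feed a file's bytes directly, with no JPEG/MP3 decoding step.
with open("example.jpg", "rb") as f:                    # placeholder file name
    raw = f.read()[:4096]
byte_ids = torch.tensor(list(raw), dtype=torch.long).unsqueeze(0)
logits = ByteClassifier(num_classes=10)(byte_ids)
```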

Jun 11, 2023

What Nvidia and the AI boom might mean for Silicon Valley

Posted in categories: business, robotics/AI

Here’s what Silicon Valley Business Journal senior reporter Max A. Cherney had to say on KQED’s Forum about Nvidia and the AI boom.

Page 691 of 2,403