May 15, 2023
AI and the New Future of Work
Posted by Shubham Ghosh Roy in categories: futurism, robotics/AI
With this Call for Proposals (CFP) we are targeting work that specifically supports the use of LLMs in productivity scenarios.
New AI tech will give a much-needed push to Amazon's robot Astro, one of the company's most ambitious devices, which hasn't yet lived up to expectations.
AI CREATING NEW TYPES OF JOBS
Amazon, the online retail behemoth, has long been quiet about its plans for conversational artificial intelligence, even as its rivals Google and Microsoft make strides in developing and deploying chatbots that can interact with users and answer their queries.
AI technology is exploding, and industries are racing to adopt it as fast as possible. Before your enterprise dives headfirst into a confusing sea of opportunity, it’s important to explore how generative AI works, what red flags enterprises need to consider, and how to evolve into an AI-ready enterprise.
One of the most common and powerful techniques for generative AI is the large language model (LLM), such as GPT-4 or Google’s Bard. These are neural networks trained on vast amounts of text data from sources such as books, websites, social media, and news articles. They learn the patterns and probabilities of language by guessing the next word in a sequence. For example, given the input “The sky is,” the model might predict “blue,” “clear,” “cloudy” or “falling.”
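As a toy illustration of this next-word objective, the sketch below estimates next-word probabilities from counts in a tiny made-up corpus. This is not how an LLM is implemented (LLMs use neural networks over billions of parameters), but the training goal, predicting the next word given the words before it, is the same idea at a vastly smaller scale.

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus; a real LLM trains on trillions of words.
corpus = (
    "the sky is blue . the sky is clear . "
    "the sky is cloudy . the sky is blue ."
).split()

# Count which word follows each two-word context.
counts = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][nxt] += 1

def predict(a, b):
    """Return next-word candidates with estimated probabilities."""
    c = counts[(a, b)]
    total = sum(c.values())
    return {word: n / total for word, n in c.items()}

# Given "sky is", "blue" appears twice in the corpus, the others once each.
print(predict("sky", "is"))  # {'blue': 0.5, 'clear': 0.25, 'cloudy': 0.25}
```

A neural LLM replaces these raw counts with a learned function that generalizes to contexts it has never seen, which is what makes the prediction useful beyond memorization.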
A Google AI document was leaked a few days ago, containing groundbreaking revelations in which the tech giant admits to being outpaced by open-source AI! This video takes you through the details of the leak, highlighting how open-source solutions are rapidly closing the quality gap and are becoming more capable, faster, and more private than the AI models developed by industry leaders like Google and OpenAI. We delve into what this means for the future of AI development, focusing on the role of open-source models, LoRA (Low-Rank Adaptation), and the growing influence of public involvement.
The full article can be found here:
https://natural20.com/google-ai-documents-leak/
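A quick sketch of why LoRA matters to the open-source story above: instead of updating every weight in a large matrix during fine-tuning, LoRA learns two small low-rank matrices whose product is added to the frozen weights. The dimension and rank below are illustrative assumptions, not figures from the leak; the parameter-count arithmetic is the point.

```python
# LoRA replaces a full update of a d x d weight matrix W with
# W + B @ A, where B is d x r and A is r x d for a small rank r.
d = 4096   # hypothetical hidden dimension of one weight matrix
r = 8      # hypothetical low rank chosen for the adapter

full_params = d * d          # parameters touched by full fine-tuning
lora_params = d * r + r * d  # parameters actually trained by LoRA

print(full_params // lora_params)  # 256: LoRA trains ~1/256 as many weights
```

That reduction is what lets hobbyists fine-tune capable models on consumer hardware, which is central to the leak's claim that open source is moving faster.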
The feature image you see above was generated by Stable Diffusion, an AI text-to-image rendering model that typically runs in the cloud via a web browser, driven by data center servers with big power budgets and a ton of silicon horsepower. However, the image above was generated by Stable Diffusion running on a smartphone in airplane mode, with no connection to a cloud data center and no connectivity whatsoever. The AI model rendering it was powered by a Qualcomm Snapdragon 8 Gen 2 mobile chip on a device that operates at under roughly 7 watts.
It took Stable Diffusion only a few short phrases and 14.47 seconds to render this image.
This is an example of a 540p input-resolution image being scaled up to 4K resolution, which results in much cleaner lines, sharper textures, and a better overall experience. Though Qualcomm has a non-algorithmic version of this available today, called Snapdragon GSR, someday in the future mobile enthusiast gamers are going to be treated to even better levels of image quality without sacrificing battery life, and with even higher frame rates.
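The power savings behind this upscaling approach come down to simple pixel arithmetic. Assuming standard 16:9 frame sizes for 540p and 4K (the specific widths and heights below are those standard dimensions, not figures from the article):

```python
# Rendering at 540p and upscaling to 4K means the GPU only shades
# a small fraction of the pixels the display ultimately shows.
render_w, render_h = 960, 540      # 540p internal render resolution
output_w, output_h = 3840, 2160    # 4K output resolution

pixels_rendered = render_w * render_h
pixels_displayed = output_w * output_h

print(pixels_displayed / pixels_rendered)  # 16.0
```

Shading one-sixteenth of the pixels and letting an upscaler fill in the rest is what makes higher frame rates possible without a matching hit to battery life.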
Last week, around 4,000 IBM employees, customers, and partners attended IBM Think, the company’s annual conference, to hear the latest innovations, updates, and news from IBM. This year’s event came with many announcements, but with AI in focus, its announcement of watsonx drew significant attention—with the market zeroing in on the substantial opportunities around AI.
After attending the event, hearing from IBM executives, and following the broad swath of recent AI and generative AI announcements, I believe that IBM’s announcement of watsonx is a significant milestone in the advancement of enterprise AI. Built on top of the Red Hat OpenShift platform, watsonx offers a full tech stack for training, deploying, and supporting AI capabilities across any cloud environment. This move by IBM is indicative of the growing importance of supporting generative AI, and of the potential for businesses to benefit from the ease and reliability of this technology. As I see it, this is one of the more important announcements tying together much of the exciting generative AI news and analysis with the more practical connective tissue that will drive meaningful adoption in the enterprise.
Watsonx features three components: watsonx.ai, watsonx.data, and watsonx.governance. The first, watsonx.ai, is a design studio for base models, machine learning, and generative AI. It can be used to train, tune, and deploy AI models, including IBM-supplied models, open-source models, and client-provided models. It is currently in preview with select IBM clients and partners and is expected to be generally available in July.
Update: The image for the ChatGPT 3.5 and vicuna-13B comparison has been updated for readability.
With the launch of Large Language Models (LLMs) for Generative Artificial Intelligence (GenAI), the world has become both enamored and concerned with the potential for AI. The ability to hold a conversation, pass a test, develop a research paper, or write software code are tremendous feats of AI, but they are only the beginning of what GenAI will be able to accomplish over the next few years. All this innovative capability comes at a high cost in terms of processing performance and power consumption. So, while the potential for AI may be limitless, physics and costs may ultimately be the boundaries.
Tirias Research forecasts that on the current course, generative AI data center server infrastructure plus operating costs will exceed $76 billion by 2028, with growth challenging the business models and profitability of emergent services such as search, content creation, and business automation incorporating GenAI. For perspective, this cost is more than twice the estimated annual operating cost of Amazon’s cloud service AWS, which today holds one third of the cloud infrastructure services market according to Tirias Research estimates. This forecast incorporates an aggressive 4X improvement in hardware compute performance, but this gain is overrun by a 50X increase in processing workloads, even with a rapid rate of innovation around inference algorithms and their efficiency. Neural Networks (NNs) designed to run at scale will be even more highly optimized and will continue to improve over time, which will increase each server’s capacity. However, this improvement is countered by increasing usage, more demanding use cases, and more sophisticated models with orders of magnitude more parameters. The cost and scale of GenAI will demand innovation in optimizing NNs and is likely to push the computational load out from data centers to client devices like PCs and smartphones.
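The core tension in the forecast above can be stated as back-of-the-envelope arithmetic, using only the two multipliers the text cites (a 4X hardware performance gain overrun by a 50X growth in processing workloads):

```python
# Back-of-the-envelope sketch of the forecast dynamic described above.
hw_speedup = 4.0        # cited ~4X hardware compute improvement by 2028
workload_growth = 50.0  # cited ~50X increase in GenAI processing workloads

# Net growth in server capacity needed despite the faster hardware:
net_capacity_growth = workload_growth / hw_speedup
print(net_capacity_growth)  # 12.5
```

Even with aggressive hardware gains, roughly 12.5 times as much server capacity would still be needed, which is why the forecast points toward pushing inference out to client devices like PCs and smartphones.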
At yesterday’s I/O conference, Google CEO Sundar Pichai made major announcements that generative AI will underpin Search, Gmail, and other products. Coming on the heels of the major Microsoft and OpenAI partnership announcements since January 2023, Google has been scrambling to get its market positioning and generative AI products up to snuff. This announcement was applauded after the recent gaffe in early February, when Google announced its AI chatbot Bard, a rival to OpenAI’s ChatGPT.
This blog weighs Google’s generative AI announcements against those of Microsoft and OpenAI. It also covers key issues around data bias, societal impact, and citizen privacy, cautioning that AI legislation must speed up in 2023 to balance out the technology giants’ power.
It seems obvious enough, but organizations need to move past the hype and determine how they can benefit from artificial intelligence, so they can identify the skillsets they need to bring those benefits to fruition.