By Phil Le-Brun, Enterprise Strategist and Evangelist, Amazon Web Services (AWS)

I'm fascinated by the technological tipping points in history that have ignited the public's imagination: the first TV broadcast, human space flight, the internet. Each of these events made a previously esoteric technology or concept tangible. The most recent example of such an emerging technology is generative AI.

Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It's powered by machine learning (ML) models: very large models pre-trained on vast amounts of data and commonly referred to as foundation models (FMs). Amazon has been investing in and using FMs for several years in areas such as search on Amazon.com and delivering conversational experiences with Alexa. At AWS, we've focused on democratizing these technologies for more organizations.

Customers are talking to us about using generative AI to speed pharmaceutical discovery, aid research, expedite customer service, and more. The potential is exciting, but many leaders don't know where to start. Here are a few things you should think about:
Start considering use cases
I love the saying, “Fall in love with the problem, not the solution.” It reminds us that while technology is a brilliant enabler, it is just one more set of tools we can apply to real-world problems.
What time-consuming, difficult, or impossible problems could generative AI help you solve? Think big about the opportunities, but start small with problems that cause day-to-day irritations for your employees or customers, or what we call “paper cuts.”
Can internal annoyances be automated away, freeing up your organization's time while building a better understanding of how AI can help your business? For instance, by using Amazon CodeWhisperer, which uses an FM to generate code suggestions, Accenture has reduced its development effort by as much as 30% while gaining firsthand experience of how generative AI can assist productivity.
Experiment with solutions and models
Amazon has been developing AI applications, like our e-commerce recommendations engine, for over 20 years. We’ve learned that the best way to build a broad understanding of AI—so that it can be improved—is for many diverse people to experiment, solve problems, and innovate with it.
Since the launch of Amazon SageMaker in 2017, we have released a continual stream of ML and AI services with a focus on democratizing the technology. We have continued this approach with the launch of Amazon Bedrock, a new service that makes FMs from Amazon and leading AI startups such as AI21 Labs, Anthropic, and Stability AI accessible via an API.
Bedrock makes it easier for customers to build and scale generative AI-based applications using FMs. By offering a variety of FMs, Amazon Bedrock reflects the fact that no single model is likely to solve every business problem you face. For example, some FMs specialize in conversational and text-processing tasks, while others generate high-quality images.
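To make the "accessible via an API" point concrete, here is a minimal sketch of calling a text model through the Bedrock runtime using the AWS SDK for Python (boto3). The prompt, model ID, and the exact request fields ("anthropic_version", "max_tokens", "messages") are assumptions based on the publicly documented Anthropic request shape on Bedrock; check the current service documentation for the models and schemas available to your account.

```python
import json

def build_request_body(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a chat-style request body for an Anthropic model on Bedrock.

    The field names below follow the documented Messages request shape and
    are an assumption here, not something this article specifies.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, the call itself would look like:
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(
#       modelId="anthropic.claude-v2",  # model ID is an assumption
#       body=build_request_body("Draft a reply to this customer email ..."),
#   )
#   result = json.loads(response["body"].read())
```

The key design point is that swapping models means changing only the model ID and request body, not your application architecture.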
Customize for differentiation
For some organizations, your own custom data sets will be what differentiates your generative AI applications. That proprietary data is one of your most important assets: it's what you use to fine-tune existing models to be highly accurate for your organization and use case.
Customers can easily customize models using Bedrock: you simply point the service at a few labeled examples in storage, and it fine-tunes the model for a particular task without you having to annotate large volumes of data. Customers can also configure their cloud environment so that fine-tuning data is handled securely, with all of their data encrypted.
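The "few labeled examples in storage" typically means a small prompt/completion file. The sketch below writes a couple of hypothetical labeled examples as JSON Lines; the "prompt"/"completion" field names follow the common fine-tuning convention and are an assumption here, as is the ticket-routing task itself, so verify the exact format against the service documentation before uploading.

```python
import json

# Hypothetical labeled examples for a support-ticket routing task.
examples = [
    {"prompt": "Customer: Where is my order #1234?",
     "completion": "Route to: order-tracking"},
    {"prompt": "Customer: The app crashes on login.",
     "completion": "Route to: technical-support"},
]

# One JSON object per line ("JSON Lines"), the usual fine-tuning layout.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Re-read to confirm the file parses as valid JSON Lines.
with open("train.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # 2
```

A file like this would then be placed in object storage and referenced from the fine-tuning job configuration.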
Ensure a strong data foundation
Even the grandest house built on dodgy foundations will not last, and the same is true in the world of ML. With generative AI, the quality of your business data trumps its quantity. For example, if the raw data you use to fine-tune ML models contains errors, those errors will carry through to the accuracy of the predictions and content the models create.
However, ensuring data is relevant, complete, and accurate can be a time-consuming process, sometimes taking weeks. That’s why we created a solution in Amazon SageMaker that helps you complete each step of the data preparation workflow, including data selection, cleansing, exploration, bias detection, and visualization from a single visual interface in minutes.
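Whatever tooling performs it, the cleansing step in that workflow amounts to checks like completeness and deduplication. Here is a toy stdlib-only sketch of that idea; the record fields ("id", "text", "label") are hypothetical, and this stands in for only one small piece of the workflow described above.

```python
# Toy cleansing pass: drop incomplete records, then remove exact duplicates.
# The "id"/"text"/"label" fields are hypothetical example fields.
raw_records = [
    {"id": 1, "text": "Order arrived late", "label": "shipping"},
    {"id": 2, "text": "", "label": "billing"},                     # missing text
    {"id": 1, "text": "Order arrived late", "label": "shipping"},  # duplicate
    {"id": 3, "text": "Great support call", "label": None},        # missing label
]

def clean(records):
    """Keep records whose fields are all present, removing exact duplicates."""
    seen, out = set(), []
    for r in records:
        if not all(r.values()):          # completeness check
            continue
        key = tuple(sorted(r.items()))   # hashable fingerprint for dedup
        if key in seen:
            continue
        seen.add(key)
        out.append(r)
    return out

cleaned = clean(raw_records)
print(len(cleaned))  # 1
```

Running this by hand on a few rows is easy; the point of a managed visual workflow is doing it reliably across millions of rows, plus the exploration and bias-detection steps this toy version omits.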
Understand the impact of infrastructure
Whatever you are trying to do with FMs—running them, building them, customizing them—they need performant, cost-effective infrastructure that is purpose-built for ML. Otherwise, generative AI isn’t practical for the vast majority of organizations.
For a decade, we have been investing in our own silicon to push the envelope on performance and price performance for demanding workloads like ML training and inference. Our AWS Trainium and AWS Inferentia chips offer a high-performance, low-cost solution for training models and running inference in the cloud.
Finally, be excited, and approach generative AI with an open, curious mind. Our mission is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI. This is just the beginning of what we believe will be the next wave of ML, powering new possibilities for all of us.