Product of the Day
Amazon Bedrock opens fast GenAI
The latest innovations that offer a fast and secure way to develop advanced generative artificial intelligence applications using many of the leading foundation models.
Amazon Web Services (AWS) has launched Amazon Bedrock innovations that offer a fast and secure way to develop advanced generative artificial intelligence (AI) applications using many of the leading foundation models (FMs).
Bedrock provides access to foundation models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon, along with the enterprise-grade security customers need to quickly build and deploy generative AI applications. These powerful models are offered as a fully managed service, so customers do not have to look after the underlying infrastructure.
This makes it simple to find the best model for specific use cases, makes it easier to apply safeguards to generative AI applications, and provides more model choice.
Organisations across all industries, from the world’s fastest growing startups to the most security-conscious enterprises and government institutions, are using Amazon Bedrock as a platform for innovation, productivity, and new end-user experiences.
For example, the New York Stock Exchange (NYSE) is leveraging Amazon Bedrock’s choice of FMs and cutting-edge generative AI capabilities across several use cases, including processing thousands of pages of regulations to provide answers in easy-to-understand language.
Ryanair, Europe’s largest airline, is using Amazon Bedrock to help its crew instantly find answers to questions about country-specific regulations or extract summaries from manuals.
Netsmart, a technology provider that specialises in designing, building, and delivering electronic health records (EHRs) for community-based care organisations, is working to enhance the clinical documentation experience for healthcare providers. It aims to reduce the time spent managing the health records of individuals by up to 50% with a generative AI automation tool built on Amazon Bedrock. This will enable Netsmart clients to speed up patient reimbursement submissions while also improving patient care.
“Amazon Bedrock is experiencing explosive growth, with tens of thousands of organisations of all sizes and across all industries choosing it as the foundation for their generative AI strategy because they can use it to move from experimentation to production more quickly and easily than anywhere else,” says Dr Swami Sivasubramanian, vice president of AI and data at AWS.
“It offers enterprise-grade security and privacy, a wide choice of leading foundation models, and the easiest way to build generative AI applications.”
Customers across healthcare, financial services, and other industries are increasingly putting their own data to work by customising publicly available models for their domain-specific use cases. When organisations want to build these models using their proprietary data, they typically turn to services like Amazon SageMaker, which offers the capabilities to train a model from scratch or perform advanced customisation of publicly available models such as Llama, Mistral, and Flan-T5.
Since launching in 2017, Amazon SageMaker has become the place where the world’s high-performing FMs are built and trained. Customers also use Amazon Bedrock’s advanced, built-in generative AI tools, such as Knowledge Bases, Guardrails, Agents, and Model Evaluation, with their customised models, without having to develop all these capabilities themselves.
With Amazon Bedrock Custom Model Import, organisations can now import and access their own custom models as a fully managed application programming interface (API) in Amazon Bedrock. Customers can take models that they customised on Amazon SageMaker, or other tools, and quickly add them to Amazon Bedrock.
Once a model passes an automated validation process, customers can seamlessly access it like any other model on Amazon Bedrock, getting all the same benefits they get today: seamless scalability, powerful capabilities to safeguard their applications in line with responsible AI principles, the ability to expand a model’s knowledge base with retrieval augmented generation (RAG), easy creation of agents to complete multi-step tasks, and fine-tuning to keep teaching and refining models. All of this comes without needing to manage the underlying infrastructure.
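A minimal sketch of what that managed API looks like in practice, assuming boto3 and a Llama-style request schema for the imported model (the model ARN and response key are illustrative; the actual body schema depends on the model family that was imported):

```python
import json


def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an InvokeModel request body for an imported custom model.

    This assumes a Llama-style text-completion schema; imported models
    keep the request format they were trained and customised with.
    """
    return {
        "prompt": prompt,
        "max_gen_len": max_tokens,
        "temperature": 0.5,
    }


def invoke_custom_model(model_arn: str, prompt: str) -> str:
    """Call an imported model through the fully managed Bedrock API.

    `model_arn` is the identifier Bedrock assigns after Custom Model
    Import completes validation (a hypothetical value, not from the
    article).
    """
    import boto3  # lazy import so the request builder stays testable offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_arn,
        body=json.dumps(build_invoke_request(prompt)),
    )
    # Llama-style responses return the completion under "generation".
    return json.loads(response["body"].read())["generation"]
```

The point of the import is visible in the second function: once validated, a customer-trained model is addressed by ID exactly like a first-party Bedrock model, with no serving infrastructure to run.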
With Guardrails for Amazon Bedrock, customers can use best-in-class technology to implement safeguards that remove personal and sensitive information, profanity, and specific words, and that block harmful content.
For generative AI to be pervasive across every industry, organisations need to implement it in a safe, trustworthy, and responsible way. Many models use built-in controls to filter undesirable and harmful content, but most customers want to further tailor their generative AI applications so responses remain relevant, align with company policies, and adhere to responsible AI principles.
Now generally available, Guardrails for Amazon Bedrock offers safety protection on top of the native capabilities of FMs, helping customers block up to 85% of harmful content. To create a guardrail, customers provide a natural-language description defining the denied topics within the context of their application. Customers can also configure thresholds to filter across areas like hate speech, insults, sexualised language, prompt injection, and violence, as well as filters to remove any personal and sensitive information, profanity, or specific blocked words.
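Those pieces, natural-language denied topics, per-category filter strengths, and sensitive-information redaction, map directly onto the boto3 `create_guardrail` call. A sketch of the configuration, with illustrative topic names and messages (not from the article):

```python
def build_guardrail_config(name: str) -> dict:
    """Assemble a CreateGuardrail request.

    Structure mirrors the boto3 `create_guardrail` parameters; the
    topic definition and blocked-message text are hypothetical.
    """
    return {
        "name": name,
        "description": "Blocks off-topic and harmful content.",
        # A denied topic is described in natural language, in the
        # context of the application.
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "Investment advice",
                "definition": "Recommendations to buy or sell specific financial products.",
                "type": "DENY",
            }]
        },
        # Configurable filter strengths across harm categories,
        # including prompt injection (PROMPT_ATTACK, input-only).
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            ]
        },
        # Redact personal information and block specific words.
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        "wordPolicyConfig": {"wordsConfig": [{"text": "competitor-name"}]},
        "blockedInputMessaging": "Sorry, I can't help with that.",
        "blockedOutputsMessaging": "Sorry, I can't help with that.",
    }


def create_guardrail(name: str) -> str:
    """Register the guardrail with Bedrock and return its ID."""
    import boto3  # lazy import keeps the config builder testable offline

    client = boto3.client("bedrock")
    response = client.create_guardrail(**build_guardrail_config(name))
    return response["guardrailId"]
```

Once created, the guardrail is applied by ID at inference time, so the same policy can sit in front of any model the application uses.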
Exclusive to Amazon Bedrock, Amazon Titan models are created and pre-trained by AWS on large and diverse datasets for a variety of use cases, with built-in support for the responsible use of AI. The new Amazon Titan Text Embeddings V2, which is optimised for working with RAG use cases, is designed for a variety of tasks, such as information retrieval, question and answer chatbots, and personalised recommendations.
To augment FM responses with additional data, many organisations turn to RAG, a popular model-customisation technique where the FM connects to a knowledge source that it can reference to augment its responses. However, running these operations can be compute and storage intensive. The new Amazon Titan Text Embeddings V2 model reduces storage and compute costs while increasing accuracy. It does so by offering customers flexible embedding sizes, which reduces overall storage by up to 4x, significantly lowering operational costs, while retaining 97% of the accuracy for RAG use cases, outperforming other leading models.
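The flexible-embedding idea can be sketched as follows, assuming boto3 and the Titan Text Embeddings V2 model ID; the smaller output sizes are what drive the storage savings the article describes:

```python
import json

TITAN_EMBED_V2 = "amazon.titan-embed-text-v2:0"


def build_embedding_request(text: str, dimensions: int = 256) -> dict:
    """Build an InvokeModel body for Titan Text Embeddings V2.

    The model supports flexible output sizes; requesting a smaller
    vector cuts the storage each document chunk needs in a RAG index.
    """
    if dimensions not in (256, 512, 1024):
        raise ValueError("Titan Text Embeddings V2 supports 256, 512, or 1024 dimensions")
    return {"inputText": text, "dimensions": dimensions, "normalize": True}


def embed(text: str, dimensions: int = 256) -> list:
    """Return the embedding vector for `text` via Amazon Bedrock."""
    import boto3  # lazy import so the request builder stays testable offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=TITAN_EMBED_V2,
        body=json.dumps(build_embedding_request(text, dimensions)),
    )
    return json.loads(response["body"].read())["embedding"]
```

Dropping from 1024 to 256 dimensions shrinks each stored vector to a quarter of its size, which is the 4x storage reduction claimed above, at the cost of a small accuracy trade-off.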
Now generally available, Amazon Titan Image Generator helps customers in industries like advertising, ecommerce, and media and entertainment produce studio-quality images, or enhance and edit existing images, at low cost, using natural language prompts. Amazon Titan Image Generator also applies an invisible watermark to all images it creates, helping identify AI-generated images to promote the safe, secure, and transparent development of AI technology and helping reduce the spread of disinformation. The model can also check for the existence of a watermark, helping customers confirm whether an image was generated by Amazon Titan Image Generator.