

Amazon Web Services builds Guardrails into AI

It’s a word that sums up every concern about the misuse of artificial intelligence. Now AWS has turned Guardrails into its own new service, writes ARTHUR GOLDSTUCK.

A year ago, at the annual Amazon Web Services (AWS) re:Invent conference, CEO Adam Selipsky spoke of the company’s support for calls from governments and industry to build guardrails into AI to prevent its misuse. This week, at the 2023 edition of the annual event in Las Vegas, he announced that AWS had built exactly that: a service called Guardrails, built into Amazon Bedrock, the platform that provides access to most major AI models.

“An important component of responsible AI is promoting the interactions between your consumers and the applications that are safe, that avoid harmful outputs, and then stay within your company’s guidelines,” said Selipsky in his opening keynote address on Tuesday. “And the easiest way to do this is actually placing limits on what information models can and can’t return.

“Guardrails for Amazon Bedrock … is a new capability that helps you safeguard your generative AI applications with more responsible AI policies. To create a guardrail, Bedrock provides configuration wizards. You can enter a natural language description of the topics you want the model to deny. Guardrails can be used with any of the foundation models that are accessible via Bedrock, including the custom models that you create through your own fine tuning. So now you have a consistent level of protection across all of your generative AI developments. 

“For example, a bank can configure an online assistant to refrain from providing investment advice, or to prevent inappropriate content. An e-commerce site could ensure that its online assistant doesn’t use hate speech or insults. Or utility companies can remove personally identifiable information – or PII – from a customer service call summary.”
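The bank and utility examples above can also be expressed programmatically. The sketch below builds such a configuration as a request payload in the style of the AWS SDK for Python (boto3); the guardrail name, messages, and exact field names are illustrative assumptions modelled on the Bedrock API, not details quoted from the announcement.

```python
# Assumed shape of a Guardrails configuration payload mirroring the bank
# example: deny investment advice as a topic, and anonymise PII in
# responses. Names and field values here are illustrative.
guardrail_request = {
    "name": "bank-assistant-guardrail",  # hypothetical name
    "description": "Blocks investment advice and masks PII.",
    # A natural-language description of a topic the model must refuse
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                "definition": "Recommendations or guidance on investing "
                              "money, e.g. in stocks, bonds or funds.",
                "type": "DENY",
            }
        ]
    },
    # Strip personally identifiable information (PII) from outputs
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that topic.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}

# With AWS credentials configured, the guardrail could then be created
# along these lines (not executed here):
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**guardrail_request)
```

Because the denied topic is described in plain language rather than as keyword lists, the same payload pattern works across any of the foundation models reachable through Bedrock.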

The announcement comes in the wake of hearings held by a United States Senate Judiciary subcommittee in September, during which almost all speakers from both government and industry agreed that AI needed guardrails to keep it in check. This was followed by British Prime Minister Rishi Sunak launching an AI Safety Institute to act as a global hub on AI safety.

Selipsky, who participated in the launch, said at the time: “As one of the world’s leading developers and deployers of AI tools and services, Amazon is committed to collaborating with government and industry in the UK and around the world to support the safe, secure, and responsible development of AI technology. We are dedicated to driving innovation on behalf of our customers and consumers, while also establishing and implementing the necessary safeguards to protect them.”

AWS spelled out the role of Guardrails further in a statement released on Wednesday, saying: “While many models use built-in controls to filter undesirable and harmful content, organisations want to further customise interactions to remain on topics relevant to their business, align with company policies, and adhere to responsible AI principles. 

“Organisations may need to change models, use multiple models, or replicate policies across applications, and they want a simple way to consistently deploy their preferences across all these areas simultaneously. Deep expertise is required to build custom protection systems with these kinds of safeguards and integrate them into applications, and the processes can take months.”

In effect, Guardrails does for AI safety what cloud computing did for the global deployment of businesses like Netflix or Uber: it allows specific local and regional policies that comply with laws, regulations and business rules to be applied – at the push of a button.

While that kind of policy functionality is a powerful means of expanding a business to new markets, Guardrails is a powerful means of protecting a business from itself as it ventures into untested AI models.

“Guardrails drive consistency in how models in Amazon Bedrock respond to undesirable and harmful content within applications,” AWS said in the statement. “To create a guardrail in the Amazon Bedrock console, customers start with natural language descriptions to define the denied topics within the context of their application. Customers can also configure thresholds across hate speech, insults, sexualized language, and violence to filter out harmful content to their desired level.”
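The threshold configuration AWS describes could be sketched as follows; the category names and strength values are assumptions modelled on the Bedrock content-filter API, and a customer would tune them per category rather than use these exact settings.

```python
# Assumed shape of a Bedrock content-filter policy: one entry per harm
# category, each with a strength ("NONE", "LOW", "MEDIUM" or "HIGH")
# applied separately to user inputs and model outputs.
content_policy = {
    "filtersConfig": [
        {"type": "HATE",     "inputStrength": "HIGH",   "outputStrength": "HIGH"},
        {"type": "INSULTS",  "inputStrength": "HIGH",   "outputStrength": "HIGH"},
        {"type": "SEXUAL",   "inputStrength": "MEDIUM", "outputStrength": "HIGH"},
        {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "HIGH"},
    ]
}

# This dict would be supplied as the contentPolicyConfig when creating a
# guardrail, letting a customer filter each category to its desired level.
configured_categories = {f["type"] for f in content_policy["filtersConfig"]}
```

Separate input and output strengths matter in practice: a company may want to block hostile user prompts aggressively while applying a different threshold to what the model itself is allowed to generate.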

One of the early customers of the service, SmartBots AI, helps enterprises build their own conversational chatbots to enhance customer support and sales experiences. Co-founder and chief technology officer Jaya Prakash Kommu said: “It is critical that our customers can trust the responses of the conversational chatbots they are building with SmartBots AI, as these responses represent their brands and will be used by their employees and clients.

“Because our chatbot development service is powered by Amazon Bedrock, customers’ chatbots inherit AWS data safety and privacy best practices. Their chatbots will also have access to Guardrails for Amazon Bedrock to set, control, and avoid inappropriate and unwanted content in user prompts and responses that their chatbots generate, so users have safe and brand-adhering experiences.”

The big winner, of course, is AWS itself. It now, in effect, owns the word that encapsulates the safeguarding of AI in the business world.

* Arthur Goldstuck is founder of World Wide Worx and editor-in-chief of Follow him on Twitter and Instagram on @art2gee
