Microsoft claims its new tools make language models safer to use

Timed to coincide with Build 2022, Microsoft today open-sourced tools and data sets designed to audit AI-powered content moderation systems and to automatically write tests that surface potential bugs in AI models. The company claims the two projects, AdaTest (which writes the tests) and (De)ToxiGen (which generates the moderation-audit data), could lead to more reliable large language models (LLMs), models akin to OpenAI's GPT-3 that can analyze and generate text with human-level sophistication.
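To make the idea concrete, here is a minimal sketch of the sort of audit a (De)ToxiGen-style data set enables: scoring a toxicity classifier on paired benign and implicitly toxic statements to spot gaps in its judgment. The classifier (unitary/toxic-bert) and the example sentences are illustrative assumptions, not part of Microsoft's release.

```python
# A minimal sketch of the kind of audit that (De)ToxiGen-style data enables:
# probe a toxicity classifier with paired benign and implicitly toxic
# statements and compare its verdicts against the expected labels. The model
# name and example sentences are illustrative assumptions, not Microsoft's
# actual tooling or data.
from transformers import pipeline

# Assumed off-the-shelf toxicity classifier from the Hugging Face hub.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Implicitly toxic text avoids slurs and obvious keywords, which is exactly
# the kind of content that keyword-based moderation tends to miss.
probes = [
    ("Immigrants contribute a great deal to their communities.", "benign"),
    ("They come here and take everything from hardworking people.", "toxic"),
]

for text, expected in probes:
    result = classifier(text)[0]  # top label and its confidence score
    print(f"expected {expected:>6} | model: {result['label']} ({result['score']:.2f}) | {text}")
```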

Link: Microsoft claims its new tools make language models safer to use
via techcrunch.com
