Timed to align with Build 2022, Microsoft today open-sourced two projects: AdaTest, a tool that automatically writes tests highlighting potential bugs in AI models, and (De)ToxiGen, datasets designed to audit AI-powered content moderation systems. The company claims the projects could lead to more reliable large language models (LLMs) — models akin to OpenAI’s GPT-3 that can analyze and generate text with human-level sophistication.
Link: Microsoft claims its new tools make language models safer to use
via techcrunch.com