This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
CDT
This is some text inside of a div block.

Robots don’t have taste, but you do: How to define and measure AI content quality

This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.

In the rush to add generative AI features to products, content quality often takes a back seat. Content designers can tell when something’s off, but explaining what’s wrong — and making the case for fixing it — is still new terrain.

In this session, you’ll learn a practical framework for evaluating AI-generated content using clear acceptance criteria, consistent test data, and adversarial content testing. Using real examples, we’ll break down how to create a repeatable benchmarking process that translates subjective assessments into actionable data. You’ll walk away with strategies to measure AI-generated content, identify risks, and build a strong case for quality in AI-driven projects.

In this session, you’ll learn how to:

  • Define acceptance criteria and use consistent “golden data sets” to more objectively measure any type of AI-generated content.
  • Stress test with adversarial content to uncover hidden risks in AI-generated output.
  • Gain strategies for effectively communicating AI content quality issues to engineers, product managers, and other stakeholders.
  • Make the business case for investing in AI content evaluation.

Sign up for Button email!

Subscribe for content design resources, event announcements, and special offers just for you.

Thanks! Check your inbox to confirm your subscription.
👉 IMPORTANT: Firewalls and spam filters can block us. Add “hello@buttonconf.com” to your email contacts so they don’t!
Oops! Something went wrong while submitting the form.