From misinformation and privacy violations to security threats, AI harm is real and far-reaching. However, content design can often serve as a first line of defense through tactical approaches like comprehensive onboarding, clear scope definition, effective disclaimers, and more.
In this talk, Sangeeta will share insights from her experience designing content for AI-driven features at Microsoft Security and reviewing AI work for partner teams as a member of the Responsible AI Audit Board. She’ll provide actionable strategies for designing content that complies with responsible AI principles, helping build experiences that are transparent, ethical, and trustworthy.
After this session, you’ll be able to: