ShieldGemma 2 – Developing safe and responsible AI for images

youtube
Responsible AI development in multimodal contexts requires robust image safety classifiers. Explore our model for classifying images as "safe" or "unsafe" across key safety categories. It effectively moderates both generated images (from any image generation model) and real images (e.g. an image input to a vision-language model), enabling safer use of image generation and VLMs. #Gemma #GemmaDeveloperDay

Speakers: Dana Kurniawan, Wenjun Zeng
Products Mentioned: Gemma
2025/04/02


Recently posted programming learning videos

Let’s rewind it back to Google IO ‘24

Google

Get ready for #GoogleIO May 20-21, where...

  2025/04/03

Architecting for Multi-Cloud: AWS and Beyond with PwC

Amazon
cloud

This webinar explores the fundamentals o...

  2025/04/03

Can you beat our time solving the green world? #GoogleIO

iot
Google

Think you’ve mastered the #GoogleIO puzz...

  2025/04/03

Pushing the capabilities of Gemma 3 via distillation and RL fine-tuning

Specialized capabilities (e.g. math abil...

  2025/04/02

Welcome to the Gemmaverse

The Gemma family of open models keeps ev...

  2025/04/02

Gemma on mobile and web. Best and worst practices

Mobile

Come learn new methods and best practice...

  2025/04/02


Agentic AI vs Generative AI | Agentic AI vs Generative AI Explained |

🔥Generative AI Course: Masters Program: ...

  2025/04/02

Modern Observability and Event Driven Architectures - Martin Thwaites

This talk was recorded at NDC London in ...

  2025/04/02

Advanced Cloud Native Development with .NET Aspire - Scott Hunter & Ma

cloud

This talk was recorded at NDC London in ...

  2025/04/02