Slide 1

Slide 1 text

Diversity in AI Bangkok Kamolphan Liwprasert (Fon) WTM Ambassador, GDE Cloud

Slide 2

Slide 2 text

Mission: Google's Women Techmakers program provides visibility, community, and resources for women in technology. We're building a world where all women can thrive in tech.

Slide 3

Slide 3 text

Join the Women Techmakers Members community today! + Access exclusive members-only content + Sneak peeks at upcoming events + Connect with other women like yourself bit.ly/wtmmembership facebook.com/wtmbkk

Slide 4

Slide 4 text

Diversity? in AI? Bangkok

Slide 5

Slide 5 text

😈 Bias Bangkok

Slide 6

Slide 6 text

😈 Bias: ● Age ● Gender ● Race (unconscious bias, gender bias, age bias)

Slide 7

Slide 7 text

Biases come from DATA. Sample results from a search engine (example)
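A toy sketch (not from the slides) of how skew in the training data becomes skew in a model's output: a purely frequency-based "model" that sees "engineer" almost always paired with one gender simply echoes that imbalance back.

```python
# Toy illustration (not from the slides): a frequency-based "model"
# trained on skewed data reproduces the skew it was given.
from collections import Counter

# 90% of the "engineer" training examples mention gender A, 10% gender B.
data = [("engineer", "A")] * 90 + [("engineer", "B")] * 10

counts = Counter(gender for _, gender in data)
prediction = counts.most_common(1)[0][0]   # the most frequent gender "wins"

print(counts)       # Counter({'A': 90, 'B': 10})
print(prediction)   # 'A' -- the imbalance in the data, echoed back
```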

Slide 8

Slide 8 text

Example Prompt: "CEO in Silicon Valley" Model: DALL·E

Slide 9

Slide 9 text

Biased models result in unfair products https://www.newyorker.com/tech/annals-of-technology/is-your-thermostat-sexist

Slide 10

Slide 10 text

Case Study: Tay Chatbot (2016) https://theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist The Tay chatbot turned offensive within 24 hours after learning from Twitter interactions in 2016.

Slide 11

Slide 11 text

More on biased models resulting in unfair products

Slide 12

Slide 12 text

How AI Image Generators Make Bias Worse Generative AI amplifies the bias in the data we have. https://www.youtube.com/watch?v=L2sQRrf1Cd8

Slide 13

Slide 13 text

How to make AI fair? Bangkok

Slide 14

Slide 14 text

How to keep human bias out of AI - Kriti Sharma 👀 Be aware of bias(es) 👫 Diverse team 👷 Diverse experiences https://www.youtube.com/watch?v=BRRNeBKwvNM

Slide 15

Slide 15 text

How about Tools?

Slide 16

Slide 16 text

What-If Tool https://pair-code.github.io/what-if-tool/
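A minimal notebook sketch of launching the What-If Tool; it assumes the `witwidget` package plus a trained TensorFlow estimator, its feature spec, and a list of tf.Example records, all of which (`classifier`, `feature_spec`, `examples`, the label names) are illustrative placeholders, not from the slides.

```python
# Minimal What-If Tool sketch (notebook environment assumed).
# `classifier`, `feature_spec`, and `examples` are placeholders for your
# own trained estimator, its feature spec, and a list of tf.Example records.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

config_builder = (
    WitConfigBuilder(examples[:500])                             # sample of tf.Example records to explore
    .set_estimator_and_feature_spec(classifier, feature_spec)    # model to probe
    .set_label_vocab(["denied", "approved"])                     # illustrative label names
)
WitWidget(config_builder, height=800)                            # renders the interactive tool inline
```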

Slide 17

Slide 17 text

Learning Interpretability Tool https://pair-code.github.io/lit/

Slide 18

Slide 18 text

Language Interpretability Tool (NLP) pair-code.github.io/lit/
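A minimal notebook sketch of embedding LIT; it assumes the `lit-nlp` package and that `my_model` / `my_dataset` implement LIT's Model and Dataset APIs. Both names are placeholders, not from the slides.

```python
# Minimal LIT sketch (notebook environment assumed).
# `my_model` and `my_dataset` are placeholders implementing
# lit_nlp.api.model.Model and lit_nlp.api.dataset.Dataset.
from lit_nlp import notebook

widget = notebook.LitWidget(
    models={"sentiment": my_model},      # models to inspect, keyed by name
    datasets={"reviews": my_dataset},    # datasets to slice and probe
    height=600,
)
widget.render()                          # embeds the LIT UI in the notebook
```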

Slide 19

Slide 19 text

Know Your Data knowyourdata.withgoogle.com/
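Know Your Data itself runs in the browser, but the same question ("who is represented in my data, and how?") can be asked locally. A rough sketch with pandas; the file name and column names ("train.csv", "gender", "label") are hypothetical.

```python
# Rough local complement to Know Your Data: check how a sensitive
# attribute is distributed in your own training data.
import pandas as pd

df = pd.read_csv("train.csv")                       # hypothetical training table
print(df["gender"].value_counts(normalize=True))    # representation per group
print(df.groupby("gender")["label"].mean())         # positive-label rate per group
```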

Slide 20

Slide 20 text

Responsible AI Bangkok

Slide 21

Slide 21 text

Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences.

Slide 22

Slide 22 text

AI should be built on ✅ Transparency ✅ Fairness ✅ Accountability ✅ Privacy 😇 Built for everyone 😇 Accountable and safe 😇 Respects privacy 😇 Driven by scientific excellence 😇 Responsible by design

Slide 23

Slide 23 text

https://ai.google/responsibility/principles/

Slide 24

Slide 24 text

Google’s AI Guidelines 1. Be socially beneficial. 2. Avoid creating or reinforcing unfair bias (around sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief). 3. Be built and tested for safety. 4. Be accountable to people. 5. Incorporate privacy design principles. 6. Uphold high standards of scientific excellence. 7. Be made available for uses that accord with these principles. https://ai.google/responsibility/principles/

Slide 25

Slide 25 text

TRUST “Trust is a fragile thing – hard to earn, easy to lose.” - M.J. Arlidge

Slide 26

Slide 26 text

🔑 Key takeaway: Diversity is an advantage

Slide 27

Slide 27 text

Learn more Introduction to Responsible AI cloudskillsboost.google/course_sessions/5755532/video/386919

Slide 28

Slide 28 text

DevFest Cloud Bangkok 2023 17 December 2023 @ SCBX Next Tech Cloud • Bangkok Register: gdg.community.dev/gdg-cloud-bangkok

Slide 29

Slide 29 text

Thank you! Bangkok Kamolphan Liwprasert (Fon) WTM Ambassador, GDE Cloud