Author: |
Shah, Simmone, Campbell, Charlie, Chow, Andrew R., Garza, Alejandro De La, Guzman, Chad De, Haupt, Angela, Henshall, Will, Javed, Ayesha, Kluger, Jeffrey, Lapowsky, Issie, Mandel, Kyla, Park, Alice, Perrigo, Billy, Popli, Nik, Rajvanshi, Astha, Serhan, Yasmeen, Worland, Justin, Bergengruen, Vera |
Subject: |
|
Source: |
Time International (Atlantic Edition); 10/9/2023, Vol. 202 Issue 11/12, p50-50, 1p, 1 Color Photograph |
Abstract: |
In 2017, while interning at the machine-learning company Clarifai, Inioluwa Deborah Raji had an alarming realization. Raji, 27, was helping the startup train a content-moderation model intended to filter out explicit images, but noticed that the model was disproportionately flagging nonexplicit content containing people of color. So Raji shifted her focus toward research, and how AI companies could ensure that their models do not cause undue harm--especially among populations that are likely to be overlooked during the development process. [Extracted from the article] |
Database: |
Complementary Index |
External link: |
|