Similarly, in May of 2018, Facebook described how it used Instagram posts with hashtags #️⃣ to train software to identify subjects in images. The training didn’t just pinpoint humans; it also taught the AI to recognize objects like a can of soda!
Many social media users are infuriated with their content being used without their permission but are especially upset by the thought of these corporations using their faces to train software.
Why, you ask? 🤔
The problems at hand
AI facial recognition has become increasingly popular software for identifying 🔎 missing individuals and criminals.
Additionally, it’s been used for automated photo tags as well as granting phone 📱 and house 🏠 access.
IBM’s database of Flickr images, more specifically, is intended to broaden the scope of AI recognition to include more minorities, including women and people of color, to improve inclusivity in facial identification.
However, some individuals are worried that IBM’s attempt at reducing 📉 racial bias could actually lead to increased targeting 🎯 of minorities.
Not to mention, although the subjects of Flickr’s images did consent to appearing on that website, they never 🙅 consented to having their photos used to improve AI facial recognition.
To make matters worse, IBM has claimed that individuals who appear in these images can opt-out of the project, but the company has made it nearly impossible to actually do so. They’ve kept their dataset private 🔒 and barred users from finding their usernames in the set.
Luckily, NBC News provided a username checker that lets Flickr users find out whether their photos were included in the dataset and opt out if so.
Social media users, be aware of how corporations are using your photos online, and don’t fall prey to these practices!