It’s worth noting that a conversation I had with the first version of ChatGPT’s image feature seemed to bypass some of the guardrails put in place by OpenAI. At first, the chatbot refused to identify a Bill Hader meme. ChatGPT then guessed that an image of Brendan Fraser in George of the Jungle was actually a photo of Brian Krause in Charmed. When asked if it was sure, the chatbot answered correctly.
In that same conversation, ChatGPT went wild trying to describe an image of RuPaul’s Drag Race. I shared a screenshot of Kylie Sonique Love, one of the drag queen contestants, and ChatGPT guessed it was Brooke Lynn Hytes, another contestant. I questioned the chatbot’s answer, and it guessed Laganja Estranja, then India Ferrah, then Blair St. Clair, then Alexis Mateo.
“I apologize for the oversight and incorrect identifications,” ChatGPT responded when I pointed out the repetitive nature of its incorrect responses. As I continued the conversation and uploaded a photo of Jared Kushner, ChatGPT refused to identify him.
If the guardrails are removed, whether through a jailbroken version of ChatGPT or an open-source model released in the future, the privacy implications could be quite troubling. What if every photo of you posted online could be linked to your identity in just a few clicks? What if someone could take a photo of you in public without your consent and instantly find your LinkedIn profile? Without adequate privacy protections for these new image features, women and other minorities are at risk of an influx of abuse from people using chatbots for stalking and harassment.