Gender-Based Harms in AI: Protecting Against Non-Consensual Image Alterations

Australian Member of Parliament Georgie Purcell recently raised concerns over a digitally altered image that distorted her body and removed parts of her clothing without her consent, highlighting the sexist and discriminatory consequences of unchecked AI technologies. The incident serves as a reminder that even seemingly innocuous AI-assisted tools can inadvertently perpetuate societal biases.

The insidious nature of AI-assisted image manipulation

Despite their commonplace use, AI-powered image editing tools can subtly reinforce harmful societal norms. When instructed to edit photographs, these tools may exaggerate attributes that society deems favorable, such as youthfulness and sexualization, particularly in images of women.

Deepfake content: A growing concern

The issue of sexualized deepfake content, which predominantly targets women, is a significant concern. Reports estimate that 90-95% of deepfake videos are non-consensual pornography, and that roughly 90% of those depicted are women. The non-consensual creation and sharing of sexualized deepfake imagery has been reported globally, affecting people across demographics, from young women to celebrities such as Taylor Swift.

Addressing the challenge: The need for global action

Though legislative measures have been implemented in some regions to tackle the non-consensual sharing of sexualized deepfakes, regulations regarding their creation remain inconsistent, particularly in the United States. The absence of cohesive international regulations underscores the necessity for collective global action to address this issue effectively.

Detecting AI-generated content is increasingly difficult as the underlying technology advances and apps that facilitate the creation of sexually explicit content proliferate. However, placing the blame solely on the technology overlooks the responsibility of developers and digital platforms to prioritize user safety and rights.

Australia’s response

Australia has taken steps to address this issue, with initiatives such as the Office of the eSafety Commissioner and national laws that hold digital platforms accountable for preventing and removing non-consensual content. However, broader global collaboration and proactive measures are essential to mitigate the harms of non-consensual sexualized deepfakes effectively.

Moving forward: A collective responsibility

The unchecked use of AI in image editing and the proliferation of sexualized deepfake content pose significant challenges. Effective solutions require comprehensive regulatory frameworks and collective global action that prioritizes user safety and rights in technology development and enforcement. By acknowledging the role of both AI technologies and human responsibility, societies can work towards mitigating the gender-based harms of AI-enabled abuse.

Conclusion

The recent incident involving Georgie Purcell is a stark reminder of the potential for AI technologies to perpetuate sexist and discriminatory practices. Addressing this demands a multi-pronged approach: regulatory frameworks, global collaboration, and a commitment to uphold user safety and rights in how technology is built and governed. By prioritizing these values, we can curb the harms of non-consensual sexualized deepfakes and promote a more equitable digital landscape.