The International AI Safety Report 2025 sends a sobering message: as AI evolves, so too does its potential for misuse, especially against women. One of the most alarming consequences of these advances is the proliferation of technology-facilitated gender-based violence (TFGBV), from deepfake pornography to online stalking, impersonation, and psychological abuse.
While much of the global discourse has focused on the existential risks of AI—loss of control, biosecurity, and labour market disruptions—the very real, immediate threat to women’s digital safety is often treated as peripheral. But for millions of women, AI’s unintended consequences are already here. And they’re devastating.
A Growing Threat: Deepfakes and AI-Generated Harassment
According to the International AI Safety Report, malicious actors are increasingly weaponizing general-purpose AI to generate fake content that harms individuals. Deepfake pornography—non-consensual synthetic media created using AI—is among the most egregious forms of abuse. These synthetic videos, often indistinguishable from real footage, are being used to humiliate, blackmail, and silence women.
Despite growing awareness, reliable global statistics remain scarce. However, independent research by Sensity AI reported a 900% increase in deepfake pornographic videos from 2018 to 2023, with over 96% targeting women, primarily without their knowledge or consent. The situation in countries like Indonesia, India, and the Philippines—where digital gender divides persist and reporting mechanisms remain weak—is even more precarious.
The report confirms what women’s rights organisations have long argued: the cost of inaction is rising. Advanced AI models are now capable of generating photorealistic images, cloned voices, and scripted video content at scale, with minimal technical expertise. These tools, originally designed to augment creativity and productivity, are being twisted into weapons of surveillance and shame.
Not Just a Technical Problem: A Crisis of Ethics and Power
The threat of TFGBV isn’t rooted in technology alone; it is a product of unequal systems, patriarchal norms, and a lack of regulatory accountability. When developers fail to consider the gendered impact of AI systems, they inadvertently reinforce harmful biases and leave women digitally vulnerable.
The 2025 report acknowledges that AI models continue to reflect and amplify societal biases, including those related to gender, race, and political ideology. Even as technical approaches for bias mitigation improve, they often come with trade-offs—typically at the expense of underrepresented communities.
Moreover, open-weight models (those publicly available for download and adaptation) cut both ways: they foster transparency and innovation, but they also allow abusers to fine-tune and deploy generative models for harm, beyond the reach of corporate safety updates or oversight.
It is not enough to tweak datasets or add content filters. What we need is a fundamental shift in AI ethics—one that centres the safety of women, children, and marginalised communities from the ground up.
Policy Gaps, Platform Failures, and the Role of the Private Sector
Despite mounting evidence, most countries still lack legislation explicitly addressing AI-generated gender-based violence. Indonesia, for instance, has taken bold steps with its Law No. 12/2022 on Sexual Violence Crimes (UU TPKS), which recognises digital abuse and coercive manipulation. However, enforcement remains weak, and the legal system is slow to adapt to emerging technologies like deepfakes.
At the international level, the failure of many governments—including Indonesia—to ratify ILO Convention No. 190 underscores a broader hesitancy to treat workplace and digital harassment as systemic, rights-based issues.
Meanwhile, social media platforms and AI developers continue to profit from viral synthetic content, rarely acting unless public outrage forces their hand. Detection efforts like Meta's Deepfake Detection Challenge or YouTube's content-flagging systems are reactive at best and often ineffective in protecting victims, especially in non-English-speaking contexts.
The private sector must do more than issue ethics statements. Companies should:
- Implement safety-by-design principles in all AI product development.
- Invest in robust moderation and real-time detection tools trained on diverse global datasets.
- Collaborate with civil society organisations, especially women-led groups, to develop context-sensitive safeguards.
Towards Safer, Gender-Responsive AI
The future of AI is not deterministic; it is shaped by human choices. We must decide, collectively and urgently, how we want these tools to serve society. Addressing TFGBV means integrating gender-responsive approaches into every layer of AI governance, from data collection and model training to deployment and monitoring.
Here’s what must happen:
- Global Standards for TFGBV. Governments and international bodies must include TFGBV prevention in their national AI strategies and ethics guidelines.
- Legislative Innovation and Cross-Border Enforcement. Deepfake abuse transcends borders. We need harmonised laws and cross-jurisdictional agreements that allow rapid takedown of abusive content and legal redress for victims, even if the perpetrator operates anonymously or abroad.
- Research and Evidence Generation. There is a pressing need for gender-disaggregated data on AI harms. Policymakers cannot act on anecdotal evidence alone. Governments, academia, and tech firms must invest in interdisciplinary research on the psychological, legal, and economic impacts of TFGBV.
- Victim-Centred Platforms and Reporting Tools. Platforms like ours, the Bullyid App, which already provides confidential legal and mental health support in Indonesia, should be scaled and integrated into national protection systems. Survivors deserve trusted, trauma-informed pathways to justice, online and offline.
- Ethics Committees with Gender Representation. AI labs, think tanks, and regulatory bodies must include women, especially from the Global South, in ethics panels and product audits. If women's lived experiences aren't at the table, the technology will never reflect their realities.
AI has the potential to be a powerful force for equality—but only if we treat safety as the foundation, not an afterthought. Technology-facilitated gender-based violence is not a “niche” issue. It is a human rights crisis, an economic burden, and a glaring failure of our ethical frameworks.
AI’s future will be written by those who build it—and those who choose to protect the people it affects most.