Written by: Agita Pasaribu
The digital landscape, while offering unprecedented connectivity and innovation, also harbors emerging threats that challenge fundamental notions of privacy, trust, and personal security. Among these, deepfake pornography stands out as a particularly insidious form of abuse. This sophisticated manipulation of digital media has far-reaching consequences, extending beyond individual harm to impact public trust and societal stability. Understanding its nature, prevalence, and the legal and practical responses required is crucial for navigating this evolving digital challenge.
What are Deepfakes and Non-Consensual Intimate Imagery (NCII)?
Deepfakes are a form of synthetic media, predominantly videos or images, that leverage advanced artificial intelligence (AI) techniques to create hyper-realistic yet entirely fabricated depictions of individuals. This technology can convincingly make it appear as though someone is saying or doing something they never did, blurring the lines between reality and fabrication. The core of this technology lies in machine learning, enabling the creation of content that is often indistinguishable from genuine media.
Non-Consensual Intimate Imagery (NCII), in this context, refers to AI-generated “deepfakes” that portray individuals in sexually explicit scenarios without their consent. The definition is broad, covering not only wholly synthetic images but also cases where a person’s face or other recognizable features are edited onto existing sexually explicit or intimate images.
For instance, the U.S. TAKE IT DOWN Act formally defines a “Deepfake” as a “video or image… generated or substantially modified using machine-learning techniques… to falsely depict an individual’s appearance or conduct within an intimate visual depiction”. This explicit inclusion of AI-generated content within the definition of NCII signifies a critical shift in how harm is understood in the digital age. Traditionally, “pornography” implied a real, recorded act. However, the recognition of deepfakes as NCII demonstrates that the legal and societal understanding of harm is expanding beyond physical exposure to encompass the fabrication and dissemination of false intimate content.

Why Deepfake Pornography is a Growing Concern in Indonesia
Indonesia has witnessed a concerning rise in technology-facilitated Gender-Based Violence (GBV), which includes the non-consensual dissemination of intimate images. This issue has been steadily increasing, with a particularly dramatic spike in reported cases since the onset of the COVID-19 pandemic. The Indonesian National Commission on Violence Against Women (Komnas Perempuan) reported a nearly fourfold increase in technology-facilitated GBV cases from 2019 to 2020, with 940 reports filed in 2020.

The proliferation of deepfake incidents in Indonesia is further exacerbated by the absence of specific regulatory frameworks and the widespread misuse of AI technology. Platforms like Telegram have notably emerged as key channels for the dissemination of this altered content.
The dramatic spike in technology-facilitated GBV since the COVID-19 pandemic points to a concerning causal relationship. During the pandemic, individuals spent significantly more time online, expanding their digital footprint and their vulnerability. This heightened online presence, combined with existing societal vulnerabilities and inadequate legal protections, created an environment ripe for digital exploitation, including deepfake pornography. The trend underscores the critical need for digital literacy programs and legal frameworks that can adapt quickly to accelerated technological adoption, especially during societal shifts that push more activity online.
The Technology Behind the Deception: How Deepfakes Are Made
The unsettling realism of deepfakes stems from sophisticated artificial intelligence techniques that learn to mimic human appearance and behavior with uncanny accuracy. Understanding these underlying technologies is key to grasping the scale of the challenge they present.
A Simple Look at AI: Generative Adversarial Networks (GANs) and Autoencoders
At the heart of deepfake creation are powerful AI techniques, primarily involving advanced neural networks such as Autoencoders and Generative Adversarial Networks (GANs).
Autoencoders can be conceptualized as a two-part system designed for image and video manipulation. The “encoder” component compresses an image or video into a concise “latent sketch,” effectively capturing the essential and unique features of a face. This compressed information is then passed to the “decoder,” which learns to reconstruct the face from that sketch.
For the purpose of face-swapping, a common deepfake application, a shared encoder is trained on two different faces. However, each face is given its own distinct decoder. By feeding one person’s facial sketch into the other person’s decoder, the second face can then convincingly adopt the expressions and movements of the first, creating a seamless illusion.
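To make the architecture concrete, the sketch below shows a shared encoder with two identity-specific decoders in PyTorch. All layer sizes, optimizer settings, and the stand-in data are illustrative assumptions, not a production deepfake pipeline; real face-swap systems train on thousands of aligned face crops.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea.
# Shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face into a compact latent 'sketch'."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific identity's face from the shared sketch."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder; one decoder per identity (faces A and B).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

# Stand-in batches; real training uses many aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training step: each identity is reconstructed through its own decoder,
# so the shared encoder learns identity-agnostic pose and expression.
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
opt.zero_grad(); loss.backward(); opt.step()

# The "swap": encode face A, decode with B's decoder, so B's identity
# adopts A's expressions and movements.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design point is that the encoder is shared while the decoders are not: the latent sketch carries pose and expression, and each decoder supplies the identity.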
Generative Adversarial Networks (GANs) operate on a competitive principle, involving a “duel” between two AI components: a “Generator” and a “Discriminator”.
The Generator’s primary task is to create fake images or videos that are as realistic as possible, while the Discriminator’s role is to identify whether an image is real or fake. Through continuous iteration and feedback, the Generator constantly improves its ability to produce increasingly convincing fakes, pushing the boundaries of realism. This adversarial process is also employed to detect and refine any flaws in the deepfake during multiple rounds of processing, making the final output exceptionally difficult for human eyes, and even some automated detectors, to discern as fake.
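The adversarial “duel” can be sketched in a few lines of PyTorch. The tiny fully connected networks, hyperparameters, and stand-in data below are illustrative assumptions rather than a real deepfake model; the point is the alternating discriminator/generator update that drives realism upward.

```python
# Minimal GAN training step: a Generator vs. Discriminator duel.
# Architectures and data are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 3 * 32 * 32

generator = nn.Sequential(          # noise -> fake image (flattened)
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> probability it is real
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, img_dim) * 2 - 1   # stand-in "real" batch in [-1, 1]

# 1) Discriminator step: learn to label real as 1 and fakes as 0.
fake = generator(torch.randn(16, latent_dim)).detach()
d_loss = bce(discriminator(real), torch.ones(16, 1)) \
       + bce(discriminator(fake), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Generator step: produce fakes the discriminator scores as real.
fake = generator(torch.randn(16, latent_dim))
g_loss = bce(discriminator(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Repeating these two steps many thousands of times is the feedback loop
# that makes the generator's output progressively harder to tell from
# genuine media.
```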
The Human Cost: Devastating Impacts on Victims
The consequences of deepfake pornography are not merely digital; they inflict profound and lasting harm on individuals, often with devastating psychological, social, and economic repercussions.
Psychological Trauma and Emotional Distress
Victims commonly experience intense humiliation, shame, anger, a deep sense of violation, and self-blame.
These feelings contribute to immediate and ongoing emotional distress, leading to withdrawal from family and school life, and significant challenges in maintaining trusting relationships. In severe cases, the emotional toll can tragically escalate to self-harm and suicidal thoughts. Victims often report feeling isolated, disconnected, and deeply mistrustful of those around them, manifesting in symptoms of depression and anxiety.
Reputational Damage and the “Silencing Effect”
Deepfakes can severely damage a victim’s reputation, potentially leading to lower academic performance and diminished confidence in future opportunities, driven by the pervasive fear that the fabricated images will remain permanently accessible online.
The cumulative effect of this public humiliation and distress is what Amnesty International terms the “silencing effect,” where victims withdraw from various aspects of their public life, both online and offline, due to the lasting ramifications of online gendered abuse.
Unlike traditional forms of abuse, digital content, especially deepfakes, can persist online indefinitely, leading to “permanent online availability”.

Indonesia’s Legal Landscape: Navigating a Patchwork of Laws
Indonesia’s efforts to combat deepfake pornography are currently navigating a complex legal landscape, characterized by a patchwork of existing laws that, while relevant, often fall short in comprehensively addressing AI-generated content.
Existing Frameworks: ITE Law, Pornography Law, Sexual Violence Law, and Personal Data Protection Law
Indonesia’s current legal framework encompasses several relevant statutes, including the Electronic Information and Transactions Law (ITE Law 1/2024), the Pornography Law 44/2008, the Sexual Violence Law 12/2022, and the Personal Data Protection Law 27/2022. Specific provisions within these laws aim to address illicit content: Article 45(1) of the ITE Law addresses the distribution of child pornography; Article 4(1) of the Pornography Law 44/2008 tackles the production of pornographic content; and Article 4(2)(c) of the Sexual Violence Law 12/2022 focuses on the “very deed” of sexual violence.
The Personal Data Protection Law (Law No. 27 of 2022) is particularly relevant: it prohibits creating or falsifying personal data for one’s own benefit or to harm others, punishable by up to six years’ imprisonment and a fine. Importantly, facial images are explicitly recognized as specific biometric data under this law. Furthermore, a “Right to be Forgotten” is implicitly recognized in Article 26(3) of the ITE Law (Law No. 1 of 2024), theoretically allowing victims to request the removal of their images or videos from platforms.
The presence of multiple existing laws suggests a legislative intent to address various forms of digital and sexual harm. However, a consistent finding across numerous sources is that these laws are “insufficient,” “lack precision,” or are “outdated” when it comes to deepfake pornography. This highlights a recurring challenge in cyber law: legal frameworks, which are slow to draft and enact, struggle to keep pace with the rapid evolution of new technologies like AI. The lag underscores the need for legislative foresight and more agile legal mechanisms, whether broader, principle-based laws rather than overly specific, technology-dependent prohibitions, or faster legislative amendment processes.

Obstacles for Victims: The “Right to be Forgotten” and Enforcement Challenges
Despite the implicit recognition of the “right to be forgotten” in the ITE Law, individuals affected by deepfake pornography in Indonesia face significant obstacles in asserting this right. These challenges stem from a combination of factors: a lack of clear implementing regulations, the overall inadequacy of existing legal frameworks, and, crucially, law enforcement practices that do not adequately consider gender issues. The provisions of Article 26(3) of the ITE Law are considered legally vague, which ultimately hinders effective legal protection for victims of deepfake pornography. A particularly distressing obstacle is that victims may not be believed by authorities or society, leading to re-traumatization when they assert that they are not the person depicted in the deepfake video.
The problem extends beyond the mere existence of laws to their practical implementation and enforcement. The “lack of implementing regulations” and “insufficient gender sensitivity within law enforcement” indicate systemic failures. This means victims encounter a double burden: the initial trauma of the deepfake itself, compounded by the frustrating and often re-traumatizing struggle to find justice within a system that is either ill-equipped, lacks clear guidelines, or is not empathetic to the unique nature of AI-generated abuse.
The risk of “victim blaming” further exacerbates this issue: ambiguous laws and a lack of gender sensitivity in enforcement create significant obstacles for victims asserting their rights and seeking justice, and can themselves result in re-traumatization. Addressing deepfake pornography effectively therefore requires not only the enactment of new, explicit laws but also comprehensive, mandatory training for law enforcement, judicial officials, and victim service providers. This training must foster a victim-centered approach that understands the unique psychological and legal complexities of AI-generated harm.
Fighting Back: Current Efforts and What More is Needed
Addressing the complex challenge of deepfake pornography requires concerted efforts from various stakeholders, encompassing government initiatives, policy reforms, and technological advancements.
Government Initiatives: Cyber Patrols and Ethical AI Guidelines (Kominfo, Polri)
Indonesia’s Ministry of Communication and Informatics (Kominfo, since renamed the Ministry of Communication and Digital Affairs, Komdigi) has taken steps by issuing ethical guidelines for the use of AI, building on its existing efforts to remove harmful online content. Kominfo operates an AI-based crawling machine, “AIS,” which has been in service since 2018. The system identifies harmful content across the internet, including pornography and fake news, and issues orders for content removal within 2×24 hours.
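For readers unfamiliar with how such detection-and-takedown systems work conceptually, the sketch below shows a toy content-flagging crawler that records a 2×24-hour removal deadline. It is emphatically not the actual AIS implementation; the classifier stub, URLs, threshold, and deadline handling are all hypothetical.

```python
# Hypothetical sketch of a content-flagging crawler with removal deadlines.
# NOT the real AIS system: classifier, URLs, and thresholds are stand-ins.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RemovalOrder:
    url: str
    score: float
    deadline: datetime  # provider must remove the content by this time

def classify(content: str) -> float:
    """Stand-in for an ML classifier scoring how likely content is harmful."""
    return 0.97 if "harmful" in content else 0.02

def patrol(pages: dict, threshold: float = 0.9) -> list:
    orders = []
    for url, content in pages.items():
        score = classify(content)
        if score >= threshold:
            # Mirror the 2x24-hour rule: removal ordered within 48 hours.
            deadline = datetime.now(timezone.utc) + timedelta(hours=48)
            orders.append(RemovalOrder(url, score, deadline))
    return orders

pages = {
    "https://example.test/post-1": "harmful synthetic content",
    "https://example.test/post-2": "an ordinary benign post",
}
for order in patrol(pages):
    print(f"Remove {order.url} (score {order.score:.2f}) "
          f"by {order.deadline:%Y-%m-%d %H:%M} UTC")
```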
The National Police’s Directorate of Cyber Crime actively collaborates with Kominfo to conduct cyber patrols. Their objective is to detect and prevent the spread of deepfake videos, particularly those used for misinformation or fraudulent purposes. They also provide recommendations to Kominfo aimed at enhancing public digital literacy regarding deepfakes. Indonesia has also established “AI War Rooms” since 2017. These initiatives combine AI technology with expert teams to continuously monitor and address online misinformation and fraud, representing a proactive approach to identifying and mitigating potential threats.
The government’s deployment of AI-based crawling machines and cyber patrols demonstrates a commendable proactive stance in identifying illicit content. However, the continued proliferation of deepfakes despite these efforts suggests that detection alone is insufficient. The system’s reliance on ordering content providers to remove identified material, and on processing reports from the public, introduces a significant reactive component into the enforcement chain; combined with the sheer volume and rapid spread of deepfakes, this creates a scalability problem that limits the immediate impact of detection.
While Indonesia’s detection efforts are valuable, the current framework is geared toward identifying and then reacting to content, rather than enabling immediate, comprehensive, and automated removal at the source. This highlights the need for stronger, legally mandated platform accountability, for example requirements for swift, automated content removal upon detection similar to the U.S. TAKE IT DOWN Act’s 48-hour rule, as well as direct, real-time collaboration between government agencies and tech platforms for faster response times.
Policy Recommendations: Strengthening Laws, Specific Deepfake Legislation, and Cross-Sector Collaboration
Indonesia must address a complex array of legal, cultural, and enforcement obstacles, including insufficient gender sensitivity within law enforcement and the ambiguity surrounding existing laws. A collaborative effort involving both government and civil society is deemed essential to effectively protect victims’ rights and address the increasing prevalence of deepfake pornography.
The Ministry of Communication and Digital Affairs (Komdigi) is urged to spearhead a regulation that explicitly outlaws AI-generated Child Sexual Abuse Material (CSAM). This approach is considered a potentially quicker solution than pursuing an entirely new law. Komdigi should also actively seek partnerships with social media and technology platforms to facilitate the deployment of advanced deepfake detection tools.
There is an urgent and widely recognized need for specific legal regulations tailored to address the misuse of deepfake technology.
Recommended preventive strategies include:
- significantly improving digital literacy nationwide;
- developing robust, large-scale false-content detection systems;
- fostering cross-sector collaboration among government, media, and civil society organizations;
- enforcing clear, firm regulations with appropriate legal sanctions;
- pursuing comprehensive regulatory reform, potentially including a revision of the ITE Law; and
- exploring new authentication technologies, such as blockchain, to ensure digital content transparency (see the sketch after this list).
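To make the last recommendation concrete: authentication schemes of this kind typically begin by fingerprinting media with a cryptographic hash, which can then be anchored on a tamper-evident ledger such as a blockchain. The sketch below uses only Python’s standard library; the in-memory “ledger” is a hypothetical stand-in for a real blockchain write.

```python
# Minimal sketch of content fingerprinting for provenance.
# The in-memory 'ledger' is a hypothetical placeholder for a blockchain.
import hashlib
from datetime import datetime, timezone

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest: any single-bit edit to the media changes the hash."""
    return hashlib.sha256(media_bytes).hexdigest()

def anchor(digest: str, ledger: list) -> dict:
    """Record the digest with a timestamp; a real system would write this
    entry to a blockchain so it cannot be silently altered later."""
    entry = {"digest": digest,
             "anchored_at": datetime.now(timezone.utc).isoformat()}
    ledger.append(entry)
    return entry

ledger: list = []
original = b"...raw bytes of an authentic video..."
anchor(fingerprint(original), ledger)

# Verification: a tampered or AI-altered copy no longer matches the record.
tampered = original + b"deepfake edit"
known = {entry["digest"] for entry in ledger}
print(fingerprint(original) in known)   # True  -> provenance confirmed
print(fingerprint(tampered) in known)   # False -> content was altered
```

Such a scheme proves that a given file matches what was originally published; it cannot by itself identify who created an altered copy.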
The breadth of these recommendations, ranging from legal reform and digital literacy to tech partnerships and cross-sector collaboration, clearly indicates that no single solution will suffice. The problem of deepfake pornography is deeply multifaceted and requires a comprehensive, integrated strategy.
The repeated emphasis on gender sensitivity further underscores that legal changes alone will not succeed without addressing underlying cultural biases and improving enforcement practices. This points to the need for an interconnected “ecosystem” in which different interventions reinforce one another: combating deepfake pornography effectively requires a holistic, multi-stakeholder approach that integrates legal, technological, educational, and societal interventions, rather than isolated or fragmented measures.
This implies that policymakers must adopt a systemic perspective, designing policies that foster collaboration across government, industry, academia, and civil society. The goal is to build a resilient digital environment where laws are supported by cutting-edge technology, an informed public, and a responsive, empathetic justice system.