AI can be a powerful tool for campaigners – if used cautiously and ethically

shirin anlen and Raquel Vazquez Llorente 11th July 2023

AI can be an effective tool for advocacy, but campaigners need to be aware of the risks involved, warn shirin anlen and Raquel Vazquez Llorente from the human rights group WITNESS.

WITNESS is driven by a deep belief in the power of audiovisual technologies to protect and defend human rights. We recognise the potential for using AI to support human rights advocacy – but only with appropriate caution and ethical considerations. Here are some practical ways in which AI can be used by campaigners. 

This post is an excerpt from a longer piece, available at WITNESS.

Key considerations

1) The use of AI to create or edit media should not undermine the credibility, safety and content of other human rights organisations, journalists, fact-checkers and documentation groups. When generating or modifying visual content with AI, it is important to think about the role of global human rights organisations in setting standards and in using tools in a way that does not cause collateral harm to smaller, local groups who face much more extreme pressures.

2) AI output should be clearly labelled and watermarked, and campaigners should consider including metadata or invisible fingerprints to track the provenance of media (a minimal code sketch follows this list). We strongly advocate for a more innovative and principled approach to content labelling that can express complex ideas and provide audiences with meaningful context on how the media has been created or manipulated.

3) A careful approach to consent is critical in AI audiovisual content. Human rights organisations can draw from existing guidelines about informed consent in visual content.
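To make labelling and provenance concrete, here is a minimal, illustrative sketch of one approach: stamping a visible disclosure onto an AI-generated image and embedding a provenance note in the file's metadata, using the open-source Pillow library. The file names and label text are hypothetical examples rather than WITNESS tooling, and dedicated provenance standards such as C2PA offer far more robust, tamper-evident records than simple text chunks.

```python
# Illustrative sketch only: add a visible label and embed provenance
# metadata in an AI-generated image using Pillow. File names and label
# text are hypothetical examples, not WITNESS tooling.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str, note: str) -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Visible watermark: a plain-text banner along the bottom edge.
    draw.rectangle([(0, img.height - 24), (img.width, img.height)], fill="black")
    draw.text((8, img.height - 20), "AI-generated content", fill="white")

    # Embedded provenance: PNG text chunks travel with the file and can
    # carry context on how the media was created or manipulated.
    meta = PngInfo()
    meta.add_text("ai-disclosure", "This image was generated with AI.")
    meta.add_text("provenance-note", note)
    img.save(out_path, "PNG", pnginfo=meta)

label_ai_image("testimony_visual.png", "testimony_visual_labelled.png",
               "Artistic AI visualisation of a testimony; not documentary footage.")
```

Note that text chunks like these can be stripped by platforms when media is re-shared, which is one reason we advocate for more durable, standardised approaches to provenance.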

Potential use cases 

  • AI can anonymise individuals in order to protect their identities

Filters such as blurring, pixelation, or voice alteration applied to individuals or places can help create digital disguises for images and audio (a simple pixelation sketch follows at the end of this section). Similarly, creative applications can both protect individuals and engage an audience, like the deepfake methods employed in the documentary Welcome to Chechnya.

However, the use of AI techniques can produce dehumanising results that should be avoided. Current AI tools often amplify social, racial and gender biases, and can produce visual errors that depict deformed human bodies. Any process that uses AI for identity protection should always have careful human curation and oversight, along with a deep understanding of the community and audience it serves. For questions to help guide the use of AI for identity protection, see the full piece.
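As a concrete illustration of the simplest of these disguises, the sketch below pixelates detected faces using OpenCV's bundled Haar cascade face detector. It is a minimal example under our own assumptions, not a vetted anonymisation tool: automated detection misses faces, and coarse pixelation can sometimes be defeated, so any real workflow needs the human curation and oversight described above. The file names are hypothetical.

```python
# Illustrative sketch only: pixelate detected faces with OpenCV to create
# a simple digital disguise. Real identity protection needs human review:
# automated detection misses faces, and weak pixelation can be reversed.
import cv2

def pixelate_faces(in_path: str, out_path: str, blocks: int = 10) -> None:
    img = cv2.imread(in_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = img[y:y+h, x:x+w]
        # Downscale, then upscale with nearest-neighbour to make coarse blocks.
        small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
        img[y:y+h, x:x+w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)

pixelate_faces("interview_frame.jpg", "interview_frame_anon.jpg")
```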

  • Artistic approaches can be used to visualise testimonies, as long as certain conditions are met

There is a rich history of animation and alternative documentary storytelling forms, and AI can help advance audiovisual forms that effectively convey stories and engage audiences for advocacy purposes. AI tools such as text-to-image, text-to-video and frame interpolation can generate visuals for testimonies that lack accompanying images or video, conveying the underlying emotions and subtexts of the experience. However, it should be made clear that these visuals are an artistic expression rather than a literal, word-for-word representation. For questions to help guide the use of AI for visualising testimonies, see the full piece.
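As a rough illustration, the sketch below uses the open-source diffusers library to generate a single text-to-image visual and stores a plain-language disclosure alongside it. The model, prompt and file names are illustrative assumptions, not recommendations, and any real use would need the curation and labelling practices discussed throughout this piece.

```python
# Illustrative sketch only: generate an artistic visual for a testimony
# with a text-to-image model, then record a disclosure alongside it.
# Model name, prompt and file names are hypothetical examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("watercolour illustration of an empty kitchen at dusk, "
          "a half-packed suitcase by the door")
image = pipe(prompt).images[0]
image.save("testimony_visual.png")

# Keep the disclosure with the asset so audiences get meaningful context.
with open("testimony_visual.txt", "w") as f:
    f.write("Artistic AI-generated interpretation of a survivor's account; "
            "not a literal or documentary depiction.")
```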

  • AI can reconstruct places for advocacy purposes

When physical places are inaccessible for security reasons or have been destroyed, AI techniques can enable audiences to visualise a site of historical importance or the circumstances experienced by a certain community in a given place (such as detention conditions). Similarly, they can help us imagine alternative realities and futures, such as environmental devastation. However, in these instances, it should be disclosed how the visuals were generated, and the use of AI should always match the advocacy objectives and be grounded in verified information. For questions to help guide the use of AI for reconstructing places, see the full piece.

  • The use of AI to “resurrect” deceased people needs special consideration 

Bringing deceased individuals “back to life” raises many ethical challenges around consent, exploitation of the dead, and re-traumatisation of surviving families and communities. Generating lifelike representations with AI that use someone’s likeness may replicate the harm and abuse that the individual or their community suffered in the first place. On the other hand, the careful use of AI can help represent alternative realities or bring back someone’s message, and have a powerful advocacy effect. When using AI tools for these purposes, it is critical to consider the legal implications; incorporate strict consent from the next of kin, community, or others, depending on the culture, context and risks; think about respect for the memory of the individual in the curation and creation process; and clearly disclose the use of AI.

  • AI should not be used to generate humans and events where real-life footage can be obtained

Photojournalists, human rights defenders and documentation teams put their safety at risk to cover events and collect valuable information. Their work can expose abuses, gather evidence of crimes, or connect with audiences. In a world where mis- and disinformation are rampant, using AI to generate visual content takes us further away from real evidence and undermines our ability to fight against human rights violations and atrocities. Importantly, in these situations, audiences expect to receive real information about real events.

  • AI should not be used on its own to edit the words of personal testimonies

When using written or visual testimonies for advocacy purposes, applying AI to edit the material can introduce errors and changes in tone and meaning. Interpreting these testimonies requires a sensitivity and a comprehension of the subject matter and the purpose of the material that is beyond AI’s capabilities.

Given the anticipated social impact of generative AI and the lack of regulation addressing its risks and harms on a global scale, we must ensure we understand the ethical challenges this technology poses, and our role in addressing them. Otherwise, we risk devaluing and undermining the credibility of visual content and of those who have the most to lose. For how specific uses such as those outlined above should be approached, and for questions that can help guide organisations, read the full piece.

shirin anlen is the Media Technologist, Technology Threats and Opportunities, WITNESS. 

Raquel Vazquez Llorente is the Head of Law and Policy, Technology Threats and Opportunities, WITNESS.

WITNESS is a global human rights organisation that helps people use video and technology to protect and defend their rights. Their Technology Threats and Opportunities Team engages early on with emerging technologies that have the potential to enhance or undermine trust in audiovisual content. More about WITNESS’s work on deepfakes, synthetic media and generative AI can be found here: https://witnessgenai.global/
