
STATEMENT 8: AI WATERMARKING TECHNIQUES

  • Bruce Mullan
  • 7 days ago
  • 2 min read

If your business intends to use AI-generated content (which we suspect most businesses will), then Statement 8: Watermarking Techniques applies to your AI content.


Watermarking is crucial because fake images, videos and other digital content can be hard to spot. The attached image was created by AI. Look closely at the newspaper's publication date: 1984, well before generative AI existed, yet one had to look closely to spot the mistake. (It's also quite common for AI-created humans to have six fingers.) In this example, a green watermark makes the origin of the image clear and transparent to the reader. One safe prediction: at some stage in the near future, a board member will trigger a data breach by using generative AI to create confidential board papers, breaching their director's duties.


The DTA's AI standard defines a watermark as "information embedded into digital content, either perceptibly or imperceptibly by humans to establish digital content origin, authorship or informing users that the contents are AI-generated or significantly modified".

 

A visual watermark can be applied either at the content generation stage or post-generation, prior to distribution.
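To make the "perceptibly or imperceptibly" part of the DTA definition concrete, here is a minimal sketch of an imperceptible watermark: a short provenance marker hidden in the least-significant bits of pixel values at the post-generation stage. This is a toy illustration, not anything the DTA standard prescribes; the marker string and function names are our own, and production systems would use far more robust schemes.

```python
# Toy imperceptible watermark: hide a short ASCII marker in the
# least-significant bit (LSB) of each pixel value. Illustrative only;
# the marker and all names here are hypothetical.

MARKER = "AI-GEN"  # hypothetical provenance marker


def embed_watermark(pixels: list[int], marker: str = MARKER) -> list[int]:
    """Return a copy of `pixels` with `marker` hidden in the LSBs."""
    # Unpack the marker into individual bits, most significant first.
    bits = [(byte >> shift) & 1
            for byte in marker.encode("ascii")
            for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the marker")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the LSB
    return out


def extract_watermark(pixels: list[int], length: int = len(MARKER)) -> str:
    """Read `length` ASCII characters back out of the LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for shift in range(8):
            byte = (byte << 1) | (pixels[c * 8 + shift] & 1)
        chars.append(chr(byte))
    return "".join(chars)
```

Because only the least-significant bit changes, each pixel value shifts by at most 1, which is invisible to a viewer but recoverable by anyone who knows where to look. That recoverability cuts both ways: a simple scheme like this is equally easy to strip, which is exactly the removal risk discussed below.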


Watermarking gives readers transparency, authenticity, and trust: a simple way to show people they are interacting with an AI system. The Coalition for Content Provenance and Authenticity (C2PA) is developing an open technical standard for publishers, creators, and consumers to establish the origin and edit history of digital content. However, this watermarking method is out of scope for the DTA standard.

 

Not all use cases require watermarking. If the content is low risk or does not directly impact the end user (such as a company logo), AI disclosure is not required.


If you rely heavily on digital content, consider situations where watermarking may not be useful. Watermarks can distract from the content or be overused in low-risk contexts. There is also the risk of others replicating your watermark, or removing it and reproducing the content. The app stores are full of cheap tools that alter images, and specifically remove watermarks.


Lastly, the AI standard stipulates that visual watermarks are mandatory for high-risk content uses, and must be compatible with WCAG accessibility requirements where relevant.


Statement 8 mandates that watermarks be used throughout an AI system's lifecycle. This is further reinforced during the Design phase, where you must establish AI transparency (see Statement 10: Adopt a human-centred approach). As part of transparency, it is mandatory to establish a mechanism that informs users of AI interactions and output (Criterion 33).


- Post written by a human. ;-)


Contact us to learn more about how we help implement safe and responsible AI governance practices by emailing



