
TECHNICAL STATEMENT 8: AI WATERMARKING TECHNIQUES

  • Writer: Bruce Mullan
  • Mar 30
  • 2 min read

Updated: Apr 17

If your business intends to use AI-generated content, as we suspect most businesses eventually will, then Statement 8: Watermarking Techniques applies to your AI content.


Watermarking is crucial because fake images, videos and other digital content can be hard to spot. The attached image was created by AI. Look closely at the newspaper publication date: 1984, well before generative AI existed, but you had to look closely to spot the mistake. (It's also quite common for AI-created humans to have six fingers!) In this example, a green watermark makes the origin of the image clear and transparent to the reader. One prediction I'm confident of: at some stage in the near future, a board member will trigger a data breach by using generative AI to create confidential board papers, breaching their director's duties.


The DTA's AI technical standard defines a watermark as "information embedded into digital content, either perceptibly or imperceptibly by humans to establish digital content origin, authorship or informing users that the contents are AI-generated or significantly modified".
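To make the "perceptibly or imperceptibly" part of that definition concrete, here is a minimal sketch of an imperceptible watermark: hiding a short provenance tag in the least-significant bits of raw pixel bytes. The pixel data and tag are illustrative assumptions only; the DTA standard does not mandate any particular embedding technique.

```python
# Sketch only: LSB embedding of a provenance tag in raw pixel bytes.
# Changing each byte by at most 1 makes the mark imperceptible to a viewer.

def embed_lsb(pixels: bytearray, tag: bytes) -> bytearray:
    """Hide `tag`, bit by bit, in the least-significant bit of each byte."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(pixels: bytearray, n_bytes: int) -> bytes:
    """Read back `n_bytes` previously hidden by embed_lsb."""
    value = 0
    for i in range(n_bytes * 8):
        value = (value << 1) | (pixels[i] & 1)
    return value.to_bytes(n_bytes, "big")

# Example: tag a fake 64-byte "image" as AI-generated.
image = bytearray(range(64))
tagged = embed_lsb(image, b"AI")
assert extract_lsb(tagged, 2) == b"AI"
```

A perceptible watermark (like the green mark in the image above) would instead alter the pixels visibly; the principle of binding origin information to the content is the same.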


This image shows how Adobe is meeting the technical standard.


Example of an Adobe watermark that meets the technical standard

 

A visual watermark can be established at either the content generation stage or during post-generation but prior to distribution.


Watermarking provides a reader with transparency, authenticity and trust: a simple way to show people they are interacting with an AI system. The Coalition for Content Provenance and Authenticity (C2PA) is developing an open technical standard for publishers, creators and consumers to establish the origin and edit history of digital content. However, use of this watermarking method is out of scope in the DTA standard.
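To show the idea behind provenance standards like C2PA, here is a much-simplified sketch that binds an origin claim to content via a hash. Real C2PA manifests are cryptographically signed and embedded in the file itself; this sketch is neither, and the field names are my own assumptions, not C2PA's.

```python
# Simplified illustration of content provenance (NOT the C2PA format):
# a claim records who generated the content and a hash that binds the
# claim to these exact bytes, so any edit invalidates it.
import hashlib
import json

def make_provenance_claim(content: bytes, generator: str) -> str:
    claim = {
        "claim_generator": generator,                       # assumed field name
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": [{"label": "ai_generated", "value": True}],
    }
    return json.dumps(claim, sort_keys=True)

def verify_claim(content: bytes, claim_json: str) -> bool:
    claim = json.loads(claim_json)
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

image_bytes = b"...raw image data..."
claim = make_provenance_claim(image_bytes, "ExampleAI/1.0")
assert verify_claim(image_bytes, claim)              # untouched content passes
assert not verify_claim(image_bytes + b"x", claim)   # edited content fails
```

The design point is that provenance travels with the content and can be checked by anyone, which is why C2PA targets publishers, creators and consumers alike.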

 

Not all use cases require watermarking. If the content is low risk or does not directly impact the end-user (such as creating a company logo) then AI disclosure would not be required according to the technical statement.


If you rely heavily on digital content, consider situations where watermarking may not be useful. Watermarks can distract from the content or be overused in low-risk contexts. There is also the risk of others replicating your watermark, or removing it and reproducing the content: the app stores are full of cheap tools that alter images and, specifically, remove watermarks.


Lastly, the AI standard stipulates that visual watermarks are mandatory for high-risk content uses, and must be compatible with WCAG accessibility requirements where relevant.


Statement 8 mandates watermarks be used throughout an AI system's lifecycle. This is further reinforced during the Design phase, where you must establish AI transparency (see Statement 10 - Adopt a human-centred approach). It's mandatory to establish a mechanism to inform users of AI interactions and output, as part of transparency (Criteria 33).
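As a hedged sketch of one way such a transparency mechanism might look, the snippet below wraps every piece of AI output with a visible disclosure notice before it reaches the user. The function name and notice wording are my assumptions, not wording from the standard or Criteria 33.

```python
# Sketch: a minimal "inform users of AI output" mechanism.
# Every AI-generated string passes through this wrapper before display.
AI_DISCLOSURE = "[This content was generated with the assistance of AI.]"

def with_disclosure(ai_output: str) -> str:
    """Attach a visible AI disclosure notice to generated text output."""
    return f"{AI_DISCLOSURE}\n{ai_output}"

print(with_disclosure("Quarterly summary: revenue rose 4%."))
```

In practice the mechanism would sit at the system boundary (a UI banner, email footer or document header), so the disclosure cannot be skipped on a per-output basis.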


Stay safe, Bruce


ABOUT ME

I write all my own content; you can tell by the odd typo and occasional missing word. I use AI for research. I also teach organisations how to implement the Australian AI Governance Standard and confidently transition to AI systems. To learn about my upcoming public AI Governance workshops visit: Public workshops


To learn more about AI Governance, check out my podcast: Hitchhikers Guide to AI Governance Podcast


Podcast that covers the AI Technical Statements


