Google open-sourced its watermarking tool for AI-generated text

Illustration: The Verge

Google’s SynthID text watermarking technology, a tool the company created to make AI-generated text easier to identify, is now available open-source through the Google Responsible Generative AI Toolkit, the company announced on X.

“Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly,” Pushmeet Kohli, the vice president of research at Google DeepMind, told MIT Technology Review.

Watermarks have become increasingly important as large language models are used to spread political misinformation, generate nonconsensual sexual content, and serve other malicious purposes. California is already looking into making AI watermarking mandatory, while China’s government started requiring it last year. Yet the tools are still a work in progress.

SynthID, which was announced last August, helps make AI-generated output detectable by adding an invisible watermark to images, audio, video, and text as they’re generated. Google says the text version of SynthID works by subtly adjusting the probabilities of the tokens a model picks, producing a statistical pattern that software can detect but humans can’t:

An LLM generates text one token at a time. These tokens can represent a single character, word or part of a phrase. To create a sequence of coherent text, the model predicts the next most likely token to generate. These predictions are based on the preceding words and the probability scores assigned to each potential token.

For example, with the phrase “My favorite tropical fruits are __,” the LLM might start completing the sentence with the tokens “mango,” “lychee,” “papaya,” or “durian,” and each token is given a probability score. When there’s a range of different tokens to choose from, SynthID can adjust the probability score of each predicted token, in cases where it won’t compromise the quality, accuracy and creativity of the output.

This process is repeated throughout the generated text, so a single sentence might contain ten or more adjusted probability scores, and a page could contain hundreds. The final pattern of scores, combining the model’s word choices with the adjusted probability scores, is considered the watermark.
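To make the mechanism concrete, here is a minimal Python sketch of the general keyed probability-biasing idea the passage describes. It is not Google’s actual algorithm, and the g_value helper, the secret key, and the strength parameter are illustrative assumptions:

import hashlib
import random

def g_value(key: str, context: tuple, token: str) -> float:
    # Pseudorandom score in [0, 1) derived from a secret key, the
    # preceding tokens, and the candidate token (illustrative only;
    # not SynthID's real scoring function).
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermarked_sample(key, context, candidates, strength=2.0):
    # Nudge the model's probabilities so tokens with a high keyed
    # g-value become slightly more likely, then sample as usual.
    tokens = [t for t, _ in candidates]
    weights = [p * (1.0 + strength * g_value(key, context, t))
               for t, p in candidates]
    return random.choices(tokens, weights=weights, k=1)[0]

# Toy next-token distribution for "My favorite tropical fruits are __"
candidates = [("mango", 0.4), ("lychee", 0.3), ("papaya", 0.2), ("durian", 0.1)]
print(watermarked_sample("secret-key", ("fruits", "are"), candidates))

Because the bias is keyed and small, the output still reads as an ordinary high-probability completion; only software that knows the key can check whether the chosen tokens skew toward high scores.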

Google claims the system, which it has already integrated into its Gemini chatbot, doesn’t compromise the quality, accuracy, creativity, or speed of generated text, a long-standing problem for watermarking systems. Google says it can work on text as short as three sentences, as well as text that’s been cropped, paraphrased, or modified. But it struggles with shorter passages, content that’s been heavily rewritten or translated, and responses to factual questions, where the model has little freedom in its word choices and therefore little room to embed the pattern.
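A corresponding detector needs only the key, not the model: it recomputes the keyed scores for the tokens that were actually chosen and checks whether they skew high. A minimal sketch, reusing the hypothetical g_value helper from above, also shows why short or heavily edited text is hard to call:

def detection_score(key, tokens, window=2):
    # Average keyed g-value of the chosen tokens. Unwatermarked text
    # averages around 0.5; watermarked text skews higher. With only a
    # few tokens, or after rewriting or translation has replaced the
    # chosen tokens, the average is too noisy to separate the cases.
    scores = []
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - window):i])
        scores.append(g_value(key, context, tok))
    return sum(scores) / len(scores)

print(detection_score("secret-key", "my favorite tropical fruits are mango".split()))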

“SynthID isn’t a silver bullet for identifying AI-generated content,” Google wrote in a blog post in May. “[But it] is an important building block for developing more reliable AI identification tools and can help millions of people make informed decisions about how they interact with AI-generated content.”
