Bill SB 1047 ('restrict commercial use of harmful AI models in California'): https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
Bill AB 3211 ('AI generation services must include tamper-proof watermarks on all AI generated content'): https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB3211
Image is Senator Scott Wiener (Democrat), who put forward SB 1047.
//----//
SB 1047 is written as if California's senators think AI models are Skynet or something.
Quotes from SB 1047:
"This bill .... requires that a developer, before beginning to initially train a covered model, as defined, comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified."
"c) If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities."
Additionally, SB 1047 is written so that California senators can dictate which AI models are "OK to train" and which are "not OK to train".
Or, put more plainly: "AI models that adhere to California politics" vs. "everything else".
Legislative "woke bias" for AI models, essentially.
//----//
AB 3211 has a more grounded approach, focusing on how AI-generated content can potentially be used for online disinformation.
The AB 3211 bill says that all AI generation services must include watermarks that identify the content as being produced by an AI tool.
I don't know of any examples of AI being used for political disinformation yet (have you?).
Though, having seen the latest Flux model, I realize it is becoming really hard to tell the difference between an AI-generated image and a real stock photo.
AB 3211 is frustratingly vague about how to prove an image is AI-generated while still respecting user privacy.
Quotes from AB 3211:
"This bill, the California Digital Content Provenance Standards, would require a generative artificial intelligence (AI) provider, as provided, to, among other things, apply provenance data to synthetic content produced or significantly modified by a generative AI system that the provider makes available, as those terms are defined, and to conduct adversarial testing exercises, as prescribed."
The bill does not specify what method(s) should be used to provide 'provenance'.
But practically speaking, for present-day AI image generation sites, this means either adding extra text to the file's metadata, or subtly encoding text into the image itself by modifying the RGB values of its pixels.
The latter is known as an "invisible watermark": https://www.locklizard.com/document-security-blog/invisible-watermarks/
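To make that concrete, here is a minimal sketch (in Python, using Pillow) of the invisible-watermark idea: hiding a short payload in the least-significant bit of each pixel's red channel. This illustrates the general technique only; the file names and payload fields are made-up examples, not what any real generation site actually embeds.

# A minimal sketch of an "invisible watermark": hide a payload string in the
# least-significant bit of each pixel's red channel. File names and payload
# fields are hypothetical examples.
from PIL import Image

def embed_lsb(in_path, payload, out_path):
    img = Image.open(in_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    bits += "00000000"  # NUL terminator so the decoder knows where to stop
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("image too small for this payload")
    stamped = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's LSB
        stamped.append((r, g, b))
    img.putdata(stamped)
    img.save(out_path, "PNG")  # must be lossless; JPEG would destroy the LSBs

def extract_lsb(in_path):
    img = Image.open(in_path).convert("RGB")
    bits = [str(r & 1) for (r, g, b) in img.getdata()]
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int("".join(bits[i:i + 8]), 2)
        if byte == 0:  # hit the terminator
            break
        data.append(byte)
    return data.decode("utf-8", errors="replace")

embed_lsb("generated.png", "tool=ExampleAI;date=2024-08-01", "marked.png")
print(extract_lsb("marked.png"))  # -> tool=ExampleAI;date=2024-08-01

Changing a color channel by at most 1 per pixel is invisible to the eye, which is the whole point.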
AB 3211 explicitly states that the provenance data must be tamper-proof.
This is strange, since any aspect of a .png or .mp3 file can be modified at will.
It seems the legislators behind this bill have no clue what they are talking about here.
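To illustrate why, here is a quick sketch of the metadata route: a provenance-style PNG text chunk can be written, read, and stripped by anyone with a few lines of Pillow. The file names and chunk contents are placeholders.

# Writing a provenance-style text chunk into a PNG, then stripping it again.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

meta = PngInfo()
meta.add_text("provenance", "generated-by=ExampleAI")
Image.open("generated.png").save("tagged.png", pnginfo=meta)

tagged = Image.open("tagged.png")
print(tagged.text)  # {'provenance': 'generated-by=ExampleAI'}

# "Tamper": copy the pixels into a fresh image and save without the metadata.
clean = Image.new(tagged.mode, tagged.size)
clean.putdata(list(tagged.getdata()))
clean.save("clean.png")
print(Image.open("clean.png").text)  # {}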
Depending on how one interprets the AB 3211 bill, this watermark encoding could include other bits of information as well, such as the user's IP address, the prompt used for the image, and the exact date the image was generated (as in the hypothetical payload of the first sketch).
Any watermark added to a public image that includes private user data would be in violation of EU privacy law (the GDPR).
That being said, expect new "Terms of Service" on AI generation sites in response to this bill.
Read the terms carefully. There might be a watermark update to generated images, or there might not.
Watermarks encoded in the RGB values of generated images can easily be removed by resizing the image in MS Paint, or by taking a screenshot of it.
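Continuing the earlier sketch (this snippet reuses the extract_lsb function defined above): a single resize re-samples every pixel, scrambling the least-significant bits and destroying the payload. A screenshot produces an entirely new file, so metadata-based provenance does not survive it either.

from PIL import Image

marked = Image.open("marked.png")
smaller = marked.resize((marked.width - 1, marked.height - 1))  # re-samples every pixel
smaller.save("resized.png")
print(extract_lsb("resized.png"))  # garbage bytes, or an empty string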