AB316: No AI scapegoating allowed!
(I am not a lawyer)
An interesting law is now in effect in California - AB316. The law is as follows:
The people of the State of California do enact as follows:
SECTION 1. Section 1714.46 is added to the Civil Code, to read:
1714.46. (a) “Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(b) In an action against a defendant that developed or used artificial intelligence that is alleged to have caused a harm to the plaintiff, it shall not be a defense, and the defendant may not assert, that the artificial intelligence autonomously caused the harm to the plaintiff.

(Source)
In practice, this means that as a developer you cannot escape liability by pointing to the unpredictable nature of LLMs when the AI you used harms a user.
If your chatbot decides to tell your customer to kill themselves, it's your problem.
I think this is a reasonable law, but it feels a bit vague. Good software needs guardrails against failure, and LLMs can ultimately be muzzled: we control the spigot of text or operations that get returned to the user.
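Concretely, the "spigot" is just code you own sitting between the model and the user. Here's a minimal sketch of that idea; the names (`check_reply`, `BLOCKED_PATTERNS`, the fallback string) are made up for illustration, and a toy denylist is nowhere near a real safety stack, which layers classifiers, policy models, and human review.

```python
# Minimal output guardrail sketch: screen the model's reply before it reaches the user.
# All names here are hypothetical; this is not a real moderation policy.

BLOCKED_PATTERNS = ["kill yourself", "hurt yourself"]  # toy denylist for illustration only

SAFE_FALLBACK = (
    "I can't help with that. If you're struggling, please reach out to someone you trust."
)

def check_reply(reply: str) -> str:
    """Pass the model's reply through only if it clears the guardrail; otherwise return a fallback."""
    lowered = reply.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        return SAFE_FALLBACK
    return reply

if __name__ == "__main__":
    print(check_reply("Here's how to reset your password."))  # passes through unchanged
    print(check_reply("You should just kill yourself."))      # replaced with the fallback
```

The point isn't that a string match solves safety; it's that the final output path is deterministic code the developer controls, which is exactly why "the AI did it autonomously" is a weak defense.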
The vagueness comes from who the "developer" is when the LLM goes awry. Is it OpenAI's fault if a third-party app slips up, or is it the third party's? If a research lab releases a new LLM that another company decides to put in an airplane that crashes, can the original lab be held liable, or are they only liable if they claimed it was an OSS airplane LLM?
As for AI safety and guardrail companies, we will probably see greater adoption as companies look for ways to shift liability. AI insurance for the research labs? Probably already gearing up for YC Spring '26.
