Do you want agency in the age of AI? Prepare to fight

Writers are protesting studios’ use of AI language models to write screenplays. Actors are on strike after rejecting a proposal from studios that wanted to use AI to scan people’s faces and bodies and then own the right to use those deepfake-style digital copies in perpetuity, without consent or compensation.
What connects these cases is a fear of humans being replaced by computer programs and a sense that there is very little we can do about it. No surprise. Our lax approach to regulating the excesses of the previous tech boom means AI companies have felt safe building and launching exploitative and harmful products.
But that’s about to change. The boom in generative AI has renewed US politicians’ enthusiasm for passing AI-specific laws. New legislation will take time to arrive, but existing laws already provide plenty of ammunition for those who say their rights have been harmed by AI companies.
I just published a story examining the spate of lawsuits and investigations that have hit these companies recently. These cases could be highly influential in making sure AI is developed and used in fairer, more equitable ways. Read it here.
Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot, ChatGPT.
Meanwhile, artists, authors, and the stock-image company Getty Images are suing AI companies such as OpenAI, Stability AI, and Meta, claiming the companies broke copyright law by training models on their work without acknowledgment or payment. Last week, comedian and author Sarah Silverman joined the authors’ copyright fight against AI companies.
Both the FTC investigation and the slew of lawsuits revolve around AI companies’ data practices, which rely on hoovering up data from across the internet to train models. That data inevitably includes personal information and copyrighted works.
These cases will essentially determine how AI companies are legally allowed to operate, says Matthew Butterick, a lawyer representing artists and authors, including Silverman, in class action lawsuits against GitHub and Microsoft, OpenAI, Stability AI, and Meta.