After 148 days, the Writers Guild of America (WGA) has called off its strike against several major Hollywood studios. As writers leave the picket lines, a new labor agreement aims to limit the use and training of generative artificial intelligence tools in Hollywood, setting a precedent for future labor agreements in a wide variety of industries.
On September 27, 2023, the WGA, a union representing American screenwriters, and the Alliance of Motion Picture and Television Producers (AMPTP), which represents the major television and film production companies, reached an agreement in principle to govern their relationship for the next three years. This crucial agreement ends a strike that halted an estimated 10 billion dollars’ worth of media productions in 2023.
In July, following the Directors Guild of America (DGA) resolution with the AMPTP, McGuireWoods released an article discussing the strike and other labor-market frictions involving AI – recognizing that “employers can mitigate the risk of employee disenfranchisement by developing thoughtful AI policies and safeguards, while promoting transparency and committing to developing and using AI responsibly.” While fears around artificial intelligence (AI) contributed significantly to the WGA strike, safeguards around AI contributed significantly to the WGA’s resolution.
Among more traditional labor issues, like wages and benefits, the use of generative AI was an integral part of the WGA’s demands. Because generative AI can generate new text based on a universe of existing material, many writers feared the technology would make their creativity and their craft obsolete. The WGA firmly held in its demands that:
- AI “cannot write or rewrite literary material.”
- AI “cannot be used as a source.”
- Material covered by the agreement “may not be used to train AI.”
The negotiated agreement
The WGA was tentatively successful in obtaining its AI-related demands. The Memorandum of Agreement for the 2023 WGA Theatrical and Television Basic Agreement (MOA) includes a section on generative artificial intelligence, or GAI, which states that neither traditional AI nor GAI can be considered a “writer” and, “therefore, any written material produced by traditional AI or GAI will not be considered literary material.” The MOA also makes clear that writers cannot be required to use GAI to create what would otherwise be considered “literary material” if written by an individual. If studios ask writers to use material produced by GAI as the basis for writing or rewriting “literary material,” they must disclose to the writer that the material was produced by GAI.
Material produced by GAI will also not be considered “source material,” meaning that a writer’s compensation, writing credits, and other rights cannot be diminished by the use of GAI-produced material. The MOA provides the following example:
The Company provides Writer A with written material, primarily in the form of a screenplay, produced by GAI that has not been previously published or exploited, and provides no other material. The Company asks Writer A to rewrite the GAI-produced written material. The Company must pay Writer A at least the minimum compensation for a screenplay under Article 13.A.1.a.(2), as well as at least the amount specified in Article 13.A.1.a.(9), “Additional Compensation Screenplay – No Assigned Material.” Written material produced by GAI is not considered source material when determining Writer A’s writing credit and will not disqualify Writer A from eligibility for separate rights.
The MOA is not, however, without reciprocity. If writers choose to use GAI in their work, they must obtain the company’s consent to do so and must further comply with all company policies governing GAI programs. And although GAI programs that produce written material are prohibited, the MOA notes that writers may still be required to use other forms of AI in their work – for example, programs that “detect potential copyright infringement or plagiarism.”
Most importantly, both parties retain flexibility to assert their rights in the future, given the rapidly evolving legal landscape and GAI technology. The MOA specifically notes that a writer is not prohibited from “asserting that the exploitation of their literary material to train, inform, or otherwise develop GAI software or systems is within such rights and is not otherwise permitted under applicable law.” This is particularly relevant, as the limits of copyright and other intellectual property protections for AI-assisted works remain an open question.
The WGA is not alone in seeking to balance the benefits and risks of AI tools. A recent McKinsey & Company survey reported that while the use of AI tools increased rapidly in 2023, organizations are also carefully weighing the downsides: 20% of surveyed companies say they are implementing policies and procedures to monitor AI-related risks. In addition to concerns about hallucinations, potential bias, and intellectual property issues – and the risks presented by evolving state laws and regulatory measures – employers implementing AI tools should continue to evaluate their impact on the company’s workforce and the benefits of a responsible-use policy.
There is still much to come from Hollywood and other creative professions. Although the WGA and DGA have left the picket lines, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has not yet reached an agreement with the AMPTP, and that group has made its own GAI-related demands to limit digital manipulation and replication of performers’ voices and likenesses. Like the AI-driven resolution for writers, an AI-driven resolution for actors and performers will continue to set precedent for employment agreements and potential safeguards for the use of AI. Stay tuned.