Charlie Morgan

22 September 2023
Topics in this article
  • Communications & Technology
  • Strategy & Planning

In the rapidly evolving landscape of business technology, one particular development is leading the conversation: generative AI. This cutting-edge branch of artificial intelligence harnesses vast datasets to create dynamic content for its users, from text to images and videos. In a 2023 Forbes Advisor survey, 64% of businesses said they expect AI to increase their overall productivity, demonstrating growing confidence that generative AI will transform business operations. But as businesses delve deeper into the intricate foundations of generative AI, they may also be exposing themselves to unwanted legal consequences.

Tech companies responsible for the design of generative AI tools are seeing an emerging trend: lawsuits over copyrights. AI tools are trained on a variety of data sources that, if not explicitly excluded, most likely include materials owned or restricted by the original creator. Adding to the challenge, generative AI is largely considered a ‘black box’, which means no one – not even the creators – fully understands the steps that AI takes to generate outputs. These outputs are intricate and difficult to trace back to their sources, often involving complex, non-linear processes and vast datasets. This poses a legal risk to users of generative AI, as they may unknowingly reproduce someone else’s work in the content they generate.

Although there are many examples where developers of AI technology are experiencing challenges in IP protection and copyright infringement, the risk to users is still not clearly defined.

  • In February 2023, a lawsuit was filed against Stability AI by Getty Images, claiming the service “copied more than 12 million photographs from Getty Images’ collection, along with associated captions and metadata, without permission from or compensation to Getty Images”
  • In June 2023, a lawsuit was filed against OpenAI by two US authors for claims of copyright infringement, stating that OpenAI’s training datasets came from copyrighted sources that were used without consent, credit, or compensation
  • In September 2023, Microsoft pledged to take full responsibility and protect its users against any copyright infringement lawsuits arising from the use of its Copilot AI tool

While generative AI might provide substantial improvements to businesses, minimizing the risks it can bring requires a delicate balancing act. The consequences of copyright infringement are no laughing matter: in the US, statutory damages can reach $150,000 for each work that is infringed, which can have financial and reputational implications for a brand for years to come.

So, the question is: How can you prioritize safeguarding your business against legal risks when piloting or implementing AI solutions?  

  • Your people are your first line of defense: Implement robust training programs to educate your employees on the benefits and risks associated with external AI tools. Create airtight policies to prevent your data from being shared unintentionally with open-source AI, and make sure original sources are cited in employee work whenever AI is involved in developing the output. Utilize experts and third-party consulting firms where possible to help design and roll out programs for your employees.
  • Data source transparency is key: To reduce the risk of intellectual property infringement, AI tools should be developed using safe and legal data sources, such as data that is licensed or owned by the developer or your company. Businesses should seek to avoid AI tools where the provider cannot confirm their training data is properly acquired. Work with your provider to request certifications, check company reports, and seek a mutual understanding of their data handling and storage policies.
  • Get a legal guarantee: Ensure you have contracts in place with all suppliers and customers that have access to your data. All legal documents should provide strict confidentiality protections that define what data sources are used and how they are being used by each party. Employ third-party audit companies to identify any gaps in your contracts and ensure your supply base is compliant.

By carefully considering and addressing the priorities listed above, you can start to unlock the full potential of generative AI while diligently shielding your business from legal pitfalls that could jeopardize your operations and reputation. As of this writing, the legal ramifications of AI trained on copyrighted data remain unresolved. End users of generative AI have not yet been targeted in lawsuits; however, tech corporations are beginning to address this concern and seek ways to reassure potential customers through transparent data practices.

Looking ahead, we will undoubtedly see international policies and laws around artificial intelligence developed and implemented. Many of these initial policies will be fluid, but they will eventually be shaped by the outcomes and precedents set by today’s lawsuits. In this ever-shifting landscape, companies must remain vigilant and proactively informed about evolving copyright restrictions.

To learn more from Proxima’s technology experts, contact us today.


Authored by Charlie Morgan with assistance from Al Maranon, Simon Geale, and Simon O’Brien.
