May 16, 2023 at 11:54 AM #10907 — SUCULTURE (Admin), London, United Kingdom
Microsoft, GitHub, and OpenAI AI litigation: What does this mean for Black-owned / African tech start-ups?
The ongoing litigation involving Microsoft, GitHub, and OpenAI underscores the importance of understanding the legal, IP, and contractual implications of this era of AI innovation. It highlights prominent considerations around copyright law, compliance with the US Digital Millennium Copyright Act (DMCA), and the use of licensed and unlicensed materials in AI, redefining how tech start-ups and organisations approach software development.
Its implications are far-reaching, particularly in the realm of open source and in the technology sector more broadly.
On 11 May 2023, the defendants in the case (Microsoft, GitHub, et al.) successfully defended against a majority of the claims brought against them.
The lawsuit alleges that Microsoft, GitHub, and OpenAI breached the terms of open-source licences and violated the DMCA in creating the AI tools Copilot and Codex – in particular by reproducing, profiting from, and distributing copyrighted code without regard to the associated copyright notices or licence terms, violating open-source community terms, and misrepresenting the source and extent of the code used, thereby facilitating copyright infringement and fraud.
As noted in the response filed by Microsoft and OpenAI, they argue the complaint “fails on two intrinsic defects: the lack of injury and lack of an otherwise viable claim,” alleging “a grab bag of claims that fail to plead violations of cognizable legal rights.” These arguments raise important legal and ethical questions.
In arriving at the judgement, the Court dismissed the claims for infringement of specific sections of the DMCA, as well as those for false representation and designation of origin, tortious interference, unjust enrichment, fraud, unfair competition, and negligence – although with leave to amend.
The reasoning was that the court found the claimants failed to plead sufficient injury-in-fact for their privacy and property damage claims, but acknowledged their standing to seek injunctive relief, which may prevent the defendants (Microsoft, GitHub, and OpenAI) from taking specific actions – protecting the assets or business interests of others.
What does this mean for Black-owned / African tech start-ups?
Current guidance from the US Copyright Office states explicitly that, absent sufficient human involvement, AI creations are not eligible for copyright protection.
Other jurisdictions, such as the EU, are signatories to the 1996 WIPO Copyright Treaty, implemented via the Copyright in the Information Society Directive 2001; the UK is also a signatory. In the UK, the primary legislation governing copyright is the Copyright, Designs and Patents Act 1988 (CDPA), which may be enforceable internationally through these treaties.
With Getty Images and other companies considering legal action against Stability AI, Midjourney, and DeviantArt over AI art tools alleged to violate copyright law by scraping images and artists’ work from the internet, there may be significant IP and contractual issues to consider if you are using AI to create or develop code. If you are a start-up or venture-backed company, these are important considerations to be aware of: companies need to ensure they are protected and that their tech/software investment is sound and will withstand VC technical and investment due diligence.
Similarly, it may be important to consider whether inputting code owned by your employer into ChatGPT, GPT-4, Google Bard, etc. violates your company’s intellectual property or confidentiality policies and/or agreements.
Three important questions any patent or copyright lawyer working with AI and software development may ask include: which specific components of the software were developed using open source, which specific open-source licence(s) were used, and which code was autonomously generated – and this list is not exhaustive.
The important issue here is that, when using AI-generated code, provenance and originality need to be determinable in the design of products and systems.
As we monitor the progress of this case, and of others like it, it prompts a crucial re-evaluation of tech strategies in the face of a fast-evolving AI-related legal landscape.
What steps are you taking within your organisation to ensure that your tech/risk management strategy keeps up with the challenging legal landscape around AI?
What legal, or technical measures are you considering to mitigate potential IP and copyright issues?
May 16, 2023 at 12:20 PM #10908 — Julian Cole (Member), London, England, United Kingdom
@SUCULTURE Thank you very much for making the tech community aware of this case.
It has wide implications. The lesson to be learned here is that ethical AI use, fostering trust in open-source communities, and safeguarding stakeholder rights are all components of a robust AI risk strategy.
It sounds like businesses should seek legal advice early.