Through the “Black Mirror” – “Joan Is Awful” and AI Malpractice

In a recently published article, IDTechEx discussed the importance of ownership and culpability when deploying AI tools, especially in the context of creative works.

This matter of accountability and the potentially insidious use of artificial intelligence in creating intellectual property is a theme of the first episode of the new “Black Mirror” season, “Joan Is Awful”.

For those who do not already know, “Black Mirror” is a speculative fiction anthology series created by Charlie Brooker. Premiering in 2011 and now on its sixth season, “Black Mirror” runs the gamut of existential subject matter, from questions of ethics and morality (the good of the many against the good of the self) to the potential consequences of unchecked and unregulated scientific advancement.

“Joan Is Awful” follows Joan – played by Annie Murphy – a manager who sits below the board at a tech company and has to make one of her employees redundant, despite the ramifications for the company’s recent green initiative pledge. We also see her texting with her ex-boyfriend, visiting a therapist, and finally sitting down with her fiancé to watch a show on “Streamberry”, this season’s in-universe analogue of Netflix.

Spoilers ahead. Skip this section if you are sensitive to spoilers.

They come across a new show, the titular “Joan Is Awful”, and begin to watch. Everything we have seen thus far in the episode is dramatised, with Salma Hayek playing the character of Joan in the “Streamberry” adaptation. Joan (the one we know) is completely taken aback, with no idea how her life has been so deeply invaded.

The dramatisation skews her personality to display exaggerated negative traits – for instance, callousness towards the employee she makes redundant, when in reality she feels somewhat powerless in the decision and expresses a modicum of pity.

Frighteningly for Joan, the show is not restricted to her account. She becomes ostracised by friends and family and is fired from work (as the dramatisation of her life is seen as a breach of an NDA).

Joan ultimately seeks legal advice, whereupon the lawyer informs her that she consented to Streamberry’s terms and conditions, including the right to use and dramatise any and all aspects of her life, her name included. As the lawyer states at the beginning of their conversation, “I’m as shocked as you are”. Joan then changes tack and proposes suing Salma Hayek for portraying her. Again, the lawyer rules this out, informing Joan that it is only Salma Hayek’s likeness: the entire show is CGI.

Joan is marooned with no legal recourse. And then, in a stroke of genius born from utter desperation, Joan envisions a way to get Salma Hayek invested in her plight: she defecates in a church during a wedding ceremony, knowing that this event will be repeated on the show. The real (at least in terms of Salma Hayek now playing herself) Salma Hayek understandably takes issue with this and talks to her lawyer about suing Streamberry. But again, she has licensed her image to the company, allowing it to take such liberties.

Spoilers end here

Joan and Salma, canvas and paint alike, have no ownership over how they are portrayed. And it is this question of ownership that will be asked more frequently in connection with AI tools as they develop and draw on a more diverse range of data sets.

While the case against Joan and Salma may appear watertight from a legal standpoint in the “Black Mirror” episode, the use of generative AI to create content – even in its current, comparatively limited form – still poses the important question of ownership, a question that current IP laws do not robustly answer. Patent law, for instance, generally considers the inventor to be the first owner of an invention.

In the case of AI, who invents? The human creates the (initial) prompt, but the AI tool creates the output. An AI may also prompt other AI tools, so that AI can act as both prompter and creator. Other parties should also be considered, such as the AI tool’s developers and the owners of the data that comprises the dataset used to train the tool.

The latter was a key component of the reasoning behind Italy’s ban of ChatGPT in April 2023. The ban applied to all users accessing the platform from an Italian IP address and rested on four key points of contention. Two of these were claims by the Italian Garante (the Italian data protection authority) that OpenAI did not properly inform users that it had collected personal data, and that ChatGPT did not require users to verify their age, even though the content that ChatGPT can generate is at times intended for mature audiences.

ChatGPT was restored in Italy at the end of April, with OpenAI addressing these points by making its privacy policy more accessible to users before they register with ChatGPT and by rolling out a new tool to verify users’ ages.

This event could well be a sign of things to come. As AI becomes more advanced, and the content it can generate more sophisticated, the approach taken by the Italian Garante could – and, most would probably agree, should – be one taken by all data protection agencies to ensure that personal data used to train such algorithms cannot be misused.

Because no one wants to be Joan.

Report coverage

IDTechEx forecasts that the global AI chips market will grow to US$257.6 billion by 2033. The report covers the global AI chips market across eight industry verticals, with granular 10-year forecasts in seven categories, spanning geography, chip architecture, and application.

In addition to the revenue forecasts for AI chips, costs at each stage of the supply chain (design, manufacture, assembly, test and packaging, and operation) are quantified for a leading-edge AI chip. Rigorous calculations and a customisable template for customer use are provided, as are analyses of comparative costs between leading- and trailing-edge node chips.

IDTechEx’s latest report, “AI Chips 2023–2033”, answers the major questions, challenges, and opportunities faced by the AI chip value chain.