AI and Security: Ensuring That Opportunities Outweigh the Threats, Reports IDTechEx
Anyone accessing social media platforms will likely have heard of ChatGPT by now. Created by OpenAI and released in November 2022, the generative AI tool reached 100 million registered users within three months of release. The world’s most advanced chatbot has been used to answer simple queries, write songs, and draft press releases, all with varying degrees of success.
While ChatGPT is not (yet) a match for the human mind in the creative domain, its capabilities are nothing less than staggering, with the potential to transform entire workflows. Yet, along with the opportunities afforded by generative AI, there are risks.
In its recent report, “AI Chips 2023-2033”, IDTechEx forecasts that the global AI chips market will grow to exceed US$250 billion by 2033, with the IT & Telecoms, BFSI (Banking, Financial Services and Insurance), and Consumer Electronics industry verticals leading the way in terms of revenue generated up to 2033.
This growth is made possible by the increasing complexity and functionality of machine learning models, representing significant opportunities for both businesses and consumers. However, improper use of AI tools threatens those same groups, and measures must be taken to ensure that the opportunities afforded by advanced AI greatly outweigh the threats.
ChatGPT, DALL-E 2, and Siri are all examples of generative AI tools. These are systems capable of generating text, images, or other media in response to prompts, where the data produced is based on the training data sets employed to create and refine the models used.
Current intellectual property (IP) laws are not well suited to account for the legal ownership of the intangible assets that AI tools such as these can generate. Patent law generally considers the inventor to be the first owner of an invention. In the case of AI, who is the inventor? The human creates the (initial) prompt, but the AI tool creates the output.
An AI may also be used to prompt other AI tools, so that AI can act as both prompter and creator. But granting an AI tool IP ownership status, as the law currently stands, necessarily gives the AI the same status extended to a legal person.
Therein lies the question of culpability and ethics: if the AI is the legal owner of a piece of work, then the human who deployed, commissioned, or used the AI tool is exempt from culpability. This could be considered unethical, as the human user should bear some responsibility for the ethical ramifications of using such an AI tool (particularly in the case of unlawful AI use, such as assistance with writing a script for malware).
A possible way around this is to grant the AI the same legal status as a child, wherein the human user would be analogous to a child’s guardian and so bear some responsibility, while the AI would still retain ownership. Although this ensures that the human user bears responsibility for the AI tool’s actions, it does not adequately address autonomous AI, where no human prompting is necessary. In addition, legislators may be uncomfortable about bestowing legal status onto machines.
Other parties that should be considered are the AI tool’s developers and the owners of the data that comprises the dataset used to train the AI tool (a key component of the reasoning behind Italy’s ban of ChatGPT in March 2023). The question of ownership will be addressed with greater urgency as the use of AI tools and their outputs grows, as it surely will over the coming years.
Malpractice
Where AI can be used with good intentions, such as assisting with the appropriate syntax when writing computer scripts or detecting fraudulent financial transactions, it can also be used for ill, ranging from the deceptive to the illegal. Given that generative AI tools can assist with script writing, it does not matter to the tool what type of script is being written.
As such, generative AI can be used to assist with the writing of malware (malicious software). The AI tool has, of course, not intentionally created a piece of malware, but this potential misuse of a nominally neutral system needs to be addressed.
Generative AI is very effective at streamlining certain work functions, such as producing advertising copy and marketing materials. And yet, the questions of ownership and culpability remain. From marketing to consumers (where companies will ultimately still be liable for ambiguous or defamatory language) to academic institutions (where the use of a language tool by a student to write part of their thesis calls into question the legitimacy of their conferred degree), clear regulatory and legal guidelines need to be given for the fair use of such tools.
Ultimately, we are still a long way from the types of existential threat posed by AI that are central to seminal works of science fiction, such as 2001: A Space Odyssey and The Terminator. And yet, even as AI technology advances towards Artificial General Intelligence, clear practices and codes of conduct are needed to ensure that risks are appropriately mitigated, such that AI transforms industries for the better. The new IDTechEx report, “AI Chips 2023-2033”, discusses opportunities across the three aforementioned industry verticals and others.
Report Coverage
IDTechEx forecasts that the global AI chips market will grow to US$257.6 billion by 2033. The report covers the global AI chips market across eight industry verticals, with granular 10-year forecasts in seven categories (such as by geography, chip architecture, and application).
In addition to the revenue forecasts for AI chips, costs at each stage of the supply chain (design, manufacture, assembly, test and packaging, and operation) are quantified for a leading-edge AI chip. Rigorous calculations and a customisable template for customer use are provided, along with analyses of the comparative costs of leading-edge and trailing-edge node chips.
IDTechEx’s latest report, “AI Chips 2023-2033”, addresses the major questions, challenges, and opportunities faced by the AI chip value chain, offering further understanding of the markets, players, technologies, opportunities, and challenges.