Balancing the Societal Risks and Rewards of Generative AI

Aila Danish - Sep 4 - Dev Community

When Alan Turing first published his theory of computation, he envisioned a future shaped by artificial intelligence (AI) and its many applications. Today, that future has arrived. As AI proliferates through every aspect of our lives, it is essential to consider the societal implications of these systems. From improving education to raising copyright concerns, generative AI has one of the most significant impacts on our generation and future ones. Turing laid crucial foundations for the development of artificial intelligence, and today we build upon the scaffolding he created.

Generative AI uses machine learning to construct neural networks that generate content such as text, images, or other forms of data visualisation (Nvidia, 2023). Because these networks loosely model the human brain, they can generate content in virtually unlimited ways and therefore serve a vast range of uses. Today, generative AI is applied in the majority of sectors, such as software engineering, education, law, and business. As its implementation multiplies, obstacles within its use arise, ranging from plagiarism to impersonation (Dhoni, 2023).

AI in Education

Students everywhere use ChatGPT and other generative AIs for learning, tutoring, and brainstorming. This worldwide use raises many ethical questions about giving students access to such a vast resource, and the rewards and risks involved could alter education entirely.

In 1984, educational psychologist Benjamin Bloom introduced his ‘two sigma problem’. He compared various methods of teaching and monitored their effectiveness; his findings showed that pupils who received one-to-one tuition performed two standard deviations better than pupils in the average school environment (Bloom, 1984).
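To make the size of that effect concrete: if test scores are roughly normally distributed (an assumption for illustration, not a claim from Bloom's paper), a student scoring two standard deviations above the classroom mean outperforms about 98% of conventionally taught peers. A minimal sketch of that calculation:

```python
import math

def normal_cdf(z: float) -> float:
    """Cumulative probability that a standard normal variable falls below z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Bloom's tutored students scored ~2 standard deviations above the
# conventional-classroom mean; under a normal distribution, that places
# them above roughly 98% of conventionally taught peers.
percentile = normal_cdf(2.0) * 100
print(f"{percentile:.1f}")  # prints 97.7
```

This is why the "two sigma" result is so striking: it is not a marginal improvement but a shift of the typical tutored student to near the top of an untutored class.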

After Bloom reviewed his findings, he brainstormed ideas to make one-to-one tuition accessible to all pupils, and today, generative AI may be the answer he was looking for. AI can not only communicate concisely and clearly but also act as a form of tuition. ChatGPT specifically can teach concepts to students as many times as needed, at varying levels of complexity, catering to individual pupils' needs and academic abilities (Kadaruddin, 2023). The AI chatbot can also generate endless practice questions on a specific topic and mark students' work. Additionally, its accessibility allows students from lower socioeconomic backgrounds to use this resource with just an internet connection. This is groundbreaking in education and increases efficiency for both pupils and teachers. Teachers can use AI to generate content and lesson plans and to interpret assessment data to find areas of weakness in specific students.

Technology and Software Engineering

Coding is a fundamental skill that underpins today's technology. In fields such as software engineering, it is vital to be knowledgeable about a wide range of programming languages in order to switch between, migrate, and reuse code. Generative AI allows for seamless code migration from one programming language to another (Dhoni, 2023) and can generate lines of code that would otherwise be tedious and time-consuming for developers to write manually. These AI tools can identify errors in code, respond to developers' requests, and act as extensions for IDEs (integrated development environments) (IBM, 2023). AI can also be at the forefront of innovation, especially within software engineering: the generation and optimisation of algorithm designs can further development within the field. However, developers need to be transparent about when they use AI in their work, as undisclosed use can lead to mistrust and issues surrounding intellectual property.

Additionally, whilst ChatGPT has the potential to generate code, its output quality may vary. Some IDE extensions produce higher-quality code because they were trained on large repositories of successful code; ChatGPT, by contrast, uses NLP (natural language processing) to interpret prompts and attempt to generate code in languages such as Python and JavaScript (Banu, 2023).

AI also provides endless pathways for new technological roles, generating jobs and income for thousands as many companies compete to dominate the AI market. These jobs include AI researchers, robotics engineers, and machine learning engineers; their roles require extensive knowledge of data analytics and logic and are less programming-oriented (Morgan McKinley, 2023). The emergence of these jobs provides security for many families (as many of these tasks can be done remotely) and opens up opportunities for those living in developing and emerging countries, furthering their economies and improving their quality of life.

Intellectual Property

The widespread access to GenAI, such as ChatGPT, has raised concerns surrounding copyright infringement and intellectual property rights. Laws regarding the copyright of AI-generated content vary by country, making it challenging to regulate online discourse (Nvidia, 2023). Although some social media platforms advise users to disclose whether an image or sound is AI-generated, disclosure is still not mandatory (Dhoni, 2023). Platforms such as TikTok may automatically add a label disclosing to viewers that the content they are watching is AI-generated; however, this relies on manual input and does not apply to all types of content. Another significant concern is the ease with which AI companies can absorb code posted on public forums or accidentally pasted into a public AI tool, leading companies to advise engineers to refrain from inputting their code into any unapproved AI for testing or bug detection.

Moreover, ChatGPT can generate content from the perspective of a real person, such as a celebrity or politician. This has resulted in controversial deepfakes of celebrities and created a fear of defamation among the public. This fear has increased scrutiny of AI-generated content, leading many to advocate for stricter regulations (Elysse, 2023). As AI technology continues to evolve, it is crucial to design a legal framework that preserves intellectual property rights whilst ensuring the advantages of AI are not inhibited.

Cybersecurity

Maanak Gupta describes generative AI as a ‘double-edged sword in cybersecurity’ because it can be used both to attack and to defend (Gupta, 2023). Hackers may use ChatGPT to generate more convincing phishing emails; methods such as these can lead to malware being downloaded onto the victim's device, along with viruses, spyware, and worms. Alternatively, white-hat hackers (hackers of good intent) use programs such as PentestGPT (an AI built upon ChatGPT) to probe a system's weaknesses, allowing for security improvements and increased performance.

Machine learning systems are highly accurate at identifying patterns in vast amounts of data and can make predictions even under considerable uncertainty. In cybersecurity, this translates to defenders significantly raising the bar for attackers through remarkably improved threat detection. One application is intrusion detection systems, which process large amounts of network activity data to establish a baseline of ‘normal behaviour’. If even slight anomalies are found, the system flags them as potential threats. This is effective because it requires attackers not only to avoid the obvious red flags a human defender would catch, but also to imitate legitimate usage of the system at a granular level (Hoffman, 2021).
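As a simplified illustration of this baseline-and-anomaly idea (not any particular vendor's system), the sketch below models ‘normal behaviour’ as the mean and standard deviation of historical request rates and flags observations that deviate by more than three standard deviations. The sample data and the threshold are illustrative assumptions:

```python
import statistics

def build_baseline(samples):
    """Learn 'normal behaviour' from historical request rates."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(rate, mean, stdev, threshold=3.0):
    """Flag rates more than `threshold` standard deviations from the baseline."""
    if stdev == 0:
        return rate != mean
    z = abs(rate - mean) / stdev
    return z > threshold

# Illustrative data: requests per minute observed during normal operation.
normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
mean, stdev = build_baseline(normal_traffic)

print(is_anomalous(100, mean, stdev))  # typical load -> False
print(is_anomalous(450, mean, stdev))  # sudden spike -> True
```

Real intrusion detection systems learn far richer baselines (per-user, per-endpoint, time-of-day), but the principle is the same: an attacker must stay statistically indistinguishable from legitimate traffic to go unnoticed.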

Defenders can also use ChatGPT to generate threat reports on incidents and create assessments that reveal a system's vulnerabilities. These comprehensive reports can even suggest how to mitigate future attacks and potential threats, saving companies time and resources. Today, cybersecurity services already utilise machine learning and AI to track and recognise malicious code, exposing malware within a system. This allows for a seamless, safe user experience and greatly benefits organisations that handle private data.

Hallucination

‘Hallucination’ refers to a phenomenon whereby AI models produce responses that are nonsensical or contain errors that conflict with real-world facts (Ji, 2023). This is common in large-scale models such as ChatGPT’s GPT-3 and can cause a variety of misinformation. In data analysis, ChatGPT may identify nonexistent trends based on made-up, or ‘hallucinated’, data. The same issue applies to image generation, where AI models generate objects or lifeforms that do not align with reality. This leads to another significant issue involving bias and unwanted outcomes: an AI model can only output knowledge derived from its training data, so if that data is biased (e.g. it pushes a political agenda, spreads hate towards a group of people, or contains misinformation about a party), the output will also be biased. The widespread distribution of such harmful information can sway voting polls and potentially destroy reputations (Elysse, 2023).
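The "biased data in, biased output out" mechanism can be seen even in a toy language model. The sketch below (a deliberately tiny bigram model, not how GPT-3 actually works internally) is trained on a skewed corpus and, having seen nothing else, can only reproduce that skew; the corpus and phrasing are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Greedy generation: emit the most frequent continuation seen in training."""
    return model[word].most_common(1)[0][0]

# Illustrative, deliberately skewed training data: the model has only ever
# seen negative statements about "party X", so that is all it can emit.
biased_corpus = "party X is corrupt . party X is corrupt . party X is dishonest"
model = train_bigrams(biased_corpus)
print(most_likely_next(model, "is"))  # prints "corrupt"
```

Large models learn vastly more nuanced statistics, but the underlying constraint is identical: the model has no source of truth beyond the distribution of its training data.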

Environmental Impact

The most overlooked risk of generative AI is its astounding environmental impact, a concern which stretches far beyond the use of artificial intelligence itself. As our demand for ever-growing AI platforms increases, the usage and cost of these platforms skyrocket. Consequently, more GPU cores are required to meet demand, as ever-larger amounts of data require ever-larger amounts of processing power. This steep increase in energy consumption puts strain on non-renewable sources of electricity generation, resulting in colossal amounts of fossil fuels being burned and converted to carbon emissions (An, 2023): emissions that contribute to deforestation, desertification, acid rain, and a multitude of other environmental detriments. Unfortunately, this consequence is treated as an afterthought within companies, as evidenced by the lack of public awareness.

Ultimately, the implementation of AI in society has been revolutionary across many fields, furthering education and opening up employment opportunities. GenAI has brought remarkable advances alongside unprecedented risks: models such as ChatGPT offer extraordinary capabilities across industries, yet they also present cybersecurity threats, misleading information, and immense potential for misuse. Ethical guidelines and continuous monitoring are crucial to harnessing the benefits of generative AI while mitigating its pitfalls.

References:
(Nvidia, 2023)
Nvidia. (n.d.). Generative AI – What is it and How Does it Work? Retrieved from https://www.nvidia.com/en-us/glossary/data-science/generative-ai/
(Dhoni, 2023)
Dhoni, P. (2023). Unleashing the Potential: Overcoming Hurdles and Embracing Generative AI in IT Workplaces: Advantages, Guidelines, and Policies. Authorea Preprints.
(Bloom, 1984)
Bloom, B. S. (1984). The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring. Educational Researcher. https://doi.org/10.3102/0013189X013006004
(Kadaruddin, 2023)
Kadaruddin, K. (2023). Empowering Education through Generative AI: Innovative Instructional Strategies for Tomorrow's Learners. International Journal of Business, Law, and Education, 4(2), 618-625.
(IBM, 2023)
IBM Education. (2023, September 19). AI code-generation software: What it is and how it works. IBM Blog. https://www.ibm.com/blog/ai-code-generation/
(Banu, 2023)
Banu, S. (2023). Exploring CHATGPT’s coding capabilities: Can it write code? Retrieved from https://ambcrypto.com/blog/exploring-chatgpts-coding-capabilities-can-it-write-code/
(Elysse, 2023)
Bell, E. (2023, December 26). Generative AI: How It Works, History, and Pros and Cons. Investopedia. https://www.investopedia.com/generative-ai-7497939
(An, 2023)
An, J., Ding, W., & Lin, C. (2023). ChatGPT: Tackle the growing carbon footprint of generative AI. https://doi.org/10.1038/d41586-023-00843-2
(Gupta, 2023)
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy. IEEE Access.
(Ji, 2023)
Ji, Z., Lee, N., et al. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12), 1–38.
(Morgan McKinley, 2023)
Morgan McKinley Recruitment. (2023, June 30). The rise of artificial intelligence jobs in the Tech Industry. https://www.morganmckinley.com/article/rise-artificial-intelligence-jobs-in-tech-industry
(Hoffman, 2021)
Hoffman, W. (2021). Making AI Work for Cyber Defense. Center for Security and Emerging Technology.
