Generative AI tools: where to draw the line

By Mr Sanjeev Chhabra, Managing Director & CEO, Beetel Teletech Ltd

The origin of Artificial Intelligence (AI) can be traced to the 1950s, when John McCarthy presented his vision for machines capable of simulating ‘every aspect of learning or any other feature of intelligence’. Decades of experimentation produced many noteworthy systems, including ELIZA, an early natural-language conversation program; Deep Blue, the famed chess-playing computer; and Kismet, a robot capable of recognizing and simulating human emotions. OpenAI’s ChatGPT is the latest in this line of inventions to take the world, and the workplace, by storm. Its ability to respond to even the most complex prompts has found use cases across multiple domains, ushering in both optimism about its possibilities and fears about what could happen if things were to go wrong.
Scope of AI

Within the workplace, Generative AI has immense potential to reduce redundant work and free individuals to take on more complex tasks. It can also efficiently curate intelligence, generate insights and open up creative possibilities that enhance the quality of work done. However, these tools come with serious challenges that need to be considered and addressed. The violation of Intellectual Property Rights is among the foremost of these concerns, as many of the tools draw upon existing reference material to create output. Experts have also warned of more sinister consequences, including misinformation, cybersecurity threats and the loss of privacy.

Inconsistency in the factual accuracy of the information produced by Generative AI tools can lead to the spread of misinformation if users do not fact-check the output. Misinformation, by itself, can pose a significant threat to business stability and alter the public perception of organisations. Deepfakes and other synthetic media, which have proven useful in academic and entertainment contexts, can be harmful when not accompanied by disclaimers or credits. It is imperative that provisions for fact-checking be embedded into Generative AI tools, so that users are encouraged to take any information with a pinch of salt and conduct a round of research to verify the data generated.

Opacity around how these tools collect user data can be detrimental to both individual privacy and the confidentiality of company information. Because Generative AI systems are continually trained on new data to refine their results, data entered by users, or collected by the tool itself, could surface in later breaches. The leakage of confidential or sensitive company information to outside sources poses serious cybersecurity threats to organisations.

Stakeholders everywhere have started to recognize the need to regulate Generative AI tools. Measures ranging from bans to moderation to dialogue are being undertaken the world over. Italy, for instance, briefly banned ChatGPT over data privacy and storage concerns. The European Parliament is set to introduce a precedent-setting AI Act, which classifies tools into four risk categories: a) unacceptable risk, b) high risk, c) limited risk and d) minimal or no risk. A range of provisions is also being drawn up for each of these categories to prevent cybersecurity threats, misinformation and instability at large. The Government of India is yet to announce any specific regulation for Generative AI, although provisions to protect user safety could be on the anvil.

The potential for AI to advance into a tool capable of influencing outcomes, emotions, and actions is unlimited, as are the risks it poses to companies, governments, and civil society. Strengthening cybersecurity guardrails and putting in place provisions to verify the authenticity of generated information are critical for the future of human engagement with Generative AI. A mix of regulation, cybersecurity, and fact-checking protocols is key to letting users fearlessly harness the full potential of Generative AI and enrich the quality of their work.

To curb potential harms such as disinformation, intellectual property infringement, and privacy violations, NASSCOM released recommendations for the appropriate use of Generative AI technology in June 2023. The guidelines comprise best practices intended to help stakeholders reach a consensus on the responsibilities of those involved in AI development and use. NASSCOM has also published a Responsible AI Resource Kit to assist firms in implementing ethical AI practices.

Generative AI tools have great potential to boost workplace efficiency and creativity. However, the way they are used poses serious problems that must be addressed to maintain public trust and safety. As we continue to explore the possibilities of these tools, we must draw a line and develop specific standards for their proper use.

Efforts like the NASSCOM recommendations in India are an excellent starting point for building stakeholder consensus on the responsibilities of those involved in AI development and use. By working collaboratively, we can leverage the promise of Generative AI while reducing its possible risks.
