Is Generative AI a New Threat to Cybersecurity?

0

By Prashanth GJ, CEO, TechnoBind Solutions

In today’s era of technological advancements, artificial intelligence (AI) has emerged as a game-changer for businesses across various industries. One of the most promising and rapidly evolving branches of AI is generative AI. This innovative technology enables machines to create and generate new content, whether it’s images, music, text, or even entire virtual worlds. These AI models, fueled by deep learning techniques like Generative Adversarial Networks (GANs) and Transformers, have the potential to revolutionise various industries, from entertainment and design to healthcare and robotics. The potential benefits of generative

AI for businesses is vast, ranging from enhancing creativity and innovation to streamlining operations and customer engagement. One-third of annual McKinsey Global survey respondents say that they are using Gen AI tools in at least one business function. 40% of respondents say their organisations will increase their investment in AI overall because of advances in gen AI. The most commonly reported business functions using these newer tools are the same as those in which AI use is most common overall: marketing and sales, product and service development, and service operations, such as customer care and back-office support.

 

While generative AI has enormous potential to be utilised by organisations, this has also opened the floodgate of cyber threats and breaches against its users. 21% of the annual McKinsey Global survey respondents say their organisations have established policies governing employees’ use of gen AI technologies in their work. A recent report by cybersecurity firm Group-IB revealed that over 100,000 ChatGPT accounts have been compromised and their data is being illicitly traded on the dark web, with India alone accounting for 12,632 stolen credentials. Many companies have forbidden their employees from using any generative AI-powered bots. However, the percentage of Gen AI users worrying about AI’s cybersecurity concerns has reduced from last year’s 51% to 38% says McKinsey Global survey.

 

It is the unknown that has made users skeptical about readily utilising generative AI

Research by PA Consulting found that 69% of individuals are afraid of AI and 72% say they don’t know enough about AI to trust it. According to a survey among 200 enterprise security officials, a staggering 91% of companies reported experiencing API-related security issues in the past year. As organisations are looking forward to leveraging LLP APIs, their lack of trust and knowledge about generative AI and news about security breaches pose a challenge in readily adopting it. The open-source code in generative AI is considered a double-edged sword by many. While cost-effectiveness, transparency, and easy availability are a plus, open-source code also leaves users vulnerable to attacks.

 

OpenAI’s ethical policy prevents LLMs from aiding the threat actors with malicious information. However, the threat actors can bypass these restrictions using various malicious techniques, such as – jailbreaking, reverse psychology, prompt injection attacks and ChatGPT-4 model escaping. Apart from API and open-source threats, generative AI leaves room to create various other threats:

 

Deepfake Threats: One of the most prominent concerns stemming from generative AI is the rise of deepfake technology. Deepfakes utilize generative AI to manipulate and fabricate realistic videos or images that convincingly mimic real people or events. This can have severe consequences, such as political disinformation, impersonation, and reputational damage.

 

Phishing Attacks: Cybercriminals can exploit generative AI to enhance the sophistication of phishing attacks. By generating hyper-realistic emails, websites, or user interfaces, hackers can deceive individuals into revealing sensitive information or unknowingly downloading malware.

 

Malware Generation: Generative AI can be used to develop novel strains of malware that are harder to detect and eradicate. By continuously evolving their code and behavior, AI-powered malware can evade traditional security measures, potentially causing significant damage to computer networks and systems. Polymorphic malware is one such example of malicious software that continuously modifies its code to evade antivirus detection.

 

Automated Social Engineering: Generative AI can be leveraged to automate social engineering attacks, such as personalised spear-phishing campaigns. By analysing vast amounts of data, AI can craft persuasive messages that target specific individuals or groups, increasing the chances of success for cybercriminals.

 

Challenges in combating and mitigating these threats

Effective defense against generative AI threats requires access to vast amounts of training data to understand and detect malicious patterns. However, obtaining labelled data that covers the diverse landscape of potential attacks can be challenging due to privacy concerns and legal limitations.

Cybersecurity professionals face a continuous battle to keep up with the evolving sophistication of generative AI. As AI techniques progress, adversaries can quickly adapt and develop new attack vectors, necessitating constant vigilance and proactive measures to mitigate emerging threats.

Generative AI models are often regarded as black boxes, making it difficult to ascertain their decision-making process. When malicious content is generated, attributing responsibility to the perpetrators becomes challenging. This hampers effective countermeasures and legal actions.

As organizations strive to combat generative AI threats, they must navigate the delicate balance between security measures and privacy concerns. Mitigation efforts should avoid unnecessary invasions of privacy while still protecting individuals and organisations from potential harm.

These challenges can be mitigated using advanced detection techniques, collaboration between researchers, industry experts, and policymakers, and a robust legal framework. Ethical consideration along with bias and fairness are the foundation of building and utilising generative AI. Organisations currently seem to be mostly preoccupied with the cost-benefits and the strong support a generative AI provides. There is always a threat looming around the adoption of technologies that haven’t been tried and tested for loopholes. While some may argue that generative AI is an advantageous tool in combating cyber threats, the lack of knowledge about the tool and its possible misuse by threat actors should be a bigger concern.

 

Generative AI holds immense potential to revolutionise various industries and foster innovation. However, the challenges it presents, such as ethical concerns, bias, misuse, transparency, and human-AI collaboration, cannot be overlooked. As generative AI continues to advance, it is imperative for researchers, developers, policymakers, and society at large to work collaboratively to address these challenges, ensuring responsible and ethical use of this powerful technology. By prioritising advanced detection techniques, fostering collaboration, and establishing robust legal frameworks, we can protect against the misuse of generative AI and ensure a safer digital landscape for all. By harnessing the power of generative AI responsibly, businesses can unlock its immense potential and pave the way for a future driven by innovation and success.

LEAVE A REPLY

Please enter your comment!
Please enter your name here