Two years ago, ChatGPT burst onto the scene, ushering in a new era of artificial intelligence. The integration of AI technologies like ChatGPT has become both friend and foe in the ongoing battle to protect our interconnected world.
There’s no denying that ChatGPT and similar AI models have had a major impact on cybersecurity defenses, analyzing data and identifying patterns that would be easy to miss in a manual review.
However, the accessibility of AI tools has also lowered the barrier to entry for cybercriminals. Criminal hackers can now leverage ChatGPT to craft more convincing phishing emails, generate malicious code, and even create deepfakes for social engineering attacks.
Ransomware attacks, already a significant threat, have become more sophisticated with AI; the 2023 attack on ICBC’s U.S. arm is a notable example.
In addition, AI-powered tools like ChatGPT have made business email compromises (BEC) even more dangerous. These models can now automate – and in some cases mimic – executive writing styles, making fraudulent emails nearly indistinguishable from legitimate ones.
Voice cloning technology, powered by AI, has also added a new dimension to credential theft. “Vishing” attacks using deepfake voices of company executives pose a significant threat to even the most security-conscious organizations. In one prominent example, attackers used deepfake audio of the CEO of LastPass in an attempt to pressure an employee into complying with urgent, fraudulent requests.
Similarly, AI-generated videos can be used to dupe unsuspecting victims. The reach of these attacks now transcends language barriers: AI can generate flawless written, audio, or video content in any language, eliminating the telltale spelling and grammatical errors that once gave such attacks away.
Lead Security Awareness Advocate at KnowBe4
Responding to the AI cybersecurity challenge
Organizations are increasingly deploying AI features within security technologies to combat evolving threats; indeed, one would be hard-pressed to find a security vendor that doesn’t incorporate some level of AI in its offerings. Yet, despite their capabilities, ChatGPT and its peers are not a replacement for human expertise in cybersecurity, which will always be required to provide oversight and ensure that AI recommendations are accurate and contextually sound.
Furthermore, from a social engineering perspective in particular, trying to identify when an attack is AI-generated may be the wrong way to look at the challenge. Rather, one should look at where the attack is originating from, what it is requesting, and what kind of urgency is being emphasized. In doing so, people are more likely to be able to spot and defend against attacks regardless of whether they are AI-generated or not.
Likewise, in the broader context, fundamental cyber hygiene remains crucial. Employee training, strong access controls, patching, and incident response planning, among other practices, remain vital to building a secure organization.
What the future holds
Looking to the future, it’s clear that ChatGPT and other AI tools will continue to evolve and manifest in different ways. AI will eventually be as ubiquitous as internet search engines.
The ongoing development of AI will undoubtedly drive innovation in both offensive and defensive cybersecurity. Attackers will likely leverage its capabilities for more complex, multi-vector attacks, while defenders will use AI to identify and even predict threats, automate incident response, and serve as a trusted companion to security teams.
However, it’s crucial to remember that AI, including ChatGPT, is ultimately a tool – and like any tool, it can be wielded for both constructive and destructive purposes. The ethical use of AI in cybersecurity will become a paramount concern. To navigate this, we need three key elements:
Legislation
Smart legislation that keeps pace with technological advancements and balances innovation with security and privacy concerns will be critical as AI progresses. The EU AI Act, which entered into force in August 2024, regulates AI using a risk-based approach, classifying systems into four categories: unacceptable, high, limited, and minimal risk. High-risk AI (e.g., in healthcare or law enforcement) faces stringent requirements, while minimal-risk applications remain largely unregulated. Penalties for non-compliance can reach €35 million or 7% of global annual turnover.
The UK’s approach, however, is more flexible and focuses on five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than a single law, it lets existing regulators oversee AI within their sectors. This strategy aims to balance innovation with safety, positioning the UK as a tech-friendly hub.
Ethical frameworks
Robust ethical frameworks that guide the development and deployment of AI in cybersecurity will ensure it is used responsibly, helping to prevent bias, discrimination, and privacy violations while promoting transparency, fairness, and accountability. This is critical to building trust in AI systems, protecting human rights, and preventing harm as AI becomes more integrated into critical sectors like finance, healthcare, and law enforcement.
Education and awareness
Continuous education and awareness programs will be needed at every step of AI’s journey, helping cybersecurity professionals, policymakers, and the general public understand the implications of AI in the digital ecosystem. The more we see, hear, and read about the issues and challenges of AI, the better equipped we are to think critically, make sound decisions, and avoid over-reliance on AI systems.
By focusing on these areas, we can work towards a future where AI enhances the world’s collective cybersecurity posture without compromising our values or freedoms. And while it may not be easy, it is the essential path needed to allow AI to be an integral, yet managed, part of a safer digital world for all.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro