It is common to say that in the world of cybersecurity there are two certainties: either we have already suffered a security breach or we are simply not yet aware of having suffered one; and the human element is the weakest link in the chain.
If there is one thing we have learned in recent years from rapid technological advancement, COVID and shifting geopolitics, it is that cyber security is here to stay, and that it is becoming a lucrative industry both for those who exploit it and for those who defend against it.
In its 2022 annual report on the state of the cybersecurity threat landscape, the European Union Agency for Cybersecurity (ENISA) ranked the top observed threats and major trends, placing ransomware first, followed by other malware and social engineering threats. This top three was followed by threats against data, threats against availability such as denial of service and other internet threats, disinformation, and, last but not least, supply-chain attacks. To this we can add several other industry reports, all confirming that reported ransomware attacks have risen by 220% year on year, while 74% of all breaches involve the human element, with people contributing through error, privilege misuse, use of stolen credentials or social engineering.
From the wilderness to our own back yard
Cyberattacks have become a constant menace for companies across the tech landscape, regardless of their size or prominence. As the digitalisation of operations and services accelerates, malicious actors have grown equally adept at exploiting vulnerabilities and misconfigurations.
Advanced persistent threats (APTs), ransomware attacks, and sophisticated phishing schemes have evolved to bypass traditional security measures, making it crucial for companies to stay one step ahead in their defensive strategies.
As security specialists, we know the importance of understanding a company's present context and predicting, as far as possible, its future, and this is where horizon scanning and threat intelligence play a vital part. This intelligence allows us to respond quickly to zero-day vulnerabilities, adapt to new attack vectors and strategies, and continually refine our approach to strengthening our security posture.
Based on what we are observing, there are four main topics we keep closely on our radar: DDoS, ransomware, the use of AI, and social engineering.
DDoS Attacks – The threat against availability
The aim of a Distributed Denial of Service (DDoS) attack is to disrupt websites, making them unavailable to users by overwhelming them with more traffic than they can handle.
Although DDoS attacks are simple by definition, these threats continue to evolve in both type and strategy, and Cloudflare reports a recent increase in tailored, sophisticated, and persistent attacks.
Offenders continue to leverage zero-day vulnerabilities in their attacks, targeting higher-level systems such as DNS providers (DNS floods), but they have also adopted deliberate strategies to overcome systems' mitigations: imitating browser behaviour and introducing randomness into various properties (such as HTTP headers), making it harder to pinpoint patterns and techniques. A lower rate of consecutive requests has also been observed, designed to bypass common protections like rate limiting while still impacting the infrastructure underneath.
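Randomised traffic makes pattern-matching harder, but a per-client token bucket still caps the sustained request rate regardless of what the requests look like. A minimal sketch of the idea in Python (the class and parameter names are illustrative, not taken from any particular product):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts but caps the
    sustained request rate for a single client."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per client (e.g. per IP); a flood drains it immediately,
# so only roughly the burst size gets through in a tight loop.
bucket = TokenBucket(rate_per_sec=10, burst=5)
results = [bucket.allow() for _ in range(20)]
print(results.count(True))
```

In practice this sits behind a reverse proxy keyed on the client identity; the randomised-header tricks described above do not help an attacker here, which is exactly why attackers instead slow their request rate down to stay under such limits.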
Ransomware – Holding data hostage

These attacks usually have two purposes: causing denial of service in the affected systems by encrypting data to make it unavailable to the organisation, and data exfiltration, which, depending on the type of data, can cause severe problems for the organisation.
After successfully compromising the systems, the attackers request a ransom payment to decrypt the information or to keep it private (mid-year ransom payments increased from about $300M in 2022 to about $450M in 2023). If their demands are not met, they may make the data public or leave the files on the victims' infrastructure encrypted, disabling the systems that consume this information.
Ransomware attacks are becoming commonplace, usually perpetrated by criminal gangs, some even linked to state-sponsored groups; the most recent wave, perpetrated by Cl0p, has already affected at least 122 organisations directly and many more indirectly. For some of them, the leaked data is already available for download on dark-web sites. Ransomware attacks usually succeed through unpatched security vulnerabilities, system misconfigurations or social engineering attacks that trick users into installing malware, which later obtains access credentials to the company's critical systems.
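On the defensive side, one behavioural signal often monitored is a sudden burst of high-entropy file writes, since mass encryption is ransomware's defining action. A toy sketch of the underlying measurement, using Shannon entropy over byte frequencies (the 7.5 bits-per-byte threshold is an illustrative assumption; real products combine many more signals):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: natural-language text sits
    around 4-5, while encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Heuristic: ciphertext is statistically close to uniform random bytes.
    return shannon_entropy(data) > threshold

plain = b"The quick brown fox jumps over the lazy dog. " * 100
random_like = bytes(range(256)) * 100   # stand-in for ciphertext

print(looks_encrypted(plain))        # False: ordinary English text
print(looks_encrypted(random_like))  # True: uniform byte distribution
```

A monitoring agent applying this check to files rewritten in quick succession can raise an alert within seconds of encryption starting, long before a ransom note appears.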
Social engineering – Hacking the human
Since the beginning of humankind, there have always been outliers ready to manipulate people for their own benefit. While that might once have meant tricking someone out of basic items of survival, in the modern world it takes the form of cyberattacks, with much higher stakes.
Usually, the attacks begin by manipulating an organisation's employees or processes to gain a foothold in the company. Unlike traditional hacking methods that focus on exploiting technical vulnerabilities, social engineering preys on the innate human desire to trust and assist others, making it a growing threat.
The astonishing efficacy and versatility of attacks and attackers make protecting against social engineering a challenge for all organisations. Depending on the attacker's determination and the information available to them, the types of social engineering you can expect include the following:
- Phishing, vishing and smishing: These are among the most prevalent and dangerous forms of social engineering. Cybercriminals craft deceptive emails, messages, or websites that appear legitimate to trick users into revealing their login credentials, financial information, or other sensitive data. These attacks often use urgent or enticing language to evoke emotional responses, pressuring victims into acting impulsively.
- Impersonation and CEO fraud: A subset of phishing that targets employees with access to financial or sensitive information. The attacker pretends to be a high-ranking executive or trusted authority figure, instructing the victim to transfer funds or provide confidential data. These attacks rely on urgency and authority to bypass normal verification processes.
- Pretexting: It involves creating a fabricated scenario or pretext to extract information from an individual. Malicious actors may impersonate colleagues, suppliers, or authority figures to manipulate victims into sharing sensitive data or performing specific actions. These attackers skilfully weave a plausible narrative, leading the victim to believe they are acting in good faith.
- Baiting: It capitalises on human curiosity and temptation. Attackers offer something enticing, such as a free download, music, or movie, containing malicious software. Unsuspecting users are lured into downloading the bait, unknowingly infecting their systems with malware, granting unauthorised access to the attacker.
- Insider threats: Social engineering can also exploit insiders within organisations. Malicious actors may manipulate employees with access to critical systems or data to carry out malicious activities on their behalf. This could involve an employee willingly or unknowingly aiding the attacker, circumventing established security protocols.
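Several of the cues above, urgent language, authority claims, mismatched sender identities, can be turned into crude triage heuristics. A deliberately naive sketch (the keyword list, scoring weights, and brand check are illustrative assumptions, not drawn from any real filter):

```python
import re

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Crude heuristic score: higher means more suspicious."""
    score = 0
    text = (subject + " " + body).lower()
    # Urgent or threatening language pressuring quick action.
    score += sum(2 for w in URGENCY_WORDS if w in text)
    # Message talks about a brand, but the sender's domain doesn't match it.
    m = re.search(r"@([\w.-]+)", sender)
    if m and "paypal" in text and "paypal.com" not in m.group(1).lower():
        score += 5
    # Links pointing at a bare IP address instead of a named host.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 5
    return score

suspicious = phishing_score(
    sender="security@paypa1-support.example",
    subject="Urgent: account suspended",
    body="Your PayPal account is suspended. Verify immediately at "
         "http://192.168.4.7/login before access expires.",
)
benign = phishing_score("friend@example.com", "Lunch", "See you at noon")
print(suspicious, benign)  # the crafted mail scores far above the benign one
```

Real mail filters replace these hand-written rules with trained models, but the signals they weigh are the same ones users should be taught to notice.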
The increasing sophistication of attacks makes it considerably harder to stay safe, or even to cover in training all the ways people can be attacked, especially when those methods exploit personal habits. The use of mobile devices, QR codes, chat applications and especially social media platforms offers ever more opportunities to find victims and gather enough data for identity theft, credential stuffing, or tailored phishing attempts.
Artificial Intelligence – A new tool for old tricks

Artificial Intelligence (AI) has emerged as a transformative technology, revolutionising various industries – including the one dominated by malicious actors. Capitalising on AI's capabilities means using it to fast-track sophisticated cyberattacks, achieve wider impact due to 24/7 availability, and more easily bypass the target's existing security measures.
Generative AI tools will play a fundamental role in the evolution of cyberattacks. As has already been demonstrated with the advancement of ChatGPT and the threat of prompt injection, they can be used to generate malicious requests in their various forms. These tools will be adopted by a broader group of individuals, including those with too little knowledge or expertise to carry out such attacks on their own. Even in advanced cases, these tools expedite the research needed to learn about a system, technology, or property, meaning attackers will adapt more quickly and produce workarounds for whatever defences companies implement to prevent such attacks.
This paradigm shift creates an entirely new dimension of complexity in defending against cyberattacks, requiring teams to be faster and more agile than ever before. While the picture is not yet clear, it will be crucial to explore how these tools can be used on the defensive side, both to help automated systems recognise attack patterns and to help organisations learn more quickly about technologies, methodologies, and security patterns in general. It is intriguing to consider that, in systems that rely on AI/ML models, this could easily become a battle of AI against AI, in which the better-prepared engineers emerge as winners.
Let's explore the dangers of AI in cybersecurity and delve into how various types of malicious actors can leverage this technology for their nefarious purposes:
AI-Driven Threat Landscape: The ability to process vast amounts of data, analyse patterns, and learn from experience has enabled significant advances in cybersecurity, supporting threat detection, vulnerability assessment, and incident response and boosting organisations' security posture. The same capabilities become a double-edged sword in the hands of malicious actors.
Adversarial AI: It is a technique where AI models are manipulated or "tricked" to produce incorrect results. Cybercriminals can exploit vulnerabilities in AI algorithms and launch adversarial attacks to evade detection. For example, they can create malicious content (e.g., images, documents) that appear normal to human eyes but confuse AI systems, allowing malware to bypass security measures.
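The idea can be demonstrated on a toy linear detector: nudge each input feature a small step in the direction that lowers the model's score, and a correctly flagged sample slips under the decision threshold while changing very little. A minimal FGSM-style sketch in plain Python (the weights and sample values are made up for illustration):

```python
# Toy linear "malware detector": score > 0 means the input is flagged.
weights = [0.9, -0.4, 0.7, -0.2]   # made-up model weights
bias = -0.5

def score(features):
    return sum(w * f for w, f in zip(weights, features)) + bias

sample = [1.0, 0.2, 0.8, 0.1]      # a sample the detector correctly flags
print(score(sample))               # positive -> flagged as malicious

# FGSM-style evasion: step each feature by epsilon against the gradient
# of the score. For a linear model the gradient is simply the weights,
# so we subtract epsilon times the sign of each weight.
epsilon = 0.4
adversarial = [f - epsilon * (1 if w > 0 else -1)
               for f, w in zip(sample, weights)]
print(score(adversarial))          # negative -> evades the detector
```

Each feature moved by at most 0.4, yet the classification flipped; against deep models the same principle applies, with the gradient obtained by backpropagation rather than read directly from the weights.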
AI-Enhanced Social Engineering: Phishing attacks were already a constant threat in the cybersecurity realm, and AI helps cybercriminals create more convincing and personalised phishing emails. AI can analyse massive datasets about potential victims, crafting tailored messages that increase the chances of success. Online platforms are not safe either, as AI-powered chatbots can convincingly impersonate humans, leading to more successful social engineering attempts. Moreover, AI can analyse social media data to gather intelligence on potential targets, making these attacks even more precise and effective, and can automate responses to potential victims, making them more efficient and harder to identify. Some tools can even generate false documentation that is impossible to distinguish from the genuine article for the humans responsible for analysing and triaging those documents.
Automated Cyber Attacks: AI can be used to automate cyberattacks on an unprecedented scale. With the use of AI, attackers can achieve more persistence and control over the bots. A fully AI-based DDoS attack removes the human element from the equation, which means the source of the attack will be harder to trace, the attack will run around the clock by automating repetitive tasks, and it will have an almost non-existent error rate. The most dangerous part of an AI-based DDoS attack is its capability to speed up decision-making and adapt the attack approach to the predictable defence strategies applied by the target, making it harder overall for targets to defend themselves using traditional security measures.
AI-Generated Malware: This is sophisticated malware created by AI that can mutate and evolve to avoid detection by traditional antivirus software and become hard for analysts to detect. AI-powered malware can analyse security measures in real time, adapting its behaviour to bypass existing defences. This "intelligent" malware poses a severe threat to organisations, as it can remain undetected for extended periods, causing significant damage. Furthermore, it is worth noting that these artifacts can now be generated by people with little to no technical knowledge, which is concerning because it indicates the threat landscape is likely to grow in both quantity and quality.
Where is the silver lining?
While the main goal of tech advancement is to make life better, save time or generate more revenue, it is a double-edged sword that also aids adversaries. So, what can we do as organisations, and especially as security teams, to protect the business while enabling it to achieve its objectives? The well-known formula of technology, process, and people gives you a good starting point for your layered defence.
There are plenty of frameworks indicating what to secure and how to secure your data and infrastructure. While all of them are important, you might get the biggest wins by thoroughly segmenting your network, sorting out your data-leak prevention and email security, and getting identity and access management right. The latter can be the hardest to remediate if not done well, and it is crucial in its role as a preventive measure.
Practising secure architecture is ideal but can be hard to achieve with a resistant culture and low senior-leadership support. Two processes will make a significant difference to your security posture: first, governing the way your staff access corporate assets and, second, how well you respond to incidents when they occur. However, these two processes will only work if there is enough visibility over the company's infrastructure changes and exposure, and the right ownership of those assets. Incident response capabilities establish response plans outlining the steps to take in the event of a security breach, data leak, or other security-related incident. The goal is to minimise damage, restore services, and learn from the incident to prevent similar ones in the future.
The people factor is probably the hardest part to tackle, especially if the change is to be durable. The major success factors in behaviour change are understanding the sources of human error and designing to support human nature rather than work against it. This means focusing your actions on reviewing and designing controls that work for people, making the right choice easy and the wrong one hard. It also means understanding and measuring your culture and tailoring the educational programme so it fits the corporate culture and the organisation as a whole.
Whether facing AI or just regular tech advancement, security professionals must strike a delicate balance between keeping products secure, agile, productive, available, and functional while enhancing defence mechanisms against threats.
The convergence of AI, DDoS attacks, and social engineering poses unprecedented risks to our interconnected world, putting even greater emphasis on the necessity of staying ahead of these dangers. Embracing a security-first mindset, investing in robust cybersecurity measures, and educating our people will fortify our defences and protect us against adversaries. Remember that every individual plays a crucial role in this collective endeavour to safeguard our digital ecosystem, and by doing so, we secure the pathway to a brighter, safer tomorrow.