
AI and Cyber Security: Navigating the Evolving Landscape
Artificial Intelligence (AI) is rapidly transforming the cyber security landscape, reshaping not only how organisations detect and respond to threats but also how professionals manage their day-to-day responsibilities.
From security operations to governance, risk, and compliance (GRC), the influence of AI is being felt across all levels of the cyber workforce.
This blog explores how AI is reshaping the industry, the skills professionals need to thrive, and where the human touch remains irreplaceable.
AI is Changing Day-to-Day Roles, But Not Replacing Them
For cyber security professionals, particularly in Security Operations Centres (SOCs) or GRC teams, AI is removing much of the manual, repetitive work that previously consumed hours each day. This shift is being driven by tools like Microsoft Sentinel, Splunk Enterprise Security, and CrowdStrike Falcon, which use machine learning to automate log analysis, triage alerts, and surface actionable risks.
As a result, roles are evolving. Instead of combing through logs or chasing false positives, analysts are increasingly focused on interpreting the output of AI systems, conducting deeper investigations, and making strategic recommendations. In the GRC space, professionals are using AI to streamline compliance assessments, cross-map controls, and generate reports, freeing them to focus on risk leadership and stakeholder engagement.
This pivot from operational execution to strategic oversight is one of the most profound shifts AI is ushering in.
AI Strengthens Threat Detection, But Has Limits
AI's ability to parse enormous datasets at speed makes it ideal for detecting threats at scale. By analysing behavioural patterns, network traffic, and historical indicators of compromise, AI can identify risks far faster than human teams alone. According to IBM's 2024 "Cost of a Data Breach" report, organisations that used security AI and automation extensively in prevention saved an average of USD 2.2 million per breach compared with those that didn't.
However, there are well-documented limitations. AI models are generally trained on known threats and historical data. They can struggle with zero-day attacks, novel malware, or advanced social engineering tactics designed to slip past algorithmic defences. Attackers are also actively engineering exploits that evade AI detection, a practice known as adversarial AI.
AI also lacks contextual understanding and intent analysis. While it might flag suspicious behaviour, it cannot always discern whether that behaviour is malicious or benign without human input. For example, a legitimate employee accessing an unusual system after hours could be flagged as a threat, a false positive best resolved through human judgement.
This reinforces the value of human-AI collaboration: AI for speed and scale, humans for nuance and strategy.
What Tasks Will AI Automate in Cyber Security?
Tasks ripe for automation include:
- Log and event analysis: AI can scan millions of events in real time, filtering out noise and identifying key patterns.
- Phishing detection: Machine learning models can analyse emails for linguistic cues and anomalous behaviour.
- Vulnerability prioritisation: AI can assess patching requirements based on asset criticality and exploit availability.
- Incident triage: Basic alert response and classification are increasingly handled by AI-enabled SOAR (Security Orchestration, Automation and Response) platforms; a simple alert-scoring sketch follows this list.
- Compliance checks: AI is being used to automate audit trails, regulatory mapping, and policy enforcement.
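To make the triage and noise-filtering idea concrete, below is a deliberately minimal sketch of how a machine learning model might score incoming alerts so that obvious noise is auto-closed and likely incidents are escalated. Everything in it is illustrative: the features, thresholds, and synthetic data are invented, and the scikit-learn classifier simply stands in for whatever model a real SOC platform would use.

```python
# Minimal illustration of ML-assisted alert triage (not any vendor's actual pipeline).
# Assumption: historical alerts have been reduced to numeric features and carry
# analyst-assigned labels (1 = true positive, 0 = noise).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic "historical" alerts: columns = [events per minute, failed logins, MB sent out]
X_hist = rng.normal(loc=[50, 2, 5], scale=[20, 2, 3], size=(500, 3)).clip(min=0)
y_hist = (X_hist[:, 1] + X_hist[:, 2] / 2 + rng.normal(0, 1, 500) > 6).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_hist, y_hist)

# Score today's alerts and route them: low scores are auto-closed as noise,
# high scores go straight to a human analyst, the rest are queued for review.
new_alerts = rng.normal(loc=[50, 2, 5], scale=[20, 2, 3], size=(10, 3)).clip(min=0)
scores = model.predict_proba(new_alerts)[:, 1]

for i, score in enumerate(scores):
    if score < 0.2:
        print(f"Alert {i}: score {score:.2f} -> auto-close (likely noise)")
    elif score > 0.8:
        print(f"Alert {i}: score {score:.2f} -> escalate to analyst")
    else:
        print(f"Alert {i}: score {score:.2f} -> queue for routine review")
```

In practice the labels would come from analysts' past dispositions, and the thresholds would be tuned to the organisation's risk appetite rather than hard-coded.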
Yet AI is not about eliminating jobs. It’s about elevating them. Professionals remain essential for roles involving:
- Regulatory interpretation
- Threat hunting
- Architecture design
- Business engagement
- Crisis response
Even as organisations increasingly adopt AI in cyber security, the majority of major adopters still report needing more skilled professionals, not fewer.
Skills Cyber Security Professionals Need in the Age of AI
To thrive, cyber security professionals must develop a dual focus: deepening their human skills while gaining fluency in AI-enabled tools.
Key Human Skills:
- Critical thinking: The ability to question AI outputs, investigate anomalies, and challenge assumptions.
- Communication: Translating complex risk insights into language stakeholders can understand.
- Ethical reasoning: Making value-based decisions when AI outputs conflict with company culture or regulation.
- Collaboration: Working cross-functionally across tech, legal, risk, and executive teams.
AI Literacy:
- Understanding how AI algorithms work (e.g., supervised vs unsupervised learning; a short example follows this list).
- Knowing where AI excels and where it falls short.
- Using AI tools as part of an integrated cyber defence strategy.
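For readers newer to the terminology, here is a toy contrast between the two approaches using scikit-learn. The login features, labels, and numbers are entirely made up for illustration; the point is only that supervised learning needs labelled examples, while unsupervised learning finds structure on its own.

```python
# Supervised vs unsupervised learning on the same toy "login behaviour" data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Features: [login hour, number of systems accessed]
normal = rng.normal(loc=[10, 3], scale=[2, 1], size=(100, 2))
unusual = rng.normal(loc=[2, 12], scale=[2, 2], size=(100, 2))
X = np.vstack([normal, unusual])

# Supervised: labels exist (0 = normal, 1 = unusual), so train a classifier.
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised label for a 3am, 15-system session:", clf.predict([[3, 15]])[0])

# Unsupervised: no labels, so let the algorithm group the sessions itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster for the same session:", km.predict([[3, 15]])[0])
```

Phishing classifiers are typically built the supervised way, from labelled examples, while behavioural anomaly detection often leans on unsupervised techniques because labelled attack data is scarce.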
Courses such as Microsoft’s AI-900 and the ISC2 AI in Cyber Security certificate are strong starting points. Even free platforms like Coursera and edX offer accessible, industry-relevant AI learning paths.
Ethical Considerations in AI-Driven Cyber Security
With great automation comes great responsibility. AI raises challenging ethical questions:
- Accountability: If an AI tool misclassifies a threat or enables a breach, who is liable?
- Bias: AI models trained on biased data may make unfair decisions, potentially flagging certain users or types of activity disproportionately.
- Transparency: Black-box AI models that make decisions without explainability can undermine trust.
Organisations need clear policies around AI governance, including:
- Human-in-the-loop oversight for all critical decisions
- Transparent documentation of AI model training and assumptions
- Regular audits for fairness, accuracy, and compliance
The OECD AI Principles provide internationally recognised guidance on ethical AI use. Locally, the Office of the Australian Information Commissioner (OAIC) has released guidance on how the Privacy Act 1988 applies to AI decision-making.
Future-Proofing a Career in Cyber Security
The best way to prepare for an AI-shaped future is to stay adaptable. That means:
- Continuous learning: Enrol in AI-focused short courses or micro-credentials.
- Stay industry aware: Subscribe to AI and cyber threat intel from leading bodies like ACSC, ISACA, and SANS.
- Work on cross-functional projects: Exposure to areas like legal, audit, and compliance builds broader skills.
- Join professional networks: Stay connected through groups like AISA, ISC2, or AWSN.
Above all, focus on becoming a translator between technology and business. That will always be a high-value role, regardless of how advanced automation becomes.
Deep Dive: The Evolution of Cyber Threats and AI’s Response
To understand why AI is so transformative in cyber security, it's important to recognise how cyber threats have evolved. Threat actors have grown more sophisticated, leveraging automation themselves to conduct widespread phishing attacks, ransomware campaigns, and even AI-generated social engineering scams. According to the Australian Cyber Security Centre (ACSC), cybercrime reports fell by 7% in the 2023-24 financial year, yet a report was still made approximately every six minutes, a cadence unchanged from the previous year.
This sustained threat cadence, combined with growing attack complexity, means traditional human-centred defence models alone are no longer sufficient. AI's capacity to scale, learn, and adapt to emerging attack vectors is an essential element for maintaining security in a hyperconnected world.
The Role of Generative AI in Cyber Security
One of the most disruptive developments in the past year has been the rise of generative AI. Tools like ChatGPT, while useful for writing code, automating documentation, or generating security use cases, can also be misused by malicious actors.
For example, threat actors have used generative AI to:
- Write more convincing phishing emails
- Automatically generate polymorphic malware
- Clone voices or fabricate video content (deepfakes) for social engineering
These capabilities present new challenges for cyber security teams. Defenders must not only understand how generative AI tools work but also develop detection techniques for AI-generated threats.
Cyber Security Vendors and AI Integration
Many leading cyber security companies have embedded AI into their platforms:
- CrowdStrike uses AI-driven behaviour analytics to detect anomalies in endpoint behaviour.
- Palo Alto Networks employs machine learning in its Cortex XDR product to correlate threat signals across network, endpoint, and cloud environments.
- Darktrace has built its entire value proposition around AI, using unsupervised learning to model “normal” behaviour in an organisation and detect deviations in real time.
These types of tools are no longer optional for modern cyber security teams; they are quickly becoming the foundation of an effective defence posture. The sketch below illustrates, in its simplest form, the behavioural-baselining idea that underpins several of them.
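As an illustration only, and not a description of any vendor's actual algorithm, this sketch shows behavioural baselining at its simplest: learn what is "normal" for each user from history, then flag activity that deviates sharply. The users and numbers are fabricated for the example.

```python
# Simplified behavioural baselining: flag days that deviate strongly from a
# user's own historical baseline. Commercial tools use far richer models,
# but the core idea of comparing live activity to a learned "normal" is the same.
import pandas as pd

history = pd.DataFrame({
    "user": ["alice"] * 5 + ["bob"] * 5,
    "mb_out": [120, 110, 130, 125, 115, 40, 35, 45, 50, 42],  # daily MB uploaded
})

# Per-user baseline: mean and standard deviation of normal activity.
baseline = history.groupby("user")["mb_out"].agg(["mean", "std"])

# Today's activity, compared against each user's own baseline.
today = pd.DataFrame({"user": ["alice", "bob"], "mb_out": [128, 900]})
today = today.join(baseline, on="user")
today["z_score"] = (today["mb_out"] - today["mean"]) / today["std"]
today["flag"] = today["z_score"].abs() > 3  # strong deviation -> review

print(today[["user", "mb_out", "z_score", "flag"]])
```

Real products replace the single statistic with many correlated signals across users, devices, and network flows, but the principle of flagging deviations from a learned baseline carries through.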
AI in Incident Response and Forensics
In incident response, AI is enabling faster root cause analysis and smarter playbook execution. Security Orchestration, Automation, and Response (SOAR) platforms integrate AI to automate routine containment steps, such as isolating compromised systems, disabling credentials, or collecting forensic data, while escalating complex cases to human analysts.
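To show how automated containment and human-in-the-loop escalation can sit together in a playbook, here is a hypothetical Python sketch. The isolate_host, disable_account, and open_analyst_case functions are placeholders rather than any real SOAR or EDR API; the point is the routing logic, with automatic containment reserved for high-confidence, well-understood alerts and every alert still reaching a human.

```python
# Hypothetical SOAR-style playbook: contain automatically only when the AI-assigned
# confidence is high and the category is well understood, and always keep a human
# in the loop. The action functions are illustrative stubs, not a real platform API.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    account: str
    category: str      # e.g. "ransomware", "phishing", "policy_violation"
    confidence: float  # model-assigned confidence between 0.0 and 1.0

def isolate_host(host: str) -> None:
    print(f"[action] network-isolating {host}")

def disable_account(account: str) -> None:
    print(f"[action] disabling credentials for {account}")

def open_analyst_case(alert: Alert) -> None:
    print(f"[escalate] {alert.category} on {alert.host} queued for human review")

def run_playbook(alert: Alert) -> None:
    if alert.confidence >= 0.9 and alert.category == "ransomware":
        isolate_host(alert.host)
        disable_account(alert.account)
    open_analyst_case(alert)  # every alert still gets human eyes

run_playbook(Alert("FIN-LAPTOP-07", "j.smith", "ransomware", 0.95))
run_playbook(Alert("HR-PC-12", "a.jones", "phishing", 0.55))
```

A production playbook would add approvals, audit logging, and rollback steps, which is exactly where the human-in-the-loop oversight discussed earlier comes in.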
AI also plays a growing role in digital forensics. Tools can now automatically parse memory dumps, reverse-engineer malware, and identify lateral movement patterns across systems. These advancements reduce the time it takes to investigate breaches and restore systems.
AI and the Cyber Security Talent Gap
One of the most compelling arguments for AI in cyber security is its ability to ease the ongoing talent shortage. ISACA’s 2024 State of Cyber Security report found that 57% of organisations have unfilled cyber roles and 55% report increased turnover in cyber teams. AI can alleviate burnout by reducing manual workloads, allowing professionals to focus on high-impact tasks.
Rather than replacing jobs, AI changes the profile of what a “cyber professional” looks like. Demand is growing for:
- Cyber risk advisors with SME AI literacy
- Threat intelligence analysts who can validate AI findings
- Compliance leads who understand automated governance tools
- Product security specialists who can secure AI models against adversarial manipulation
Real-World Case Studies: AI in Action
Banking Sector: Major Australian banks have adopted AI for fraud detection. Westpac, for example, uses AI models to monitor customer transactions and flag suspicious behaviour in real time, improving fraud prevention without compromising customer experience.
Government: The Digital Transformation Agency has flagged the need for secure AI adoption in government services, highlighting the use of AI in digital identity verification while also acknowledging the importance of oversight and transparency.
Health Sector: Healthcare providers are using AI to protect sensitive medical records. AI monitors access logs, detects unauthorised access attempts, and helps ensure compliance with strict privacy laws.
Looking Ahead: AI Regulations and Cyber Security Policy
As AI adoption grows, so does regulatory scrutiny. The European Union's AI Act entered into force in 2024, with its risk-based rules for AI systems applying in stages from 2025. In Australia, the government is consulting on frameworks to manage AI safety, with the Department of Industry, Science and Resources leading discussions on responsible AI.
Cyber security professionals must stay alert to these developments, especially those working in regulated industries. GRC teams should begin mapping how AI use intersects with laws such as the Privacy Act, CPS 234 (for APRA-regulated entities), and critical infrastructure legislation.
The Human Factor Remains Central
Despite all this progress, the human element in cyber security remains vital. AI is a tool, not a decision-maker. Human oversight ensures:
- AI outputs are ethically and legally defensible
- Contextual factors are considered in risk decisions
- Organisational values are upheld
As cyber threats become more agile and machine-enabled, human defenders must be equally agile. This means working with AI, not against it.
Conclusion: AI as a Co-Pilot, Not Competitor
The intersection of AI and cyber security marks a profound shift in how we defend against threats. But it is not the beginning of the end for cyber jobs. Instead, it is a transformation of purpose. AI will do more of the grunt work. Humans will do more of the high-order thinking.
For cyber security professionals, the mission is clear: embrace the tools, sharpen the soft skills, and lean into a strategic role. The industry doesn't need fewer people because of AI; it needs smarter, more adaptable people who can work alongside it.
As Jacob Bywater, Director at e2 cyber, puts it: “The future of cyber security isn’t AI versus people, it’s AI with people. The smartest tools still need the sharpest minds behind them.”
Final Thoughts
The AI/cyber security relationship is one of augmentation, not replacement. At e2 cyber, we see AI as a tool that can amplify human expertise, not diminish it. Cyber security professionals who understand this dynamic will not only remain relevant, they will lead the industry forward.
Whether you’re just starting in cyber or are a seasoned CISO, the message is the same: evolve, upskill, and stay curious. The next chapter in cyber security won’t be written by AI alone. It will be written by professionals who know how to wield AI wisely.
Full list of resources
- https://www.e2cyber.com.au/grc-recruitment
- https://www.aisa.org.au/
- https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/asd-cyber-threat-report-july-2022-june-2023
- https://www.darktrace.com/
- https://www.e2cyber.com.au/news/common-soc-obstacles-solutions
- https://www.ibm.com/think/topics/supervised-vs-unsupervised-learning
- https://www.edx.org/
- https://www.techtarget.com/searchsecurity/definition/SOAR
- https://www.isaca.org/
- https://www.crowdstrike.com/en-au/
- https://www.upguard.com/blog/cps-234-compliance
- https://www.awsn.org.au/
- https://learn.microsoft.com/en-us/training/courses/ai-900t00
- https://www.dta.gov.au/blogs/responsible-choices-new-policy-using-ai-australian-government
- https://www.crowdstrike.com/platform/
- https://www.coursera.org/
- https://www.isc2.org/professional-development/courses/ai-for-cybersecurity
- https://www.westpac.com.au/news/making-news/2025/05/westpac-deploys-real-time-AI-to-take-on-scammers/
- https://www.splunk.com/en_us/products/enterprise-security.html
- https://www.sans.org/au_en/
- https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-developing-and-training-generative-ai-models
- https://www.ibm.com/reports/data-breach
- https://artificialintelligenceact.eu/
- https://chatgpt.com/
- https://www.paloaltonetworks.com/cyberpedia/role-of-artificial-intelligence-ai-in-security-automation
- https://www.cyber.gov.au/
- https://www.isaca.org/about-us/newsroom/press-releases/2023/new-isaca-research-59-percent-of-cybersecurity-teams-are-understaffed
- https://www.ahpra.gov.au/Resources/Artificial-Intelligence-in-healthcare.aspx
- https://oecd.ai/en/ai-principles
- https://learn.microsoft.com/en-us/azure/sentinel/overview?tabs=defender-portal
If you're looking to engage top cyber security professionals, or you're a cyber security professional seeking to propel your career further, it's always the right time to start a conversation. Our expert team looks forward to connecting with you.