13 CISOs predict how AI will shape the compliance landscape in 2025

As artificial intelligence (AI) continues its rapid evolution, industry experts are predicting a profound impact on compliance in 2025. From real-time monitoring to adaptive risk management, AI promises both transformative benefits and new challenges. We asked 13 CISOs and security leaders how they see AI impacting the world of compliance in 2025.

Meet the CISOs

  1. David Lackey, CISO and Founder, CISOnow
  2. Harsh Kashiparekh, CEO, Securis360
  3. Jay Trinckes, Data Protection Officer/CISO, Thoropass, Inc.
  4. Tosin Ojo, Founder and Principal Consultant, Citsap
  5. Kevin Barona, Founder & CEO, Cycore
  6. Christopher Stoneff, CISO, Analog Informatics
  7. Phil Lerner, CISO, Blackbird.ai
  8. Kris Hansen, CTO, Sagard
  9. Alexander Preston, CTO, Intrepid Ltd
  10. Adam Harris, Security Consultant, HMG
  11. Justin Gratto, vCISO, Justin Gratto Consulting
  12. Bastian Bartels, Managing Director, Context GRC Advisors
  13. Linda Brown, CISO, Viridis Security

Here’s a closer look at the five key trends they expect to take shape next year.

AI will shift audits from point-in-time to always-on

AI is poised to revolutionize the audit process, moving from periodic, point-in-time audits to continuous compliance monitoring. By analyzing real-time data, AI systems can help companies maintain ongoing compliance rather than scrambling to prepare for annual audits.

According to David Lackey, CISO and Founder of CISOnow, “AI presents a significant opportunity to identify non-compliance issues as they occur. However, it could also present challenges as organizations will need to integrate and rely on AI tools effectively, which requires upfront investment and skilled resources.”

Harsh Kashiparekh, CEO of Securis360, agrees, stating that “AI will provide audit and comments in real time and generate alerts if any non-compliance is observed based on established policies.”

While AI will introduce very helpful efficiencies, Jay Trinckes, CISO & DPO at Thoropass warns that security professionals should be wary of relying too heavily on this technology just yet.

AI will bring both threats and efficiencies that can be used to counteract them

The old adage “fight fire with fire” could evolve into “fight AI with AI” in 2025. CISOs predict that, while AI will introduce a myriad of opportunities and threats, more organizations will leverage offensive AI to fend off the potential damage of adversarial AI.

Emerging threats may take the form of deepfakes, says Tosin Ojo, Founder & Principal Consultant at Citsap. “There will be an unprecedented increase in AI-driven automated cyberattacks, misinformation, and deepfakes. Bad actors will become more empowered in creating sophisticated malware and automated phishing campaigns that are trained to evade current detection methods, while also morphing into new attack methods by real-time evaluation of vulnerabilities and security defenses of the environment being attacked.”

“With AI-generated deepfakes, there will be increased impersonation attempts of executives, e.g., CEO fraud, through the creation of fake visual-audio impersonations. Infosec compliance professionals must recognize that this risk is real and that traditional cybersecurity defenses and security awareness training will need to adapt and evolve rapidly to effectively prevent and detect these types of attacks.”

Kevin Barona, Founder & CEO of Cycore points out two channels that can introduce new vulnerabilities: AI systems and third-party vendors. “AI systems introduce new security challenges, such as susceptibility to adversarial attacks. These attacks can manipulate AI models to produce incorrect or harmful outcomes, potentially leading to compliance breaches. In addition, relying on external AI vendors introduces risks related to data security, compliance with internal policies, and vendor reliability.”

But it’s not all doom and gloom: AI will also introduce efficiencies like none we’ve ever seen before. Christopher Stoneff, CISO at Analog Informatics, points out that “AI will open doors to advanced data analysis such as leveraging AI to recognize trends and behaviors from disparate sources.” This level of data intelligence could arm organizations to fend off AI-related risks.

Further, Kris Hansen points out the advantages AI can bring to auditors, emphasizing that they may be able to “perform more specific or directed audits.”

Phil Lerner, CISO of Blackbird.ai, adds, “We will see organizations have a greater ability to defend against adversarial AI.”

Organizations will be able to use AI for adaptive security postures and risk management

AI’s ability to dynamically adjust security controls based on real-time threat intelligence is also predicted to play a transformative role for infosec compliance teams. The experts foresee a new wave of automation and maturity in how risk assessments are performed and how their results are consumed, achieved through real-time risk assessments driven by predictive analysis of vast datasets of historical risk events, vulnerabilities, and exploitable threats. As Kris Hansen, CTO at Sagard, puts it, “AI will improve the efficiency of understanding, managing, and maintaining infosec compliance.”

Alexander Preston, CTO at Intrepid, explains, “AI will dynamically adjust security measures based on real-time threat intelligence and organizational risk profiles. AI-powered systems will continuously assess the threat landscape and automatically modify security controls, ensuring compliance frameworks are up-to-date with emerging threats. Or in layman’s terms, AI is about to become your security bouncer, constantly updating the guest list based on who’s lurking outside.”

AI’s ability to contribute to an adaptive risk environment extends beyond security measures to policies and documentation. “AI-powered tools will increasingly automate the creation, management, and updating of security policies and compliance documents (e.g., ISO 27001, PCI DSS),” says David Lackey. “These tools can assist in auto-generating tailored documents based on regulatory requirements and company-specific data.”

Further, according to Tosin Ojo, “AI will power a new wave of automation and maturity in how risk assessments are performed and how the results are consumed… AI will evaluate the likelihood and impact of risks to organizations in real time and recommend appropriate risk remediation strategies on the fly.”

She goes on to say, “Infosec compliance professionals will experience an infusion of superpowers when performing risk assessments, as risk events that would otherwise be evaluated through a narrow lens will now have the capability of being assessed using Large Language Models (LLMs) based on massive risk-relevant data points.”

Adam Harris, Security Consultant at HMG adds, “AI is enabling better data recognition, which offers an opportunity for more nuanced control and reshaping both attack surfaces and compliance requirements. Because of the improved tooling, I’m already moving my startup clients to a cloud-only approach, saving them time, money, and effort securing hardware endpoints for security compliance.”

Regulatory focus on AI will grow across the globe, with an emphasis on ethical AI use and data integrity

Several CISOs pointed to the evolving regulatory landscape as a challenge that organizations must stay abreast of and address proactively. 

According to David Lackey, “Regulatory bodies may begin to introduce compliance requirements specifically aimed at ensuring that AI systems used for security are safe, ethical, and free from bias. Organizations using AI in their compliance operations will need to stay ahead of this evolving regulatory landscape.”

New regulations similar to the EU AI Act will emerge to govern the use of AI in cybersecurity. Frameworks are likely to focus on ethical AI use and minimizing bias. It’s paramount that organizations using AI stay ahead of evolving compliance requirements and adapt to these new standards, as the stakes are higher than ever.

“Information Security is built on three fundamental principles outlined in the Confidentiality, Integrity, and Availability (CIA) triad. Historically, the focus has largely been on Confidentiality, in relation to data breaches, and Availability, particularly in the context of ransomware,” says Bastian Bartels, Managing Director at Context GRC Advisors. “However, the rise of AI shifts the emphasis toward Integrity, as maintaining the accuracy and reliability of both the input and output generated by AI systems becomes crucial.”

Justin Gratto, vCISO of Justin Gratto Consulting, agrees, saying, “We will see more integration of compliance with standards like ISO 42001 and NIST AI RMF into existing information security management systems.”

However, Kevin Barona warns that despite the efforts to increase regulations around the use of AI, technological advancements are outpacing the protective measures.

New and emerging AI security vulnerabilities will stress the importance of humans-in-the-loop

CISOs warned about the growing security vulnerabilities introduced by AI, such as adversarial attacks designed to manipulate AI models, and shadow AI, which can unintentionally introduce vulnerabilities from the inside.

Jay Trinckes cautions, “AI will introduce risks and possible errors, so keeping humans in the loop will be very important, as well as not over-relying on AI to get it right, especially from a context basis.”

Not all of these risks are malicious; some arise simply from the introduction of new AI tools and systems. This further underscores the importance of human oversight of AI inputs and outputs. Linda Brown, CISO at Viridis Security, says, “All tools that use AI to create communications will need to be listed as risks for reputation harm as there are still harmful bias and hallucination issues with broad generative AI data.”

It’s clear that, heading into 2025, it has never been more important to keep humans in the loop and to monitor all AI tools and processes judiciously until we can build more trust in them.

How do you predict organizations like yours will leverage AI 5 years from now?

It’s difficult enough to predict how AI will change the compliance landscape next year, but we took it a step further and asked the CISOs how they expect AI to impact them and their organizations over the next five years. Here’s what they had to say:

Jay Trinckes: AI will help reduce some of the ‘redundant’ tasks and make automation easier/faster. We’ll use AI to make the ‘first pass’ over document reviews, provide summaries, and assist in automating routine tasks. AI will be a tool, but not a replacement for human expertise.

David Lackey: In five years, CISOnow could leverage AI to transform its services by automating security assessments, offering predictive risk and threat modeling, and providing “Compliance as a Service” with continuous monitoring of client environments. AI could also enhance incident response and forensics and deliver personalized cybersecurity strategies and client training. These capabilities would allow CISOnow to offer more proactive, scalable, and efficient services, shifting its focus from manual processes to strategic advisory and oversight for clients.

Phil Lerner: To provide advanced situational awareness and respond with highly accurate targeted AI attacks to defend the enterprise assets.

Adam Harris: As a security consultant, I see AI adoption as an opportunity to enhance my team’s quality and speed to market. Personal relationships and situational execution will still matter, and AI will improve my team’s ability to produce high-quality security solutions without having to go offshore for administrative support.

Justin Gratto: Complementing and supplementing human roles to an ever-increasing degree of competence.

Harsh Kashiparekh: Ensuring minimal to no human error and that clients do not face any compliance risk whatsoever.

Alexander Preston: The level we have to think about day to day is constantly up-levelling. AI is increasingly taking care of the lower level detail, leaving the human operator to think more creatively and strategically at a higher level. For Intrepid, that means AI can generate code snippets and functions that are valid and work in isolation. It can’t yet classically engineer a whole application codebase that is well-architected and maintainable. This is what we are watching for and contributing R&D hours to ourselves.

Christopher Stoneff: Data analysis leveraging AI to recognize trends and behaviors from disparate resources such as tying “anonymous reviews” to actual customers or patients.

Bastian Bartels: As the owner of a consulting business, I believe AI will play a crucial role in summarizing large sets of data, such as compliance frameworks, regulatory guidelines, and vulnerability reports. However, when it comes to delivering consulting services and expertise to clients, I do not intend to use AI to generate reports or outputs.

Linda Brown: AI will continue to improve efficiency in tasks and in learning from large datasets. In five years it will be involved in some way in every company and process, and governance questions to third parties will become more nuanced in order to accurately measure risk.

Kevin Barona: Here is where we believe AI can be leveraged best:

  • Incorporating AI into customer processes
  • Operational enhancements: optimizing operational workflows to ensure efficiency and effectiveness across the organization
  • Adopting mini-AI SaaS tools: addressing specific business needs with cost-effective SaaS solutions

Tosin Ojo: In five years, organizations like ours will have AI as an integral part of our strategy for automating various routine tasks involved in designing, implementing, and maintaining cybersecurity programs for our clients. For example, AI will be fully leveraged to solve real technology and cybersecurity risks by automating routine testing and compliance verifications in areas like Identity and Access Management (IAM), Vendor Management, Contract Reviews, Vulnerability Assessments, Threat Analysis, and Risk Assessments, just to name a few. This will drive operational efficiencies at lower costs while enabling improved decision-making beyond what is available today.

Kris Hansen: I think AI will become a significant part of an adaptive cybersecurity defense posture. Adapting to emerging threats and applying controls that are more useful and applicable to the current and emerging threat landscape. AI will help cybersecurity teams filter out the noise and focus on the real risks and threats.

The future of compliance is now: Explore AI for your business, safely and securely

Thoropass has a host of features and services to help you arm yourself for the AI evolution, such as AI pentests and GenAI-powered DDQs.

Guided by a comprehensive vision of the future of AI, Thoropass is leading the charge, supporting organizations with enterprise-ready AI products and services.

Learn more at thoropass.com/ai.
