For decades, compliance has demanded extensive manual work. Consider a typical access review: after user permissions are provisioned or revoked, compliance teams must manually confirm that changes were authorized, documented, and correctly executed. Change management, policy reviews, and document control have similarly required labor-intensive checks after the fact, creating operational costs and bottlenecks to business operations.

AI is shifting this landscape from reactive manual reviews to proactive, continuous oversight. AI-enabled systems can monitor access changes in real-time, identify unusual patterns, compare changes against policy, and surface potential issues as they occur. While final decisions remain with compliance professionals, much of the routine work is handled through intelligent automation and pre-filtered insights, allowing human experts to focus on complex or high-risk decisions.

For organizations, AI reduces both administrative burden and oversight risk by detecting anomalies earlier and embedding compliance into daily workflows.

What is AI compliance?

AI compliance refers to ensuring that artificial intelligence systems and their applications adhere to relevant laws, regulations, standards, and ethical guidelines. This includes making sure AI systems operate within legal frameworks, respect privacy, maintain security, avoid bias, and function as intended while minimizing risks. AI compliance encompasses both compliance of AI systems themselves and how AI can be used to enhance broader organizational compliance activities.

Why is AI compliance important?

AI compliance is crucial because it:

AI in documentation and due diligence

A significant challenge in compliance—particularly for companies scaling operations or pursuing new partnerships—is completing large numbers of due diligence questionnaires (DDQs). These often contain framework-specific questions referencing an organization’s policies, procedures, and audit evidence. Historically, completing DDQs required searching through policy documents, audit reports, and prior questionnaires, resulting in duplicated effort, outdated responses, and delays.

AI transforms this process. Generative AI models integrated into compliance platforms can cross-reference an organization’s policy repository, certifications, and audit materials. When new DDQs arrive, AI systems review existing documentation to produce accurate, current draft answers, highlighting relevant excerpts and surfacing evidence automatically.

For example, when a fintech company receives questions about data encryption protocols in different DDQs, the AI system retrieves the latest encryption policy, relevant SOC 2 report sections, and cross-references previous answers to present a pre-drafted, current response. Compliance managers need only approve, refine, or update as needed, eliminating hours of manual review.

This approach saves time and reduces the risk of inconsistent, outdated, or incorrect answers.

Improving audit readiness and execution

Traditional audits require companies to provide extensive information—unstructured data including screenshots, change logs, email communications, and configuration files. Auditors then review this material to find evidence that controls were followed and identify potential compliance gaps.

AI integration transforms both internal and third-party audits. AI-powered tools can scan and analyze unstructured data, recognize patterns of compliant versus non-compliant behavior, and present auditors with focused alerts or evidence packages.

For a healthcare organization preparing for a HIPAA audit, compliance staff previously compiled hundreds of email chains about health record access requests, along with sample logs and policy updates. An AI system can now automatically highlight access permission deviations, correlate email approvals with system changes, and summarize exceptions requiring auditor attention.

This speeds audit cycles and enables “real-time attestation”—where internal stakeholders and external partners can receive current compliance evidence without waiting for annual reports. For instance, users can verify that their data will be encrypted at entry, with AI systems confirming real-time security status rather than relying on months-old certifications.

As AI adoption increases and new applications like generative models emerge, regulations are evolving rapidly. Standards such as the EU AI Act and ISO 42001 address technical security controls, responsible data use, copyright, bias, and machine learning model provenance and training.

AI can help organizations track, interpret, and implement regulatory changes. When new regulatory updates change model transparency requirements, an AI-powered compliance system can:

For an insurance company operating across jurisdictions, an AI engine can review European regulatory updates concerning data retention for AI-driven decisions, compare requirements with current retention policies, and notify compliance officers if retention periods need adjustment to avoid violations.

This capability is essential for cost-effective regulatory adaptation and reducing inadvertent non-compliance risk as regulations continue changing.

What is the compliance standard for AI?

Multiple standards are emerging for AI compliance, including:

These standards focus not only on technical security controls but also on responsible data use, copyright, bias, and the provenance and training of machine learning models.

Does the US have AI regulations?

The US currently takes a sector-specific approach to AI regulation rather than implementing comprehensive AI-specific legislation like the EU. Regulations affecting AI come from:

The landscape is evolving rapidly, with several federal initiatives working toward more coordinated approaches to AI governance and oversight.

Elevating the role of compliance professionals

A recurring compliance challenge has been high turnover of skilled professionals, partly due to the historically administrative or reactive nature of their work. With AI handling repetitive review and documentation tasks, compliance officers can move into strategic roles—interpreting complex regulatory requirements, designing controls, engaging with regulators, and advising business leaders on risk.

In a multinational technology enterprise, AI can automate supplier due diligence documentation collection and initial review while flagging ambiguous cases (such as non-standard security controls implemented by third-party vendors) for expert human judgment. This targeted focus adds organizational value, reduces burnout from repetitive manual work, and elevates compliance as a business enabler.

Data analysis, risk prioritization, and operational efficiency

AI provides significant advantages in compliance through large-scale data analysis for risk management. By analyzing historical trends and continuously monitoring transaction logs, access records, or customer interactions, AI systems can identify patterns and anomalies.

For a global retail chain handling payment card data, AI can continuously review payment processing logs for patterns historically associated with data breaches or regulatory violations—such as repeated failed logins, after-hours access, or data exfiltration attempts. Rather than overwhelming compliance staff with low-value alerts, the system prioritizes issues most indicative of actual risk and possible compliance failures.

This precision reduces false positives, improves organizational risk awareness, and ensures compliance resources are allocated where they matter most.

What are AI tools for regulatory compliance?

AI tools for regulatory compliance include systems that:

Security testing and the new attack surface created by AI

As organizations deploy AI-powered solutions—such as large language models (LLMs) for customer support or document summarization—they introduce new potential attack vectors. Beyond traditional penetration testing, specialized assessments are needed to identify how prompt engineering or API misuse could extract sensitive information from AI models or trigger unauthorized actions.

For example, when a hospital integrates an LLM-based chatbot to answer patient questions, penetration testers use sophisticated prompts to determine if the chatbot inadvertently reveals private patient data or internal logic not meant for disclosure. AI-specific penetration testing becomes essential for ensuring these technologies do not compromise information security or regulatory compliance.

Purpose-built AI penetration testing can uncover vulnerabilities such as improperly scoped access to internal APIs, data leakage through context windows, or susceptibility to adversarial prompts—addressing risks unique to generative AI adoption in customer-facing workflows.

A human-centric approach to AI in compliance

Despite AI’s efficiencies, the human element remains central. Ultimate responsibility for interpreting complex scenarios, approving sensitive information sharing, and making judgment calls rests with skilled compliance professionals. AI serves as an augmentative tool, providing clarity, automation, and risk prioritization, but never removing the need for human judgment or oversight.

Organizations should use AI to:

The future of AI and compliance

The integration of AI into compliance represents a significant technological and operational shift. Processes once characterized by manual checklists, fragmented evidence gathering, and constant catch-up are becoming proactive, data-driven, and risk-focused compliance programs. AI is not only reducing costs but improving accuracy, transparency, and agility across all sectors.

A software company scaling internationally illustrates this future: AI-enabled compliance systems help the company understand emerging local privacy laws, generate and approve DDQ responses quickly, monitor audit readiness in real time, and proactively identify genuine risks from operational data. Compliance becomes a competitive advantage rather than an innovation constraint—freeing expertise for strategic engagement rather than administrative tasks.

While AI reshapes compliance, its most significant impact is empowering professionals and organizations to engage more effectively with risk, ethics, and proactive governance in the digital era. The next generation of compliance—augmented and accelerated

When talking about AI and penetration testing, we can split the discussion into two main areas: using AI to perform pentests and performing pentests on AI systems. While Thoropass offers testing for large language models (LLMs), the core of many AI systems, this article focuses on the former: how AI is transforming modern pentesting. Can AI deliver a full-fledged test? Will it replace human testers? Is it an ally or a risk? Can it satisfy compliance requirements? Let’s dive in.

How Does AI Help Penetration Testers?

Manual penetration testing typically involves two main tasks: finding vulnerabilities and delivering a clear, actionable report. While testers often prefer the hands-on challenge of identifying weaknesses, documenting those findings remains critical. AI helps simplify this process by assisting with report drafting, organizing insights, and ensuring content is accessible to both technical and non-technical audiences. This allows testers to focus more of their time and energy on in-depth security analysis while maintaining high-quality deliverables.

AI also automates repetitive tasks:

Used correctly, AI helps pentesters scale their work efficiently without sacrificing quality.

What Are the Limitations of AI in Pentesting?

AI has made pentesting more data-driven, but human judgment remains irreplaceable.

In short, AI can assist, but it cannot independently lead or replace the nuanced process of penetration testing.

How Thoropass Uses AI to Enhance Pentesting

AI will not replace penetration testers, but it can make them more effective. Thoropass integrates AI into its pentest process to increase efficiency without compromising depth.

This human-AI collaboration yields faster results without sacrificing the quality required for audits or assessments.

Can AI-Only Pentests Satisfy Compliance Requirements?

A fully AI-driven pentest refers to an automated assessment process conducted without human involvement. These tests use artificial intelligence to perform tasks like reconnaissance, vulnerability detection, and sometimes even exploitation. While they can deliver rapid insights and flag common security issues, they lack the contextual understanding and decision-making necessary for deeper evaluations. Now the question becomes: are these AI-only assessments enough to meet compliance standards?

Short answer: No. AI-only tests fall short of compliance-grade pentests.

Auditors require humans in the loop, both for risk assessments and to explain how tests were conducted.

Conclusion

AI is transforming penetration testing by streamlining repetitive tasks, accelerating reconnaissance, and enhancing visibility across large attack surfaces. These capabilities enable security teams to operate more efficiently, automating early-stage workflows so human testers can concentrate on complex, high-value activities.

However, AI alone cannot deliver the full picture. Understanding business context, adapting to edge cases, and making risk-informed decisions still require human expertise. Security is as much about creativity and critical thinking as it is about automation and scale. Without experienced oversight, AI may miss key insights or introduce operational risks.

At Thoropass, we thoughtfully integrate AI into our pentest methodology to improve speed and precision while maintaining the depth, compliance rigor, and human insight our clients expect. This collaborative approach allows us to deliver better outcomes, faster, smarter, and with confidence. AI won’t replace pentesters, it will empower them.



FAQs

Can AI fully replace penetration testers?

No. AI can automate certain tasks, but it lacks the intuition, contextual understanding, and adaptability required for comprehensive penetration testing. Additionally, because AI models may be trained on or store sensitive data, testers must be cautious about what information is shared with external AI vendors.

Is an AI-only pentest enough for compliance?

Not usually. Compliance standards like PCI DSS and HIPAA require human involvement and documentation that AI-only tools can’t provide.

How does Thoropass use AI in pentesting?

We use AI to automate parts of reporting and vulnerability discovery, always under human supervision for quality assurance.

Can AI introduce risk during pentesting?

Yes. Without proper safeguards, AI can cause service disruptions or access sensitive areas unintentionally.

Will AI eventually replace all security roles?

Unlikely. While AI can enhance productivity, it cannot replicate human judgment, ethics, or creativity in critical security operations.

With the accelerating pace of technological change, companies now face a critical need to navigate complex compliance landscapes and establish robust AI governance practices. A recent study revealed that 83% of organizations experienced a data breach in the last two years, and non-compliance penalties cost companies an average of $14.82 million annually. 

The stakes have never been higher, from ensuring regulatory adherence to maintaining ethical AI practices. Missteps can lead to significant risks, including a 28% drop in consumer trust following a compliance failure, fines exceeding 4% of global annual revenue under regulations like GDPR, and operational inefficiencies that slow growth and innovation.

Better together: Zendata + Thoropass 

Thoropass and Zendata have joined forces to offer an integration to simplify and enhance compliance and AI governance. Zendata’s expertise lies in helping organizations uncover hidden data flows, adhere to regulatory requirements, and implement ethical AI practices—enabling businesses to operate securely and confidently scale.

Overview of Zendata: What sets it apart in compliance?

Most AI platforms on the market focus on either security testing or governance, often forcing organizations to piece together separate tools to address their full range of needs. Zendata seamlessly integrates security testing and governance into a single, comprehensive platform. This holistic approach ensures that security leaders can adopt AI technologies confidently, knowing that their data is secure from potential vulnerabilities and used in alignment with ethical and regulatory standards.

Zendata helps organizations scale their AI initiatives without the typical concerns about data exposure, compliance risks, or governance gaps by bridging the gap between these two critical areas. This unified solution streamlines workflows, enhances operational efficiency, and fosters trust in AI adoption, making it a vital tool for modern businesses navigating the complex environment of AI and data management.

Tackling compliance challenges: Integration that provides solutions

Zendata tackles many critical compliance challenges that businesses face today, including data mismanagement, privacy violations, AI bias, and third-party risk. By providing automated solutions for protecting personally identifiable information (PII), detecting AI bias, and offering comprehensive data observability, Zendata helps organizations safeguard sensitive data while ensuring ethical and secure AI operations.

Zendata allows businesses to embed these compliance measures into their existing workflows easily. The platform’s continuous monitoring and health checks keep businesses up to date with the latest regulatory requirements and standards, significantly reducing the risk associated with AI models and data processes. 

This proactive approach helps organizations stay compliant and streamlines complex processes, ultimately saving valuable time and enhancing overall operational security. It allows businesses to scale confidently without the burden of compliance concerns.

Inspiration behind the Zendata-Thoropass collaboration

The Zendata-Thoropass collaboration combines privacy compliance AI governance with robust compliance automation, providing organizations with a comprehensive solution to tackle some of the most pressing challenges in data security, privacy, and compliance. As enterprises face increasing complexities surrounding regulatory requirements and the management of sensitive data, this integration enables them to integrate advanced AI-driven governance into their compliance processes seamlessly.

By merging Zendata’s expertise in AI governance with Thoropass’ powerful compliance automation, organizations can more effectively navigate evolving regulations, protect privacy, and manage data security risks. This collaboration streamlines compliance efforts and helps organizations proactively address emerging challenges, mitigate risks, and maintain a strong security posture as they scale.

Understanding the integration: Key features and how it works

This integration leverages APIs to connect Zendata’s advanced AI monitoring capabilities with Thoropass’ compliance automation platform. As a result, it enables smooth data exchange and workflow synchronization, helping organizations efficiently manage both compliance and AI governance in one unified system.

Zendata’s platform offers a range of key features that enhance its value when integrated with Thoropass. It provides automated protection of personally identifiable information (PII) through continuous monitoring and real-time alerts, ensuring sensitive data remains secure. Zendata’s bias detection capabilities help identify and address potential biases within AI models, a crucial feature for maintaining fairness and ethical AI practices. Additionally, Zendata’s data observability tools allow organizations to track and understand data flows across their systems, helping ensure transparency and regulatory adherence.

With continuous monitoring and health checks, Zendata supports ongoing compliance efforts by proactively detecting risks and potential violations. Allowing businesses to streamline processes, reduce model risks, and enhance operational security. 

Frameworks and regulations supported by the integration

The integration is designed to help organizations meet a wide range of global privacy regulations, industry-specific standards, and emerging AI governance frameworks. It offers a solution that simplifies compliance with today’s most critical and complex data-driven regulations.

For global privacy regulations, the integration supports compliance with frameworks such as GDPR, California Consumer Privacy Act (CCPA), and the Personal Data Protection Act (PDPA). These regulations require organizations to ensure that personal data is collected, processed, and stored with the highest levels of security and transparency. Zendata’s automated PII protection, data observability, and continuous monitoring help businesses maintain compliance with these stringent privacy requirements, ensuring that data protection is embedded throughout the data lifecycle.

In addition to privacy regulations, the integration assists with industry-specific standards such as the Gramm-Leach-Bliley Act (GLBA) and Health Insurance Portability and Accountability Act (HIPAA), which govern the security and confidentiality of sensitive financial and health data. Zendata’s advanced AI governance features and Thoropass’ compliance automation help businesses implement the necessary security controls and documentation to meet these standards, reducing the risk of non-compliance and ensuring data protection aligns with industry best practices.

Simplifying compliance workflows

Zendata offers real-time visibility into data flows and AI decision-making processes, allowing organizations to closely monitor and track how data is used and processed throughout their systems. By proactively identifying potential issues before audits, Zendata helps teams address concerns in advance, reducing the risk of last-minute surprises. This approach enhances the accuracy and efficiency of audits and helps businesses maintain compliance and minimize disruptions, ensuring a smoother and more secure operational environment.

Long-term vision for the Zendata-Thoropass partnership

As AI continues gaining traction in enterprises, it brings excitement and apprehension. The excitement stems from its potential to transform business operations, improve efficiency, and drive innovation. However, there is also a significant fear, particularly around data security and privacy concerns. Organizations increasingly worry about sensitive data exposure to AI systems and third parties leveraging AI technologies. The unknowns surrounding how this data will be used and who has access to it create anxiety around privacy violations, compliance risks, and ethical concerns.

The partnership aims to address these challenges by creating a unified, comprehensive platform that blends AI governance with compliance automation. It empowers businesses to scale their AI initiatives while prioritizing transparency, security, and adherence to regulations at every step. 

The long-term vision is to equip organizations with the tools they need to embrace AI confidently, knowing their data is secure, their operations are compliant, and their ethical use of AI systems. Through this partnership, Zendata and Thoropass are building the foundation for organizations to navigate the complexities of AI governance, mitigate risks, and stay ahead of the changing regulatory requirements.

Getting started with the Zendata-Thoropass integration

The integration is simple to set up and navigate. Contact Thoropass to learn more or request a demo to discover how Zendata’s AI governance capabilities, combined with Thoropass’ compliance automation, can streamline your compliance processes and enhance your organization’s data security.

The time of static, manually operated cybersecurity measures is behind us. Today, artificial intelligence (AI) has revolutionized the field by introducing automated systems that are ever-adaptive and capable of mitigating threats, both existing and new. The incorporation of AI sharpens threat detection, facilitates scalable monitoring in real-time, and presents financially viable options suited to changing cyber threat environments.

By infusing cutting-edge AI innovations into cybersecurity strategies, a forward-thinking security posture has been established. This advancement automates extremely accurate incident response methods while confronting an extensive array of dangers—especially those posed by sophisticated cyber threats.

Key takeaways

The impact of artificial intelligence on the threat landscape

It’s important to acknowledge that AI is somewhat of a double-edged sword: While it can be used to significantly bolster cybersecurity defenses, it also represents a potent tool that, if wielded with malicious intent, can lead to the development of formidable new cyber threats. Darktrace’s State of AI Cyber Security 2024 Report stats indicate 74% of security leaders say AI-powered cyber threats have impact their businesses, with more to come.

While agents of cybersecurity explore how AI can assist in bolstering security, cybercriminals are harnessing AI to craft more sophisticated attack methods, necessitating a “fight fire with fire” approach. Because of this, the cybersecurity industry must continuously adapt and employ advanced AI-driven measures to stay ahead of adversaries who are also using AI to enhance their attack strategies.

Let’s look at some of the ways AI can be used to enhance cybersecurity.

Data science x AI: A powerful tag-team in threat intelligence

The field of data science is crucial in strengthening the infrastructure for threat intelligence. It enhances AI capabilities, enabling these systems to parse extensive datasets to find irregularities and patterns that can indicate potential security threats.

For AI systems to establish a strong cybersecurity defense, it’s essential they gather thorough data from diverse origins such as endpoints, networks, and cloud frameworks.

Four ways AI is transforming incident response 

AI systems have transformed the landscape of incident response, automating the detection and alerting of cyber threats as well as safeguarding sensitive data.

This technological advancement has significantly improved threat detection capabilities by expediting threat identification and cutting down on mitigation time. The incorporation of AI-driven automation improves efficiency and scalability, optimizing early analysis phases and thereby freeing up security teams to concentrate on intricate, higher-priority tasks. 

1. AI-driven pattern recognition

AI algorithms, equipped with sophisticated pattern recognition capabilities, help identify any signs of malevolent activity and attacker behavior. These AI systems adeptly manage extensive data analysis tasks while evolving to enhance threat detection accuracy as they encounter new information.

Nevertheless, human judgment is still required to analyze intricate threats and make vital decisions. This is despite AI’s proficiency in preemptively recognizing potential ransomware or malware invasions before they breach digital defenses.

Example: Malware analysis

Artificial intelligence systems are able to sift through vast quantities of data to detect and isolate potential malware threats with unprecedented speed and accuracy. 

AI-driven malware analysis tools employ various techniques, such as static and dynamic analysis, to inspect and evaluate suspicious files. Static analysis involves examining the code without executing the program, while dynamic analysis observes the behavior of the code during execution. AI enhances these techniques by automating the process and learning from each analysis to improve future detection.

Moreover, AI systems can quickly adapt to the evolving nature of malware. As cybercriminals employ more sophisticated methods to evade detection, such as polymorphic and metamorphic malware, AI’s machine learning algorithms continuously learn and adjust to these new tactics. This ensures that the AI models remain effective even as the threat landscape evolves.

2. Cyber threats and the predictive power of AI

The ability to anticipate future attacks is a game-changer in cybersecurity. Utilizing AI for its predictive faculties allows organizations to defend against potential cyber threats before they happen and increase their readiness to counteract cyber incidents. 

In automated threat detection, AI plays a crucial role by absorbing knowledge from past events, ongoing data streams, and external intelligence sources to proactively spot new emerging threats.

AI’s predictive strength stems from its continual learning process, which adapts to an ever-changing environment of security risks. This enables AI to constantly refresh its understanding of the patterns and techniques used in attacks—thus equipping it with foresight into upcoming challenges within the field of cybersecurity.

Example: User and Entity Behavior Analytics (UEBA)

User and Entity Behavior Analytics (UEBA) utilizes advanced AI and machine learning algorithms to detect anomalies in user and entity behaviors. By establishing a baseline of “normal” activity patterns within an organization’s network, UEBA systems can identify deviations that may signify malicious intent or compromised accounts.

UEBA tools analyze a wide range of data sources, including logs, network traffic, and endpoints, to build comprehensive profiles for each user and entity. These profiles help in recognizing unusual patterns such as irregular login times, excessive file downloads, or uncharacteristic access to sensitive data, which could indicate a potential security threat.

The strength of UEBA lies in its ability to correlate disparate data points and apply context to behaviors. This contextual analysis is key in distinguishing between legitimate activities and potential security incidents. For example, an employee accessing the network from a foreign country may be flagged as suspicious, but if the system recognizes that the employee is traveling for business, it may consider this behavior as normal.


AI pentesting offering
Thoropass's AI pentesting offering ensures secure and ethical use of AI

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.

Join the waitlist icon-arrow-long

3. Real-time monitoring with AI systems

AI systems, equipped with machine learning algorithms, transform threat detection by providing security teams with the ability to monitor system health and data flows automatically and continuously. This significantly enhances real-time security through constant vigilance for potential threats, ensuring immediate notification of security personnel.

Organizations that implement AI-driven platforms featuring user and entity behavior analytics (UEBA) benefit from an elevated level of protection. These advanced systems identify intricate patterns and irregularities within vast datasets, bolstering the capability to detect threats effectively and respond in a timely manner.

4. Automated responses

The utilization of AI in cybersecurity extends to the realm of Automated Responses, where the system takes immediate and decisive action upon detecting a threat. For instance, when a suspicious IP address is identified as a potential source of malicious activity, AI-driven systems can block it automatically, preventing the attacker from causing further damage or gaining access to sensitive information.

Similarly, devices that are compromised or infected with malware can be swiftly quarantined by AI systems. This proactive measure isolates the affected device from the network, thereby containing the threat and preventing the spread of the infection to other devices or systems.

AI can also trigger incident response protocols—the predefined security procedures that are enacted in the event of a detected threat. These protocols may include: 

Automated responses powered by AI not only reduce the time taken to address security incidents but also enhance the overall efficacy of the cybersecurity infrastructure. By leveraging these automated capabilities, organizations can ensure a more resilient and responsive security posture in the face of ever-evolving cyber threats.

The integration challenge: Incorporating AI into existing cybersecurity

Incorporating AI into the current cybersecurity framework may come with challenges. It may require middleware or APIs to facilitate interaction and data transfer within existing systems, thereby enhancing threat detection capabilities while avoiding any interruption in services.

By merging AI with machine learning alongside traditional rule-based frameworks, a hybrid approach to threat detection is formed that enhances adaptability and precision in spotting potential threats. Through effective integration of strong AI elements, companies can deploy swift countermeasures, which help mitigate the effects of security incidents when they arise.

AI and human synergy in detecting security threats

The importance of human analysts cannot be overstated, even as we increasingly turn to artificial intelligence for support. Human analysts offer indispensable capabilities that AI may lack on its own, including:

These unique human traits are essential contributions.

Merging the automated efficiency of artificial intelligence with the nuanced insight of human intelligence is vital in identifying and neutralizing security threats.

By refining the accuracy of security measures and providing pertinent context for gathered data, AI supports the team in charge of security operations and responding to incidents. This support reinforces your security team’s capabilities without negating the requirement for human analytical input.

Incorporating AI into threat detection helps:

Cultivating AI literacy among security teams

To maximize the benefits of AI technology in cybersecurity, it is crucial for experts to:

By concentrating on increasing understanding about AI among organizations, teams can more effectively utilize its capabilities for detecting and responding to threats.

Ethical considerations around AI threat detection

Exploring the terrain of AI-driven cybersecurity requires careful consideration of ethical consequences. Ethical implementation of artificial intelligence within cybersecurity must follow these principles.

Upholding ethical norms in AI-focused cybersecurity hinges on diligent compliance measures.

Regulatory factors, including data privacy, storage duration, openness, and responsibility, play an essential role in integrating artificial intelligence into threat detection systems.

Looking ahead, we see even more potential progress and enhancements in AI and cybersecurity. AI predictions feature:

To address the threats associated with AI, there is an emerging inclination towards deploying artificial intelligence red teams as well as incentive-based vulnerability identification programs such as bug bounties. These strategies help with identifying and neutralizing distinct AI security weaknesses including the manipulation of models or attacks that exploit prompt injections.

Conclusion: AI is a game changer in advanced threat detection

AI has emerged as a real game-changer in cybersecurity. From transforming traditional defenses and enhancing threat detection accuracy to streamlining incident response and fostering a collaborative defense, AI’s impact on cybersecurity is immense.

As we navigate this evolving threat landscape, we must leverage the power of AI while balancing it with human intelligence and ethical considerations. The future of cybersecurity is indeed AI, and by harnessing its potential, we can fortify our defenses against the cyber threats of today and tomorrow.

More FAQs

Artificial Intelligence has revolutionized how we safeguard against cyber threats by moving away from conventional manual safeguards to advanced, automated systems capable of ongoing adaptation and learning. This shift has markedly enhanced the precision in detecting threats, facilitated monitoring in real time on a large scale, and provided an economical means to guard against the ever-changing landscape of cyber dangers.

AI enhances endpoint security by offering persistent protection to devices within a company and employing behavioral analytics to detect potential cyber threats based on user actions and system performance.

To summarize, when deploying AI for cybersecurity purposes, it is crucial to adhere to ethical and legal standards. This includes complying with relevant regulations pertaining to data privacy and ensuring transparency in operations.


AI governance is the process by which organizations and societies regulate artificial intelligence to ensure its ethical, fair, and abides by legal application. 

With artificial intelligence (AI) shaping critical aspects of life and business, governance stands as a guardian of values and norms in the burgeoning digital age. This article will guide you through the importance, approaches, and impact of AI governance, providing insight into its role in our increasingly AI-driven world.

Key takeaways

Understanding AI governance

AI governance. This term encompasses the complex set of regulations, policies, and standard practices that guide the ethical, responsible, and lawful employment of AI technologies. The objective within this domain is two-fold: 

  1. Maximize the benefits offered by AI, while simultaneously 
  2. Addressing a multitude of challenges, including data security breaches and moral dilemmas.

As AI prevalence increases across various sectors, it becomes paramount to uphold public confidence by ensuring transparency and accountability in how AI systems operate. The evolution of AI will inevitably be shaped by a confluence of factors, such as advances in technology, prevailing societal norms/values, and ongoing international partnerships.

Defining AI governance

The continuum of AI governance ranges from less structured to highly formalized systems, designed specifically to tackle the ethical implications associated with AI technologies. Governance that is informal typically originates from a company’s core values and might include ethics committees that operate without strict frameworks.

On the other hand, ad hoc governance presents itself as a more defined system set up to address particular challenges linked to AI by creating explicit policies, practices, and protocols for governance.

AI governance aims and objectives

AI governance aims to ensure that AI’s benefits are widely accessible, that AI initiatives resonate with societal values, and that responsible AI is promoted. Upholding principles such as fairness, transparency, and accountability is essential for integrating ethical considerations into business goals within every application of artificial intelligence.

The scope of governance around AI technologies

Governance of AI is an extensive field that includes ethical, legal, societal, and institutional dimensions. It devises strategies to guarantee that AI operations conform to organizations’ objectives while adhering to ethical norms. The governance approach differs across regions, from the thoroughgoing EU AI Act to emerging structures in the U.S., yet it converges on a unified objective: preemptively handling risks and safeguarding public well-being.

As AI technologies progress swiftly and have widespread effects internationally, it is imperative to adopt a judicious method for governing AI. Such governance must encourage creativity while simultaneously mitigating hazards and maintaining social values.

Why we really need AI governance 

AI governance is not merely a set of guidelines; it’s a necessity in the modern era, where AI systems profoundly influence various aspects of our daily lives. The need for AI governance stems from the potential risks and ethical dilemmas posed by autonomous systems. Without proper governance, AI could exacerbate social inequalities, invade privacy, or make unaccountable decisions with far-reaching consequences.

Let’s look at some of the key reasons AI governance is considered essential:

The establishment of AI governance is based on a foundation of legal frameworks and regulations designed to oversee the creation and implementation of artificial intelligence (AI) systems. Across the globe, there exists a varied regulatory environment for AI, highlighted by national approaches such as that adopted by Singapore and legislation like the European Union’s Artificial Intelligence Act that steers how AI is utilized.

With the advancement in AI technology comes an increase in the complexity surrounding compliance with laws and regulations, bringing up new challenges, including algorithmic accountability and consideration of what roles legal professionals will play going forward.

Understanding AI regulation

Regulation of AI is encompassed by both global and domestic structures. Legislation such as the GDPR exerts influence on AI by enforcing rigorous protections for personal data and privacy across the European Union. The EU, along with organizations like UNESCO, has crafted policies and ethical guidelines that emphasize human-centered principles in the development of AI.

The rapid escalation in data acquisition and analysis has raised apprehensions regarding individual privacy, necessitating stringent management and compliance with regulatory standards such as those established by the GDPR.

Below, we’ve listed some of the key Al regulations and regulatory proposals in 2025.

Name: AI Bill of Rights

Region: U.S. 

Description: Focuses on ensuring fairness, privacy, and transparency in AI systems.

More Info: Link


Name: Algorithmic Accountability Act

Region: U.S.

Description: Mandates impact assessments for AI systems used in critical sectors such as finance and healthcare.

More Info: Link


Name: Digital Services Oversight and Safety Act

Region: U.S.

Description: Mandates transparency reports, algorithmic audits, and accountability measures to protect consumers and ensure safe use of digital services.

More Info: Link


Name: DEEP FAKES Accountability Act

Region: U.S.

Description: Requires creators and distributors of deepfake technology to include watermarks indicating altered media.

More Info: Link


Name: NIST’s AI Risk Management Framework

Region: U.S.

Description: Emphasizes a risk-based approach to ensure AI technologies are trustworthy, fair, and secure.

More Info: Link


Name: Artificial Intelligence and Data Act (AIDA)

Region: Canada

Description: Aims to regulate the use of AI for protecting personal data and ensuring ethical use.

More Info: Link


Name: Pan-Canadian Artificial Intelligence Strategy

Region: Canada

Description: Enhances investments in AI research while emphasizing ethical standards and inclusivity.

More Info: Link


Name: European Union’s Artificial Intelligence Act

Region: EU

Description: Comprehensive framework categorizing AI systems into risk levels (unacceptable, high, limited, minimal) and imposing strict requirements on high-risk systems.

More Info: Link


Image of a European Union flag in front of an office building
Recommended Reading
Understanding the EU AI Act

Unpack key provisions and future impacts of the European Union’s new Artificial Intelligence Act

The EU AI Act: Key provisions and future impacts icon-arrow-long

Name: Digital Services Act (DSA)

Region: EU

Description: Addresses the accountability of online platforms, including AI-driven services, focusing on transparency and user safety.

More Info: Link


Name: National AI Strategy

Region: UK

Description: Focuses on maintaining leadership in AI innovation while promoting ethical AI and robust safety standards.

More Info: Link


Name: AI White Paper

Region: UK

Description: Proposes flexible regulatory frameworks to encourage innovation while ensuring AI technologies are trustworthy and transparent.

More Info: Link


Name: AI Development Plan

Region: China

Description: Emphasizes becoming a global leader in AI by 2030, with a focus on innovation, data protection, and international collaboration.

More Info: Link


The interplay between AI governance and laws

The governance of AI is deeply intertwined with legal structures. Legislation dictates the application of AI, while simultaneously, AI systems are deployed to manage and comply with multifaceted legal regulations. In America, government-led efforts and directives from entities such as the Federal Trade Commission bolster governance related to AI, illustrating how closely linked law and governance truly are.

Incorporating AI into strategies for governance, risk management, and compliance is crucial for adeptly maneuvering through these complex regulatory environments.

Legal regulations provide both limitations and inspiration for AI development, driving the creation of solutions that not only comply with but surpass legislative expectations. The EU AI Act and GDPR stand as exemplary examples of regulations that encourage the production of AI systems that are secure, reliable, and safe. AI systems are crafted to ensure adherence to legal norms, showcasing a harmonious relationship between innovation in technology and compliance with the law.

It is crucial to maintain a balance between rapid technological advancement and rigorous adherence to ethical and legal principles to foster sustainable innovation in artificial intelligence.

Establishing responsible AI practices in your business

Organizations must embed ethical considerations within their AI governance frameworks to guarantee the responsible application of AI’s potential. This requires not merely the creation of ethical guidelines but also adherence to legal standards and risk management pertaining to AI deployment. Achieving this AI GRC management lays down a solid base for fostering responsible AI development.

The role of AI ethics boards

Corporate AI ethics boards focused on AI play a critical role in maintaining the integration of ethical considerations within these evaluation metrics. They do so by implementing Key Performance Indicators (KPIs), which include measures like rate of bias detection and scores related to adherence to ethics.

Ethics boards focused on corporate AI have a crucial role in upholding ethical standards, which include:

Crafting ethical guidelines

Established on the foundation of universal principles, ethical guidelines for AI dictate that developers and regulators create AI systems that promote fairness, transparency, and privacy protection. These ethical AI practices are not static. Rather, they’re integrated into all stages of the life cycle of an AI system—including design, deployment, and ongoing supervision.

Ensuring high-quality data governance practices to prevent historical biases from infiltrating datasets is a critical component in fostering non-discriminatory ethical practices throughout the development of AI. Constructing centers dedicated to excellence in AI demonstrates a forward-thinking approach towards managing governance over these intelligent systems. Such hubs unite various experts to carefully consider both costs and ramifications brought about by increasing automation levels.

Ensuring compliance and risk management

Navigating the intricate landscape of AI regulations and data protection statutes is a crucial component of governing AI, essential to mitigating legal exposure and cultivating ethical practices in managing data.

Employing artificial intelligence for predictive analytics within risk management — key for detecting potential system malfunctions or regulatory non-compliance — underscores the importance of utilizing high-grade training datasets. This ensures biases are minimized, guaranteeing that decisions made by AI align with human ethical standards.


AI pentesting offering
Thoropass's AI pentesting offering ensures secure and ethical use of AI

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.

Join the waitlist icon-arrow-long

Incorporating human oversight

Within the realm of AI governance, ensuring that human oversight is integral acts as a safeguard to keep AI systems in check and accountable, especially when there are instances of mistakes or harm. Having an established process for appeals and human evaluation of decisions made by AI is crucial not just for retaining control over results but also for shielding institutions from the reputational harm that could stem from biases or inaccurate information.

Implementing effective AI governance strategies

To ensure responsible AI systems are managed effectively, a strategic approach to AI governance is essential. This includes the establishment of robust structures for responsible AI governance that offer specialized knowledge, attention to detail, and clear responsibilities—alongside ongoing evaluation of data quality and results.

The commitment to shaping society beneficially via artificial intelligence is embodied by the AI Governance Alliance. Its role in promoting innovation throughout various sectors underlines this dedication.

Continuous monitoring and adaptation of AI systems

Consistent supervision is crucial in the realm of AI governance to spot any discrepancies in performance, maintain accountability logs, and uphold adherence to regulatory standards. 

To prevent declines in functionality and certify that the desired results are achieved, it’s essential that organizations conduct ongoing surveillance, refinement, and verification of their AI models. The implementation of artificial intelligence for automated oversight concerning compliance can be an effective strategy to meet intricate regulatory requirements including privacy laws such as GDPR.

Assessing both the economic returns and supplementary benefits yielded by artificial intelligence offers quantifiable indicators that gauge fiscal prudence as well as extra gains provided by these services.

Conclusion: Effective and ethical AI management is key

As we navigate the complexities of AI governance, it is clear that while challenges abound, the roadmap for ethical and effective AI management is being charted with a focus on trust, transparency, and legal adherence. 

By implementing robust governance frameworks and engaging in continuous dialogue and innovation, we can ensure that AI serves the greater good, reflecting our highest values and standards.

More FAQs

The concept of AI governance encompasses a structured set of guidelines and practices aimed at the ethical, responsible, and lawful deployment of artificial intelligence. By emphasizing principles such as fairness and transparency, AI governance addresses risks while enhancing benefits to ensure that the application of AI resonates with societal values and objectives.

Human oversight is a central component of AI governance, and it is necessary to maintain accountability, ensure ethical decision-making, and uphold trust in AI systems.

Diverse teams that make up AI ethics boards play a pivotal role in AI governance by maintaining ethical standards, evaluating the conformity of AI systems to established ethical guidelines, and guaranteeing they meet societal expectations for oversight.

By involving stakeholders, AI governance is enhanced through the promotion of transparency, consideration of a variety of viewpoints, and the establishment of more inclusive and accountable policies for AI development.


The EU AI Act (aka the European Union Artificial Intelligence Act), introduced by the European Commission, aims to regulate AI systems to ensure they respect fundamental rights and foster trust. In this blog post, we’ll provide an overview of the Act’s key provisions, its risk-based classification of AI systems, and the global impact of the Act.

Key takeaways

An overview and a brief history of the EU AI act

The journey to regulate artificial intelligence within the European Union has been marked by several pivotal milestones. In April 2021, the European Commission took a groundbreaking step by proposing the first EU regulatory framework for AI. This proposal laid the foundation for a unified approach to ensure that AI systems are developed and utilized in a way that is safe, transparent, and respects fundamental rights across all member states.

After extensive discussions and negotiations, European Union lawmakers reached a political agreement on the draft artificial intelligence (AI) act in December 2023. This agreement was a significant achievement, representing a consensus on the principles and guidelines that would govern the use and development of AI within the Union. Finally, the Parliament adopted the Artificial Intelligence Act in March 2024, marking the culmination of years of work and setting the stage for a new era of AI governance. 

The European Union Artificial Intelligence Act, also known as the EU AI Act, is a pioneering piece of legislation. The act is aimed at businesses that provide, deploy, import, or distribute AI systems. At a high level, it aims to:

These requirements have the potential to influence global regulatory standards for AI.

The European Parliament prioritizes the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems used within the Union. The potential benefits of the Act are far-reaching, with the hope of creating better healthcare, safer and cleaner transportation, more efficient manufacturing, and cheaper and more sustainable energy using artificial intelligence.


Flag of the European Union outside a building
Recommended Reading
EU-U.S. Data Privacy Framework: How the European Commission’s Decision Affects Data Transfers 
EU-U.S. Data Privacy Framework: How the European Commission’s Decision Affects Data Transfers  icon-arrow-long

Why AI needs oversight 🤖

The rapid development and deployment of artificial intelligence (AI) across various sectors have brought about transformative changes in society. With its potential to revolutionize industries, improve efficiency, and solve complex problems, AI also poses significant challenges that necessitate governance.

AI governance is, therefore, essential for several reasons:

Governance helps ensure that AI benefits society while minimizing its risks. The EU AI Act represents a pioneering effort to create a regulatory framework that balances the advancement of technology with the need to protect fundamental human rights and societal values.

A risk-based classification of AI systems

A distinguishing feature of the EU AI Act is its risk-based approach to AI regulation. The Act categorizes AI systems based on their risk to society, with varying levels of regulatory scrutiny applied to each category:

Risk level = Unacceptable risk

Risk level = High risk

Risk level: Limited risk

Risk level = Minimal or no risk

Of course, any form of legislation contains a lot of nuance. So, in the subsequent subsections, let’s explore this classification system in greater depth.

Unacceptable risk 

Action: Prohibition—these systems are outright banned.

AI practices deemed to pose unacceptable risks are at the top of the risk hierarchy. The Act outright bans these systems to protect fundamental rights and safety.

The EU AI Act identifies several AI practices that are considered to pose unacceptable risks and are therefore prohibited. These include:

High risk

Action: High-risk AI systems must adhere to several regulatory obligations.


Descending the risk ladder, high-risk AI systems are encountered next. These systems, which include high risk applications such as those used in critical infrastructure management, law enforcement, and biometric identification, are subject to stringent requirements to access the EU market.

The Act necessitates that providers of high-risk AI systems:

A full list of Annex III: High-Risk AI Systems can be found here. Some examples include:

These examples illustrate the broad range of applications for high-risk AI systems and the importance of rigorous regulatory oversight to ensure they operate within ethical and legal boundaries.

Limited risk

Action: Transparency – these AI systems must meet specific transparency requirements.


The Act, which focuses on regulation on artificial intelligence, applies lighter regulatory scrutiny to AI systems with limited risk, such as chatbots and generative models. This includes Chat-GPT. This category is primarily concerned with the risks associated with a lack of transparency in AI usage.

The Act (Article 50) requires ‘limited risk’ AI systems to comply with transparency mandates, informing users of their interaction with AI. If an AI system produces text that is made public to inform people about important matters, it should be identified as artificially generated. This labeling is necessary to ensure transparency and trust in the information. Similarly, images, audio, or video files modified with AI, such as deepfakes, need to be labeled as AI-generated.

Users of emotion recognition systems must also inform individuals when they are being exposed to such technology.

Minimal or no risk

Action: Encouraged to adhere to voluntary codes of conduct and best practices to ensure ethical and responsible use.

AI systems that pose a minimal risk are found at the bottom of the risk hierarchy, with such AI systems considered to be safe for free use. These technologies include AI-enabled video games and spam filters.

The Act considers AI technologies used in video games or spam filters as posing minimal or no risk. Therefore, these applications are allowed to operate in the EU market without needing to comply with the stringent requirements that apply to higher-risk AI systems.


AI pentesting offering
Thoropass's AI pentesting offering ensures secure and ethical use of AI

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.

Join the waitlist icon-arrow-long

Practical implementation for providers of high-risk AI

The EU AI Act imposes several obligations on providers of high-risk AI systems to guarantee compliance with regulatory standards. Before deploying high-risk AI technology, these businesses must conduct an initial risk assessment. Here’s a brief overview of the assessment process, including who conducts it and what steps are involved:

In addition to quality management and transparency, human oversight is a mandatory requirement for the operation of high-risk AI systems to ensure accountability. 

Post-market monitoring systems are also required to track the performance and impact of high-risk AI systems. Providers must maintain comprehensive records and report any serious incidents involving high-risk AI systems. 

In essence, AI providers are required to maintain ongoing quality and risk management to ensure that AI applications remain trustworthy even after they are released to the market.

Provisions for small and medium-sized businesses

Despite imposing strict regulatory requirements, the EU AI Act also includes provisions that support innovation and Small and Medium-sized Enterprises (SMEs). The Act introduces regulatory sandboxes to allow businesses to test AI systems in controlled environments.

Moreover, SMEs and startups benefit from the Act’s leniency in documentation requirements and exemptions from certain regulatory mandates. European Digital Innovation Hubs also provide technical and legal guidance to help SME AI innovators become compliant with the AI Act.

The AI Pact, a voluntary initiative, seeks to support the future implementation of the Act, inviting AI developers from Europe and beyond to comply with the Act’s key obligations ahead of time.

Institutional governance and enforcement

The European AI Office was established in 2024. It has several key responsibilities, including: 

These measures highlight the seriousness with which the Act’s provisions are enforced.

Transparency and trust in general-purpose AI

The EU AI Act regards transparency as fundamental, especially for general-purpose AI models. Article 50 of the Act introduces transparency obligations, like disclosing AI system use and maintaining detailed technical documentation, to enable a better understanding and management of these models.

General-purpose AI systems without systemic risks have limited transparency requirements. However, those posing systemic risks must adhere to stricter rules under the EU AI Act. This approach ensures that even the most complex and potentially impactful AI models are held to high standards of transparency and accountability.

Future-proofing and global influence

The EU AI Act’s future-proof approach is a significant feature. This approach allows the Act’s rules to adapt to technological change, ensuring that the legislation remains relevant as AI technology continues to evolve.

This means AI providers need to engage in ongoing quality and risk management to ensure their AI applications remain trustworthy even after market release. This approach ensures that the Act remains applicable and effective in the face of rapid technological advancements in AI.

The EU AI Act’s potential global influence is immense. Just as the EU’s General Data Protection Regulation (GDPR) has shaped data protection laws around the world, the EU AI Act could become a global standard, determining the impact of AI worldwide.

Countries worldwide are considering the EU AI Act while formulating their AI policies, potentially standardizing its provisions globally. The Act has already inspired countries like Canada and Japan to align their AI governance frameworks with the EU’s approach. Moreover, the Act’s extraterritorial reach means it impacts US companies if their AI systems are used by EU customers, further extending its global influence.

Looking ahead: Next steps for the EU AI Act

Having delved into the details of the EU AI Act, what can we expect next? Well, the Act is set to enter into force between May and June, with phased implementation through 2027 (full timelines are available here).

With some exceptions, the Act will become fully applicable two years after its publication in the Official Journal. The obligations concerning high-risk systems will become applicable three years after their entry into force. This phased implementation timeline allows for a smooth transition and gives businesses ample time to understand and comply with the new requirements.

In conclusion, the EU AI Act is a revolutionary piece of legislation that sets a global standard for AI regulation. It’s a comprehensive and future-proof framework that protects individuals and society while encouraging innovation and development in AI. As the Act moves towards full implementation, its influence on global AI governance will undoubtedly continue to grow.

More FAQs

The EU AI Act launched in January 2024 includes measures to support European startups and SMEs in developing trustworthy AI that aligns with EU values and rules.

The EU AI Act categorizes AI systems based on their risk to society, leading to different levels of regulatory scrutiny for each category. These classifications include unacceptable, high, limited, and minimal or no risk.

The EU AI Act supports innovation and SMEs by introducing regulatory sandboxes for testing AI systems and providing leniency in documentation requirements for small and medium-sized enterprises (SMEs). This allows businesses to innovate and test AI technologies in controlled environments while reducing regulatory burdens for SMEs and startups.

The EU AI Act’s future-proof approach allows its rules to adapt to technological change, ensuring that the legislation remains relevant as AI technology continues to evolve. This adaptability is a key strength in addressing future challenges and developments in AI.

The EU AI Act has the potential to influence AI policies worldwide, as its provisions could become a global standard for AI regulation and impact companies outside the EU. Its reach extends to companies whose AI systems are used by EU customers.


Stop me if you’ve heard this one before:

Your Sales team needs a DDQ in order to close business with a strategic partner. While you’ve already secured several compliance frameworks, including SOC 2 or maybe ISO 27001, the DDQ needs to be filled in from scratch before the deal can close. 

At 250 questions, the security survey will take a few business days, conservatively, to fill out, but will likely require several more days and several team members working together to complete. All in all, the better part of a week will be needed to fill out a form for which you already have most of the information.

Sound familiar?

Enter Thoropass’s GenAI DDQ

Few things are as onerous or essential in information security as due diligence forms. Otherwise known as security surveys or due diligence questionnaires (DDQs), these forms typically contain hundreds of questions that can take hours, if not days, to complete. Once completed, these DDQs can unlock business growth as organizations can better partner together and advance their security postures.

Even if you don’t rely on spreadsheets to fill in the surveys, not all DDQ automation software is the same. Thoropass’s GenAI DDQ not only helps speed up the process of filling in responses, it utilizes the evidence and findings that you already have from previous compliance checks and audits.

Our tests have shown over 80% efficiencies gained by using this tool. This means that the AI technology scans your previously uploaded documents and can fill in 180+ of those 250 questions, saving you an average of 8 hours. If the original DDQ was going to take your team 20 hours to complete, it would now take just six or less. 


Book a Demo
See GenAI Due Diligence Questionnaires in action
Book a Demo icon-arrow-long

Of course all of these times are estimates, but the efficiencies are real. As you use the tool more and upload more evidence to your Thoropass platform, the efficiencies continue to go up, meaning that some organizations could achieve upwards of 90% efficiency, reducing days of work to hours.

AI saves hours of work

Our DDQ feature leverages best-in-class Generative AI technology. The AI reads your questions and then searches policies, reports, and previous questionnaires from the Thoropass platform or locally uploaded documents in order to autofill the entire questionnaire. You simply review, and approve. 

But with saved time comes obvious concerns about accuracy and security. Can you really speed through these surveys and rely on the data to be accurate? The answer: yes.

Thoropass’s DDQ was designed with accuracy in mind. Your team will have the ability to make custom configurations, both scoring the responses you receive and advising the tool to pull information from local documents in addition to already uploaded documents within Thoropass. 

Worried about AI’s security? Our technology is governed by the same strict data policies employed throughout our platform, which ensures that your data stays local to you and not leaked into a larger LLM accessible to others. You control what gets analyzed and what gets generated in the DDQ.

Trust equals growth

While saving company resources is a major benefit of using DDQs, the main business use is to establish trust across your buyer and partner ecosystem. Especially as companies utilize TPRM and other risk assessment tools to evaluate their business partners, having ready-made DDQs are essential to establishing your organization as a trusted company, and closing business faster.

Obviously documents communicating trust need to be shared with strategic partners. As your Sales team will confirm, though, simply sharing isn’t enough. Deals often hinge on price, trust, and speed, which is why having DDQs fully integrated into the Thoropass platform is a game changer for both IT and Go To Market teams within your organization.

By securing your DDQs in a Thoropass data room, alongside all of your previously collected evidence, certifications, and attestations, everything your company needs to demonstrate its security posture is in a single source of truth: a compliance hub that is always accessible and always up to date.

Our platform is the single source of truth for your entire security and compliance program. AI analyzes and synthesizes your most recent data, ensuring comprehensive reviews and delivering up-to-date, evidence-based, and consistent responses. You just need to review and approve the answers. This minimizes human error, reduces legal risks, and supports ongoing business integrity and growth.

See our new GenAI-powered DDQs in action:

But don’t take my word for it, learn more about Thoropass GenAI DDQ here: https://thoropass.com/platform/due-diligence-questionnaire/


Like many other companies, we’ve watched as artificial intelligence has swept across the tech landscape and become commonplace in every industry, company, and home. And, like many others, we’re excited by the level of innovation and possibility that AI ushers in.

However, as the standard-bearers of quality in infosec compliance automation and audits, we also feel compelled to ensure that the industry collectively establishes an effective approach to ensuring data security and maintaining compliance in the age of AI.

We have launched a foundational set of tools that will drastically speed up vendor due diligence, services that help enable companies to implement AI solutions safely and responsibly, and support of compliance frameworks that will help organizations big and small to manage their risk related to AI adoption.

We present the following vision to define our philosophy and set a course for future evolution. This vision will guide us, our customers, and the industry forward.

Read the full press release here.

Sam Li, Founder & CEO

Our vision for AI and compliance

Every company is now an AI company. Whether they build AI products and services or not, GenAI and LLMs are now acronyms that every business should have in their service agreements and long-range business plans.

The majority of a recent Y-Combinator cohort was “AI-native.” At any tech or business conference you attend, almost every panel touches on GenAI. Microsoft, Google, and OpenAI (among others) are in an arms race for supremacy in the field in ways we haven’t seen in technology since the birth of cloud and smartphones, and many say the internet.

However, AI is not only about opportunity and growth. Cries of genuine concern have grown louder even as companies have raced to join the gold rush. Issues around copyright, hallucinations, abuse, and security are increasingly entering the exciting conversations about new innovations.

We cannot risk being passive observers.

Your company is an AI company. Even if you don’t produce AI products, your employees likely use AI services, which have been formed out of our collective data. Likewise, as we embrace the possibilities present in new innovations, we must also face the consequences of the concerns being raised.

There is no divide at this stage: we are all living together in the AI era. And in this AI era, Thoropass believes we need to foreground security as we look to embrace change responsibly. This is why, in both our products and our practices, we believe:

While the “what” of AI continues to take the spotlight, Thoropass believes the “who” is equally important. Our experts take this responsibility seriously and are at the heart of everything we embrace as a company. We enter this AI era by acknowledging the promise and perils of a changing world. To ensure that all companies enter on equal footing, and with security and privacy at the top of mind, we believe:

1. AI will revolutionize how compliance is done

Traditionally, compliance work has been characterized by manual processes, extensive documentation, and meticulous scrutiny of regulatory requirements. Audits – the mechanism to prove that what’s written on the policy is operating effectively in real life – are slow, backward-looking, and often unverifiable.

Thoropass was already built on the product vision of Verifiable Compliance at Scale, and bringing The OrO Way of compliance and audit to over a thousand customers, but AI will push our customers’ and partners’ experiences to the next level. Not just in terms of efficiency and accuracy, we now see a world where compliance in real-time is not only possible, but the new norm.

Thoropass and its business partners are already using AI to scan mountains of evidence in order to uncover security gaps and deliver compliance feedback in record time. What used to take hours can now be done with a click of a button, and as a result our experts have more time to focus on strategic initiatives and higher impact work. This is just the beginning.


AI-powered DDQs
Introducing new Due Diligence Questionnaires, powered by GenAI

Answer dozens of questionnaries in a fraction of the time with Thoropass’s new GenAI DDQs

Book a Demo icon-arrow-long

2. The world urgently needs new rules to govern our AI future, and government and industry must work together

As GenAI goes mainstream, its risks to society and businesses are becoming increasingly evident. Existing regulations and compliance standards do not provide practitioners with sufficient guidance to manage AI-related risks. To fully realize AI’s benefits while mitigating its dangers, it is essential for government and industry to collaborate closely and immediately to form new regulations and governing frameworks for AI.

Reaching consensus takes time, but that should not be a blocker to action. Thoropass and its business partners are staying informed about regulations from countries and governing bodies such as the US and EU, state governments like those in New York and Colorado, and industry groups such as HITRUST and ISO. We are also launching product offerings that support the latest AI compliance frameworks, such as ISO 42001 and NIST AI RMF.

Additionally, we provide services like AI pentesting to ensure that enterprises embarking on their own AI journeys do not have to be the first line of defense in protecting their data.


AI pentesting offering
Thoropass's AI pentesting offering ensures secure and ethical use of AI

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.

Join the waitlist icon-arrow-long

3. AI needs human experts

At its core, Thoropass believes that many AI use cases will benefit from human oversight, particularly in the compliance and risk management realm. By putting human experts squarely at the crossroads of where AI meets strategy, data, and security, companies will be able to reduce their risk and maintain a strong compliance posture. Having compliance experts at the table will result in a faster, safer, and more ethical approach to AI use.

At Thoropass, our auditors use AI-infused technology to achieve efficiencies that weren’t before possible but will soon be table stakes for infosec compliance. The unique solution that only we offer is the combination of human expertise and cutting-edge technology. We are your compliance co-pilot, and we will never put your compliance on auto-pilot (alone).

What this means for you

Just as we’ve been leaders in “traditional” infosec compliance since our founding, we bring the same level of expertise to AI compliance. Our job is to navigate the complexities of AI regulations and standards so you don’t have to.

By partnering with us, you can focus on your core business while we ensure your AI initiatives meet the highest compliance standards. Let us guide you in building your organization’s AI future—a future that is not only innovative but also fair, safe, and responsible.


It seems like every day, there is a new, shocking headline warning about a data breach or an announcement of some exciting advancement in cybersecurity. Staying on top of everything can feel like a full-time job.

But, let’s be serious, you have your own job to do—and it’s an important one with immense pressure. So, we’ve done the hard work for you and distilled this month’s news into three top headlines you need to know. Read on for the Cliff’s Notes (or Cole’s Notes for you Canadians) of the top 3 news headlines, as well as key insights on how to account for AI in your compliance program brought to you by Thoropass’s own DPO CISO Jay Trinckes.

You can watch Jay break everything down (in under 5-min) here, or read on for a quick overview:

Headline 1: Dormakaba Locks Used in Millions of Hotel Rooms Could Be Cracked in Seconds

The article from The Hacker News exposes critical security flaws in Dormakaba locks, widely employed in hotel rooms that enable attackers to bypass them within seconds. Up to 3 million hotel locks across 13,000 properties in 131 countries were compromised. Researchers uncovered vulnerabilities that could allow intruders to stealthily enter locked rooms, posing a significant threat to hotel guest security and privacy. Dormakaba has been urged to address these vulnerabilities promptly to prevent potential exploitation by malicious individuals.

Headlines 2: Fake Python Infrastructure Sends Malware to Coders

The article from IT Brew discusses a sophisticated attack where cybercriminals set up a fake Python infrastructure to distribute malware to unsuspecting developers. By creating counterfeit versions of popular Python libraries and uploading them to the Python Package Index (PyPI), the attackers lured developers into unknowingly installing malicious packages. These counterfeit packages contained malware that could compromise the security of systems and data on which the developers were working. The incident highlights the importance of vigilance and verifying the authenticity of packages before installation to mitigate such risks.

Headline 3: HITRUST Announces CSF v11.3.0 Launch to Enhance Its Industry Leading Security Framework

This press release announces the launch of version 11.3.0 of the HITRUST CSF (Common Security Framework) by HITRUST Alliance. This latest version includes updates and enhancements aimed at improving risk management and compliance processes for organizations. Key features of the update include new mappings to various regulatory requirements, enhancements to the assessment reporting process, and improvements in the usability of the CSF Assurance program. These updates are designed to help organizations strengthen their cybersecurity posture and streamline their compliance efforts.

DPO CISO Tip of the Month

Jay’s tip of the month for April is focused on AI and the importance of understanding the limitations of AI models. He emphasizes that while AI can be powerful, it’s essential to recognize that it’s not a silver bullet and can sometimes produce inaccurate or biased results. Jay advises practitioners to thoroughly evaluate AI models, consider potential biases, and remain critical of their outputs. He suggests seeking diverse perspectives and expertise to ensure AI systems are used responsibly and ethically, including developing policies and processes around the use of AI and GenAi for your organization.

Be safe, until next time…

ChatGPT was released in late November 2022. It quickly became the fastest product to reach one-million registered users. In a mere five days. 

Even if you don’t know the numbers, you already know the story.

AI was a dream and then a sudden reality. All corners of the world have been affected: from the Hollywood actors’ strike to ServiceNow attributing a large portion of their 27% YoY growth to GenAI. Understanding AI use was more than a mere “approaching need” in 2023. And it is already more than necessary in 2024.

AI is already here. Regulation–despite calls from every conceivable side–can only race to catch up.

AI in compliance

Contrast the meteoric rise of ChatGPT and all-things-AI with another corner of technology: information security.  Compliance orthodoxy is not known for its ability to move quickly. ISO 27001 was introduced in 2005. HITRUST rolled out in 2007. SOC 2 launched in 2010. Yet these standards are only just now becoming universal standards.

We take these frameworks for granted now as established standards, but consider what was happening in 2010 when SOC 2 came to market.

It emerged as a standard in response to increasing pressure to provide some methods of verifying good infosec practices. It was instituted by financial accountants (the AICPA) as an offshoot of financial controls, not as a direct response to the unfolding business environment at the time.

The business environment in 2010? Cloud. And Cloud–even then–was big business. Consider these stats: 

Before AI, Cloud was the most recent major shift in the tech industry, and it was well on its way to being the predominant paradigm for operating a technology business. Purchasing online was well established even before acronyms like SaaS and WFH had taken root.

SOC 2 was, and still is, an excellent compliance framework to ensure essential security in a Cloud-based world. However, as exemplified by the numbers above, it was already behind when it came to market. It wasn’t even truly adopted as the standard it is today until the mid 2010s. 

This story should sound familiar. The need for agreed upon compliance frameworks for AI has arrived, but the frameworks themselves have yet to arrive. We must ask ourselves why.

Technologic opacity, black boxes, and other hurdles 

One of the main culprits is technology opacity: LLM (large language model) technology is hard to understand, and in some cases literally incomprehensible by humans. The first companies to train AI models contribute opacity by remaining intentionally secretive about their underlying training data. Partially this secrecy is for standard industrial protection, but more uniquely to AI, there is a risk in using authors’ works without their consent. Companies that admit to using an author’s work could encounter copyright claims, bad publicity, or accusations of biased training data. 

A more fundamental concern is the black box problem with AI. The deep learning techniques used in training models mean that they are fundamentally beyond human comprehension; people cannot understand how connections are made nor why the connections are made. There are two ideas that are helpful in understanding this: complication and complexity. 

A system is complicated if there are lots of moving parts, but ultimately the inputs and outputs of the entire system can be grasped. In particular, the impact of changes to and within complicated systems are predictable. 

In contrast, a system is complex if the ways in which elements interact are unpredictable. Complex systems are characterized by small alterations leading to potentially dramatically different outcomes. 

Complicated systems, for example a 747, take a lot of time to understand and study, but can ultimately be described with deterministic-like predictive accuracy. Complex systems, however, for example the weather system the 747 is flying through, resist prediction and description because tiny changes to initial conditions or actions can lead to significant changes. 

Making complicated and complex compliant

“Traditional” digital systems are extremely complicated. Anyone who has worked with an app of even moderate size can tell war stories of factors coalescing to produce infernal bugs. But there is also the understanding that even the most difficult issue can be resolved through time, focus, and exploration. The system is, after all, understandable and deterministic. 

The construction of LLMs and use of AI is not just complicated, however, but also complex. There are infinite sliding factors: A small change in a prompt; training on slightly different sets of data; changing the temperature; submitting the same question multiple times. All of these changes result in dramatically different answers. 

These challenges around the black box of AI are precisely what makes it so difficult to establish effective regulations and compliance standards. Conversely, it’s also what makes the task so important. 

Cloud technology, in hindsight, feels quaint compared to the technology underpinning AI. Yes, it’s still very complicated, but it’s the complexity that resists regulation. SOC 2’s 80-some controls will probably not suffice to ensure this new  technology is employed safely and ethically.

For now, adapting the old standards is a good start: identify what past solutions will not work, document why, and create new standards where necessary.

Smart regulation or standards are hard in the best scenarios as they take experienced practitioners with deep knowledge of how the technology works. LLMs resist easy explanations of how they work and yet have long since passed the adoption threshold to require real regulation. The potential for complex systems to have outsized impacts–good and bad–mean that smart constraints are necessary.