Tag: GenAI
With AI becoming a core part of enterprise strategy, cybersecurity professionals are wading through the multifaceted dimensions of responsible and ethical AI use. Meanwhile, executives across business functions are increasingly interested in joining the conversation, seeing AI-savviness as critical to meeting strategic business objectives. That’s why we dedicated time at Thoropass Connect 2024 to a panel discussion on the Ethical and Responsible Use of AI-led by Thoropass CEO Sam Li. Joining Sam were Dan Ross of Dynamo AI, Mason Allen of Reality Defender, and Kaitlin Betancourt of Goodwin Law, who unpacked the meaning of responsible AI, discussed the essential compliance frameworks to deploy, and shared highlights from the playbooks for safeguarding against the specific threats posed by AI.
In case you missed the event, here are the top takeaways on responsible AI to advance your cybersecurity strategy and drive alignment between key stakeholders.
Buyers are wary of biases & hallucinations in AI models
Mason Allen is Head of Revenue and Partnerships at Reality Defender, a deepfake detection company that identifies synthetic media across audio, images, video, and text. Mason spoke about how the executive-level conversation around identifying bias in AI models has become more nuanced in the last decade. He described what he sees now in the market, saying, “The first questions we receive [are]: How biased are your models? Do you have benchmarks against that?” On the go-to-market side, showing prospective customers that you understand and can mitigate those challenges is critical.

Dan Ross is Head of AI Compliance Strategy at Dynamo AI, a firm helping enterprises deploy compliant AI, and he’s seeing a similar trend regarding concerns around AI hallucinations. The term “hallucination” and others like it were born out of machine learning engineers, explained Dan, but now “they’re becoming more standardized and discussed, and they’re starting to show up on risk reports and board reports and audit reports.” As industry decision-makers grow more attuned to the risks of AI, he empowers them to test scenarios, discuss risks that arise, and then make an educated call based on the intended use case.
To watch more of the conversation around understanding and identifying biases in AI, see this short clip:
Your responsible AI framework is unique to your business
Kaitlin Betancourt is a Goodwin Law partner specializing in cybersecurity law and advises clients on AI. She encourages cybersecurity professionals to take the first step toward building a responsible AI framework by assembling a group of cross-functional stakeholders. The objective is to discuss your organization’s culture and risk tolerance relative to AI across various perspectives.
That meeting “should ultimately culminate in some sort of policy responsible AI policy statement and/or framework,” she said, “and that will lead to, okay, well, how do we operationalize our principles?” Kaitlin advises selecting a risk management framework, such as the National Institute of Standards and Technology (NIST)’s newly developed voluntary framework, the AI Risk Management Framework. To assist cybersecurity professionals in executing the framework, NIST offers the NIST AI RMF Playbook, which includes suggestions organizations may use or borrow from to govern, map, measure, and manage risk.
For more information, watch this discussion clip on how to build a responsible AI framework:
Think about the human & human-in-the-loop
Kaitlin Betancourt raised a critical aspect of developing AI policy beyond organizational objectives – the human perspective. She said, “When we think about AI, we are thinking about the impact on the human and the human-in-the-loop.” A buzzy generative AI term, the human-in-the-loop concept focuses on ensuring a human is both active in the design, training, and operation of the GenAI model or process, with ultimate oversight and control of that model.
When it comes to humans, education is vital. Mason Allen pointed out that while cybersecurity professionals live and breathe these conversations daily, the rest of their colleagues do not necessarily understand that specific modalities like Deep Fakes exist. He shared a story from earlier this year in which a bad actor scammed a multinational Hong Kong-based company out of $25.6M by using a digitally recreated version of the company’s CFO in a video conference call instructing employees to transfer funds. The anecdote shows that you can’t underestimate the importance of simply raising awareness in the race to empower enterprises to deploy AI.
Dan Ross agrees that the conversation on responsible AI needs to extend past AI governance to existing regulation within the context of AI. Non-technical cybersecurity professionals, such as risk managers or auditors, need to join technical experts in the conversations around creating guardrails. This is important so they can effectively defend safety measures to other non-technical stakeholders, whether they be audit bankers, regulators, or the public. Non-technical users need to understand the data points that come out of an AI model and the nuances around guardrails so that they can serve as part of the control framework.
To see more of the panel’s conversation around humans-in-the-loop, watch this short clip:
Last thoughts: on Thoropass Connect’s Ethical and Responsible Use of AI panel
As AI continues to integrate into the core of enterprise strategies, it’s clear that building frameworks for responsible AI and ethical use is no longer an afterthought. Businesses can mitigate potential risks by acknowledging and addressing concerns around bias, hallucinations, and emerging threats like deepfakes. Collaborating across teams to create customized AI policies and adopting frameworks like NIST’s AI RMF will help cybersecurity professionals navigate the complexities of AI governance. Ultimately, involving technical and non-technical stakeholders in the conversation ensures that AI is compliant, safe, and aligned with broader business objectives, fostering trust and accountability in its deployment.
Want more expert insights? Many other interesting topics came up in this panel, from debates around open source vs. commercial models to the complexity of managing cross-state regulations. To dive in, you can watch the panel now in its entirety.
If you’re ready to see how Thoropass makes compliance easy regardless of where you are in your journey, book a call with one of our experts. Or read more about how we help cybersecurity professionals in HealthTech, FinTech, SaaS, and more get compliant and future-proof their businesses.
Every company is now an AI company. From automating routine tasks to generating insights from data, artificial intelligence is transforming the way businesses operate. Whether you’re aware of it or not, AI is likely already being used across your organization in some form—be it through shadow AI, integration into existing tools, or third-party vendors.
The growing reliance on AI brings incredible opportunities, but it also comes with significant risks. Without a well-defined AI policy, your organization could be vulnerable to misuse, data breaches, and compliance violations that result in hefty fines. That’s why it’s crucial to establish a comprehensive policy for managing the internal use of AI. Luckily, we’ve designed a comprehensive AI Policy Template you can use to get up and running in no time.
Why you need an AI Policy ASAP
An AI policy is a strategic document that outlines how AI systems should be used within your organization, establishing clear guidelines for their development, usage, and governance. It also sets compliance measures for AI technologies, ensuring that organizations adhere to legal and regulatory standards. It sets clear guidelines on what is and isn’t acceptable, ensuring that AI is employed responsibly and in line with legal and ethical standards. For larger businesses, where technical micromanagement is more complex, having a well-crafted AI policy is even more critical. It serves as a roadmap for AI governance, helping to mitigate risks and capitalize on AI’s benefits without compromising your company’s integrity.
Key areas an AI Policy must cover
While each organization is unique, there some areas that should appear in every AI Policy:
AI usage definition
Establishing a policy helps you clearly define how AI technology should be used within your business. This ensures that all AI-related activities align with your company’s goals and values.
Employee guidance
Your employees need to know what constitutes acceptable use of AI tools. A policy provides clear guidance, reducing the risk of misuse and ensuring that AI is used to its fullest potential.
Data protection
With artificial intelligences tools often involving sensitive data, it’s vital to outline what types of data are appropriate for AI use. Your policy will help safeguard your organization against data breaches and other security threats.
Accountability
An effective AI policy designates specific individuals or teams responsible for enforcing and maintaining AI best practices. This accountability is key to ensuring that your AI initiatives are managed properly.
Risk Mitigation
Perhaps most importantly, an AI policy helps you avoid massive penalties and fines that can arise from AI-related vulnerabilities. Effective risk management addresses potential issues, and protects your business from costly legal repercussions.
Intellectual property laws
Understanding and complying with intellectual property laws is essential for the ethical use of AI tools. Your policy should address legal considerations surrounding AI-generated outputs and pre-existing intellectual property.
Responsible and ethical use
Emphasizing the responsible and ethical use of AI technology in your policy promotes accountability and transparency. This includes establishing principles that prioritize ethical considerations in workplace applications.
Download your free AI Governance Policy Template
To help you get started, we’re offering a free, comprehensive AI Governance Policy Template tailored for small and mid-sized businesses. This template will guide you through the process of creating an airtight policy that addresses the unique needs of your organization. Get your free AI Policy Template here.
What’s included:
- Clear Definitions: Understand and articulate how AI is and isn’t to be used in your business.
- Employee Guidelines: Provide your team with clear instructions on the appropriate use of AI.
- Data Usage Parameters: Define what types of data are suitable for AI processes.
- Responsibility Assignment: Identify who will be responsible for overseeing AI practices in your organization.
- Compliance Assurance: Safeguard your business against legal issues by adhering to best practices.
How to use the template:
- Download the file.
- Open it with Acrobat or your preferred PDF editing software.
- Fill in the editable fields with your company-specific information.
- Save the completed policy for your records.
By taking these steps, you’ll be well on your way to implementing a robust AI policy that supports your business goals while minimizing risk.
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
Cybercriminals are already exploiting AI technologies to orchestrate sophisticated cyberattacks. However, those same technologies can also serve as powerful tools for enhancing cybersecurity. By leveraging AI in compliance and cybersecurity tools, proactive threat hunting and anomaly detection can be achieved while creating predictive approaches to security challenges.
In this blog post, we examine how AI tools are being used for threat detection and cybersecurity. We’ll investigate how these advanced AI tools are transforming data protection in organizations by offering real-time monitoring and automating incident response, among other features, ultimately helping to bolster an organization’s cybersecurity posture.
Key takeaways
- AI-powered threat detection tools are revolutionizing cybersecurity by adapting in real time, learning from new attacks to enhance defenses, and providing crucial benefits such as real-time monitoring and behavioral analytics.
- Automated incident response capabilities allow for rapid detection, containment, and mitigation of cyber threats, significantly reducing damage and response times during cyber attacks.
- AI enhances threat intelligence and vulnerability management by integrating vast amounts of data, enabling predictive analysis, and automating patching processes, ensuring that organizations can identify, prioritize, and remediate potential security threats swiftly and effectively.
AI in cybersecurity: Seven ways AI helps security teams
What follows are some of the key ways AI is being incorporated into cybersecurity tools on the market in 2025, showcasing how these innovations enhance data protection and threat detection.
1. AI-powered threat detection
Security teams can employ artificial intelligence with exceptional accuracy and rapidity to pinpoint potential threats. These AI solutions differ from earlier static ones by dynamically adapting. They evolve instantly, drawing lessons from every new cyber attack to strengthen their guard against upcoming dangers.
As cyber criminals harness AI to design more complex attacks, these AI-powered defenses prove crucial in implementing an aggressive fighting-fire-with-fire cybersecurity approach.
Machine learning algorithms in threat detection
How does it work? Machine learning algorithms are at the core of AI’s capabilities for spotting threats. By analyzing historical security incidents, these sophisticated systems can identify patterns and irregularities that help predict and prevent upcoming dangers. These approaches can be:
- Supervised techniques that distinguish normal from harmful behavior, or
- Unsupervised approaches that detect deviations from typical activity
Compliance solutions like Thoropass have incorporated AI within protocols to emphasize the importance of data protection. Machine learning algorithms have transcended being mere improvements. They’re becoming crucial elements in threat detection processes, equipping security teams with tools to:
- Stay ahead of cybercriminals;,
- Improve their efficiency in identifying and addressing risks;
- Process vast datasets swiftly and precisely; and
- Streamline routine procedures.
These advancements significantly enhance a company’s defense mechanisms against potential attacks.
By integrating advanced technology like this, organizations can elevate their defensive measures substantially, with AI enhancing threat identification while bolstering an organization’s capacity for rapid response against possible intrusions into network systems or data breaches.

Real-time monitoring
Any lapse in time can result in compromised information. AI systems stand as vigilant guardians, ensuring:
- Continuous supervision over network traffic and user activity;
- Generative AI technology meticulously examines each segment of code and flow of network data for signs of threat; and
- A ceaseless supply of intelligence that operates around the clock.
Constant surveillance assures prompt recognition of irregularities, converting what could be catastrophic occurrences into manageable situations.
Behavioral analytics and proactive threat hunting
AI’s use in behavioral analytics is also emerging as a transformative force in cybersecurity. Utilizing AI to analyze the digital traces users leave behind allows for:
- The identification of nuanced and atypical user patterns that may go unnoticed by conventional security protocols; and
- Anticipating the subsequent steps of potential perpetrators.
AI also enhances the proactive pursuit of threat hunting by analyzing vast quantities of data to identify potential patterns and anomalies that may indicate malicious behavior. This anticipatory approach helps prevent these abnormalities from developing into significant security incidents.
2. Automated incident response with AI
As AI continues to enhance our ability to detect threats more effectively, it is important to recognize that cyberattackers are also leveraging AI to develop more sophisticated attacks. This means that successful breaches may still occur despite advanced detection capabilities. Therefore, utilizing AI as a response tool is a significant opportunity to mitigate damage and enhance our defensive strategies.
Every moment is crucial in the event of a cyber attack. AI can help to speed up your organization’s response, from identifying to containing and eliminating threats, which reduces harm and speeds up the restoration process.
Automated incident response tools that utilize AI are transforming the way organizations handle security incidents by offering swift and smart solutions that diminish reliance on human intervention.
AI-driven response capabilities
AI cybersecurity solutions empower autonomous responses that not only counteract threats, but can also help undo the harm caused by cyber aggressors. These automated systems can execute measures like quarantining a compromised device, thereby halting attacks in their trajectory without necessitating human involvement.
In real-time scenarios, AI enhances incident response efforts, reducing the time it takes to respond when a cyber attack strikes and lowering the threat level.
Natural language processing in incident response
Maintaining clear communication throughout and after an incident is essential to any business continuity and disaster recovery (BCDR) plan. By harnessing Natural Language Processing (NLP), AI can help bridge the gap between intricate cybersecurity information and clear and useful communication.
Many AI tools employ NLP to produce comprehensive reports and enable dynamic dialogue between teams and stakeholders. This ensures that all team members are equipped with the necessary details required for a prompt response.
3. Enhancing threat intelligence with AI
Security teams rely on threat intelligence as a critical navigator to steer through the stormy seas of cyber threats. These teams can now leverage AI tools that augment this intelligence, employing sophisticated analytics and recognizing complex patterns to spot subtle signs of compromise that a more manual approach might have overlooked.
By integrating improved threat detection features, security practitioners are more equipped to detect and counter potential threats. Moreover, with each analysis of incoming data, AI systems evolve smarter, forming an adaptive shield against both familiar and emerging cyber threats.
Integration with threat data sources
Integrating various data sources is crucial for an effective threat intelligence strategy. AI is particularly adept at this task, as it can be used to aggregate and scrutinize information from different sources (like system logs and network flows), thus providing an extensive overview of the threat landscape.
By synthesizing these varied data elements, AI constructs a defense infrastructure that surpasses the collective capabilities of its individual components, making it robust enough to counteract malware attacks and other cybersecurity threats.
Predictive analysis for emerging threats
AI tools excel at predictive analysis, i.e., interpreting trends to help organizations anticipate and prepare for possible upcoming threats. This proactive strategy allows security teams to effectively prioritize defense mechanisms, keeping them one step ahead of cybercriminals.
Enhancing security operations
While AI offers powerful predictive, responsive, and analytical capabilities, it can also help with the more routine, operational parts of your security team’s roles. Putting reports and standard workflows on autopilot allows your security analysts to focus on complex issues rather than repetitive tasks. This leads to a more robust incident response strategy and a stronger overall security posture while also maintaining consistent daily operations and reporting.
4. Vulnerability management and patch automation
Vulnerability management involves identifying, evaluating, and addressing weaknesses in a system to prevent cyber attacks. Patch automation is the process of automatically applying updates to software to fix these vulnerabilities. AI helps by quickly detecting potential weaknesses and prioritizing which ones need to be fixed first. This ensures that the most critical issues are addressed promptly, enhancing the overall security of the system.
Identifying and prioritizing vulnerabilities
AI takes a forward-looking stance in the realm of vulnerability management, ensuring risks are spotted before they escalate into actual breaches. Through persistent surveillance for potential vulnerabilities and evaluating their criticality, AI and machine learning (ML) enable organizations to strategically prioritize their defensive actions, effectively safeguarding against the most severe threats.
Automated patching solutions
Working quietly behind the scenes, automated patching solutions play a critical role in maintaining cybersecurity. These AI-driven systems alleviate compatibility concerns and lighten the workload for IT departments by smartly managing the distribution of updates to ensure system security. AI helps create patching solutions by continuously scanning systems for vulnerabilities, analyzing the severity and potential impact of these vulnerabilities, and prioritizing them accordingly. This allows for the most critical patches to be applied first, reducing the risk of exploitation.
AI solutions also automate patch testing in virtual environments to ensure they do not cause issues with existing software or systems before deployment. This proactive approach minimizes downtime and prevents potential disruptions that could arise from the patch application. Additionally, AI can predict future vulnerabilities based on historical data and emerging threat patterns, allowing organizations to stay ahead of potential risks.
Traditional patching methods often rely on human intervention and can be time-consuming, leading to delays in applying critical updates. AI streamlines this process, enhancing overall system security and resilience. The advantages of AI-driven patching solutions over traditional security tools include:
- Increased efficiency;
- Reduced manual effort;
- Faster response times;
- Alleviated compatibility concerns; and
- A lightened workload for IT departments.
5. AI in malware analysis and reverse engineering
Malware is any software intentionally designed to cause damage to a computer, server, or network. It includes viruses, worms, trojans, ransomware, and spyware. Modern malware’s complexity necessitates an equally complex method for analysis and reverse engineering. AI tools are adept at breaking down and making sense of malware by utilizing sophisticated algorithms to identify patterns, which strengthens an organization’s protection against ever more sophisticated threats.
Pattern recognition in malware detection
AI’s ability to detect malware hinges on its capacity for pattern recognition, which enables it to pinpoint behavioral patterns common among various samples of malware. This allows AI not only to recognize known threats, but also to identify new and unfamiliar ones, offering a protective measure that advances in parallel with the evolving nature of cybersecurity threats.
Improving detection accuracy
By continuously learning from an ever-growing repository of malware information, AI models refine their ability to identify infections. Extensive training on expansive datasets enables these models to adjust and remain current in the face of emerging threats while maintaining precise detection capabilities.
6. Identity and access management with AI
Identity and access management (IAM) is key to protecting an organization’s digital infrastructure, and AI is transforming this domain through its advanced capabilities in data analysis and behavioral biometrics.
Behavioral biometrics in IAM
Behavioral biometrics serves as an effective and nuanced artificial intelligence tool within Identity Access Management (IAM). It bolsters security through real-time analysis of how users engage with devices and software, allowing for the dynamic tailoring of authentication protocols to enhance protection while maintaining a seamless user experience.
Automated user authentication
AI is also advancing automated user authentication: By observing patterns of access and detecting irregularities, AI solutions for Identity and Access Management (IAM) work to guarantee that sensitive information is accessible solely to authenticated users.
Compliance and risk management
AI tools are fundamental to ensuring that organizations adhere to industry standards, playing a crucial role in compliance and risk management within IAM. Solutions like Thoropass offer customized advice to address cybersecurity problems, thereby simplifying the pathway to regulatory conformity.
7. Data loss prevention using AI
For many organizations, their data is their most valuable asset. This means the consequences of data loss extend far beyond mere annoyance. It can lead to catastrophic financial outcomes, severely tarnish reputations, and incur heavy legal repercussions.
AI’s strengths in analyzing data, automating processes, and its ability for continuous improvement are crucial aspects of Data Loss Prevention (DLP) strategies. These strategies protect sensitive information from falling into unauthorized hands or being unlawfully transferred.
The use of artificial intelligence is instrumental in developing strong DLP mechanisms that can adjust to changing threat landscapes while ensuring an organization’s data remains secure.
AI-driven data classification
Accurate data classification is critical for successful DLP implementation. AI aids in automating the identification and sorting of sensitive data, minimizing human mistakes, and increasing the accuracy of initiatives aimed at protecting such information.
Continuous monitoring and adaptation
By leveraging AI, organizations can elevate continuous monitoring to a more proactive stance. This technology allows for real-time detection of irregularities in data usage and enables dynamic refinement of DLP tactics to combat emerging threats through its insights.
Automating data protection processes
AI’s implementation in cybersecurity has proven to be highly beneficial for automating data protection procedures, thus enhancing an organization’s capability to:
- Promptly detect and mitigate data breaches;
- Minimize the repercussions of these occurrences;
- Cut down on human effort; and
- Bolster both the efficiency and efficacy of an organization or firm’s security measures.
How to choose the right AI cybersecurity tool(s)
With every tool on the market clamoring to incorporate and market its AI capabilities, it can be overwhelming to know where to start looking for the best solution. And with things moving so fast, it’s easy to quickly feel confused. To pick an artificial intelligence-driven security solution that meets your company’s specific demands and strengthens its security stance, you must carefully consider:
- Your industry and data set;
- Your business’s unique needs and goals;
- How easily these tools can be incorporated with current systems;
- Their features and capabilities;
- Training and onboarding requirements;
- Costs, time-savings, and other efficiencies;
- Understanding the limitations of these tools and where human supervision is still essential (and that you also maintain those resources); and
- Policy and governance.
Conclusion: AI ups the ante on both sides
The progression of cyber threats has turned into a high-stakes game where both sides (cyberattackers and cybersecurity) are continuously upping the ante: Cyberattackers are leveraging AI to launch more sophisticated and relentless assaults. If your defenses aren’t equally fortified with AI, you’re essentially handing them a significant advantage.
However, it’s important to note that these tools are evolving at a rapid pace, and their true capabilities remain largely untested. While embracing these advanced tools is crucial for any cybersecurity team, maintaining vigilance and supervision is still paramount. In this game of cat and mouse, staying one step ahead requires not just powerful tools, but also constant awareness and adaptability
More FAQs
Cybersecurity tools powered by AI stand apart from conventional solutions because they can sift through enormous data quantities instantaneously, pinpoint irregularities, and adjust autonomously to fresh threats without requiring perpetual manual intervention. This approach equips them with proactive measures against emerging dangers. Such a substantial leap in functionality presents a notable benefit when compared to legacy security methodologies inherent in traditional solutions.
In cybersecurity, AI alone is insufficient to supplant human specialists. To mount the most robust possible defense against cyber threats, it’s imperative that we utilize both AI’s prowess and humans’ nuanced expertise in tandem.
By utilizing AI for continuous real-time monitoring, threat detection is enhanced as it persistently oversees network traffic and monitors user activities. This allows for the immediate identification of any potential threats and provides prompt intelligence to facilitate a swift response.
AI is pivotal in boosting the effectiveness of malware analysis through its ability to discern patterns and offer significant understandings that bolster defenses against threats. This capability proves especially beneficial given the increasing quantity and complexity of malware challenges faced today.
When selecting a cybersecurity tool powered by AI, it’s essential to evaluate elements including its congruency with organizational objectives, the range of functionalities and potential it offers, proven success history, assessments from industry experts, the ability to scale up or down as needed, flexibility in adapting to new threats or technologies and how well it integrates with your current infrastructure. Each of these considerations is critical for identifying the most suitable solution for your enterprise.
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
The rapid growth of artificial intelligence (AI) has revolutionized numerous industries, bringing unprecedented innovations and capabilities. Leading tools and platforms such as OpenAI, Google’s DeepMind, and IBM’s Watson have significantly advanced the field, enabling breakthroughs in natural language processing, machine learning, and autonomous systems. These advancements have paved the way for AI to be integrated into various aspects of business operations, healthcare, finance, and more, driving efficiency and creating new opportunities.
However, the same innovations that fuel progress also introduce new threats. AI technologies, while serving as powerful tools for enhancing cybersecurity, can equally be exploited by malicious actors to orchestrate sophisticated cyberattacks. The dual nature of AI in this context is evident: On one hand, AI-driven security measures can predict and counteract threats with remarkable precision; on the other hand, these technologies can be weaponized to develop advanced phishing schemes, ransomware, and other cyber threats.
In this blog post, we’ll examine both sides of the AI puzzle: how AI can be behind data breaches and other cybersecurity threats and how it can also be part of the solution. Let’s dive in!
Key takeaways
- AI-enabled cyberattacks are becoming increasingly sophisticated. They enable attackers to mimic legitimate communications and exploit data and network vulnerabilities, leading to serious data breaches and long-lasting damage to businesses.
- AI systems possess intrinsic security vulnerabilities—from the potential compromising of training data to the exploitation of AI models and networks. They require robust security measures and continuous monitoring for effective mitigation.
- Organizations must maintain a balance between AI innovation and security, emphasizing ethical AI development, employee training, and cross-industry collaboration to defend against evolving cybersecurity threats.
Understanding AI’s role in cyberattacks
How exactly is AI used in cyberattacks? It’s important to note that AI is an emerging technology that is evolving rapidly, so the answer to this question is rapidly evolving. Some of the ways we currently see AI being used in cyberattacks that result in data breaches include:
Phishing attacks and social engineering
Phishing and social engineering involve manipulating individuals into divulging confidential information or performing actions that compromise security. These tactics exploit human psychology to gain unauthorized access to systems or data.
- Spear phishing: AI can craft highly personalized phishing emails by analyzing social media profiles and other online information, making the messages appear exceptionally convincing to individuals.
- Deepfakes: AI-generated audio and video deepfakes can convincingly mimic trusted individuals, thereby making social engineering attacks significantly more effective.
Malware development
Malware, short for malicious software, is any software intentionally designed to cause damage to a computer, server, or network. It includes viruses, worms, trojans, ransomware, and spyware.
- Polymorphic malware: AI can generate malware that continually modifies its code to avoid detection by traditional signature-based antivirus programs.
- AI-driven exploits: AI has the capability to quickly identify and exploit software vulnerabilities by analyzing code and network traffic, outperforming human hackers in speed and efficiency.
Password cracking
AI has revolutionized password cracking by employing machine learning techniques to predict and generate likely password combinations. By analyzing large datasets of previously leaked passwords, AI can identify common patterns and create highly effective algorithms for breaking into accounts.
- Brute force attacks: AI can enhance brute force attacks by predicting likely password patterns based on user data. This is achieved through machine learning algorithms that analyze vast datasets of previously leaked passwords, identifying common patterns and creating highly effective strategies for breaking into accounts.
- Credential stuffing: AI can automate and enhance the process of testing stolen credentials across multiple sites and services to find valid combinations.
Network intrusions
Network intrusions refer to unauthorized access to an organization’s network with the intent to steal, manipulate, or destroy data. AI can be leveraged in network intrusions by automating the identification of vulnerabilities and executing attacks with precision. Utilizing machine learning algorithms, AI can continuously monitor network traffic to detect and exploit weaknesses, making it easier for attackers to infiltrate systems undetected.
- Anomaly detection evasion: AI can mimic normal user behavior to avoid triggering anomaly detection systems, allowing intruders to move laterally within networks without detection.
- Automated scanning: AI can automate the process of scanning networks for vulnerabilities, identifying weak points faster than manual methods.
Data exfiltration
Data exfiltration is the unauthorized transfer of data from a computer or network. AI can automate and enhance this process by identifying the most valuable data to steal and developing sophisticated methods to exfiltrate it without raising suspicion.
- Stealth techniques: AI can help in developing methods to exfiltrate data without raising suspicion, such as slow data leaks over long periods or using encrypted channels.
- Disguising traffic: AI can disguise malicious data transfers as legitimate network traffic, making it harder for intrusion detection systems to spot anomalies.
Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks
Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks are types of cyber attacks designed to disrupt the normal functioning of a targeted server, service, or network by overwhelming it with a flood of internet traffic. In a DoS attack, a single machine is used to flood the target, whereas a DDoS attack uses multiple machines, often part of a botnet, to launch a coordinated assault.
AI can be used to enhance these attacks by optimizing the attack strategies. AI algorithms can identify the most effective ways to overwhelm a target’s resources, analyze network traffic to find the best times to strike and manage large botnets more efficiently. This allows for real-time adaptation to defenses, making the attacks more difficult to mitigate.
- Optimized attack strategies: AI can optimize DDoS attack strategies by identifying the most effective ways to overwhelm a target’s resources.
- Botnet management: AI can manage large botnets more efficiently, coordinating attacks and adapting to defenses in real time.
Reconnaissance
Reconnaissance, in the context of cyber-attacks, refers to the preliminary phase where attackers gather as much information as possible about their target. This information-gathering process is critical for planning and executing a successful attack. AI can significantly enhance the reconnaissance phase by automating and optimizing the information-gathering process.
- Automated information gathering: AI can automate the process of collecting information about targets from public sources, such as social media, websites, and databases. This reduces the time and effort required for manual reconnaissance and increases the amount of data that can be gathered.
- Predictive analysis: AI can analyze the gathered data to predict the best times and methods for attacks. By studying target behavior and historical data, AI can predict the best times and methods for attacks, optimizing the chances of success and minimizing the risk of detection.
Advanced Persistent Threats (APTs)
Advanced Persistent Threats (APTs) are prolonged and targeted cyberattacks in which an intruder gains access to a network and remains undetected for an extended period. These attacks are meticulously planned and executed, often by state-sponsored or highly organized hacking groups, with the intent to steal sensitive data or disrupt operations.
AI can significantly enhance the capabilities of APTs by automating various stages of the attack.
- Intelligent persistence: AI can help maintain persistence in a compromised network by continuously adapting and discovering new methods to remain undetected.
- Automated task execution: AI can autonomously execute intricate, multi-step attack strategies, dynamically adjusting its tactics based on the target’s responses.
Evasion techniques
Evasion techniques in cybersecurity refer to methods used by attackers to avoid detection by security systems. AI enhances these techniques by mimicking normal user behavior, continuously modifying malicious code, and developing sophisticated methods to bypass anomaly detection systems.
- Anti-forensics: AI can develop and implement techniques to erase traces of cyberattacks, making forensic analysis challenging.
- Adversarial machine learning: AI can be employed to generate adversarial examples that deceive other AI systems, effectively bypassing AI-based security measures.
Smart ransomware
Smart ransomware is an evolved form of traditional ransomware that leverages artificial intelligence to increase its effectiveness and sophistication.
Unlike conventional ransomware, which typically encrypts all files indiscriminately, smart ransomware uses AI to identify and target the most critical and valuable files within a system. This selective approach not only increases the likelihood of a ransom being paid but also minimizes the chances of detection before the encryption process is complete.
AI can significantly enhance ransomware by selecting the most valuable files to encrypt, setting ransoms based on the victim’s ability to pay, and communicating more persuasively with victims. Smart ransomware, powered by AI, takes these capabilities to the next level by incorporating advanced machine learning algorithms and data analysis techniques.

In this on-demand webinar we discuss the role that AI will play in streamlining compliance, and how compliance will evolve with new products, partnerships, and framework support as more companies adopt AI.
Real-world consequences: Prominent AI data breaches
These AI-enhanced cyberattack methods have already manifested in actual incidents that have had significant impacts across various sectors. By examining these prominent AI data breaches, we can gain valuable insights into the evolving threat landscape and the critical need for robust security measures.
Organizations may not always be fully aware of, or disclose, the exact technology used in a cyberattack, so the current role of AI in cyberattacks may be under-reported. Nevertheless, let’s take a look at some of the more prominent examples where the role of AI has been acknowledged:
- TaskRabbit Data Breach: In April 2018, TaskRabbit, a well-known online marketplace owned by IKEA, suffered a significant data breach. The breach affected over 3.75 million records of freelancers and clients, exposing personal and financial information. The attack, involving an AI-enabled botnet, forced the company to temporarily shut down its website and mobile app to mitigate the damage. (CyberTalk)
- Yum! Brands Data Breach: In January 2023, Yum! Brands fell victim to a ransomware attack that compromised both corporate and employee data. The AI-driven attack automated the selection of high-value data, leading to the closure of nearly 300 UK branches for several weeks. (Yum! press release)
- T-Mobile Data Breach: T-Mobile experienced its ninth data breach in five years, with 37 million customer records stolen in November 2022. The attack utilized an AI-equipped API to gain unauthorized access, exposing sensitive client information such as full names, contact numbers, and PINs. (NPR)
- Activision Data Breach: In December 2022, hackers targeted Activision with a phishing campaign using AI-generated SMS messages. The breach, which compromised the employee database, including email addresses, phone numbers, and salaries, was quickly identified and mitigated by the company. (Cyber News)
These incidents underscore the growing threat of AI-enabled data breaches and the need for robust security measures across all sectors.
How organizations can mitigate risks and protect sensitive data
As we’ve seen, AI is a powerful tool in the hands of attackers.
Large language models now power sophisticated social engineering and phishing attempts, marking a significant advancement in cyber attacks’ AI capabilities. Cybercriminals are also developing profiling techniques aided by AI technology, predicting and exploiting individual behaviors for highly personalized attacks. As AI developers continue to innovate, it’s crucial for organizations to stay vigilant and adapt their security measures accordingly.
With generative AI tools expected to be employed by both defenders and attackers, the complexity of threat vectors is set to rise, urging the cybersecurity industry to implement proactive measures like AI red teaming.
To safeguard the integrity of AI systems and ensure the security (and privacy) of sensitive data they process, organizations are required to adopt a comprehensive strategy. This includes robust measures specifically designed for AI’s distinct needs.
- Implementing encryption
- Establishing sophisticated access controls
- Conducting periodic security and privacy assessments
- Creating and following a patch management protocol
Such steps are vital in detecting and remedying possible weaknesses within these systems, thus resolving any associated concerns about their security and privacy.
Fighting fire with fire: Using AI for cybersecurity
Just as AI is being used by cyberattackers, so it is being utilized for better outcomes: AI is emerging as a robust defensive mechanism. By leveraging AI in compliance and security tools, it becomes possible to engage in proactive threat hunting and anomaly detection while creating predictive approaches to security challenges. These AI-based technologies utilize algorithms and sophisticated statistical methods that are vital for spotting data patterns indicative of imminent threats.

Answer dozens of questionnaries in a fraction of the time with Thoropass’s new GenAI DDQs
In industries like healthcare, where the stakes are incredibly high, employing AI and machine learning for defense is critical in repelling increasingly complex attacks. This strategic use of technology ensures sectors remain resilient against ever-evolving security threats.
Balancing innovation with security
Maintaining a steadfast focus on security is essential for the advancement of AI technologies. AI developers, data scientists, policymakers, and other experts must consider the ethical and safety implications of AI development. To mitigate the risks associated with AI, establishing an extensive governance program that emphasizes continual monitoring and educating personnel is imperative.
The industry’s push towards creating uniform risk management frameworks highlights collaboration as an integral element that bolsters both innovative progress and fortification of security within these evolving technologies.
Strengthening AI system security and privacy
Enhancing the security and privacy of AI systems is an undertaking of utmost importance. This involves not only applying technical safeguards, such as encryption and managing access, but also committing to ethical conduct and transparent practices during their development.
To identify vulnerabilities within AI systems, periodic security assessments, which encompass testing through simulated attacks, are crucial. Entities can turn to established guidelines like the NIST AI Risk Management Framework and ISO 42001 AI Management System standard for guidance when devising strategies to secure their artificial intelligence systems.
Employee awareness and training
Continuous learning and engaging security teams, privacy teams, and employees are foundational to establishing a security-centric and privacy-centric culture within an organization.
It is vital that your organization’s privacy and security teams receive specialized instruction to fully grasp the unique risks associated with AI, including data poisoning and the manipulation of models. Given the current shortage of cybersecurity expertise, it’s increasingly critical that comprehensive privacy and security training be extended to staff at all organizational levels.
Industry collaboration and sharing
Combating AI-related threats requires a collective effort, and collaborative initiatives across different industries are crucial for improving shared knowledge and developing unified strategies against new threats.
By exchanging intelligence and tactics, businesses along with healthcare organizations, strengthen their defenses to keep pace with the constantly changing spectrum of cybersecurity challenges.
Preparing for the future: Evolving security measures
The enhancement of AI-powered defense mechanisms necessitates a corresponding boost in the allocation and evolution of security measures. This is reflected by the projection that spending on corporate cybersecurity will grow by 14% by 2024 (Gartner).
As the AI landscape continues to evolve, it brings with it a broadening spectrum of potential threat vectors and enlarges the attack surface. To stay one step ahead of future cyber threats, it’s imperative to equip your organization with ample resources and a progressive stance toward establishing security protocols.
Policy and governance in AI security and privacy
Effective governance and regulatory structures are crucial for the integrity of AI security and privacy. As ‘Privacy by Design’ emerges as a normative approach, it compels system developers to seamlessly integrate privacy considerations into the fabric during their automated systems’ development phase. This integration ensures that AI strategies stay concurrent with escalating privacy risks and reflect contemporary social norms.

Whether in the form of shadow AI, incorporation of AI into tools already in your tech stack, or use of AI from third-party vendors, it’s critical to establish an airtight policy on the use and management of AI across your organization. Get your free policy today.
Summary: AI necessitates heightened vigilance
It is incumbent upon all organizations to strengthen their defenses, educate their workforce, and collaborate across industries to safeguard against these sophisticated attacks. Only through a comprehensive and evolving strategy can we hope to protect the privacy and security of sensitive data in this new era of technological advancement.
More FAQs
By leveraging artificial intelligence, cyber attacks can become more sophisticated and harder to detect, intensifying the effects of data breaches. AI has the capability to craft convincing phishing emails, seek out network vulnerabilities efficiently, and carry out precisely targeted attacks that could result in the widespread divulgence of personal data.
Vulnerabilities in AI systems encompass a range of risks from code and algorithm defects to the potential for backdoor incursions during model training. These systems can be compromised by poisoned data intended to skew AI behavior as well as susceptibilities such as environmental manipulation and the threat of model extraction attacks.
Organizations need to adopt stringent security protocols, including encryption and access controls, along with conducting regular security audits and training employees to reduce the risks associated with AI technologies. It’s vital for these organizations to develop an all-encompassing strategy for AI security, keep up-to-date with regulatory requirements, and actively participate in collaboration within the industry.
While AI aids defenders in actively pursuing threat hunting and developing predictive measures for security to anticipate and counteract threats efficiently, it also empowers attackers by bolstering their abilities to execute personalized attacks and social engineering with greater effectiveness.
Future developments in AI security will involve increasingly sophisticated AI-driven defense systems, investments in advanced security resources, and continuous adaptation of regulatory frameworks to address emerging AI privacy risks and societal values. It’s crucial to keep up with these developments to ensure the security and ethical use of AI technology in the future.
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
Powered by GenAI, Thoropass’s new Due Diligence Questionnaires product redefines how you respond to due diligence questionnaires, security surveys, and RFPs, saving time, reducing risk, and accelerating completion.
By leveraging your company’s existing data–PDFs of prior surveys, policies, procedures, reports– in a completely closed-loop system, the product does not require exposure to an external LLM. Additionally, your, company’s answers are never used to train other external models, ensuring all data remains within the company’s control.
Let’s dive a bit deeper into the key benefits of Thoropass’s new GenAI-powered DDQs. You can also check out a demo here:
AI-generated answers speed up questionnaire completion
Filling out security questionnaires can be daunting and time-consuming. Thoropass’s DDQs leverage generative AI to assess questions and match them with your company’s existing library of responses. If no direct match is found, the GenAI technology scans your documents to suggest answers that can be adopted or edited as needed.
Maintain high standards with human approval
Quality control is crucial for due diligence. Thoropass GenAI DDQ incorporates approval steps and thorough quality checks to ensure all responses meet your organization’s standards. Approved answers are automatically saved to the library, enhancing the accuracy and reliability of future questionnaires. This continuous improvement loop ensures that your responses are always top-notch.
Configurable source documents fit your unique needs
We allow users to tailor document sources. Whether you choose documents from Thoropass’s platform or upload your files—such as policies, procedures, audit reports, pentesting reports, and previously answered questionnaires—you can create a customizable sourcing repository that suits your unique requirements.
Securely share completed questionnaires via data room
Thoropass GenAI DDQ goes beyond just filling out questionnaires. A secure data room within the platform allows you to securely share completed questionnaires and supporting documents with your team and stakeholders. This ensures confidentiality and professionalism in every interaction, helping you to showcase your security posture externally with confidence.
Thoropass centralizes your entire infosec compliance program, providing a single source of truth for all your security and compliance efforts. If you’d like to see GenAI DDQs in action, book a demo with us today!
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
The EU AI Act (aka the European Union Artificial Intelligence Act), introduced by the European Commission, aims to regulate AI systems to ensure they respect fundamental rights and foster trust. In this blog post, we’ll provide an overview of the Act’s key provisions, its risk-based classification of AI systems, and the global impact of the Act.
Key takeaways
- The EU AI Act introduces comprehensive regulations for AI systems to ensure safety, transparency, and fundamental rights, potentially setting global standards for AI governance.
- The Act adopts a risk-based classification for AI systems, ranging from outright bans on unacceptable risks to minimal requirements for low-risk applications, with high-risk AI systems facing stringent regulatory scrutiny.
- The Act supports innovation and small and medium-sized enterprises (SMEs) by providing regulatory sandboxes, leniency in documentation, and technical support, facilitating a balanced approach between regulation and technological advancement.
An overview and a brief history of the EU AI act
The journey to regulate artificial intelligence within the European Union has been marked by several pivotal milestones. In April 2021, the European Commission took a groundbreaking step by proposing the first EU regulatory framework for AI. This proposal laid the foundation for a unified approach to ensure that AI systems are developed and utilized in a way that is safe, transparent, and respects fundamental rights across all member states.
After extensive discussions and negotiations, European Union lawmakers reached a political agreement on the draft artificial intelligence (AI) act in December 2023. This agreement was a significant achievement, representing a consensus on the principles and guidelines that would govern the use and development of AI within the Union. Finally, the Parliament adopted the Artificial Intelligence Act in March 2024, marking the culmination of years of work and setting the stage for a new era of AI governance.
The European Union Artificial Intelligence Act, also known as the EU AI Act, is a pioneering piece of legislation. The act is aimed at businesses that provide, deploy, import, or distribute AI systems. At a high level, it aims to:
- Regulate artificial intelligence systems
- Ensure those businesses respect fundamental rights
- Promote innovation and investment in AI technology
- Foster the development and uptake of safe and trustworthy AI systems across the EU’s single market
- Mitigate the risks posed by certain AI systems
- Set a global standard for AI regulation
- Emphasize trust, transparency, and accountability
These requirements have the potential to influence global regulatory standards for AI.
The European Parliament prioritizes the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems used within the Union. The potential benefits of the Act are far-reaching, with the hope of creating better healthcare, safer and cleaner transportation, more efficient manufacturing, and cheaper and more sustainable energy using artificial intelligence.

Why AI needs oversight 
The rapid development and deployment of artificial intelligence (AI) across various sectors have brought about transformative changes in society. With its potential to revolutionize industries, improve efficiency, and solve complex problems, AI also poses significant challenges that necessitate governance.
AI governance is, therefore, essential for several reasons:
- Ethical considerations: AI systems can make decisions that profoundly affect individuals and communities. Without proper governance, there is a risk of reinforcing biases, infringing on privacy, and making unethical decisions.
- Safety and reliability: AI systems must be safe and reliable, especially when they are used in critical sectors like healthcare, transportation, and finance. Governance ensures that AI systems are thoroughly tested and monitored to prevent harm or malfunction.
- Accountability: When AI systems make decisions, it can be difficult to trace the rationale behind those decisions. Governance frameworks assign responsibility and ensure that there is a clear line of accountability when things go wrong.
- Public trust: For AI to be widely accepted and integrated into society, the public must trust that it is being used responsibly. Governance helps build this trust by ensuring transparency in how AI systems are developed and used.
- Preventing misuse: AI has the potential to be misused for fraudulent activities, surveillance, and other malicious purposes. Governance can provide safeguards against such misuse.
- Global standards: As AI technologies cross borders, international governance can help establish global standards and prevent a ‘race to the bottom’ where countries or companies compete by lowering ethical standards.
Governance helps ensure that AI benefits society while minimizing its risks. The EU AI Act represents a pioneering effort to create a regulatory framework that balances the advancement of technology with the need to protect fundamental human rights and societal values.
A risk-based classification of AI systems
A distinguishing feature of the EU AI Act is its risk-based approach to AI regulation. The Act categorizes AI systems based on their risk to society, with varying levels of regulatory scrutiny applied to each category:
Risk level = Unacceptable risk
- High-level description: AI systems that pose an unacceptable risk to the safety, livelihoods, and rights of people.
- Action: Prohibition
Risk level = High risk
- High-level description: AI systems that pose significant risks to health, safety, and fundamental rights.
- Action: Strict assessment
Risk level: Limited risk
- High-level description: AI systems that pose a lower level of risk but still have the potential to impact individuals’ rights and well-being.
- Action: Maintain transparency
Risk level = Minimal or no risk
- High-level description: AI systems that pose little to no risk to individuals’ rights or safety. These systems are typically used for purposes that do not have significant impacts on people’s lives.
- Action: No specific regulatory requirements
Of course, any form of legislation contains a lot of nuance. So, in the subsequent subsections, let’s explore this classification system in greater depth.

Unacceptable risk
Action: Prohibition—these systems are outright banned.
AI practices deemed to pose unacceptable risks are at the top of the risk hierarchy. The Act outright bans these systems to protect fundamental rights and safety.
The EU AI Act identifies several AI practices that are considered to pose unacceptable risks and are therefore prohibited. These include:
- Subliminal manipulation: Utilizing covert techniques that subconsciously influence individuals, undermining their ability to make informed decisions and causing significant harm.
- Exploitation of vulnerabilities: Leveraging weaknesses associated with age, disabilities, or socio-economic status to alter behavior detrimentally, leading to substantial harm.
- Sensitive biometric categorization: Systems that infer sensitive personal attributes such as ethnicity, political stance, union affiliation, religious or philosophical convictions, or sexual orientation, with exceptions for certain law enforcement activities and dataset labeling or filtering.
- Social scoring schemes: Assigning ratings to individuals or groups based on their social behavior or personal characteristics, resulting in adverse or discriminatory outcomes.
- Criminal risk assessment: Estimating the likelihood of an individual committing a crime based solely on profiling or personality traits, barring instances that support human judgment with objective, verifiable evidence directly related to criminal conduct.
- Facial recognition databases: Compiling extensive databases of facial images through indiscriminate scraping from online sources or surveillance footage without targeted justification.
- Emotion inference in sensitive contexts: Analyzing emotional states in environments like workplaces or educational settings, unless it serves a medical purpose or is crucial for safety reasons.
- Real-time remote biometric identification: Implementing ‘real-time’ remote biometric identification in public spaces for law enforcement purposes, except under specific conditions such as locating missing or trafficked individuals, averting significant and immediate threats to life or terrorist acts, or identifying perpetrators of serious crimes.
High risk
Action: High-risk AI systems must adhere to several regulatory obligations.
Descending the risk ladder, high-risk AI systems are encountered next. These systems, which include high risk applications such as those used in critical infrastructure management, law enforcement, and biometric identification, are subject to stringent requirements to access the EU market.
The Act necessitates that providers of high-risk AI systems:
- Implement a comprehensive risk management system that remains active throughout the entire lifecycle of the high-risk AI system, ensuring that all potential issues are identified, assessed, and mitigated in a timely manner.
- Enforce rigorous data governance protocols to guarantee that the AI system’s training, validation, and testing datasets are not only relevant and representative but also as error-free and complete as possible, tailored to the system’s specific objectives.
- Compile and maintain detailed technical documentation that transparently demonstrates the AI system’s compliance with regulatory requirements, providing authorities with the necessary insights to evaluate the system’s adherence to the established standards.
- Integrate advanced record-keeping functionalities within the high-risk AI system, enabling automatic logging of critical events that could influence risk assessment at a national level or reflect significant modifications throughout the system’s lifecycle.
- Supply comprehensive instructions for use to downstream deployers, equipping them with the knowledge and tools required to ensure their own compliance when utilizing the high-risk AI system.
- Architect the high-risk AI system with built-in capabilities for human oversight, allowing deployers to monitor and intervene in the system’s operations as needed to maintain control and accountability.
- Design the high-risk AI system with a focus on achieving and maintaining high levels of accuracy, robustness, and cybersecurity, to protect against potential threats and ensure reliable performance.
- Establish and maintain a robust quality management system, which is fundamental for ongoing compliance assurance and for fostering a culture of continuous improvement within the organization.
A full list of Annex III: High-Risk AI Systems can be found here. Some examples include:
- Remote biometric identification systems: These systems, excluding those used for simple verification of identity, are considered high-risk when they identify individuals in public spaces or analyze biometric data to infer sensitive attributes such as ethnicity, political beliefs, or emotional states.
- Infrastructure safety components: AI systems integral to the management and operation of critical infrastructure, such as utilities (water, gas, electricity) and transportation networks, are high-risk due to their role in ensuring public safety and the continuity of essential services.
- AI in education: Systems that determine access to or assignment in educational and vocational institutions, evaluate learning outcomes to guide student development, or monitor student behavior during examinations are high-risk due to their influence on academic and career opportunities.
- Recruitment and employment: High-risk systems in this category include those used for screening job applications, evaluating candidates, managing tasks, and monitoring employee performance. These systems can significantly affect employment prospects and workplace dynamics.
- Public services: AI systems that assess eligibility for public benefits, manage service allocations, or evaluate creditworthiness are high-risk, as they directly affect individuals’ access to essential services and financial stability. Similarly, AI systems that prioritize emergency response calls or assess risks for health and life insurance purposes are included in this category.
- Law enforcement: Systems used for profiling during criminal investigations, assessing the reliability of evidence, or evaluating the risk of re-offending are considered high-risk. These systems can have profound implications for personal freedom and the fairness of legal proceedings.
- Migration and border control: High-risk systems include those used for assessing migration risks, processing asylum or visa applications, and identifying individuals at borders, except for the verification of travel documents. These systems play a critical role in migration management and individual rights.
- AI in legal and political arenas: Systems that assist in fact-finding, legal interpretation, or alternative dispute resolution are high-risk due to their potential influence on judicial outcomes. AI systems that could affect election results or voting behavior, other than organizational tools for political campaigns, are also classified as high-risk.
These examples illustrate the broad range of applications for high-risk AI systems and the importance of rigorous regulatory oversight to ensure they operate within ethical and legal boundaries.
Limited risk
Action: Transparency – these AI systems must meet specific transparency requirements.
The Act, which focuses on regulation on artificial intelligence, applies lighter regulatory scrutiny to AI systems with limited risk, such as chatbots and generative models. This includes Chat-GPT. This category is primarily concerned with the risks associated with a lack of transparency in AI usage.
The Act (Article 50) requires ‘limited risk’ AI systems to comply with transparency mandates, informing users of their interaction with AI. If an AI system produces text that is made public to inform people about important matters, it should be identified as artificially generated. This labeling is necessary to ensure transparency and trust in the information. Similarly, images, audio, or video files modified with AI, such as deepfakes, need to be labeled as AI-generated.
Users of emotion recognition systems must also inform individuals when they are being exposed to such technology.
Minimal or no risk
Action: Encouraged to adhere to voluntary codes of conduct and best practices to ensure ethical and responsible use.
AI systems that pose a minimal risk are found at the bottom of the risk hierarchy, with such AI systems considered to be safe for free use. These technologies include AI-enabled video games and spam filters.
The Act considers AI technologies used in video games or spam filters as posing minimal or no risk. Therefore, these applications are allowed to operate in the EU market without needing to comply with the stringent requirements that apply to higher-risk AI systems.

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.
Practical implementation for providers of high-risk AI
The EU AI Act imposes several obligations on providers of high-risk AI systems to guarantee compliance with regulatory standards. Before deploying high-risk AI technology, these businesses must conduct an initial risk assessment. Here’s a brief overview of the assessment process, including who conducts it and what steps are involved:
- Developers: Conduct initial risk assessments and classify their AI systems based on provided guidelines.
- Notified bodies: For high-risk AI systems, these independent entities may need to verify compliance.
- National competent authorities: Oversee compliance, conduct audits, and enforce regulations.
- Continuous monitoring: Developers must continuously monitor and reassess their AI systems to ensure ongoing compliance.
In addition to quality management and transparency, human oversight is a mandatory requirement for the operation of high-risk AI systems to ensure accountability.
Post-market monitoring systems are also required to track the performance and impact of high-risk AI systems. Providers must maintain comprehensive records and report any serious incidents involving high-risk AI systems.
In essence, AI providers are required to maintain ongoing quality and risk management to ensure that AI applications remain trustworthy even after they are released to the market.
Provisions for small and medium-sized businesses
Despite imposing strict regulatory requirements, the EU AI Act also includes provisions that support innovation and Small and Medium-sized Enterprises (SMEs). The Act introduces regulatory sandboxes to allow businesses to test AI systems in controlled environments.
Moreover, SMEs and startups benefit from the Act’s leniency in documentation requirements and exemptions from certain regulatory mandates. European Digital Innovation Hubs also provide technical and legal guidance to help SME AI innovators become compliant with the AI Act.
The AI Pact, a voluntary initiative, seeks to support the future implementation of the Act, inviting AI developers from Europe and beyond to comply with the Act’s key obligations ahead of time.
Institutional governance and enforcement
The European AI Office was established in 2024. It has several key responsibilities, including:
- Monitoring the enforcement and implementation of the EU AI Act
- Investigating violations within their jurisdictions
- Coordinating enforcement actions to ensure regulatory coherence across all EU Member States
- Imposing substantial fines for noncompliance with the EU AI Act
- Fostering collaboration, innovation, and research in AI
- Engaging in international dialogue
- Striving to position Europe as a leader in the ethical and sustainable development of AI technologies
These measures highlight the seriousness with which the Act’s provisions are enforced.
Transparency and trust in general-purpose AI
The EU AI Act regards transparency as fundamental, especially for general-purpose AI models. Article 50 of the Act introduces transparency obligations, like disclosing AI system use and maintaining detailed technical documentation, to enable a better understanding and management of these models.
General-purpose AI systems without systemic risks have limited transparency requirements. However, those posing systemic risks must adhere to stricter rules under the EU AI Act. This approach ensures that even the most complex and potentially impactful AI models are held to high standards of transparency and accountability.
Future-proofing and global influence
The EU AI Act’s future-proof approach is a significant feature. This approach allows the Act’s rules to adapt to technological change, ensuring that the legislation remains relevant as AI technology continues to evolve.
This means AI providers need to engage in ongoing quality and risk management to ensure their AI applications remain trustworthy even after market release. This approach ensures that the Act remains applicable and effective in the face of rapid technological advancements in AI.
The EU AI Act’s potential global influence is immense. Just as the EU’s General Data Protection Regulation (GDPR) has shaped data protection laws around the world, the EU AI Act could become a global standard, determining the impact of AI worldwide.
Countries worldwide are considering the EU AI Act while formulating their AI policies, potentially standardizing its provisions globally. The Act has already inspired countries like Canada and Japan to align their AI governance frameworks with the EU’s approach. Moreover, the Act’s extraterritorial reach means it impacts US companies if their AI systems are used by EU customers, further extending its global influence.
Looking ahead: Next steps for the EU AI Act
Having delved into the details of the EU AI Act, what can we expect next? Well, the Act is set to enter into force between May and June, with phased implementation through 2027 (full timelines are available here).
With some exceptions, the Act will become fully applicable two years after its publication in the Official Journal. The obligations concerning high-risk systems will become applicable three years after their entry into force. This phased implementation timeline allows for a smooth transition and gives businesses ample time to understand and comply with the new requirements.
In conclusion, the EU AI Act is a revolutionary piece of legislation that sets a global standard for AI regulation. It’s a comprehensive and future-proof framework that protects individuals and society while encouraging innovation and development in AI. As the Act moves towards full implementation, its influence on global AI governance will undoubtedly continue to grow.
More FAQs
The EU AI Act launched in January 2024 includes measures to support European startups and SMEs in developing trustworthy AI that aligns with EU values and rules.
The EU AI Act categorizes AI systems based on their risk to society, leading to different levels of regulatory scrutiny for each category. These classifications include unacceptable, high, limited, and minimal or no risk.
The EU AI Act supports innovation and SMEs by introducing regulatory sandboxes for testing AI systems and providing leniency in documentation requirements for small and medium-sized enterprises (SMEs). This allows businesses to innovate and test AI technologies in controlled environments while reducing regulatory burdens for SMEs and startups.
The EU AI Act’s future-proof approach allows its rules to adapt to technological change, ensuring that the legislation remains relevant as AI technology continues to evolve. This adaptability is a key strength in addressing future challenges and developments in AI.
The EU AI Act has the potential to influence AI policies worldwide, as its provisions could become a global standard for AI regulation and impact companies outside the EU. Its reach extends to companies whose AI systems are used by EU customers.
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
Stop me if you’ve heard this one before:
Your Sales team needs a DDQ in order to close business with a strategic partner. While you’ve already secured several compliance frameworks, including SOC 2 or maybe ISO 27001, the DDQ needs to be filled in from scratch before the deal can close.
At 250 questions, the security survey will take a few business days, conservatively, to fill out, but will likely require several more days and several team members working together to complete. All in all, the better part of a week will be needed to fill out a form for which you already have most of the information.
Sound familiar?
Enter Thoropass’s GenAI DDQ
Few things are as onerous or essential in information security as due diligence forms. Otherwise known as security surveys or due diligence questionnaires (DDQs), these forms typically contain hundreds of questions that can take hours, if not days, to complete. Once completed, these DDQs can unlock business growth as organizations can better partner together and advance their security postures.
Even if you don’t rely on spreadsheets to fill in the surveys, not all DDQ automation software is the same. Thoropass’s GenAI DDQ not only helps speed up the process of filling in responses, it utilizes the evidence and findings that you already have from previous compliance checks and audits.
Our tests have shown over 80% efficiencies gained by using this tool. This means that the AI technology scans your previously uploaded documents and can fill in 180+ of those 250 questions, saving you an average of 8 hours. If the original DDQ was going to take your team 20 hours to complete, it would now take just six or less.
Of course all of these times are estimates, but the efficiencies are real. As you use the tool more and upload more evidence to your Thoropass platform, the efficiencies continue to go up, meaning that some organizations could achieve upwards of 90% efficiency, reducing days of work to hours.
AI saves hours of work
Our DDQ feature leverages best-in-class Generative AI technology. The AI reads your questions and then searches policies, reports, and previous questionnaires from the Thoropass platform or locally uploaded documents in order to autofill the entire questionnaire. You simply review, and approve.
But with saved time comes obvious concerns about accuracy and security. Can you really speed through these surveys and rely on the data to be accurate? The answer: yes.
Thoropass’s DDQ was designed with accuracy in mind. Your team will have the ability to make custom configurations, both scoring the responses you receive and advising the tool to pull information from local documents in addition to already uploaded documents within Thoropass.
Worried about AI’s security? Our technology is governed by the same strict data policies employed throughout our platform, which ensures that your data stays local to you and not leaked into a larger LLM accessible to others. You control what gets analyzed and what gets generated in the DDQ.
Trust equals growth
While saving company resources is a major benefit of using DDQs, the main business use is to establish trust across your buyer and partner ecosystem. Especially as companies utilize TPRM and other risk assessment tools to evaluate their business partners, having ready-made DDQs are essential to establishing your organization as a trusted company, and closing business faster.
Obviously documents communicating trust need to be shared with strategic partners. As your Sales team will confirm, though, simply sharing isn’t enough. Deals often hinge on price, trust, and speed, which is why having DDQs fully integrated into the Thoropass platform is a game changer for both IT and Go To Market teams within your organization.
By securing your DDQs in a Thoropass data room, alongside all of your previously collected evidence, certifications, and attestations, everything your company needs to demonstrate its security posture is in a single source of truth: a compliance hub that is always accessible and always up to date.
Our platform is the single source of truth for your entire security and compliance program. AI analyzes and synthesizes your most recent data, ensuring comprehensive reviews and delivering up-to-date, evidence-based, and consistent responses. You just need to review and approve the answers. This minimizes human error, reduces legal risks, and supports ongoing business integrity and growth.
See our new GenAI-powered DDQs in action:
But don’t take my word for it, learn more about Thoropass GenAI DDQ here: https://thoropass.com/platform/due-diligence-questionnaire/
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
Like many other companies, we’ve watched as artificial intelligence has swept across the tech landscape and become commonplace in every industry, company, and home. And, like many others, we’re excited by the level of innovation and possibility that AI ushers in.
However, as the standard-bearers of quality in infosec compliance automation and audits, we also feel compelled to ensure that the industry collectively establishes an effective approach to ensuring data security and maintaining compliance in the age of AI.
We have launched a foundational set of tools that will drastically speed up vendor due diligence, services that help enable companies to implement AI solutions safely and responsibly, and support of compliance frameworks that will help organizations big and small to manage their risk related to AI adoption.
We present the following vision to define our philosophy and set a course for future evolution. This vision will guide us, our customers, and the industry forward.
Read the full press release here.
Sam Li, Founder & CEO
Our vision for AI and compliance
Every company is now an AI company. Whether they build AI products and services or not, GenAI and LLMs are now acronyms that every business should have in their service agreements and long-range business plans.
The majority of a recent Y-Combinator cohort was “AI-native.” At any tech or business conference you attend, almost every panel touches on GenAI. Microsoft, Google, and OpenAI (among others) are in an arms race for supremacy in the field in ways we haven’t seen in technology since the birth of cloud and smartphones, and many say the internet.
However, AI is not only about opportunity and growth. Cries of genuine concern have grown louder even as companies have raced to join the gold rush. Issues around copyright, hallucinations, abuse, and security are increasingly entering the exciting conversations about new innovations.
We cannot risk being passive observers.
Your company is an AI company. Even if you don’t produce AI products, your employees likely use AI services, which have been formed out of our collective data. Likewise, as we embrace the possibilities present in new innovations, we must also face the consequences of the concerns being raised.
There is no divide at this stage: we are all living together in the AI era. And in this AI era, Thoropass believes we need to foreground security as we look to embrace change responsibly. This is why, in both our products and our practices, we believe:
- a better future is possible as long as we are thoughtful about how we balance security and innovation.
- our original mission–to ensure that compliance is never a blocker to innovation–is just as applicable in the AI era.
- infosec compliance is more important and more complex than ever.
- infosec compliance can be important while simultaneously accelerating innovation.
- we will accomplish this by creating products, services, and solutions that not only keep pace with, but preempt where AI security challenges will occur.
- AI is only as effective as the humans that guide it, and the humans at Thoropass represent a hand-picked collection of auditors, compliance experts, and SMEs who work with our engineers to build adaptive and innovative solutions.
- as the only infosec compliance automation and audit platform with AI-infused technology, in-house experts, and customer-first processes to provide the OrO Way of a simple and streamlined solution, we can help companies meet the current and future challenges posed by AI.
While the “what” of AI continues to take the spotlight, Thoropass believes the “who” is equally important. Our experts take this responsibility seriously and are at the heart of everything we embrace as a company. We enter this AI era by acknowledging the promise and perils of a changing world. To ensure that all companies enter on equal footing, and with security and privacy at the top of mind, we believe:
1. AI will revolutionize how compliance is done
Traditionally, compliance work has been characterized by manual processes, extensive documentation, and meticulous scrutiny of regulatory requirements. Audits – the mechanism to prove that what’s written on the policy is operating effectively in real life – are slow, backward-looking, and often unverifiable.
Thoropass was already built on the product vision of Verifiable Compliance at Scale, and bringing The OrO Way of compliance and audit to over a thousand customers, but AI will push our customers’ and partners’ experiences to the next level. Not just in terms of efficiency and accuracy, we now see a world where compliance in real-time is not only possible, but the new norm.
Thoropass and its business partners are already using AI to scan mountains of evidence in order to uncover security gaps and deliver compliance feedback in record time. What used to take hours can now be done with a click of a button, and as a result our experts have more time to focus on strategic initiatives and higher impact work. This is just the beginning.

Answer dozens of questionnaries in a fraction of the time with Thoropass’s new GenAI DDQs
2. The world urgently needs new rules to govern our AI future, and government and industry must work together
As GenAI goes mainstream, its risks to society and businesses are becoming increasingly evident. Existing regulations and compliance standards do not provide practitioners with sufficient guidance to manage AI-related risks. To fully realize AI’s benefits while mitigating its dangers, it is essential for government and industry to collaborate closely and immediately to form new regulations and governing frameworks for AI.
Reaching consensus takes time, but that should not be a blocker to action. Thoropass and its business partners are staying informed about regulations from countries and governing bodies such as the US and EU, state governments like those in New York and Colorado, and industry groups such as HITRUST and ISO. We are also launching product offerings that support the latest AI compliance frameworks, such as ISO 42001 and NIST AI RMF.
Additionally, we provide services like AI pentesting to ensure that enterprises embarking on their own AI journeys do not have to be the first line of defense in protecting their data.

Manage AI-related risk and ensure compliance with new and emerging AI frameworks with AI pentesting.
3. AI needs human experts
At its core, Thoropass believes that many AI use cases will benefit from human oversight, particularly in the compliance and risk management realm. By putting human experts squarely at the crossroads of where AI meets strategy, data, and security, companies will be able to reduce their risk and maintain a strong compliance posture. Having compliance experts at the table will result in a faster, safer, and more ethical approach to AI use.
At Thoropass, our auditors use AI-infused technology to achieve efficiencies that weren’t before possible but will soon be table stakes for infosec compliance. The unique solution that only we offer is the combination of human expertise and cutting-edge technology. We are your compliance co-pilot, and we will never put your compliance on auto-pilot (alone).
What this means for you
Just as we’ve been leaders in “traditional” infosec compliance since our founding, we bring the same level of expertise to AI compliance. Our job is to navigate the complexities of AI regulations and standards so you don’t have to.
By partnering with us, you can focus on your core business while we ensure your AI initiatives meet the highest compliance standards. Let us guide you in building your organization’s AI future—a future that is not only innovative but also fair, safe, and responsible.
Enter the AI era
Explore GenAI for your business, safely and securely
Explore the suite of new offerings from Thoropass to help your organization set itself up for success in this new era of GenAI and compliance
It seems like every day, there is a new, shocking headline warning about a data breach or an announcement of some exciting advancement in cybersecurity. Staying on top of everything can feel like a full-time job.
But, let’s be serious, you have your own job to do—and it’s an important one with immense pressure. So, we’ve done the hard work for you and distilled this month’s news into three top headlines you need to know. Read on for the Cliff’s Notes (or Cole’s Notes for you Canadians) of the top 3 news headlines, as well as key insights on how to account for AI in your compliance program brought to you by Thoropass’s own DPO CISO Jay Trinckes.
You can watch Jay break everything down (in under 5-min) here, or read on for a quick overview:
Headline 1: Dormakaba Locks Used in Millions of Hotel Rooms Could Be Cracked in Seconds
The article from The Hacker News exposes critical security flaws in Dormakaba locks, widely employed in hotel rooms that enable attackers to bypass them within seconds. Up to 3 million hotel locks across 13,000 properties in 131 countries were compromised. Researchers uncovered vulnerabilities that could allow intruders to stealthily enter locked rooms, posing a significant threat to hotel guest security and privacy. Dormakaba has been urged to address these vulnerabilities promptly to prevent potential exploitation by malicious individuals.
Headlines 2: Fake Python Infrastructure Sends Malware to Coders
The article from IT Brew discusses a sophisticated attack where cybercriminals set up a fake Python infrastructure to distribute malware to unsuspecting developers. By creating counterfeit versions of popular Python libraries and uploading them to the Python Package Index (PyPI), the attackers lured developers into unknowingly installing malicious packages. These counterfeit packages contained malware that could compromise the security of systems and data on which the developers were working. The incident highlights the importance of vigilance and verifying the authenticity of packages before installation to mitigate such risks.
Headline 3: HITRUST Announces CSF v11.3.0 Launch to Enhance Its Industry Leading Security Framework
This press release announces the launch of version 11.3.0 of the HITRUST CSF (Common Security Framework) by HITRUST Alliance. This latest version includes updates and enhancements aimed at improving risk management and compliance processes for organizations. Key features of the update include new mappings to various regulatory requirements, enhancements to the assessment reporting process, and improvements in the usability of the CSF Assurance program. These updates are designed to help organizations strengthen their cybersecurity posture and streamline their compliance efforts.
DPO CISO Tip of the Month
Jay’s tip of the month for April is focused on AI and the importance of understanding the limitations of AI models. He emphasizes that while AI can be powerful, it’s essential to recognize that it’s not a silver bullet and can sometimes produce inaccurate or biased results. Jay advises practitioners to thoroughly evaluate AI models, consider potential biases, and remain critical of their outputs. He suggests seeking diverse perspectives and expertise to ensure AI systems are used responsibly and ethically, including developing policies and processes around the use of AI and GenAi for your organization.
Be safe, until next time…
Navigating compliance in the age of Generative AI (GenAI) presents both opportunities and challenges. In a recent webinar, Thoropass Co-Founder and CEO Sam Li sat down with a panel of experts, including:
- Arushi Saxena, Head of Trust & Safety at Dynamo AI
- Edward Tiang, CEO of GPTZero
- Dan Adamson, Founder of Armilla AI
Sam guided the panel through several questions that examined how GenAI in compliance is shifting paradigms in risk management and regulatory adherence. Here are some highlights that uncover how GenAI is leveraged for proactive, efficient, and ethically aligned compliance strategies. You can watch the full recording here.
How do you define responsible GenAI adoption?
There are some key areas that organizations–large and small–must prioritize to ensure responsible development and use of GenAI.
According to Dan Adamson, responsible AI is highlighted by pillars such as:
- trust,
- explainability,
- transparency,
- fairness; and sometimes,
- sustainability.
These principles form the foundation for ensuring the responsible development and usage of GenAI.
Arushi Saxena added on the importance of operationalizing governance and fostering collaboration across teams involved in AI development. She stressed the need for:
- training,
- hiring the right talent,
- legal reviews; and,
- effective communication strategies
Edward Tian elaborated on two critical aspects of responsible AI adoption. Firstly, he underscored the importance of maintaining “humans in the loop” throughout AI development to balance AI and human contributions effectively. Secondly, he emphasized the necessity of truth in understanding AI’s impact, advocating for transparency in AI-generated content.
Collectively, the experts highlighted the imperative for organizations to prioritize human involvement, operationalize governance processes, and embrace transparency with proactive approaches and data-driven strategies to navigate the evolving landscape of GenAI.
How do you think the GenAI landscape has changed since you launched compared to today?
Edward Tian likened this transition to the ongoing evolution in AI chess models pointing out that even the best AI chess model isn’t as good as the best AI chess model in addition to the best human AI chess player. He highlighted the shift from sporadic AI instances to a pervasive integration of AI in content creation, necessitating a new approach to categorization and understanding its impact.
Edward Tian further outlined measures his company takes to assist businesses in detecting and managing AI-generated content, including copyright detection, plagiarism checks, and bias assessments. Whereas once, the detection of AI was like “finding a needle in a haystack”, use of AI is now so pervasive the challenge looks a lot different.

Arushi Saxena discussed the concept of red teaming in AI governance, drawing parallels with its origins in military and cybersecurity contexts. Red teaming involves proactively attacking one’s own systems to identify vulnerabilities, thereby enabling companies to prioritize and mitigate potential risks. Arushi also highlighted government mandates, such as President Biden’s executive order requiring NIST to develop guidelines for red teaming large language models, as indicative of its growing importance.
The panelists agreed that as mainstream adoption of AI and large language models continues to expand, standardized evaluation frameworks like OWASP’s Top 10 for LLM will play a crucial role in ensuring responsible AI development and deployment.
What are some of the biggest risks you see your enterprise customers facing, and how can an insurance assessment help?
This question was mainly directed at Dan. He highlighted the need for proper assessment tools and mitigation strategies to address potential risks effectively and emphasized the importance of implementing better training for internal employees and ensuring a higher level of process maturity to prevent mishaps. Dan noted that their goal at Armilla AI is to provide assessments to ensure the right tooling and processes are in place, reducing the likelihood of incidents occurring.

He also touched on the role of insurance in risk transfer, comparing it to other domains like cybersecurity. He suggested that as organizations demonstrate a certain level of maturity in AI adoption, they become eligible for risk transfer tools, such as insurance, which can provide coverage in case of AI-related incidents.
Regarding the market’s reception to industry standards and third-party assessments, Dan acknowledged that it’s still early days. However, he noted significant progress, citing initiatives like NIST’s active involvement and the recent launch of ISO 42001 as promising steps forward. He highlighted the importance of evolving standards in enabling systematic measurement of AI development processes.
What are some strengths and weaknesses of GenAI in its current state?
Edward took the reigns on this question. He discussed GenAI’s proficiency in performing standard tasks and writing code efficiently, unerscoring its utility in various applications.
However, he also outlined several common risks associated with early adoption. These include challenges related to explainability, biases in models, and vulnerabilities to contamination in training data.
One notable risk Edward discussed is AI models’ susceptibility to injection attacks, where malicious content infiltrates training data, potentially compromising model performance and integrity. He highlighted the significance of addressing these risks and implementing tools to safeguard AI development processes.

On the consequences of contaminated data, he explained how it could lead to increased model hallucinations and reduced intelligence, ultimately affecting model accuracy and performance. He underscored the importance of ensuring the originality and quality of training data to maintain the effectiveness of AI models.
What type of regulatory trends do you foresee for the GenAI landscape?
Arushi began by discussing the EUAI ACT, a centralized set of regulations aimed at harmonizing AI standards. She emphasized its risk-based approach, where requirements vary based on the risk level of AI systems, a model that may influence future US regulations.
Arushi also touched on patchwork regulations at the US state level, such as data transparency laws and watermarking bills, reflecting a growing interest in AI governance across different jurisdictions.
Dan echoed Arushi’s sentiments on the risk-based approach of the EU AI ACT, acknowledging its complexity in determining the risk level of AI applications. He highlighted the impact of generative AI systems like GenAI on regulatory debates, noting their influence on rethinking risk assessment methodologies.
Dan further emphasized the evolving nature of AI regulations, with both federal and local governments introducing laws tailored to specific use cases, such as New York City’s law addressing HR bias in automated decision-making systems.

Hot takes:
The panel ended with a rapid-fire round where each speaker gave their hot takes on:
Advice on how to consider data security and compliance when exploring and working with GenAI.
Dan Adamson: Establishing a responsible AI policy with well-thought-out processes to guide deployment is essential. Proper employee training is also necessary to prevent the misuse of AI systems. Several cases have involved misuse of decision-assist tools, leading to legal repercussions.
Arushi Saxena: Organizations need to prioritize a framework that allows for human-in-the-loop interaction and ensures that AI systems are used responsibly. Training staff and creating policies to support responsible AI development while also highlighting the role of effective communication in educating customers about AI usage is of the utmost importance.

Edward Tian: It’s important to think about how to bridge the gap between producers and consumers of AI-generated information, especially with the shift in education towards AI calibration and of detecting appropriate levels of AI usage.
Fear and anxiety around AI
Arushi Saxena: Creative and educational materials are essential to build trust with customers. For example, model cards and accompanying documentation that explain the intended use and limitations of AI models. By providing clear guidelines and communication, companies can alleviate fears and increase trust among customers.
Dan Adamson: The role of independent assessments will become more important in gaining customers’ trust, especially in compliance-driven industries. Internal communication and staff training are key to ensure the proper use of AI tools and incident handling.
Final word
In conclusion, the panelists expressed optimism about the potential of GenAI in compliance, citing productivity gains and accuracy boosts as key benefits. However, they emphasized the need for responsible AI deployment and ongoing vigilance to ensure the ethical and transparent use of GenAI technologies.
As organizations navigate the complex landscape of AI adoption, adherence to best practices, compliance standards, and transparent communication will be essential in building trust and mitigating potential risks associated with AI implementation.
Thoropass is actively working on implementing responsible AI into its practices and developing safe and useful tools for customers. Book time with an expert if you’d like to chat more.