Your employees are already using AI. The time to think about responsible use and ISO 42001 is now. 

AI is everywhere—from how we market our products to how we make decisions in hiring, healthcare, and finance. But while the technology is rapidly evolving, our ability to use it responsibly still has a long way to go. That’s where ISO 42001 comes in.

Having spent over two decades in cybersecurity working closely with organizations trying to mitigate risk, I’ve seen an explosion of questions and demand for clarity across every sector. I recently sat down with Thoropass (full interview linked below) to unpack what ISO 42001 is, why it matters, and how organizations can confidently take advantage of the opportunities AI brings without sacrificing compliance.

Why ISO 42001 Matters

ISO 42001 is the world’s first global standard for AI management systems. It doesn’t just regulate AI—it helps organizations define what “responsible use” actually looks like in practice. Think of it as a way to bring governance to the chaos, providing a framework that guides how AI is implemented, monitored, and validated.

Unlike ISO 27001, which protects the what (data, systems), ISO 42001 protects the how—how AI decisions are made, how risks are managed, and how the biases AI can introduce are addressed.

The First Step: Know Your AI Landscape

Before you can comply with ISO 42001, you need a full picture of where and how AI is being used in your organization. That includes both the sanctioned use you’re aware of and “shadow AI”—individual teams or employees leveraging tools like ChatGPT or in-app AI features under the radar.

Understanding this landscape allows you to take control before AI use becomes too decentralized or risky to rein in.

Building Responsible Use Into the Business

Organizations have two main things to consider when it comes to ISO 42001 compliance:

  1. Define Acceptable Use: Start with a lightweight policy. What’s OK in your organization—using AI to generate content? Making hiring decisions? The two have wildly different implications, so understand and define these boundaries upfront. As you learn how AI is being used and what you want from it, consider building a more formal governance program or folding AI into your existing IT governance processes.
  2. Respond to Customer Expectations: In highly regulated industries like finance, healthcare, or government, your customers will soon start requiring AI governance in vendor contracts. ISO 42001 certification becomes a competitive advantage by signaling you’ve already done the work.

What Implementation Looks Like

If you’re familiar with ISO 27001 or similar ISO frameworks, the process will feel familiar: policies, procedures, management oversight, and internal auditing. The biggest difference? The philosophical lift.

Unlike traditional compliance, ISO 42001 forces organizations to define their own interpretation of “responsibility.” What does it mean for your business to use AI ethically and safely? There’s no one-size-fits-all answer. That’s both the challenge and the opportunity of this standard.

Bias, Human Oversight, and the “Human in the Loop”

ISO 42001 doesn’t dictate exactly how to manage bias or when humans need to intervene in AI processes. Instead, it pushes you to address those issues explicitly:

  • What biases exist in your use cases?
  • When must a human validate AI output?
  • Where do you draw the line between assistance and autonomous decision-making?

For example, using AI to generate social media copy isn’t the same as using it to screen job candidates. The risks—and the governance required—are completely different.

When Should You Start?

Organizations need to start considering AI governance and ISO 42001 compliance now. Employees are already using AI, often without considering the risk to the organization. I’ve talked to dozens of companies that know they want to use AI but haven’t yet defined how—and they don’t always realize there’s probably already a fair amount of AI use happening within their teams.

Start developing your AI use policy before the tech gets baked into your processes in ungoverned ways that you can’t control. It’s far easier to embed governance from the start than to untangle missteps later. Let’s start defining what responsible AI usage looks like now, before it’s too late. 



Watch the full interview here.
To learn more about how Thoropass can help you with ISO 42001 compliance, talk to an expert today.
