Key Takeaways from Thoropass Connect: Ethical and Responsible Use of AI

With AI becoming a core part of enterprise strategy, cybersecurity professionals are navigating the multifaceted dimensions of responsible and ethical AI use. Meanwhile, executives across business functions are increasingly interested in joining the conversation, seeing AI-savviness as critical to meeting strategic business objectives. That’s why we dedicated time at Thoropass Connect 2024 to a panel discussion on the Ethical and Responsible Use of AI, led by Thoropass CEO Sam Li. Joining Sam were Dan Ross of Dynamo AI, Mason Allen of Reality Defender, and Kaitlin Betancourt of Goodwin Law, who unpacked the meaning of responsible AI, discussed essential compliance frameworks to deploy, and shared highlights from their playbooks for safeguarding against the specific threats posed by AI.

In case you missed the event, here are the top takeaways on responsible AI to advance your cybersecurity strategy and drive alignment between key stakeholders.

Buyers are wary of biases & hallucinations in AI models

Mason Allen is Head of Revenue and Partnerships at Reality Defender, a deepfake detection company that identifies synthetic media across audio, images, video, and text. Mason spoke about how the executive-level conversation around identifying bias in AI models has become more nuanced in the last decade. He described what he sees now in the market, saying, “The first questions we receive [are]: How biased are your models? Do you have benchmarks against that?” On the go-to-market side, showing prospective customers that you understand and can mitigate those challenges is critical.

Dan Ross is Head of AI Compliance Strategy at Dynamo AI, a firm that helps enterprises deploy compliant AI, and he’s seeing a similar trend in concerns around AI hallucinations. Terms like “hallucination” originated among machine learning engineers, Dan explained, but now “they’re becoming more standardized and discussed, and they’re starting to show up on risk reports and board reports and audit reports.” As industry decision-makers grow more attuned to the risks of AI, he empowers them to test scenarios, discuss the risks that arise, and then make an educated call based on the intended use case.

To watch more of the conversation around understanding and identifying biases in AI, see this short clip:

Your responsible AI framework is unique to your business

Kaitlin Betancourt is a Goodwin Law partner who specializes in cybersecurity law and advises clients on AI. She encourages cybersecurity professionals to take the first step toward building a responsible AI framework by assembling a group of cross-functional stakeholders. The objective is to discuss your organization’s culture and risk tolerance around AI from a range of perspectives.

That meeting “should ultimately culminate in some sort of responsible AI policy statement and/or framework,” she said, “and that will lead to, okay, well, how do we operationalize our principles?” Kaitlin advises selecting a risk management framework, such as the National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework (AI RMF). To help cybersecurity professionals put the framework into practice, NIST offers the NIST AI RMF Playbook, which includes suggestions organizations can use or borrow from to govern, map, measure, and manage risk.
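
To make those four functions concrete, here is a minimal, purely illustrative sketch (our own, not from the panel or prescribed by NIST) of how a team might encode an AI risk-register entry around the AI RMF’s govern, map, measure, and manage functions. Every name and value below is a hypothetical assumption:

```python
# Hypothetical sketch: an AI risk-register entry organized around the four
# NIST AI RMF functions (Govern, Map, Measure, Manage). Field names and
# values are illustrative only; NIST does not prescribe this structure.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    system: str        # the AI system or use case under review
    risk: str          # the risk being tracked
    govern: str        # policy and accountability owner (Govern)
    map_context: str   # where and how the risk arises (Map)
    measure: str       # metric or test used to assess it (Measure)
    manage: str        # mitigation or response plan (Manage)

register = [
    AIRiskEntry(
        system="customer-support chatbot",
        risk="hallucinated answers about product policies",
        govern="responsible AI policy statement, owned by the CISO",
        map_context="open-ended customer Q&A over policy documents",
        measure="weekly factuality evaluation against a reviewed answer set",
        manage="retrieval grounding plus human escalation on low confidence",
    ),
]

for entry in register:
    print(f"[{entry.system}] {entry.risk} -> manage: {entry.manage}")
```

Even a lightweight structure like this gives risk managers and auditors a shared artifact to review when the policy conversation turns to operations.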

For more information, watch this discussion clip on how to build a responsible AI framework: 

Think about the human & human-in-the-loop

Kaitlin Betancourt raised a critical aspect of developing AI policy beyond organizational objectives: the human perspective. She said, “When we think about AI, we are thinking about the impact on the human and the human-in-the-loop.” A buzzy generative AI term, human-in-the-loop refers to ensuring a human is active in the design, training, and operation of the GenAI model or process and retains ultimate oversight and control of that model.
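
In practice, the oversight half of that definition often takes the form of an approval gate: the model’s output is held until a named human signs off before it reaches anyone downstream. The following minimal sketch illustrates the pattern; the function names and review flow are our own hypothetical assumptions, not any specific product’s API:

```python
# Hypothetical human-in-the-loop gate: a generated draft is held until a
# human reviewer approves it; rejected drafts never leave the loop.

def generate_draft(prompt: str) -> str:
    # Stand-in for a GenAI call; a real system would invoke its model here.
    return f"Draft response for: {prompt}"

def human_review(draft: str, reviewer: str) -> bool:
    # Stand-in for a real review step (ticket queue, approval UI, etc.).
    print(f"{reviewer}, please review:\n{draft}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def respond_with_oversight(prompt: str, reviewer: str) -> str | None:
    draft = generate_draft(prompt)
    if human_review(draft, reviewer):
        return draft  # only human-approved output is released
    return None       # the human retains ultimate control

if __name__ == "__main__":
    answer = respond_with_oversight("Summarize our refund policy.", "risk-ops")
    print(answer or "Draft rejected; nothing was sent.")
```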

When it comes to humans, education is vital. Mason Allen pointed out that while cybersecurity professionals live and breathe these conversations daily, the rest of their colleagues do not necessarily know that specific modalities like deepfakes exist. He shared a story from earlier this year in which a bad actor scammed a multinational Hong Kong-based company out of $25.6M by using a digitally recreated version of the company’s CFO on a video conference call to instruct employees to transfer funds. The anecdote shows that, in the race to empower enterprises to deploy AI, the value of simply raising awareness can’t be overstated.

Dan Ross agrees that the conversation on responsible AI needs to extend beyond AI governance to how existing regulations apply in the context of AI. Non-technical cybersecurity professionals, such as risk managers or auditors, need to join technical experts in the conversations around creating guardrails. This matters because they are the ones who must defend safety measures to other non-technical stakeholders, whether auditors, bankers, regulators, or the public. Non-technical users need to understand the data points that come out of an AI model and the nuances around guardrails so that they can serve as part of the control framework.

To see more of the panel’s conversation around the human-in-the-loop, watch this short clip:

Last thoughts on Thoropass Connect’s Ethical and Responsible Use of AI panel

As AI continues to integrate into the core of enterprise strategy, it’s clear that building frameworks for responsible and ethical AI use is no longer an afterthought. Businesses can mitigate potential risks by acknowledging and addressing concerns around bias, hallucinations, and emerging threats like deepfakes. Collaborating across teams to create customized AI policies and adopting frameworks like NIST’s AI RMF will help cybersecurity professionals navigate the complexities of AI governance. Ultimately, involving technical and non-technical stakeholders in the conversation ensures that AI is compliant, safe, and aligned with broader business objectives, fostering trust and accountability in its deployment.

Want more expert insights? Many other interesting topics came up in this panel, from debates around open source vs. commercial models to the complexity of managing cross-state regulations. To dive in, you can watch the panel now in its entirety. 

If you’re ready to see how Thoropass makes compliance easy regardless of where you are in your journey, book a call with one of our experts. Or read more about how we help cybersecurity professionals in HealthTech, FinTech, SaaS, and more get compliant and future-proof their businesses.
