Webinar: Understanding safety and security in GenAI

Navigating compliance in the age of Generative AI (GenAI) presents both opportunities and challenges. In a recent webinar, Thoropass Co-Founder and CEO Sam Li sat down with a panel of experts, including:

Sam guided the panel through several questions that examined how GenAI in compliance is shifting paradigms in risk management and regulatory adherence. Here are some highlights that uncover how GenAI is leveraged for proactive, efficient, and ethically aligned compliance strategies. You can watch the full recording here.

How do you define responsible GenAI adoption? 

There are some key areas that organizations–large and small–must prioritize to ensure responsible development and use of GenAI.

According to Dan Adamson, responsible AI is highlighted by pillars such as:

  • trust, 
  • explainability, 
  • transparency, 
  • fairness; and sometimes, 
  • sustainability. 

These principles form the foundation for ensuring the responsible development and usage of GenAI.

Arushi Saxena added on the importance of operationalizing governance and fostering collaboration across teams involved in AI development. She stressed the need for:

  • training, 
  • hiring the right talent,
  • legal reviews; and,
  • effective communication strategies 

Edward Tian elaborated on two critical aspects of responsible AI adoption. Firstly, he underscored the importance of maintaining “humans in the loop” throughout AI development to balance AI and human contributions effectively. Secondly, he emphasized the necessity of truth in understanding AI’s impact, advocating for transparency in AI-generated content.

Collectively, the experts highlighted the imperative for organizations to prioritize human involvement, operationalize governance processes, and embrace transparency with proactive approaches and data-driven strategies to navigate the evolving landscape of GenAI.

How do you think the GenAI landscape has changed since you launched compared to today?

Edward Tian likened this transition to the ongoing evolution in AI chess models pointing out that even the best AI chess model isn’t as good as the best AI chess model in addition to the best human AI chess player. He highlighted the shift from sporadic AI instances to a pervasive integration of AI in content creation, necessitating a new approach to categorization and understanding its impact.

Edward Tian further outlined measures his company takes to assist businesses in detecting and managing AI-generated content, including copyright detection, plagiarism checks, and bias assessments. Whereas once, the detection of AI was like “finding a needle in a haystack”, use of AI is now so pervasive the challenge looks a lot different.

Arushi Saxena discussed the concept of red teaming in AI governance, drawing parallels with its origins in military and cybersecurity contexts. Red teaming involves proactively attacking one’s own systems to identify vulnerabilities, thereby enabling companies to prioritize and mitigate potential risks. Arushi also highlighted government mandates, such as President Biden’s executive order requiring NIST to develop guidelines for red teaming large language models, as indicative of its growing importance. 

The panelists agreed that as mainstream adoption of AI and large language models continues to expand, standardized evaluation frameworks like OWASP’s Top 10 for LLM will play a crucial role in ensuring responsible AI development and deployment.

What are some of the biggest risks you see your enterprise customers facing, and how can an insurance assessment help? 

This question was mainly directed at Dan. He highlighted the need for proper assessment tools and mitigation strategies to address potential risks effectively and emphasized the importance of implementing better training for internal employees and ensuring a higher level of process maturity to prevent mishaps. Dan noted that their goal at Armilla AI is to provide assessments to ensure the right tooling and processes are in place, reducing the likelihood of incidents occurring.

He also touched on the role of insurance in risk transfer, comparing it to other domains like cybersecurity. He suggested that as organizations demonstrate a certain level of maturity in AI adoption, they become eligible for risk transfer tools, such as insurance, which can provide coverage in case of AI-related incidents.

Regarding the market’s reception to industry standards and third-party assessments, Dan acknowledged that it’s still early days. However, he noted significant progress, citing initiatives like NIST’s active involvement and the recent launch of ISO 42001 as promising steps forward. He highlighted the importance of evolving standards in enabling systematic measurement of AI development processes.

What are some strengths and weaknesses of GenAI in its current state?

Edward took the reigns on this question. He discussed GenAI’s proficiency in performing standard tasks and writing code efficiently, unerscoring its utility in various applications. 

However, he also outlined several common risks associated with early adoption. These include challenges related to explainability, biases in models, and vulnerabilities to contamination in training data.

One notable risk Edward discussed is AI models’ susceptibility to injection attacks, where malicious content infiltrates training data, potentially compromising model performance and integrity. He highlighted the significance of addressing these risks and implementing tools to safeguard AI development processes.

On the consequences of contaminated data, he explained how it could lead to increased model hallucinations and reduced intelligence, ultimately affecting model accuracy and performance. He underscored the importance of ensuring the originality and quality of training data to maintain the effectiveness of AI models.

Arushi began by discussing the EUAI ACT, a centralized set of regulations aimed at harmonizing AI standards. She emphasized its risk-based approach, where requirements vary based on the risk level of AI systems, a model that may influence future US regulations. 

Arushi also touched on patchwork regulations at the US state level, such as data transparency laws and watermarking bills, reflecting a growing interest in AI governance across different jurisdictions.

Dan echoed Arushi’s sentiments on the risk-based approach of the EU AI ACT, acknowledging its complexity in determining the risk level of AI applications. He highlighted the impact of generative AI systems like GenAI on regulatory debates, noting their influence on rethinking risk assessment methodologies. 

Dan further emphasized the evolving nature of AI regulations, with both federal and local governments introducing laws tailored to specific use cases, such as New York City’s law addressing HR bias in automated decision-making systems.

Hot takes: 

The panel ended with a rapid-fire round where each speaker gave their hot takes on:

Advice on how to consider data security and compliance when exploring and working with GenAI.

Dan Adamson: Establishing a responsible AI policy with well-thought-out processes to guide deployment is essential. Proper employee training is also necessary to prevent the misuse of AI systems. Several cases have involved misuse of decision-assist tools, leading to legal repercussions.

Arushi Saxena: Organizations need to prioritize a framework that allows for human-in-the-loop interaction and ensures that AI systems are used responsibly. Training staff and creating policies to support responsible AI development while also highlighting the role of effective communication in educating customers about AI usage is of the utmost importance.

Edward Tian: It’s important to think about how to bridge the gap between producers and consumers of AI-generated information, especially with the shift in education towards AI calibration and of detecting appropriate levels of AI usage.

Fear and anxiety around AI

Arushi Saxena: Creative and educational materials are essential to build trust with customers. For example, model cards and accompanying documentation that explain the intended use and limitations of AI models. By providing clear guidelines and communication, companies can alleviate fears and increase trust among customers.

Dan Adamson: The role of independent assessments will become more important in gaining customers’ trust, especially in compliance-driven industries. Internal communication and staff training are key to ensure the proper use of AI tools and incident handling.

Final word

In conclusion, the panelists expressed optimism about the potential of GenAI in compliance, citing productivity gains and accuracy boosts as key benefits. However, they emphasized the need for responsible AI deployment and ongoing vigilance to ensure the ethical and transparent use of GenAI technologies. 

As organizations navigate the complex landscape of AI adoption, adherence to best practices, compliance standards, and transparent communication will be essential in building trust and mitigating potential risks associated with AI implementation.

Thoropass is actively working on implementing responsible AI into its practices and developing safe and useful tools for customers. Book time with an expert if you’d like to chat more.

Share this post with your network:

LinkedIn