Is ChatGPT safe? A balanced look at AI security concerns

Since its launch in November 2022, ChatGPT has revolutionized the way we approach content creation, reporting, and communication. This innovative AI has become a go-to tool for a myriad of business applications, from drafting marketing content to enhancing customer service interactions, conducting research, and creating documentation such as presentations. But is ChatGPT safe for users and their data? In this blog post, we cut through the complexity to address this pressing concern. Our examination covers OpenAI's security measures, potential data risks, and essential safety tips for using ChatGPT responsibly. Stay informed and protected as we unravel the essentials of ChatGPT's safety in the following sections.

Key takeaways

- OpenAI has completed a SOC 2 audit and implements strict access controls to secure ChatGPT, but risks like data breaches still loom
- Individual users need to be vigilant about the information they feed ChatGPT when writing prompts. They should also take care to avoid scams, misuse of AI, and the compromise of sensitive data while using ChatGPT and similar technologies
- Similarly, businesses that incorporate AI and ChatGPT into their products need to take special care to determine a rationale for their use
- Adhering to evolving AI security measures, managing data settings, and understanding legal implications are key to maintaining user trust and privacy

High level: What is ChatGPT?

ChatGPT is a by-product of GPT-3, OpenAI's third-generation Generative Pre-trained Transformer. In basic terms, ChatGPT is powered by a Large Language Model (LLM): a statistical tool that predicts the probability of the next word in a sentence. You can ask ChatGPT questions in English, and it will respond with a 'statistically plausible' answer based on a very large data set.
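The 'predict the next word' idea can be sketched in a few lines of Python. This is a toy illustration with a made-up vocabulary and hand-picked probabilities (not OpenAI's actual model, which learns its probabilities from billions of documents):

```python
import random

# Hypothetical probabilities for illustration only: given a context,
# a language model scores candidate next words and samples from
# that distribution.
next_word_probs = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "moon": 0.1},
}

def predict_next(context: str) -> str:
    """Sample the next word for a known context (raises KeyError otherwise)."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    # random.choices performs weighted sampling over the candidates
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the cat sat on the"))
```

Note that nothing in this process checks whether the sampled word is *true*; it is merely statistically likely, which is exactly why plausible-sounding but wrong answers can emerge.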
The answers may appear well-written and authoritative, but if you dig a little deeper, you may find it generates incorrect answers by unintentionally stitching the wrong pieces of information together. The AI achieves its impressive performance by using machine learning algorithms to predict the next word in a sequence, given all the previous words within the text. This allows it to generate coherent and contextually relevant content. ChatGPT can be fine-tuned for specific tasks, industries, or applications, making it a versatile tool for both personal and professional use. Its conversational abilities make it an excellent fit for chatbots, virtual assistants, and customer service applications, where it can provide instant responses that are often indistinguishable from those a human would give. As such, ChatGPT has quickly become an indispensable asset in various sectors, revolutionizing how we interact with technology.

Evaluating ChatGPT's security profile

OpenAI, the creator of ChatGPT, adheres to stringent security compliance, demonstrated by its SOC 2 audit. This achievement elevates the security of both ChatGPT Enterprise and its API platform, marking the first stride in a series of comprehensive safety measures implemented by OpenAI. To protect customers, intellectual property, and vital data, OpenAI enforces rigorous access control measures and thorough security protocols such as penetration testing.

The complete list of compliance standards on OpenAI's website (as of May 2024) includes:

- CCPA (learn more about CCPA compliance)
- GDPR (learn more about GDPR compliance)
- SOC 2 (learn more about SOC 2 compliance)
- SOC 3
- CSA STAR

The same site offers access (upon request) to various documentation, including pentest reports, their SOC 2 report, SIG Core, and more.
A brief history of ChatGPT information breaches and leaks

The journey of ChatGPT hasn't been without its share of data security incidents, which serve as stark reminders of the vulnerabilities inherent in even the most sophisticated AI platforms. Here's a concise history of some notable breaches:

- March 2023 data leak: A bug allowed some users to see content from another active user's chat history. Perhaps even more concerning, OpenAI shared that the same bug may have made payment-related information unintentionally visible for 1.2% of ChatGPT Plus (the paid version of ChatGPT) subscribers who were active during a specific period. OpenAI took ChatGPT offline to patch the bug. You can read more about the incident on OpenAI's website.
- Credentials breach (June 2022 – May 2023): Over 100,000 ChatGPT account credentials were found for sale on dark web marketplaces. This breach was attributed to various info-stealers like Raccoon, Vidar, and RedLine, which collected the credentials over nearly a year. It underscored the importance of strong password practices and the use of two-factor authentication to enhance account security.

Each information breach has been a learning opportunity for OpenAI, enabling the organization to refine its security strategies. The implementation of advanced encryption, regular security audits, and the introduction of the Bug Bounty program are direct results of past experiences. These proactive steps are part of OpenAI's commitment to user safety and the ongoing effort to secure the integrity of ChatGPT's data.

Is the ChatGPT app safe? Five key considerations

As the adoption of AI-powered tools like ChatGPT continues to rise, it's crucial to scrutinize their safety and integrity. Below, we delve into five key considerations.
1. Potential spread of misinformation/disinformation or copyrighted material

While the platform may have achieved the above-mentioned compliance, this should not create false reassurance about the validity or accuracy of the information it generates. Where does GPT-3 get the information it presents? To the best of our knowledge, it was obtained from at least five sources:

- Common Crawl: Crawled websites, deduplicated, with a ranking process to determine higher-quality sites
- WebText: Texts extracted from websites, again deduplicated, with criteria applied to determine 'quality'
- Books1: A collection of unpublished novels
- Books2: A smaller corpus similar to Books1, the contents of which are undetermined
- Wikipedia: The English version

ChatGPT uses mathematics over a large quantity of content to determine the next words in a sentence, but it may not be accurate or 'understand' the context of the information being presented. ChatGPT may be good at answering general questions but may not be trustworthy when relied on for precise answers. Because of the perceived 'productivity boost' of using AI, ChatGPT may abet the spread of misinformation more quickly than other tools. Moreover, because AI systems generate content based on extensive training data, they may inadvertently infringe copyright if the AI has been trained on protected works and produces similar output.

But think of it this way: Just as you can Google a question and get a variety of answers – some credible and some falling more into the category of misinformation and even disinformation – so too can you ask ChatGPT a question and get a mixed bag of answers. Unlike with Google, however, you may not know the source of the information ChatGPT provides. Any information consumed online needs to be carefully evaluated for accuracy, recency, plagiarism, misinformation, and disinformation.
2. Data collection uses

One major concern with ChatGPT arises when it's used to share personal information (or other sensitive or proprietary information). Any information shared with ChatGPT (or OpenAI) can be used to improve their services. OpenAI also warns users to use the service at their 'own risk' since it is still in 'beta.' To be fair, OpenAI allows you to opt out of having your content 'shared,' but doing so could diminish functionality or limit the ability to address certain use cases. (See OpenAI's Terms of Use for further information.)

Organizations need to exercise caution when sharing personal or sensitive information. For instance, ChatGPT may be able to review your code, but your code should be considered proprietary information. When it comes to processing personal data, you must provide privacy notices and obtain consent for processing such data, which in most cases includes processing by subprocessors like OpenAI. OpenAI may execute a Data Processing Addendum, but realize that once information is 'shared' with the huge data lake, it could be hard to secure (or to ensure it isn't shared further for other purposes).

3. Reputational concerns

If your organization depends on being an 'expert' on certain topics, using ChatGPT may pose reputational hazards. Although ChatGPT may produce content at scale, it may end up taking you longer to vet that content. Many niche publications have taken steps to protect their reputation as subject matter experts by banning the use of AI in their content creation. For example, Food & Wine magazine states, "It is against our guidelines to publish automatically generated content using AI (artificial intelligence) writing tools such as ChatGPT." Similarly, many larger B2B and B2C organizations have comparable policies for their content marketing, including thought leadership and social media.
However, using ChatGPT as a research or editing tool rather than an information-generating tool may still offer efficiencies in the content creation process. While reputation is a top concern, accurately describing your product or service is also essential for protecting your organization against unfair or deceptive trade practices regulated by the Federal Trade Commission. Using ChatGPT to embellish marketing content without human review could lead to violations.

4. Could ChatGPT be used as a hacking tool?

The advanced capabilities of ChatGPT raise questions about its potential misuse in cyberattacks. It's important to evaluate whether ChatGPT could assist in hacking endeavors and what measures are in place to prevent such scenarios. The sophistication of ChatGPT's language processing may lend itself to crafting phishing emails or generating code to exploit vulnerabilities. However, OpenAI has implemented safeguards to detect and prevent the misuse of its technology for malicious purposes. Users must be aware of these risks and of the importance of adhering to cybersecurity best practices when interacting with artificial intelligence platforms like ChatGPT.

5. A proliferation of ChatGPT scams

As the popularity of ChatGPT soars, it's important to recognize that not all associated risks fall within OpenAI's control. Particularly concerning is the uptick in scams that exploit the ChatGPT name, leveraging the AI's prominence to deceive users with fake ChatGPT apps. While OpenAI is committed to user safety, individuals, especially those less familiar with AI technology, must also exercise vigilance. Scammers may craft counterfeit ChatGPT websites, impersonate the service in communications, or promote fraudulent ChatGPT-related offers. These deceptive tactics aim to harvest personal and financial information.
It is crucial for users to stay informed and cautious, recognizing that the onus is on them as much as on OpenAI to navigate these external threats skillfully as ChatGPT becomes increasingly woven into the fabric of everyday technology use.

How to use ChatGPT responsibly: Five tips

You may choose to limit ChatGPT access across your organization, or even ban it outright. But assuming there's some degree of access, it's important to give your employees guidelines for the safe use of AI. Though external threats are inherent in the digital realm, we, as users, bear a responsibility to safeguard our confidential data from security risks. Consider the following tips for using ChatGPT and other AI tools responsibly.

1. Withhold sensitive data

As users, we hold the power to decide what information to share with ChatGPT, so it's vital to make conscious and well-informed decisions about what you share. We recommend avoiding sharing personal or proprietary business information when using ChatGPT. There have been instances of users inadvertently exposing sensitive data, such as internal business details and personally identifiable information. This is particularly crucial in environments where data privacy is paramount, and where the inadvertent exposure of such information could lead to compliance issues, legal repercussions, or competitive disadvantage. Be mindful of the types of conversations you are having and the potential visibility of the content you generate when using AI systems like ChatGPT.
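One practical safeguard is to scrub obvious identifiers before a prompt ever leaves your environment. The sketch below is our own illustrative example (the patterns and placeholder labels are assumptions, not an OpenAI feature), using simple regexes to redact email addresses, US-style phone numbers, and Social Security numbers:

```python
import re

# Illustrative PII patterns; a real deployment would use a dedicated
# PII-detection library, since regexes miss many formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com or call 555-123-4567."))
# → "Email [EMAIL] or call [PHONE]."
```

A pre-processing step like this reduces, but does not eliminate, the risk of sensitive data leaving your organization; human judgment about what belongs in a prompt remains the primary control.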
2. Disable your conversation history for enhanced privacy

Besides withholding sensitive data, users can take further steps to enhance their privacy when interacting with ChatGPT. One such step is disabling 'Chat history & training' in the data controls settings. This prevents the AI from using your conversations to learn and improve, ensuring that the details of your interactions are not stored or analyzed for future training purposes. While this may slightly limit the AI's personalization, it significantly boosts privacy by reducing the digital footprint left behind. Additionally, users should regularly review their privacy settings and watch for updates or changes in OpenAI's policies that might affect how their data is handled.

3. Review and verify all AI content (with a real and qualified human)

Balancing the benefits of AI-generated content with ethical considerations is a delicate act. While AI may be used in various ways to benefit your business, from research through to the writing and editing processes, it's important to introduce review steps to validate its outputs. It can undoubtedly be a powerful tool, but it is not recommended to let AI self-drive any business process, including content creation. Introduce steps to:

- Double-check citations: Trace any cited data or dates back to an authoritative site
- Verify information: Review multiple authoritative sources to confirm any information
- Run a plagiarism checker: Ensure the originality of the content and avoid potential copyright issues
- Review writing quality: Make sure the voice and tone are on-brand and the sentence structure is friendly to other human readers!
- Double-check for AI detection: If the content will be published online, use tools to determine whether it could be flagged as AI-generated, which might affect its credibility or SEO ranking
- Other steps: Consider any other steps relevant to your field or industry to ensure the highest standards of accuracy and reliability

While AI is sometimes touted as replacing certain roles, its use also necessitates new review processes. In a world where most consumers feel information overload, human editors, subject matter experts, and the need for original, credible, and digestible information are key.

4. Conduct vendor due diligence and review contractual obligations

Organizations may be required to perform vendor due diligence on the third parties they use. These activities generally include reviewing attestations, certifications, or completed questionnaires to gain assurance that the vendor implements controls ensuring adequate security and privacy. Organizations may also be under contractual obligations to perform these reviews and obtain assurances.

Furthermore, certain contractual obligations may be passed down to tertiary organizations. For example, you may be using a service provider required to meet your security and privacy requirements. They may use another service provider, which should also meet the same security and privacy standards as the first. This may be the case where the service provider uses an API call to OpenAI to process the data. Unless the service provider has established its own 'private' AI solution, it may be 'subcontracting' work to another AI provider. If this secondary provider doesn't provide the level of security and privacy the end user expects of the primary service provider, the primary service provider may be in breach of its contractual obligations by using this secondary provider.
5. Stay up-to-date with emerging AI governance

AI governance aims to address the complex challenges posed by advanced AI systems, including issues of transparency, accountability, and ethical use. It involves a collaborative effort among policymakers, technologists, and stakeholders to create a framework that balances innovation with societal values. Regulatory bodies around the world are working to develop standards and guidelines for AI. These may include requirements for the explainability of AI decision-making processes, adherence to privacy laws, and measures to prevent discrimination and bias in AI systems. By keeping informed about emerging AI governance, users can better navigate the ethical and regulatory landscape, advocate for responsible AI use, and contribute to the development of policies that promote trust and safety in AI applications.

Conclusion: Like any tool, AI carries risk

ChatGPT may be a useful tool, but that is exactly what it is: a tool. Like any tool, you should take special care if you plan to use it. There are use cases where ChatGPT can make work more efficient, but others need more limitations or restrictions. Organizations cannot ignore the risks associated with any new technology. They must analyze these risks, determine a plan of action to mitigate them, implement appropriate controls, and monitor the use of and processes around these tools. Moreover, staying on top of evolving AI governance (e.g., the EU AI Act) is key.

ChatGPT will not replace humans anytime soon and must be treated with a certain level of scrutiny, considering the risks outlined above. In this Compliance Director's opinion, organizations shouldn't fall for the hype. They should perform due diligence related to their particular business case before using ChatGPT (and other AI solutions).
If your organization needs help determining appropriate use cases that maintain compliance with regulatory, contractual, or industry best practices, get in touch. We have experts at Thoropass (formerly Laika) who can help!

*This blog was NOT generated by ChatGPT 🙂

Note: This post was originally published on Feb 27, 2023, but has since been updated and reviewed by internal subject matter experts.

More FAQs

How can I protect my ChatGPT account?
To protect your ChatGPT account, use a strong password, enable multi-factor authentication, and manage your data settings within the platform. These steps will enhance the security of your account and help keep your information safe.

What kind of sensitive data should I withhold when using ChatGPT?
Avoid inputting personal, proprietary business, or confidential information. This helps protect your sensitive data from being shared unnecessarily.

How can I avoid ChatGPT-related scams?
Always verify the authenticity of websites and apps, and be vigilant about phishing emails, offers, and fake apps. Stay safe!

Does OpenAI have a privacy policy?
Yes, OpenAI has a privacy policy that complies with major regulations like CCPA and GDPR.

Jay Trinckes
Data Protection Officer