The Risks of Using AI Tools like ChatGPT in ISO 27001 Compliance: What To Look Out For  

ChatGPT & Compleye

Love it or hate it, ChatGPT, and AI (Artificial Intelligence) in general, is here to stay.  

All those in favour say AI should be seen as a tool that can be used to make organisations and individuals more efficient, while all those against wonder if we are falling into the trap of quantity versus quality.  

It’s an enormous subject with room for a multitude of opinions and endless debate.  

So, let’s narrow it down to how it affects us. Let’s delve into the ChatGPT compliance world. 

ChatGPT and compliance: are compliance officers at risk?   

When ChatGPT emerged, copywriters, graphic designers, musicians and just about anyone else with a creative bent went into panic mode. Would jobs be lost? Would we forget how to write, design, draw, create?  

The possibility is not beyond the realm of imagination. It’s not unlikely that we will become less creative, less able to think for ourselves and more dependent on AI to do our jobs for us. In fact, some recent studies predict that AI will be responsible for hundreds of thousands of job losses.  

But, even though ChatGPT mimics humanness in some ways (even referring to itself as ‘me’ and ‘I’), something we must remember is that AI doesn’t have: 

  • Personal experiences  
  • Emotions 
  • Intrinsic human memory  
  • The ability to sympathise  
  • The ability to empathise.  

So, are compliance officers at risk?  

To ascertain this, we would need to consider whether a compliance officer’s work depends on those distinctly human abilities.  

Before that, though, we need to look at what ChatGPT is and what limitations it has.  

What is ChatGPT?   

The GPT in ChatGPT stands for “Generative Pre-trained Transformer”, which Al Jazeera describes as “a type of large language model (LLM) neural network that can perform various natural language processing tasks such as answering questions, summarising text and even generating lines of code.” 

ChatGPT is an AI model that has been trained to provide answers to instructions provided by users. It does this using RLHF (Reinforcement Learning from Human Feedback) and PPO (Proximal Policy Optimisation).  
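To make the RLHF idea concrete, here is a minimal sketch of PPO’s clipped objective, the update rule at the heart of that fine-tuning loop. The function and values are illustrative only – this is not OpenAI’s actual training code:

```python
# Minimal sketch of PPO's clipped surrogate objective, the update rule
# used in RLHF fine-tuning. Illustration only, not OpenAI's real code.

def ppo_clipped_objective(prob_new, prob_old, advantage, epsilon=0.2):
    """Return the clipped PPO objective for a single action.

    prob_new / prob_old is the probability ratio between the updated
    and previous policy; advantage is the reward-model-derived estimate
    of how much better the action was than expected.
    """
    ratio = prob_new / prob_old
    clipped = max(min(ratio, 1 + epsilon), 1 - epsilon)
    # Taking the minimum keeps the policy from moving too far in one update.
    return min(ratio * advantage, clipped * advantage)

# A large jump in probability gets clipped at 1 + epsilon:
print(ppo_clipped_objective(0.9, 0.5, advantage=1.0))  # 1.2, not 1.8
```

The clipping is what makes the human-feedback training stable: even when the reward model strongly favours an answer, each update can only nudge the model a bounded amount.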

OpenAI (the organisation that developed ChatGPT) provides an infographic to explain the learning process. 

Taking this process into account, it’s not unlikely that ChatGPT could essentially learn to tick the compliance boxes for any and all compliance standards in the future.  

It would, however, first need to be trained to understand every aspect of an organisation, from its security protocols, to its systems, products and personnel. More about this organisation-specific training later. 

What is highly unlikely is that ChatGPT could identify shades of grey when it comes to compliance, especially because it may not be able to contextualise evidence within the parameters of specific situations and scenarios.  

In addition, while an AI model such as ChatGPT could generate a report for an auditor, assimilate compliance knowledge, provide summaries and check for updates, will it be able to keep up with the ever-evolving field of compliance? The answer is that – in the future – it probably could. Or rather, ComplianceGPT (when it’s invented) probably will.  

A compliance AI tool could be trained on an organisation’s internal policies and procedures and provide advice and documentation to compliance officers within an organisation about regulations that apply to their company specifically.  

So, in theory, ChatGPT could one day do the basics of a compliance officer’s job.  

However, like all things human and non-human, AI has its limitations.  

The limitations of ChatGPT  


On February 2, 2023, OpenAI’s CEO, Sam Altman, tweeted, “We know that ChatGPT has shortcomings around bias, and are working to improve it.” 

According to a February 23 article in Business Insider, ChatGPT needs to become more ‘woke’ because it’s unlikely that companies will invest in technology that could cause them PR headaches.  

How does ChatGPT’s wokeness relate to its ability to take on the role of a compliance officer? Objectivity is one of the most important skills for compliance officers and – naturally – auditors too.  

This isn’t to say that humans don’t fall victim to bias. The difference is that a compliance officer who shows bias will be held accountable, while an automaton may not be.  

“Compliance officers are expected to provide an objective view of company policies. Influence by other employees, including management and executives, to overlook infractions may result in significant fines or sanctions that may lead to financial loss or even business closure.” – Investopedia 

Lack of common sense  

ChatGPT has no brain, no emotions, no history, no family. It has no morals and no ability to reason and understand. That means it lacks common sense.  

We all need to learn through experience. When you’re a toddler you’ll happily put your hand on a stove plate because you haven’t yet learnt that it could burn you. Once you’ve learnt that you’re likely to get hurt, it becomes common sense not to put your hand on the stove plate. Of course, you can develop common sense by learning from other people’s experiences, but that requires the ability to reason and deduce – something that ChatGPT lacks.  

Or does it?  

Let’s do a quick test.  

We asked ChatGPT, “Is it safe to leave a laptop open on a desk?” 

And this is what it said: 

Leaving a laptop open on a desk is generally safe as long as the desk is stable and the laptop is not in danger of being knocked over or exposed to other potential hazards. However, there are a few factors to consider that could affect the safety of leaving a laptop open on a desk: 

  • Security: If you are leaving your laptop unattended in a public place, it may not be safe from theft. It’s important to keep your laptop secure by locking it with a password or using a laptop lock. 
  • Temperature: Laptops generate heat when in use, and leaving a laptop on a desk for an extended period of time could cause it to overheat. Make sure the ventilation on the laptop is not blocked and the laptop is not placed in direct sunlight or near a source of heat. 
  • Power: If your laptop is not plugged into a power source, leaving it open on a desk for an extended period of time could cause the battery to drain, which could affect its performance and lifespan. 

In summary, it is generally safe to leave a laptop open on a desk, but it’s important to consider security, temperature, and power issues before doing so. 

At no point did ChatGPT consider compliance, privacy or data security. A lack of common sense? Definitely.  

It should be noted that when specifically directed to consider compliance as part of the same scenario, ChatGPT’s response was completely different and conformed to what would be expected as a response to a compliance-specific question.  

Inability to understand context  

In their article ‘The Top 10 Limitations of ChatGPT’, Forbes explains that, “ChatGPT has difficulty understanding context, especially sarcasm and humor. While ChatGPT is proficient in language processing, it can struggle to grasp the subtle nuances of human communication. For example, if a user were to use sarcasm or humor in their message, ChatGPT may fail to pick up on the intended meaning and instead provide a response that is inappropriate or irrelevant.” 

Dependence on training data  

Training data is used to train algorithms so that they can predict outcomes. As you can see from the diagram below, extracted from CloudFactory’s article “The Essential Guide to Quality Training Data for Machine Learning”, when it comes to training data, there are multiple points at which human input is necessary.  

This means that: 

  1. The AI depends on the accuracy of human input 
  2. The AI cannot function without a human in the system  

As with most things AI – and we saw this with the laptop example above – what you put in is what you get out. So, if the information and knowledge that is input by humans is inaccurate, untrue or of poor quality, then the output that the end user receives from the AI will similarly be inaccurate, untrue or of poor quality.  
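A toy example makes the point. The trivial ‘model’ below simply memorises its human-provided labels, so any labelling error goes straight into its output (the filenames and labels are invented for illustration):

```python
# Toy illustration of "garbage in, garbage out": a trivial model that
# memorises its training labels will faithfully reproduce any labelling
# errors. The data and labels here are invented for the example.

def train(examples):
    """'Train' a lookup model from (input, label) pairs."""
    return dict(examples)

def predict(model, item):
    return model.get(item, "unknown")

# One human-labelled example is wrong: a password file marked as public.
training_data = [
    ("customer_invoices.csv", "confidential"),
    ("press_release.txt", "public"),
    ("passwords.txt", "public"),   # mislabelled by a human annotator
]

model = train(training_data)
print(predict(model, "passwords.txt"))  # "public" - the error is reproduced
```

Real machine learning models generalise rather than memorise, but the principle is the same: systematic errors in the human-curated training data resurface in the model’s answers.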

Limited domain knowledge  

Domain knowledge refers to the intelligence one has about a specific area within a specific industry. So, essentially, domain knowledge = expertise. ChatGPT is not an expert in any particular field. This means that applying compliance within a specific domain will be difficult unless the user first trains ChatGPT to understand their industry, organisation or context.  

This can be done, of course, but it would increase the time taken to implement compliance frameworks rather than improve efficiency.  

ChatGPT and GDPR: what do we know?  

Italy’s data protection authority has ordered OpenAI to stop processing people’s data in Italy because it’s concerned that ChatGPT is breaching the European Union’s General Data Protection Regulation. 

According to the BBC, “The watchdog said on 20 March 2023 that the app had experienced a data breach involving user conversations and payment information. 

It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”. 

It also said that since there was no way to verify the age of users, the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.” 

On the surface it does seem that ChatGPT is in breach of key GDPR principles, but judging by the lack of bandwagon-jumping, this could just be a knee-jerk reaction from Italy. The situation – as with anything AI – is ever-evolving. 

So, what exactly do we know about ChatGPT and GDPR? 

  • Purpose limitation  

According to The Data Protection Commission, “Personal data should only be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes.”  

We’ll have to keep an eye on what happens next in the Italy scenario to know whether or not OpenAI is practising purpose limitation as it should be.  

  • Separation of data  

Of vital concern for many organisations is the protection of data that is stored on shared systems. There are a number of reasons why this is important. One of them, of course, is regulatory compliance.  

“Because of the potential impact unauthorized access can have on a business, it is very important that organizations implement robust data segregation measures to limit access to sensitive data.” – Next Labs. 

  • Storage limitation  

Although ChatGPT itself doesn’t control the data that is input and doesn’t have the ability to delete data, the data used to train the AI is stored in large databases. The organisations that use the data to train ChatGPT claim that they may need to keep it for long periods of time for research or training purposes.  

ChatGPT and ISO 27001: what to look out for?  

  • Access controls  

Annex A.9.1 of ISO 27001 requires that an organisation has an Access Control Policy, defining which users have access to which networks and services. 

At this stage, ChatGPT wouldn’t be able to perform this task unless it had been extensively trained on an organisation’s access control protocols. Although it could churn out guidance on how to create the policy – guidance that would vary according to an organisation’s specific industry – it wouldn’t be able to create the policy itself. This is because creating a comprehensive and effective access control policy requires a deep understanding of the specific business needs, risks and regulatory requirements of a specific organisation. In fact, ChatGPT itself recommends that organisations engage experienced information security professionals to create and implement an access control policy tailored to their specific needs. 
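To illustrate the kind of organisation-specific knowledge such a policy encodes, here is a hedged sketch of a role-based access check. The roles and services are hypothetical – a real ISO 27001 Access Control Policy is far richer than this:

```python
# A sketch of the organisation-specific mapping an access control
# policy encodes. Roles and services below are hypothetical examples.

ACCESS_POLICY = {
    "developer": {"git", "ci-pipeline", "staging-db"},
    "finance":   {"billing-system", "erp"},
    "admin":     {"git", "ci-pipeline", "staging-db", "billing-system", "erp"},
}

def is_access_allowed(role, service):
    """Check a role against the policy; unknown roles are denied by default."""
    return service in ACCESS_POLICY.get(role, set())

print(is_access_allowed("developer", "billing-system"))  # False
print(is_access_allowed("finance", "erp"))               # True
```

The hard part is not the lookup – it is deciding what belongs in `ACCESS_POLICY` in the first place, which is precisely the business-specific judgement a generic AI model lacks.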

  • Incident management  

ISO 27001 defines a security incident as “an unwanted event that could endanger the confidentiality, integrity, or availability of information”. 

An incident can vary from fairly innocuous to extremely threatening for an organisation. Managing an incident requires not only a deep understanding of a particular organisation’s assets, vulnerabilities and response procedure, but a sensitivity with regards to communication around the incident.  

Reporting procedures, post-incident reviews and improvement recommendations are best formulated by a professional person who has a complex understanding of the organisation involved.  

Again, although ChatGPT can provide you with step-by-step instructions on what to do, unless it has been extensively trained in every aspect of your organisation, it won’t be able to manage the incident or create accurate communications and reports around it.  
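As a sketch of what structured incident recording involves, the example below classifies a hypothetical incident’s severity. The fields and thresholds are invented for illustration, not taken from ISO 27001:

```python
# A minimal sketch of structured incident recording. The severity
# thresholds and fields are invented for illustration; a real response
# procedure would be tailored to the organisation's assets and risks.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SecurityIncident:
    description: str
    affected_assets: list
    data_exposed: bool
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def severity(self):
        # Any confirmed data exposure is treated as critical;
        # otherwise severity scales with the number of assets hit.
        if self.data_exposed:
            return "critical"
        return "high" if len(self.affected_assets) > 3 else "moderate"

incident = SecurityIncident(
    description="Laptop left unlocked in a shared workspace",
    affected_assets=["laptop-042"],
    data_exposed=False,
)
print(incident.severity())  # moderate
```

Even in this toy form, the judgement calls (what counts as exposure, where the thresholds sit, who gets notified) depend on organisational context that has to be supplied by people who know the business.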

  • Monitoring and testing  

When asked if it can perform ISO 27001 monitoring and testing, ChatGPT provided guidance and advice on monitoring and testing activities, including: 

  • Vulnerability scanning and penetration testing 
  • Access control testing 
  • Incident response testing 
  • Business continuity testing 
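Those recurring activities are exactly the kind of checklist an organisation could track in code. The sketch below is illustrative – each boolean stands in for a real scan or test harness:

```python
# Sketch of tracking a recurring ISO 27001 monitoring checklist.
# The check names and statuses are stand-ins; in practice each entry
# would be populated by a real scanner or test run.

MONITORING_CHECKS = {
    "vulnerability_scan_current": True,
    "access_control_test_passed": True,
    "incident_response_drill_done": False,
    "business_continuity_test_done": True,
}

def overdue_checks(checks):
    """Return the checks that still need attention, sorted by name."""
    return sorted(name for name, done in checks.items() if not done)

print(overdue_checks(MONITORING_CHECKS))  # ['incident_response_drill_done']
```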

Why ChatGPT is a bad fit for compliance in general 

Compliance relies on an accurate and meticulous understanding of regulatory frameworks. It is inherently dependent on honest and exact inputs and outputs.  

OpenAI lists two of ChatGPT’s limitations as: 

  1. ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers… 
  2. The model is often excessively verbose … 

As a tech start-up you definitely don’t need incorrect information when you head into your ISO 27001 audit and you also want to keep information as succinct and transparent as humanly (yes, humanly) possible.  

How you could use ChatGPT in compliance  

ChatGPT can give you the run-of-the-mill what-to-do and how-to-do-it of ISO 27001 compliance. What it can’t do is create bespoke content for your compliance journey based on your organisation’s specific information and needs.   

With regards to AI, law professor Matthew Sag of Emory University says, “There’s a saying that an infinite number of monkeys will eventually give you Shakespeare.” 

It’s important that we realise that ChatGPT is really just that (for now, anyway) – the result of an enormous amount of data from unknown sources being compiled into ‘facts’.  

The results of this when it comes to compliance with standards such as ISO 27001 could be disastrous.  

So, if you’re going to use ChatGPT or any other AI for compliance (or anything else for that matter), it’s advisable to check and double-check the content that the AI creates.  

A better compliance journey awaits 

Could ChatGPT Compliance be the future of compliance?


For now, while you could ostensibly use ChatGPT to create a generic roadmap for your ISO 27001 journey, there’s already a tried-and-trusted tool (with accompanying directions) in existence… one that’s been created by experts in the field of lean compliance.  

It’s called Compleye Online and you can book a free demo here!
