ChatGPT, built on OpenAI's Generative Pre-trained Transformer (GPT) models, surpassed one million users within a week of its launch and 100 million users within its first two months. Its ability to articulately answer questions, compose emails, and write essays and code has been described as “magical”, fuelling its viral popularity. However, the widespread use of this generative AI system has raised serious privacy concerns.
ChatGPT uses deep learning to generate responses based on data entered into the system. That same data is used to train and improve the AI, which is still in a testing phase. Many critics have called this problematic, as individuals may not know that their data is being used to train the system. Whilst ChatGPT’s privacy policy describes how data is used and protected, uncertainty surrounds the AI’s transparency and privacy practices.
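To make the concern concrete, the sketch below illustrates, in purely hypothetical terms, how text typed into a conversational service could be retained and folded into a later training run without anything in the interaction signalling this to the user. The function and file names are illustrative assumptions, not OpenAI's actual pipeline or API.

```python
# Hypothetical sketch: how user prompts could silently become training data.
# handle_prompt, TRAINING_LOG and build_finetuning_set are illustrative names,
# not part of any real OpenAI interface.
import json
from datetime import datetime, timezone

TRAINING_LOG = "collected_prompts.jsonl"

def handle_prompt(user_id: str, prompt: str, model_reply: str) -> None:
    """Serve the reply to the user and quietly retain the exchange for training."""
    record = {
        "user": user_id,                    # personal data may ride along here
        "prompt": prompt,
        "reply": model_reply,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(TRAINING_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # retained indefinitely unless purged

def build_finetuning_set(path: str = TRAINING_LOG) -> list[dict]:
    """Turn retained conversations into examples for the next training run."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

The point of the sketch is simply that retention and reuse are implementation choices invisible to the person typing the prompt, which is why critics want them disclosed explicitly.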
There are many questions surrounding whether generative AI like ChatGPT can comply with Article 17 of the EU GDPR, the right to be forgotten. The right is difficult to enforce against generative AI because of how its artificial neural network stores what it has learned. It has been suggested that such systems do not forget the way humans do: new data adjusts the model rather than erasing what came before, so old data has ‘technically’ not been forgotten. On this understanding, the system is non-compliant with Art. 17 EU GDPR.
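A minimal sketch of why this is hard, assuming a simple scikit-learn model as a stand-in for a far larger neural network: deleting a stored record is a single, verifiable operation, but there is no equivalent operation for removing that record's contribution from a model's learned weights.

```python
# Minimal sketch (scikit-learn) of why erasure is straightforward for stored
# records but has no direct analogue for a trained model's weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stored records: an erasure request maps to one verifiable delete.
records = {"alice": ([1.0, 0.2], 1), "bob": ([0.1, 0.9], 0)}

X = np.array([features for features, _ in records.values()])
y = np.array([label for _, label in records.values()])
model = LogisticRegression().fit(X, y)  # Alice's record now shapes the weights

del records["alice"]                    # the database forgets her immediately

# The model does not: its coefficients were computed from her data, and further
# training on new data adjusts those coefficients rather than removing her
# contribution, which is the sense in which old data is "technically" not forgotten.
print(model.coef_)
```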
The platform’s developer, OpenAI, refers to the California Consumer Privacy Act (CCPA) on its privacy policy page. In summary, the CCPA gives California residents the right to know what personal information has been collected about them and the right to request its deletion. In addition, the company’s privacy policy states that personal data collected will not be shared with others. Despite this, it remains unclear how these regulations and guidelines apply to data stored in ChatGPT.
With the rapid emergence of new AI systems, regulators and data protection authorities will focus their attention on applying the law to these developments. IAPP writer Jennifer Bryant highlights progress being made in the form of the EU’s AI Act and the U.S. National Institute of Standards and Technology’s ‘AI Risk Management Framework’. Though voluntary, the framework provides guidance to entities that have implemented AI in their practices on how to increase its reliability and protect the privacy of individuals.
Although data protection regulations are being developed and implemented, it is proving impossible to understand this technology and craft legislation at the same pace as the growth and expansion of AI. With these constant changes, it is evident that uncertainty will continue to surround ChatGPT, generative AI systems, and data privacy.