OpenAI geoblocks ChatGPT in Italy

by Ana Lopez

No, it’s not an April 1 joke: OpenAI has begun geo-blocking access to its generative AI chatbot, ChatGPT, in Italy.

The move follows an order issued Friday by the local data protection authority requiring it to stop processing Italians’ data for the ChatGPT service.

In a statement shown to users trying to access ChatGPT from an Italian IP address, OpenAI writes that it “regrets” to inform them that it has blocked access for users in Italy – at the “request” of the data protection authority, which is known as the Garante.

It also says it will refund all users in Italy who purchased the ChatGPT Plus subscription service last month – also noting that subscription renewals there are “temporarily paused” so users won’t be charged while the service is suspended.

OpenAI appears to be applying a simple geographic block for now – meaning that using a VPN to switch to a non-Italian IP address offers an easy workaround. However, if a ChatGPT account was originally registered in Italy, it may no longer be accessible, and users who want to get around the block may need to create a new account from a non-Italian IP address.

Statement from OpenAI to users trying to access ChatGPT from an Italian IP address (Screengrab: Natasha Lomas/

On Friday the Garante announced it had opened an investigation into ChatGPT over suspected breaches of the European Union’s General Data Protection Regulation (GDPR) – saying it is concerned that OpenAI has been processing Italians’ data unlawfully.

OpenAI does not appear to have informed anyone whose online data it found and used to train the technology – for example, by scraping information from internet forums. Nor has it been fully transparent about the data it processes – certainly not for the latest iteration of its model, GPT-4. And while the training data it used may have been public (in the sense that it was posted online), the GDPR still contains transparency principles – suggesting that both users and the people whose data was collected should have been made aware.

In its order yesterday, the Garante also pointed to the lack of any system to prevent minors from accessing the technology, raising a child safety flag – noting, for example, that there is no age verification feature to prevent inappropriate access.

In addition, the regulator has expressed concern about the accuracy of the information provided by the chatbot.

ChatGPT and other generative AI chatbots are known to sometimes produce erroneous information about named individuals – an error AI makers refer to as “hallucination.” This looks problematic in the EU, as the GDPR gives individuals a range of rights over their information, including the right to rectification of inaccurate data. And currently it’s not clear that OpenAI has a system through which users can ask the chatbot to stop lying about them.

The San Francisco-based company has still not responded to our request for comment on the Garante’s investigation. But in its public statement to geo-blocked users in Italy, it claims: “We are committed to protecting people’s privacy and we believe we provide ChatGPT in compliance with the GDPR and other privacy laws.”

“We will engage with the Garante with the aim of restoring your access as soon as possible,” it also writes, adding, “Many of you have told us that you find ChatGPT useful for everyday tasks, and we look forward to making it available again soon.”

Despite the optimistic note at the end of the statement, it’s not clear how OpenAI can address the compliance issues raised by the Garante – given the broad scope of GDPR concerns the regulator has set out at what is only the start of a deeper investigation.

The pan-EU regulation calls for data protection by design and by default, meaning that privacy-focused processes and principles are supposed to be embedded in a system that processes people’s data from the start. In other words, the opposite of grabbing data first and asking forgiveness later.

Sanctions for confirmed GDPR breaches, meanwhile, can be as high as 4% of a data processor’s annual global turnover (or €20 million, whichever is greater).

Furthermore, since OpenAI has no headquarters in the EU, each of the bloc’s data protection authorities is empowered to regulate ChatGPT – meaning the authorities of all other EU member states can choose to step in, investigate and issue fines for any breaches they find (in relatively short order, as each would only be acting in its own patch). So it faces the highest level of GDPR exposure, unable to play the forum-shopping game other tech giants have used to slow down privacy enforcement in Europe.
