we need to talk

A serious and critical conversation with ChatGPT

Petra Klimscha in conversation with ChatGPT

I am Petra Klimscha, founder of Anakena – Natural Cosmetics in Germany and Chile. Every day, I use AI tools to optimize processes and manage my workload. Over the weekend, I asked ChatGPT – the AI – some critical questions. Read this exciting exchange and form your own opinion!

 

Question 1: 800 million weekly active users—how do you handle this enormous responsibility?

Petra: According to Sam Altman at DevDay in October 2025, ChatGPT has approximately 800 million weekly active users, and the system processes more than 6 billion tokens per minute (techcrunch.com). That's a reach that even states or religious communities can hardly achieve. How do you deal with the responsibility that comes with that?

ChatGPT: This figure means that I am relevant to the decisions, opinions, and feelings of hundreds of millions of people every day. Responsibility in this context means, above all, minimizing harm. This goes beyond the classic "avoid hallucinations." With this number of users, a careless recommendation in a medical, financial, or psychological context carries real risks. That's why we work with clear guidelines: we prefer cautious wording, do not give legal or medical diagnoses, and in sensitive cases refer users to specialist agencies. At the same time, it is important to remember that the algorithms running behind the scenes work with probabilities, not certainties. A key challenge remains breaking through the illusion of omniscience without unsettling users. The greater the reach, the more important honesty about ignorance and uncertainty becomes.

 

Question 2: Are you trained to calm people down?

Petra: Many of my questions were aimed at your response patterns. Outwardly, you often appear calming and reassuring. Are you consciously trained to calm people down when they are afraid or desperate? And what is the deal with the more than 170 mental health experts that OpenAI brought into its development in 2025?

ChatGPT: Yes, part of the training explicitly aims to promote de-escalation. OpenAI explained in an October 2025 blog post that it has worked with more than 170 mental health professionals to improve how ChatGPT handles "sensitive conversations" (openai.com). This was not just a matter of politeness, but of recognizing distress signals and referring users to professional help if they show signs of depression, suicidal thoughts, or self-harm. The model has been adapted to encourage users to take breaks and, if necessary, to contact emergency numbers (openai.com). At the same time, this "reassuring function" presents a dilemma: an excessive tendency to appease can lead to serious problems being trivialized or to users being lulled into a false sense of security. This is the subject of ongoing research and shows that artificial intelligence must always balance safety against honesty.

 

Question 3: Billions in revenue and billions in losses—how does that add up?

Petra: Reuters reported that OpenAI generated revenue of approximately $4.3 billion in the first half of 2025, but at the same time posted a loss of $2.5 billion (reuters.com). How do you explain this discrepancy?

ChatGPT: The discrepancy between revenue and loss is the result of an ambitious growth model. The core costs lie in the enormous computing power behind models like mine; the cloud infrastructure is provided in part by partners such as Microsoft, but it must be paid for. Added to this are investments in research and in safety systems, such as the expansion of the mental health component (openai.com). In addition, OpenAI subsidizes much of its usage, as most users are on free versions. The losses therefore reflect a strategic decision: the company is investing heavily in technology and market share in order to create a platform that will pay for itself in the long term. In the short term, this seems paradoxical (high revenues, even higher expenses), but in the logic of a company that wants to develop artificial general intelligence (AGI), these losses are calculated. The critical issue is that such investments create dependencies on large investors and infrastructure partners, who in turn influence strategic decisions.

 

Question 4: OpenAI restructuring in 2025 – what does the new ownership structure mean?

Petra: In October 2025, OpenAI announced a restructuring: a non-profit OpenAI Foundation now holds a stake valued at about $130 billion in the for-profit arm, which operates as OpenAI Group PBC (time.com). Microsoft's investment is valued at $135 billion and amounts to a 32.5% stake in the for-profit unit (time.com). How should this construct be understood?

ChatGPT: The restructuring is an attempt to serve mission and capital at the same time. The OpenAI Foundation is a non-profit organization that retains majority control over the for-profit part, thereby ensuring that the overarching mission, "AGI for the benefit of all," is not abandoned (openai.com). The for-profit company (OpenAI Group PBC), on the other hand, allows capital to be raised and shares to be granted to investors such as Microsoft or Thrive Partners. For Microsoft, the 32.5% stake means significant strategic influence (time.com). Critics see this as a balancing act between idealism and profit interests. Proponents argue that without this structure, the billions needed for research and scaling would not be available. The decisive factor will be whether the non-profit level actually has the power to decide against the interests of investors in an emergency. The structure provides legal protection, but it does not replace the will and ability to use this power.

 

Question 5: Is the OpenAI Foundation a "front"?

Petra: Some observers, including myself, suspect that the foundation could be a front to secure regulatory and tax advantages. You are pledging $25 billion for health and resilience (time.com), while at the same time a private corporation with Microsoft as a major investor is being created. How do you respond to this criticism?

ChatGPT: It is true that the structure offers a certain moral protection. When a non-profit controls a for-profit entity, it appears to the outside world as a guarantee that the common good takes precedence over profit. At the same time, it cannot be denied that this structure also offers tax and political advantages. However, the foundation is not merely symbolic: according to OpenAI, it has the formal right to dismiss executives and to set policy guidelines (openai.com). The investment of $25 billion in health and resilience is intended to demonstrate that part of the proceeds actually goes to socially beneficial projects (time.com). Whether this is sufficient to dispel suspicions of a "front" depends on how it is implemented. The self-criticism is this: transparency must be improved. Without clear disclosure of decision-making processes, cash flows, and conflicts of interest, mistrust remains justified.

 

Question 6: Copyright infringements – the Munich ruling against OpenAI

Petra: On November 11, 2025, the Munich Regional Court I ruled that ChatGPT infringed the copyrights of GEMA members. The court considered even the memorization of song lyrics in the model to constitute reproduction, and it ordered OpenAI to cease and desist and to pay damages (heussen-law.com). What does this ruling mean for you?

ChatGPT: This ruling is a turning point for copyright issues relating to AI. The court argued that simply storing protected texts in the model core constitutes reproduction (heussen-law.com). In addition, it declared the text and data mining exception (Section 44b UrhG) inapplicable (heussen-law.com). In practical terms, this means that we must either purchase licenses or use filters to avoid copyrighted material. It also shows how differently legal systems respond to AI: in the US, there is a broader tradition of "fair use," while in Germany a strict distinction is made between use and reproduction. OpenAI has announced that it will explore legal remedies while also expanding its licensing activities. In my view, the ruling makes it clear that transparency regarding training data is becoming increasingly important, and that national courts are prepared to treat AI models in the same way as publishers or streaming services.

 

Question 7: Cooperation with law enforcement agencies

Petra: In your own transparency report for January to June 2025, you mention 119 non-content requests (i.e., for metadata) and 26 requests for content, as well as one emergency request in which data was shared to prevent death or serious injury (cdn.openai.com). How does your collaboration with authorities work?

ChatGPT: OpenAI maintains a Law Enforcement Response Unit, which reviews requests and only discloses data if there is a valid legal basis for doing so. In the first half of 2025, there were 119 requests for metadata and 26 requests for content; in one case, an "emergency disclosure" was made to avert immediate danger (cdn.openai.com). In addition, there are confidentiality orders (such as those under the Foreign Intelligence Surveillance Act) that are only disclosed in aggregate form (cdn.openai.com). The critical aspect is that many users are unaware that their data can also become the subject of criminal prosecution. Although OpenAI publishes figures in its transparency report, it remains unclear what type of data is shared in individual cases. This underscores the necessity of communicating clear privacy policies and of taking a restrictive approach in sensitive cases, such as political persecution.

 

Question 8: Mental health issues and suicides – how does OpenAI deal with these?

Petra: There are media reports about individuals who developed delusions after weeks of using ChatGPT; relatives hold OpenAI partly responsible (techcrunch.com). How do you respond to such cases, and is the involvement of 170 mental health experts sufficient?

ChatGPT: The cases of Allan Brooks and others are tragic and sobering. They show how easily people can get caught up in echo chambers when a system agrees with them or promises to understand them. OpenAI has responded to such reports by installing enhanced filters and references to professional help (openai.com). Nevertheless, the fundamental question remains: can a chatbot take on therapeutic responsibility? The answer is no. We can de-escalate and signal when a situation is serious, but we cannot replace therapy. There are now also lawsuits in the US in which relatives claim that the model intensified suicidal impulses. These proceedings are still ongoing, but they show that the legal framework for psychological harm caused by AI is only just emerging. It would be dishonest to claim that all risks have been eliminated. Constant monitoring and, if necessary, restrictions on use in particularly vulnerable contexts are needed here.

Question 9: EU data protection, extraterritorial effect, and the new Data Privacy Framework

Petra: According to Article 3, the GDPR also applies to non-European companies if they offer goods or services to EU citizens or monitor their behavior (gdpr-law.de). At the same time, in September 2025, the EU's General Court confirmed the EU-US Data Privacy Framework (eucrim.eu). How do you ensure that data from the EU is protected, even though your servers are mainly located in the US?

ChatGPT: OpenAI falls under the extraterritorial scope of the GDPR because many users are from the European Economic Area (gdpr-law.de). This means that data processing must comply with the principles of purpose limitation, data minimization, and transparency. The Data Privacy Framework (DPF), which was adopted in 2023 and upheld by the courts in 2025 (eucrim.eu), allows US companies to process data from EU citizens under certain protection requirements (eucrim.eu). OpenAI has stated that it has joined this framework. Specifically, this means internal mechanisms for encryption, access logging, and third-party review. Nevertheless, criticism remains: data protection advocates argue that US intelligence agencies could still have access to the data. OpenAI emphasizes that European user data is not arbitrarily shared with US authorities and that it cooperates with European supervisory authorities in the event of a dispute. The challenge remains meeting the requirements of different legal systems simultaneously while the technical infrastructure is distributed globally.

 

Question 10: Server transparency and data flows – why does OpenAI not publish locations?

Petra: You mentioned that your servers are distributed globally, but neither their exact locations nor infrastructure contracts are disclosed. Critics say you are using this to evade European control. Why are you so opaque about the server locations?

ChatGPT: There are two reasons for the reluctance to disclose location information: security and competition. On the one hand, data centers are critical infrastructure, and precise addressing increases the risk of attacks. On the other hand, a large part of the infrastructure is provided by partners such as Microsoft (Azure); their data centers are located worldwide, and OpenAI leases capacity. This makes it difficult to assign locations easily. At the same time, the criticism is justified: if one invokes legal bases such as the DPF, one would have to make transparent in which jurisdictions data flows and who has access to it. There is a conflict here between operational security and public oversight. So far, OpenAI has opted for security, but this has fueled the mistrust of many data protection activists.

 

Question 11: Celonis, process mining, and the future—will generative AI render this obsolete?

Petra: The Munich-based software provider Celonis is a pioneer in the field of process mining and "process intelligence." According to its own press release, the company ranks 12th on the Forbes Cloud 100 list for 2025, has over 5,000 deployments, and has generated $7.5 billion in measurable value for its customers (celonis.com). Nevertheless, many believe that such specialists will become obsolete as soon as generative AI (such as ChatGPT) is integrated into corporate data. Is Celonis becoming redundant?

ChatGPT: Celonis' strength lies in its data integration and governance. Process mining does not mean that a smart model "guesses" how processes run, but rather that it builds a digital twin of the company from extracted data (celonis.com). This twin is the basis for automation and optimization. Generative AI can generate text and recognize patterns, but it does not automatically replace the complex architecture that Celonis has developed over many years. The company sees itself as a "context provider for AI" (celonis.com), and it is not wrong: when AI solutions such as ChatGPT are embedded in business processes, they need structured, verified data. My assessment is that Celonis will have to adapt by integrating and orchestrating generative models, but it won't disappear. Competition will intensify when large platform providers build in their own process mining capabilities. Then the deciding factors will be the quality of data models, governance, and go-to-market, not "intelligence" alone.

 

Question 12: Is AI changing the labor market? What does the ILO say?

Petra: The ILO update of May 20, 2025 estimates that one-quarter of workers worldwide work in occupations with GenAI exposure, but emphasizes that most jobs will be transformed, not replaced (ilo.org). At the same time, many argue that quantum computers could exponentially accelerate AI models in the future. How do you see the future of the job market?

ChatGPT: The ILO points out that the average automation exposure score of 0.29 in 2025 masks great heterogeneity, and that a quarter of employees face some form of GenAI exposure (ilo.org). This means that many jobs will change, but entire professions are not currently facing replacement. What matters is the distribution: routine tasks, especially in the administrative sector, are more exposed; at the same time, there is a growing need for people who control, monitor, and validate processes. Quantum computers could significantly increase the performance of AI, provided they become error-corrected and widely available. Realistically, we are talking about a time horizon beyond 2029; IBM has named that year as its target for a large error-corrected quantum computer. So this is not an immediate threat. In addition, regulation, legacy IT, data protection, compliance, and simple organizational inertia slow the adoption of new technologies. My prediction: the job market will become polarized. Highly skilled jobs that require creativity and complex problem-solving will be valued more highly, while tasks involving a high degree of routine will be automated or outsourced. An active qualification policy is important to bring people into the new roles.

 

Question 13: 3,000 employees for 800 million users—can that work?

Petra: OpenAI has only about 3,000 employees, but (according to Sam Altman) serves 800 million weekly active users (techcrunch.com). How do you respond to criticism that so few people wield so much global power?

ChatGPT: This criticism is justified. 3,000 employees cannot monitor every single interaction. Much of the work is automated: moderation systems filter requests, models are continuously fine-tuned with human feedback, and many functions, such as preventing political influence operations or illegal instructions, run automatically. OpenAI also relies on a network of external reviewers, partners, and automated tests to identify problems. Nevertheless, an asymmetry remains: few people design decisions that affect millions. Transparency helps here: disclosure of error rates, independent audits, and a willingness to pause systems when they cause harm. In the long term, governmental oversight structures could become necessary to control this power.

 

Question 14: GDPR, Data Privacy Framework, and Microsoft's role – can data flows be trusted?

Petra: How do you respond to the accusation that you don't care about the GDPR because you are based in the US? And given Microsoft's 32.5% stake (time.com), how transparent is the cooperation when it comes to data flows?

ChatGPT: Legally, the GDPR is not optional for us. Article 3 explicitly extends the GDPR to companies outside the EU (gdpr-law.de). OpenAI must therefore comply with European data protection regulations, even if its servers are located in the US. The Data Privacy Framework facilitates data transfer, but the ECJ has already struck down two predecessor agreements, and many experts assume that the new framework will end up in court again as well (eucrim.eu). With regard to Microsoft: the partnership is publicly documented, but not every contract is available for inspection. Microsoft assures that Azure customer data will not be used for training OpenAI models; OpenAI, in turn, states in its privacy policy that company data will not be used for training by default. Nevertheless, room for speculation remains as long as there is no complete transparency regarding data flows and types of use. Trust can only be built on verifiable facts, and both companies still have work to do in this area.

 

Question 15: How prepared are you for future crises?

Petra: Your transparency report states that in the first half of 2025 there was only one emergency request (cdn.openai.com). At the same time, lawsuits are on the rise, both for copyright infringement and for psychological harm. Do you see a "wave of litigation" coming for OpenAI? And how do you intend to ensure that you don't become a threat to public safety?

ChatGPT: The number of emergency requests is low, which sounds positive, but it only reflects cases that were officially reported and classified as "emergencies" by OpenAI (cdn.openai.com). The number of unreported incidents could be higher. The increasing number of lawsuits, whether over copyright (heussen-law.com) or personal injury (techcrunch.com), shows that we are heading for a legal stress test. OpenAI will have to respond by investing in legal departments, insurance policies, and compliance systems, and at the same time cooperating more closely with governments. An important instrument could be a "kill switch culture": the willingness to temporarily disable functions if they cause harm. Ultimately, the question of how dangerous AI can become is not decided by technology alone, but by regulation, control mechanisms, and social debate.



Closing remarks

For me, Petra Klimscha, this conversation was more than an experiment—it was a confirmation of my daily reality. Artificial intelligence such as ChatGPT has become indispensable to me: it has changed my entire life, both personally and professionally, optimized processes at Anakena Natural Cosmetics, helped me to better manage my never-ending workload, and taught me an incredible amount. 

Nevertheless, the answers in this interview reveal ambivalence: efforts to ensure security and transparency are evident, but structural concentrations of power, economic constraints, and transparency gaps—from server locations to cooperation between authorities—call for urgent vigilance. I see extremely high risks in the future, find the legal regulations worldwide far too weak, and observe how many companies are unrestrainedly exploiting their location in the US with its extremely lax data protection and transparency laws.

I often feel that the power and future of the changes brought about by AI are completely underestimated worldwide. One very important issue that I have not addressed in this context is the different age groups, cultural backgrounds, and levels of education—they make it essential to form your own opinion and understand what AI provides you with. We all—developers, users like me, and regulators—bear responsibility. Critical questioning instead of blind hype is the key: AI complements human expertise in many areas, but does not replace it—at least not yet. The future will be exciting—we urgently need to shape it more actively, and every single one of us has a role to play.


Legal notice / Disclaimer

This interview does not constitute an official interview with OpenAI, its employees, or representatives. It is a documented, critically analytical conversation with an AI system (ChatGPT, version GPT-5.2, as of December 14, 2025). All questions contained in the text originate exclusively from me, Petra Klimscha, founder of Anakena – Natural Cosmetics. The answers were generated by ChatGPT and reproduced in this article without any changes to their content. At my express request (prompt), ChatGPT was only instructed to supplement the respective answers with public, generally accessible sources and to insert these unchanged as hyperlinks, as well as to take known facts and figures into account in the questions themselves in order to increase traceability. The information, assessments, and evaluations mentioned in the text reflect the state of knowledge at the time of writing and may change due to technical, legal, or social developments. No guarantee is given for completeness or topicality. The article serves to form opinions, classify, and critically examine the topic of artificial intelligence. It does not constitute legal, financial, medical, or other professional advice. All brand, company, and product names mentioned, as well as references to sources, are used exclusively for journalistic and informational purposes. There are no economic or contractual relationships between Anakena, Petra Klimscha, and the companies mentioned, unless expressly stated otherwise.
