As cyber threats escalate and European regulators tighten AI controls, OpenAI has launched a significant European initiative called “Trusted Access.” The program provides controlled access to advanced cybersecurity models, including its latest, GPT-5.5-Cyber, and is designed to strengthen resilience by enabling trusted organizations to detect and resolve vulnerabilities more efficiently.
OpenAI emphasizes that the right balance between security and utility is crucial for its new program. Emmanuel Marill, Managing Director, EMEA, underscores the objective:
“We need to block dangerous activity, while making sure trusted defenders have tools that are genuinely useful in protecting systems, finding vulnerabilities and responding to threats quickly.”
Notable names like Deutsche Telekom, BBVA, Telefónica, Sophos, and Scalable Capital have already joined the program, underscoring the urgency: attackers are automating faster than most organizations can patch. AI excels at writing and reviewing code rapidly, and the same prowess that exposes weaknesses also helps defenders fix them, provided the governance is robust.
The initiative is particularly relevant to productivity and automation because security incidents are collaboration-heavy: they trigger calls, chats, and administrative approvals. By accelerating detection and remediation, AI shortens those coordination cycles and reduces the overhead of excessive meetings and unnecessary communication.
However, “Trusted Access” also introduces a new layer of dependency. By relying on external models and their safeguards, enterprises face concentration risk: a few providers could dictate how swiftly European enterprises defend against emerging threats, and in turn shape security workflows and training.
For automation leaders, this dependency risk extends beyond cybersecurity. Verified access could become a norm across other critical systems, such as customer records and financial systems.
OpenAI’s push into cybersecurity is a strategic move, aligning with its recent initiative, Daybreak, which aims to narrow the gap between vulnerability discovery and patching. OpenAI positions Daybreak as a way to mitigate risk through advanced AI models that are available only to verified entities under tight controls.
Europe’s regulatory environment adds another layer of complexity. Procurement policies are stringent, with a heavy emphasis on risk management, auditability, and data governance. The EU’s regulations, coupled with digital-sovereignty concerns, push European institutions toward verified-access models under strict scrutiny.
Ultimately, OpenAI’s Trusted Access program may speed up defensive responses while setting a precedent for how Europe engages with frontier AI: verified access, heightened safeguards, and rigorous governance standards. In short, “AI at work” will increasingly resemble “AI under control.”
For those working in productivity and automation, these developments signal a shift toward tightly monitored AI usage and underscore the need to scrutinize how AI is integrated into organizational processes.