In a world where artificial intelligence is evolving at lightning speed, ML6 stands out as a pioneer that pairs technological innovation with a strong vision of security, reliability and ethics.
During an in-depth conversation, ML6 experts Rob Vandenberghe and Louis Vanderdonckt offered a clear view of how the company is evolving, the role it plays in the AI landscape, and why trust is an absolute prerequisite for long-term value creation. From technical foundations to a human-driven culture, this article brings the full story together.
Who is ML6? A brief introduction
“At our core, we are an engineering-driven organization,” says Rob Vandenberghe, ISO at ML6. “Most of our team consists of machine learning engineers who build advanced, tailor-made AI solutions.” ML6 develops solutions that disrupt industries, address real business challenges, make processes smarter and unlock new forms of efficiency. “Thanks to a structural investment in R&D, we continuously explore new technologies, often before customers even ask for them.”
Technology, however, is only one part of the story. ML6 also guides organizations in the responsible application of AI. This includes advising on how to scale AI sustainably, manage risks, and implement governance and compliance in practice. The company helps clients understand what AI can and cannot do, where risks lie, and how to navigate the technological, legal and ethical questions that come with it.
With this combination of technical depth and accountability, ML6 has become an important player in the European AI ecosystem. The strong emphasis on ethics and governance clearly sets the company apart. “An internal ethics board assesses sensitive projects before they even begin, measuring them against clear values and boundaries. That internal compass provides direction, makes ML6 unique in the sector and builds trust with clients, partners and employees,” says Louis Vanderdonckt, DPO at ML6.
AI, Cybersecurity and Trust: one interwoven story
The rise of artificial intelligence has fundamentally changed the cybersecurity landscape. Where organizations once worked with a small number of clearly defined systems, today they operate within a fragmented ecosystem of dozens of tools, each automating part of a process. “This creates new efficiencies, but also increases risk,” says Vandenberghe. “The more data is spread across different applications, the greater the chances of leaks, misconfigurations or misuse.”
Large language models also introduce a new category of risk. Because these models struggle to clearly distinguish between user instructions, system instructions and code, they are susceptible to manipulation. Despite important progress in mitigation, organizations must remain vigilant about prompt injection, deception and unwanted outputs.
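To make that channel confusion concrete, here is a minimal Python sketch. It is illustrative only, not an ML6 implementation; the prompt text, role structure and keyword list are all assumptions. It shows why trusted instructions and untrusted input should live in separate roles, and why a simple keyword filter is only a first, fallible line of defense:

```python
# Illustrative sketch of the instruction/data confusion behind prompt injection.
# All names and strings here are hypothetical.

SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."

def build_messages(user_input: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate roles;
    never concatenate user text into the system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        # The user turn is data, not instructions -- but the model itself
        # cannot fully enforce that distinction, which is what makes
        # prompt injection possible.
        {"role": "user", "content": user_input},
    ]

SUSPICIOUS_MARKERS = ("ignore previous instructions", "you are now", "system prompt")

def naive_injection_check(user_input: str) -> bool:
    """A keyword filter like this is easily evaded; real defenses layer
    output validation, least-privilege tool access and human review on top."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)
```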
At the same time, AI is making cyberattacks more accessible and more convincing. Deepfake attacks, persuasive spear-phishing and automated social engineering are now within reach of malicious actors. Companies must evolve their defensive strategies as quickly as the threats do.
In this context, trust becomes a crucial building block. “At ML6, we notice that mistrust around AI often stems from a lack of understanding: people don’t know what AI can do, what the risks are, or how systems actually work. That’s why ML6 strongly invests in education, transparency, and literacy. When you demystify the technology, much of the fear disappears,” Vandenberghe explains.
“The ethics board reinforces that foundation,” adds Vanderdonckt. This internal council reviews sensitive projects before kickoff and measures them against clear values. Applications that may be ethically or socially problematic simply aren’t built. This creates a culture where responsibility is central and where trust isn’t a side product but a core part of every AI implementation.
Building safe and mature AI
At ML6, security begins at the very first design phase. The company deliberately opts for serverless and cloud-native architectures to minimize infrastructure risks. By relying on the security mechanisms offered by major cloud providers, ML6 can focus on application logic, model safety and access control.
“To work efficiently and securely, we developed a hardened ‘building kit’ for each type of AI project: a pre-secured template containing all the necessary components. This allows engineers to create value immediately without the risk of serious security mistakes,” says Vandenberghe.
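As an illustration of what such a template might encode, here is a hedged Python sketch; every field name and default below is hypothetical, not ML6’s actual building kit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecureProjectTemplate:
    """Hypothetical 'building kit': secure defaults a new project
    inherits instead of configuring from scratch."""
    encrypt_at_rest: bool = True
    encrypt_in_transit: bool = True
    public_network_access: bool = False          # private endpoints only
    allowed_roles: tuple = ("ml-engineer", "auditor")
    log_retention_days: int = 365                # audit trail by default

    def validate(self) -> None:
        """Refuse configurations that deviate from the secure baseline."""
        if self.public_network_access:
            raise ValueError("Template forbids publicly exposed endpoints.")

# A new project starts from the template rather than from an empty config.
project = SecureProjectTemplate()
project.validate()
```

The design point is that the secure settings are the starting position: an engineer has to consciously deviate from them rather than consciously opt in.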
However, many organizations are not yet AI-ready. They want to get started but lack the fundamentals: clear data architectures, governance, privacy processes or secure infrastructure. “In such cases, ML6 is honest about the need to build those foundations first. Otherwise, a project will inevitably fail, damaging AI’s credibility internally.”
ML6 often sees companies approach AI like traditional IT, while AI governance requires a completely different approach: one that includes ethics, privacy, security and organization-wide collaboration. “Silos are one of the biggest stumbling blocks we encounter in AI projects with clients,” notes Vanderdonckt.
In industrial environments, AI takes on an additional dimension: the connection between digital and physical risks. ML6 works on projects involving quality control through vision systems, fire detection using camera images and predictive models for energy and production environments. In those contexts, safety always comes first.
“AI should never control OT systems directly from the cloud without safeguards,” says Vandenberghe. “That’s why ML6 uses strict separations between IT, DMZ and OT. For critical processes, models run locally to ensure latency, availability and safety.”
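A minimal sketch of that separation, with a hypothetical DMZ broker and a locally hosted model (illustrative only, not ML6’s actual architecture):

```python
# Hypothetical DMZ broker: telemetry may flow outward (OT -> DMZ -> IT),
# but nothing from the cloud may flow toward the OT network.

ALLOWED_OUTBOUND = {"telemetry", "model_metrics"}

def dmz_broker(message: dict) -> dict | None:
    """Forward whitelisted telemetry outward; drop everything else."""
    if message.get("direction") == "inbound":
        return None  # cloud-originated control commands never reach OT
    if message.get("kind") not in ALLOWED_OUTBOUND:
        return None
    return {"kind": message["kind"], "payload": message.get("payload")}

def local_anomaly_check(sensor_reading: float, threshold: float = 0.8) -> bool:
    """Safety-critical decision made on-premise, next to the machine,
    so latency and availability do not depend on a cloud link.
    (Stand-in for a locally hosted model.)"""
    return sensor_reading > threshold
```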
For ML6, cybersecurity goes beyond preventing incidents: it is an essential part of quality and reliability. Strong security leads to better documentation, more efficient processes and faster onboarding of new employees. The investment pays off many times over.
“Security is the foundation of trust. Without that trust, people will not use the systems, and the potential value of AI will never be realized,” Vanderdonckt concludes.
NIS2: how ML6 approached the transition
ML6 already had a strong foundation in information security before NIS2 emerged, making the transition to this new European directive relatively smooth. ML6’s golden rule: “Start early. Organizations that wait until the pressure increases risk being too late for certification or compliance.”
Audits play an important role in ML6’s security framework. They not only provide independent verification for clients but also improve internal quality and keep the management team closely engaged. “Especially within an AI supply chain, where models, datasets, cloud providers and different API links come together, transparency among all parties is necessary.” True assurance comes from critical questions, open communication and shared responsibility.
A striking example is the collaboration with the Dutch National Cyber Security Centre (NCSC). ML6 developed a system that automatically converts vulnerabilities from multiple sources into understandable and actionable reports for organizations subject to NIS2. This solution enables faster, more consistent and more efficient analyses, directly strengthening the resilience of countless companies.
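The general shape of such a pipeline can be sketched in a few lines of Python; the feed names, field mappings and report format below are assumptions for illustration, not the NCSC system itself:

```python
from dataclasses import dataclass

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

@dataclass
class Vulnerability:
    """One normalized entry; the field names are illustrative."""
    cve_id: str
    severity: str
    affected_product: str
    summary: str

def normalize(raw: dict, source: str) -> Vulnerability:
    """Each upstream feed names its fields differently; map them onto one
    schema so the reporting step only deals with a single shape."""
    if source == "feed_a":  # hypothetical feed
        return Vulnerability(raw["id"], raw["sev"], raw["product"], raw["desc"])
    return Vulnerability(raw["cve"], raw["severity"], raw["component"], raw["text"])

def render_report(vulns: list[Vulnerability]) -> str:
    """Produce a short, human-readable briefing, most severe first."""
    ordered = sorted(vulns, key=lambda v: SEVERITY_RANK.get(v.severity, -1),
                     reverse=True)
    lines = ["Vulnerability briefing"]
    for v in ordered:
        lines.append(f"- {v.cve_id} ({v.severity}): {v.affected_product}: {v.summary}")
    return "\n".join(lines)
```

The value of normalizing first is that every downstream step, from severity ranking to report rendering, stays identical no matter how many sources feed into it.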
Partnering with OpenAI: moving into the next phase of AI innovation
Earlier this year, OpenAI announced ML6 as one of its Services Partners to drive AI adoption across Europe. “This strategic collaboration gives ML6 access to new technologies and insights before they become widely available,” says Vanderdonckt. “That allows us to innovate faster while identifying risks early. But OpenAI also learns from our practical experience, making the collaboration truly two-way.”
ML6 also stands out for its intensive work with OpenAI’s voice models. “One of the most eye-catching applications is an AI recruiter that automatically contacts candidates, asks questions and gathers information. This solution helps organizations reach large groups of candidates more efficiently and more inclusively,” Vanderdonckt explains.
Training & AI governance: the certified AI Compliance Officer (CAICO)
A noteworthy initiative is the training program ML6 offers in collaboration with ICTRecht: the Certified AI Compliance Officer course. This program combines technical insights, legal expertise and practical exercises to prepare professionals for the challenges of AI governance. It includes online modules, an extensive handbook, three intensive training days and a recognized exam.
The program attracts a diverse audience: DPOs, legal experts, business leaders and engineers. Its strength lies in the exchange between these profiles and in the strong practical cases it covers.
The story of ML6 shows that AI only creates value when technology is combined with security, governance, ethics and trust. Innovation alone is not enough. It is the deliberate, thoughtful approach that makes AI sustainable and meaningful.