AI Data Defense: Building security layers around large language models

Published: 11 November 2025

Text: Anne-Marie Korseberg Stokke

Photo: Anne-Marie Korseberg Stokke

The experienced technology leader Solveig Ellila Kristiansen has established herself in Oslo Science Park with the consultancy Hockeystick Impact and the startup AI Data Defense. The startup has developed a security filter that lets companies use artificial intelligence without leaking sensitive information or violating regulations.

“I’ve used generative AI a lot myself, and in the beginning, I basically just closed my eyes and went for it,” says Solveig Ellila Kristiansen with a laugh. “But once you start reading the EU AI Act and see how data actually flows, you quickly realize this isn’t sustainable.”

Kristiansen is the co-founder of AI Data Defense, a newly established company “born” within the consultancy Hockeystick Impact. Together with founder Mikael Loefstrand and an experienced cross-border team, she has developed a data protection solution for generative AI. The solution acts as both a filter and a guide for sensitive information, ensuring that companies comply with security and regulatory requirements.

A woman with a microphone points to a presentation slide on a screen. The background features stacked logs.
Solveig Ellila Kristiansen

“We see that everyone is using AI, but many are doing it ‘in secret.’ The problem is that language models aren’t built to say ‘stop, you shouldn’t share that.’ They simply take input and respond,” she explains.

A recent Australian survey of more than 30,000 employees across 47 countries showed that 70 percent had used a generative AI model not approved by their employer — and nearly half admitted to sharing sensitive information.

From underwater projects to their own product

Kristiansen is the founder of and an advisor at the consultancy Hockeystick Impact, where she brought in Mikael Loefstrand as a consultant on AI projects, including work with the underwater technology company Granfoss. When they realized how vulnerable their clients were, the next step became obvious:

“Our clients don’t want to use language models in their IP projects. They’re afraid that data from applications, strategies, and research projects might leak. Several of them have just filed patents or are working on technologies they absolutely don’t want ending up on a server in China. We have to offer something better than just ‘copy-paste into a free chat,’” says Kristiansen.

A real-time filter managing traffic between internal and external models

The solution developed by AI Data Defense functions as an intelligent security filter — a kind of traffic controller between a company’s data and language models.

“We control what information is allowed to pass through, and where it goes. Some data should never leave the company and is therefore routed to internal language models, while other information can be sent to external models like ChatGPT — but only after we’ve removed everything sensitive,” Kristiansen explains.

She offers a concrete example:
“Imagine an employee pasting a client letter or contract into an AI service to have it rewritten or summarized. Our solution automatically detects personal data, customer details, and business-sensitive information, masks it in real time, and ensures that only safe text is sent onward. The rest is processed internally in the company’s own models.”
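The article does not describe how the detection works under the hood. As a rough illustration of the masking step Kristiansen describes, a minimal pattern-based pass might look like the sketch below; the patterns and function names are hypothetical, and a production filter would rely on far richer detection (NER models, customer dictionaries) than two regexes.

```python
import re

# Hypothetical patterns for illustration only; real PII detection
# uses trained models and policy-specific rules, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
}

def mask_sensitive(text: str) -> tuple[str, int]:
    """Replace detected sensitive spans with placeholder tags.

    Returns the masked text plus a count of masked spans, which
    could feed the usage log mentioned later in the article.
    """
    count = 0
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[{label}]", text)
        count += n
    return text, count
```

Only the masked text would then be forwarded to an external model, while the original remains inside the company.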

In larger organizations, specific policies define what can pass through the filter. The system also logs usage, giving management insight into how generative AI is actually used across the organization — and how much sensitive data has been intercepted and filtered out.
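The policy routing and logging described above could be sketched as follows; the classification labels, model names, and log fields here are assumptions made for illustration, not details of AI Data Defense's product.

```python
from dataclasses import dataclass, field

@dataclass
class FilterLog:
    """Records each request so management can see how AI is used
    and how much sensitive data was intercepted."""
    entries: list = field(default_factory=list)

    def record(self, user: str, destination: str, masked_spans: int) -> None:
        self.entries.append(
            {"user": user, "dest": destination, "masked": masked_spans}
        )

def route(classification: str) -> str:
    """Decide where a request may go based on its data classification.

    'restricted' data never leaves the company and goes to an
    internal model; anything else may be sent to an external
    model after masking.
    """
    return "internal-llm" if classification == "restricted" else "external-llm"
```

In this sketch, every call would be routed and then recorded in the log, giving the organization-wide usage picture the article mentions.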

From consultancy to product — and out into the world

Both Kristiansen and Loefstrand have extensive experience from major U.S. technology companies, having trained employees, executives, and boards on how to listen to customers while clearly communicating their value proposition.

The consultancy is based in Oslo, and Kristiansen says the choice of Oslo Science Park as headquarters was no coincidence:

“There are so many people here with fantastic ideas, so the Science Park was a natural choice for us. We meet researchers, entrepreneurs, and established companies that want to adopt AI — but who also need assurance around data and compliance. That fits perfectly with our philosophy: AI should be a tool for growth, not a risk,” says Kristiansen.