"We do this because it's necessary, not easy."

Published: 03 January 2024

Text: NewsLab AS

Photo: Aiba

Norwegian company Aiba is developing technology to prevent online abuse against children. They have a unique product that people around the world need.

"In recent years, we have seen a doubling of reported online abuse every single year. This is becoming a huge problem, and we cannot solve it with traditional methods."

This statement comes from Hege Tokerud, CEO of Aiba. To assist traditional methods with more modern tools, they have developed the security platform Amanda.

Amanda's purpose is to prevent online abuse and other unwanted behavior. Using artificial intelligence (AI), the program recognizes patterns in the communication of potential abusers and flags conversations before they escalate into abuse or cyber grooming.

"Our customers are mainly gaming companies and similar businesses, but we also collaborate with the police," Tokerud says.

Established last year, Aiba has several pilot customers on the platform but aims to expand its reach and acquire more clients. Tokerud believes they will succeed, especially since the European Parliament recently approved the Digital Services Act (DSA) and the UK has passed its Online Safety Bill:

"In simple terms, the law requires gaming companies and other online providers to ensure that children and young people are safe online. Until now, it has been the companies that genuinely care about children and young people who have contacted us. The DSA will force a change in many companies – and create a significant need for services like Amanda," she says.

Hege Tokerud, CEO of Aiba.

Difficult to distinguish regular chats from risky conversations

Amanda can be integrated into chat services in games or other platforms. It works by analyzing the text content of the chat and attempting to identify potential risks. If the exchange is flagged as a high-risk conversation, the AI can automatically pause it. Before it can resume, a human needs to review it.
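The flow described above – score each incoming message, flag high-risk conversations, and pause them until a human reviews – can be sketched roughly as follows. This is a minimal illustration only: the keyword-based `risk_score` and the threshold are hypothetical stand-ins, not Aiba's actual model, which uses trained AI language models.

```python
# Illustrative sketch of a flag-and-pause moderation pipeline.
# The risk model below is a keyword heuristic standing in for a
# trained language model; the threshold value is also hypothetical.

HIGH_RISK_THRESHOLD = 0.8

def risk_score(message: str) -> float:
    """Hypothetical per-message risk estimate in [0, 1]."""
    suspicious = {"secret", "don't tell", "send photo"}
    hits = sum(1 for phrase in suspicious if phrase in message.lower())
    return min(1.0, hits / 2)

class Conversation:
    def __init__(self):
        self.messages = []
        self.paused = False  # paused conversations await human review

    def add_message(self, text: str) -> None:
        if self.paused:
            raise RuntimeError("Conversation paused pending human review")
        self.messages.append(text)
        if risk_score(text) >= HIGH_RISK_THRESHOLD:
            self.paused = True  # auto-pause before the chat can continue

    def human_review(self, safe: bool) -> None:
        # A moderator decides whether the conversation may resume.
        self.paused = not safe
```

The key design point is that the AI only pauses; it never makes the final call. Resuming a flagged conversation always requires a human decision.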

However, distinguishing between regular conversations among children and those involving potential abusers is not straightforward, says Tokerud:

"We don't do this because it's easy. It's very difficult! No one has achieved this before us. But we see that we are getting close; we have a good solution that is about to be usable."

A commonly used method among online abusers is to pretend to be a child, contact peers through a chat platform, and build trust. Aiba's technology can, on average, identify such conversations in less than 20 message exchanges, sometimes far fewer.

Amanda is up to 30 times more efficient

According to Aiba's data, Amanda can be up to 30 times more efficient and cost-effective than manual moderation. This frees human moderators to focus on specific cases and saves time. As Tokerud mentioned earlier, the number of online abuses that need to be identified and addressed has skyrocketed. In that light, efficiency quickly becomes a pressing need.

"According to the police, the only way we can reduce the number of online abuses is to stop them before they even happen. Punishing the abusers doesn't help. By then, the abuse has already occurred," Tokerud explains.

Preventing abuse is further complicated by the fact that both children and abusers use linguistic tricks to bypass automatic filters. For example, words can be replaced with emojis, text can be written vertically, or letters can be replaced with other characters. This is something Amanda's language models are trained on, and eventually, Aiba will also train the algorithms on audio and visual content. The more tools Amanda gets, the more conversations can be intercepted.
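The evasion tricks mentioned – emoji substitutions, "vertical" text, and character swaps – are typically countered by normalizing the text before it reaches a classifier. The sketch below illustrates the idea; the substitution tables are invented examples, not Aiba's actual training data or methods.

```python
# Illustrative sketch: normalizing common filter-evasion tricks
# before classification. Substitution tables are examples only.

CHAR_SUBS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)
EMOJI_SUBS = {"📷": "photo"}  # map emojis back to the words they replace

def normalize(text: str) -> str:
    # Rejoin "vertical" text written one character per line.
    lines = text.splitlines()
    if len(lines) > 2 and all(len(line.strip()) <= 1 for line in lines):
        text = "".join(line.strip() for line in lines)
    # Replace emojis with words, then undo character substitutions.
    for emoji, word in EMOJI_SUBS.items():
        text = text.replace(emoji, word)
    return text.translate(CHAR_SUBS).lower()
```

For example, `normalize("s3nd m3 a 📷")` yields `"send me a photo"`, which a downstream language model can then score like any plain-text message.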

Tokerud believes this is essentially an arms race:

"Abusers share methods and procedures with each other, causing the problem to spiral out of control. We estimate that there are between one and a half million abusers working day and night to groom and exploit children and young people."

Eight years of research

The team behind Aiba has its roots in the information security research group at NTNU in Gjøvik, with significant contributions from Patrick Bours, a professor of behavioural biometrics. Bours is from the Netherlands, and it was through him that the story of Amanda Todd became central to the project.

Tokerud met Bours in 2018, and together they began to examine the issue. They quickly discovered the enormous extent of the abuses and, most importantly, the lack of tools to handle them.

Today, Aiba has nine employees and operates from Oslo Science Park and offices in Gjøvik, bringing research into real-world applications. Even though they deal with dark issues, Tokerud emphasizes that it doesn't affect Aiba's work environment or the positive culture they've cultivated:

"A good culture is essential, and our industry is characterized by seriousness. However, we shouldn't be too serious all the time, so humor, openness, and generosity are crucial. We recruit across all age groups, genders, nationalities, and disciplines. We're building a company that addresses a particularly important and challenging global issue, so we need to attract the most talented individuals."

If you want to read more about Aiba and their work, you can find more on their website.