Many career-driven individuals want to find something they are incredibly passionate about, and Forbes 30 Under 30 responsible AI expert Alexandra Ebert is no different.
Ebert, a professional in AI and data privacy, advises the UK’s FCA on data issues and chairs the IEEE Synthetic Data IC Expert Group. Her day-to-day role is Chief AI & Data Democratisation Officer at MOSTLY AI, one of the best-funded startups in Austria.
However, she didn’t follow the expected route into AI. She began in marketing and economics, briefly tried software engineering, and discovered a lasting fascination with technology – particularly artificial intelligence (AI).
“It’s such a wide-reaching technology that has massive, massive transformational impact on the economy, on societies, and on ethics.”
Curious about the capabilities of AI, Ebert convinced a professor, who taught the fundamentals of the modern tech stack and a little about AI, to supervise a thesis exploring how AI and machine learning (ML) would develop in Europe under the new GDPR.
During this time, she interviewed many experts and policymakers and these conversations sparked her deeper interest in the ethics and societal impact of AI.
One of the conversations was with a founder of MOSTLY AI.
What is MOSTLY AI?
MOSTLY AI is best known for its synthetic data capabilities. The core of the business is using generative AI to produce synthetic data that anonymises existing data assets without losing information.
But how is the data synthesised?
This process is more than masking or redacting – Ebert explained that it works in three steps:
- The generator trains on the original dataset, learning its correlations, structure, and time-based patterns
- It then produces entirely new synthetic data “from scratch”, which resembles the real dataset statistically but contains no real individuals
- Built-in privacy mechanisms exclude unique records – for example, an outlier like a billionaire’s bank account or a handful of rare medical cases – while retaining the broader statistical patterns
Unlike older anonymisation technologies, which strip datasets down to a few usable columns and risk re-identification, synthetic data preserves around 99.5% of the information. That makes it both privacy-safe and analytically valuable.
“Underneath that is this mission that data should belong to everyone, and that it has the potential to empower everyone … when we look into the area of AI for good … the big challenge here is none of the societal AI for good benefits can happen if the data lives behind closed doors.”
Policy driving change in access to data
Changing policy to reshape how data and AI are accessed is one of the goals that Ebert and other experts are working to implement.
She noted that while the digital economy is expected to drive GDP, few governments have frameworks to support it. Large organisations also lack incentives to open data assets.
However, Ebert believes that in time AI will become a foundational technology, much like electricity: “the future is, very clearly, open synthetic data”, giving societies access to information in a shift comparable to the printing press or the Internet.
Why is diversity in AI important?
Ebert is clear when it comes to diversity and responsible AI.
“Responsible AI cannot be put on the shoulders of engineers and data scientists … it needs diverse conversation – you need to have input from the legal side of things, with ethicists and philosophers even.
“The more perspectives you can bring in from different professions, the better your result is going to get. The more diversity you can bring in from different cultures, the better your product is going to get.”
She stressed that while many organisations state diversity as a goal, their systems and incentives rarely support it. Parents and society, she added, must also think about how they encourage young people, especially girls, to pursue STEM.
Despite these challenges, Ebert is hopeful that as AI tools become more user-friendly, more people will be able to contribute without specialist training: “I’m quite optimistic that we will go from AI specialists to AI generalists … If everybody has basic AI literacy, they can meaningfully contribute.”
Surprises when exploring AI
On the surface, privacy seemed straightforward. Yet Ebert was surprised to find that it was actually “a double-edged sword”.
She pointed out that outdated anonymisation often removes minority and edge cases from datasets, making models less representative.
“You lose the minorities. You lose the diversity. Which means that you don’t see what types of customers or citizens you have and how you could cater to their needs.”
Privacy, she added, is sometimes also used as a shield.
“Data owners, when asked by the government to release data … would say ‘due to privacy, we can’t give up our data’ – it’s often used as a shield not to democratise data. With modern technologies like synthetic data … policy makers, organisations, and societies should start asking the question ‘what is the cost of not sharing data?’.”
For Ebert, trust in AI is not, and should not be, a tick-box exercise in compliance, but an enabler of adoption.
“It’s not only a necessity for some legal compliance checks, but it’s really the smarter, more sustainable way to do business and benefit from AI tools in the long run.”
This article originally appeared in the October ’25 magazine issue of Electronic Specifier Design – see ES’s Magazine Archives for more featured publications.
