As artificial intelligence tools become more powerful and accessible, the ability to generate convincing synthetic media has grown rapidly. While this technology can be used creatively, it also introduces new risks for businesses and institutions that must verify the authenticity of digital content. In response to these challenges, German startup Neuramancer AI Solutions GmbH has raised €1.7 million in pre-seed funding to advance its technology designed to detect deepfakes and other forms of manipulated media.
The funding round was led by Vanagon Ventures, with participation from Bayern Kapital and a group of venture capital firms and business angels. Additional investors include the Nuremberg-based venture capital firm ZOHO.VC and the family office Lightfield Equity. The investment consortium also includes experienced executives from the financial services and technology sectors who are supporting the company as strategic advisors.
Addressing the growing deepfake challenge
Generative artificial intelligence has significantly improved the quality and realism of synthetic images and videos. As a result, manipulated media can increasingly resemble authentic content, making it harder for individuals and organisations to detect fraud or misinformation.
This development poses particular risks in industries where visual evidence plays an important role in decision making. For example, insurance companies often rely on photos and videos submitted by customers when evaluating claims related to accidents or property damage.
According to industry estimates, insurance fraud already results in billions of euros in financial losses each year. With the emergence of generative AI tools capable of altering images or producing entirely synthetic content, new forms of fraud are becoming possible. These include manipulated photos of damage, altered documents and even deepfake video calls designed to deceive investigators.
As AI generated media continues to evolve, the need for reliable detection technologies is becoming increasingly urgent.
Detecting manipulation through image noise patterns
Neuramancer AI Solutions has developed a technology platform designed to detect manipulated images and videos by analysing statistical irregularities within the media itself.
Rather than focusing on the visible content of an image or video, the system examines structural characteristics such as noise patterns and artefacts that appear during the generation or modification of digital media. These subtle irregularities can provide clues indicating whether a file has been manipulated.
By analysing these patterns, the platform aims to identify deepfakes and edited media even when the visual content appears realistic to the human eye.
The company believes this approach offers advantages over detection systems that rely primarily on semantic analysis or black-box artificial intelligence models.
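Neuramancer has not published the details of its method, so as a purely illustrative sketch of the general idea of noise-inconsistency analysis, the snippet below estimates the noise level in each block of an image from a high-pass residual and flags blocks whose noise statistics deviate from the rest of the picture. The function name, block size and threshold are assumptions for illustration, not the company's actual algorithm.

```python
import numpy as np

def flag_inconsistent_blocks(img, block=16, z_thresh=3.0):
    """Flag image blocks whose noise level is inconsistent with the
    rest of the image. Noise is estimated per block as the standard
    deviation of a simple high-pass residual (each pixel minus the
    mean of its four neighbours). Returns a boolean grid in which
    True marks a suspicious block.

    Illustrative stand-in only; real forensic systems use far more
    sophisticated statistical models of sensor noise."""
    img = img.astype(np.float64)
    # High-pass residual suppresses scene content, leaving mostly noise.
    res = img[1:-1, 1:-1] - 0.25 * (
        img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
    )
    h, w = res.shape
    gh, gw = h // block, w // block
    stds = np.empty((gh, gw))
    for i in range(gh):
        for j in range(gw):
            patch = res[i * block:(i + 1) * block, j * block:(j + 1) * block]
            stds[i, j] = patch.std()
    # Robust z-score against the median block: a large |z| means the
    # block's noise strength is an outlier relative to the image.
    med = np.median(stds)
    mad = np.median(np.abs(stds - med)) + 1e-9
    z = (stds - med) / (1.4826 * mad)
    return np.abs(z) > z_thresh
```

A region pasted in from another photo, or synthesised by a generative model, typically carries a different noise signature than the surrounding camera noise, which is what a scheme like this would pick up even when the visible content looks plausible.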
Providing forensic analysis for investigations
Beyond simply identifying manipulated content, the Neuramancer platform also produces detailed forensic reports that explain how and where alterations may have occurred.
These reports highlight specific sections of an image or video that show signs of modification and provide contextual information about the nature of the manipulation. Such insights can assist organisations in investigating suspicious cases and determining whether submitted media evidence has been altered.
For sectors such as insurance, finance and security, this forensic capability can support fraud prevention efforts while providing investigators with clearer evidence for their assessments.
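To make the idea of a forensic report concrete, here is a minimal sketch of how block-level detector output might be turned into a human-readable summary. The input format (a boolean grid of flagged blocks) and the function name are hypothetical; the article does not describe the actual structure of Neuramancer's reports.

```python
def forensic_report(flags, block=16):
    """Convert a boolean grid of suspicious blocks (True = likely
    altered) into a plain-text summary listing the affected pixel
    regions. The grid could come from any block-level detector."""
    lines = []
    for i, row in enumerate(flags):
        for j, hit in enumerate(row):
            if hit:
                lines.append(
                    f"region y={i * block}-{(i + 1) * block - 1}, "
                    f"x={j * block}-{(j + 1) * block - 1}: "
                    "statistics inconsistent with the rest of the image"
                )
    if not lines:
        return "No signs of manipulation detected."
    return f"{len(lines)} suspicious block(s) found:\n" + "\n".join(lines)
```

A report that names concrete regions and the nature of the anomaly, rather than returning a bare "fake" score, is what makes detector output usable as evidence in a claims investigation.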
A focus on transparent and explainable AI
Neuramancer positions its technology as part of a broader movement toward transparent and explainable artificial intelligence systems. While many AI detection tools rely on opaque algorithms, the company emphasises a scientific approach that provides interpretable results.
This transparency is expected to become increasingly important as regulatory frameworks for artificial intelligence continue to evolve across Europe and other regions. Organisations may need to demonstrate not only that detection systems work but also how they reach their conclusions.
The company believes explainable AI could become a competitive advantage for organisations seeking reliable ways to identify manipulated digital content.
Scaling the platform and expanding the team
With the newly raised funding, Neuramancer plans to scale its deepfake detection platform and expand its development and research teams.
The investment will also support commercialisation efforts as the company prepares to bring its technology to market. While the platform has potential applications across multiple industries, the company intends to focus initially on the insurance sector, where the need for reliable fraud detection tools is particularly strong.
As generative AI technologies continue to reshape how digital content is created and shared, Neuramancer aims to position itself as a provider of tools that help organisations verify the authenticity of images and videos.
By combining scientific analysis with explainable artificial intelligence, the company hopes to provide businesses with a reliable way to detect synthetic media and mitigate the growing risks associated with deepfakes.