Veritaserum - Your AI Truth Serum
MICS Capstone Project Fall 2024

In today's digital world, demonstrating trust and safety is increasingly challenging, yet ensuring authenticity in AI interactions is crucial. Veritaserum is your solution: an AI truth serum that ensures honesty, reliability, and trustworthiness in Large Language Models (LLMs). Acting as a digital signature for AI, Veritaserum confirms the reliability and trustworthiness of the AI you engage with, safeguarding you from manipulation.

Our mission is to foster transparency and safety in AI, empowering users with confidence in the AI they interact with. Veritaserum also provides a real-time control channel through which model providers can signal updates to their models, ensuring consumers are always informed and have access to the latest, most secure versions. Whether you're a developer, a researcher, or simply seeking trustworthy AI, Veritaserum enhances safety and transparency, contributing to the responsible adoption of this transformative technology.
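The control channel is not specified further here; as one hedged illustration, a provider might publish a signed version announcement that clients poll to learn whether their cached model version is stale. Everything in the sketch below (function names, message fields, the use of Ed25519) is an assumption made for illustration, not the project's actual design.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_announcement(private_key, model_id: str, new_version: str) -> dict:
    # Hypothetical announcement a provider publishes when a model is updated.
    message = {
        "model_id": model_id,
        "version": new_version,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(message, sort_keys=True).encode()
    return {"message": message, "signature": private_key.sign(payload).hex()}


def is_update_available(public_key, announcement: dict, cached_version: str) -> bool:
    # A client verifies the announcement against the provider's public key,
    # then compares it with its locally cached version.
    payload = json.dumps(announcement["message"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(announcement["signature"]), payload)
    except InvalidSignature:
        return False  # untrusted announcement; ignore it
    return announcement["message"]["version"] != cached_version


provider_key = Ed25519PrivateKey.generate()
ann = make_announcement(provider_key, "example-llm", "v2")
print(is_update_available(provider_key.public_key(), ann, "v1"))  # True
```

Signing the announcement lets clients ignore spoofed update notices that do not verify against the provider's public key.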

Veritaserum tackles two critical challenges in demonstrating the trust and safety of Large Language Model (LLM) based systems. To enhance safety, it explores methods to bolster resilience, ensuring LLMs generate safe and unbiased outputs even when faced with adversarial inputs, particularly toxic prompts. To establish trust, the project investigates ways to track LLM inference over time and verify the authenticity of the serving model, drawing inspiration from the C2PA standard.
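As a rough sketch of the C2PA-inspired trust idea (not the project's actual implementation), the example below has a model provider sign a per-response provenance manifest that binds the model's identity and version to a hash of the response, which a consumer can then verify. All function names and manifest fields are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_response(private_key, model_id: str, model_version: str,
                  response_text: str) -> dict:
    # Build and sign a provenance manifest for one LLM response.
    manifest = {
        "model_id": model_id,
        "model_version": model_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Bind the manifest to the exact response content via its digest.
        "response_sha256": hashlib.sha256(response_text.encode()).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": private_key.sign(payload).hex()}


def verify_response(public_key, signed: dict, response_text: str) -> bool:
    # Check that the manifest is authentic and matches the response.
    manifest = signed["manifest"]
    if manifest["response_sha256"] != hashlib.sha256(response_text.encode()).hexdigest():
        return False  # response was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Example: the provider signs a response; the consumer verifies it.
provider_key = Ed25519PrivateKey.generate()
signed = sign_response(provider_key, "example-llm", "2024-11", "Hello!")
assert verify_response(provider_key.public_key(), signed, "Hello!")
assert not verify_response(provider_key.public_key(), signed, "Tampered!")
```

Because the manifest embeds a digest of the response text, tampering with either the response or the manifest causes verification to fail.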

By tackling these important issues of LLM security and reliability, the project contributes to the responsible and safe adoption of these advanced technologies across a wide range of applications.

Last updated: November 23, 2024