Radiant Updates | October 28
Product updates from Radiant AI: Introducing Investigations, Revised Projects Experience, Custom Evaluator Workflow, and Investigations Metrics
Radiant is a product analytics platform for teams building with Generative AI.
Radiant allows product teams to:
- Quickly see and characterize what users send to models and what models return, using evaluators, filters, and clustering.
- See model interactions as the user sees them, with detailed visibility into complex usage patterns.
- Understand how well AI is actually meeting business objectives by integrating with external business metrics.
Product Updates
New in Radiant: Investigations
We’re excited to release Investigations this week. In our discussions with leading product development teams, we noticed that once an issue is suspected, significant time is spent investigating its scale, marking examples, and then tracking whether engineering changes improve it over time.
We created Investigations so product teams can easily find recurring issues in their LLM applications, surface them, and automatically track how product quality changes week over week.
Revised Project Navigation
Radiant is a product analytics platform that allows engineers, product managers, and SMEs to share a collective understanding of how their AI projects are performing. With Investigations, we are moving beyond tracking simple usage metrics and LLM model performance. Users now have a completely redesigned project home page and navigation experience that emphasizes easy discovery of performance metrics, as well as the custom metrics and evaluators that form the backbone of Investigations.
Clearer session view for rapid investigations
When a product team identifies an issue, it's critical to understand where it occurs in a complex chain of interactions and whether it appears elsewhere. We've refreshed the sessions view to make investigating specific issues clearer and more streamlined.
Add custom fields and evaluations on the fly
When performing investigations, it's useful to quickly characterize LLM interactions with custom evaluations. Do the sample messages meet certain criteria? Do messages contain specific phrases or score a certain way?
We’ve streamlined the ability to add new evaluators and custom fields to existing projects. It’s easier than ever to create an evaluator, test it on a sample message, or even bulk test it against a pre-defined set of messages, such as a recent set of interactions or a random sample.
Once a new field is created, users can apply it to a window of messages and filter or view data based on these criteria.
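Radiant's own API isn't shown here, but the workflow — write an evaluator, bulk test it against a predefined set of messages or a random sample — can be sketched in plain Python. Every name below (the evaluator, the helper, the sample messages) is a hypothetical illustration, not Radiant's actual interface.

```python
import random

# Hypothetical sketch only -- these names do not come from Radiant's API.
def contains_refund_language(message: str) -> bool:
    """Evaluator: flag messages that mention refunds or cancellations."""
    phrases = ("refund", "cancel my subscription", "money back")
    text = message.lower()
    return any(p in text for p in phrases)

def bulk_test(evaluator, messages, sample_size=None, seed=0):
    """Run an evaluator over a set of messages, optionally a random sample."""
    if sample_size is not None:
        messages = random.Random(seed).sample(messages, sample_size)
    return {m: evaluator(m) for m in messages}

recent = [
    "How do I get a refund for my last order?",
    "What are your business hours?",
    "Please cancel my subscription today.",
]
results = bulk_test(contains_refund_language, recent)
flagged = [m for m, hit in results.items() if hit]
```

The point of the `seed` parameter is reproducibility: re-running the same bulk test against the same sample should flag the same messages, which matters once results feed into week-over-week comparisons.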
Save filter sets as investigations to keep track of ongoing issues
One workflow many Generative AI product teams follow to manage issues is saving examples of anomalous behavior to a spreadsheet. For many teams, the goal is to engineer a fix that gradually makes these issues disappear. While this process might seem simple, identifying the issues and repeating the analysis every week is time consuming and leaves little flexibility for deep dives into changes.
Radiant now makes it easy to save search results as an investigation and track how performance against a particular set of criteria changes week over week. This allows product teams to identify issues and track how their changes improve the performance of their application over time.
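The week-over-week tracking described above amounts to bucketing matched messages by week and computing a match rate per bucket. The sketch below shows that computation in plain Python; the record shape and the example filter are illustrative assumptions, not Radiant's data model.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical sketch: track how often a saved filter matches, week over week.
def week_start(d: date) -> date:
    """Return the Monday of the week containing d."""
    return d - timedelta(days=d.weekday())

def weekly_match_rate(records, matches_filter):
    """records: (date, message) pairs; returns {week_start: fraction matching}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for when, message in records:
        wk = week_start(when)
        totals[wk] += 1
        if matches_filter(message):
            hits[wk] += 1
    return {wk: hits[wk] / totals[wk] for wk in sorted(totals)}

records = [
    (date(2024, 10, 14), "The bot repeated itself twice."),
    (date(2024, 10, 15), "Great answer, thanks!"),
    (date(2024, 10, 21), "It repeated the same reply again."),
    (date(2024, 10, 22), "Worked perfectly."),
    (date(2024, 10, 23), "No problems here."),
]
rates = weekly_match_rate(records, lambda m: "repeat" in m.lower())
```

A falling match rate from one week's bucket to the next is the signal the spreadsheet workflow was trying to capture by hand: the engineering fix is making the flagged behavior rarer.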
About Radiant
Radiant is the Enterprise AI platform to take you from idea to production deployment. From security and governance to scaling and anomaly detection, we make it simple to make AI a critical part of your business.
Try out a demo here, sign up here to get your own instance, or reach out to our founders directly at [email protected].
We’re also hiring. If you know someone great who is interested in helping every company build AI into their products and operations, we'd love an introduction.