Radiant Updates | October 14

Product updates from Radiant AI: streamlined ingestion from Sentry and Langfuse, keyword checks, on-the-fly evaluators, and data analysis in Pandas.

Radiant is the Enterprise AI platform to take your idea from prototype to production deployment. From security and governance to evaluations and anomaly detection, we make it simple to make AI a critical part of your business. 

After a brief hiatus we’re back to regular updates from the Radiant team. Over the past few months we’ve been heads down refining our product, talking to new customers and deploying new features. 

What we’ve learned from our customers is that frameworks for building GenAI products and then writing “evaluations” for them are a dime a dozen. LLM evaluations, like production log monitors, are just a mechanical way to turn unstructured data into something you can quantify and compare. The hard part, as always, is interpreting these product signals to understand how your users engage and what will serve them better.

We’re building our platform to provide product analytics for teams building with generative AI. Once AI products reach sufficient scale, teams use Radiant to understand how users engage with them.

Radiant allows product teams to: 

  • Quickly see and characterize what users send to models and what the models return, using evaluators, filters, and clustering.

  • See interactions as the user sees them, with detailed visibility into complex multi-turn exchanges.

  • Understand how well AI is actually meeting business objectives by integrating with external business metrics.

Product Updates

Easy Ingestion from all of your existing business data

Radiant is designed to fit into the observability and logging ecosystem of enterprise teams. We’re happy to announce that we now support integrations with the popular monitoring platforms Langfuse and Sentry. Use Radiant to ingest business KPIs, observability and logging data, and GenAI evals that you run yourself or in another platform.
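As a rough sketch of what this can look like in practice, here is how a team might push an externally computed KPI or eval score into Radiant over HTTP. The endpoint path, payload shape, and RADIANT_API_KEY environment variable are illustrative assumptions, not a documented Radiant API.

```python
# Minimal sketch of forwarding externally computed signals into Radiant.
# The endpoint, payload shape, and API key variable are assumptions for
# illustration, not a documented Radiant API.
import os

import requests

RADIANT_URL = "https://api.radiant.example/v1/ingest"  # hypothetical endpoint

record = {
    "source": "langfuse",            # or "sentry", or your own pipeline
    "trace_id": "trace_123",
    "metrics": {
        "resolution_rate": 0.82,     # business KPI computed outside Radiant
        "helpfulness_eval": 4.5,     # GenAI eval run in another platform
    },
}

resp = requests.post(
    RADIANT_URL,
    json=record,
    headers={"Authorization": f"Bearer {os.environ['RADIANT_API_KEY']}"},
    timeout=10,
)
resp.raise_for_status()
```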

Find results faster with keyword checks 

Keyword checks allow users to quickly identify specific keywords or phrases in any number of interactions, including a single instance within a complex multi-turn interaction. Users can provide a simple keyword or a regular expression, describe the desired output format, and specify the time range of responses to check. The resulting metric becomes part of the dashboard and can be filtered to identify specific interactions or traces.
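To make the inputs concrete, here is a minimal sketch of what a keyword check definition might contain. The field names are illustrative rather than Radiant’s actual schema.

```python
# Illustrative keyword-check definition; field names are assumptions,
# not Radiant's actual schema.
import re
from datetime import datetime, timedelta, timezone

keyword_check = {
    "name": "refund_mentions",
    "pattern": r"\brefund(ed|s)?\b",   # plain keyword or regular expression
    "output": "count_per_trace",       # desired output format for the metric
    "window": {                        # time range of responses to check
        "start": (datetime.now(timezone.utc) - timedelta(days=7)).isoformat(),
        "end": datetime.now(timezone.utc).isoformat(),
    },
}

# The same pattern applied to a single model response.
response = "Your refund has been processed."
print(bool(re.search(keyword_check["pattern"], response, re.IGNORECASE)))  # True
```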

Evaluate Traces on the fly

Identifying where models are not working can be a tedious process, especially if there are multiple interactions per session. Users investigating an issue can now add keyword checks or custom evaluators on the fly and apply them to a session, a trace, or a selection of traces.
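For a sense of what such a custom evaluator can be, here is a small sketch of one applied to a selection of traces. The trace structure shown is an assumption for illustration only.

```python
# Sketch of an on-the-fly evaluator applied to a selection of traces.
# The trace structure here is assumed for illustration.
import re

def contains_apology(trace: dict) -> bool:
    """Flag traces where the model apologized, a common sign of a refusal."""
    return any(
        re.search(r"\b(sorry|apologi[sz]e)\b", turn["output"], re.IGNORECASE)
        for turn in trace["interactions"]
    )

selected_traces = [
    {"id": "t1", "interactions": [{"output": "Sure, here is the answer."}]},
    {"id": "t2", "interactions": [{"output": "I'm sorry, I can't help with that."}]},
]

flagged = [t["id"] for t in selected_traces if contains_apology(t)]
print(flagged)  # ['t2']
```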

Analyze Data in Pandas

For users looking to answer more specific questions or apply more sophisticated techniques, we now offer an easy export to Pandas for analysis in notebooks. Simply select traces or interactions as a dataset and Radiant will produce code for use in your notebook. Using Radiant’s unified data model, you can select the specific fields you need for your analysis, from model details to keywords and evaluation metrics.
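The generated notebook code might look roughly like the sketch below; the file name and column names are placeholders, not Radiant’s actual output.

```python
# Sketch of the kind of notebook code an export might produce; the file name
# and column names are placeholders, not Radiant's actual output.
import pandas as pd

# Exported dataset of selected traces/interactions (assumed JSONL export).
df = pd.read_json("radiant_export.jsonl", lines=True)

# Pick the fields needed for this analysis: model details, keyword hits,
# and evaluation metrics from the unified data model.
columns = ["trace_id", "model", "keyword_refund_mentions", "helpfulness_eval"]
analysis = df[columns]

# Example: average helpfulness score per model.
print(analysis.groupby("model")["helpfulness_eval"].mean())
```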

Model Support

Radiant is designed to work with all major model providers, including OpenAI, Anthropic, Cohere, Azure, and Gemini, as well as on-prem configurations.

This past month we added support for the OpenAI o1 model family.

About Radiant

Radiant is the Enterprise AI platform to take you from idea to production deployment. From security and governance to scaling and anomaly detection, we make it simple to make AI a critical part of your business. 

Try out a demo here, sign up here to get your own instance, or reach out to our founders directly at [email protected].

We’re also hiring. If you know someone great who is interested in helping every company build AI into their products and operations, we’d love an introduction.