Radiant Updates | June 2

Product updates from Radiant AI: Outlier Detection Models

Our product team has been busy this week shipping significant improvements to our anomaly detection framework, so there are fewer front-end updates than usual. Thank you for your continued support and collaboration; it directly helps us keep shipping great products for our customers.

Product Updates

[Research preview] Outlier detection models trained for your use case

Working with our customers, we have frequently seen that LLMs make a good baseline for finding outliers in large datasets, especially models tuned for large context windows.

The problem is that they often lack the nuance to understand the particulars of a use case. In practice this shows up as good overall accuracy but poor recall: when the model is unsure, it tends to let genuine outliers pass, producing a high number of false negatives. How pronounced this is depends on the prompting and on the specific model.

We have found that providing the LLM with similar inputs as reference data, together with computed metrics for that reference data, boosts the accuracy of the model.
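To make that concrete, here is a minimal sketch of the idea, not our production implementation: summary statistics are computed over the reference values and packed into the prompt alongside a handful of examples. The call_llm argument is a placeholder for whichever chat client you use.

    import statistics

    def build_outlier_prompt(candidate: float, reference: list[float]) -> str:
        # Computed metrics give the model grounding it would otherwise have to guess at.
        mean = statistics.fmean(reference)
        stdev = statistics.stdev(reference)
        examples = ", ".join(f"{v:g}" for v in reference[:20])
        return (
            "You are reviewing records for anomalies.\n"
            f"Reference values from the same use case: {examples}\n"
            f"Reference metrics: mean={mean:.2f}, stdev={stdev:.2f}, "
            f"min={min(reference):g}, max={max(reference):g}\n"
            f"Candidate value: {candidate:g}\n"
            "Answer 'outlier' or 'normal', then give a one-sentence reason."
        )

    def is_outlier(candidate: float, reference: list[float], call_llm) -> bool:
        # call_llm(prompt) -> str is assumed; wire it to your LLM of choice.
        reply = call_llm(build_outlier_prompt(candidate, reference))
        return reply.strip().lower().startswith("outlier")

In practice the reference inputs and metrics mirror whatever features matter for the use case (counts, durations, amounts), but the structure of the prompt stays the same.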

We have started fine-tuning models for our alpha customers' use cases to boost performance even further. Initial results are very promising, especially in improving recall substantially.

These models will become a driver for outlier detection that complements the statistical methods already implemented in the platform.
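As an illustration of how the two can work together (a sketch of the general pattern, not the platform's exact pipeline): a cheap statistical screen flags clear outliers on its own and hands borderline points to the LLM-based check, reusing the is_outlier helper from the sketch above.

    import statistics

    def detect_outliers(values, reference, call_llm, z_cutoff=3.0, review_band=2.0):
        # Cheap first pass: z-score against the reference data.
        mean = statistics.fmean(reference)
        stdev = statistics.stdev(reference) or 1.0
        flagged = []
        for v in values:
            z = abs(v - mean) / stdev
            if z >= z_cutoff:
                flagged.append(v)  # clear statistical outlier, no LLM call needed
            elif z >= review_band and is_outlier(v, reference, call_llm):
                flagged.append(v)  # borderline case, the LLM makes the call
        return flagged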

About Radiant

Radiant is the Enterprise AI platform to take you from idea to production deployment. From security and governance to scaling and resiliency, we make it simple to make AI a critical part of your business. 

Try out a demo here, sign up here to get your own instance, or reach out to our founders directly at [email protected].

We’re also hiring. If you know someone great who is interested in helping every company build AI into their products and operations, we'd love an introduction.