

Guidance Throughout & Even Simpler Chat Experience

Making products simpler is a harder task than adding features. Our Valentine’s Day release is all about giving customers what they asked for, and this time the overall theme is guidance and simplicity. With this release, we’ve eliminated several steps to streamline the conversational experience for advanced analytics. Previously you had to set up the analysis by first choosing the variable you wanted to understand before you could chat. Now, you can jump right into your conversation.

Automatic KPI Detection & Problematic Predictor Guidance

Misleading factors can hamper analytics when exploring multiple variables, and many users also need deeper KPI insights without realizing it. Our new release addresses both. Automatic Problematic Predictor Guidance identifies predictors that may distort results and suggests which factors to exclude, boosting analytical accuracy and preventing costly misinterpretations. Simultaneously, Automatic KPI Detection recognizes when your conversation implies a KPI-focused query and seamlessly transitions into an augmented analytics chat, removing the guesswork of manual switching.
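
To make the problematic-predictor side of this concrete, here is a minimal sketch in Python. It is purely illustrative and not Aible's actual implementation: the function name, thresholds, and rules (flagging near-unique identifier-like columns and near-perfect correlation with the target) are assumptions chosen for clarity.

```python
import pandas as pd

def flag_problematic_predictors(df: pd.DataFrame, target: str,
                                corr_threshold: float = 0.95,
                                cardinality_ratio: float = 0.9) -> dict:
    """Return {column: reason} for predictors that may distort results (illustrative rules only)."""
    flags = {}
    n_rows = len(df)
    for col in df.columns:
        if col == target:
            continue
        # Near-unique columns (e.g. row IDs) rarely generalize and can mislead a model.
        if df[col].nunique() / n_rows > cardinality_ratio:
            flags[col] = "high cardinality (behaves like an identifier)"
            continue
        # A numeric predictor almost perfectly correlated with the target
        # often signals leakage rather than genuine signal.
        if pd.api.types.is_numeric_dtype(df[col]) and pd.api.types.is_numeric_dtype(df[target]):
            corr = df[col].corr(df[target])
            if pd.notna(corr) and abs(corr) > corr_threshold:
                flags[col] = f"correlation of {corr:.2f} with the target (possible leakage)"
    return flags
```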

Auto-Recommend Context (Data) Search in Chat

We know that half the battle is often finding the right data. To tackle this, we’ve improved the Auto-Recommend Context Dataset Search. After submitting a question, our system automatically scans for relevant keywords and runs a dataset search in the background. You’ll then see a ranked list of matching data or document sets, along with explanations of why they’re a good fit. This means less time digging around for the right data source and more time uncovering valuable insights. It’s a frictionless way to ensure you’re always tapping into the most relevant information for your analytics needs.
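
For readers curious what ranked dataset recommendation can look like under the hood, here is a minimal sketch of keyword-style matching. It is an assumption for illustration only, not the product's actual search: the catalog, the TF-IDF scoring, and the function names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: dataset name -> short description.
CATALOG = {
    "sales_2024": "monthly sales revenue by region and product line",
    "support_tickets": "customer support tickets with categories and resolution times",
    "web_traffic": "daily website sessions, bounce rate and conversions",
}

def recommend_datasets(question: str, top_k: int = 3):
    names = list(CATALOG)
    vectorizer = TfidfVectorizer(stop_words="english")
    dataset_vectors = vectorizer.fit_transform([CATALOG[n] for n in names])
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, dataset_vectors).ravel()
    # Rank datasets by similarity; the score can back a "why this matched" explanation.
    return sorted(zip(names, scores), key=lambda pair: -pair[1])[:top_k]

print(recommend_datasets("Why did revenue drop in the west region last quarter?"))
```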

Beta: Adding Some of the Core Improvements of the DeepSeek Models, Without Using Those Models Themselves

First, DeepSeek talked about beating OpenAI with $5.6 million of training. Then people started writing articles saying "Stanford researchers develop AI reasoning model for mere $50, challenges OpenAI, DeepSeek." What is going on? Spoiler alert: neither case involves building a model from scratch. The Stanford research was intriguing, however, because it showed that adding reasoning abilities to a 70-billion-parameter model with just $50 of training and 1,000 examples of correct answers can beat more advanced models like o1, which likely has 300-500 billion parameters. Reasoning abilities - where a language model thinks through a problem, tries different approaches, and sometimes backtracks - can significantly improve a model. For example, both DeepSeek R1 and the most recent OpenAI models, o1 and o3, are reasoning models. Aible showed that just 100 examples can help a minuscule 3-billion-parameter model start reasoning, and just 1,000 good examples can help it match the latest OpenAI o3 model on a specific task. The key insight: with just a little help from business users (not data scientists), a 100X smaller model can perform as well as the latest state-of-the-art model built with billions of dollars of investment. See this LinkedIn post for details: Unlocking the Power of User Feedback: just 1000 examples from business users helps an AI match a 100X bigger model.
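
For context on what "just 1,000 good examples" means mechanically, here is a minimal sketch of supervised fine-tuning a small causal language model on worked reasoning traces. This is a generic recipe, not Aible's actual pipeline: the model placeholder, example format, and hyperparameters are assumptions for illustration.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL_NAME = "your-small-3b-causal-lm"  # placeholder: any ~3B open causal LM

# Roughly 1,000 worked examples collected from business users (hypothetical format):
# a question, the step-by-step reasoning, and the correct answer.
examples = [
    {"text": "Question: ...\nReasoning: step 1 ... step 2 ...\nAnswer: ..."},
    # ... ~1,000 such records
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

train_dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="reasoning-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=1e-5,
    ),
    train_dataset=train_dataset,
    # Causal-LM collator: labels mirror the input tokens, so the model learns
    # to reproduce the reasoning traces end to end.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
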
More Announcements Coming on March 17th

We have some very cool capabilities that we will announce at the NVIDIA GTC conference on March 17, 2025. Those updates are coming soon.