Connect to your Large Language Model (LLM)
ThoughtSpot provides a fully managed "ThoughtSpot Enabled Auto Select" LLM configuration that gives you access to the latest, high-performance models with no setup required. We strongly recommend this option for the best performance and experience.
For organizations with strict compliance or governance requirements that prevent the use of a managed service, ThoughtSpot also supports connecting to your own private LLM endpoints. This "Bring Your Own LLM Key" (BYOLLM Key) option allows you to exercise full control over your AI stack.
When you connect to your own LLM, you are responsible for its performance, cost, and maintenance.
Enabling your own LLM (BYOLLM Key)
Enabling a BYOLLM Key connection is a guided process managed by our support team to ensure compatibility and performance.
To enable a custom LLM connection, please contact ThoughtSpot Support. Our team will work with you to perform a compatibility check and guide you through the setup process.
Prerequisites for setup
Before contacting ThoughtSpot Support, gather the required credentials for your specific LLM provider. This will expedite the setup and compatibility check.
We currently support connections to Azure OpenAI, Google Vertex AI, Amazon Bedrock, and custom LLM gateways.
For Azure OpenAI
You need:
- Your endpoint URL
- An API key
- The deployment names for your default models
For steps on how to create an Azure OpenAI resource, see Create and deploy an Azure OpenAI in Microsoft Foundry Models resource in Microsoft’s Azure documentation.
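To sanity-check these credentials before the compatibility review, you can run a minimal request against one of your deployments. The sketch below assumes the `openai` Python package; the endpoint, API key, API version, and deployment name are placeholders for your own values.

```python
# Minimal connectivity check for an Azure OpenAI deployment.
# The endpoint, key, API version, and deployment name below are
# illustrative placeholders; substitute your own resource details.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # your endpoint URL
    api_key="YOUR-API-KEY",                                   # your API key
    api_version="2024-06-01",                                 # an available API version
)

# "your-deployment-name" is the deployment name for one of your default models.
response = client.chat.completions.create(
    model="your-deployment-name",
    messages=[{"role": "user", "content": "Reply with OK."}],
)
print(response.choices[0].message.content)
```

If this call returns a completion, the endpoint URL, API key, and deployment name are ready to hand to ThoughtSpot Support.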
For Google Vertex AI
You need:
- The JSON key file for a service account with the Vertex AI User role
- Your Google Cloud project ID
- The region for your models (for example, us-central1)
- The model IDs for your models
For steps on how to create Service account keys, see Create and delete service account keys in Google Cloud’s IAM documentation.
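If you want to confirm that the service account key, project, region, and model IDs work before contacting support, a short smoke test helps. The sketch below assumes the `google-cloud-aiplatform` package; the key path, project ID, region, and model ID are placeholders.

```python
# Minimal connectivity check for Vertex AI using a service-account key.
# The key path, project ID, region, and model ID are placeholders.
import os

import vertexai
from vertexai.generative_models import GenerativeModel

# Point Google application-default credentials at the JSON key file
# for a service account that holds the Vertex AI User role.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")  # one of your model IDs
print(model.generate_content("Reply with OK.").text)
```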
For Amazon Bedrock
You need:
- Your AWS region
- An API key (a long-term API key is required)
- The inference profile ARN for the supported LLM
For steps on creating a long-term API key, see Generate an Amazon Bedrock API key in the AWS documentation.
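You can verify the region and inference profile ARN with a single Converse call. The sketch below assumes `boto3` with AWS credentials configured; recent SDK versions also accept a long-term Bedrock API key through the `AWS_BEARER_TOKEN_BEDROCK` environment variable. The region, account ID, and profile ARN are placeholders.

```python
# Minimal connectivity check for Amazon Bedrock using the Converse API.
# Assumes boto3 with AWS credentials configured; recent SDK versions also
# read a long-term Bedrock API key from AWS_BEARER_TOKEN_BEDROCK.
# The region and inference profile ARN below are placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # your AWS region

# Pass the inference profile ARN for your supported LLM as the model ID.
response = client.converse(
    modelId="arn:aws:bedrock:us-east-1:123456789012:inference-profile/your-profile",
    messages=[{"role": "user", "content": [{"text": "Reply with OK."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```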
For a custom LLM gateway
You need:
- Your gateway URL (the base URL of your gateway endpoint)
- The default model name your gateway expects
- Authentication details:
  - For API Key Auth: your API key
  - For OAuth 2.0 Auth: your token endpoint URL, client ID, and client secret

Your gateway must be fully compatible with the OpenAI /v1/chat/completions API specification and have tool calling enabled.
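In practice, compatibility means your gateway accepts an OpenAI-style chat completions request, including a `tools` array. The sketch below exercises both supported auth flows against a hypothetical gateway using the `openai` and `requests` packages; every URL, credential, and model name is a placeholder.

```python
# Sketch of the two checks a custom gateway must pass: accepting an
# OpenAI-style /v1/chat/completions request and honoring a tool definition.
# All URLs, credentials, and the model name below are placeholders.
import requests
from openai import OpenAI

# For OAuth 2.0 Auth: exchange client credentials for a bearer token.
# (For API Key Auth, skip this step and pass your API key directly.)
token = requests.post(
    "https://auth.example.com/oauth/token",  # your token endpoint URL
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR-CLIENT-ID",
        "client_secret": "YOUR-CLIENT-SECRET",
    },
).json()["access_token"]

# The OpenAI client works against any /v1/chat/completions-compatible base URL.
client = OpenAI(base_url="https://gateway.example.com/v1", api_key=token)

response = client.chat.completions.create(
    model="your-default-model",  # the default model name your gateway expects
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        },
    }],
)
# A compatible gateway returns either a text reply or a tool call here.
print(response.choices[0].message)
```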
Important notice about Spotter 3 Early Access
Recommended LLMs
For the best results, we recommend using the following LLMs or their equivalents for each version of Spotter.
LLM update policy
The following policies apply to updates of LLMs used with ThoughtSpot.
Updates for ThoughtSpot-managed LLMs
We are dedicated to providing you with a best-in-class experience and to evolving continuously with advancements in AI. We notify you of updates to our managed LLMs through our public documentation and weekly release notes.
Updates for BYOLLM Keys
If you use your own LLM keys, we will notify you of changes to our recommended LLMs directly through our customer support teams and documentation.
Mandatory update period for BYOLLM
We provide a three-week period to update your models according to the new recommendations. This update is essential for optimal performance and continued access to our latest features. If you do not update within this period, you may experience diminished performance or lose access to certain features. ThoughtSpot Support is committed to communicating and collaborating with you to ensure the best possible experience.
Best practices
For the best experience, access to the latest LLMs, and the lowest maintenance overhead, we strongly recommend ThoughtSpot Enabled Auto Select. This ensures you always have access to the latest models, fully managed and optimized by ThoughtSpot.
The BYOLLM Key path is a specialized option intended for organizations with specific, non-negotiable compliance or governance requirements that prevent the use of a managed LLM.