
Salesforce’s push into generative AI has raised new questions about data ownership and risk in enterprise contracts. In 2025, as organizations adopt tools like Einstein GPT, it’s critical to understand how Salesforce treats your data and AI-generated content under its terms.
CIOs, procurement leaders, legal teams, and IT managers should pay close attention to these clauses – they determine who owns the data and outputs, how that data may be used for AI training, and who is liable if AI goes awry. For a complete overview, read our ultimate guide to Salesforce Contract Legal Terms.
Getting clarity on Salesforce’s AI terms isn’t just legal nitpicking; it’s about protecting your organization. Without careful review, you might unknowingly allow Salesforce to use your customer data for its models, or you could be on the hook if an AI-generated error causes trouble.
Let’s break down the key points of Salesforce’s AI and IP terms, highlight potential risks, and suggest ways to negotiate more balanced terms.
How Salesforce Uses Customer Data for AI
Modern AI relies on data, so you need to know if your Salesforce data will feed into any models beyond your own use. Salesforce’s approach distinguishes between predictive AI features and generative AI:
- Predictive AI (global model training): Some Einstein predictive features (like forecasting or recommendations) improve by learning from aggregated customer data. By default, Salesforce may include your CRM data (anonymized) to train these global models that benefit all customers. However, Salesforce offers an opt-out. If you don’t want your data used in this way, you can request to exclude your org’s data from model training. Inaction is generally treated as consent to use the data.
- Generative AI (Einstein GPT): Generative features use large language models. Salesforce assures that when you use Einstein GPT to generate text or code, your prompts and data are not retained to train the model. They’re processed on the fly, and thanks to the Einstein Trust Layer, sensitive details are masked, and the AI provider keeps nothing. In short, using Einstein GPT will not leak your proprietary data into a shared AI brain.
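To make the masking idea concrete, here is a minimal, purely illustrative sketch of the general pattern a "trust layer" applies: redacting recognizable sensitive fields from a prompt before it leaves your environment. This is not Salesforce's actual Einstein Trust Layer implementation; the patterns and placeholder tokens are assumptions for demonstration only.

```python
import re

# Toy PII patterns for illustration only -- a production trust layer
# would use far more robust detection (named-entity recognition,
# field-level metadata, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Replace recognizable sensitive values with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

prompt = "Draft a follow-up to jane.doe@acme.com, phone 555-123-4567."
masked = mask_prompt(prompt)
print(masked)
# Only the masked version would be sent to the external model;
# the original values stay inside your own environment.
```

The point of the sketch is the contractual one: if masking happens before data reaches a third-party LLM, the provider never sees the raw values, which is what a zero-data-retention commitment should be paired with.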
Key considerations: If your data is being used to improve Salesforce’s AI, even anonymously, consider the privacy and competitive implications. Many companies – especially in regulated sectors – exercise the opt-out to keep data strictly isolated.
Make sure to document your preference (opt-out or opt-in) during negotiations or in writing.
Additionally, verify Salesforce’s claims in practice: review their documentation on data usage and confirm that any third-party LLMs involved are abiding by a zero-data-retention policy. It’s all about ensuring your customer data is handled in accordance with your risk tolerance and compliance needs.
AI Output Ownership & Intellectual Property
When Salesforce’s AI generates content (an email draft, a chat response, a code snippet), who owns that output? Under Salesforce’s standard terms, the data you input is yours, and Salesforce owns the platform and underlying models.
Salesforce doesn’t claim ownership of the specific content that gets generated for you – that output is generally considered part of your customer data or work product. In other words, if Einstein GPT helps you write something, you have the right to use that material as you see fit.
However, Salesforce is careful to draw a line around its IP. You get the content, but not any rights to the AI algorithms or any proprietary training data behind it. Also, Salesforce disclaims responsibility for the output. They do not warrant that AI-generated content will be correct, original, or safe to use.
For example, if an AI-produced paragraph happens to resemble copyrighted text or gives faulty advice, Salesforce’s contracts place that risk on the customer. There’s typically no indemnification from Salesforce for AI outputs.
In practice, this means you should treat AI outputs like you would an employee’s draft – it’s helpful and you own it, but it might need editing and fact-checking.
The onus is on your organization to vet AI-generated material for accuracy, compliance, and IP issues before using it publicly. Salesforce provides the tool, but it does not guarantee the result.
Liability & Disclaimers in Salesforce’s AI Terms
Salesforce’s contract will include the usual limits on liability and disclaimers, and these apply fully to AI features. In general, Salesforce offers its services “as is,” with no guarantee that outcomes (like AI predictions or content) will be error-free or fit for a particular purpose. The agreement will limit Salesforce’s liability for any damages from using the service, often to a capped amount and excluding indirect losses.
For AI specifically, Salesforce emphasizes that results may be imperfect. The terms or documentation might state that AI suggestions are for informational purposes and that you should not rely on them without human judgment. If an AI-generated recommendation causes a mistake or loss, Salesforce will point to these disclaimers. Simply put, the risk of using AI outputs is largely borne by the customer.
There may also be explicit use-case disclaimers. For instance, Salesforce’s policies might prohibit using Einstein GPT for decisions that have legal or health consequences or to generate prohibited content.
Part of this is due to ethical and legal caution, and part of it ensures that Salesforce is not tied to particularly risky uses of its AI. Violating those rules could also breach your contract.
In summary, Salesforce limits its accountability for AI. That’s why your company must implement its own checks and balances when deploying these features.
To protect your company further, see our related guide on compliance: Salesforce Audit Rights & Compliance Clauses: Protecting Yourself in the Contract.
Negotiation Tips & Buyer Protections
To better protect yourself, consider negotiating adjustments to Salesforce’s default AI terms:
- Data usage opt-out: Ensure the contract explicitly allows you to opt out of having your data used in Salesforce’s AI model training (and note if you are opting out). This adds contractual weight to Salesforce’s policy.
- Output ownership clarity: Add language confirming that your company owns any AI-generated output from your Salesforce use. This removes ambiguity and prevents Salesforce or others from later claiming rights in content your AI usage creates.
- Liability and warranty tweaks: While Salesforce won’t fully accept liability for AI, you can push for some safeguards. For example, ask for a statement that the AI features will materially perform as described, or for assistance (or credit) if an AI output causes a significant issue. You might not receive a broad indemnity, but even a small amount of shared responsibility or a commitment to help address problems is a win.
- Privacy & security assurances: Tie the AI features into your data protection addendum. Require that any data sent to AI models (including third-party LLMs) remain subject to the same privacy and security standards as the rest of your Salesforce data. If you need data residency (e.g., all data stays on EU or US servers), stipulate that for AI processing as well.
Large enterprise customers have the most leverage to negotiate these points, but any company can ask. At the very least, raising these issues gets Salesforce on record about their practices and might elicit extra guidance or support.
Strategic Considerations for Enterprises
Beyond contract terms, have a strategy for using Salesforce’s AI:
- Regulated industry caution: If you’re in a regulated field, double-check that using Salesforce AI complies with all rules. You might disable or limit certain AI functionality until you vet it for compliance (for example, not using generative AI on confidential patient data, or ensuring financial AI suggestions get a compliance review).
- AI governance: Establish internal policies for AI use. Define where AI can or cannot be applied in your workflows and require human oversight for critical tasks. Train your staff on the proper use of Einstein GPT, emphasizing it as a helpful assistant rather than an automatic decision-maker.
- Ethical use alignment: Align Salesforce AI use with your company’s ethical AI guidelines. Use features like the Einstein Trust Layer to enforce data privacy and encourage users to report any biased or odd outputs. Maintain transparency – if you use AI to generate content that interacts with customers or employees, consider disclosing this fact in line with emerging best practices.
- Stay updated: AI terms and regulations are evolving. Keep an eye on Salesforce’s updates to their AI policies (they may update terms as laws change). Likewise, stay informed about new laws (such as the EU’s AI Act or other national regulations) that could affect how you leverage AI. You may need to adjust your usage or negotiate new terms in the future as the legal landscape shifts.
Checklist for Reviewing Salesforce AI & IP Terms
- Data usage: Are you clear on whether or not Salesforce can use your data to train AI models? Is there an opt-out, and have you exercised it if desired?
- Output ownership: Does the agreement explicitly say you own the AI-generated outputs? If not, get clarification to avoid any IP confusion.
- Liability disclaimers: What does Salesforce disclaim regarding AI? Identify phrases like “as-is” and “no liability for AI outcomes” – these tell you that you’re accepting the risk.
- Indemnity: Know the gaps. Salesforce’s indemnification of customers likely doesn’t cover AI-generated content issues. Plan around that gap (through insurance, internal controls, or negotiation).
- Compliance: Are AI features covered under your Salesforce data protection and privacy terms? Make sure things like GDPR, HIPAA, or other requirements are addressed when using Einstein GPT.
Conclusion
Salesforce’s AI/IP terms are new territory, and they often favor Salesforce by default. That’s not surprising – no vendor wants open-ended liability for unpredictable AI behavior. However, enterprise customers have both the right and the need to push for more balanced terms.
By reviewing these clauses carefully and negotiating where necessary, you can secure vital protections: ensuring your data isn’t misused, affirming your ownership of what the AI produces for you, and carving out at least some vendor accountability.
Ultimately, the goal is to embrace Salesforce’s innovative AI with a clear understanding. These tools can drive efficiency and insights, but only if deployed within a framework that manages risk.
With the right contract clauses in place and a strong internal governance approach, you can confidently leverage Einstein GPT and other AI features – gaining the benefits of automation and intelligence while keeping control of your data and IP.
Read about our Salesforce Negotiation Services