The Quiet Advent of Embedded AI in Enterprise Software

Have you noticed how the providers of your existing technology stack have quietly added an AI layer to their offerings?

These days, you open a PDF in Adobe Acrobat and a floating panel appears: "Ask questions, get summaries, and find information across your document." You begin drafting an email in Gmail and Gemini offers to finish the sentence for you. You click into a Word document and Copilot is sitting in the toolbar, ready to summarise, rewrite, or generate content from your files.

You did not install a new product. You did not tick a box. You certainly did not run a procurement process. And yet, there it is – an AI assistant with access to whatever you are working on. If your first thought was "when did I upgrade to this?", followed quickly by "what access does this tool have to my data?", you are asking the right questions and you are not alone.

This AI layer is now woven into the ‘traditional’ enterprise software suites that underpin your organisation's daily operations, such as Microsoft 365 with Copilot, Google Workspace with Gemini, and Adobe Creative Cloud and Acrobat with AI Assistant and Firefly.

These are services subject to enterprise licensing agreements, often procured through volume licensing or enterprise contracts, and deployed across entire workforces. In many cases, the new AI features introduced by the software suite providers are enabled by default or require minimal activation. This means your employees may already be using them, whether or not a deliberate procurement decision was ever made or any risk analysis of the features performed.

This shift has significant implications for data governance, confidentiality, intellectual property and regulatory compliance. In this article, we examine why this matters and how the major enterprise platform vendors have reflected AI capabilities in their terms of service, and we provide a practical framework for organisations to assess and manage the associated risks.

How Vendors Have Updated Their Terms

The introduction of AI features has been reflected in the major vendors’ contracts, though the approach varies considerably. Each vendor uses a layered documentation structure, meaning the terms governing AI features are rarely found in a single place. Understanding where to look is the first step.

What follows is a high-level overview of the standard terms across three major platforms. It is not a substitute for reading the applicable terms themselves: vendors update their documentation frequently, and enterprise customers may have negotiated bespoke provisions. The specific terms applicable to your organisation will depend on your licensing arrangement, edition and jurisdiction.

Where To Find The Terms

None of the three major vendors we looked at consolidates its AI-related terms in a single document.

  • For Microsoft 365 and Copilot, relevant terms are spread across the Microsoft Services Agreement (MSA), the Copilot Terms of Use, and the Code of Conduct, with enterprise customers potentially also subject to Product Terms, Online Services Terms, and a Data Protection Addendum. The Copilot Terms of Use state that they apply to the standalone Copilot apps and copilot.microsoft.com. The MSA separately lists "Microsoft 365 Copilot" among covered services, noting that separate commercial terms will apply once "a commercial domain is established." This layering means organisations using Copilot that wish to understand the contractual terms governing that use need to map carefully which documents apply to their specific deployment – no easy task!

  • For Google Workspace and Gemini, the documentation includes the Google Cloud Terms of Service, the Cloud Data Processing Addendum, Service Specific Terms, and the Generative AI Prohibited Use Policy. Gemini is now classified as a Core Service within certain Google Workspace editions, meaning it is covered by the Google Cloud Terms of Service rather than treated as a separate product with its own terms.

  • For Adobe Acrobat AI Assistant and Firefly, the terms sit across the Adobe General Terms of Use, the Generative AI Product Specific Terms, and the Acrobat Generative AI Usage Policy. Enterprise customers may also have a separate Data Processing Agreement.

Common Themes Across Vendor Terms

While the specific language differs between vendors, several common themes emerge across all three platforms. These are the areas that tend to attract the most attention from legal, privacy and IT teams.

  • Content licensing and ownership. Each vendor takes a position on who owns AI inputs and outputs and what licence the vendor has to use that content. The approaches vary considerably. Microsoft's MSA states it "doesn't claim ownership" of user content but takes a broad licence to "use Your Content... to improve Microsoft products and services" and states that "The Microsoft Privacy Statement explains how we use Your Content." That Privacy Statement, in turn, confirms that Microsoft may "use your data to develop and train our AI models" and that "in certain markets, we use conversation data to train the generative AI models in Copilot, unless you choose to opt-out of such training". This creates a chain of documents that ultimately permits training on user content. Google takes a stronger customer-side position: "Customer retains all Intellectual Property Rights in Customer Data," though if a customer provides "feedback or suggestions," Google may use that feedback "without restriction." Adobe's position is the most straightforward: inputs and outputs "are your Content".

  • Data processing and storage. All three vendors process user data in connection with AI features, but transparency and commitments differ. Microsoft's MSA states it will "process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses." Google commits to process Customer Data only "in accordance with the Cloud Data Processing Addendum" but may log prompts when potential abuse is detected. Adobe confirms that "the use of generative AI features in Acrobat involves cloud processing of documents" and imposes usage limits and throttling. Google's framework is the most prescriptive, while Microsoft and Adobe rely on broader terms that may require enterprise customers to negotiate bespoke data processing agreements.

  • Restrictions on users. A notable feature of all three sets of terms is the extent of the restrictions placed on users – these typically relate to conduct that violates the law or could damage the relevant AI system. Both Microsoft and Adobe prohibit users from using AI outputs to train other AI systems. Microsoft's Code of Conduct also prohibits activity that violates "the privacy or data protection rights of others" and attempts to "jailbreak" the AI system. Google's Prohibited Use Policy bans use involving "personal data or biometrics without legally-required consent" and "making automated decisions that have a material detrimental impact on individual rights without human supervision in high-risk domains." Adobe restricts users from submitting inputs that include third-party IP without sufficient rights and may "automatically block your Input" if it believes the input violates the terms. Across all three, the organisation bears the risk of its employees' non-compliance.

  • IP in outputs and indemnification. The three vendors take significantly different positions on IP in AI outputs and on who bears the risk of third-party infringement claims. Microsoft is the most cautious. Its MSA requires users to "make your own determination regarding the intellectual property rights you have in output content and its usability." The Copilot Terms add that Microsoft "cannot promise that Copilot's responses won't infringe someone else's rights" and, critically, users indemnify Microsoft, not the other way around! Google takes a more balanced approach, committing to "defend Customer and its Affiliates" against third-party IP infringement claims arising from the Services, subject to overall liability limits. Adobe occupies a middle ground: it will "defend any third-party claim" alleging that an eligible Firefly output infringes copyright, trademark, publicity or privacy rights, but caps liability per output or claim and acknowledges that outputs "may not be protectable by Intellectual Property Rights."

  • High-risk use restrictions. All three vendors impose restrictions on using AI features in high-risk or consequential contexts. Microsoft's prohibitions are the most detailed: users may not use AI services "to make decisions or take actions without appropriate human oversight that may have a consequential impact on any person's legal position, financial position, life opportunities, employment opportunities or human rights." This language is closely aligned with the EU AI Act. Google prohibits "making automated decisions that have a material detrimental impact on individual rights without human supervision in high-risk domains, for example, in employment, healthcare, finance, legal." The Google Cloud Terms of Service separately prohibit "High Risk Activities" where failure could lead to death, injury, or property damage. Adobe's restrictions are narrower, focusing on input and output compliance rather than explicit prohibitions on consequential decision-making. These restrictions have direct operational implications for legal, HR, finance and compliance teams using AI features within these platforms.

What To Look For In Vendor Terms

In light of the above, when reviewing vendor documentation for AI-enabled services, organisations should, at a minimum, systematically examine the following clause types to understand whether the AI system is fit for purpose:

  • Training and service improvement. Does the vendor use customer content, prompts, or outputs to train AI models or improve services? Look for distinctions between "training" (which may permanently incorporate learnings) and "improvement" (which may include temporary processing).

  • Data retention and logging. How long are prompts, outputs, and interaction logs retained? Where are they stored? Can retention periods be configured?

  • Sub-processors and permitted disclosures. Which third parties process data in connection with AI features? Are AI-specific sub-processors listed separately?

  • Security commitments and breach notification. Do existing security certifications (SOC 2, ISO 27001) extend to AI features? Are breach notification timeframes clearly defined?

  • Intellectual property ownership. Who owns AI-generated outputs? Does the vendor claim any licence to outputs? Are there representations regarding non-infringement? As noted above, vendor positions range from Adobe's clear "your Content" attribution to Microsoft's broad operational licence.

  • IP indemnification. Does the vendor provide any IP infringement indemnity for AI outputs? As outlined above, positions vary from Google's broad indemnity to Microsoft placing the indemnification obligation on the user.

  • Disclaimers and limitations. Vendors universally disclaim accuracy of AI outputs. Microsoft's Copilot Terms state the service is "for entertainment purposes only", an extraordinarily broad disclaimer for a tool marketed as an enterprise productivity solution. Understand how such disclaimers interact with your own obligations to clients, regulators or employees.

  • High-risk use restrictions. As discussed above, Microsoft and Google impose specific restrictions on AI use in consequential decision-making with direct implications for legal, HR and finance teams.

In addition, we would recommend that every business using the AI features of its existing technology stack, or acquiring new AI tools, develops and uses AI Impact Assessment tools that allow it to identify and manage the risks specific to its business, its regulatory landscape and the AI tool in question. These risks could include the need for high-quality, unbiased outputs, control over model drift, or the ability to explain AI outputs, to name a few examples.

The era of unprompted AI is here. The integration of AI into enterprise software represents a fundamental shift in how organisations must approach data governance, vendor management and employee training. The contractual terms governing these features are fragmented, vary significantly between vendors, and place substantial obligations on users.

These tools will continue to evolve, and vendor terms will change with them. Organisations that establish robust governance frameworks now will be better positioned to harness the benefits of embedded AI whilst managing the legal, regulatory and reputational risks that accompany it.

If your organisation is exploring how to adopt AI responsibly while maintaining strong governance oversight, Vladimir, Grace and the team at Law Squared would be pleased to discuss how we can support your journey. Reach out via [email protected]
