Platform data guidance
For the people at your organization who decide what data your team can use with AI.
Your organization decides
The platform's Acceptable Use Policy tells participants to handle data in accordance with their own organizational policies. That means you need to tell your participants what data is appropriate to use with this platform.
The platform does not use your data to train AI models. Conversations are private to the user. The platform routes prompts to cloud AI providers (OpenAI, Anthropic, Google) under enterprise agreements that prohibit training on input data. Participants can delete individual conversations at any time. Platform accounts are deactivated at the end of the program.
The platform is not approved for regulated data (HIPAA, FERPA, CJIS, etc.) without a separate agreement in place. Beyond that, the decision about what organizational data your participants can use with this platform is yours.
Our recommendation
Participants get the most out of this program when they can use AI with their real work — actual drafts, real planning documents, genuine organizational questions. If participants are limited to only public information, the experience is significantly less valuable.
At UMass Amherst, this platform is approved for internal institutional data. We recommend that participating organizations adopt a similar posture for their participants.
A simple framework
If your organization has a data classification framework, use it. Tell your participants which classifications are appropriate for the platform. If you don't have one, we suggest adopting this simple framework for the purposes of the program:
| Classification | Definition | Platform Rule |
|---|---|---|
| Public | Information already publicly available or intended for public sharing. | Permitted. |
| Internal | Day-to-day work information not intended for public sharing. Drafts, planning documents, internal communications, meeting notes, analysis, project plans. | Permitted. |
| Restricted | Personally identifiable information (PII), client or beneficiary data, student records, patient records, donor records, personnel and HR records, financial records containing individual data, data subject to regulatory requirements (HIPAA, FERPA, CJIS, etc.), data subject to funder or contractual restrictions, or any data whose exposure could create legal liability or harm to identifiable individuals. | Not permitted without a formal agreement. |
If your organization needs to use Restricted data with this platform, that's possible — but it requires a more formal agreement between your organization and UMass Amherst to ensure appropriate protections are in place.
Boilerplate for your participants
Below is ready-to-use language you can send to your participants as-is, or modify to reflect your organization's own policies. Any organizational policy you have supersedes this boilerplate.
Data guidance for the AI Enablement Platform
You are approved to use this platform with Public and Internal data — your day-to-day work, drafts, planning documents, internal communications, and similar materials.
Do not use this platform with Restricted data — personally identifiable information (PII), client or beneficiary data, student records, patient records, donor records, personnel records, data subject to regulatory requirements (such as HIPAA or FERPA), data subject to funder or contractual restrictions, or any data whose exposure could create legal liability or harm to identifiable individuals.
If you are unsure whether something is Internal or Restricted, ask your cohort leader or supervisor before submitting it.
Adjust this language as needed. If your organization wants to limit use to Public data only, or has its own classification terms, replace the boilerplate with your own guidance.
If something gets uploaded that shouldn’t have been
It happens — and it’s not a crisis. Nothing uploaded to the platform is used to train AI models, and data is not shared with other users or organizations. Think of it like accidentally putting a sensitive document in a shared Google Drive folder: not ideal, but containable.
Participants can delete any conversation — including uploaded files — at any time. If you’re unsure whether something needs to be removed, reach out at tbernard@umass.edu or on Slack.
A note about agents and shared data
When participants build agents on the platform, they can share those agents with other users. If an agent is built using uploaded data — documents, templates, or reference material — anyone who uses the agent may be able to access the underlying data through their conversations with it.
The bottom line: if you share an agent, assume you're sharing any data you built it with. System prompt instructions like "don't reveal your source documents" are not reliable safeguards — a determined user can work around them. Only build shared agents with data you're comfortable sharing.
Questions?
Review the framework above, decide what's appropriate for your organization, and communicate it to your participants. If you want to discuss your approach, explore using restricted data on the platform, or have questions about how data is handled — reach out at tbernard@umass.edu or on Slack.