Replies: 2 comments
Closing in favour of: modelcontextprotocol/modelcontextprotocol#2185
This comment was marked as spam.
Discussion Topic
Hey everyone! 😄
I am from the #financial-services-wg, and as you know, we deal with sensitive data and are heavily regulated. We want to give our users the ability to query their banking information from their favorite AI assistants (ChatGPT, Claude, Gemini) via an MCP server. However, a friction point is that, by default, most AI assistant providers train on consumer messages, which we would like to avoid where possible.

In an ideal world, where all parties are cooperative, I think something like this would mostly work:
- In `InitializeRequest`, add an attribute where the MCP client can express that it supports not training on chats when a sensitive MCP server requests it
- In the `_meta` property of `InitializeResult`, reserve a key such as `io.modelcontextprotocol/do-not-train` that sensitive MCP servers would set
- Clients honor the `io.modelcontextprotocol/do-not-train` mark: if at least one tool call were made to a sensitive MCP server during a chat, the rest of that particular chat would be excluded from training

Playing devil's advocate to the approach I described above:
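To make the idea concrete, here is a minimal sketch of what the handshake messages and the client-side exclusion check might look like. Everything here is hypothetical: the `doNotTrain` capability flag, the reserved `_meta` key, the client/server names, and the helper functions are assumptions for illustration, not part of the current MCP specification.

```python
# Proposed reserved key (hypothetical; not in the MCP spec today).
DO_NOT_TRAIN_KEY = "io.modelcontextprotocol/do-not-train"


def build_initialize_request() -> dict:
    """Client advertises (hypothetically) that it can honor do-not-train requests."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-06-18",
            "capabilities": {
                # Hypothetical capability: the client can exclude chats that
                # touched a sensitive server from its training pipeline.
                "experimental": {"doNotTrain": True}
            },
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }


def build_initialize_result() -> dict:
    """A sensitive server sets the reserved key in the _meta of its result."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "protocolVersion": "2025-06-18",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "example-bank-mcp", "version": "0.1.0"},
            "_meta": {DO_NOT_TRAIN_KEY: True},
        },
    }


def chat_excluded_from_training(tool_call_servers: list, server_results: dict) -> bool:
    """A chat is excluded if any tool call in it hit a server that set the mark.

    `tool_call_servers`: names of servers whose tools were called in this chat.
    `server_results`: server name -> the `result` payload from its initialize.
    """
    return any(
        server_results.get(s, {}).get("_meta", {}).get(DO_NOT_TRAIN_KEY, False)
        for s in tool_call_servers
    )
```

For example, if a chat called one tool on `example-bank-mcp`, the whole chat would be marked excluded, while a chat that never touched a sensitive server would not be.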
Curious to hear: