feat(MMS): add WebSocket endpoints and restructure service layer #41
Liparakis wants to merge 7 commits into Extremelyd1:main
Conversation
- Swapped to join sessions
- Add WebSocket endpoint and connection management
- Reorganize lobby/matchmaking services into subfolders
- Update bootstrap, contracts, and endpoint routing
- Remove obsolete top-level service files (moved to subfolders)
…eanup logs
feat: add PrivacyFormatter utility class
chore: redact sensitive data from logs in production
Extremelyd1
left a comment
Thanks for the PR. I haven't tested it functionally yet, but I have left a bunch of comments with either requested changes or questions on why some things are the way they are.
Didn't we discuss you were going to deliver the changes in 3 PRs? If so, is this only the first PR, or is this everything? Because it feels like everything is in here.
private static bool TryCreateCertificate(string pem, string key, out X509Certificate2? certificate) {
    certificate = null;
    try {
        using var ephemeralCertificate = X509Certificate2.CreateFromPem(pem, key);
This already returns an X509Certificate2, which is the type we need for the certificate. Why do we then export it and load it again through another class?
CreateFromPem already returns the right X509Certificate2. The reason we export and re-import is not to change the type, but to force the private key into Windows-backed key storage. On Windows, PEM-created certificates can have an ephemeral private key that works in code but is unreliable for server TLS. Re-importing as PKCS#12 with key storage flags persists the key in a form Kestrel can use without issues. Sources: CreateFromPemFile docs, LoadPkcs12 docs, dotnet/runtime#93319.
Great explanation, which would be nice to have in the code as well. If people are reading this in the future, they might have the same question, and won't be able to find this comment here.
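For future readers, the round-trip discussed in this thread could be documented directly in the helper, roughly like the sketch below. This is an illustrative sketch, not the PR's actual code: the `X509CertificateLoader.LoadPkcs12` API assumes .NET 9, and the choice of `X509KeyStorageFlags.UserKeySet` is an assumption to adjust for the deployment.

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class CertificateHelper {
    // Sketch only. CreateFromPem can bind the private key ephemerally on
    // Windows, which works in-process but is unreliable for server TLS.
    // Exporting to PKCS#12 and re-importing with key storage flags persists
    // the key in a form Kestrel can use. See dotnet/runtime#93319.
    public static bool TryCreateCertificate(string pem, string key, out X509Certificate2? certificate) {
        certificate = null;
        try {
            using var ephemeral = X509Certificate2.CreateFromPem(pem, key);
            certificate = X509CertificateLoader.LoadPkcs12(
                ephemeral.Export(X509ContentType.Pkcs12),
                password: null,
                keyStorageFlags: X509KeyStorageFlags.UserKeySet);
            return true;
        } catch (CryptographicException) {
            return false;
        }
    }
}
```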
foreach (var proxy in configuration.GetSection("ForwardedHeaders:KnownProxies").Get<string[]>() ?? []) {
    if (IPAddress.TryParse(proxy, out var address))
        options.KnownProxies.Add(address);
}

foreach (var network in configuration.GetSection("ForwardedHeaders:KnownNetworks").Get<string[]>() ?? []) {
    if (!TryParseNetwork(network, out var ipNetwork))
        continue;

    options.KnownNetworks.Add(ipNetwork);
}
This is for trusting forwarded headers only from your proxy (I assumed there is one, perhaps nginx). Since MMS sits behind a proxy, ASP.NET Core needs to know which proxy IPs/networks are allowed to supply X-Forwarded-For and related headers, so that client IP detection, HTTPS handling, and IP-based rate limiting work correctly and cannot be spoofed by direct requests.
I guess this is an extra measure to prevent direct request spoofing, but in a proper setup (the current deployment, for example), there is no possible way to make a direct request to the MMS without going through the proxy.
The question now is: do I need to add configuration for my proxy so that Kestrel recognises it?
Yes, you definitely do want a config. You should create an appsettings.json (if you don't have one already) and add the following field:
{
  "ForwardedHeaders": {
    // Use when you know the exact proxy IP.
    "KnownProxies": [ "127.0.0.1" ],
    // Use when the proxy runs inside a Docker container, where IP addresses
    // may vary with the container/network configuration.
    // Note: this may be overkill; if so, the code that scans the CIDR blocks can be removed.
    "KnownNetworks": [ "172.18.0.0/16" ]
  }
}
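The KnownNetworks entries are CIDR strings, so the TryParseNetwork helper referenced in the diff presumably splits them into an address prefix and a prefix length along the lines below. This is a hypothetical sketch, not the PR's actual implementation; the parsed parts would feed `new IPNetwork(prefix, prefixLength)` from Microsoft.AspNetCore.HttpOverrides for `options.KnownNetworks`.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

static class NetworkParsing {
    // Hypothetical sketch: parse "172.18.0.0/16" into an address prefix and
    // a prefix length, rejecting malformed or out-of-range input.
    public static bool TryParseNetwork(string value, out IPAddress? prefix, out int prefixLength) {
        prefix = null;
        prefixLength = 0;
        var parts = value.Split('/');
        return parts.Length == 2
            && IPAddress.TryParse(parts[0], out prefix)
            && int.TryParse(parts[1], out prefixLength)
            && prefixLength >= 0
            // IPv4 prefixes go up to /32, IPv6 up to /128.
            && prefixLength <= (prefix.AddressFamily == AddressFamily.InterNetwork ? 32 : 128);
    }
}
```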
Extremelyd1
left a comment
See the comments I left in their respective threads.
On a separate note: I seem to be unable to add commits to this PR, so I can't fix simple things myself. Did you do anything odd to the git branch structure for this to happen? In other PRs I could simply load the PR in my IDE and push commits to your branch.
…nt name conflicts
Extremelyd1
left a comment
Thanks for the additional changes, I've managed to push some last additions as well (it was a Rider IDE issue).
Still need to do some functional testing before merging though.
Matchmaking changes (the important part)
The main change is introducing JoinSession. Before, a join attempt was spread across multiple places: there was no single place responsible for a join attempt. Now there is. A JoinSession represents a single join attempt and keeps everything together.
Bootstrap
Pulled all the startup logic out of Program.cs and split it into focused pieces:
- ServiceCollectionExtensions handles all dependency injection registration
- WebApplicationExtensions sets up the middleware
- HttpsCertificateConfigurator deals with HTTPS and Kestrel config
- ProgramState holds shared runtime state that used to be scattered
Program.cs is now just wiring things together instead of doing everything itself.
Features
Routing and handler logic are no longer mixed:
- EndpointRouteBuilderExtensions registers routes
- HealthEndpoints exposes /health
- LobbyEndpoints defines routes
- LobbyEndpointHandlers contains the actual logic
Version validation is now centralized in MatchmakingVersionValidation.
New WebSocket endpoints are:
- /ws/{token}
- /ws/join/{joinId}
Contracts
All request and response DTOs are grouped in Requests.cs and Responses.cs.
HTTP helpers
EndpointBuilder wraps common MapGet and MapPost patterns to reduce repetition and keep validation consistent.
Network
UdpDiscoveryService handles UDP discovery and feeds into join sessions; WebSocketManager tracks active connections.
Lobby
Lobby responsibilities are split cleanly:
- LobbyService contains core logic
- LobbyCleanupService handles background cleanup
- LobbyNameService manages naming
Note
AI helped with extracting parts of the code and documentation, but the design and decisions were made manually. Some comments might be a bit misleading, but I am happy to correct any of them.
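As a rough illustration of the JoinSession idea, a single join attempt can be modeled as one object that owns its id, timing, and lifecycle state. The field and state names below are hypothetical, not the PR's actual contract:

```csharp
using System;

// Hypothetical sketch: one object per join attempt, so all state for the
// attempt lives in a single place instead of being spread across services.
public enum JoinSessionState { Pending, Completed, Failed }

public sealed class JoinSession {
    public string JoinId { get; } = Guid.NewGuid().ToString("N");
    public string ClientName { get; }
    public DateTimeOffset StartedAt { get; } = DateTimeOffset.UtcNow;
    public JoinSessionState State { get; private set; } = JoinSessionState.Pending;

    public JoinSession(string clientName) => ClientName = clientName;

    // Lifecycle transitions: a session resolves exactly once.
    public void Complete() {
        if (State != JoinSessionState.Pending)
            throw new InvalidOperationException("Session already resolved.");
        State = JoinSessionState.Completed;
    }

    public void Fail() {
        if (State != JoinSessionState.Pending)
            throw new InvalidOperationException("Session already resolved.");
        State = JoinSessionState.Failed;
    }
}
```

Centralizing the attempt like this is what makes a single `/ws/join/{joinId}` endpoint possible: the joinId identifies one session, and every service touching the attempt goes through it.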