
cobalt/metrics: Integrate granular memory tracking into memory instrumentation service#10019

Open
Awallky wants to merge 5 commits into youtube:main from Awallky:feature/granular-memory-cl3-wiring

Conversation

@Awallky
Contributor

@Awallky Awallky commented Apr 14, 2026

This commit wires the granular memory tracking delegate into the memory
instrumentation service and the UMA pipeline, ensuring that detailed
memory dumps correctly populate the custom categories.

  • Register the delegate in cobalt_browser_main_parts.cc.
  • Wire the delegate into MemoryInstrumentation.
  • Fix move semantics for detailed_stats_kb in CreatePublicOSDump.

Bug: 494004530

@Awallky Awallky requested a review from a team as a code owner April 14, 2026 04:57
@Awallky Awallky requested a review from johnxwork April 14, 2026 04:57
@Awallky Awallky changed the title Feature/granular memory cl3 wiring cobalt/metrics: Feature/granular memory cl3 wiring Apr 14, 2026
@github-actions
Contributor

🤖 Gemini Suggested Commit Message


cobalt: Add granular memory metrics

Introduce a DetailedMetricsDelegate interface within memory_instrumentation
to allow project-specific categorization of memory regions. This enables
customizable memory usage reporting by parsing /proc/self/smaps lines.

Move OS memory dump operations to a background thread to prevent UI jank
during memory collection. Modernize /proc file parsing from sscanf to
C++ string utilities for improved robustness and maintainability.
Extend the Mojo interface for detailed memory statistics, conditional
on Cobalt-specific build flags. This provides specialized memory
profiling capabilities tailored for Cobalt's needs.

Bug: 494004530

💡 Pro Tips for a Better Commit Message:

  1. Influence the Result: Want to change the output? You can write custom prompts or instructions directly in the Pull Request description. The model uses that text to generate the message.
  2. Re-run the Generator: Post a comment with: /generate-commit-message

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors memory instrumentation parsing logic to use std::string_view and base::SplitStringPiece, improving efficiency by reducing heap allocations. It also introduces a DetailedMetricsDelegate for project-specific metrics collection, with specific implementations for Cobalt. Review feedback identifies fragile path parsing logic in proc_maps_linux.cc and os_metrics_linux.cc that could be improved by using token offsets. Additionally, the buffer size for reading smaps lines should be increased to 4096 to handle maximum Linux path lengths, and the Cobalt implementation of PerformOSMemoryDump needs to accurately track and report the overall success status of the dump operation.

Comment on lines +214 to +222
    size_t dev_pos = lines[i].find(tokens[3]);
    size_t inode_pos = lines[i].find(tokens[4], dev_pos + tokens[3].size());
    if (inode_pos != std::string_view::npos) {
      size_t path_pos =
          lines[i].find_first_not_of(' ', inode_pos + tokens[4].size());
      if (path_pos != std::string_view::npos) {
        region.path.assign(lines[i].substr(path_pos));
      }
    }

medium

The logic for finding the path position using find() on the device and inode tokens is fragile. If the device or inode strings appear earlier in the line (e.g., in the address or offset fields), find() might return the wrong position. Since tokens[4] is a std::string_view into lines[i], you can calculate the exact position of the inode and then find the path starting after it.

Suggested change

    size_t dev_pos = lines[i].find(tokens[3]);
    size_t inode_pos = lines[i].find(tokens[4], dev_pos + tokens[3].size());
    if (inode_pos != std::string_view::npos) {
      size_t path_pos =
          lines[i].find_first_not_of(' ', inode_pos + tokens[4].size());
      if (path_pos != std::string_view::npos) {
        region.path.assign(lines[i].substr(path_pos));
      }
    }

    if (tokens.size() >= 6) {
      // The path starts after the inode. Since tokens are views into the
      // original line, we can calculate the path position directly.
      size_t inode_end_offset =
          (tokens[4].data() + tokens[4].size()) - lines[i].data();
      size_t path_offset = lines[i].find_first_not_of(' ', inode_end_offset);
      if (path_offset != std::string_view::npos) {
        region.path.assign(lines[i].substr(path_offset));
      }
    }

Comment on lines +215 to +230
    if (tokens.size() >= 6) {
      size_t dev_pos = header_line.find(tokens[3]);
      if (dev_pos != std::string_view::npos) {
        size_t inode_pos =
            header_line.find(tokens[4], dev_pos + tokens[3].size());
        if (inode_pos != std::string_view::npos) {
          size_t filename_pos =
              header_line.find_first_not_of(' ', inode_pos + tokens[4].size());
          if (filename_pos != std::string_view::npos) {
            region->mapped_file = std::string(header_line.substr(filename_pos));
            base::TrimWhitespaceASCII(region->mapped_file, base::TRIM_ALL,
                                      &region->mapped_file);
          }
        }
      }
    }

medium

The logic for finding the path position using find() on the device and inode tokens is fragile. If the device or inode strings appear earlier in the line (e.g., in the address or offset fields), find() might return the wrong position. Since tokens[4] is a std::string_view into header_line, you can calculate the exact position of the inode and then find the path starting after it.

    if (tokens.size() >= 6) {
      // The path starts after the inode. Since tokens are views into the
      // original line, we can calculate the path position directly.
      size_t inode_end_offset = (tokens[4].data() + tokens[4].size()) - header_line.data();
      size_t filename_offset = header_line.find_first_not_of(' ', inode_end_offset);
      if (filename_offset != std::string_view::npos) {
        region->mapped_file = std::string(header_line.substr(filename_offset));
        base::TrimWhitespaceASCII(region->mapped_file, base::TRIM_ALL,
                                  &region->mapped_file);
      }
    }


    if (!res)
      return std::vector<VmRegionPtr>();
    char line[2048];

medium

The buffer size of 2048 is too small for Linux paths, which can be up to 4096 characters (PATH_MAX). If a line in /proc/self/smaps exceeds this buffer, fgets will return a partial line, which will likely break the parsing logic in the delegate. It is recommended to use at least 4096 bytes.

Suggested change

    char line[2048];
    char line[4096];

    base::BindOnce(
        [](RequestOSMemoryDumpCallback callback,
           base::flat_map<base::ProcessId, mojom::RawOSMemDumpPtr> results) {
          std::move(callback).Run(true, std::move(results));

medium

The PerformOSMemoryDump implementation for Cobalt always reports success (true) to the callback, even if OSMetrics::FillOSMemoryDump fails for some or all processes in the loop. It should track the overall success state and report it accurately, similar to the non-Cobalt implementation.

@Awallky Awallky changed the title cobalt/metrics: Feature/granular memory cl3 wiring cobalt/metrics: Integrate granular memory tracking into memory instrumentation service Apr 14, 2026