Oriol edited this page Nov 7, 2025 · 3 revisions

The EAR reporting system is designed to accommodate any requirement for storing the data collected by its components. To this end, EAR includes several report plug-ins that are used to send data to various services.

Overview

The reporting system is implemented by an internal API used by EAR components to report data at specific events/stages, and the report plug-in used by each one can be set in the ear.conf file. The Node Manager, the Database Manager, the Job Manager and the Global Manager are the configurable components. The EAR Job Manager differs from the other components in that it lets the user choose additional plug-ins at job submission time. Check out how in the Environment variables section.
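As an illustration, the per-component plug-in selection in ear.conf could look like the sketch below. EARDReportPlugins is the Node Manager field shown later on this page; the other key names are assumptions following the same naming pattern, so check your ear.conf template for the exact keys.

```ini
# ear.conf sketch: one ReportPlugins line per configurable component.
# Only EARDReportPlugins is confirmed on this page; the other key
# names are assumptions.
EARDReportPlugins=eardbd.so
EARDBDReportPlugins=mysql.so
EARGMReportPlugins=mysql.so
```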

Plug-ins are compiled as shared objects and are located at $EAR_INSTALL_PATH/lib/plugins/report. Below is a list of the report plug-ins distributed with the official EAR software.

| Report plug-in name | Description |
|---|---|
| eard.so | Reports data to the EAR Node Manager. Then it is up to the daemon to report the data as configured. This plug-in was mainly designed to be used by the EAR Job Manager. |
| eardbd.so | Reports data to the EAR Database Manager. Then it is up to this service to report the data as configured. This plug-in was mainly designed to be used by the EAR Node Manager. |
| mysql.so | Reports data to a MySQL database using the official C bindings. This plug-in was first designed to be used by the EAR Database Manager. |
| psql.so | Reports data to a PostgreSQL database using the official C bindings. This plug-in was first designed to be used by the EAR Database Manager. |
| prometheus.so | Exposes system monitoring data in OpenMetrics format, which is fully compatible with Prometheus. |
| examon.so | Sends application accounting and system metrics to EXAMON. |
| dcdb.so | Sends application accounting and system metrics to DCDB. |
| sysfs.so | Exposes system monitoring data through the file system. |
| csv_ts.so | Reports loop and application data to a CSV file. It is the report plug-in loaded when a user sets the --ear-user-db flag at submission time. |
| dcgmi.so | Reports loop and application data to a CSV file. It differs from the csv_ts.so plugin in that it also reports NVIDIA DCGM metrics collected by the EAR Library. |

Prometheus report plugin

Requirements

The Prometheus plugin has only one dependency: microhttpd. To be able to compile it, make sure the library is in your LD_LIBRARY_PATH.

Installation

Currently, to compile and install the prometheus plugin one has to run the following commands.

make FEAT_DB_PROMETHEUS=1
make FEAT_DB_PROMETHEUS=1 install

With that, the plugin will be correctly placed in the usual folder.

Configuration

Due to the way Prometheus works, this plugin is designed to be used by the EAR Daemons, although the EARDBD should not have issues running it either.

To have it running in the daemons, simply add it to the corresponding line in the configuration file.

EARDReportPlugins=eardbd.so:prometheus.so

This exposes the metrics on each node through a small HTTP server. You can access them normally through a browser at port 9011 (fixed for now).

In Prometheus, simply add the nodes you want to scrape to prometheus.yml using port 9011. Make sure the scrape interval is equal to or shorter than the insertion period (NodeDaemonPowermonFreq in ear.conf), since metrics only stay on the page for that duration.
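A minimal prometheus.yml scrape configuration for EAR nodes could look like the sketch below; the node names and the 30-second interval are placeholders, and the interval must not exceed your NodeDaemonPowermonFreq setting.

```yaml
# prometheus.yml sketch: scrape EAR daemons on the fixed port 9011.
scrape_configs:
  - job_name: ear
    scrape_interval: 30s   # keep <= NodeDaemonPowermonFreq
    static_configs:
      - targets: ['node001:9011', 'node002:9011']
```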

Examon

ExaMon (Exascale Monitoring) is a lightweight monitoring framework for supporting accurate monitoring of power/energy/thermal and architectural parameters in distributed and large-scale high-performance computing installations.

Compilation and installation

To compile the EXAMON plugin you need a functioning EXAMON installation.

Modify the main Makefile and set FEAT_EXAMON=1. In src/report/Makefile, update EXAMON_BASE with the path to the current EXAMON installation. Finally, set an examon.conf file somewhere on your installation, and modify src/report/examon.c (line 83, variable `char* conffile = "/hpc/opt/ear/etc/ear/examon.conf"`) to point to the new examon.conf file.

The file should look like this:

[MQTT]
brokerHost = hostip
brokerPort = 1883
topic = org/bsc
qos = 0
data_topic_string = plugin/ear/chnl/data
cmd_topic_string = plugin/ear/chnl/cmd

Where hostip is the actual IP of the node.

Once that is set up, you can compile EAR normally and the plugin will be installed in the lib/plugins/report folder inside EAR's installation. To activate it, set it as one of the values in the EARDReportPlugins of ear.conf and restart the EARD.

The plugin is designed to be used locally in each node (EARD level) together with EXAMON's data broker.

DCDB

The Data Center Data Base (DCDB) is a modular, continuous, and holistic monitoring framework targeted at HPC environments.

This plugin implements the functions to report periodic metrics, report loops, and report events.

When the DCDB plugin is loaded, the collected EAR data for each report type are stored in shared memory, which is accessed by the DCDB EAR sensor (a report plugin implemented on the DCDB side) to collect the data and push it into the database using MQTT messages.

Compilation and configuration

This plugin is automatically installed with the default EAR installation. To activate it, set it as one of the values in the EARDReportPlugins of ear.conf and restart the EARD.

The plugin is designed to be used locally in each node (EARD level) with the DCDB collect agent.

Sysfs Report Plugin

This report plugin writes EAR-collected data into files. A single file is generated per metric, per job ID and step ID, per node, per island, per cluster. Only the most recently collected metrics are stored in the files, meaning that every time the report runs it saves the currently collected values by overwriting the previous data.

Namespace Format

The following schema is used to create the metric files:

/root_directory/cluster/island/nodename/avg/metricFile
/root_directory/cluster/island/nodename/current/metricFile
/root_directory/cluster/island/jobs/jobID/stepID/nodename/avg/metricFile
/root_directory/cluster/island/jobs/jobID/stepID/nodename/current/metricFile 

The root_directory is the default path under which all metric files are generated.

cluster, island and nodename are replaced by the cluster name, the island number, and the node name, respectively.

metricFile is replaced by the name of the metric collected by EAR.
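The schema above can be sketched as a small path builder. This helper (name and signature are hypothetical, purely illustrative of the layout) assembles the node-level and per-job variants:

```python
import posixpath

def metric_path(root, cluster, island, nodename, metric,
                kind="current", jobid=None, stepid=None):
    """Build a sysfs-plugin metric file path following the wiki schema.

    kind selects the "current" or "avg" subtree; passing jobid/stepid
    switches to the per-job layout under .../jobs/.
    """
    if jobid is not None:
        parts = [root, cluster, island, "jobs", str(jobid), str(stepid),
                 nodename, kind, metric]
    else:
        parts = [root, cluster, island, nodename, kind, metric]
    return posixpath.join(*parts)

# Node-level current value vs. per-job average value:
print(metric_path("/var/ear", "cluster1", "island0", "node001",
                  "dc_power_watt"))
print(metric_path("/var/ear", "cluster1", "island0", "node001",
                  "app_sig_mem_gbs", kind="avg", jobid=42, stepid=0))
```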

Metric File Naming Format

The naming format used to create the metric files follows the standard sysfs interface format. The commonly used schema for file naming is <type>_<component>_<metric-name>_<unit>.

Numbering is appended to some metric file names when the component has more than one instance, such as FLOPS counters or GPU data. Examples of generated metric files:

  • dc_power_watt
  • app_sig_pck_power_watt
  • app_sig_mem_gbs
  • app_sig_flops_6
  • avg_imc_freq_KHz
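The naming schema can be illustrated with a small formatter (a hypothetical helper, not part of EAR) that joins the optional type, component, metric, unit, and instance-number pieces:

```python
def metric_filename(component, metric, unit=None, type_=None, index=None):
    """Compose <type>_<component>_<metric-name>_<unit> file names,
    appending an index for multi-instance components (FLOPS, GPUs).
    Missing pieces are simply omitted from the name."""
    parts = [p for p in (type_, component, metric, unit) if p]
    if index is not None:
        parts.append(str(index))
    return "_".join(parts)

# Reproduces the example names listed above:
print(metric_filename("dc", "power", "watt"))              # dc_power_watt
print(metric_filename("imc", "freq", "KHz", type_="avg"))  # avg_imc_freq_KHz
print(metric_filename(None, "flops", type_="app_sig", index=6))  # app_sig_flops_6
```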

Metrics reported

The following are the reported values for each type of metric recorded by EAR:

  • report_periodic_metrics
    • Average values
      • The frequency and temperature values are computed by summing the values of all periods since the report plug-in was loaded and dividing by the total number of periods.
      • The energy value is the accumulated value of all periods since the report plug-in was loaded.
      • The path to these metric files is built as: /root_directory/cluster/island/nodename/avg/metricFile
    • Current values
      • Represent the currently collected EAR metrics per period.
      • The path to these metric files is built as: /root_directory/cluster/island/nodename/current/metricFile
  • report_loops
    • Current values
      • Represent the currently collected EAR metrics per loop.
      • The path to these metric files is built as: /root_directory/cluster/island/jobs/jobID/stepID/nodename/current/metricFile
  • report_applications
    • Current values
      • Represent the currently collected EAR metrics per application.
      • The path to these metric files is built as: /root_directory/cluster/island/jobs/jobID/stepID/nodename/avg/metricFile
  • report_events
    • Current values
      • Represent the currently collected EAR metrics per event.
      • The path to these metric files is built as: /root_directory/cluster/island/jobs/jobID/stepID/nodename/current/metricFile

Note: If the cluster contains GPUs, both report_loops and report_applications generate additional files per GPU, which contain all the data collected for each GPU, at the paths below:

  • /root_directory/cluster/island/jobs/jobID/stepID/nodename/current/GPU-ID/metricFile
  • /root_directory/cluster/island/jobs/jobID/stepID/nodename/avg/GPU-ID/metricFile

CSV

This plug-in reports both application and loop signatures in CSV format. Note that the latter can only be reported if the application is running with the EAR Job Manager. Fields are separated by semicolons (i.e., ;). This plug-in is the one loaded by default when a user sets the --ear-user-db submission flag.

By default, output files are named ear_app_log.<nodename>.time.csv and ear_app_log.<nodename>.time.loops.csv for applications and loops, respectively. This behaviour can be changed by exporting the EAR_USER_DB_PATHNAME environment variable, in which case output files are <env var value>.<nodename>.time.csv for application signatures and <env var value>.<nodename>.time.loops.csv for loop signatures.

When setting --ear-user-db=something flag at submission time, the batch scheduler plug-in sets this environment variable for you.
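Since fields are semicolon-separated, the files can be read with any standard CSV reader configured with that delimiter. A minimal sketch with Python's csv module, using an invented two-column excerpt (the real files carry the full field set described in the table below):

```python
import csv
import io

# Hypothetical excerpt of an application-signature file; values invented.
sample = "JOBID;STEPID;NODENAME;DC_NODE_POWER_W\n1234;0;node001;312.5\n"

# Fields are separated by semicolons, so pass delimiter=";".
rows = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
print(rows[0]["NODENAME"], rows[0]["DC_NODE_POWER_W"])  # node001 312.5
```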

The following table describes application signature file fields:

| Field | Description | Format |
|---|---|---|
| JOBID | The Job ID the following signature belongs to. | integer |
| STEPID | The Step ID the following signature belongs to. | integer |
| APPID | The Application ID the following signature belongs to. | integer |
| USERID | The user owning the application. | string |
| GROUPID | The main group the user owning the application belongs to. | string |
| ACCOUNTID | The account of the user which ran the application. Only supported on SLURM systems. | string |
| JOBNAME | The name of the application being run. On SLURM systems, this value honours the SLURM_JOB_NAME environment variable. Otherwise, it is the executable program name. | string |
| ENERGY_TAG | The energy tag requested with the application (see ear.conf). | string |
| JOB_START_TIME | The timestamp of the beginning of the application, expressed in seconds since EPOCH. | integer |
| JOB_END_TIME | The timestamp of the application ending, expressed in seconds since EPOCH. | integer |
| JOB_EARL_START_TIME | The timestamp of the beginning of the application monitored by the EARL, expressed in seconds since EPOCH. | integer |
| JOB_EARL_END_TIME | The timestamp of the application ending reported by the EARL, expressed in seconds since EPOCH. | integer |
| START_DATE | The date of the beginning of the application, expressed as %+4Y-%m-%d %X. | string |
| END_DATE | The date of the application ending, expressed as %+4Y-%m-%d %X. | string |
| POLICY | The Job Manager optimization policy executed (if applicable). | string |
| POLICY_TH | The power policy threshold used (if applicable). | real |
| JOB_NPROCS | The number of processes involved in the application. | integer |
| JOB_TYPE | The job type. | integer |
| JOB_DEF_FREQ | The default frequency at which the job started. | integer |
| EARL_ENABLED | Indicates whether the job-step ran with the EARL enabled. | integer |
| EAR_LEARNING | Whether the application was run in the learning phase. | |
| NODENAME | The short node name the following signature belongs to. | string |
| AVG_CPUFREQ_KHZ | The average CPU frequency across all CPUs used by the application, in kHz. | integer |
| AVG_IMCFREQ_KHZ | The average IMC frequency during the application execution, in kHz. | integer |
| DEF_FREQ_KHZ | The default CPU frequency set at the start of the application, in kHz. | integer |
| TIME_SEC | The total execution time of the application, in seconds. | integer |
| CPI | The Cycles per Instruction retrieved across all application processes. | real |
| TPI | Transactions to main memory per Instruction retrieved. | real |
| MEM_GBS | The memory bandwidth of the application, in GB/s. | real |
| IO_MBS | The accumulated I/O bandwidth of the application processes, in MB/s. | real |
| PERC_MPI | The average percentage of time spent in MPI calls across all application processes, in %. | real |
| DC_NODE_POWER_W | The average DC node power consumed by the application in the node, in Watts. | real |
| DRAM_POWER_W | The average DRAM power consumed by the application in the node. | real |
| PCK_POWER_W | The average package power consumed by the application in the node. | real |
| CYCLES | The total cycles consumed by the application, accumulated across all its processes. | integer |
| INSTRUCTIONS | The total number of instructions retrieved, accumulated across all its processes. | integer |
| CPU-GFLOPS | The total number of GFLOPS retrieved, accumulated across all its processes. | real |
| GPUi_POWER_W | The average power consumption of the ith GPU in the node. | real |
| GPUi_FREQ_KHZ | The average frequency of the ith GPU in the node. | real |
| GPUi_MEM_FREQ_KHZ | The average memory frequency of the ith GPU in the node. | real |
| GPUi_UTIL_PERC | The average GPU i utilization. | integer |
| GPUi_MEM_UTIL_PERC | The average GPU i memory utilization. | integer |
| GPUi_GFLOPS | The total GPU i GFLOPS retrieved during the application execution. | real |
| GPUi_TEMP | The average temperature of the ith GPU of the node, in Celsius. | real |
| GPUi_MEMTEMP | The average memory temperature of the ith GPU of the node, in Celsius. | real |
| L1_MISSES | The total number of L1 cache misses during the application execution. | integer |
| L2_MISSES | The total number of L2 cache misses during the application execution. | integer |
| L3_MISSES | The total number of L3 cache misses during the application execution. | integer |
| SPOPS_SINGLE | The total number of floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| SPOPS_128 | The total number of AVX128 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| SPOPS_256 | The total number of AVX256 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| SPOPS_512 | The total number of AVX512 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| DPOPS_SINGLE | The total number of double precision floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| DPOPS_128 | The total number of double precision AVX128 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| DPOPS_256 | The total number of double precision AVX256 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| DPOPS_512 | The total number of double precision AVX512 floating point operations, accumulated across all processes, retrieved during the application execution. | integer |
| TEMPi | The average temperature of socket i during the application execution, in Celsius. | real |
| NODEMGR_DC_NODE_POWER_W | Average node power over the time period, in Watts. This value differs from DC_NODE_POWER_W in that it is computed and reported by the Node Manager (the EARD) independently of whether the EARL was enabled. | real |
| NODEMGR_DRAM_POWER_W | Average DRAM power over the time period, in Watts. Not available on AMD sockets. This value differs from DRAM_POWER_W in that it is computed and reported by the Node Manager (the EARD) independently of whether the EARL was enabled. | real |
| NODEMGR_PCK_POWER_W | Average RAPL package power over the time period, in Watts. This value shows the aggregated power of all sockets in a package. This value differs from PCK_POWER_W in that it is computed and reported by the Node Manager (the EARD) independently of whether the EARL was enabled. | real |
| NODEMGR_MAX_DC_POWER_W | The peak DC node power computed by the Node Manager. | real |
| NODEMGR_MIN_DC_POWER_W | The minimum DC node power computed by the Node Manager. | real |
| NODEMGR_TIME_SEC | Execution time period (in seconds) which comprises the job-step metrics reported by the Node Manager. | real |
| NODEMGR_AVG_CPUFREQ_KHZ | The average CPU frequency computed by the Node Manager during the job-step execution time. | real |
| NODEMGR_DEF_FREQ_KHZ | The default frequency set by the Node Manager when the job-step began. | real |

DCGMI

This plug-in reports the same metrics as the CSV plug-in. Additionally, it reports NVIDIA DCGM profiling metrics for those NVIDIA GPU devices which support them.

Since ear-v5.0, the EAR Library supports collecting and reporting NVIDIA DCGM profiling metrics for Ampere and Hopper devices. NVIDIA Turing should be supported as well.

Apart from loading the report plug-in, i.e., export EAR_REPORT_ADD=dcgmi.so, the EAR Library must have DCGM monitoring enabled. This feature is enabled by default unless explicitly disabled at compile time. If disabled, you can enable it by setting the EAR_GPU_DCGMI_ENABLED environment variable to 1:

...

export EAR_GPU_DCGMI_ENABLED=1
export EAR_REPORT_ADD=dcgmi.so
srun --ear=on my_app

The table below describes the fields reported in the CSV file generated by this plug-in. Please review the official documentation for more information about each metric definition.

By default, EAR collects just a subset of the DCGM metrics (see the table below). In order to collect all of them, set the EAR_DCGM_ALL_EVENTS environment variable to 1. The full list of supported metrics:

| Field | Description | Format |
|---|---|---|
| DCGMI_EVENTS_COUNT | The number of fields related to DCGM metrics. | integer |
| GPUi_gr_engine_active (*) | Graphics Engine Activity. | real |
| GPUi_sm_active (*) | SM Activity. | real |
| GPUi_sm_occupancy (*) | SM Occupancy. | real |
| GPUi_tensor_active | Tensor Activity. | real |
| GPUi_dram_active | Memory BW Utilization. | real |
| GPUi_fp64_active | FP64 Engine Activity. | real |
| GPUi_fp32_active | FP32 Engine Activity. | real |
| GPUi_fp16_active | FP16 Engine Activity. | real |
| GPUi_pcie_tx_bytes (*) | PCIe Bandwidth (writes). | real |
| GPUi_pcie_rx_bytes (*) | PCIe Bandwidth (reads). | real |
| GPUi_nvlink_tx_bytes (*) | NVLink Bandwidth (writes). | real |
| GPUi_nvlink_rx_bytes (*) | NVLink Bandwidth (reads). | real |

(*) Metrics marked with an asterisk need to be requested explicitly through export EAR_DCGM_ALL_EVENTS=1.
