diff --git a/doc/api.rst b/doc/api.rst index 538ceec9..8a464bd6 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -1,8 +1,8 @@ -.. _api: - API === +.. module:: pyperf + The module version can be read from ``pyperf.VERSION`` (tuple of int) or ``pyperf.__version__`` (str). @@ -197,6 +197,8 @@ Benchmark class Get the start date of the first run and the end date of the last run. + Return ``(datetime.datetime, datetime.datetime)`` or ``None``. + Return a ``(start, end)`` tuple where start and end are ``datetime.datetime`` objects if at least one run has date metadata. @@ -222,7 +224,7 @@ Benchmark class Get the total number of values. - .. method:: get_nwarmup() -> int or float + .. method:: get_nwarmup() -> int | float Get the number of warmup values per run. @@ -244,7 +246,7 @@ Benchmark class Use the ``duration`` metadata of runs, or compute the sum of their raw values including warmup values. - .. method:: get_loops() -> int or float + .. method:: get_loops() -> int | float Get the number of outer loop iterations of runs. @@ -253,7 +255,7 @@ Benchmark class .. versionadded:: 1.3 - .. method:: get_inner_loops() -> int or float + .. method:: get_inner_loops() -> int | float Get the number of inner loop iterations of runs. @@ -262,7 +264,7 @@ Benchmark class .. versionadded:: 1.3 - .. method:: get_total_loops() -> int or float + .. method:: get_total_loops() -> int | float Get the total number of loops per value (outer-loops x inner-loops). @@ -650,6 +652,10 @@ Runner class See the :ref:`bench_time_func() example `. + .. method:: is_worker() -> bool + + Return ``True`` if the current process was started with the ``--worker`` option, ``False`` otherwise. + .. method:: parse_args(args=None) Parse command line arguments using :attr:`argparser` and put the result @@ -686,7 +692,7 @@ The :class:`Run` class collects metadata by default.
Benchmark: * ``date`` (str): date when the benchmark run started, formatted as ISO 8601 -* ``duration`` (int or float >= 0): total duration of the benchmark run in seconds (``float``) +* ``duration`` (int | float >= 0): total duration of the benchmark run in seconds (``float``) * ``name`` (non-empty str): benchmark name * ``loops`` (``int >= 1``): number of outer-loops per value (``int``) * ``inner_loops`` (``int >= 1``): number of inner-loops of the benchmark (``int``) @@ -733,14 +739,14 @@ System metadata: * ``boot_time`` (str): Date and time of the system boot * ``hostname``: Host name * ``platform``: short string describing the platform -* ``load_avg_1min`` (int or float >= 0): Load average figures giving the number of jobs in the run +* ``load_avg_1min`` (int | float >= 0): Load average figures giving the number of jobs in the run queue (state ``R``) or waiting for disk I/O (state ``D``) averaged over 1 minute * ``runnable_threads``: number of currently runnable kernel scheduling entities (processes, threads). The value comes from the 4th field of ``/proc/loadavg``: ``1`` in ``0.20 0.22 0.24 1/596 10123`` for example (``596`` is the total number of threads). -* ``uptime`` (int or float >= 0): Duration since the system boot (``float``, number of seconds +* ``uptime`` (int | float >= 0): Duration since the system boot (``float``, number of seconds since ``boot_time``) Other: diff --git a/doc/changelog.rst b/doc/changelog.rst index ac48fe7a..19896888 100644 --- a/doc/changelog.rst +++ b/doc/changelog.rst @@ -180,7 +180,7 @@ a Python binding called "perf" as well. Version 1.6.0 (2019-01-11) -------------------------- -* Add *teardown* optional parameter to :class:`Runner.timeit` and ``--teardown`` +* Add *teardown* optional parameter to ``Runner.timeit`` and ``--teardown`` option to the :ref:`perf timeit ` command. Patch by **Alex Khomchenko**. 
* ``Runner.timeit(stmt)`` can now be used to use the statement as the benchmark @@ -561,8 +561,8 @@ Version 0.7.10 (2016-09-17) Version 0.7.9 (2016-09-17) -------------------------- -* Add :meth:`Benchmark.get_unit` method -* Add :meth:`BenchmarkSuite.get_metadata` method +* Add ``Benchmark.get_unit`` method +* Add ``BenchmarkSuite.get_metadata`` method * metadata: add ``nohz_full`` and ``isolated`` to ``cpu_config`` * add ``--affinity`` option to the ``metadata`` command * ``convert``: fix ``--remove-all-metadata``, keep the unit @@ -614,7 +614,7 @@ Version 0.7.4 (2016-08-18) * Support PyPy * metadata: add ``mem_max_rss`` and ``python_hash_seed`` -* Add :func:`perf.python_implementation` and :func:`perf.python_has_jit` +* Add ``perf.python_implementation`` and ``perf.python_has_jit`` functions * In workers, calibration samples are now stored as warmup samples. * With a JIT (PyPy), the calibration is now done in each worker. The warmup @@ -711,7 +711,7 @@ Changes: and TextRunner. * A single JSON file can now contain multiple benchmarks * Add a dependency to the ``six`` module - :meth:`Benchmark.add_run` now raises an exception if a sample is zero. + ``Benchmark.add_run`` now raises an exception if a sample is zero. * Benchmark.name becomes a property and is now stored in metadata * TextRunner now uses powers of 2, rather than powers of 10, to calibrate the number of loops @@ -725,9 +725,9 @@ Changes: * The ``hist`` command now accepts multiple files * ``hist`` and ``hist_scipy`` commands got a new ``--bins`` option * Replace mean with median -* Add :meth:`perf.Benchmark.median` method, remove ``Benchmark.mean()`` method +* Add ``perf.Benchmark.median`` method, remove ``Benchmark.mean()`` method * ``Benchmark.get_metadata()`` method removed: use directly the - :attr:`perf.Benchmark.metadata` attribute + ``perf.Benchmark.metadata`` attribute * Add ``timer`` metadata. ``python_version`` now also contains the architecture (32 or 64 bits). 
@@ -774,23 +774,23 @@ Version 0.3 (2016-06-10) * If TextRunner detects isolated CPUs, it sets automatically the CPU affinity to these isolated CPUs * Add ``--json-file`` command line option -* Add :meth:`TextRunner.bench_sample_func` method +* Add ``TextRunner.bench_sample_func`` method * Add examples of the API to the documentation. Split also the documentation into subpages. * Add metadata ``cpu_affinity`` -* Add :func:`perf.is_significant` function -* Move metadata from :class:`~perf.Benchmark` to ``RunResult`` -* Rename the ``Results`` class to :class:`~perf.Benchmark` -* Add :attr:`~TextRunner.inner_loops` attribute to - :class:`TextRunner`, used for microbenchmarks when an - instruction is manually duplicated multiple times +* Add ``perf.is_significant`` function +* Move metadata from ``perf.Benchmark`` to ``RunResult`` +* Rename the ``Results`` class to ``perf.Benchmark`` +* Add ``TextRunner.inner_loops`` attribute to + ``TextRunner``, used for microbenchmarks when an + instruction is manually duplicated multiple times Version 0.2 (2016-06-07) ------------------------ * use JSON to exchange results between processes * new ``python3 -m perf`` CLI -* new :class:`TextRunner` class +* new ``TextRunner`` class * huge enhancement of the timeit module * timeit has a better output format in verbose mode and now also supports a ``-vv`` (very verbose) mode. Minimum and maximum are no longer shown in diff --git a/doc/runner.rst b/doc/runner.rst index da724864..e7f42a94 100644 --- a/doc/runner.rst +++ b/doc/runner.rst @@ -3,7 +3,7 @@ Runner CLI ========== -Command line options of the :class:`Runner` class. +Command line options of the :class:`pyperf.Runner` class. Loop iterations ---------------