Binary file added docs/_images/add_new_mfc_tag.png
Binary file added docs/_images/allKeyIds.png
Binary file added docs/_images/calibration_keys_enum.png
Binary file added docs/_images/ckv_config.png
Binary file added docs/_images/ckv_subgraph.png
Binary file added docs/_images/define_key_values.png
Binary file added docs/_images/define_new_tag.png
Binary file added docs/_images/device_pp_subgraph.png
Binary file added docs/_images/device_subgraph.png
Binary file added docs/_images/graph_keys.png
Binary file added docs/_images/mfc_assigned_tags.png
Binary file added docs/_images/mfc_calibration_window.png
Binary file added docs/_images/mfc_kvh2xml.png
Binary file added docs/_images/open_key_configurator.png
Binary file added docs/_images/stream-device_subgraph.png
Binary file added docs/_images/stream_subgraph.png
Binary file added docs/_images/system_designer_workflow.png
Binary file added docs/_images/tag_key.png
Binary file added docs/_images/volume_control_ckv.png
32 changes: 1 addition & 31 deletions docs/_sources/design/arspf_design.rst.txt
@@ -1168,37 +1168,7 @@ Customizations

Custom module
-------------

The custom module development workflow involves the following high-level steps:

1. Start custom algorithm using standard industrial tool such as Matlab and optimize the algorithm
for intended processor architecture

2. Develop the Common Audio Processor Interface (CAPI) wrapper for the
custom algorithm. For examples and detailed instructions, see the :ref:`capi_mod_dev_guide`

3. Develop an API header file consisting of Module ID and configuration
parameters related to the custom algorithm.

4. Generate an API XML file by running the h2xml conversion tool on the API
header file. The XML file provides the necessary information about configuration
interfaces, supported containers, stack size, and any other policies
that are required for the AudioReach configuration tool (ARC platform).

5. Compile the CAPI-wrapped module as a built-in module as part of ARE image
or standalone shared object.

6. Import the custom module into the ARC platform through a module
discovery workflow, and create use case graphs by placing the module
in the appropriate container and subgraphs.

7. Calibrate or configure the module together with an end-to-end use
case, and store the data in the file system (through the ACDB file
provided by the ARC platform).

8. Launch the end-to-end use case from the application, which in turn
uses the use case graph and calibration information from the ACDB
file and provides them to the ARE to realize the use case.
For steps on how to add a custom module, please refer to the :ref:`adding_modules` guide.

Custom container
----------------
260 changes: 190 additions & 70 deletions docs/_sources/design/design_concept.rst.txt

Large diffs are not rendered by default.

41 changes: 40 additions & 1 deletion docs/_sources/design/linux_plug-in_arch.rst.txt
@@ -224,7 +224,24 @@ Graph Overview
Sample Audio Graph for MSSD Scenario

The figure depicts the reference audio graph design for the MSSD playback scenario. In this example, the stream subgraph and stream-PP subgraph are consolidated into a single stream subgraph. The stream subgraph consists of a write shared memory endpoint, a PCM decoder, and a PCM converter. The client passes PCM samples to the write shared memory endpoint. The PCM converter converts PCM samples to a format supported by the stream-specific post-processing modules, if conversion is necessary. The output of the stream subgraph is fed into the stream-device subgraph, which consists of a media format converter (MFC). The MFC converts stream-leg PCM to the device-leg PCM format. After conversion, the output of the stream-device subgraph is fed into the device-PP subgraph for device-specific post-processing. Note that a mixer is placed at the beginning of this subgraph to mix input streams. The output of the device-PP subgraph is then fed into the device subgraph, which contains a hardware endpoint module such as an I2S driver for eventual rendering out of the SoC.


The reference playback graphs for Linux platforms typically consist of the following subgraphs:

1. **Stream** – The software interface between the DSP and the high-level operating system.
2. **Stream-PP** – Contains post-processing (PP) modules specific to a stream (for example, bass boost or reverb).
3. **Stream-Device** – Consists of any per-stream, per-device modules, such as sample rate or media format conversion.
4. **Device-PP** – Contains PP modules specific to a hardware device (common examples include IIR Filter and MBDRC).
5. **Device** – The hardware endpoint, most often a mic or a speaker.

An Rx (audio output) use case will follow this order (Stream -> Device), while a Tx (audio input) use case will be
reversed (Device -> Stream).
By default, GKVs are defined for Stream, StreamPP, Device, and DevicePP. StreamDevice subgraphs do not have a unique GKV, but instead use a combination of Stream
and Device GKVs.

Please note that it is not necessary for every graph to have a Stream-PP, Stream-Device, or Device-PP subgraph.
Most commonly, subgraphs are only defined once for each Stream or Device, and
different calibrations are realized with the PP subgraphs.
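
To make this concrete, the sketch below shows how a graph key vector for a low-latency playback (Rx) use case might be assembled in client code. The pair type and every key/value identifier here are illustrative placeholders, not definitions taken from a shipped kvh2xml.h; in a real build these identifiers come from the generated key definitions.

.. code-block:: c

   #include <stdint.h>

   /* Placeholder IDs standing in for identifiers generated from kvh2xml.h. */
   enum {
       STREAMRX        = 0xA1000000,  /* generic key: Rx stream type     */
       PCM_LL_PLAYBACK = 0xA1000003,  /* value: low-latency PCM playback */
       DEVICERX        = 0xA2000000,  /* generic key: Rx device type     */
       SPEAKER         = 0xA2000001,  /* value: speaker endpoint         */
   };

   /* Hypothetical key/value pair type; the actual type comes from the
    * graph-service client API. */
   struct kv_pair { uint32_t key; uint32_t value; };

   /* GKV selecting the Stream and Device subgraphs for this use case. */
   static const struct kv_pair playback_gkv[] = {
       { STREAMRX, PCM_LL_PLAYBACK },
       { DEVICERX, SPEAKER },
   };

As noted above, the Stream-Device subgraph is resolved from a combination of the Stream and Device keys rather than from a GKV of its own.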

Key Vector Design
^^^^^^^^^^^^^^^^^^^^^^

Expand Down Expand Up @@ -257,6 +274,28 @@ Key Vector Design
| Stream2 + Device Metadata | StreamRX2DeviceRX KVs, DeviceRX PP KVs |
+-----------------------------+------------------------------------------+

Below is a breakdown of a Low Latency playback graph from the RB3 Gen2 ACDB file:

StreamRX Subgraph:

.. figure:: images/linux/stream_subgraph.png
:figclass: fig-center

Stream-Device Subgraph:

.. figure:: images/linux/stream-device_subgraph.png
:figclass: fig-center

Device PP Subgraph:

.. figure:: images/linux/device_pp_subgraph.png
:figclass: fig-center

Device Subgraph:

.. figure:: images/linux/device_subgraph.png
:figclass: fig-center

**GKV**

GKV1: <StreamRX1 KVs, StreamRX2 PP KVs, StreamRX1DeviceRX KVs, DeviceRX
83 changes: 78 additions & 5 deletions docs/_sources/dev/dev_workflow.rst.txt
@@ -1,10 +1,83 @@
.. _dev_workflow:

Use Case Development Workflow
##################################################
Development Workflow
#####################

.. figure:: images/usecase_flowchart.png
.. toctree::
:maxdepth: 1
:hidden:

adding_modules
capi_mod_dev
system_workflow


The AudioReach project envisions three different profiles of audio product developers:

* **Algorithm Developer:** Develops audio algorithms and integrates them into AudioReach by converting them into modules.
* **Tuning Engineer:** Tunes existing audio use cases in AudioReach to their exact specification.
* **System Integrator:** Designs audio graphs with the necessary modules and develops software components in AudioReach to realize audio use cases and their associated operations, such as pause, seek, and volume control.

.. figure:: images/dev_workflow/developer_workflow.png
:figclass: fig-center
:scale: 70 %
:scale: 80 %

Workflow diagram with color-coded developer workflows.

High-level Use Case Development Workflow
The AudioReach SDK and tools provide feature-rich capabilities to support each of these developer profiles.
For example, developers can use the program **AudioReach Creator** to modify and tune audio use cases, and to add custom modules to use case graphs.
AudioReach Creator is an integral part of AudioReach and is a necessary tool for all of the workflow types listed above.
For steps on how to install AudioReach Creator, please refer to the :ref:`arosp_overview` page, under the section "Steps to install ARC".

* Note: To access the AudioReach Creator guide, install and open the program, then select the "User Guide" option on the start-up window.

Additionally, AudioReach developers can use a supported platform device, such as the Raspberry Pi 4 or the RB3 Gen2, to test new modules, use cases, or features.
Please refer to the available :ref:`platform` guides for steps on how to set up an AudioReach build for the preferred device and run a basic use case.

Below is a description of each developer profile, along with the documentation pages and resources to consult to get started.

====================================

**Algorithm Developer**

An algorithm developer can integrate a custom audio algorithm into AudioReach in the form of a **module**.
Once a custom module is developed, it can be compiled into an AudioReach build and then added to a use case graph in AudioReach Creator.
Then, the module can be tested by pushing the use case (or ACDB) files to a platform device and running the use case.
For a more in-depth guide on how to integrate a custom module into AudioReach and test, please refer to the page :ref:`adding_modules`.

The :ref:`available_modules` page contains the list of audio modules that are currently available on AudioReach.

====================================

**Tuning Engineer**

An audio tuning engineer will utilize the full capabilities of AudioReach Creator to tune audio use cases to their exact specifications by changing module properties
such as volume, audio filtering, media format, and more.
Tuning engineers can take advantage of both "offline" and "online" tuning.

In offline tuning, a developer can change module properties (such as the volume or media format) in AudioReach Creator and save the use case in the form of ACDB files.
These ACDB files will then contain the use case with the updated module properties, which can be loaded onto the platform device to test the new calibration.

In online tuning, a developer can run an audio use case on the platform device and directly tune module properties while the use case is running, which is known as **Real-time Calibration (RTC)**.
To do this, the device must first be connected to the "online mode" of AudioReach Creator. Once AudioReach Creator is connected, running a use case on the device will cause the corresponding
audio use case graph to appear in the AudioReach Creator graph view. Then, the developer can directly update the module properties in AudioReach Creator and hear the updated results in real time.
For example, a developer can start a playback use case, increase the volume using the "Volume Control" module, set the changes, and hear the change in volume while the clip is running.

In some cases, modules can also be added and removed from the use case graph while the use case is running, which is known as **Real-time Graph Modification (RTGM)**.
Online tuning allows the developer to modify the use case properties without re-uploading the ACDB files and rebooting the device.
Developers can also use integrated resource monitoring (IRM) to view latency and performance measurements while in online tuning mode.

Tuning engineers will likely want to take full advantage of the calibration tools in AudioReach Creator. For this, please refer to section 5 of the AudioReach Creator guide.

====================================

**System Integrator**

A system integrator is responsible for developing and integrating software components which utilize AudioReach constructs and APIs to enable new audio use cases and their associated operations, such as pausing, seeking, and volume control.
This role may also involve creating new use case graphs on which the use case software will operate.
To successfully bring new features and functionality to the intended product, system integrators must possess in-depth knowledge of AudioReach constructs, software design, and the relevant tools.

To learn about the system integrator workflow, please refer to the :ref:`system_workflow` guide.
For more details, including explanations of how calibrations for audio operations are set in AudioReach,
please see section 4 of the AudioReach Creator guide.
To learn more about the full design of AudioReach, please refer to the :ref:`design` pages.
3 changes: 1 addition & 2 deletions docs/_sources/dev/index.rst.txt
@@ -6,7 +6,6 @@ AudioReach Developer Guides
.. toctree::
:maxdepth: 1

dev_workflow
capi_mod_dev
dev_workflow
plat_port
available_modules
137 changes: 137 additions & 0 deletions docs/_sources/dev/system_workflow.rst.txt
@@ -0,0 +1,137 @@
.. _system_workflow:

System Integrator Workflow
###########################

.. contents::
:local:
:depth: 2

Introduction
==============
This document is a high-level overview of the system integrator workflow.
A system integrator uses AudioReach to design audio use case graphs and to develop the software that operates on those graphs.
Before proceeding to the rest of this document, it is important for the system integrator to first understand the fundamental AudioReach graph
composition and use case constructs. For high-level overviews of these concepts along with some examples,
please refer to the :ref:`design_concept` page.

Workflow Overview
=================

.. figure:: images/system_integrator/system_designer_workflow.png
:figclass: fig-center
:scale: 80 %

System Integrator Workflow Diagram

Import H2XML XML files
--------------------------
In AudioReach, H2XML definitions are used to generate metadata that ARC can leverage to create use cases. The XML files are produced by running the h2xml tool
on annotated API header files (a sketch of such annotations follows the list below). Some examples of H2XML metadata include:

* CAPI module definitions
* Key-values and module tags, defined in kvh2xml.h
* Container properties
* Driver properties
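
The sketch below shows what an h2xml-annotated parameter definition in a module API header might look like. The module, parameter name, ID, and field layout are hypothetical, and the annotation keywords are shown only to illustrate the style of markup that the h2xml tool consumes.

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical parameter ID for an example gain module. */
   #define PARAM_ID_EXAMPLE_GAIN 0x08001234

   /** @h2xmlp_parameter   {"PARAM_ID_EXAMPLE_GAIN", PARAM_ID_EXAMPLE_GAIN}
       @h2xmlp_description {Gain applied by a hypothetical example module.} */
   struct param_id_example_gain_t
   {
      uint16_t gain;
      /**< @h2xmle_description {Linear gain, in Q13 format.}
           @h2xmle_default     {0x2000}
           @h2xmle_range       {0..0xFFFF} */

      uint16_t reserved;
      /**< @h2xmle_description {Clients must set this field to zero.} */
   };

Running h2xml over headers annotated in this way produces the XML metadata that is imported into ARC.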

Design use case graphs
--------------------------
The core task of the system designer is to create a system of graphs that satisfy the
product-specific use case requirements. The system designer should have some
knowledge of the tuning requirements for a device in order to utilize the best signal
processing topology for a particular use case. In many situations, the reference
implementation may be adequate.

Associate GKV, CKV, TKV
---------------------------
To satisfy the driver-side logic, the system designer must associate GKVs, CKVs, and
TKVs so that graphs and modules can be leveraged with minimal calibration entries. Some tuning
knowledge is required, for example knowing which modules should be sample-rate
dependent when assigning CKVs.
If necessary, the system designer may also create new key-values using the kvh2xml
header file.
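
For example, a module whose tuning depends on sample rate can have its calibration data stored under CKVs keyed on sampling rate, so that the same graph picks up different calibration at 48 kHz and 44.1 kHz. The sketch below is illustrative only; the key name, its ID, and the pair type are placeholders rather than identifiers from a shipped kvh2xml.h.

.. code-block:: c

   #include <stdint.h>

   /* Placeholder calibration key ID and key/value pair type. */
   enum { SAMPLINGRATE = 0xFE000000 };
   struct kv_pair { uint32_t key; uint32_t value; };

   /* One calibration data set is stored in the ACDB per CKV; the driver
    * selects the CKV that matches the running configuration. */
   static const struct kv_pair ckv_48k[] = { { SAMPLINGRATE, 48000 } };
   static const struct kv_pair ckv_44k[] = { { SAMPLINGRATE, 44100 } };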

Configure dynamic loading
-----------------------------
The system designer may optionally configure whether each module is
loaded at boot time or at runtime.

Customizing KVs with KVH2XML
==============================

KVH2XML overview
--------------------
AudioReach defines a data-driven method of use case handling. Using
KVH2XML.h, H2XML tools, and the Discovery Wizard in ARC, a system designer can
define and manage custom keys and values.
The general steps to add or modify keys/key-values are:

1. Update the driver software to include a new key/key-value to associate with a new use
case.

2. In ARC, import the updated key definitions and associate them with the new use case.

Adding a generic key
------------------------
In kvh2xml.h, keys are first defined as generic keys and then added as graph keys,
calibration keys, or module tags. The images below show examples from the
Qualcomm-specific kvh2xml.h. To add a generic key:

1. Open kvh2xml.h located in the audioreach-conf repository.
2. Add a new key ID to the AllKeyIds enum:

.. figure:: images/system_integrator/allKeyIds.png
:figclass: fig-left
:scale: 60 %

The key ID value will follow the format 0xFF000000.

3. Define the key values:

.. figure:: images/system_integrator/define_key_values.png
:figclass: fig-left
:scale: 60 %
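
Put together, a new generic key and its values might look like the following sketch; the names and numeric values are hypothetical and simply mirror the 0xFF000000 format shown above.

.. code-block:: c

   /* Hypothetical new generic key appended to the AllKeyIds enum. */
   enum AllKeyIds {
       /* ... existing key IDs ... */
       CUSTOM_USECASE = 0xFF000000,
   };

   /* Hypothetical values that the new key can take. */
   enum Key_CustomUsecase {
       CUSTOM_USECASE_DEFAULT   = 0xFF000001,
       CUSTOM_USECASE_LOW_POWER = 0xFF000002,
   };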

Adding a graph or calibration key
-------------------------------------
To add the key as a graph or calibration key (after adding as a generic key):

1. Using the key ID, add the generic key to the Graph_Keys enum:

.. figure:: images/system_integrator/graph_keys.png
:figclass: fig-left
:scale: 60 %

Otherwise, if the new key is a calibration key, add it to the CAL-Keys enum:

.. figure:: images/system_integrator/calibration_keys_enum.png
:figclass: fig-left
:scale: 60 %

2. Update the driver side logic to create a use case mapping for the new key.
3. After recompiling, the output XML file is generated automatically. Import the new
kvh2xml XML file using the Discovery Wizard. For details, see section 4.1 of the ARC guide.
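
As a sketch, registering the hypothetical key from the previous section as a graph key or a calibration key could look like the following; the enum spellings are illustrative and may not match the shipped kvh2xml.h exactly, and a key is normally added to only one of the two enums.

.. code-block:: c

   /* Hypothetical generic key ID from the earlier sketch. */
   enum { CUSTOM_USECASE = 0xFF000000 };

   /* Register it as a graph key (used in GKVs)... */
   enum Graph_Keys {
       gk_custom_usecase = CUSTOM_USECASE,
   };

   /* ...or as a calibration key (used in CKVs). */
   enum Cal_Keys {
       ck_custom_usecase = CUSTOM_USECASE,
   };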

Adding a module tag
-------------------
To add the key as a module tag (after adding as a generic key):

1. Open `kvh2xml.h <https://github.com/Audioreach/audioreach-conf/blob/master/qcom/kvh2xml.h>`__.
2. Add the new tag as a define in kvh2xml.h. Tag values follow the format 0xC00000FF:

.. figure:: images/system_integrator/define_new_tag.png
:figclass: fig-left
:scale: 60 %

3. Add one or more keys to associate with the module tag:

.. figure:: images/system_integrator/tag_key.png
:figclass: fig-left
:scale: 60 %

4. Update the driver side logic to utilize the new tag.

5. After recompiling, the output XML file is generated automatically. Import the new
kvh2xml XML file using the Discovery Wizard. For details, see section 4.1 of the ARC guide.
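
Putting these steps together, a new module tag definition in kvh2xml.h might look like the sketch below; the tag name and value are hypothetical and simply follow the 0xC00000FF tag format mentioned above, and the associated key is a placeholder.

.. code-block:: c

   /* Hypothetical module tag following the 0xC00000FF format. */
   #define TAG_CUSTOM_MFC 0xC0000010

   /* Placeholder generic key associated with the tag (see the tag_key
    * figure above); the driver addresses the tagged module at runtime
    * through TKVs built from the key(s) linked to the tag. */
   enum TagCustomMfcKeys {
       TAG_KEY_CUSTOM_MFC = 0xFF000000,
   };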


3 changes: 2 additions & 1 deletion docs/_sources/index.rst.txt
@@ -9,10 +9,11 @@ Welcome to AudioReach

Announcements
*************
* (10/6/2025): The newly added :ref:`dev_workflow` guide provides a starting point for developers to learn about the AudioReach developer workflow.

* (8/22/2025): Two documentation pages have been recently released:

* The :ref:`available_modules` list provides an overview of all the available
* The :ref:`available_modules` list provides an overview of all the available
audio algorithms in AudioReach, including where to locate them in the open source project and a basic description of their capabilities.
* The :ref:`adding_modules` guide outlines steps on how to add a custom audio algorithm to an AudioReach Yocto build.
