
Conversation

Contributor

@tmleman tmleman commented Dec 8, 2025

This pull request extends the generic IPC service API with a function to support sending critical messages/notifications and refactors the Intel Audio DSP host IPC service backend.

Summary of changes:

  • ipc_service: Add ipc_service_send_critical() API and a matching send_critical() backend hook to send high-priority messages via a dedicated fast path that bypasses normal busy checks (a minimal caller-side sketch follows after this list).
  • ipc: intel_adsp: Simplify the host IPC service backend by replacing the custom intel_adsp_ipc_msg descriptor and enum-based length encoding with a fixed two-word (header + payload) IPC payload used consistently in send and receive paths.
  • intel_adsp: Wire the emergency/critical send path through the new ipc_service_send_critical() API while keeping the normal IPC flow unchanged.
  • intel_adsp: Move the synchronous IPC wait logic into the common intel_adsp_ipc_send_message_sync() helper used by tests, and remove the now-redundant backend-specific sync helper.
  • intel_adsp: Update and clarify backend documentation comments to describe the new data format, the role of ipc_ept_cfg::priv, and the semantics of the TX-buffer readiness and RX-buffer release helpers.
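
For orientation, a minimal caller-side sketch of the new API with the two-word (header + extended payload) layout described above; the endpoint object is a placeholder and is assumed to be already registered against the intel_adsp host IPC backend:

#include <stdint.h>
#include <zephyr/ipc/ipc_service.h>

/* Placeholder endpoint, assumed already registered against the
 * intel_adsp host IPC backend instance.
 */
static struct ipc_ept host_ept;

/* Normal path: two-word message (header + extended payload word). */
static int send_normal(uint32_t data, uint32_t ext_data)
{
    uint32_t msg[2] = {data, ext_data};

    return ipc_service_send(&host_ept, msg, sizeof(msg));
}

/* Critical path added by this PR: same payload, but routed through the
 * dedicated fast path that bypasses the normal busy checks.
 */
static int send_panic(uint32_t data, uint32_t ext_data)
{
    uint32_t msg[2] = {data, ext_data};

    return ipc_service_send_critical(&host_ept, msg, sizeof(msg));
}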

Status / TODO:

  • Existing intel_adsp IPC tests have not yet been adapted to the refactored backend and will currently fail. Test updates will be added as part of this PR before it is ready to merge.

@tmleman tmleman added In progress For PRs: is work in progress and should not be merged yet. For issues: Is being worked on RFC Request For Comments: want input from the community labels Dec 8, 2025
@zephyrbot zephyrbot added area: Boards/SoCs area: IPC Inter-Process Communication platform: Intel ADSP Intel Audio platforms labels Dec 8, 2025
Contributor

@kv2019i kv2019i left a comment


All in all looks good. I think using the service interface rx/tx buffer ops is a cleaner mapping. Some questions on whether we keep a struct for the payload (which is still special), and a question on the synchronous send variant.

return ret;
}
ARG_UNUSED(dev);
uint32_t msg[2] = {data, ext_data};
Contributor


I've been looking at how to bolt my #98862 on top of this, and it seems doable. So if the full payload is sent using this interface, the msg would be 4x32 bit, with an inband header and an optional larger payload.

I do wonder if the "struct intel_adsp_ipc_msg" should be kept, as the payload is already somewhat special with the 2x32 bit inband message.

Contributor Author


I haven't seen the changes you're proposing before. I was considering a similar solution, but instead of a structure with a pointer, I saw it more as a buffer that has space for two dwords of message + payload. When SOF sends a message, the IPC backend would know how many bytes to copy to the mailbox based on the passed size.

Contributor


@tmleman Given SOF client code is using a scatter-gather approach (the IPC payload is a tuple of up to two register words and a variable-size payload), I think we need some structure for the payload. We will have variance between different HW implementations (some transfer data inband in the doorbell registers, some don't), and between IPC versions (IPC3 has no concept of "ext_data").

I was thinking a four-item tuple "first_word, second_word, payload_ptr, payload_len" would be enough to cover all the variation we know about. Given we are mixing data and pointers, I think it would be better to use a struct to describe the object that is passed to the IPC service send routine. I think this makes the code more readable both on the SOF side and in the various Zephyr backends.
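
For illustration, a rough sketch of the kind of descriptor being discussed here; the type and field names are hypothetical and not part of this PR:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical send descriptor: up to two inband register words plus an
 * optional variable-size out-of-band payload. Intended to cover IPC3
 * (no ext word) and IPC4, and hardware with or without inband data in
 * the doorbell registers.
 */
struct sof_ipc_tx_msg {
    uint32_t first_word;    /* primary/doorbell register word */
    uint32_t second_word;   /* extension word; unused on IPC3 */
    const void *payload;    /* out-of-band payload, or NULL */
    size_t payload_len;     /* 0 when no out-of-band payload is used */
};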

Contributor


@tmleman Any takes on this? Do you prefer to merge this first and then follow up with the payload extension? I'm OK with this as well, but it would be good to have a rough consensus on how this can be extended. If this will be forever limited to "two word" payloads, I'm not sure it's worth the effort to move this to IPC service, as we already know this will only be useful for one type of hardware. And if that's the case, SOF code could just directly use the driver interface.

Contributor Author


@kv2019i Right now I don’t have a concrete design for the payload descriptor that would be passed to the backend via the IPC service.

The steps I had in mind are:

  • After these changes are merged, update the Zephyr version in SOF and run regression tests.
  • Introduce IPC service usage in SOF and replace the direct use of the Intel‑specific IPC driver.
  • Clean up the Intel ADSP IPC path by removing the old IPC driver so only the backend remains.
  • Once we are in that state, we can extend the backend to handle richer IPC payloads (data in mailboxes/shared memory).

Today, the data we pass through IPC service is intentionally backend‑specific, and each vendor can define its own layout. I’d like to hear from other vendors what they would need here. If we see enough commonality for SOF IPC, we can then agree on a shared struct or helper type on top of that.

Contributor


@tmleman Ack, thanks, I think we are aligned then. I'm good to take this in steps. We can use the size parameter to incrementally add features (if the message length is more than two words, the client is using a newer interface); see the sketch after this comment.

W.r.t. this:

Today, the data we pass through IPC service is intentionally backend‑specific, and each vendor can define its own
layout. I’d like to hear from other vendors what they would need here.

I think we can aim higher already now. If you look at the IPC3 drivers in https://github.com/thesofproject/sof/blob/main/src/drivers/ , we have four vendors using exactly the same layout. So I think it's really already aligned in SOF -- we have two layouts currently, IPC3 and IPC4, and the two are never compiled into the same binary, so in practice there's no variation and the layout doesn't vary from one hardware to another. This has the great benefit that we don't need to add another abstraction on the client (SOF) side, but can directly use the Zephyr IPC interface.

I mean, if folks from other vendors object, I'm happy to change my mind, but given this already worked once, I'd rather go towards this goal, keep the code in SOF clean, and take full benefit of the hardware abstraction Zephyr provides.
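
A hedged sketch of what that size-based dispatch could look like inside a backend send() hook; the helper functions and the extended-descriptor handling are placeholders, not code from this PR:

#include <errno.h>
#include <stdint.h>
#include <zephyr/device.h>

/* Placeholder helpers standing in for the real register/mailbox writes. */
int send_two_words(const struct device *instance, uint32_t data, uint32_t ext);
int send_extended(const struct device *instance, const void *desc, size_t len);

static int backend_send(const struct device *instance, void *token,
                        const void *data, size_t len)
{
    ARG_UNUSED(token);

    if (data == NULL) {
        return -EBADMSG;
    }

    if (len == 2 * sizeof(uint32_t)) {
        /* Current clients: plain header + extended payload word. */
        const uint32_t *msg = data;

        return send_two_words(instance, msg[0], msg[1]);
    }

    if (len > 2 * sizeof(uint32_t)) {
        /* Newer clients: richer descriptor (e.g. with an out-of-band
         * payload), detected purely by its size.
         */
        return send_extended(instance, data, len);
    }

    return -EBADMSG;
}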

return -ENOTSUP;
}

int intel_adsp_ipc_release_rx_buffer(const struct device *instance, void *token, void *data)
Contributor


I think this is nice; using release_rx_buffer() maps nicely to completing the IPC.

uint32_t msg[2] = {data, ext_data};

return 0;
return ipc_service_send(&intel_adsp_ipc_ept, &msg, sizeof(msg));
Contributor


Doesn't this break the semantics as this is no longer synchronous?

It's another question whether we can drop the synchronous send variant altogether. It seems this is only used by the unit tests (e.g. zephyr/tests/boards/intel_adsp/smoke/src/hostipc.c).

If needed, this could be replaced by an async send plus an implementation of "ipc_service_get_tx_buffer()" that allows waiting until the send is complete.
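
For reference, a caller-side sketch of that kind of wait, assuming the backend's get_tx_buffer_size() hook returns a negative error while the previous message is still in flight (the helper name and polling interval are placeholders):

#include <errno.h>
#include <zephyr/ipc/ipc_service.h>
#include <zephyr/kernel.h>

/* Block until the channel reports a TX buffer size again, i.e. until the
 * previous send has completed, or until the timeout expires.
 */
static int wait_tx_idle(struct ipc_ept *ept, k_timeout_t timeout)
{
    k_timepoint_t end = sys_timepoint_calc(timeout);

    while (ipc_service_get_tx_buffer_size(ept) < 0) {
        if (sys_timepoint_expired(end)) {
            return -ETIMEDOUT;
        }
        k_msleep(1);
    }

    return 0;
}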

Contributor Author


Yes, but I'm fixing this in a later commit (521a106). Ideally this should be one commit, but I wanted to discuss the target solution. One of the considered options is to remove this functionality.

Contributor


In SOF, IPC sending will remain synchronous though?

Contributor Author


@kv2019i We haven't made a decision yet about what we do with 'intel_adsp_ipc_send_message_sync'. I would probably go in the direction of removing this function; I don't think we ever had a use case for it.

@tmleman tmleman force-pushed the topic/upstream/pr/intel/ace/ipc_rework branch from 51b8020 to c995b6d on December 9, 2025 16:42
@nashif nashif assigned dcpleung and unassigned doki-nordic Dec 9, 2025
@tmleman tmleman force-pushed the topic/upstream/pr/intel/ace/ipc_rework branch from c995b6d to 521a106 on December 10, 2025 14:35
@zephyrbot zephyrbot added the area: Tests Issues related to a particular existing or missing test label Dec 12, 2025
@tmleman tmleman force-pushed the topic/upstream/pr/intel/ace/ipc_rework branch 3 times, most recently from e747aed to 1f242c2 on December 16, 2025 13:34
dcpleung previously approved these changes Dec 16, 2025
@tmleman tmleman force-pushed the topic/upstream/pr/intel/ace/ipc_rework branch from 1f242c2 to cd582ee on December 17, 2025 10:42
Add ipc_service_send_critical() as a dedicated API for sending critical
high-priority messages over an IPC endpoint.

Introduce an optional send_critical() callback in struct
ipc_service_backend so backends can implement a special fast path that
bypasses normal state and busy checks for critical notifications such as
crash reports or fatal errors.

The ipc_service_send_critical() wrapper mirrors ipc_service_send() on
the service side and delegates the actual behavior to the
backend-specific send_critical() implementation.

Signed-off-by: Tomasz Leman <tomasz.m.leman@intel.com>
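
A minimal sketch of how a backend could wire the new hook, assuming send_critical() mirrors the existing send() signature; the register-write helper and the normal-path send function are placeholder declarations, and the two-word length check reflects the intel_adsp backend's fixed payload format:

#include <errno.h>
#include <stdint.h>
#include <zephyr/device.h>
#include <zephyr/ipc/ipc_service_backend.h>

/* The backend's existing normal send path (placeholder declaration). */
int backend_send(const struct device *instance, void *token,
                 const void *data, size_t len);
/* Raw, hardware-specific register write used by the fast path (placeholder). */
int write_ipc_registers_now(const struct device *instance,
                            uint32_t header, uint32_t ext);

/* Critical fast path: program the IPC registers directly, deliberately
 * skipping the usual state and busy checks.
 */
static int backend_send_critical(const struct device *instance, void *token,
                                 const void *data, size_t len)
{
    ARG_UNUSED(token);

    if (data == NULL || len != 2 * sizeof(uint32_t)) {
        return -EBADMSG;
    }

    const uint32_t *msg = data;

    return write_ipc_registers_now(instance, msg[0], msg[1]);
}

static const struct ipc_service_backend backend_ops = {
    .send = backend_send,
    .send_critical = backend_send_critical, /* new optional hook */
    /* remaining hooks omitted */
};
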
Rework the Intel Audio DSP host IPC service backend to use the generic
ipc_service data/len conventions and the new critical send and buffer
management APIs.

Replace the backend-specific intel_adsp_ipc_msg and related enums with a
simple two-word payload passed through the standard ipc_service_send()
and ipc_service_send_critical() interfaces, and adapt the ISR and
receive callback to operate on this representation.

Use ipc_service_get_tx_buffer_size() as a readiness check for the host
channel and provide hold_rx_buffer() / release_rx_buffer()
implementations so ipc_service_release_rx_buffer() can be used to signal
when the channel becomes available again.

Wire the emergency send path through send_critical() to route urgent
messages via the backend's emergency fast path while keeping the normal
IPC flow unchanged.

Signed-off-by: Tomasz Leman <tomasz.m.leman@intel.com>
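
A sketch of the application-side flow this enables, assuming the two-word payload format and deferred processing; the endpoint object and the completion function are placeholders:

#include <stdint.h>
#include <zephyr/ipc/ipc_service.h>

static struct ipc_ept host_ept;   /* placeholder endpoint */
static void *pending_rx;

/* Receive callback: validate the fixed two-word payload, then keep the
 * RX buffer alive so it can be completed later from thread context.
 */
static void host_ipc_received(const void *data, size_t len, void *priv)
{
    ARG_UNUSED(priv);

    if (data == NULL || len != 2 * sizeof(uint32_t)) {
        return;
    }

    if (ipc_service_hold_rx_buffer(&host_ept, (void *)data) == 0) {
        pending_rx = (void *)data;
    }
}

/* Completion: releasing the RX buffer signals the backend that the
 * channel is available again.
 */
static void host_ipc_complete(void)
{
    if (pending_rx != NULL) {
        ipc_service_release_rx_buffer(&host_ept, pending_rx);
        pending_rx = NULL;
    }
}
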
This patch refreshes the documentation in the Intel Audio DSP host IPC
service backend to match the current data model and backend API
contracts.

The updated comments clarify that ipc_ept_cfg::priv must point to
intel_adsp_ipc_ept_priv_data so that the backend can carry state between
the ISR and the application callbacks. They also describe that both the
send and receive paths operate on a fixed two-word uint32_t IPC payload
(header and extended payload) rather than a generic byte buffer, and
that len must always be sizeof(uint32_t) * 2.

The documentation for intel_adsp_ipc_send() is rewritten to explain the
expected payload format, the length and NULL checks performed by the
backend, and how the function programs the IPC registers while relying
on ipc_send_message() for BUSY and TX acknowledgment handling. The
parameter semantics and error returns are aligned with the generic
ipc_service send() hook contract to make the backend behaviour easier to
understand and reuse.

Signed-off-by: Tomasz Leman <tomasz.m.leman@intel.com>
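
A sketch of an endpoint configuration matching that contract; the backend header that defines struct intel_adsp_ipc_ept_priv_data is not named here, and the message handler is a placeholder:

#include <stdint.h>
#include <zephyr/ipc/ipc_service.h>
/* Plus the intel_adsp host IPC backend header that defines
 * struct intel_adsp_ipc_ept_priv_data (exact path omitted here).
 */

static struct intel_adsp_ipc_ept_priv_data priv_data;

void handle_message(uint32_t header, uint32_t ext); /* placeholder */

static void host_ipc_received(const void *data, size_t len, void *priv)
{
    const uint32_t *msg = data;

    ARG_UNUSED(priv);

    /* The backend always delivers exactly header + extended word. */
    if (data == NULL || len != sizeof(uint32_t) * 2) {
        return;
    }

    handle_message(msg[0], msg[1]);
}

static const struct ipc_ept_cfg host_ept_cfg = {
    .name = "host ipc",
    .cb = {
        .received = host_ipc_received,
    },
    /* Must point at intel_adsp_ipc_ept_priv_data so the backend can
     * carry state between its ISR and these callbacks.
     */
    .priv = &priv_data,
};
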
Move the synchronous IPC wait logic out of the Intel Audio DSP host IPC
backend and into the common intel_adsp_ipc helper used by tests.

Update intel_adsp_ipc_send_message_sync() to both send the IPC message
through ipc_service_send() and wait on the backend semaphore, and remove
the now redundant ipc_send_message_sync() helper from the host IPC
backend.

Document that intel_adsp_ipc_send_message_sync() is a test-only helper,
not used by SOF firmware, and that it is a candidate for future removal
or for replacement by an explicit synchronous send primitive in the
generic IPC service API or in application code.

Signed-off-by: Tomasz Leman <tomasz.m.leman@intel.com>
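
A sketch of the resulting helper, assuming the backend exposes a semaphore that its ISR gives on TX completion; how devdata is obtained is not shown, and the endpoint and device-data declarations stand in for the driver's existing ones:

#include <stdint.h>
#include <zephyr/device.h>
#include <zephyr/ipc/ipc_service.h>
#include <zephyr/kernel.h>

/* Placeholders for the driver's existing endpoint and per-device data. */
static struct ipc_ept intel_adsp_ipc_ept;
static struct intel_adsp_ipc_data {
    struct k_sem sem;   /* given by the backend ISR on TX done */
} *devdata;

/* Test-only helper: send a two-word message and block until the backend
 * signals completion or the timeout expires.
 */
int intel_adsp_ipc_send_message_sync(const struct device *dev,
                                     uint32_t data, uint32_t ext_data,
                                     k_timeout_t timeout)
{
    uint32_t msg[2] = {data, ext_data};
    int ret;

    ARG_UNUSED(dev);

    ret = ipc_service_send(&intel_adsp_ipc_ept, msg, sizeof(msg));
    if (ret < 0) {
        return ret;
    }

    return k_sem_take(&devdata->sem, timeout);
}
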
This patch updates the intel_adsp smoke tests to use the new host IPC
backend data model and APIs.

The host IPC smoke test now treats messages as a fixed two-word uint32_t
payload instead of using struct intel_adsp_ipc_msg and the
INTEL_ADSP_IPC_CB_* message types. The receive callback validates the
payload length and pointer, checks the header/payload values against the
expected RETURN_MSG_* constants, and sets msg_flag on receipt. The
test-specific intel_adsp_ipc_send_message() and
intel_adsp_ipc_send_message_sync() helpers are rewritten to send a
two-word payload via ipc_service_send(), and the synchronous helper
waits on the backend semaphore only when the send succeeds. A dedicated
done callback is registered via intel_adsp_ipc_set_done_handler() to
drive done_flag, and intel_adsp_ipc_complete() now completes RX by
calling ipc_service_release_rx_buffer() and asserting on failure.
Completion polling is aligned with the backend by using
ipc_service_get_tx_buffer_size().

The clock calibration smoke test is adapted in the same way. It now
includes the ipc_service and intel_adsp host IPC backend headers when
using the new interface, interprets incoming data as a two-word uint32_t
payload, validates the length and pointer, and stores the returned
timestamp from the extended payload word into the endpoint private data.
The rest of the test flow, including endpoint registration, timestamp
requests and clock rate checks, remains unchanged but runs on top of the
new IPC backend representation.

Signed-off-by: Tomasz Leman <tomasz.m.leman@intel.com>
@tmleman tmleman force-pushed the topic/upstream/pr/intel/ace/ipc_rework branch from cd582ee to 3cfbf17 on December 17, 2025 11:56

Contributor

@kv2019i kv2019i left a comment


Test results look good, +1 from me.

@tmleman maybe remove the "RFC" tag and "WIP" from summary to gather more feedback.

@tmleman tmleman removed the In progress For PRs: is work in progress and should not be merged yet. For issues: Is being worked on label Dec 18, 2025
@tmleman tmleman changed the title [WIP] ipc: intel_adsp: rework host IPC service backend and add critical send support ipc: intel_adsp: rework host IPC service backend and add critical send support Dec 18, 2025
__ASSERT(len == sizeof(uint32_t) * 2, "Unexpected IPC payload length: %zu", len);
__ASSERT(data != NULL, "IPC payload pointer is NULL");

const uint32_t *msg = (const uint32_t *)data;
Contributor


looks like the type-cast shouldn't be needed

uint32_t msg[2] = {data, ext_data};

return 0;
return ipc_service_send(&intel_adsp_ipc_ept, &msg, sizeof(msg));
Contributor


In SOF, IPC sending will remain synchronous though?

}

break;
if ((len != (sizeof(uint32_t) * 2)) || data == NULL) {
Contributor


4 parentheses too many: if (len != sizeof(uint32_t) * 2 || data == NULL)

ret = -EBADMSG;

break;
if ((len != (sizeof(uint32_t) * 2)) || data == NULL) {
Contributor


ditto parentheses

}

return ret;
const uint32_t *msg = (const uint32_t *)data;
Contributor


supposedly superfluous type-cast, but needs checking

int ret = ipc_service_send(&intel_adsp_ipc_ept, &msg, sizeof(msg));

if (ret < 0) {
k_sem_take(&devdata->sem, timeout);
Contributor


oh... so the only reason it can fail is because it's delayed? I see a few more ways ipc_service_send() can fail. At least we should check for a specific error code, since there's no other way to indicate asynchronous execution. Or maybe we can call the backend .send() function directly, or add a new internal function like

int intel_adsp_ipc_send_message_sync(...)
{
    return intel_adsp_ipc_send_message_internal(..., timeout);
}
int intel_adsp_ipc_send_message(...)
{
    return intel_adsp_ipc_send_message_internal(..., 0);
}

Also, previously the check was ret == 0, which wasn't perfect but made a bit more sense: not an error, not a positive byte count that has already been sent, but 0, meaning asynchronous sending is in progress.
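
For illustration, a hedged sketch of the _internal split suggested above, keeping the stricter ret == 0 check; the names and the zero-timeout convention are assumptions, and intel_adsp_ipc_ept / devdata refer to the same placeholder declarations as in the sketch after the commit messages above:

static int intel_adsp_ipc_send_message_internal(const struct device *dev,
                                                uint32_t data, uint32_t ext_data,
                                                k_timeout_t timeout)
{
    uint32_t msg[2] = {data, ext_data};
    int ret;

    ARG_UNUSED(dev);

    ret = ipc_service_send(&intel_adsp_ipc_ept, msg, sizeof(msg));
    /* Only 0 means "accepted, completion pending"; anything else is an
     * error (or, in principle, an already-completed byte count).
     */
    if (ret != 0) {
        return ret;
    }

    if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
        return 0;   /* asynchronous variant: don't wait */
    }

    return k_sem_take(&devdata->sem, timeout);
}

int intel_adsp_ipc_send_message(const struct device *dev,
                                uint32_t data, uint32_t ext_data)
{
    return intel_adsp_ipc_send_message_internal(dev, data, ext_data, K_NO_WAIT);
}

int intel_adsp_ipc_send_message_sync(const struct device *dev,
                                     uint32_t data, uint32_t ext_data,
                                     k_timeout_t timeout)
{
    return intel_adsp_ipc_send_message_internal(dev, data, ext_data, timeout);
}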
