sys/tasklet: rename irq_handler to tasklet #12459
jia200x wants to merge 1 commit into RIOT-OS:master from
Conversation
   /**
-   * @brief Default priority of the interrupt handler thread
+   * @brief Default priority of the tasklets thread
Can this be added to the config Doxygen group?

I advocate to not expose this now to a broader audience. While the API of offloading work to another thread seems to fulfill all needs, the configuration of the tasklet thread is very likely to be changed in the near future to allow multiple threads, if requested.
    *
    * Defines the prototype of the function that is registered together
-   * with an interrupt event and to be called when the interrupt is handled.
+   * with a taskletand to be called when the tasklet is handled.

Suggested change:
-   * with a taskletand to be called when the tasklet is handled.
+   * with a tasklet and to be called when the tasklet is handled.
-   * corresponding source has occurred and needs to be handled. Each interrupt
-   * event can only be pending once.
+   * Used modules have to define a structure of this type for each tasklet
+   * used by the modules. Structures of this type are used to put

I'm not sure "used modules" is right here. Also, the sentence reads a bit odd.

How about "Modules using tasklets have to define..."
        TASKLET_HANDLER_PRIO,
        THREAD_CREATE_WOUT_YIELD | THREAD_CREATE_STACKTEST,
        _tasklet_loop, NULL, "irq_handler");
kaspar030 left a comment
@jia200x @gschorcht please let's take a step back.
irq_handler is a wrapper around the event module.
it adds:
- thread creation function
- a check on every event post whether the thread is running
- a check whether a handler has already been registered ("pending flag")
It removes the ability to have multiple queues (there's only one highest-prio thread running a single queue), and its API description makes it seem like a good way to defer ISR work to thread context.
1.) could have been added to event
2.) Has to be removed. Doing that check on every event post (disabling IRQs, checking the flag, re-enabling IRQs, for an API that's supposed to be mostly used from within ISRs) is useless, as once started, the thread won't ever be stopped. Worse, if the thread has not been started, it will be created from ISR context, at a less than defined time. (It might happen on the first radio packet received...). Thus that has to be moved into an initialization function.
3.) the pending flag is IMO implicit in event, as event->next is always set when in queue (pending), and NULL when not. If not, this could have been added to event.
Now you're proposing to rename this wrapper around event to "tasklet", which to some seems to be a well-known name for "put function plus argument on a queue, have a loop collect and execute it later".
At that point, when would I use tasklets, and when would I use events?
IMO, they're the same. "irq_handler" (or "tasklets") has some ISR-related how-to-use documentation, and there's a shared thread for executing the handlers. But it has already been discussed that one thread might not be enough. So has adding shared event threads for the event module.
TL;DR I think we should consolidate to one "event / event loop" API.
hi @kaspar030 Note that tasklets are not a replacement for event queues; they are used for different things. The irq_handler implementation (renamed here to tasklets) is close to the Tasklets implementation of Linux, and the event API is closer to the Workqueues of Linux (check the differences here). Using event loops for irq_handler/tasklets is a design decision, same as using softIRQ for implementing tasklets in Linux.
I'm aware of this, and this was not the case of the original tasklets PR ( #12420 ). I'm with @kaspar030 on this argument, but this can be modified to the behavior of #12420
It doesn't remove the ability to have multiple event queues. The
True. Already solved in #12420, and can be adapted here as well.
Tasklets should be used for handling low-level stuff (PHY/MAC, driver ISR offloading, internal OS signals, etc). In order to preserve real-time constraints, they should aim to be deterministic. Events should be used for all the rest (interfacing to the user API, network stack components, libraries that require timeouts, and in most cases as a replacement for IPC messages).
The tasklets wrap the
If that means that the
Yes, because there's no "shared event thread" within the current event module. If there were, instead of Let's add shared threads to event (as @haukepetersen already has, unfortunately hidden in a branch).
How do they differ (other than being put in different queues)?
The single interrupt handler thread was initially motivated by the fact that there are a lot of drivers for sensors that are connected via I2C or SPI, and the mutex-based synchronization mechanism doesn't work in ISRs. The question at that time was who should create the handler thread. The different drivers have no knowledge about the handler thread. The application shouldn't need any knowledge about the internals of the drivers and whether they need the handler thread or not. That was the reason why the thread was created when it is used the first time. A possible solution could be the auto-initialization if
What you describe is exactly the same as a tasklet, but it's explicitly called "tasklet" instead of "shared_queue". The problem here is only about semantics, because the "shared queue" would also need to initialize the thread, put the queue somewhere, etc. So, this is only a naming convention to me.
Semantics. Tasklets are intended to be used for "Bottom Half Processing" or deferring the execution until the kernel finds a safe time to run the task. Tasklets is just an implementation of this "shared event thread" that shares the definition of tasklets of most libraries and OSs. |
How are they semantically different? Not what the name implies to you what they should be used for, but how a function executed as a tasklet is semantically different from a function executed as an event handler? IIUC, apart from naming, they are equivalent.
I understand that Linux and Zephyr call their function deferring mechanism "tasklet". Where else? |
I meant language semantics. Since tasklets/irq_handler are implemented on top of events, they are the same in terms of functionality.
OpenThread, Spring Batch, some Google JS libraries, etc.
Exactly, and they should use events instead of tasklets. As said before, tasklets are intended to be used by the kernel components. I don't see why an end user should offload an ISR or do something really low level if there's a driver that handles that. If someone needs low level operations (even on Linux), then would be working closer to the kernel. |
Is your point that because some deferred work is "low level", "Bottom Half" or "ISR offload", it should be done by an API named "tasklet" because that is a commonly used name for that, and if it is "application stuff", it should be done with a differently named API because "tasklets" is not supposed to be used for that? |
Hej, I'd like to jump into this discussion, too - as I seem to have missed the discussion when I have to say that I get a bad stomachache looking at the concept. My major concern is that it destroys all real-time properties for device drivers that use this thread! In the current form it is highly dependent on the selected modules how they will perform. Example: a non-time-critical sensor on a slow I2C bus can block a highly time-critical accelerometer/radio/whatever on a completely independent SPI bus -> bad, right!? Taking a step back: the general concept of creating a shared thread context that can be used by multiple modules is a very nice thing to do, especially considering the benefits in saved system resources (stack space, ...). So I am in for that. BUT: just having a single thread for various interrupt operations is IMHO not an option (see example above)! So my proposal would be something like the following:
So in a default configuration, I could imagine one high-prio handler for drivers, and maybe a low-prio handler for logging and similar. So in that case, the approach would not differ much from what is proposed here. BUT: with that approach it is possible to implement more fine grained setups if needed! |
Yes I know. The original version of the After a long discussion with @maribu and @kaspar030 these queues were removed and the Regarding sensors and I2C and SPI, we would need a separate handler thread for each bus. However, separate handler threads require a lot of resources and can't solve the problem for events which require access to the same bus. Therefore, we (@maribu and me) decided to start with the This shouldn't be a justification of the approach but merely an explanation of how it came to the
I have thought about such an approach very often. The question was how to tell a driver which handler threads exist and which handler thread to use. The problem described in the example above wouldn't be solved if all drivers use the same thread.
When we discussed with @jia200x in #12420 to reuse the |
Hi, the misconception is that there has to be a single thread to offload to. It is conceptionally impossible to use a single thread for stuff with hard real time requirements and low-priority slow stuff. |
Not really. My point is that the |
I think this can be extended to any high priority thread getting blocked due to a number of slow and unpredictable I2C bus transactions taking priority. This is the strong point of the current model, where interrupts only set a flag and the IRQ itself is handled at the priority of the handling thread (see below). With the current design having priority 0 as a must, the timing behaviour for other threads becomes rather unpredictable, being dependent on bus usage among other things. What I'm afraid of here is that a method is made available to schedule relatively slow processes (I2C bus exchanges with a potential timeout or clock stretching) on a high priority thread. This to me looks like an easy and not so transparent way to hamper other relatively high priority threads such as netif threads. Which could show itself as a non-deterministic RTT spike in a network packet.
To me it would already help if this tasklets idea can be modified to run as a low priority thread where tasks are run eventually. |
I think we all agree that it would be nice to have something that helps to handle events from different drivers that require the access to exclusive resources in thread context, without having to create a separate thread for each driver. The question is how to realize and how it should be configured. |
@bergzand you put it to the point! It seems to me we all agree that real-time properties are a key issue here. And we don't have to argue that lengthy operations in a high-prio thread will always dry out anything in lower prio tasks -> that is what priority-based scheduling, like done in RIOT, is all about... Now how to choose the actual priorities used for certain tasks is something that is very specific to each and every single application/firmware/project, as it depends on many things, and configuring these priorities so that application requirements are fulfilled is not an easy task. But key in RIOT is that users/developers are actually able to select and tune priorities for their used modules so they can achieve their goals. So when introducing shared event handlers, I think it is very important to leave that door open to developers, so they still have the power to fine-tune their system regarding runtime priorities. So what I am aiming for is a solution that can use a simplified (default) configuration, but also allows for fine-grained tuning if needed. So as a default configuration, we might simply introduce a single event handler for high-prio driver related operations (so basically
Its actually not hard to make the used event handler queue configurable for drivers. Simply add a config option to their params struct pointing to the queue of choice.
Exactly, hence (optionally) multiple threads/queues
ACK - so hence my problem with the current state of the code :-) |
Just for clarification, the priority of the |
Technically yes, but if the comment here is to be trusted there isn't that much to configure. |
Absolutely.
In the core I think it can be pretty straight forward by running a number of event handler threads during system initialization, each simply running

    typedef struct {
        char priority;
        const char *name;   // make this possibly optional
    } abceventrunner_params_t;

    static const abceventrunner_params_t abceventrunner_params[] = {
        { 1, "irq_handler_high" },
        { 4, "irq_handler_med" },
        { (THREAD_PRIORITY_IDLE - 1), "run_me_if_you_can" },
    };

Initialization is trivial, simply allocate I started to play with something like this a while ago: https://github.com/haukepetersen/RIOT/tree/add_eventhandlerthread, though never came to integrate multiple queues... Now for drivers we just need to add some syntactic sugar so we can directly tell them which queue to use.
Yes, but only collectively without any option for differentiation (slow dev A vs picky dev B...) |
as was shown: from a technical perspective they are :-)
Lets not worry about multi-core systems (for now) -> these have so many implications on all the core modules (msg, mutex, ...), that event threads are the least concern...
+1 for going the |
Once we have multiple cores, starting multiple handler threads and changing event to use a mutex instead of thread_flags for notification would solve that. Once we have actual multicore support.... |
Yes, I'm aware of that. I was only referring on why tasklets differ from event queues.
That's exactly the point why I was arguing for What's the technical reason behind extending the API of |
I meant, if it's decided to go with the extension of the |
The only extension that we need for now is the creation of dedicated handler threads. Otherwise, the event API is already a "tasklet" implementation that is already there. What would be the technical reason to create a new module, if an existing one provides all the functionality needed? |
events are tasklets. We can change the event implementation to semaphores, mutex, whatever at any time... |
This is the root of the problem. Tasklets and events are not equivalent.
The low priority tasks introduced in #12480 and #12474 are not considered tasklets! I'm aware tasklets can be implemented using |
On @haukepetersen comment above, yes. This PR and the current |
I think you're quite fixed on what tasklets are doing in Linux (which is quite a different environment).
On Linux, Tasklets are always run on the CPU on which they were scheduled.
So do events when using multiple handler threads. |
Ok, it was not the intention to make that a big discussion because of naming conventions or where to implement a module. I gave my reasons to implement it on top of the If I'm the only one aiming for a |
@gschorcht: Would you be fine if I open a PR targeting the release candidate that removes |
Of course. It seems that we have an agreement.
I'm missing a bit of background information here. Is there an example of a sensor/module that requires processing ASAP (and thus where a low priority queue doesn't work) after the interrupt but where it can't be done in the IRQ. In other words, what's the use case of the high priority handler? |
Each sensor that requires access to SPI or I2C after the sensor triggered an interrupt, for example when exhausting a defined threshold. Another example are I2C GPIO extenders that indicate the change of an input by an interrupt. The access to SPI and I2C is only possible in thread context. |
Maybe I'm missing the point here, but why does the average sensor require a high priority thread to handle a threshold exceeded notification?
This is IMHO the only example I've seen so far having a hard requirement on the concept discussed here. Just to have the option out in the open here: it is perfectly valid to decide not to support interrupt for these GPIO expander devices and only support GPIO interrupts on the MCU built-in GPIO. |
On a more constructive note, is it okay with people to, as a first step, create the concept discussed here as a low priority thread? We can always extend to multiple handler threads and/or multiple priorities later if deemed necessary. Edit: added numbers for low priority.
@kaspar030: Are you willing to complete your proof of concept PR? |
It depends on the sensor. If it is just a temperature sensor, exceeding a threshold might not be important. But what about an accelerometer in security-critical applications? The system has to react in milliseconds.
I wouldn't call an accelerometer in a security critical application the average sensor. Doesn't mean that your example is invalid, but I don't think it is the common case. |
I think we have concluded that we let the user configure the priority levels, as arguing about priorities without the context of the intended application makes no sense, right? But I agree with @bergzand that in the absence of any user-provided priority assignment, low priority is a reasonable default (maybe just higher than the main thread).
This is exactly what I already proposed in #12474 (comment) to avoid that drivers use the high priority thread to ensure to be handled before the main thread. |
@bergzand By the way, the need for a single handler thread came up when I wrote the driver for PFC857x in PR #10430. Therefore, the handler priority was originally that high. |
I think this went a little bit out of the context of my original motivation, although this approach could be useful for other things too. My original motivation was to have an event queue with a "high enough" priority, so the OS can handle transceiver ISR and MAC layer logic. So, a low priority thread wouldn't work for my use case. If low level priorities are useful for notifications, I think #12474 or #12480 are good candidates for both use cases |
That's why PR #12474 defines a high priority and a low priority thread. That should cover both cases. IMHO, the question is how low should the low priority be, less than the main thread or less than the idle thread. |
Sure! |
How about this: We use something like I think it would also be a nice implementation detail, if the number of event handler threads could be chosen regardless of the actually used priorities. E.g. if only one event handler thread is enabled, that thread should just handle all events regardless of their priorities. The more threads are actually created, the more priority levels start to actually show different behavior. |
That would be nice. |
I guess we can close this one since #12474 got merged. |
Contribution description
This PR is a replacement of #12420. It renames the irq_handler module (#10555) to tasklet, as described in #12420 (comment). The reason is that the concept of "tasklets" is common in OSs and scheduling systems, so this way it should be easier to expose this feature in the documentation.
Testing procedure
Use tests/sys_tasklet as follows.

Issues/PRs references
#10555
Closes #12420