[pull] master from buildroot:master#921
Merged
pull[bot] merged 4 commits into mir-one:master from buildroot:master on Mar 19, 2026
Conversation
A "device memory" enabling project encompassing tools and libraries for CXL, NVDIMMs, DAX, memory tiering and other platform memory device topics.

ndctl uses __struct_group() [1], which was introduced in the kernel headers in upstream commit [2], first included in v5.16. Commit [2] was backported to v5.15.54 in [3] and to v5.10.156 in [4]. This commit therefore sets the minimal toolchain headers version requirement to 5.10.

[1] https://github.com/pmem/ndctl/blob/v83/cxl/fwctl/features.h#L108
[2] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=50d7bd38c3aafc4749e05e8d7fcb616979143602
[3] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=d57ab893cdf8046cbe4d49746f9418020f788b1f
[4] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=9fd7bdaffe0e89833f4b1c1d3abd43023e951ec1

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
[Julien:
 - add commit log info about __struct_group()
 - add __struct_group() comment in Config.in
 - relax toolchain headers requirements to 5.10
 - sort BR2_PACKAGE_ blocks in .mk alphabetically]
Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
Since the llama.cpp update in Buildroot commit [1], test_aichat can
fail for several reasons:
The loop checking for llama-server availability can fail if curl
succeeds but the returned JSON data is not formatted as expected.
This can happen if the server is ready but the model is not completely
loaded. In that case, the server returns:
{"error":{"message":"Loading model","type":"unavailable_error","code":503}}
This commit ignores Python KeyError exceptions during the server
test, to avoid failing when this message is received.
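The readiness check can be sketched like this (a minimal Python sketch; the exact endpoint and JSON keys the real Buildroot test inspects, here "status"/"ok", are assumptions):

```python
import json

def server_ready(payload):
    """Return True once llama-server answers with the expected JSON.

    While the model is still loading, the server answers with
    {"error": {"message": "Loading model", ...}}, which lacks the
    expected keys: treat the resulting KeyError as "not ready yet"
    rather than as a test failure.
    """
    try:
        data = json.loads(payload)
        return data["status"] == "ok"  # assumed key/value, for illustration
    except (KeyError, ValueError):  # ValueError covers JSONDecodeError
        return False

# The test loop would poll this until it returns True, e.g.:
#   while not server_ready(curl_output):
#       time.sleep(1)
```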
Also, this new llama-server version introduced prompt caching, which
uses too much memory. This commit completely disables the prompt
cache by adding "--cache-ram 0" to the llama-server options.
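As a config fragment, the invocation would look roughly like this (model path and port are placeholders, not values from the actual test):

```shell
# Disable the prompt cache entirely so server RAM usage does not
# grow across requests ("--cache-ram 0" is the option this commit
# adds; the other arguments are illustrative placeholders).
llama-server --model /path/to/model.gguf --port 8080 --cache-ram 0
```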
[1] https://gitlab.com/buildroot.org/buildroot/-/commit/05c36d5d875713521f99b7bad48be316dcde2510
Signed-off-by: Julien Olivain <ju.o@free.fr>
https://github.com/harfbuzz/harfbuzz/blob/13.2.1/NEWS

Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
See Commits and Changes for more details.
Created by
pull[bot] (v2.0.0-alpha.4)