Add support for Arm64 including all changes in PR #1302 (@swine, @surajjs95, @t-msn, and others) #1439
Conversation
for (i = 0; (void *)insn < insn_end && *insn == 0xd503201f; i++, insn++)
	;

if (i == 2)
If we believe i should be either 0 or 2, would it be better to report an error here when i is some other value?
I think the function_padding_size is decided by the NOP padding at the start of the function. The number of NOPs added to the function head is determined by the N in the -fpatchable-function-entry build option. Or maybe this function padding size could be calculated from that build option in the target kernel's build Makefile, so we could support more specific situations?
In the arm64 kernel build, there could be:
- no nops at the start
- 2 nops at the start
- 2 nops before the start and two after the start.
There could also be a BTI instruction at the entry of the function before the two nops, if they exist.
@puranjaymohan : From my understanding, I think @wardenjohn is right about the -fpatchable-function-entry argument... but I don't think it's trivial to figure out what that build-time option actually was. (Sure, kpatch-build could reverse engineer from the kernel config, but it may become out of date.) So for the time being, I think the dynamic padding check is easier to implement. (Incidentally, we may need this dynamic padding for ppc64le as there are kernel config options that change the number of nops from 1, 2, etc.)
Anyway, a trivial nitpick here: *insn == 0xd503201f is not endian safe. See other examples in this file that check the bytes individually.
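For reference, the arm64 NOP is encoded as 0xd503201f, which appears as the bytes 1f 20 03 d5 in a little-endian object file, so an endian-safe check compares the bytes individually. A minimal sketch of such a check (assuming <stdbool.h>; the helper name is hypothetical and not part of this PR):

/* Sketch: endian-safe test for an arm64 NOP (0xd503201f); in a
 * little-endian ELF the instruction is laid out as 1f 20 03 d5. */
static bool insn_is_arm64_nop(const unsigned char *insn)
{
	return insn[0] == 0x1f && insn[1] == 0x20 &&
	       insn[2] == 0x03 && insn[3] == 0xd5;
}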
Okay,
I will fix it in the next push.
Thanks
@puranjaymohan, a reminder about the pending changes here.
@ZzzMao I have already pushed the fix.
Now the code looks like:
for (i = 0; (void *)insn < insn_end; i++, insn += 4)
	if (insn[0] != 0x1f || insn[1] != 0x20 ||
	    insn[2] != 0x03 || insn[3] != 0xd5)
		break;

if (i == 2)
	size = 8;
else if (i != 0)
	log_error("function %s within section %s has invalid padding\n",
		  sym->name, sym->sec->name);
Hi Puranjay, I am testing this pull request using two types of kernels: one [1] that uses the sframe unwinder for livepatch, and one [2] that does not.
[1] https://lore.kernel.org/all/20250127213310.2496133-1-wnliu@google.com/
The test environment is as follows:
In both kernels, the "module-LOADED.test" fails during integration testing. The results of reproducing the issue on the command line are as follows:
It seems likely that the cause of this issue is the same as the one mentioned in the previous patch submission: https://lore.kernel.org/linux-arm-kernel/20211103210709.31790-1-surajjs@amazon.com/
Adding more information:
Applying the update patch for 6.13 available at https://lore.kernel.org/all/20250412010940.1686376-1-dylanbhatch@google.com/ now causes module-LOADED.test to PASS.
This PR has been open for 60 days with no activity and no assignee. It will be closed in 7 days unless a comment is added.
Hello, comment to remove stale flag.
Hi @t-msn and thanks for pinging this thread. Indeed, upstream looks ready to merge ("arm64: Implement HAVE_LIVEPATCH") 🥳 When that happens, I'll see about updating the unit test objects and, soon after, this MR to add arm64 support (it looks like it will probably need a rebase).
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
The "has_function_profiling" support field in the symbol struct is used to show that a function symbol is able to be patched. This is necessary to check that functions which need to be patched are able to be. On arm64 this means the presence of 2 NOP instructions at function entry which are patched by ftrace to call the ftrace handling code. These 2 NOPs are inserted by the compiler and the location of them is recorded in a section called "__patchable_function_entries". Check whether a symbol has a corresponding entry in the "__patchable_function_entries" section and if so mark it as "has_func_profiling". Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com> [Modify to use __patchable_function_entries support added upstream] Signed-off-by: Puranjay Mohan <pjy@amazon.com>
Add aarch64 support to kpatch_create_ftrace_callsite_sections(). Check for the 2 required NOP instructions on function entry, which may be preceded by a BTI C instruction depending on whether the function is a leaf function. This determines the offset of the patch site.
Signed-off-by: Pete Swain <swine@google.com>
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
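As a rough illustration of that offset logic (a sketch under the assumption of a little-endian object file; the helper name is hypothetical): a "bti c" is encoded as 0xd503245f, so when it precedes the two NOPs the patch site moves 4 bytes further into the function.

/* Sketch: offset of the two ftrace NOPs from the function entry; a
 * leading "bti c" (bytes 5f 24 03 d5) shifts the patch site by 4. */
static unsigned int arm64_callsite_offset(const unsigned char *entry)
{
	if (entry[0] == 0x5f && entry[1] == 0x24 &&
	    entry[2] == 0x03 && entry[3] == 0xd5)
		return 4;
	return 0;
}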
Add the final support required for aarch64 and enable building on that arch.
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
[kpatch_line_macro_change_only() fixes]
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
On arm64, when CONFIG_DEBUG_BUGVERBOSE is enabled, debugging information about bug sections is stored in the ".rodata.str" section of the object file. If this isn't included then linking fails, as labels in this section are referenced by relocations in the bug table. Include this section to enable building patches which contain this bug debug information.
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
It seems mapping symbols in aarch64 ELF have a section size of 0, so exclude them in the section-symbol replacing code, just like kpatch_correlate_symbols() does. This fixes the data-read-mostly unit test on aarch64.
Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
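For context, aarch64 mapping symbols are the local, untyped "$x"/"$d" markers emitted by the assembler. A sketch of how such a symbol can be recognized (field names assume the GElf-backed struct symbol from kpatch-elf.h; this is an illustration, not necessarily the check used in this PR):

/* Sketch: aarch64 mapping symbols ("$x", "$d", possibly with a suffix)
 * are local, untyped and have zero size.  Assumes kpatch-elf.h types. */
static bool is_aarch64_mapping_symbol(const struct symbol *sym)
{
	return sym->name && sym->name[0] == '$' &&
	       sym->type == STT_NOTYPE &&
	       sym->bind == STB_LOCAL &&
	       sym->sym.st_size == 0;
}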
Copy from kernel source tree.
Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
@joe-lawrence I have rebased this MR; I hope we can merge this soon after the kernel side is merged.
Awesome, thanks for the update! (We were just about to have @ryanbsull tackle that :) Our plan is to start playing with arm64 livepatching with the upstream kernel, and then generate some unit-test object files with v6.16. (I assume everything needed is already merged with Sunday's Linux 6.16 tag?)
Please let me know if I can help with any tasks. I did initial testing with the objects in https://github.com/mihails-strasuns/kpatch-unit-test-objs/tree/arm64/aarch64 but I think we should regenerate these again with the new kernel and gcc. Unfortunately, the code for arm64 livepatching is not in
These are merged and will show up in
For arm64 this option uses -fpatchable-function-entry=M,2, so 2 NOPs
are placed before the function entry point (in order to store a pointer
to ftrace_ops). When calculating function padding, check for the
presence of the two NOPs, and adjust the padding size by 8 if they are
found.
This was merged in the upstream kernel in v6.8 with:
baaf553d3bc3 ("arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS")
Taking this into account, the entry of a function can look like one of:
1. Without DYNAMIC_FTRACE_WITH_CALL_OPS and CONFIG_ARM64_BTI_KERNEL
-------------------------------------------------------------------
Disassembly of section .text.cmdline_proc_show:
0000000000000008 <cmdline_proc_show>:
8: d503201f nop
c: d503201f nop
2. Without DYNAMIC_FTRACE_WITH_CALL_OPS and with CONFIG_ARM64_BTI_KERNEL
------------------------------------------------------------------------
Disassembly of section .text.cmdline_proc_show:
0000000000000008 <cmdline_proc_show>:
0: d503245f bti c
4: d503201f nop
8: d503201f nop
3. With DYNAMIC_FTRACE_WITH_CALL_OPS and without CONFIG_ARM64_BTI_KERNEL
------------------------------------------------------------------------
Disassembly of section .text.cmdline_proc_show:
0000000000000000 <cmdline_proc_show-0x8>:
0: d503201f nop
4: d503201f nop
0000000000000008 <cmdline_proc_show>:
8: d503201f nop
c: d503201f nop
4. With DYNAMIC_FTRACE_WITH_CALL_OPS and with CONFIG_ARM64_BTI_KERNEL
---------------------------------------------------------------------
Disassembly of section .text.cmdline_proc_show:
0000000000000000 <cmdline_proc_show-0x8>:
0: d503201f nop
4: d503201f nop
0000000000000008 <cmdline_proc_show>:
8: d503245f bti c
c: d503201f nop
10: d503201f nop
make create-diff-object aware of DYNAMIC_FTRACE_WITH_CALL_OPS and its
quirks.
Signed-off-by: Dylan Hatch <dylanbhatch@google.com>
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
The sframe sections can't be normally diffed and need to be created separately. Skip them for now.
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
Thanks for this PR. I've been testing it on an arm64 CPU with Linus' tree, and so far live patching is working fine. I've tested with two independent changes in drivers code (netconsole) and they worked fine. I even tested on a kernel with lockdep, KASAN, etc., and I haven't seen anything weird.
kpatch_replace_sections_syms() substitutes the object/function symbols for the section symbol in the relocation sections. But relocations in .rela.__patchable_function_entries can point to an instruction outside the function. This is because, with CALL_OPS enabled, two NOPs are added before the function entry and __patchable_function_entries needs the address of the first NOP above the function. We generate __patchable_function_entries again in kpatch_create_ftrace_callsite_sections() anyway, so we can skip mangling these relocations.
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
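The skip itself can be as small as an early continue in the relocation-section loop; a sketch (the relasec variable name is illustrative, not necessarily what kpatch_replace_sections_syms() uses):

/* Sketch: leave __patchable_function_entries relocations untouched here;
 * they are rebuilt later in kpatch_create_ftrace_callsite_sections(). */
if (strstr(relasec->name, "__patchable_function_entries"))
	continue;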
kpatch-build/kpatch-elf.c
	kelf->arch = S390;
	break;
case EM_AARCH64:
	kelf->arch = ARM64;
This was updated to AARCH64 in a later commit. I guess this is a typo here?
Besides the "ARM64" typo, which got fixed in a later commit, this version looks good in my tests. |
@ryanbsull is going to be handling the merging (currently OOTO) for this MR. I'd also like to get in a fix for !1470 (currently working on it) before tagging a release. Would mid-Sept work and would you need a v0.9.11 tag?
There are already quite a few changes on top of v0.9.10, so a v0.9.11 release will help the build process. But I agree that it is important to fix #1470. Mid-September is a bit late for our schedule. If we need to wait 3-4 weeks for v0.9.11, we will probably release another version based on v0.9.10.
I have added a commit to update the reference for arm64 test objects. We should be good to merge this PR now. If more bugs are found, I will open new PRs to fix them ASAP.
Can you tack this change onto that last commit:
diff --git a/test/unit/Makefile b/test/unit/Makefile
index fde1717..e3ed7d7 100644
--- a/test/unit/Makefile
+++ b/test/unit/Makefile
@@ -1,4 +1,4 @@
-ARCHES ?= ppc64le x86_64
+ARCHES ?= aarch64 ppc64le x86_64
.PHONY: all clean submodule-check
AArch64 test objects are now available in the kpatch-unit-test-objs
repository with commit 4ad64a06b6d0 ("Merge pull request dynup#54 from
dynup/aarch64")
Update the submodule reference to the above commit so the objects are
available to the test suite. Add aarch64 to the unit Makefile so the
tests are run.
Signed-off-by: Puranjay Mohan <pjy@amazon.com>
@joe-lawrence I have pushed the last commit again with the Makefile change.
Merging this one finally! Thanks to everyone who contributed coding, rebasing, testing, etc. to the branches!
This is a rebase of #1302 with some more changes and some already merged patches removed.
It is supposed to work with the sframe-based reliable stack unwinder posted upstream: [PATCH 0/8] unwind, arm64: add sframe unwinder for kernel
NOTE:
Building and testing a livepatch on a 6.12 arm64 kernel for the following basic patch:
Load and Unload test
Unit tests pass with the objects at: https://github.com/mihails-strasuns/kpatch-unit-test-objs/tree/arm64
Integration tests