From: JerryJun  <jerryjun123@163.com>
To: "Kowalsky, Clara" <clara.kowalsky@siemens.com>
Cc: xenomai <xenomai@lists.linux.dev>
Subject: Re: NX-protected page
Date: Tue, 16 Apr 2024 07:37:39 +0800 (CST)
Message-ID: <123d35fb.288.18ee41f9711.Coremail.jerryjun123@163.com>
In-Reply-To: <AS5PR10MB817389FE4D3AE120BC80E6D393062@AS5PR10MB8173.EURPRD10.PROD.OUTLOOK.COM>

Thank you so much for keeping an eye on this.
Yes, we do use xenomai.supported_cpus=0xf, but we bind the real-time thread to CPU 2.
You can reproduce the problem with the test case in the attachment. As can be seen from the backtrace, the problem is related to the rt_set_affinity API: after we removed the rt_set_affinity call, everything returned to normal, but we do not know the reason.
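For reference, the thread binding in our test follows roughly the pattern sketched below. The exact code is in the attachment; the sketch assumes the Alchemy rt_task_set_affinity() service (which is what we refer to as rt_set_affinity above), and the names are illustrative only:

#define _GNU_SOURCE
#include <sched.h>          /* cpu_set_t, CPU_ZERO, CPU_SET */
#include <alchemy/task.h>   /* RT_TASK, rt_task_set_affinity() */

static RT_TASK rt_thread;   /* the "rtThread" real-time task (name illustrative) */

/* Pin the real-time task to CPU 2 only, while the kernel is booted with
 * xenomai.supported_cpus=0xf (all four CPUs available to Xenomai). */
static int pin_rt_thread_to_cpu2(void)
{
        cpu_set_t cpus;

        CPU_ZERO(&cpus);
        CPU_SET(2, &cpus);
        return rt_task_set_affinity(&rt_thread, &cpus);
}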
We also ran this test on other kernels and got the results below from crash analysis of the kdump. We don't know if this will help us solve the problem.
	
      KERNEL: vmlinux-4.19.89-ipipe-9-xenomai-3.1-20201027
    DUMPFILE: V10-5T/202403231347/dump.202403231347  [PARTIAL DUMP]
        CPUS: 4
        DATE: Sat Mar 23 13:47:22 2024
      UPTIME: 05:29:52
LOAD AVERAGE: 0.86, 0.67, 0.68
       TASKS: 436
    NODENAME: debian
     RELEASE: 4.19.89-ipipe-9-xenomai-3.1-20201027
     VERSION: #1 SMP PREEMPT Tue Oct 27 19:51:52 CST 2020
     MACHINE: x86_64  (3600 Mhz)
      MEMORY: 31.7 GB
       PANIC: "BUG: unable to handle kernel paging request at 00000000000a22e8"
         PID: 26821
     COMMAND: "rtThread"
        TASK: ffff888791bc0040  [THREAD_INFO: ffff888791bc0040]
         CPU: 2
       STATE: TASK_RUNNING (PANIC)
crash_pageoffset> bt -s
PID: 26821  TASK: ffff888791bc0040  CPU: 2   COMMAND: "rtThread"
 #0 [ffff88885fe03d10] machine_kexec+337 at ffffffff81041051
 #1 [ffff88885fe03d58] __crash_kexec+104 at ffffffff810f2818
 #2 [ffff88885fe03e20] crash_kexec+56 at ffffffff810f39d8
 #3 [ffff88885fe03e38] oops_end+186 at ffffffff8101795a
 #4 [ffff88885fe03e58] no_context+439 at ffffffff8104b2a7
 #5 [ffff88885fe03eb0] __do_page_fault+162 at ffffffff8104b8b2
 #6 [ffff88885fe03f28] ___xnsched_run+507 at ffffffff811838cb
 #7 [ffff88885fe03f60] xnintr_core_clock_handler+540 at ffffffff8117cddc
 #8 [ffff88885fe03fa0] dispatch_irq_head+126 at ffffffff8111953e
 #9 [ffff88885fe03fc8] __ipipe_handle_irq+103 at ffffffff81037f07
#10 [ffff88885fe03ff0] apic_timer_interrupt+18 at ffffffff81801d22
--- <IRQ stack> ---
bt: cannot transition from IRQ stack to current process stack:
        IRQ stack pointer: ffff88885fe03d10
    process stack pointer: ffff88885fe03c10
       current stack base: ffffc90005a30000
	   
 crash> dis -l ___xnsched_run
linux-4.19.89-ipipe-9-xenomai-3.1/kernel/xenomai/sched.c: 1042                                                if (shadow && ipipe_root_p)
0xffffffff81183887 <___xnsched_run+439>:        test   %r15d,%r15d  
0xffffffff8118388a <___xnsched_run+442>:        je     0xffffffff811838b7 <___xnsched_run+487>  
linux-4.19.89-ipipe-9-xenomai-3.1/./arch/x86/include/asm/irqflags.h: 331                                      ipipe_get_current_domain
0xffffffff8118388c <___xnsched_run+444>:        pushfq
0xffffffff8118388d <___xnsched_run+445>:        pop    %rdx
linux-4.19.89-ipipe-9-xenomai-3.1/./arch/x86/include/asm/irqflags.h: 261
0xffffffff8118388e <___xnsched_run+446>:        cli
linux-4.19.89-ipipe-9-xenomai-3.1/./include/linux/ipipe_domain.h: 331   
0xffffffff8118388f <___xnsched_run+447>:        mov    $0xa2260,%rax                                                     
linux-4.19.89-ipipe-9-xenomai-3.1/./include/linux/ipipe_domain.h: 269
0xffffffff81183896 <___xnsched_run+454>:        mov    %gs:0x7ee8b93a(%rip),%rcx  # 0xf1d8                   return __ipipe_raw_cpu_read(ipipe_percpu.curr);
                      RCX: ffffffff828021c0       ffffffff828021c0:  0000000000042e80
0xffffffff8118389e <___xnsched_run+462>:        mov    (%rax,%rcx,1),%rax                                                                                    
0xffffffff811838a2 <___xnsched_run+466>:        mov    0x42e70(%rax),%rax                                    return __ipipe_get_current_context()->domain;
linux-4.19.89-ipipe-9-xenomai-3.1/./arch/x86/include/asm/irqflags.h: 336
0xffffffff811838a9 <___xnsched_run+473>:        push   %rdx
0xffffffff811838aa <___xnsched_run+474>:        popfq
linux-4.19.89-ipipe-9-xenomai-3.1/./include/linux/ipipe_domain.h: 334   
0xffffffff811838ab <___xnsched_run+475>:        cmp    $0xffffffff825f8940,%rax                             
0xffffffff811838b1 <___xnsched_run+481>:        je   0xffffffff81183b06 <___xnsched_run+1078>  
                                                goto shadow_epilogue;  
linux-4.19.89-ipipe-9-xenomai-3.1/include/xenomai/cobalt/kernel/sched.h: 372                
0xffffffff811838b7 <___xnsched_run+487>:        mov    $0xa4540,%rdi
0xffffffff811838be <___xnsched_run+494>:        add    %gs:0x7ee8b912(%rip),%rdi   # 0xf1d8
linux-4.19.89-ipipe-9-xenomai-3.1/kernel/xenomai/sched.c: 1052                                                            xnthread_switch_fpu(sched);
0xffffffff811838c6 <___xnsched_run+502>:        callq  0xffffffff81189eb0 <xnthread_switch_fpu>
linux-4.19.89-ipipe-9-xenomai-3.1/include/xenomai/cobalt/kernel/trace.h: 84
0xffffffff811838cb <___xnsched_run+507>:        mov    $0x1,%r15d
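
For readability, this is the C-level path that the disassembly corresponds to, reconstructed from the interleaved source annotations above (a paraphrase for discussion, not the verbatim 4.19-ipipe sources):

    /* kernel/xenomai/sched.c:1042 */
    if (shadow && ipipe_root_p)    /* ipipe_root_p compares __ipipe_current_domain with          */
            goto shadow_epilogue;  /* ipipe_root_domain; the current context is fetched via      */
                                   /* __ipipe_raw_cpu_read(ipipe_percpu.curr), a %gs-relative    */
                                   /* per-CPU read done with hard IRQs off (pushfq/cli ... popfq)*/

    /* kernel/xenomai/sched.c:1052 */
    xnthread_switch_fpu(sched);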

[78270.764791] BUG: unable to handle kernel paging request at 00000000000a22e8
[78270.771791] PGD 0 P4D 0
[78270.774375] Oops: 0000 [#1] PREEMPT SMP NOPTI
[78270.778769] CPU: 2 PID: 26821 Comm: RunCommandActor Kdump: loaded Tainted: P           O      4.19.89-ipipe-9-xenomai-3.1-20201027 #1
[78270.790893] Hardware name: Supermicro Super Server/C422-SF, BIOS E906.1 04/27/2021
[78270.798536] I-pipe domain: Linux
[78270.801787] RIP: a2260:0xffff88885fe1c560
[78270.805834] Code: 00 00 00 00 00 00 00 00 00 00 c0 8f ea 5f 88 88 ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <01> 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00
[78270.824763] RSP: 81037f07:00000000000a2260 EFLAGS: ffff88885fe1c560 ORIG_RAX: ffffffff8111953e
[78270.833421] RAX: 00000000000a2260 RBX: 0000000000000002 RCX: ffffffff828021c0
[78270.840596] RDX: ffff88885fe1c560 RSI: 0000000000000000 RDI: 0000000000000000
[78270.847790] RBP: ffff88885fea4540 R08: 000000000005f3e0 R09: 000000000000fe0c                          
[78270.854984] R10: ffffffff8117cddc R11: 0000000000000002 R12: ffff88885fea5d08                     
[78270.862185] R13: ffffe8ffffc079c8 R14: ffff88885fea4540 R15: ffffffff811838cb                      
[78270.869335] FS:  00007fe12237c700(0000) GS:ffff88885fe00000(0000) knlGS:0000000000000000       
[78270.877499] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[78270.883298] CR2: 00000000000a22e8 CR3: 0000000852a06006 CR4: 00000000001606e0
[78270.890465] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[78270.897658] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[78270.904843] Call Trace:
[78270.996185] CR2: 00000000000a22e8

From the backtrace above and the register values at the time of the crash, the page fault appears to be triggered at ___xnsched_run+507.
At that point RAX holds 0xa2260, loaded by <___xnsched_run+447>: mov $0xa2260,%rax. The GS base is ffff88885fe00000, i.e. the per-CPU base address of CPU 2:
crash> kmem -o
PER-CPU OFFSET VALUES:
CPU 0: ffff88885fc00000
CPU 1: ffff88885fd00000
CPU 2: ffff88885fe00000
CPU 3: ffff88885ff00000

The RDX register holds ffff88885fe1c560, which is the address of the ipipe_percpu variable on CPU 2:
crash> p ipipe_percpu
PER-CPU DATA TYPE:
struct ipipe_percpu_data ipipe_percpu;
PER-CPU ADDRESSES:
[0]: ffff88885fc1c560
[1]: ffff88885fd1c560
[2]: ffff88885fe1c560
[3]: ffff88885ff1c560
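
This matches plain per-CPU addressing, i.e. CPU 2's per-CPU base plus a fixed offset:

    0xffff88885fe00000 + 0x1c560 = 0xffff88885fe1c560   (ipipe_percpu on CPU 2)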

<___xnsched_run+454>:  mov %gs:0x7ee8b93a(%rip),%rcx # 0xf1d8 corresponds to __ipipe_raw_cpu_read(ipipe_percpu.curr); the curr member is reached through the per-CPU ipipe_percpu variable:
crash> ipipe_percpu_data ffff88885fe1c560 -o
struct ipipe_percpu_data {
[ffff88885fe1c560] struct ipipe_percpu_domain_data root;
[ffff88885fe5f3e0] struct ipipe_percpu_domain_data head;
[ffff88885fea2260] struct ipipe_percpu_domain_data *curr;
[ffff88885fea2268] struct pt_regs tick_regs;
[ffff88885fea2310] int hrtimer_irq;
[ffff88885fea2318] struct task_struct *task_hijacked;
[ffff88885fea2320] struct task_struct *rqlock_owner;
[ffff88885fea2328] struct ipipe_vm_notifier *vm_notifier;
[ffff88885fea2330] unsigned long nmi_state;
[ffff88885fea2338] struct mm_struct *active_mm;
}
SIZE: 548320
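
The same arithmetic holds for the members referenced by the faulting code path:

    0xffff88885fe00000 + 0xa2260 = 0xffff88885fea2260   (curr, matching the immediate loaded at <___xnsched_run+447>)
    0xffff88885fe00000 + 0x5f3e0 = 0xffff88885fe5f3e0   (head, matching R08)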

In the corrupted RIP a2260:0xffff88885fe1c560, 0xa2260 is the per-CPU offset of ipipe_percpu.curr and 0xffff88885fe1c560 is the address of ipipe_percpu on CPU 2.
CR2 00000000000a22e8 holds the linear address that faulted; adding the GS base gives ffff88885fea22e8, which is the ip member of the struct pt_regs shown below:
crash_pageoffset> struct pt_regs ffff88885fea2268 -o
struct pt_regs {
[ffff88885fea2268] unsigned long r15;
[ffff88885fea2270] unsigned long r14;
[ffff88885fea2278] unsigned long r13;
[ffff88885fea2280] unsigned long r12;
[ffff88885fea2288] unsigned long bp;
[ffff88885fea2290] unsigned long bx;
[ffff88885fea2298] unsigned long r11;
[ffff88885fea22a0] unsigned long r10;
[ffff88885fea22a8] unsigned long r9;
[ffff88885fea22b0] unsigned long r8;
[ffff88885fea22b8] unsigned long ax;
[ffff88885fea22c0] unsigned long cx;
[ffff88885fea22c8] unsigned long dx;
[ffff88885fea22d0] unsigned long si;
[ffff88885fea22d8] unsigned long di;
[ffff88885fea22e0] unsigned long orig_ax;
[ffff88885fea22e8] unsigned long ip;
[ffff88885fea22f0] unsigned long cs;
[ffff88885fea22f8] unsigned long flags;
[ffff88885fea2300] unsigned long sp;
[ffff88885fea2308] unsigned long ss;
}
SIZE: 168
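
Spelling out the arithmetic for the faulting access with the values above:

    0xffff88885fe00000 (GS, CPU 2 per-CPU base) + 0x00000000000a22e8 (CR2) = 0xffff88885fea22e8 = &ipipe_percpu.tick_regs.ip

i.e. the faulting linear address is exactly the per-CPU offset of tick_regs.ip, taken without the GS base.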

<___xnsched_run+475>:  cmp $0xffffffff825f8940,%rax, where $0xffffffff825f8940 appears to be the address of the root I-pipe domain (ipipe_root_domain) that the current domain is compared against.
At the time of the crash the code was executing the __ipipe_current_domain == ipipe_root_domain check, i.e. determining whether the current domain is the root domain, and the register contents suggest that it was. The crash occurs while reading the ip member of pt_regs inside the per-CPU ipipe_percpu_data. The rtThread real-time thread is bound to CPU 2, and it switches from Xenomai mode to Linux mode at run time. From this log, could the kernel crash be caused by such a mode switch?
Here are the meanings of some of the register values at the time of the crash:
R08 5f3e0: the offset of ipipe_percpu.head
RAX a2260: the offset of ipipe_percpu.curr
RBP ffff88885fea4540: struct xnsched nksched
R12 ffff88885fea5d08: xnsched.current_account
RDX ffff88885fe1c560: struct ipipe_percpu_data (ipipe_percpu on CPU 2)
RCX ffffffff828021c0: the memory at this address contains 0000000000042e80


At 2024-04-10 15:46:48, "Kowalsky, Clara" <clara.kowalsky@siemens.com> wrote:
>Hi,
>
>just so you know, we're looking at this issue.
>Which cmdline did you use? Was xenomai.supported_cpus set?
>In the meantime, if you have more information or if you have been able to create a simplified application where the bug still occurs, please let us know.
>
>Best
>Clara
>
>> -----Original Message-----
>> From: JerryJun <jerryjun123@163.com>
>> Sent: Montag, 26. Februar 2024 15:34
>> To: Kiszka, Jan (T CED) <jan.kiszka@siemens.com>
>> Cc: xenomai <xenomai@lists.linux.dev>
>> Subject: Re:Re: NX-protected page
>> 
>> hi, Jan, have you been able to reproduce it after trying to run the case for hours
>> in qemu? or is there any doubt?
>> 
>> 2024-02-02 16:37:59, "Jan Kiszka" <jan.kiszka@siemens.com> wrote:
>> >On 02.02.24 08:49, JerryJun wrote:
>> >>
>> >> how long did you run it?it will take a while for this crash but not more than
>> 12 hours at mostly.
>> >> ok,i will try to run it without get /proc…… thank you very much.
>> >>
>> >
>> >Not for hours, but one setup was running for about 30 min. at least.
>> >
>> >Jan
>> >
>> >>
>> >> At 2024-02-02 14:49:53, "Jan Kiszka" <jan.kiszka@siemens.com> wrote:
>> >>> On 01.02.24 08:09, Jan Kiszka wrote:
>> >>>> On 31.01.24 08:25, JerryJun wrote:
>> >>>>> yet, you just do make in the project to product the binary of oop-test,
>> run kill_test.sh , and finally wait kernel oop .
>> >>>>> we get the same results of kernel oop in the qemu-amd64, which use
>> the config from xenomai-images
>> ,"https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsou
>> rce.denx.de%2FXenomai%2Fxenomai-images%2F-
>> %2Fblob%2Fmaster%2Frecipes-
>> kernel%2Flinux%2Ffiles%2Famd64_defconfig%3Fref_type%3Dheads&data=0
>> 5%7C02%7Cclara.kowalsky%40siemens.com%7C494bca636f6c48566cc208d
>> c36d843eb%7C38ae3bcd95794fd4addab42e1495d55a%7C1%7C0%7C6384
>> 45549672638180%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMD
>> AiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&s
>> data=xOckYYYOkjKZ3yKchy4B7kkX2eth%2Bqe88Mrd7XjldFw%3D&reserved=
>> 0", as follow.
>> >>>>> [ 6191.671953] BUG: kernel NULL pointer dereference, address:
>> >>>>> 0000000000000000 [ 6191.671955] #PF: supervisor instruction fetch
>> >>>>> in kernel mode [ 6191.671956] #PF: error_code(0x0010) -
>> >>>>> not-present page [ 6191.671956] PGD 0 P4D 0 [ 6191.671958] Oops:
>> >>>>> 0010 [#1] PREEMPT SMP PTI IRQ_PIPELINE [ 6191.671959] CPU: 3 PID:
>> >>>>> 7047 Comm: oop-test Kdump: loaded Not tainted
>> >>>>> 5.10.199-devo-3.2.4-20240130 #1 [ 6191.671960] Hardware name:
>> QEMU
>> >>>>> Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 [
>> >>>>> 6191.671961] IRQ stage: Linux [ 6191.671961] RIP: 0010:0x0 [
>> >>>>> 6191.671962] Code: Unable to access opcode bytes at RIP
>> 0xffffffffffffffd6.
>> >>>>> [ 6191.671962] RSP: 0018:ffffc90000118f80 EFLAGS: 00010246 [
>> >>>>> 6191.671963] RAX: 0000000000000000 RBX: ffff888108d3d730 RCX:
>> >>>>> 0000000000000000 [ 6191.671964] RDX: ffff888108d3d738 RSI:
>> >>>>> ffffffff82516284 RDI: ffff888108d3d730 [ 6191.671965] RBP:
>> >>>>> 0000000000000000 R08: 0000000000000000 R09:
>> 0000000000000001 [
>> >>>>> 6191.671965] R10: 0000000000000000 R11: 0000000000000001
>> R12:
>> >>>>> 0000000000000000 [ 6191.671966] R13: 0000000000000018 R14:
>> >>>>> ffffffff811d37e0 R15: 0000000000000000 [ 6191.671967] FS:
>> >>>>> 00007fc6e83ea780(0000) GS:ffff88813bd80000(0000)
>> >>>>> knlGS:0000000000000000 [ 6191.671967] CS:  0010 DS: 0000 ES:
>> 0000 CR0: 0000000080050033 [ 6191.671968] CR2: ffffffffffffffd6 CR3:
>> 0000000106750005 CR4: 0000000000170ee0 [ 6191.671968] Call Trace:
>> >>>>> [ 6191.671968]  <IRQ>
>> >>>>> [ 6191.671969]  ? show_trace_log_lvl+0x1c2/0x2c9 [ 6191.671969]  ?
>> >>>>> irq_work_single+0x42/0x80 [ 6191.671969]  ?
>> >>>>> __die_body.cold+0x8/0xd [ 6191.671970]  ? no_context+0x13a/0x270
>> [
>> >>>>> 6191.671970]  ? exc_page_fault+0x6f/0x200 [ 6191.671970]  ?
>> >>>>> asm_exc_page_fault+0x1e/0x30 [ 6191.671970]  ?
>> >>>>> handle_oob_irq+0x90/0x90 [
>> 6191.671971]  irq_work_single+0x42/0x80
>> >>>>> [ 6191.671971]  irq_work_run_list+0x2d/0x40 [ 6191.671971]
>> >>>>> irq_work_run+0x26/0x40 [ 6191.671972]
>> >>>>> inband_work_interrupt+0xa/0x20 [ 6191.671972]
>> >>>>> handle_synthetic_irq+0xb1/0x210 [ 6191.671972]
>> >>>>> asm_call_irq_on_stack+0x12/0x20 [ 6191.671973]  </IRQ> [
>> >>>>> 6191.671973]  arch_do_IRQ_pipelined+0x15d/0x2a0 [ 6191.671973]
>> >>>>> sync_current_irq_stage+0x157/0x1f0
>> >>>>> [ 6191.671974]  __inband_irq_enable+0x4b/0x70 [ 6191.671974]
>> >>>>> _raw_spin_unlock_irqrestore+0x35/0x70
>> >>>>> [ 6191.671974]  __set_cpus_allowed_ptr+0xbf/0x230 [
>> 6191.671975]
>> >>>>> sched_setaffinity+0x383/0x6b0 [ 6191.671975]
>> >>>>> __x64_sys_sched_setaffinity+0x45/0x80
>> >>>>> [ 6191.671975]  do_syscall_64+0x39/0x50 [ 6191.671978]
>> >>>>> entry_SYSCALL_64_after_hwframe+0x62/0xc7
>> >>>>> [ 6191.671979] RIP: 0033:0x7fc6e84d6617 [ 6191.671979] Code: 1f
>> 40
>> >>>>> 00 48 8b 15 79 c8 0e 00 f7 d8 41 b9 ff ff ff ff 64 89 02 44 89 c8
>> >>>>> c3 66 2e 0f 1f 84 00 00 00 00 00 b8 cb 00 00 00 0f 05 <48> 3d 00
>> >>>>> f0 ff ff 77 29 41 89 c0 83 f8 ff 74 18 64 c7 04 25 38 00 [
>> >>>>> 6191.671980] RSP: 002b:00007ffd574dfde8 EFLAGS: 00000206
>> ORIG_RAX:
>> >>>>> 00000000000000cb [ 6191.671981] RAX: ffffffffffffffda RBX:
>> >>>>> 00007fc6e7b985c8 RCX: 00007fc6e84d6617 [ 6191.671982] RDX:
>> >>>>> 00007fc6e7b985f0 RSI: 0000000000000080 RDI:
>> 0000000000001b8d [
>> >>>>> 6191.671982] RBP: 00007ffd574dfe70 R08: 00007fc6e83ea780 R09:
>> 00000000ffffffff [ 6191.671983] R10: fffffffffffffdca R11:
>> 0000000000000206 R12: 000055a818eb61c0 [ 6191.671983] R13:
>> 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 [
>> 6191.671984] Modules linked in:
>> >>>>> [ 6191.671985] CR2: 0000000000000000
>> >>>>>
>> >>>>
>> >>>> Okay... that is "good" news - looking for some time slot to
>> >>>> reproduce and debug myself now.
>> >>>>
>> >>>
>> >>> I've tried to reproduce, using the .config.yaml below for
>> >>> xenomai-images, but it does not trigger here. Do I need to run this
>> >>> for hours? What exactly did you do to trigger it in the qemu image
>> >>> of xenomai-images?
>> >>>
>> >>> BTW, I assume the access of /proc/xenomai/sched/stat plays an
>> >>> important role in causing the crash, right? You don't get it without them?
>> >>>
>> >>> Jan
>> >>>
>> >>> #
>> >>> # Automatically generated by kas 4.2 #
>> >>> _source_dir: /repo
>> >>> build_system: isar
>> >>> header:
>> >>>  includes:
>> >>>  - kas.yml
>> >>>  - board-qemu-amd64.yml
>> >>>  - opt-linux-latest-5.10.yml
>> >>>  version: 16
>> >>> menu_configuration:
>> >>>  DEBUG: false
>> >>>  KERNEL_4_19: false
>> >>>  KERNEL_4_19_LATEST: false
>> >>>  KERNEL_5_10: false
>> >>>  KERNEL_5_10_LATEST: true
>> >>>  KERNEL_5_15: false
>> >>>  KERNEL_5_15_LATEST: false
>> >>>  KERNEL_5_4: false
>> >>>  KERNEL_5_4_LATEST: false
>> >>>  KERNEL_6_1: false
>> >>>  KERNEL_6_1_LATEST: false
>> >>>  TARGET_BBB: false
>> >>>  TARGET_HIKEY: false
>> >>>  TARGET_QEMU_AMD64: true
>> >>>  TARGET_QEMU_ARM: false
>> >>>  TARGET_QEMU_ARM64: false
>> >>>  TARGET_X86_64_EFI: false
>> >>>  XENOMAI3: true
>> >>>  XENOMAI4: false
>> >>>  XENOMAI_3_0_LATEST: false
>> >>>  XENOMAI_3_1_LATEST: false
>> >>>  XENOMAI_3_2: true
>> >>>  XENOMAI_3_2_LATEST: false
>> >>>  XENOMAI_LATEST: false
>> >>>
>> >>> --
>> >>> Siemens AG, Technology
>> >>> Linux Expert Center
>> >
>> >--
>> >Siemens AG, Technology
>> >Linux Expert Center

Thread overview: 12+ messages
2024-01-22  3:10 NX-protected page JerryJun
2024-01-25  9:50 ` Jan Kiszka
2024-01-31  7:25   ` JerryJun
2024-02-01  7:09     ` Jan Kiszka
2024-02-02  6:49       ` Jan Kiszka
2024-02-02  7:49         ` JerryJun
2024-02-02  8:37           ` Jan Kiszka
2024-02-02 10:03             ` JerryJun
2024-02-26 14:34             ` JerryJun
2024-04-10  7:46               ` Kowalsky, Clara
2024-04-15 23:37                 ` JerryJun [this message]
2024-05-20 15:01                   ` Florian Bezdeka
