From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xen.org>, <boris.ostrovsky@oracle.com>,
	<roger.pau@citrix.com>, <stephen.s.brennan@oracle.com>
Subject: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in some cases")
Date: Mon, 14 Jun 2021 12:15:35 +0100	[thread overview]
Message-ID: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com> (raw)

Hi, Boris, Stephen, Roger,

We have stress tested recent changes on staging-4.13, which includes a
backport of the patch named in the subject. Since the backport is identical
to what is on the master branch, and all of its prerequisites are in place,
we have no reason to believe master is not affected in the same way.

Here is what we got by running heavy stress testing including multiple
repeated VM lifecycle operations with storage and network load:


Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
CPU:    17
RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
cr3: 00000013c1a32000   cr2: 0000000000000000
fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
  0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
Xen stack trace from rsp=ffff83303fff7cf8:
    ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
    000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
    ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
    ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
    ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
    ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
    0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
    ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
    ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
    ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
    ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
    ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
    ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
    ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
    ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
    ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
    ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
    000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
    0000000000000005 000000000003d91d 0000000000000000 0000000000000000
    00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
Xen call trace:
    [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
    [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
    [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
    [<ffff82d08027df19>] F context_switch+0xf9/0xee0
    [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
    [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
    [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
    [<ffff82d08024324a>] F do_softirq+0x13/0x15
    [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30

****************************************
Panic on CPU 17:
Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
****************************************

This looks like a race opened by this change: we did not see the problem
before, while running with all of the prerequisite patches but without it.

Could you please analyse this assertion failure?

Igor



Thread overview: 9+ messages
2021-06-14 11:15 Igor Druzhinin [this message]
2021-06-14 11:53 ` BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in some cases") Jan Beulich
2021-06-14 13:27   ` Roger Pau Monné
2021-06-14 16:01     ` Jan Beulich
2021-06-14 16:10       ` Andrew Cooper
2021-06-14 17:06         ` Boris Ostrovsky
2021-06-15  8:12       ` Roger Pau Monné
2021-06-15  9:17         ` Andrew Cooper
2021-06-15 13:17           ` Jan Beulich
