* [xen-unstable test] 97737: regressions - FAIL
From: osstest service owner @ 2016-07-22  3:27 UTC
  To: xen-devel, osstest-admin


flight 97737 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97737/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2  19 guest-start/debian.repeat fail REGR. vs. 97664
 test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 97664

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      6 xen-boot                  fail REGR. vs. 97664
 build-amd64-rumpuserxen       6 xen-build                    fail   like 97664
 build-i386-rumpuserxen        6 xen-build                    fail   like 97664
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail like 97664
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop              fail like 97664
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail like 97664
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail like 97664
 test-armhf-armhf-xl-rtds     15 guest-start/debian.repeat    fail   like 97664

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore            fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a43cc8fc0827a4110b884b0fd94bf98628f27ab7
baseline version:
 xen                  e763268781d341fef05d461f3057e6ced5e033f2

Last test of basis    97664  2016-07-19 15:37:51 Z    2 days
Failing since         97709  2016-07-20 11:27:05 Z    1 days    2 attempts
Testing same since    97737  2016-07-21 02:09:09 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dario.faggioli@citrix.com>
  David Scott <dave@recoil.org>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jonathan Daugherty <jtd@galois.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      fail    
 build-i386-rumpuserxen                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumpuserxen-amd64                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumpuserxen-i386                             blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-amd64-xl-qemut-winxpsp3                           pass    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 385 lines long.)



* Re: [xen-unstable test] 97737: regressions - FAIL
From: Wei Liu @ 2016-07-22 10:49 UTC
  To: osstest service owner; +Cc: George Dunlap, Dario Faggioli, xen-devel, Wei Liu

On Fri, Jul 22, 2016 at 03:27:30AM +0000, osstest service owner wrote:
> flight 97737 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/97737/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-credit2  19 guest-start/debian.repeat fail REGR. vs. 97664
>  test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664
>  test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 97664

From
http://logs.test-lab.xenproject.org/osstest/logs/97737/test-amd64-amd64-xl-credit2/serial-fiano0.log

Jul 21 07:38:22.383917 (XEN) Assertion 'rqd->avgload >= 0 && rqd->b_avgload >= 0' failed at sched_credit2.c:734
Jul 21 07:38:23.200007 (XEN) ----[ Xen-4.8-unstable  x86_64  debug=y  Not tainted ]----
Jul 21 07:38:23.207896 (XEN) CPU:    2
Jul 21 07:38:23.207943 (XEN) RIP:    e008:[<ffff82d080126987>] sched_credit2.c#__update_runq_load+0xde/0x11e
Jul 21 07:38:23.215908 (XEN) RFLAGS: 0000000000010082   CONTEXT: hypervisor
Jul 21 07:38:23.215970 (XEN) rax: 000000000000502c   rbx: 000000000000502c   rcx: ffffffffffffa1df
Jul 21 07:38:23.223906 (XEN) rdx: 0000000000000001   rsi: ffff83027d800460   rdi: 0000000038f6e47e
Jul 21 07:38:23.231882 (XEN) rbp: ffff830277eb7c10   rsp: ffff830277eb7be0   r8:  fffffffffffffd71
Jul 21 07:38:23.239900 (XEN) r9:  0000000000000012   r10: 0000000000000014   r11: 000000000000022d
Jul 21 07:38:23.247906 (XEN) r12: ffff830277d8f900   r13: 0000000000000001   r14: 000000e3db91fb86
Jul 21 07:38:23.255909 (XEN) r15: ffff83027d800460   cr0: 000000008005003b   cr4: 00000000001526e0
Jul 21 07:38:23.255974 (XEN) cr3: 000000007dcfd000   cr2: 00007f81409f1b20
Jul 21 07:38:23.263910 (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
Jul 21 07:38:23.272145 (XEN) Xen code around <ffff82d080126987> (sched_credit2.c#__update_runq_load+0xde/0x11e):
Jul 21 07:38:23.279948 (XEN)  00 00 00 48 85 c9 79 02 <0f> 0b 83 3d 44 2e 1c 00 00 74 2e 8b 36 40 88 75
Jul 21 07:38:23.287896 (XEN) Xen stack trace from rsp=ffff830277eb7be0:
Jul 21 07:38:23.287960 (XEN)    0000000000000297 ffff830277eb7c00 ffff82d080130969 000000000026785f
Jul 21 07:38:23.295904 (XEN)    ffff830277eb7c60 ffff82d0802e96c0 ffff830277eb7c50 ffff82d080126b19
Jul 21 07:38:23.303909 (XEN)    0000000000000297 ffff830277d8f900 000000e3db91fb86 0000000000000000
Jul 21 07:38:23.311948 (XEN)    ffff82d0802e96c0 ffff83027d800464 ffff830277eb7c80 ffff82d08012956e
Jul 21 07:38:23.319895 (XEN)    ffff83007dafc000 000000e3db91f9b6 ffff82d080344140 ffff83027d800464
Jul 21 07:38:23.319969 (XEN)    ffff830277eb7ce0 ffff82d08012de99 0000000000000297 0000000000000046
Jul 21 07:38:23.327929 (XEN)    ffff82d080130969 0000000000266c7b ffff830277eb7d10 ffff83007dafc000
Jul 21 07:38:23.335903 (XEN)    0000000000000000 ffff830277ebd000 ffff83027d8450b8 ffff82c00021001c
Jul 21 07:38:23.343911 (XEN)    ffff830277eb7cf0 ffff82d08012e393 ffff830277eb7d10 ffff82d08016b7d4
Jul 21 07:38:23.351904 (XEN)    0000000000000007 000000000000002e ffff830277eb7d20 ffff82d08016b844
Jul 21 07:38:23.359911 (XEN)    ffff830277eb7d90 ffff82d08010ab2f ffff82c0002100b8 070000000000002e
Jul 21 07:38:23.367896 (XEN)    ffff83027d845010 0000000000000086 ffff83007dafc000 0000000000000007
Jul 21 07:38:23.367966 (XEN)    ffff830277eb7d78 0000000000000003 ffff830277ebd000 ffff83007dafc19c
Jul 21 07:38:23.375919 (XEN)    0000000000000286 ffff83007dafc000 ffff830277eb7dd0 ffff82d08010847b
Jul 21 07:38:23.383904 (XEN)    00000000ffffffff ffff83007dc9d000 00000000ffffffff ffff8302630b3aa8
Jul 21 07:38:23.391904 (XEN)    ffff8302630b3000 ffff830277da4a28 ffff830277eb7e00 ffff82d080105c7a
Jul 21 07:38:23.399910 (XEN)    ffff830277eba040 0000000000000000 0000000000000000 ffff830277eb7fff
Jul 21 07:38:23.407908 (XEN)    ffff830277eb7e30 ffff82d080122a08 ffff82d08031aa80 ffff82d08031ab80
Jul 21 07:38:23.407977 (XEN)    ffff82d08031aa80 fffffffffffffffd ffff830277eb7e60 ffff82d080130116
Jul 21 07:38:23.415913 (XEN)    0000000000000002 ffff830277da4970 0000000000000004 00000000ffffffff
Jul 21 07:38:23.423908 (XEN) Xen call trace:
Jul 21 07:38:23.423964 (XEN)    [<ffff82d080126987>] sched_credit2.c#__update_runq_load+0xde/0x11e
Jul 21 07:38:23.431916 (XEN)    [<ffff82d080126b19>] sched_credit2.c#update_load+0x53/0x78
Jul 21 07:38:23.439900 (XEN)    [<ffff82d08012956e>] sched_credit2.c#csched2_vcpu_wake+0xf7/0x119
Jul 21 07:38:23.447913 (XEN)    [<ffff82d08012de99>] vcpu_wake+0x23f/0x3cb
Jul 21 07:38:23.447979 (XEN)    [<ffff82d08012e393>] vcpu_unblock+0x4b/0x4e
Jul 21 07:38:23.455908 (XEN)    [<ffff82d08016b7d4>] vcpu_kick+0x17/0x5b
Jul 21 07:38:23.463889 (XEN)    [<ffff82d08016b844>] vcpu_mark_events_pending+0x2c/0x2f
Jul 21 07:38:23.463962 (XEN)    [<ffff82d08010ab2f>] event_fifo.c#evtchn_fifo_set_pending+0x36a/0x3de
Jul 21 07:38:23.471914 (XEN)    [<ffff82d08010847b>] send_global_virq+0xe7/0x117
Jul 21 07:38:23.479916 (XEN)    [<ffff82d080105c7a>] domain.c#complete_domain_destroy+0x179/0x182
Jul 21 07:38:23.487875 (XEN)    [<ffff82d080122a08>] rcupdate.c#rcu_process_callbacks+0x141/0x1a2
Jul 21 07:38:23.487915 (XEN)    [<ffff82d080130116>] softirq.c#__do_softirq+0x7f/0x8a
Jul 21 07:38:23.495903 (XEN)    [<ffff82d080130156>] process_pending_softirqs+0x35/0x37
Jul 21 07:38:23.503894 (XEN)    [<ffff82d0801bcd17>] mwait-idle.c#mwait_idle+0xfc/0x2f9
Jul 21 07:38:23.511887 (XEN)    [<ffff82d0801656d4>] domain.c#idle_loop+0x4b/0x62
Jul 21 07:38:23.511953 (XEN) 
Jul 21 07:38:23.511993 (XEN) 
Jul 21 07:38:23.512030 (XEN) ****************************************
Jul 21 07:38:23.519944 (XEN) Panic on CPU 2:
Jul 21 07:38:23.519996 (XEN) Assertion 'rqd->avgload >= 0 && rqd->b_avgload >= 0' failed at sched_credit2.c:734
Jul 21 07:38:23.527915 (XEN) ****************************************
Jul 21 07:38:23.535867 (XEN) 
Jul 21 07:38:23.535899 (XEN) Reboot in five seconds...
Jul 21 07:38:23.535929 (XEN) Assertion 'rqd->avgload >= 0 && rqd->b_avgload >= 0' failed at sched_credit2.c:734
Jul 21 07:38:23.543957 (XEN) ----[ Xen-4.8-unstable  x86_64  debug=y  Not tainted ]----
Jul 21 07:38:23.551901 (XEN) CPU:    1
Jul 21 07:38:23.551951 (XEN) RIP:    e008:[<ffff82d080126987>] sched_credit2.c#__update_runq_load+0xde/0x11e
Jul 21 07:38:23.559919 (XEN) RFLAGS: 0000000000010082   CONTEXT: hypervisor (d0v1)
Jul 21 07:38:23.567881 (XEN) rax: 0000000000009ce6   rbx: 0000000000009ce6   rcx: ffffffffffffe1ab
Jul 21 07:38:23.567946 (XEN) rdx: 0000000000000001   rsi: ffff83027d800610   rdi: 0000000038fc04fb
Jul 21 07:38:23.575897 (XEN) rbp: ffff830277db7cc8   rsp: ffff830277db7c98   r8:  ffffffffffffefc6
Jul 21 07:38:23.583892 (XEN) r9:  0000000000000012   r10: 0000000000000014   r11: 00000000000053eb
Jul 21 07:38:23.591900 (XEN) r12: ffff830277d873f0   r13: 0000000000000001   r14: 000000e3f013efdf
Jul 21 07:38:23.599891 (XEN) r15: ffff83027d800610   cr0: 0000000080050033   cr4: 00000000001526e0
Jul 21 07:38:23.607882 (XEN) cr3: 000000026ef2d000   cr2: ffff880002971918
Jul 21 07:38:23.607929 (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
Jul 21 07:38:23.615802 (XEN) Xen code around <ffff82d080126987> (sched_credit2.c#__update_runq_load+0xde/0x11e):
Jul 21 07:38:23.623808 (XEN)  00 00 00 48 85 c9 79 02 <0f> 0b 83 3d 44 2e 1c 00 00 74 2e 8b 36 40 88 75
Jul 21 07:38:23.631794 (XEN) Xen stack trace from rsp=ffff830277db7c98:
Jul 21 07:38:23.631813 (XEN)    ffff82e004bb07a0 000000000025d83d 800000025d83d067 ffff830277ebd000
Jul 21 07:38:23.639807 (XEN)    ffff830277db7cc8 ffff82d0802e96c0 ffff830277db7d08 ffff82d080126b19
Jul 21 07:38:23.647800 (XEN)    ffff83007dafb000 ffff830277d873f0 000000e3f013efdf 0000000000000002
Jul 21 07:38:23.655802 (XEN)    ffff82d0802e96c0 ffff83027d800614 ffff830277db7d38 ffff82d08012956e
Jul 21 07:38:23.663795 (XEN)    ffff83007dafa000 000000e3f013ed94 ffff82d080344140 ffff83027d800614
Jul 21 07:38:23.671793 (XEN)    ffff830277db7d98 ffff82d08012de99 00007f389ab23000 0000000000000246
Jul 21 07:38:23.671814 (XEN)    ffff88001ef32080 ffff8800026f3dd8 ffff880002971918 ffff83007dafa000
Jul 21 07:38:23.679803 (XEN)    0000000000000000 ffff830277ebd000 ffff83027d8453f8 ffff830277ebc440
Jul 21 07:38:23.687799 (XEN)    ffff830277db7da8 ffff82d08012e393 ffff830277db7dc8 ffff82d08016b7d4
Jul 21 07:38:23.695800 (XEN)    0000000000000007 0000000000000011 ffff830277db7dd8 ffff82d08016b844
Jul 21 07:38:23.703796 (XEN)    ffff830277db7e48 ffff82d08010ab2f ffff82c000210044 07ff82d000000011
Jul 21 07:38:23.711845 (XEN)    ffff83027d845350 0000000000000286 ffff83007dafa000 0000000000000007
Jul 21 07:38:23.711882 (XEN)    ffff830277db7f18 ffff830277ebc440 ffff830277ebd000 00000000ffffffea
Jul 21 07:38:23.719869 (XEN)    ffff830277ebc440 0000000000000002 ffff830277db7e78 ffff82d080108239
Jul 21 07:38:23.727808 (XEN)    ffff83007dafb000 ffff8800026f3acc ffff88001fd12c40 0000000000000001
Jul 21 07:38:23.735794 (XEN)    ffff830277db7ef8 ffff82d080109681 ffff880002971918 0000000000000000
Jul 21 07:38:23.743811 (XEN)    0000000000000206 00007f3899b1c9a7 0000000000000100 00007f3800000011
Jul 21 07:38:23.751796 (XEN)    0000000000000033 0000000000000206 00007ffc355020e8 ffff83007dafb000
Jul 21 07:38:23.759792 (XEN)    ffff88001a456040 ffff88001fd12c40 0000000000000001 0000000000000002
Jul 21 07:38:23.759813 (XEN)    00007cfd882480c7 ffff82d08024303d ffffffff8100140a 0000000000000020
Jul 21 07:38:23.767799 (XEN) Xen call trace:
Jul 21 07:38:23.767815 (XEN)    [<ffff82d080126987>] sched_credit2.c#__update_runq_load+0xde/0x11e
Jul 21 07:38:23.775804 (XEN)    [<ffff82d080126b19>] sched_credit2.c#update_load+0x53/0x78
Jul 21 07:38:23.783799 (XEN)    [<ffff82d08012956e>] sched_credit2.c#csched2_vcpu_wake+0xf7/0x119
Jul 21 07:38:23.791794 (XEN)    [<ffff82d08012de99>] vcpu_wake+0x23f/0x3cb
Jul 21 07:38:23.791813 (XEN)    [<ffff82d08012e393>] vcpu_unblock+0x4b/0x4e
Jul 21 07:38:23.799832 (XEN)    [<ffff82d08016b7d4>] vcpu_kick+0x17/0x5b
Jul 21 07:38:23.807793 (XEN)    [<ffff82d08016b844>] vcpu_mark_events_pending+0x2c/0x2f
Jul 21 07:38:23.807834 (XEN)    [<ffff82d08010ab2f>] event_fifo.c#evtchn_fifo_set_pending+0x36a/0x3de
Jul 21 07:38:23.815798 (XEN)    [<ffff82d080108239>] evtchn_send+0x156/0x177
Jul 21 07:38:23.823796 (XEN)    [<ffff82d080109681>] do_event_channel_op+0xddd/0x14ae
Jul 21 07:38:23.823816 (XEN)    [<ffff82d08024303d>] lstar_enter+0xdd/0x137
Jul 21 07:38:23.831809 (XEN) 
Jul 21 07:38:23.831824 (XEN) 
Jul 21 07:38:23.831837 (XEN) ****************************************
Jul 21 07:38:23.839798 (XEN) Panic on CPU 1:
Jul 21 07:38:23.839815 (XEN) Assertion 'rqd->avgload >= 0 && rqd->b_avgload >= 0' failed at sched_credit2.c:734
Jul 21 07:38:23.847801 (XEN) ****************************************
Jul 21 07:38:23.847818 (XEN) 
Jul 21 07:38:23.855766 (XEN) Reboot in five seconds...
Jul 21 07:38:23.855783 (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
Jul 21 07:38:28.863893 Jul 21 07:38:29.114294 <Modem lines changed: -CTS -DSR>



* Re: [xen-unstable test] 97737: regressions - FAIL
  2016-07-22 10:49 ` Wei Liu
@ 2016-07-22 12:10   ` Dario Faggioli
  0 siblings, 0 replies; 10+ messages in thread
From: Dario Faggioli @ 2016-07-22 12:10 UTC (permalink / raw
  To: Wei Liu, osstest service owner; +Cc: George Dunlap, xen-devel



On Fri, 2016-07-22 at 11:49 +0100, Wei Liu wrote:
> On Fri, Jul 22, 2016 at 03:27:30AM +0000, osstest service owner
> wrote:
> > 
> > flight 97737 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/97737/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-xl-credit2  19 guest-start/debian.repeat fail REGR. vs. 97664
> >  test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664
> >  test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 97664
>
Thanks for bringing this to my attention. I've seen it happen a few
times in local testing as well, and was already investigating.

> From
> http://logs.test-lab.xenproject.org/osstest/logs/97737/test-amd64-amd64-xl-credit2/serial-fiano0.log
> 
> Jul 21 07:38:22.383917 (XEN) Assertion 'rqd->avgload >= 0 && rqd->b_avgload >= 0' failed at sched_credit2.c:734
>
Right. My investigation shows that it is the b_avgload >= 0 check that
is actually failing; looking at the code confirmed that, and I found
out why.
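
To illustrate the failure mode, here is a standalone sketch of one way a
fixed-point decaying load average can dip below zero (this is not Xen
code and not the actual patch; the real logic lives in sched_credit2.c,
and the shift value below is made up):

#include <stdio.h>
#include <stdint.h>

#define LOAD_SHIFT 18            /* hypothetical fixed-point precision */

int main(void)
{
    int64_t b_avgload = 1 << LOAD_SHIFT;   /* one vCPU's worth of load */

    /* Decay the average; integer arithmetic rounds toward zero, so a
     * few low-order bits are lost (the trailing "- 1" models that). */
    b_avgload = b_avgload - (b_avgload >> 6) - 1;

    /* Subtract the departing vCPU's full, undecayed contribution... */
    b_avgload -= 1 << LOAD_SHIFT;

    printf("before clamp: %lld\n", (long long)b_avgload); /* negative */

    /* ...and clamping here is the essence of the fix: b_avgload never
     * goes negative, so the ASSERT at sched_credit2.c:734 cannot trip. */
    if (b_avgload < 0)
        b_avgload = 0;

    printf("after clamp:  %lld\n", (long long)b_avgload);
    return 0;
}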

I've just sent a patch that I believe will fix this.

I can't find it in the archives yet, but it should be:
 [PATCH] xen: credit2: don't let b_avgload go negative.
 msg-id: <146918909364.19443.6394900696027710502.stgit@Solace.fritz.box>

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [xen-unstable test] 97737: regressions - FAIL
From: Wei Liu @ 2016-07-25  8:53 UTC
  To: osstest service owner
  Cc: Julien Grall, xen-devel, Wei Liu, Stefano Stabellini

On Fri, Jul 22, 2016 at 03:27:30AM +0000, osstest service owner wrote:
> flight 97737 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/97737/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664

From

http://logs.test-lab.xenproject.org/osstest/logs/97737/test-armhf-armhf-xl/serial-cubietruck-picasso.log


Jul 21 17:08:59.405183 [ 4479.814529] ------------[ cut here ]------------

Jul 21 17:09:16.961529 [ 4479.814600] kernel BUG at drivers/xen/grant-table.c:923!

Jul 21 17:09:16.966838 [ 4479.814628] Internal error: Oops - BUG: 0 [#1] SMP ARM

Jul 21 17:09:16.972090 [ 4479.814656] Modules linked in: xen_gntalloc bridge stp ipv6 llc brcmfmac brcmutil cfg80211

Jul 21 17:09:16.980340 [ 4479.814759] CPU: 1 PID: 24761 Comm: vif5.0-q0-guest Not tainted 3.16.7-ckt12+ #1

Jul 21 17:09:16.987841 [ 4479.814795] task: d8ef7600 ti: d85bc000 task.ti: d85bc000

Jul 21 17:09:16.993339 [ 4479.814833] PC is at gnttab_batch_copy+0xd0/0xe4

Jul 21 17:09:16.997963 [ 4479.814860] LR is at gnttab_batch_copy+0x1c/0xe4

Jul 21 17:09:17.002718 [ 4479.814888] pc : [<c04bb190>]    lr : [<c04bb0dc>]    psr: a0070013

Jul 21 17:09:17.008962 [ 4479.814888] sp : d85bdea0  ip : deadbeef  fp : c0c8e140

Jul 21 17:09:17.014341 [ 4479.814935] r10: 00000000  r9 : e1bec000  r8 : 00000000

Jul 21 17:09:17.019595 [ 4479.814960] r7 : 00000002  r6 : 00000002  r5 : d85bdf20  r4 : e1bf4d30

Jul 21 17:09:17.026095 [ 4479.814990] r3 : 00000001  r2 : deadbeef  r1 : deadbeef  r0 : fffffff2

Jul 21 17:09:17.032717 [ 4479.815021] Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel

Jul 21 17:09:17.040091 [ 4479.815055] Control: 10c5387d  Table: 78d8406a  DAC: 00000015

Jul 21 17:09:17.045964 [ 4479.815084] Process vif5.0-q0-guest (pid: 24761, stack limit = 0xd85bc248)

Jul 21 17:09:17.052840 [ 4479.815114] Stack: (0xd85bdea0 to 0xd85be000)

Jul 21 17:09:17.057218 [ 4479.815145] dea0: 00000001 d8b11388 d85bdf20 d85bdf04 00000002 c05eb054 00000388 00000000

Jul 21 17:09:17.065469 [ 4479.815183] dec0: d85bdf04 00000000 00000000 c0b7ea80 db0995c0 c05e86e4 e1bf4000 0000003c

Jul 21 17:09:17.073753 [ 4479.815221] dee0: 00000000 00000000 00000000 c0b8849c e1bf4cfc c0c8e140 e1bf4d30 e1bf4cc4

Jul 21 17:09:17.082001 [ 4479.815260] df00: db0c3e80 00000000 d85bdf08 d85bdf08 d8c5cb40 d8c5cb40 00000001 00000000

Jul 21 17:09:17.090217 [ 4479.815298] df20: 00000002 00000000 00000001 00000000 e1bf4d30 e1c1f530 000004c6 0000023c

Jul 21 17:09:17.098466 [ 4479.815337] df40: 00000000 00000000 d84aab80 e1bec000 c05ea990 00000000 00000000 00000000

Jul 21 17:09:17.106720 [ 4479.815375] df60: 00000000 c0266238 00000000 00000000 000000f8 e1bec000 00000000 00000000

Jul 21 17:09:17.114844 [ 4479.815414] df80: d85bdf80 d85bdf80 00000000 00000000 d85bdf90 d85bdf90 d85bdfac d84aab80

Jul 21 17:09:17.123093 [ 4479.815451] dfa0: c0266168 00000000 00000000 c020f138 00000000 00000000 00000000 00000000

Jul 21 17:09:17.131345 [ 4479.815489] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

Jul 21 17:09:17.139596 [ 4479.815527] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000

Jul 21 17:09:17.147841 [ 4479.815583] [<c04bb190>] (gnttab_batch_copy) from [<c05eb054>] (xenvif_kthread_guest_rx+0x6c4/0xb58)

Jul 21 17:09:17.156969 [ 4479.815636] [<c05eb054>] (xenvif_kthread_guest_rx) from [<c0266238>] (kthread+0xd0/0xe8)

Jul 21 17:09:17.165217 [ 4479.815681] [<c0266238>] (kthread) from [<c020f138>] (ret_from_fork+0x14/0x3c)

Jul 21 17:09:17.172467 [ 4479.815721] Code: e1c432b4 eaffffe0 e7f001f2 e8bd80f8 (e7f001f2) 

Jul 21 17:09:17.178595 [ 4479.815766] ---[ end trace 6ba7d172d52e24e2 ]---


* Re: [xen-unstable test] 97737: regressions - FAIL
From: Julien Grall @ 2016-07-25 11:05 UTC
  To: Wei Liu, osstest service owner
  Cc: Andre Przywara, xen-devel, Stefano Stabellini, Wei Chen,
	Steve Capper

Hi Wei,

On 25/07/16 09:53, Wei Liu wrote:
> On Fri, Jul 22, 2016 at 03:27:30AM +0000, osstest service owner wrote:
>> flight 97737 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/97737/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664
>
> From
>
> http://logs.test-lab.xenproject.org/osstest/logs/97737/test-armhf-armhf-xl/serial-cubietruck-picasso.log
>
>
> Jul 21 17:08:59.405183 [ 4479.814529] ------------[ cut here ]------------
>
> Jul 21 17:09:16.961529 [ 4479.814600] kernel BUG at drivers/xen/grant-table.c:923!
>
> Jul 21 17:09:16.966838 [ 4479.814628] Internal error: Oops - BUG: 0 [#1] SMP ARM
>
> Jul 21 17:09:16.972090 [ 4479.814656] Modules linked in: xen_gntalloc bridge stp ipv6 llc brcmfmac brcmutil cfg80211
>
> Jul 21 17:09:16.980340 [ 4479.814759] CPU: 1 PID: 24761 Comm: vif5.0-q0-guest Not tainted 3.16.7-ckt12+ #1
>
> Jul 21 17:09:16.987841 [ 4479.814795] task: d8ef7600 ti: d85bc000 task.ti: d85bc000
>
> Jul 21 17:09:16.993339 [ 4479.814833] PC is at gnttab_batch_copy+0xd0/0xe4
>
> Jul 21 17:09:16.997963 [ 4479.814860] LR is at gnttab_batch_copy+0x1c/0xe4
>
> Jul 21 17:09:17.002718 [ 4479.814888] pc : [<c04bb190>]    lr : [<c04bb0dc>]    psr: a0070013
>
> Jul 21 17:09:17.008962 [ 4479.814888] sp : d85bdea0  ip : deadbeef  fp : c0c8e140
>
> Jul 21 17:09:17.014341 [ 4479.814935] r10: 00000000  r9 : e1bec000  r8 : 00000000
>
> Jul 21 17:09:17.019595 [ 4479.814960] r7 : 00000002  r6 : 00000002  r5 : d85bdf20  r4 : e1bf4d30
>
> Jul 21 17:09:17.026095 [ 4479.814990] r3 : 00000001  r2 : deadbeef  r1 : deadbeef  r0 : fffffff2
>
> Jul 21 17:09:17.032717 [ 4479.815021] Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
>
> Jul 21 17:09:17.040091 [ 4479.815055] Control: 10c5387d  Table: 78d8406a  DAC: 00000015
>
> Jul 21 17:09:17.045964 [ 4479.815084] Process vif5.0-q0-guest (pid: 24761, stack limit = 0xd85bc248)
>
> Jul 21 17:09:17.052840 [ 4479.815114] Stack: (0xd85bdea0 to 0xd85be000)
>
> Jul 21 17:09:17.057218 [ 4479.815145] dea0: 00000001 d8b11388 d85bdf20 d85bdf04 00000002 c05eb054 00000388 00000000
>
> Jul 21 17:09:17.065469 [ 4479.815183] dec0: d85bdf04 00000000 00000000 c0b7ea80 db0995c0 c05e86e4 e1bf4000 0000003c
>
> Jul 21 17:09:17.073753 [ 4479.815221] dee0: 00000000 00000000 00000000 c0b8849c e1bf4cfc c0c8e140 e1bf4d30 e1bf4cc4
>
> Jul 21 17:09:17.082001 [ 4479.815260] df00: db0c3e80 00000000 d85bdf08 d85bdf08 d8c5cb40 d8c5cb40 00000001 00000000
>
> Jul 21 17:09:17.090217 [ 4479.815298] df20: 00000002 00000000 00000001 00000000 e1bf4d30 e1c1f530 000004c6 0000023c
>
> Jul 21 17:09:17.098466 [ 4479.815337] df40: 00000000 00000000 d84aab80 e1bec000 c05ea990 00000000 00000000 00000000
>
> Jul 21 17:09:17.106720 [ 4479.815375] df60: 00000000 c0266238 00000000 00000000 000000f8 e1bec000 00000000 00000000
>
> Jul 21 17:09:17.114844 [ 4479.815414] df80: d85bdf80 d85bdf80 00000000 00000000 d85bdf90 d85bdf90 d85bdfac d84aab80
>
> Jul 21 17:09:17.123093 [ 4479.815451] dfa0: c0266168 00000000 00000000 c020f138 00000000 00000000 00000000 00000000
>
> Jul 21 17:09:17.131345 [ 4479.815489] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>
> Jul 21 17:09:17.139596 [ 4479.815527] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
>
> Jul 21 17:09:17.147841 [ 4479.815583] [<c04bb190>] (gnttab_batch_copy) from [<c05eb054>] (xenvif_kthread_guest_rx+0x6c4/0xb58)

From my understanding, the hypercall can only return a non-zero value
if the copy_*_guest helpers fail.

Those helpers only fail when it is not possible to retrieve the page
associated with a virtual address. The value in r0 (-EFAULT) seems to
confirm that. So this looks very suspicious.
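
For reference, the guest code that hits the BUG is, if I remember
drivers/xen/grant-table.c in this kernel correctly (so details may
differ slightly):

void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
{
	struct gnttab_copy *op;

	/* Any non-zero return from the hypercall takes the guest down;
	 * this BUG() is what the oops reports as grant-table.c:923. */
	if (HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count))
		BUG();

	/* Only per-op GNTST_eagain statuses are retried; other per-op
	 * errors are left in op->status for the caller to inspect. */
	for (op = batch; op < batch + count; op++)
		if (op->status == GNTST_eagain)
			gnttab_retry_eagain_gop(GNTTABOP_copy, op,
						&op->status, __func__);
}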

Looking at the other parameters and the assembly code (see [1]):
	count = 2 (saved in r6)
	batch = 0xe1bf4d30 (saved in r4)

They look valid to me. Also, there was no major change around that code 
recently.

I don't have many ideas about what is going on. And unfortunately Xen on 
ARM does not print much information when the translation fails.

I have CCed a few more people to see if they have a clue.

>
> Jul 21 17:09:17.156969 [ 4479.815636] [<c05eb054>] (xenvif_kthread_guest_rx) from [<c0266238>] (kthread+0xd0/0xe8)
>
> Jul 21 17:09:17.165217 [ 4479.815681] [<c0266238>] (kthread) from [<c020f138>] (ret_from_fork+0x14/0x3c)
>
> Jul 21 17:09:17.172467 [ 4479.815721] Code: e1c432b4 eaffffe0 e7f001f2 e8bd80f8 (e7f001f2)
>
> Jul 21 17:09:17.178595 [ 4479.815766] ---[ end trace 6ba7d172d52e24e2 ]---
>

Regards,

[1] http://logs.test-lab.xenproject.org/osstest/logs/97737/build-armhf-pvops/info.html

c04bb0c0 <gnttab_batch_copy>:
c04bb0c0:       e92d40f8        push    {r3, r4, r5, r6, r7, lr}
c04bb0c4:       e1a02001        mov     r2, r1
c04bb0c8:       e1a04000        mov     r4, r0
c04bb0cc:       e1a06001        mov     r6, r1
c04bb0d0:       e1a01000        mov     r1, r0
c04bb0d4:       e3a00005        mov     r0, #5
c04bb0d8:       ebf54e39        bl      c020e9c4 <HYPERVISOR_grant_table_op>
c04bb0dc:       e3500000        cmp     r0, #0
c04bb0e0:       1a00002a        bne     c04bb190 <gnttab_batch_copy+0xd0>

-- 
Julien Grall


* Re: [xen-unstable test] 97737: regressions - FAIL
From: Wei Liu @ 2016-07-25 11:11 UTC
  To: Julien Grall
  Cc: xen-devel, Wei Liu, Steve Capper, Andre Przywara,
	osstest service owner, Stefano Stabellini, Wei Chen

Thanks for investigating.

There are only two ARM-related changes in the range being tested:

* a43cc8f - (origin/smoke) arm/traps: fix bug in dump_guest_s1_walk handling of level 2 page tables (5 days ago) <Jonathan Daugherty>
* 60e06f2 - arm/traps: fix bug in dump_guest_s1_walk L1 page table offset computation (5 days ago) <Jonathan Daugherty>

They don't look very suspicious.

If you need help navigating the osstest test report, please let me know.

Wei.


On Mon, Jul 25, 2016 at 12:05:08PM +0100, Julien Grall wrote:
> [...]


* Re: [xen-unstable test] 97737: regressions - FAIL
From: Julien Grall @ 2016-07-25 11:34 UTC
  To: Wei Liu
  Cc: xen-devel, Steve Capper, Andre Przywara, osstest service owner,
	Stefano Stabellini, Wei Chen



On 25/07/16 12:11, Wei Liu wrote:
> Thanks for investigating.
>
> There are only two ARM-related changes in the range being tested:
>
> * a43cc8f - (origin/smoke) arm/traps: fix bug in dump_guest_s1_walk handling of level 2 page tables (5 days ago) <Jonathan Daugherty>
> * 60e06f2 - arm/traps: fix bug in dump_guest_s1_walk L1 page table offset computation (5 days ago) <Jonathan Daugherty>
>
> They don't look very suspicious.

The modified function is not called in the hypervisor at all. It's only 
here for manual debugging.

Although this may change the offset of some functions (assuming we have 
a hidden bug).

> If you need help navigating the osstest test report, please let me know.

I have noticed that there are two kernel BUGs in the logs (with one host 
reboot in the middle). Can you detail what exactly the test does?

It looks to me that you are trying to power cycle a guest multiple times.

Cheers,


-- 
Julien Grall


* Re: [xen-unstable test] 97737: regressions - FAIL
From: Wei Liu @ 2016-07-25 11:53 UTC
  To: Julien Grall
  Cc: xen-devel, Wei Liu, Steve Capper, Andre Przywara,
	osstest service owner, Stefano Stabellini, Wei Chen

On Mon, Jul 25, 2016 at 12:34:53PM +0100, Julien Grall wrote:
> 
> 
> On 25/07/16 12:11, Wei Liu wrote:
> >Thanks for investigating.
> >
> >There are only two ARM-related changes in the range being tested:
> >
> >* a43cc8f - (origin/smoke) arm/traps: fix bug in dump_guest_s1_walk handling of level 2 page tables (5 days ago) <Jonathan Daugherty>
> >* 60e06f2 - arm/traps: fix bug in dump_guest_s1_walk L1 page table offset computation (5 days ago) <Jonathan Daugherty>
> >
> >They don't look very suspicious.
> 
> The modified function is not called in the hypervisor at all. It's only here
> for manual debugging.
> 
> Although this may change the offset of some functions (assuming we have a
> hidden bug).
> 
> >If you need help navigating the osstest test report, please let me know.
> 
> I have noticed that there are two kernel BUGs in the logs (with one host
> reboot in the middle). Can you detail what exactly the test does?

What I normally do is to look at the summary page of the failed test to
identify the failed step and the time.

In this case:

http://logs.test-lab.xenproject.org/osstest/logs/97737/test-armhf-armhf-xl/info.html

The time stamp says the failed step started at 2016-07-21 19:30:10 Z.
I then look at the failed step's log for the time stamp at which the
test failed, and search for output between these two time stamps in
the various logs.

I now realise the log I pasted in was not from the failed test. I wanted
to paste in the second kernel oops, which should be the culprit behind
the test failure. The two oopses were the same, though.

To identify which test step was running when the first oops happened,
the same technique applies.

It seems that the oops happened during ts-debian-install according to
time stamps.

Wei.

> 
> It looks to me that you are trying to power cycle multiple time a guest.
> 
> Cheers,
> 
> >Wei.
> >
> >
> >On Mon, Jul 25, 2016 at 12:05:08PM +0100, Julien Grall wrote:
> >>Hi Wei,
> >>
> >>On 25/07/16 09:53, Wei Liu wrote:
> >>>On Fri, Jul 22, 2016 at 03:27:30AM +0000, osstest service owner wrote:
> >>>>flight 97737 xen-unstable real [real]
> >>>>http://logs.test-lab.xenproject.org/osstest/logs/97737/
> >>>>
> >>>>Regressions :-(
> >>>>
> >>>>Tests which did not succeed and are blocking,
> >>>>including tests which could not be run:
> >>>>test-armhf-armhf-xl          15 guest-start/debian.repeat fail REGR. vs. 97664
> >>>
> >>>From
> >>>
> >>>\
> >>>
> >>>
> >>>Jul 21 17:08:59.405183 [ 4479.814529] ------------[ cut here ]------------
> >>>
> >>>Jul 21 17:09:16.961529 [ 4479.814600] kernel BUG at drivers/xen/grant-table.c:923!
> >>>
> >>>Jul 21 17:09:16.966838 [ 4479.814628] Internal error: Oops - BUG: 0 [#1] SMP ARM
> >>>
> >>>Jul 21 17:09:16.972090 [ 4479.814656] Modules linked in: xen_gntalloc bridge stp ipv6 llc brcmfmac brcmutil cfg80211
> >>>
> >>>Jul 21 17:09:16.980340 [ 4479.814759] CPU: 1 PID: 24761 Comm: vif5.0-q0-guest Not tainted 3.16.7-ckt12+ #1
> >>>
> >>>Jul 21 17:09:16.987841 [ 4479.814795] task: d8ef7600 ti: d85bc000 task.ti: d85bc000
> >>>
> >>>Jul 21 17:09:16.993339 [ 4479.814833] PC is at gnttab_batch_copy+0xd0/0xe4
> >>>
> >>>Jul 21 17:09:16.997963 [ 4479.814860] LR is at gnttab_batch_copy+0x1c/0xe4
> >>>
> >>>Jul 21 17:09:17.002718 [ 4479.814888] pc : [<c04bb190>]    lr : [<c04bb0dc>]    psr: a0070013
> >>>
> >>>Jul 21 17:09:17.008962 [ 4479.814888] sp : d85bdea0  ip : deadbeef  fp : c0c8e140
> >>>
> >>>Jul 21 17:09:17.014341 [ 4479.814935] r10: 00000000  r9 : e1bec000  r8 : 00000000
> >>>
> >>>Jul 21 17:09:17.019595 [ 4479.814960] r7 : 00000002  r6 : 00000002  r5 : d85bdf20  r4 : e1bf4d30
> >>>
> >>>Jul 21 17:09:17.026095 [ 4479.814990] r3 : 00000001  r2 : deadbeef  r1 : deadbeef  r0 : fffffff2
> >>>
> >>>Jul 21 17:09:17.032717 [ 4479.815021] Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
> >>>
> >>>Jul 21 17:09:17.040091 [ 4479.815055] Control: 10c5387d  Table: 78d8406a  DAC: 00000015
> >>>
> >>>Jul 21 17:09:17.045964 [ 4479.815084] Process vif5.0-q0-guest (pid: 24761, stack limit = 0xd85bc248)
> >>>
> >>>Jul 21 17:09:17.052840 [ 4479.815114] Stack: (0xd85bdea0 to 0xd85be000)
> >>>
> >>>Jul 21 17:09:17.057218 [ 4479.815145] dea0: 00000001 d8b11388 d85bdf20 d85bdf04 00000002 c05eb054 00000388 00000000
> >>>
> >>>Jul 21 17:09:17.065469 [ 4479.815183] dec0: d85bdf04 00000000 00000000 c0b7ea80 db0995c0 c05e86e4 e1bf4000 0000003c
> >>>
> >>>Jul 21 17:09:17.073753 [ 4479.815221] dee0: 00000000 00000000 00000000 c0b8849c e1bf4cfc c0c8e140 e1bf4d30 e1bf4cc4
> >>>
> >>>Jul 21 17:09:17.082001 [ 4479.815260] df00: db0c3e80 00000000 d85bdf08 d85bdf08 d8c5cb40 d8c5cb40 00000001 00000000
> >>>
> >>>Jul 21 17:09:17.090217 [ 4479.815298] df20: 00000002 00000000 00000001 00000000 e1bf4d30 e1c1f530 000004c6 0000023c
> >>>
> >>>Jul 21 17:09:17.098466 [ 4479.815337] df40: 00000000 00000000 d84aab80 e1bec000 c05ea990 00000000 00000000 00000000
> >>>
> >>>Jul 21 17:09:17.106720 [ 4479.815375] df60: 00000000 c0266238 00000000 00000000 000000f8 e1bec000 00000000 00000000
> >>>
> >>>Jul 21 17:09:17.114844 [ 4479.815414] df80: d85bdf80 d85bdf80 00000000 00000000 d85bdf90 d85bdf90 d85bdfac d84aab80
> >>>
> >>>Jul 21 17:09:17.123093 [ 4479.815451] dfa0: c0266168 00000000 00000000 c020f138 00000000 00000000 00000000 00000000
> >>>
> >>>Jul 21 17:09:17.131345 [ 4479.815489] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> >>>
> >>>Jul 21 17:09:17.139596 [ 4479.815527] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
> >>>
> >>>Jul 21 17:09:17.147841 [ 4479.815583] [<c04bb190>] (gnttab_batch_copy) from [<c05eb054>] (xenvif_kthread_guest_rx+0x6c4/0xb58)
> >>
> >>From my understanding the hypercall can only return a non-zero value if
> >>copy_*_guest helpers fails.
> >>
> >>Those helpers will only fail when it is not possible to retrieve the page
> >>associated to a virtual address. The value is in r0 (-EFAULT), seem to
> >>confirm that. So this looks very suspicious.
> >>
> >>Looking at the other parameters and the assembly code (see [1]):
> >>	count = 2 (saved in r6)
> >>	batch = 0xe1bf4d30 (saved in r4)
> >>
> >>They look valid to me. Also, there has been no major change around that
> >>code recently.
> >>
> >>I don't have many ideas about what is going on. And unfortunately Xen on
> >>ARM does not print much information when the translation fails.
> >>
> >>I have CCed a few more people to see if they have a clue.
> >>
> >>>Jul 21 17:09:17.156969 [ 4479.815636] [<c05eb054>] (xenvif_kthread_guest_rx) from [<c0266238>] (kthread+0xd0/0xe8)
> >>>Jul 21 17:09:17.165217 [ 4479.815681] [<c0266238>] (kthread) from [<c020f138>] (ret_from_fork+0x14/0x3c)
> >>>Jul 21 17:09:17.172467 [ 4479.815721] Code: e1c432b4 eaffffe0 e7f001f2 e8bd80f8 (e7f001f2)
> >>>Jul 21 17:09:17.178595 [ 4479.815766] ---[ end trace 6ba7d172d52e24e2 ]---
> >>
> >>Regards,
> >>
> >>[1] http://logs.test-lab.xenproject.org/osstest/logs/97737/build-armhf-pvops/info.html
> >>
> >>c04bb0c0 <gnttab_batch_copy>:
> >>c04bb0c0:       e92d40f8        push    {r3, r4, r5, r6, r7, lr}
> >>c04bb0c4:       e1a02001        mov     r2, r1
> >>c04bb0c8:       e1a04000        mov     r4, r0
> >>c04bb0cc:       e1a06001        mov     r6, r1
> >>c04bb0d0:       e1a01000        mov     r1, r0
> >>c04bb0d4:       e3a00005        mov     r0, #5
> >>c04bb0d8:       ebf54e39        bl      c020e9c4 <HYPERVISOR_grant_table_op>
> >>c04bb0dc:       e3500000        cmp     r0, #0
> >>c04bb0e0:       1a00002a        bne     c04bb190 <gnttab_batch_copy+0xd0>
> >>
> >>--
> >>Julien Grall
> >
> 
> -- 
> Julien Grall
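
For context, the BUG() being hit is the one in gnttab_batch_copy(). The
following is a sketch of that function as found in kernels of this era,
reconstructed from memory of upstream drivers/xen/grant-table.c rather
than checked against the exact 3.16.7-ckt12 tree, so the helper names
and the :923 line number should be treated as approximate:

void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
{
	struct gnttab_copy *op;

	/*
	 * The GNTTABOP_copy hypercall returns non-zero only if Xen
	 * cannot copy the batch array itself to/from guest memory;
	 * per-op failures are reported in each op's status field
	 * instead.  r0 == 0xfffffff2 (-EFAULT) in the register dump
	 * above matches the former case.
	 */
	if (HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count))
		BUG();		/* the BUG at grant-table.c:923 */

	/* Transient per-op failures (GNTST_eagain) are retried. */
	for (op = batch; op < batch + count; op++)
		if (op->status == GNTST_eagain)
			gnttab_retry_eagain_gop(GNTTABOP_copy, op,
						&op->status, __func__);
}

On 32-bit ARM the hypercall return value comes back in r0, which is why
-EFAULT (0xfffffff2 == -14) is visible directly in the register dump;
the (e7f001f2) in the Code: line is the undefined instruction that ARM
Linux uses to implement BUG().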



* Re: [xen-unstable test] 97737: regressions - FAIL
  2016-07-25 11:53         ` Wei Liu
@ 2016-07-26  9:52           ` Julien Grall
  2016-07-26 10:05             ` Wei Liu
  0 siblings, 1 reply; 10+ messages in thread
From: Julien Grall @ 2016-07-26  9:52 UTC (permalink / raw
  To: Wei Liu
  Cc: xen-devel, Steve Capper, Andre Przywara, osstest service owner,
	Stefano Stabellini, Wei Chen



On 25/07/16 12:53, Wei Liu wrote:
> On Mon, Jul 25, 2016 at 12:34:53PM +0100, Julien Grall wrote:
>>
>>
>> On 25/07/16 12:11, Wei Liu wrote:
>>> Thanks for investigating.
>>>
>>> There are only two arm related changes in the range being tested:
>>>
>>> * a43cc8f - (origin/smoke) arm/traps: fix bug in dump_guest_s1_walk handling of level 2 page tables (5 days ago) <Jonathan Daugherty>
>>> * 60e06f2 - arm/traps: fix bug in dump_guest_s1_walk L1 page table offset computation (5 days ago) <Jonathan Daugherty>
>>>
>>> They don't look very suspicious.
>>
>> The modified function is not called in the hypervisor at all. It's only here
>> for manual debugging.
>>
>> This may, however, change the offset of some functions (assuming we have
>> a hidden bug).
>>
>>> If you need help navigating osstest test report, please let me know.
>>
>> I have noticed that there are 2 kernel BUGs in the logs (with one host
>> reboot in the middle). Can you detail what exactly the test does?
>
> What I normally do is to look at the summary page of the failed test to
> identify the failed step and the time.
>
> In this case:
>
> http://logs.test-lab.xenproject.org/osstest/logs/97737/test-armhf-armhf-xl/info.html
>
> The time stamp says the failed step started at 2016-07-21 19:30:10 Z.
> I then look at the output of the failed step's log for the time stamp
> at which the test failed, and then look for output between these two
> time stamps in the various logs.
>
> I now realise the log I pasted in was not from the failed test. I wanted
> to paste in the second kernel oops, which should be the culprit that
> made the test fail. The two oopses were the same, though.
>
> To identify which test step was running when the first oops happened,
> the same technique applies.
>
> It seems that the oops happened during ts-debian-install, according to
> the time stamps.

I looked at the new xen-unstable report [1]; test-armhf-armhf-xl is
passing (though it ran on an arndale and not a cubietruck).

Is there any way to re-run the previous test on the cubietruck? Just to
confirm the bug is reproducible.

Regards,

[1] http://logs.test-lab.xenproject.org/osstest/logs/99605/

-- 
Julien Grall



* Re: [xen-unstable test] 97737: regressions - FAIL
  2016-07-26  9:52           ` Julien Grall
@ 2016-07-26 10:05             ` Wei Liu
  0 siblings, 0 replies; 10+ messages in thread
From: Wei Liu @ 2016-07-26 10:05 UTC (permalink / raw
  To: Julien Grall
  Cc: xen-devel, Wei Liu, Steve Capper, Andre Przywara,
	osstest service owner, Stefano Stabellini, Wei Chen

On Tue, Jul 26, 2016 at 10:52:13AM +0100, Julien Grall wrote:
> 
> 
> On 25/07/16 12:53, Wei Liu wrote:
> >On Mon, Jul 25, 2016 at 12:34:53PM +0100, Julien Grall wrote:
> >>
> >>
> >>On 25/07/16 12:11, Wei Liu wrote:
> >>>Thanks for investigating.
> >>>
> >>>There are only two arm related changes in the range being tested:
> >>>
> >>>* a43cc8f - (origin/smoke) arm/traps: fix bug in dump_guest_s1_walk handling of level 2 page tables (5 days ago) <Jonathan Daugherty>
> >>>* 60e06f2 - arm/traps: fix bug in dump_guest_s1_walk L1 page table offset computation (5 days ago) <Jonathan Daugherty>
> >>>
> >>>They don't look very suspicious.
> >>
> >>The modified function is not called in the hypervisor at all. It's only here
> >>for manual debugging.
> >>
> >>This may, however, change the offset of some functions (assuming we have a
> >>hidden bug).
> >>
> >>>If you need help navigating osstest test report, please let me know.
> >>
> >>I have noticed that there are 2 kernel BUGs in the logs (with one host
> >>reboot in the middle). Can you detail what exactly the test does?
> >
> >What I normally do is to look at the summary page of the failed test to
> >identify the failed step and the time.
> >
> >In this case:
> >
> >http://logs.test-lab.xenproject.org/osstest/logs/97737/test-armhf-armhf-xl/info.html
> >
> >The time stamp says the failed step started at 2016-07-21 19:30:10 Z.
> >I then look at the output of the failed step's log for the time stamp
> >at which the test failed, and then look for output between these two
> >time stamps in the various logs.
> >
> >I now realise the log I pasted in was not from the failed test. I wanted
> >to paste in the second kernel oops, which should be the culprit that
> >made the test fail. The two oopses were the same, though.
> >
> >To identify which test step was running when the first oops happened,
> >the same technique applies.
> >
> >It seems that the oops happened during ts-debian-install, according to
> >the time stamps.
> 
> I looked at the new xen-unstable report [1]; test-armhf-armhf-xl is passing
> (though it ran on an arndale and not a cubietruck).
> 
> Is there any way to re-run the previous test on the cubietruck? Just to
> confirm the bug is reproducible.
> 

This:

http://logs.test-lab.xenproject.org/osstest/results/history/test-armhf-armhf-xl/xen-unstable

is the history of that test case on the xen-unstable branch.

> Regards,
> 
> [1] http://logs.test-lab.xenproject.org/osstest/logs/99605/
> 
> -- 
> Julien Grall



Thread overview: 10+ messages
2016-07-22  3:27 [xen-unstable test] 97737: regressions - FAIL osstest service owner
2016-07-22 10:49 ` Wei Liu
2016-07-22 12:10   ` Dario Faggioli
2016-07-25  8:53 ` Wei Liu
2016-07-25 11:05   ` Julien Grall
2016-07-25 11:11     ` Wei Liu
2016-07-25 11:34       ` Julien Grall
2016-07-25 11:53         ` Wei Liu
2016-07-26  9:52           ` Julien Grall
2016-07-26 10:05             ` Wei Liu
