From: Kuo Hugo
Date: Mon, 13 Jul 2015 22:06:39 +0800
To: Brian Foster
Cc: Hugo Kuo, Eric Sandeen, darrell@swiftstack.com, xfs@oss.sgi.com
Subject: Re: Data can't be written to XFS RIP [] xfs_dir2_sf_get_parent_ino+0xa/0x20
List-Id: XFS Filesystem from SGI

Hi Brian,

Sorry for the wrong file in the previous message. I believe this is the right one:

https://cloud.swiftstack.com/v1/AUTH_hugo/public/xfs.ko.debug

/usr/lib/debug/lib/modules/2.6.32-504.23.4.el6.x86_64/kernel/fs/xfs/xfs.ko.debug
MD5: 27829c9c55f4f5b095d29a7de7c27254

Thanks // Hugo

2015-07-13 20:52 GMT+08:00 Brian Foster <bfoster@redhat.com>:
> On Fri, Jul 10, 2015 at 01:36:41PM +0800, Kuo Hugo wrote:
> > Hi Brian,
> >
> > Is this the file you need?
> >
> > https://cloud.swiftstack.com/v1/AUTH_hugo/public/xfs.ko
> >
> > $> modinfo xfs
> >
> > filename:    /lib/modules/2.6.32-504.23.4.el6.x86_64/kernel/fs/xfs/xfs.ko
> > license:     GPL
> > description: SGI XFS with ACLs, security attributes, large block/inode
> >              numbers, no debug enabled
> > author:      Silicon Graphics, Inc.
> > srcversion:  0C1B17926BDDA4F121479EE
> > depends:     exportfs
> > vermagic:    2.6.32-504.23.4.el6.x86_64 SMP mod_unload modversion
> >
>
> No, this isn't the debug version. We need the one from the debug package
> that was installed (/usr/lib/debug?).
>
> Brian
>
> > Thanks // Hugo
> >
> > 2015-07-10 2:32 GMT+08:00 Brian Foster <bfoster@redhat.com>:
> >
> > > On Fri, Jul 10, 2015 at 12:40:00AM +0800, Kuo Hugo wrote:
> > > > Hi Brian,
> > > >
> > > > There you go.
> > > >
> > > > https://cloud.swiftstack.com/v1/AUTH_hugo/public/vmlinux
> > > > https://cloud.swiftstack.com/v1/AUTH_hugo/public/System.map-2.6.32-504.23.4.el6.x86_64
> > > >
> > > > $ md5sum vmlinux
> > > > 82aaa694a174c0a29e78c05e73adf5d8  vmlinux
> > > >
> > > > Yes, I can read it with this vmlinux image. Put all the files
> > > > (vmcore, vmlinux, System.map) in a folder and run: crash vmlinux vmcore
> > > >
> > >
> > > Thanks, I can actually load that up now. Note that we'll probably need
> > > the modules and whatnot (xfs.ko) also to be able to look at any XFS
> > > bits. It might be easiest to just tar up and compress whatever directory
> > > structure has the debug-enabled vmlinux and all the kernel modules.
> > > Thanks.
> > >
> > > Brian
> > >
> > > > Hugo
> > > >
> > > > 2015-07-09 23:18 GMT+08:00 Brian Foster <bfoster@redhat.com>:
> > > >
> > > > > On Thu, Jul 09, 2015 at 09:20:00PM +0800, Kuo Hugo wrote:
> > > > > > Hi Brian,
> > > > > >
> > > > > > *Operating System Version:*
> > > > > > Linux-2.6.32-504.23.4.el6.x86_64-x86_64-with-centos-6.6-Final
> > > > > >
> > > > > > *NODE 1*
> > > > > >
> > > > > > https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore
> > > > > > https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg.txt
> > > > > >
> > > > > > *NODE 2*
> > > > > >
> > > > > > https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore_r2obj02
> > > > > > https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg_r2obj02.txt
> > > > > >
> > > > > > Any thoughts would be appreciated.
> > > > > >
> > > > >
> > > > > I'm not able to fire up crash with these core files and the kernel debug
> > > > > info from the following CentOS kernel debuginfo package:
> > > > >
> > > > > kernel-debuginfo-2.6.32-504.23.4.el6.centos.plus.x86_64.rpm
> > > > >
> > > > > It complains about a version mismatch between the vmlinux and core file.
> > > > > I'm no crash expert... are you sure the cores above correspond to this
> > > > > kernel? Does crash load up for you on said box if you run something like
> > > > > the following?
> > > > >
> > > > >         crash /usr/lib/debug/lib/modules/.../vmlinux vmcore
> > > > >
> > > > > Note that you might need to install the above kernel-debuginfo package
> > > > > to get the debug (vmlinux) file. If so, could you also upload that
> > > > > debuginfo rpm somewhere?
> > > > >
> > > > > Brian
> > > > >
> > > > > > Thanks // Hugo
> > > > > >
> > > > > > 2015-07-09 20:51 GMT+08:00 Brian Foster <bfoster@redhat.com>:
> > > > > >
> > > > > > > On Thu, Jul 09, 2015 at 06:57:55PM +0800, Kuo Hugo wrote:
> > > > > > > > Hi Folks,
> > > > > > > >
> > > > > > > > Running xfs_repair -n on the 32 disks turned up no errors.
> > > > > > > > We have now deployed CentOS 6.6 for testing. (The previous kernel
> > > > > > > > panic came from Ubuntu.)
> > > > > > > > The CentOS nodes encountered a kernel panic with the same daemon,
> > > > > > > > but the problem may differ slightly.
> > > > > > > >
> > > > > > > >    - It was broken on xfs_dir2_sf_get_parent_ino+0xa/0x20 in Ubuntu.
> > > > > > > >    - Here's the log in CentOS. It's broken on
> > > > > > > >      xfs_dir2_sf_getdents+0x2a0/0x3a0.
> > > > > > > >
> > > > > > >
> > > > > > > I'd venture to guess it's the same behavior here. The previous kernel
> > > > > > > had a callback for the parent inode number that was called via
> > > > > > > xfs_dir2_sf_getdents(). Taking a look at a 6.6 kernel, it has a static
> > > > > > > inline here instead.
> > > > > > >
> > > > > > > > <1>BUG: unable to handle kernel NULL pointer dereference at
> > > > > > > > 0000000000000001
> > > > > > > > <1>IP: [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> > > > > > > > <4>PGD 1072327067 PUD 1072328067 PMD 0
> > > > > > > > <4>Oops: 0000 [#1] SMP
> > > > > > > > <4>last sysfs file:
> > > > > > > > /sys/devices/pci0000:80/0000:80:03.2/0000:83:00.0/host10/port-10:1/expander-10:1/port-10:1:16/end_device-10:1:16/target10:0:25/10:0:25:0/block/sdz/queue/rotational
> > > > > > > > <4>CPU 17
> > > > > > > > <4>Modules linked in: xt_conntrack tun xfs exportfs iptable_filter
> > > > > > > > ipt_REDIRECT iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack
> > > > > > > > nf_defrag_ipv4 ip_tables ip_vs ipv6 libcrc32c iTCO_wdt
> > > > > > > > iTCO_vendor_support ses enclosure igb i2c_algo_bit sb_edac edac_core
> > > > > > > > i2c_i801 i2c_core sg shpchp lpc_ich mfd_core ixgbe dca ptp pps_core
> > > > > > > > mdio power_meter acpi_ipmi ipmi_si ipmi_msghandler ext4 jbd2 mbcache
> > > > > > > > sd_mod crc_t10dif mpt3sas scsi_transport_sas raid_class xhci_hcd ahci
> > > > > > > > wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded:
> > > > > > > > scsi_wait_scan]
> > > > > > > > <4>
> > > > > > > > <4>Pid: 4454, comm: swift-object-se Not tainted
> > > > > > > > 2.6.32-504.23.4.el6.x86_64 #1 Silicon Mechanics Storform
> > > > > > > > R518.v5P/X10DRi-T4+
> > > > > > > > <4>RIP: 0010:[<ffffffffa0362d60>]  [<ffffffffa0362d60>]
> > > > > > > > xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> > > > > > > > <4>RSP: 0018:ffff880871f6de18  EFLAGS: 00010202
> > > > > > > > <4>RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000000000
> > > > > > > > <4>RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00007faa74006203
> > > > > > > > <4>RBP: ffff880871f6de68 R08: 000000032eb04bc9 R09: 0000000000000004
> > > > > > > > <4>R10: 0000000000008030 R11: 0000000000000246 R12: 0000000000000000
> > > > > > > > <4>R13: 0000000000000002 R14: ffff88106eff7000 R15: ffff8808715b4580
> > > > > > > > <4>FS:  00007faa85425700(0000) GS:ffff880028360000(0000) knlGS:0000000000000000
> > > > > > > > <4>CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > > > > > <4>CR2: 0000000000000001 CR3: 0000001072325000 CR4: 00000000001407e0
> > > > > > > > <4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > <4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > <4>Process swift-object-se (pid: 4454, threadinfo ffff880871f6c000,
> > > > > > > > task ffff880860f18ab0)
> > > > > > > > <4>Stack:
> > > > > > > > <4> ffff880871f6de28 ffffffff811a4bb0 ffff880871f6df38 ffff880874749cc0
> > > > > > > > <4><d> 0000000100000103 ffff8802381f8c00 ffff880871f6df38 ffff8808715b4580
> > > > > > > > <4><d> 0000000000000082 ffff8802381f8d88 ffff880871f6dec8 ffffffffa035ab31
> > > > > > > > <4>Call Trace:
> > > > > > > > <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0
> > > > > > > > <4> [<ffffffffa035ab31>] xfs_readdir+0xe1/0x130 [xfs]
> > > > > > > > <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0
> > > > > > > > <4> [<ffffffffa038fe29>] xfs_file_readdir+0x39/0x50 [xfs]
> > > > > > > > <4> [<ffffffff811a4e30>] vfs_readdir+0xc0/0xe0
> > > > > > > > <4> [<ffffffff8119bd86>] ? final_putname+0x26/0x50
> > > > > > > > <4> [<ffffffff811a4fb9>] sys_getdents+0x89/0xf0
> > > > > > > > <4> [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b
> > > > > > > > <4>Code: 01 00 00 00 48 c7 c6 38 6b 3a a0 48 8b 7d c0 ff 55 b8 85 c0
> > > > > > > > 0f 85 af 00 00 00 49 8b 37 e9 ec fd ff ff 66 0f 1f 84 00 00 00 00 00
> > > > > > > > <41> 80 7c 24 01 00 0f 84 9c 00 00 00 45 0f b6 44 24 03 41 0f b6
> > > > > > > > <1>RIP  [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> > > > > > > > <4> RSP <ffff880871f6de18>
> > > > > > > > <4>CR2: 0000000000000001
> > > > > > > >
> > > > > > > ...
> > > > > > > >
> > > > > > > > I've got the vmcore dump from the operator. Does the vmcore help for
> > > > > > > > troubleshooting this kind of issue?
> > > > > > > >
> > > > > > >
> > > > > > > Hmm, well it couldn't hurt. Is the vmcore based on this 6.6 kernel? Can
> > > > > > > you provide the exact kernel version and post the vmcore somewhere?
> > > > > > >
> > > > > > > Brian
> > > > > > >
> > > > > > > > Thanks // Hugo
> > > > > > > >
> > > > > > > > 2015-06-18 22:59 GMT+08:00 Eric Sandeen <sandeen@sandeen.net>:
> > > > > > > >
> > > > > > > > > On 6/18/15 9:29 AM, Kuo Hugo wrote:
> > > > > > > > > >>- Have you tried an 'xfs_repair -n' of the affected filesystem? Note
> > > > > > > > > >>that -n will report problems only and prevent any modification by repair.
> > > > > > > > > >
> > > > > > > > > > *We might try xfs_repair if we can identify which disk causes the issue.*
> > > > > > > > >
> > > > > > > > > If you do, please save the output, and if it finds anything, please
> > > > > > > > > provide the output in this thread.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > -Eric
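For reference, the crash workflow discussed above amounts to only a few
commands. This is a sketch rather than a verified procedure: the package
name, repository setup, and paths are illustrative for the
2.6.32-504.23.4.el6.x86_64 kernel in this thread and may need adjusting on
the actual nodes.

    # Install the matching kernel debuginfo package to obtain the debug vmlinux.
    # debuginfo-install comes from yum-utils; a debuginfo repo must be reachable.
    debuginfo-install kernel-2.6.32-504.23.4.el6.x86_64

    # Open the dump: point crash at the debug vmlinux first, then at the vmcore.
    crash /usr/lib/debug/lib/modules/2.6.32-504.23.4.el6.x86_64/vmlinux vmcore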
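Brian's request to bundle the debug-enabled vmlinux together with the kernel
modules can be sketched as below. The archive name is arbitrary, and the
crash "mod -s" step is one way (assuming a stock crash build) to load the
XFS module's debug symbols once the dump is open.

    # Pack the whole debug tree (vmlinux plus module .debug objects) for upload.
    tar czf kernel-debug-2.6.32-504.23.4.el6.x86_64.tar.gz \
        /usr/lib/debug/lib/modules/2.6.32-504.23.4.el6.x86_64/

    # Inside crash, load the XFS module symbols so xfs_dir2_sf_getdents resolves:
    #   crash> mod -s xfs /usr/lib/debug/lib/modules/2.6.32-504.23.4.el6.x86_64/kernel/fs/xfs/xfs.ko.debug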
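Eric's read-only check can be run per device roughly as follows; /dev/sdb1
and the mount point are placeholders for each of the 32 object disks, and
the filesystem must be unmounted before xfs_repair will run.

    # -n reports problems only and makes no modifications; keep the output.
    umount /srv/node/sdb1
    xfs_repair -n /dev/sdb1 2>&1 | tee xfs_repair-sdb1.log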