From: Kuo Hugo
Date: Thu, 9 Jul 2015 21:20:00 +0800
Subject: Re: Data can't be wrote to XFS RIP [] xfs_dir2_sf_get_parent_ino+0xa/0x20
To: Brian Foster
Cc: Hugo Kuo, Eric Sandeen, darrell@swiftstack.com, xfs@oss.sgi.com

Hi Brian,

*Operating System Version:*
Linux-2.6.32-504.23.4.el6.x86_64-x86_64-with-centos-6.6-Final

*NODE 1*
https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore
https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg.txt

*NODE 2*
https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore_r2obj02
https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg_r2obj02.txt

Any thoughts would be appreciated.

Thanks // Hugo
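
In case it helps, the vmcores should open with the crash utility against the
matching debuginfo vmlinux. The commands below are illustrative only and assume
the kernel-debuginfo package for exactly this kernel is installed:

  crash /usr/lib/debug/lib/modules/2.6.32-504.23.4.el6.x86_64/vmlinux vmcore
  crash> bt                            # backtrace of the panicking task
  crash> mod -S                        # load debuginfo for loaded modules (xfs)
  crash> dis -l xfs_dir2_sf_getdents   # disassemble the faulting function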

2015-07-09 20:51 GMT+08:00 Brian Foster <bfoster@redhat.com>:
On Thu, Jul 09, 2015 at 06:57:55PM +0800, Kuo Hugo wrote:
> Hi Folks,
>
> The xfs_repair -n results from all 32 disks show no errors.
> We are currently trying CentOS 6.6 for testing (the previous kernel
> panic came from Ubuntu).
> The CentOS nodes hit a kernel panic with the same daemon, but the
> problem may differ a bit.
>
>    - It was broken on xfs_dir2_sf_get_parent_ino+0xa/0x20 in Ubuntu.
>    - Here's the log in CentOS. It's broken on
>      xfs_dir2_sf_getdents+0x2a0/0x3a0
>

I'd venture to guess it's the same behavior here. The previous kernel
had a callback for the parent inode number that was called via
xfs_dir2_sf_getdents(). Taking a look at a 6.6 kernel, it has a static
inline here instead.
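
Roughly, the shortform parent decode looks like this (a simplified sketch with
made-up names, not the exact Ubuntu or RHEL 6.6 source):

struct sf_hdr_sketch {
	unsigned char count;      /* entries with 4-byte inode numbers */
	unsigned char i8count;    /* entries with 8-byte inode numbers */
	unsigned char parent[8];  /* parent ("..") inode, big-endian on disk */
};

static unsigned long long
sf_get_parent_ino_sketch(const struct sf_hdr_sketch *hdr)
{
	int len = hdr->i8count ? 8 : 4;   /* faults here if hdr points at garbage */
	unsigned long long ino = 0;
	int i;

	for (i = 0; i < len; i++)
		ino = (ino << 8) | hdr->parent[i];
	return ino;
}

Either way readdir has to read the ".." inode number out of the inline
shortform header, so on the Ubuntu kernel the bad dereference is reported in
xfs_dir2_sf_get_parent_ino+0xa, while here the decode is inlined and the same
fault shows up as xfs_dir2_sf_getdents+0x2a0.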

> <1>BUG: unable to handle kernel NULL pointer dereference at 0000000000000001
> <1>IP: [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> <4>PGD 1072327067 PUD 1072328067 PMD 0
> <4>Oops: 0000 [#1] SMP
> <4>last sysfs file:
> /sys/devices/pci0000:80/0000:80:03.2/0000:83:00.0/host10/port-10:1/expander-10:1/port-10:1:16/end_device-10:1:16/target10:0:25/10:0:25:0/block/sdz/queue/rotational
> <4>CPU 17
> <4>Modules linked in: xt_conntrack tun xfs exportfs iptable_filter
> ipt_REDIRECT iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack
> nf_defrag_ipv4 ip_tables ip_vs ipv6 libcrc32c iTCO_wdt
> iTCO_vendor_support ses enclosure igb i2c_algo_bit sb_edac edac_core
> i2c_i801 i2c_core sg shpchp lpc_ich mfd_core ixgbe dca ptp pps_core
> mdio power_meter acpi_ipmi ipmi_si ipmi_msghandler ext4 jbd2 mbcache
> sd_mod crc_t10dif mpt3sas scsi_transport_sas raid_class xhci_hcd ahci
> wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded:
> scsi_wait_scan]
> <4>
> <4>Pid: 4454, comm: swift-object-se Not tainted
> 2.6.32-504.23.4.el6.x86_64 #1 Silicon Mechanics Storform
> R518.v5P/X10DRi-T4+
> <4>RIP: 0010:[<ffffffffa0362d60>]  [<ffffffffa0362d60>]
> xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> <4>RSP: 0018:ffff880871f6de18  EFLAGS: 00010202
> <4>RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000000000
> <4>RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00007faa74006203
> <4>RBP: ffff880871f6de68 R08: 000000032eb04bc9 R09: 0000000000000004
> <4>R10: 0000000000008030 R11: 0000000000000246 R12: 0000000000000000
> <4>R13: 0000000000000002 R14: ffff88106eff7000 R15: ffff8808715b4580
> <4>FS:  00007faa85425700(0000) GS:ffff880028360000(0000) knlGS:0000000000000000
> <4>CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> <4>CR2: 0000000000000001 CR3: 0000001072325000 CR4: 00000000001407e0
> <4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> <4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> <4>Process swift-object-se (pid: 4454, threadinfo ffff880871f6c000,
> task ffff880860f18ab0)
> <4>Stack:
> <4> ffff880871f6de28 ffffffff811a4bb0 ffff880871f6df38 ffff880874749cc0
> <4> 0000000100000103 ffff8802381f8c00 ffff880871f6df38 ffff8808715b4580
> <4> 0000000000000082 ffff8802381f8d88 ffff880871f6dec8 ffffffffa035ab31
> <4>Call Trace:
> <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0
> <4> [<ffffffffa035ab31>] xfs_readdir+0xe1/0x130 [xfs]
> <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0
> <4> [<ffffffffa038fe29>] xfs_file_readdir+0x39/0x50 [xfs]
> <4> [<ffffffff811a4e30>] vfs_readdir+0xc0/0xe0
> <4> [<ffffffff8119bd86>] ? final_putname+0x26/0x50
> <4> [<ffffffff811a4fb9>] sys_getdents+0x89/0xf0
> <4> [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b
> <4>Code: 01 00 00 00 48 c7 c6 38 6b 3a a0 48 8b 7d c0 ff 55 b8 85 c0
> 0f 85 af 00 00 00 49 8b 37 e9 ec fd ff ff 66 0f 1f 84 00 00 00 00 00
> <41> 80 7c 24 01 00 0f 84 9c 00 00 00 45 0f b6 44 24 03 41 0f b6
> <1>RIP  [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]
> <4> RSP <ffff880871f6de18>
> <4>CR2: 0000000000000001
>
...
>
> I've got the vmcore dump from the operator. Does a vmcore help for
> troubleshooting this kind of issue?
>

Hmm, well it couldn't hurt. Is the vmcore based on this 6.6 kernel? Can
you provide the exact kernel version and post the vmcore somewhere?

Brian

> Thanks // Hugo
>
>
> 2015-06-18 22:59 GMT+08:00 Eric Sandeen <sandeen@sandeen.net>:
>
> > On 6/18/15 9:29 AM, Kuo Hugo wrote:
> > >>- Have you tried an 'xfs_repair -n' of the affected filesystem? Note
> > that -n will report problems only and prevent any modification by repair.
> > >
> > > *We might try to run xfs_repair if we can address which disk causes the
> > issue. *
> >
> > If you do, please save the output, and if it finds anything, please
> > provide the output in this thread.
> >
> > Thanks,
> > -Eric
> >

> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs