cmogstored dev/user discussion/issues/patches/etc
From: Arkadi Colson <arkadi@smartbit.be>
To: Eric Wong <e@80x24.org>
Cc: "cmogstored-public@bogomips.org" <cmogstored-public@bogomips.org>
Subject: Re: Heavy load
Date: Tue, 17 Dec 2019 08:57:10 +0000	[thread overview]
Message-ID: <2fc09aa7-dc0a-d37e-4015-2ba834aa4525@smartbit.be> (raw)
In-Reply-To: <20191217084330.GB32606@dcvr>



On 17/12/19 09:43, Eric Wong wrote:
> Arkadi Colson<arkadi@smartbit.be>  wrote:
>> Hi
>>
>> Yes we had this already on older version but less frequent. Below you
> Oh, I would've welcomed any bug reports much earlier :)
>
> Which version(s)?  I had more time to work on this in the past.
> Were any versions always good?

I know, I should have reported this earlier ;-)

The latest version, 1.7.1.

Hard to say. I remember more crashes in the past, but not as frequent.
Maybe it's because our cluster is much more stressed now.

>> can find the backtrace:
>>
>> [Current thread is 1 (Thread 0x7f5e98655700 (LWP 8317))]
>> (gdb) bt
>> #0  __find_specmb (format=0x7f5e980de374 "%s") at printf-parse.h:108
>> #1  _IO_vfprintf_internal (s=0x7f5e98650560, format=0x7f5e980de374 "%s",
>> ap=0x7f5e98652c08) at vfprintf.c:1312
>> #2  0x00007f5e97fc3c53 in buffered_vfprintf (s=0x7f5e98314520
>> <_IO_2_1_stderr_>, format=<optimized out>, args=<optimized out>) at
>> vfprintf.c:2325
>> #3  0x00007f5e97fc0f25 in _IO_vfprintf_internal
>> (s=s@entry=0x7f5e98314520 <_IO_2_1_stderr_>,
>> format=format@entry=0x7f5e980de374 "%s", ap=ap@entry=0x7f5e98652c08) at
>> vfprintf.c:1293
>> #4  0x00007f5e97fe09b2 in __fxprintf (fp=0x7f5e98314520
>> <_IO_2_1_stderr_>, fp@entry=0x0, fmt=fmt@entry=0x7f5e980de374 "%s") at
>> fxprintf.c:50
>> #5  0x00007f5e97fa5de0 in __assert_fail_base (fmt=0x7f5e980df310
>> "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
>> assertion=assertion@entry=0x563930234ee7 "0 && \"BUG: unset HTTP
>> method\"", file=file@entry=0x563930234ee0 "http.c", line=line@entry=80,
>> function=function@entry=0x563930235440 "http_process_client")
>>       at assert.c:64
>> #6  0x00007f5e97fa5f12 in __GI___assert_fail (assertion=0x563930234ee7
>> "0 && \"BUG: unset HTTP method\"", file=0x563930234ee0 "http.c",
>> line=80, function=0x563930235440 "http_process_client") at assert.c:101
>> #7  0x000056393020b66f in ?? ()
>> #8  0x000056393020bfa5 in ?? ()
>> #9  0x000056393020c10e in ?? ()
>> #10 0x000056393021328d in ?? ()
> Yeah, -ggdb3 should be giving info on those "??" lines.
> I just double-checked the HTTP parser and it should never
> even get to http.c line=80 if the method was unrecognized;
> so I suspect there's some other memory corruption bug
> which isn't the parser...
>
>> #11 0x00007f5e983204a4 in start_thread (arg=0x7f5e98655700) at
>> pthread_create.c:456
>> #12 0x00007f5e98062d0f in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
>>
>> (gdb) t 2
>> [Switching to thread 2 (Thread 0x7f5e98591700 (LWP 8345))]
>> (gdb) t 3
>> [Switching to thread 3 (Thread 0x7f5e9870b700 (LWP 8291))]
> <snip>
>
> Any more threads?  (just a number of threads is fine).

Other threads are reporting this:

(gdb) t 4
[Switching to thread 4 (Thread 0x7f5e986be700 (LWP 8302))]
#0  0x00007f5e98063303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84
84    in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0  0x00007f5e98063303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84
#1  0x00005639302130ab in ?? ()
#2  0x00005639302132c0 in ?? ()
#3  0x00007f5e983204a4 in start_thread (arg=0x7f5e986be700) at 
pthread_create.c:456
#4  0x00007f5e98062d0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


>> Any idea? If you need more info, please just ask!
> How many "/devXYZ" devices do you have?  Are they all
> on different partitions?

We have about 192 devices spread over about 23 cmogstored hosts.
Each device is one disk with one partition...


Thread overview: 20+ messages
2019-12-11 13:54 Heavy load Arkadi Colson
2019-12-11 17:06 ` Eric Wong
2019-12-12  7:30   ` Arkadi Colson
2019-12-12  7:59     ` Eric Wong
2019-12-12 19:16     ` Eric Wong
2019-12-17  7:40       ` Arkadi Colson
2019-12-17  8:43         ` Eric Wong
2019-12-17  8:57           ` Arkadi Colson [this message]
2019-12-17 19:42             ` Eric Wong
2019-12-18  7:56               ` Arkadi Colson
2019-12-18 17:58                 ` Eric Wong
2020-01-06  9:46                   ` Arkadi Colson
2020-01-08  3:35                     ` Eric Wong
2020-01-08  9:40                       ` Arkadi Colson
2020-01-30  0:35                         ` Eric Wong
2020-03-03 15:46                           ` Arkadi Colson
2019-12-17  7:41       ` Arkadi Colson
2019-12-17  8:31         ` Eric Wong
2019-12-17  8:43           ` Arkadi Colson
2019-12-17  8:50             ` Eric Wong

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/cmogstored.git/
