unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* Shared memory between workers
@ 2010-04-26  8:18 Iñaki Baz Castillo
  2010-04-26 19:03 ` Eric Wong
  0 siblings, 1 reply; 4+ messages in thread
From: Iñaki Baz Castillo @ 2010-04-26  8:18 UTC (permalink / raw)
  To: unicorn list

Hi, I plan to build a SIP TCP server (no UDP) based on the
Unicorn/Rainbows! HTTP server. The main differences between a SIP
server and an HTTP server are:

- SIP uses persistent TCP connections, so I should use Rainbows!.
- A simple request-response model is not valid for a SIP server.
Different workers could handle SIP messages (requests and responses)
belonging to the same SIP session, so I need memory shared between
all the workers.

Another option is using EventMachine, perhaps more suitable for this
purpose by design: as it uses a single Ruby process, sharing memory
is not a problem. On the other hand, using a single process on a
multicore server is a pain.
I would like to use Unicorn/Rainbows! as I love its design: by far
it's the most reliable and efficient Ruby HTTP server, and it takes
advantage of Unix features.

I don't want to use a DB server nor Memcached as "shared memory", as
it would be too slow. Is there any way to share RAM between different
Unicorn/Rainbows! workers in a *safe* way? I could create a Hash or
Array of SIP sessions in the master process so all the workers reuse
it, but I don't think it would be safe to read/write it from
different Ruby processes. For that I would also need a semaphore
system (perhaps again a variable shared between all workers, in order
to lock the shared Array/Hash when a worker writes to it).
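
The locking idea above could be sketched like this (class and method
names are hypothetical). Note that a mutex-protected Hash only
protects against concurrent *threads*; to share it between forked
workers it would have to live in a single process and be exposed to
the others, e.g. via DRb from Ruby's stdlib:

```ruby
# Minimal sketch: a session store whose reads and writes are
# serialized by a Mutex (the "semaphore" mentioned above).
class SessionStore
  def initialize
    @sessions = {}
    @lock = Mutex.new
  end

  # Store data for a SIP transaction, keyed by its branch id.
  def put(branch_id, data)
    @lock.synchronize { @sessions[branch_id] = data }
  end

  # Look up a transaction; returns nil if unknown.
  def get(branch_id)
    @lock.synchronize { @sessions[branch_id] }
  end
end
```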

Any tips or suggestions?
Thanks a lot.

-- 
Iñaki Baz Castillo
<ibc@aliax.net>
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying


* Re: Shared memory between workers
  2010-04-26  8:18 Shared memory between workers Iñaki Baz Castillo
@ 2010-04-26 19:03 ` Eric Wong
  2010-04-26 22:46   ` Iñaki Baz Castillo
  0 siblings, 1 reply; 4+ messages in thread
From: Eric Wong @ 2010-04-26 19:03 UTC (permalink / raw)
  To: unicorn list

Iñaki Baz Castillo <ibc@aliax.net> wrote:
> Hi, I plan to build a SIP TCP server (no UDP) based on the
> Unicorn/Rainbows! HTTP server. The main differences between a SIP
> server and an HTTP server are:
> 
> - SIP uses persistent TCP connections, so I should use Rainbows!.
> - A simple request-response model is not valid for a SIP server.
> Different workers could handle SIP messages (requests and responses)
> belonging to the same SIP session, so I need memory shared between
> all the workers.
> 
> Another option is using EventMachine, perhaps more suitable for this
> purpose by design: as it uses a single Ruby process, sharing memory
> is not a problem. On the other hand, using a single process on a
> multicore server is a pain.
> I would like to use Unicorn/Rainbows! as I love its design: by far
> it's the most reliable and efficient Ruby HTTP server, and it takes
> advantage of Unix features.
> 
> I don't want to use a DB server nor Memcached as "shared memory", as
> it would be too slow. Is there any way to share RAM between different
> Unicorn/Rainbows! workers in a *safe* way? I could create a Hash or
> Array of SIP sessions in the master process so all the workers reuse
> it, but I don't think it would be safe to read/write it from
> different Ruby processes. For that I would also need a semaphore
> system (perhaps again a variable shared between all workers, in order
> to lock the shared Array/Hash when a worker writes to it).
> 
> Any tips or suggestions?

Hi Iñaki,

First off I wouldn't call Memcached slow...  How many requests do you
need out of it and what kind of access patterns are you doing?

If you're on a modern Linux 2.6 box, then anything on the local
filesystem is basically shared memory if you have enough RAM to store
your working set (and it sounds like you do).
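
One minimal way to treat a file as shared state between processes is
flock() for mutual exclusion plus Marshal for serialization, both in
Ruby's stdlib (the filename and keys below are illustrative):

```ruby
require 'tmpdir'

path = File.join(Dir.tmpdir, "sip_sessions.db")

# Writer (could be any worker process): take an exclusive lock,
# replace the file's contents atomically with respect to other
# flock() users.
File.open(path, File::WRONLY | File::CREAT | File::TRUNC, 0600) do |f|
  f.flock(File::LOCK_EX)
  f.write(Marshal.dump({ "z9hG4bK1" => :answered }))
end

# Reader (another worker): a shared lock suffices for reads.
sessions = File.open(path, "rb") do |f|
  f.flock(File::LOCK_SH)
  Marshal.load(f.read)
end
```

With the working set in the page cache, reads like this never touch
the disk.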

If you're willing to be confined to a single box, I've had great
experiences with TokyoCabinet the past few years.  One system I built
shares 10G of read-mostly data (on 16G boxes) across 8 Unicorn
(previously Mongrel) workers.  TokyoCabinet uses mmap and pread/pwrite,
so you can safely share DB handles across any number of native
threads/processes.

Writes in TC need filesystem locking and you can only have one active
writer, but that was still plenty fast enough in my experience.

If fsync() becomes a bottleneck, then for non-critical data either:

  1) disable it (TC lets you)
  2) or use libeatmydata if you choose something that doesn't let
     you disable fsync()

<soapbox>
  My personal opinion is that fsync()/O_SYNC are completely overrated.
  All important machines these days have redundant storage, redundant
  power supplies, ECC memory, hardware health monitoring/alerting and
  backup power.  Any remaining hardware/kernel failure possibilities
  are far less likely than application bugs that corrupt data :)

  But, if you really have critical data, /then/ get an SSD or
  a battery-backed write cache.
</soapbox>

-- 
Eric Wong



* Re: Shared memory between workers
  2010-04-26 19:03 ` Eric Wong
@ 2010-04-26 22:46   ` Iñaki Baz Castillo
  2010-04-29  0:12     ` Gleb Arshinov
  0 siblings, 1 reply; 4+ messages in thread
From: Iñaki Baz Castillo @ 2010-04-26 22:46 UTC (permalink / raw)
  To: unicorn list

2010/4/26 Eric Wong <normalperson@yhbt.net>:

> Hi Iñaki,
>
> First off I wouldn't call Memcached slow...  How many requests do you
> need out of it and what kind of access patterns are you doing?

Well, I want it to be as efficient as possible. Efficiency is not the
main goal (if it were I wouldn't use Ruby), but it's important.

For example:
- 8 workers.
- A SIP request arrives and is handled by worker 1.
- The server must store, in *shared* memory, info about that request
(status, last SIP response sent for it...). The request transaction
remains in memory for some seconds after the response is sent.
- Due to a not so unusual SIP issue, a retransmission of the previous
request arrives at the server, but now it is handled by worker 2.
- Worker 2 must look up the request "branch" id in shared memory, find
the previously created entry, check that a response was already sent
for it, and re-send that response.

In fact, this is really common in SIP over UDP. Over TCP there are
usually no retransmissions (but there could be, so the worker must
check for them).
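
The retransmission check described above could be sketched roughly
like this, with a plain Hash standing in for whatever shared store
ends up being used (names and the response string are illustrative):

```ruby
transactions = {}

# If the branch id is already known, this is a retransmission:
# replay the stored response instead of reprocessing the request.
def handle_request(transactions, branch_id)
  if (txn = transactions[branch_id])
    txn[:last_response]
  else
    response = "SIP/2.0 200 OK"   # ...process the request normally...
    transactions[branch_id] = { :last_response => response }
    response
  end
end

first  = handle_request(transactions, "z9hG4bK74bf9")  # worker 1
replay = handle_request(transactions, "z9hG4bK74bf9")  # worker 2
```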



> If you're on a modern Linux 2.6 box, then anything on the local
> filesystem is basically shared memory if you have enough RAM to store
> your working set (and it sounds like you do).

And what about mounting a filesystem in RAM? Would it be even faster? :)


> If you're willing to be confined to a single box, I've had great
> experiences with TokyoCabinet the past few years.  One system I built
> shares 10G of read-mostly data (on 16G boxes) across 8 Unicorn
> (previously Mongrel) workers.  TokyoCabinet uses mmap and pread/pwrite,
> so you can safely share DB handles across any number of native
> threads/processes.

I didn't know about it. I've just read the docs, installed it and
tested it. Really impressive!


> Writes in TC need filesystem locking and you can only have one active
> writer, but that was still plenty fast enough in my experience.

If you use an "on-memory database" there wouldn't be filesystem locking, right?


> If fsync() becomes a bottleneck, then for non-critical data either:
>
>  1) disable it (TC lets you)
>  2) or use libeatmydata if you choose something that doesn't let
>     you disable fsync()
>
> <soapbox>
>  My personal opinion is that fsync()/O_SYNC are completely overrated.
>  All important machines these days have redundant storage, redundant
>  power supplies, ECC memory, hardware health monitoring/alerting and
>  backup power.  Any remaining hardware/kernel failure possibilities
>  are far less likely than application bugs that corrupt data :)
>
>  But, if you really have critical data, /then/ get an SSD or
>  a battery-backed write cache.
> </soapbox>

Thanks, but the fact is that I don't need persistent storage. That
is, the data I need to share between Ruby processes (workers) is
realtime data, so if the server is stopped and restarted I don't want
to retrieve that information. Well, it could be useful in some cases;
I need to think about it.
I've seen a "sync" method in the Tokyo Tyrant client API which forces
the server to store/sync the on-memory data to the file (if a file is
being used).


So thanks a lot. I'll definitely learn more about TokyoCabinet as it
seems a really nice solution for my needs.
Regards.


-- 
Iñaki Baz Castillo
<ibc@aliax.net>


* Re: Shared memory between workers
  2010-04-26 22:46   ` Iñaki Baz Castillo
@ 2010-04-29  0:12     ` Gleb Arshinov
  0 siblings, 0 replies; 4+ messages in thread
From: Gleb Arshinov @ 2010-04-29  0:12 UTC (permalink / raw)
  To: unicorn list

> If you use an "on-memory database" there wouldn't be filesystem locking, right?

fsync throughput (a.k.a. commit rate) is easy to test:

  sudo aptitude install sysbench
  sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \
   --file-total-size=16384 --file-test-mode=rndwr run \
   | grep "Requests/sec"

You can then see if any additional tuning (RAM drive, etc.) is even
necessary for your requirements.
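
As a rough cross-check, the same measurement can be approximated in
Ruby itself (numbers will only roughly agree with sysbench; the
method name here is illustrative):

```ruby
require 'tempfile'

# Count how many fsync()s per second the storage sustains: each
# iteration writes one byte and forces it to stable storage.
def fsync_rate(iterations = 50)
  Tempfile.create("fsync_test") do |f|
    start = Time.now
    iterations.times do
      f.write("x")
      f.fsync
    end
    iterations / (Time.now - start)
  end
end
```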

Unless something is doing write-back caching between the app and the
drive, this rate will be a function of your drive's RPM, e.g. ~120/s
for a 7200 RPM disk (one commit per revolution: 7200/60 = 120).  We
did a bunch of benchmarking when setting up a couple of DB servers
recently and found that consumer-grade (SATA) drives lie and have
write-back caching enabled by default, so you get 1-3k fsyncs/s.
It's not considered safe to run a database on such a configuration,
as the HDD cache lacks battery backup.  For comparison, we get
~10k/s on a 4-disk RAID 10 SAS array (with a battery-backed unit)
with write-back caching turned on in the controller and off in the
drives.

So, the point is that consumer hardware lies, and you are quite
likely to see good apparent fsync performance.  Since you don't care
about data safety for your setup, this should be fine.

Best regards,

Gleb



end of thread, other threads:[~2010-04-29  0:13 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-04-26  8:18 Shared memory between workers Iñaki Baz Castillo
2010-04-26 19:03 ` Eric Wong
2010-04-26 22:46   ` Iñaki Baz Castillo
2010-04-29  0:12     ` Gleb Arshinov

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/
