From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:38967) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1Z7Hp4-0001LR-21 for qemu-devel@nongnu.org; Tue, 23 Jun 2015 02:36:43 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1Z7How-0004Nr-0P for qemu-devel@nongnu.org; Tue, 23 Jun 2015 02:36:38 -0400
Received: from mx-v6.kamp.de ([2a02:248:0:51::16]:57508 helo=mx01.kamp.de) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1Z7Hov-0004Nc-HF for qemu-devel@nongnu.org; Tue, 23 Jun 2015 02:36:33 -0400
Message-ID: <5588FE67.7030809@kamp.de>
Date: Tue, 23 Jun 2015 08:36:23 +0200
From: Peter Lieven
MIME-Version: 1.0
References: <55803637.3060607@kamp.de> <20150617083539.GA4202@noname.str.redhat.com> <55826789.6080008@kamp.de> <55826C49.2030605@redhat.com> <55826D4B.3000703@kamp.de> <55826F41.70808@kamp.de> <20150618074500.GB4270@noname.redhat.com> <558281B2.6020905@kamp.de> <20150618084241.GC4270@noname.redhat.com> <55828F73.3080809@kamp.de> <558415C8.3060207@kamp.de> <55880926.5070800@kamp.de> <55888430.50504@redhat.com>
In-Reply-To: <55888430.50504@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [Qemu-block] RFC cdrom in own thread?
To: John Snow , Stefan Hajnoczi
Cc: Kevin Wolf , Paolo Bonzini , qemu-devel , qemu block , Alexander Bezzubikov

On 22.06.2015 at 23:54, John Snow wrote:
>
> On 06/22/2015 09:09 AM, Peter Lieven wrote:
>> On 22.06.2015 at 11:25, Stefan Hajnoczi wrote:
>>> On Fri, Jun 19, 2015 at 2:14 PM, Peter Lieven wrote:
>>>> On 18.06.2015 at 11:36, Stefan Hajnoczi wrote:
>>>>> On Thu, Jun 18, 2015 at 10:29 AM, Peter Lieven wrote:
>>>>>> On 18.06.2015 at 10:42, Kevin Wolf wrote:
>>>>>>> On 18.06.2015 at 10:30, Peter Lieven wrote:
>>>>>>>> On 18.06.2015 at 09:45, Kevin Wolf wrote:
>>>>>>>>> On 18.06.2015 at 09:12, Peter Lieven wrote:
>>>>>>>>>> Thread 2 (Thread 0x7ffff5550700 (LWP 2636)):
>>>>>>>>>> #0 0x00007ffff5d87aa3 in ppoll () from
>>>>>>>>>> /lib/x86_64-linux-gnu/libc.so.6
>>>>>>>>>> No symbol table info available.
>>>>>>>>>> #1 0x0000555555955d91 in qemu_poll_ns (fds=0x5555563889c0,
>>>>>>>>>> nfds=3,
>>>>>>>>>> timeout=4999424576) at qemu-timer.c:326
>>>>>>>>>> ts = {tv_sec = 4, tv_nsec = 999424576}
>>>>>>>>>> tvsec = 4
>>>>>>>>>> #2 0x0000555555956feb in aio_poll (ctx=0x5555563528e0,
>>>>>>>>>> blocking=true)
>>>>>>>>>> at aio-posix.c:231
>>>>>>>>>> node = 0x0
>>>>>>>>>> was_dispatching = false
>>>>>>>>>> ret = 1
>>>>>>>>>> progress = false
>>>>>>>>>> #3 0x000055555594aeed in bdrv_prwv_co (bs=0x55555637eae0,
>>>>>>>>>> offset=4292007936,
>>>>>>>>>> qiov=0x7ffff554f760, is_write=false, flags=0) at
>>>>>>>>>> block.c:2699
>>>>>>>>>> aio_context = 0x5555563528e0
>>>>>>>>>> co = 0x5555563888a0
>>>>>>>>>> rwco = {bs = 0x55555637eae0, offset = 4292007936,
>>>>>>>>>> qiov = 0x7ffff554f760, is_write = false, ret =
>>>>>>>>>> 2147483647,
>>>>>>>>>> flags = 0}
>>>>>>>>>> #4 0x000055555594afa9 in bdrv_rw_co (bs=0x55555637eae0,
>>>>>>>>>> sector_num=8382828,
>>>>>>>>>> buf=0x7ffff44cc800 "(", nb_sectors=4, is_write=false,
>>>>>>>>>> flags=0)
>>>>>>>>>> at block.c:2722
>>>>>>>>>> qiov = {iov = 0x7ffff554f780, niov = 1, nalloc = -1,
>>>>>>>>>> size =
>>>>>>>>>> 2048}
>>>>>>>>>> iov = {iov_base = 0x7ffff44cc800, iov_len = 2048}
>>>>>>>>>> #5 0x000055555594b008 in bdrv_read (bs=0x55555637eae0,
>>>>>>>>>> sector_num=8382828,
>>>>>>>>>> buf=0x7ffff44cc800 "(", nb_sectors=4) at block.c:2730
>>>>>>>>>> No locals.
>>>>>>>>>> #6 0x000055555599acef in blk_read (blk=0x555556376820,
>>>>>>>>>> sector_num=8382828,
>>>>>>>>>> buf=0x7ffff44cc800 "(", nb_sectors=4) at
>>>>>>>>>> block/block-backend.c:404
>>>>>>>>>> No locals.
>>>>>>>>>> #7 0x0000555555833ed2 in cd_read_sector (s=0x555556408f88,
>>>>>>>>>> lba=2095707,
>>>>>>>>>> buf=0x7ffff44cc800 "(", sector_size=2048) at
>>>>>>>>>> hw/ide/atapi.c:116
>>>>>>>>>> ret = 32767
>>>>>>>>> Here is the problem: The ATAPI emulation uses synchronous
>>>>>>>>> blk_read()
>>>>>>>>> instead of the AIO or coroutine interfaces. This means that it
>>>>>>>>> keeps
>>>>>>>>> polling for request completion while it holds the BQL until the
>>>>>>>>> request
>>>>>>>>> is completed.
>>>>>>>> I will look at this.
>>>>>> I need some further help. My way to "emulate" a hung NFS Server is to
>>>>>> block it in the Firewall. Currently I face the problem that I
>>>>>> cannot mount
>>>>>> a CD Iso via libnfs (nfs://) without hanging Qemu (i previously
>>>>>> tried with
>>>>>> a kernel NFS mount). It reads a few sectors and then stalls (maybe
>>>>>> another
>>>>>> bug):
>>>>>>
>>>>>> (gdb) thread apply all bt full
>>>>>>
>>>>>> Thread 3 (Thread 0x7ffff0c21700 (LWP 29710)):
>>>>>> #0 qemu_cond_broadcast (cond=cond@entry=0x555556259940) at
>>>>>> util/qemu-thread-posix.c:120
>>>>>> err =
>>>>>> __func__ = "qemu_cond_broadcast"
>>>>>> #1 0x0000555555911164 in rfifolock_unlock
>>>>>> (r=r@entry=0x555556259910) at
>>>>>> util/rfifolock.c:75
>>>>>> __PRETTY_FUNCTION__ = "rfifolock_unlock"
>>>>>> #2 0x0000555555875921 in aio_context_release
>>>>>> (ctx=ctx@entry=0x5555562598b0)
>>>>>> at async.c:329
>>>>>> No locals.
>>>>>> #3 0x000055555588434c in aio_poll (ctx=ctx@entry=0x5555562598b0,
>>>>>> blocking=blocking@entry=true) at aio-posix.c:272
>>>>>> node =
>>>>>> was_dispatching = false
>>>>>> i =
>>>>>> ret =
>>>>>> progress = false
>>>>>> timeout = 611734526
>>>>>> __PRETTY_FUNCTION__ = "aio_poll"
>>>>>> #4 0x00005555558bc43d in bdrv_prwv_co (bs=bs@entry=0x55555627c0f0,
>>>>>> offset=offset@entry=7038976, qiov=qiov@entry=0x7ffff0c208f0,
>>>>>> is_write=is_write@entry=false, flags=flags@entry=(unknown: 0)) at
>>>>>> block/io.c:552
>>>>>> aio_context = 0x5555562598b0
>>>>>> co =
>>>>>> rwco = {bs = 0x55555627c0f0, offset = 7038976, qiov =
>>>>>> 0x7ffff0c208f0, is_write = false, ret = 2147483647, flags =
>>>>>> (unknown: 0)}
>>>>>> #5 0x00005555558bc533 in bdrv_rw_co (bs=0x55555627c0f0,
>>>>>> sector_num=sector_num@entry=13748, buf=buf@entry=0x555557874800 "(",
>>>>>> nb_sectors=nb_sectors@entry=4, is_write=is_write@entry=false,
>>>>>> flags=flags@entry=(unknown: 0)) at block/io.c:575
>>>>>> qiov = {iov = 0x7ffff0c208e0, niov = 1, nalloc = -1, size
>>>>>> = 2048}
>>>>>> iov = {iov_base = 0x555557874800, iov_len = 2048}
>>>>>> #6 0x00005555558bc593 in bdrv_read (bs=,
>>>>>> sector_num=sector_num@entry=13748, buf=buf@entry=0x555557874800 "(",
>>>>>> nb_sectors=nb_sectors@entry=4) at block/io.c:583
>>>>>> No locals.
>>>>>> #7 0x00005555558af75d in blk_read (blk=,
>>>>>> sector_num=sector_num@entry=13748, buf=buf@entry=0x555557874800 "(",
>>>>>> nb_sectors=nb_sectors@entry=4) at block/block-backend.c:493
>>>>>> ret =
>>>>>> #8 0x00005555557abb88 in cd_read_sector (sector_size=,
>>>>>> buf=0x555557874800 "(", lba=3437, s=0x55555760db70) at
>>>>>> hw/ide/atapi.c:116
>>>>>> ret =
>>>>>> #9 ide_atapi_cmd_reply_end (s=0x55555760db70) at hw/ide/atapi.c:190
>>>>>> byte_count_limit =
>>>>>> size =
>>>>>> ret = 2
>>>>> This is still the same scenario Kevin explained.
>>>>>
>>>>> The ATAPI CD-ROM emulation code is using synchronous blk_read(). This
>>>>> function holds the QEMU global mutex while waiting for the I/O request
>>>>> to complete. This blocks other vcpu threads and the main loop thread.
>>>>>
>>>>> The solution is to convert the CD-ROM emulation code to use
>>>>> blk_aio_readv() instead of blk_read().
>>>> I tried a little, but i am stuck with my approach. It reads one sector
>>>> and then doesn't continue. Maybe someone with more knowledge
>>>> of ATAPI/IDE could help?
>>> Converting synchronous code to asynchronous requires an understanding
>>> of the device's state transitions. Asynchronous code has to put the
>>> device registers into a busy state until the request completes. It
>>> also needs to handle hardware register accesses that occur while the
>>> request is still pending.
>> That was my assumption as well. But I don't know how to proceed...
>>
>>> I don't know ATAPI/IDE code well enough to suggest a fix.
>> Maybe @John can help?
>>
>> Peter
>>
> Sure thing. I will take a deep look as soon as I get my NCQ patches out
> the door. I don't have high hopes for a proper comprehensive fix for
> 2.4, of course, but is there anything we should stick a band-aid on for
> 2.4? My current reading is "It's just as broken as it's always been, so
> it's not necessarily dire."

It has been broken all along, so there is no particular need to fix it
for 2.4. But it should be fixed.

The second problem in the code is that a DMA cancel drains all block
devices. This is the second place where I have seen a VM hang when I
forcibly shut down the NFS server that my ISOs are stored on.
I don't know if that is something that can be fixed as well.

>
> Also: since ATAPI apparently is doing all of its reads in a synchronous
> manner in the SCSI fakery layer it has, I think a lot of the work is
> done by making sure the IDE device is set +BSY +DRQ which will prevent
> any new commands being sent to it just like DMA_READ commands already do.
>
> Looks like the ATAPI state machine is supposed to be something like this:
>
> CMD_PACKET is received: BSY bit is set.
> Device is ready to receive command packet: -BSY +DRQ
> Based on the nIEN bit, we either wait for an interrupt, or:
> Poll the status register until BSY and DRQ clear.
>
> The IDE layer already prevents new commands from showing up while we
> have BSY or DRQ set, and it looks like the flow is:
>
> - ide_exec_cmd sets +BSY
> - cmd_packet sets -BSY
> - ide_transfer_start sets +DRQ
>   (Note: this is fully synchronous for e.g. AHCI, PCI/ISA will wait
>   for PIO data)
> - After the last byte is transferred, we'll invoke ide_atapi_cmd.
> - ide_atapi_cmd invokes e.g. cmd_inquiry
> - cmd_inquiry will fill its buffer and invoke ide_atapi_cmd_reply.
> - ide_atapi_cmd_reply either does a DMA transfer (-BSY +DRQ)
>   or a PIO reply (ide_atapi_cmd_reply_end) (-BSY -DRQ)
>   (Note again: AHCI is still fully synchronous here, PCI/ISA will
>   wait for data reads.)
>
> (Hmm, it looks like there's an opening for new commands to show up here,
> since we've got -BSY and -DRQ)
>
> - ide_atapi_cmd_reply_end will call ide_atapi_cmd_ok, which will
>   clear the error bits, definitely set -BSY -DRQ +RDY, and set the IRQ
>   if nIEN is not set.
>
> I think this won't be too bad, since the ide_exec_cmd layer itself is
> already used to commands returning that aren't actually finished yet,
> and the cmd_packet launcher itself also assumes the same.
>
> The way the ATAPI commands seem to work is: Tell the core layer that
> we're not finished (even if we possibly are already) and set the
> appropriate status bits ourselves after we're done, synchronously or not.
>
> A lot of the pathways are almost all protected by BSY/DRQ the whole way
> and we already have a nearly asynchronous method for clearing them only
> when the command is actually complete.
>
> Maybe I'll start hacking away at this after hard freeze to see what I
> can do. If you already started, want to link me to a git and I'll start
> from there?

Thanks for your comprehensive explanations. They make the states at least
a bit clearer to me. I will try to find some time to check why my patch
does not work. What I have is in this repo. It's not much and it does not
work yet, but maybe it's just a small thing that needs to be changed...

https://github.com/plieven/qemu/tree/atapi_async

Peter
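P.S. For anyone who wants to experiment, below is a rough sketch of the kind
of conversion Stefan suggests (synchronous blk_read() replaced by
blk_aio_readv() plus a completion callback). It is only an illustration, not
the code in the branch above: the helper names cd_read_sector_async() and
cd_read_sector_cb() are made up, error handling and the 2352-byte raw sector
case are omitted, and whether simply re-entering ide_atapi_cmd_reply_end()
from the callback is enough still needs to be verified.

/* Untested sketch -- shows the sync -> async conversion pattern only.
 * Helper names are invented; error paths and raw (2352-byte) sectors
 * are left out. */

static void cd_read_sector_cb(void *opaque, int ret)
{
    IDEState *s = opaque;

    s->status &= ~BUSY_STAT;          /* request finished, drop BSY */

    if (ret < 0) {
        ide_atapi_io_error(s, ret);
        return;
    }

    /* resume the PIO reply state machine where it left off */
    ide_atapi_cmd_reply_end(s);
}

static int cd_read_sector_async(IDEState *s, int lba, uint8_t *buf)
{
    /* one 2048-byte data sector == four 512-byte block-layer sectors */
    s->iov.iov_base = buf;
    s->iov.iov_len = 4 * BDRV_SECTOR_SIZE;
    qemu_iovec_init_external(&s->qiov, &s->iov, 1);

    s->status |= BUSY_STAT;           /* keep the drive busy meanwhile */
    blk_aio_readv(s->blk, (int64_t)lba << 2, &s->qiov, 4,
                  cd_read_sector_cb, s);
    return 0;
}

The point is just what Stefan and John describe: keep BSY set while the
request is in flight so the guest cannot issue new commands, and continue
the transfer only from the completion callback instead of spinning in
aio_poll() with the BQL held.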