From: "Jan Beulich"
To: Paul Durrant
Cc: Andrew Cooper, Keir Fraser, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 05/17] x86/hvm: unify stdvga mmio intercept with standard mmio intercept
Date: Wed, 17 Jun 2015 16:50:58 +0100
Message-ID: <5581B38202000078000864B3@mail.emea.novell.com>
In-Reply-To: <1434037381-10917-6-git-send-email-paul.durrant@citrix.com>
References: <1434037381-10917-1-git-send-email-paul.durrant@citrix.com> <1434037381-10917-6-git-send-email-paul.durrant@citrix.com>
List-Id: xen-devel@lists.xenproject.org

>>> On 11.06.15 at 17:42, wrote:
> It's clear from the following check in hvmemul_rep_movs:
>
>     if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
>          (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
>         return X86EMUL_UNHANDLEABLE;
>
> that mmio <-> mmio copy is not handled. This means the code in the
> stdvga mmio intercept that explicitly handles mmio <-> mmio copy when
> hvm_copy_to/from_guest_phys() fails is never going to be executed.

Looking at the code I agree with the analysis, but I doubt the code was
put there originally just for fun, and I wonder whether some of the boot
loader graphics slowness we "gained" over the years isn't related to this
code having been hooked off. And in the end we should get to the point
where MMIO -> MMIO transfers get emulated efficiently anyway, so I
wonder whether this patch moves us in the right direction.
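For reference, the quoted condition amounts to the following predicate. This is a standalone sketch only: the enum values are stand-ins for Xen's p2m_type_t, not the hypervisor's actual definitions.

```c
#include <stdbool.h>

/* Stand-in for Xen's p2m_type_t; values here are illustrative only. */
typedef enum {
    p2m_ram_rw,
    p2m_mmio_direct,
    p2m_mmio_dm,
} p2m_type_t;

/*
 * Mirror of the check quoted above: a REP MOVS is left unhandled
 * whenever either side is direct MMIO, or both sides are device-model
 * MMIO - i.e. precisely the mmio <-> mmio copy cases.
 */
static bool rep_movs_unhandleable(p2m_type_t sp2mt, p2m_type_t dp2mt)
{
    return sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
           (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm);
}
```

So a RAM <-> device-model-MMIO copy still gets handled; only copies with direct MMIO on either side, or device-model MMIO on both, fall out to X86EMUL_UNHANDLEABLE.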
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -273,6 +273,15 @@ static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
>          return X86EMUL_RETRY;
>      }
>
> +    /* This code should not be reached if the gmfn is not RAM */
> +    if ( p2m_is_mmio(p2mt) )
> +    {
> +        domain_crash(d);

Log a message at least? Silent domain death is rather ugly to analyze.

> --- a/xen/arch/x86/hvm/stdvga.c
> +++ b/xen/arch/x86/hvm/stdvga.c
> @@ -275,7 +275,8 @@ static uint8_t stdvga_mem_readb(uint64_t addr)
>      return ret;
>  }
>
> -static uint64_t stdvga_mem_read(uint64_t addr, uint64_t size)
> +static int stdvga_mem_read(struct vcpu *v, unsigned long addr,
> +                           unsigned long size, unsigned long *p_data)
>  {
>      uint64_t data = 0;
>
> @@ -313,7 +314,9 @@ static uint64_t stdvga_mem_read(uint64_t addr, uint64_t size)
>          break;
>      }
>
> -    return data;
> +    *p_data = data;

Even if sizeof(long) == sizeof(uint64_t), please try to use consistent
types (i.e. the new function parameter should be of uint64_t * type).

> +    if (!hvm_buffered_io_send(&p))
> +        return X86EMUL_UNHANDLEABLE;

Coding style.

> +static int stdvga_mem_check(struct vcpu *v, unsigned long addr)
>  {
> +    struct hvm_hw_stdvga *s = &v->domain->arch.hvm_domain.stdvga;
> +    int rc;
>
>      spin_lock(&s->lock);
>
> +    rc = s->stdvga && s->cache &&
> +         (addr >= VGA_MEM_BASE) &&
> +         (addr < (VGA_MEM_BASE + VGA_MEM_SIZE));

This (compared with the code being removed) makes clear that the
check() mechanism lacks the size (it did so before, but with your
generalization I think this needs to be corrected).

> +const struct hvm_mmio_ops stdvga_mem_ops = {

static

Jan
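To illustrate the size remark above, a size-aware range check might look like the following. This is a standalone sketch under assumptions: VGA_MEM_BASE/VGA_MEM_SIZE use the conventional legacy VGA window (0xa0000, 128KiB), and the s->stdvga/s->cache state and locking from the patch are omitted.

```c
#include <stdbool.h>

/* Conventional legacy VGA window; assumed values, not quoted from the patch. */
#define VGA_MEM_BASE 0xa0000UL
#define VGA_MEM_SIZE 0x20000UL

/*
 * Range check taking the access size into account, arranged so that
 * addr + size cannot wrap around: instead of comparing addr + size
 * against the window end, compare the offset into the window against
 * the largest offset at which an access of this size still fits.
 */
static bool stdvga_range_check(unsigned long addr, unsigned long size)
{
    return addr >= VGA_MEM_BASE &&
           size <= VGA_MEM_SIZE &&
           addr - VGA_MEM_BASE <= VGA_MEM_SIZE - size;
}
```

Written this way, an access straddling the end of the window (e.g. a 4-byte access at the window's last byte) is rejected rather than silently treated as in range.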