From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Jan Beulich"
Subject: Re: [PATCH V5 1/3] xen/mem_access: Support for memory-content hiding
Date: Tue, 14 Jul 2015 13:22:12 +0100
Message-ID: <55A51B140200007800090AE1@mail.emea.novell.com>
References: <1436807687-9826-1-git-send-email-rcojocaru@bitdefender.com>
 <1436807687-9826-2-git-send-email-rcojocaru@bitdefender.com>
In-Reply-To: <1436807687-9826-2-git-send-email-rcojocaru@bitdefender.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Razvan Cojocaru
Cc: jun.nakajima@intel.com, kevin.tian@intel.com, wei.liu2@citrix.com,
 ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
 george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 ian.jackson@eu.citrix.com, xen-devel@lists.xen.org, eddie.dong@intel.com,
 Aravind.Gopalakrishnan@amd.com, suravee.suthikulpanit@amd.com,
 tlengyel@novetta.com, keir@xen.org, boris.ostrovsky@oracle.com
List-Id: xen-devel@lists.xenproject.org

>>> On 13.07.15 at 19:14, wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -511,6 +511,8 @@ int vcpu_initialise(struct vcpu *v)
> 
>  void vcpu_destroy(struct vcpu *v)
>  {
> +    xfree(v->arch.vm_event.emul_read_data);

Is this still needed now that you have vm_event_cleanup_domain()?

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -67,6 +67,27 @@ static int null_write(const struct hvm_io_handler *handler,
>      return X86EMUL_OKAY;
>  }
> 
> +static int set_context_data(void *buffer, unsigned int bytes)

I think in the context of this function alone, "size" would be better
than "bytes".

> +{
> +    struct vcpu *curr = current;
> +
> +    if ( !curr->arch.vm_event.emul_read_data )
> +        return X86EMUL_UNHANDLEABLE;
> +    else

Else after the respective if() unconditionally returning is odd. Perhaps
better to invert the if() condition...

> +    {
> +        unsigned int safe_bytes =
> +            min(bytes, curr->arch.vm_event.emul_read_data->size);
> +
> +        if ( safe_bytes )
> +            memcpy(buffer, curr->arch.vm_event.emul_read_data->data,
> +                   safe_bytes);

So why did you still keep this conditional?

> @@ -1005,6 +1043,36 @@ static int hvmemul_rep_ins(
>                             !!(ctxt->regs->eflags & X86_EFLAGS_DF), gpa);
>  }
> 
> +static int hvmemul_rep_outs_set_context(
> +    enum x86_segment src_seg,
> +    unsigned long src_offset,
> +    uint16_t dst_port,
> +    unsigned int bytes_per_rep,
> +    unsigned long *reps,
> +    struct x86_emulate_ctxt *ctxt)
> +{
> +    unsigned int bytes = *reps * bytes_per_rep;
> +    char *buf;
> +    int rc;
> +
> +    buf = xmalloc_array(char, bytes);
> +
> +    if ( buf == NULL )
> +        return X86EMUL_UNHANDLEABLE;
> +
> +    rc = set_context_data(buf, bytes);
> +
> +    if ( rc != X86EMUL_OKAY )
> +        goto out;
> +
> +    rc = hvmemul_do_pio_buffer(dst_port, bytes, IOREQ_WRITE, buf);
> +
> +out:

Labels should be indented by at least one space. But realistically the
code wouldn't look worse without any goto...

> @@ -1133,7 +1205,20 @@ static int hvmemul_rep_movs(
>       */
>      rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>      if ( rc == HVMCOPY_okay )
> +    {
> +        if ( unlikely(hvmemul_ctxt->set_context) )
> +        {
> +            rc = set_context_data(buf, bytes);
> +
> +            if ( rc != X86EMUL_OKAY)
> +            {
> +                xfree(buf);
> +                return rc;
> +            }
> +        }
> +
>          rc = hvm_copy_to_guest_phys(dgpa, buf, bytes);
> +    }

Why do you not bypass hvm_copy_from_guest_phys() here?
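I.e. something along these lines (untested, and only meant to illustrate
the idea; error handling is kept as in your hunk):

    if ( unlikely(hvmemul_ctxt->set_context) )
    {
        /* The buffer contents get replaced anyway - don't read the source. */
        rc = set_context_data(buf, bytes);
        if ( rc != X86EMUL_OKAY )
        {
            xfree(buf);
            return rc;
        }
        rc = HVMCOPY_okay;
    }
    else
        rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);

    if ( rc == HVMCOPY_okay )
        rc = hvm_copy_to_guest_phys(dgpa, buf, bytes);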
This way it would - afaict - become consistent with the other cases.

> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -23,6 +23,36 @@
>  #include
>  #include
> 
> +int vm_event_init_domain(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu( d, v )

Either you consider for_each_vcpu a keyword-like thing, in which case this
is missing a blank before the opening parenthesis, or you don't, in which
case the blanks immediately inside the parentheses should be dropped.

> +    {
> +        if ( v->arch.vm_event.emul_read_data )
> +            continue;
> +
> +        v->arch.vm_event.emul_read_data =
> +            xzalloc(struct vm_event_emul_read_data);
> +
> +        if ( !v->arch.vm_event.emul_read_data )
> +            return -ENOMEM;
> +    }
> +
> +    return 0;
> +}
> +
> +void vm_event_cleanup_domain(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu( d, v )
> +    {
> +        xfree(v->arch.vm_event.emul_read_data);
> +        v->arch.vm_event.emul_read_data = NULL;
> +    }
> +}

There not being a race here is attributed to vm_event_enable() being
implicitly serialized by the domctl lock, and vm_event_disable(), beyond
its domctl use, only being on domain cleanup paths? Would be nice to have
a comment to that effect.

Jan
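P.S. To make the set_context_data() remarks above more concrete, something
along these lines (untested) is what I'd picture, with the parameter name
also adjusted:

static int set_context_data(void *buffer, unsigned int size)
{
    struct vcpu *curr = current;

    if ( curr->arch.vm_event.emul_read_data )
    {
        unsigned int safe_size =
            min(size, curr->arch.vm_event.emul_read_data->size);

        /* memcpy() with a zero length is fine, so no conditional needed. */
        memcpy(buffer, curr->arch.vm_event.emul_read_data->data, safe_size);
        return X86EMUL_OKAY;
    }

    return X86EMUL_UNHANDLEABLE;
}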