From: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
To: Yong Wu <yong.wu-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>
Cc: "laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org"
	<laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org>,
	"arnd-r2nGTMty4D4@public.gmane.org"
	<arnd-r2nGTMty4D4@public.gmane.org>,
	Catalin Marinas <Catalin.Marinas-5wv7dgnIgG8@public.gmane.org>,
	Will Deacon <Will.Deacon-5wv7dgnIgG8@public.gmane.org>,
	"tiffany.lin-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org"
	<tiffany.lin-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>,
	"iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org"
	<iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org>,
	"djkurtz-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org"
	<djkurtz-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	"thunder.leizhen-hv44wF8Li93QT0dZR+AlfA@public.gmane.org"
	<thunder.leizhen-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>,
	"yingjoe.chen-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org"
	<yingjoe.chen-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>,
	"treding-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org"
	<treding-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>,
	"linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org"
	<linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org>
Subject: Re: [PATCH v2 2/4] iommu: Implement common IOMMU ops for DMA mapping
Date: Fri, 03 Jul 2015 17:44:03 +0100	[thread overview]
Message-ID: <5596BBD3.1040708@arm.com> (raw)
In-Reply-To: <1435915649.21346.11.camel@mhfsdcap03>

On 03/07/15 10:27, Yong Wu wrote:
[...]
>> +/**
>> + * iommu_dma_alloc - Allocate and map a buffer contiguous in IOVA space
>> + * @dev: Device to allocate memory for. Must be a real device
>> + *	 attached to an iommu_dma_domain
>> + * @size: Size of buffer in bytes
>> + * @gfp: Allocation flags
>> + * @prot: IOMMU mapping flags
>> + * @coherent: Which dma_mask to base IOVA allocation on
>> + * @handle: Out argument for allocated DMA handle
>> + * @flush_page: Arch callback to flush a single page from caches as
>> + *		necessary. May be NULL for coherent allocations
>> + *
>> + * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
>> + * but an IOMMU which supports smaller pages might not map the whole thing.
>> + * For now, the buffer is unconditionally zeroed for compatibility.
>> + *
>> + * Return: Array of struct page pointers describing the buffer,
>> + *	   or NULL on failure.
>> + */
>> +struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>> +		int prot, bool coherent, dma_addr_t *handle,
>> +		void (*flush_page)(const void *, phys_addr_t))
>> +{
>> +	struct iommu_dma_domain *dom = arch_get_dma_domain(dev);
>> +	struct iova_domain *iovad = dom->iovad;
>> +	struct iova *iova;
>> +	struct page **pages;
>> +	struct sg_table sgt;
>> +	struct sg_mapping_iter miter;
>> +	dma_addr_t dma_addr;
>> +	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>> +
>> +	*handle = DMA_ERROR_CODE;
>
> Hi Robin,
>
>    Compared with the previous DMA code, you deleted this line here:
>
>            size = PAGE_ALIGN(size);
>
>    Do you expect the caller to make sure the size is aligned?
>
>    The same goes for __iommu_free_attrs.

Yeah, there is an oversight here due to some back-and-forth refactoring 
- both the page allocation and the IOVA allocation do get suitably 
aligned as they should, but the segments of the temporary scatterlist 
don't. Somehow I managed not to hit an iommu_map_sg failure due to that 
until some time after this posting (it seems most of the drivers I test 
with only make page-sized allocations...)

I've currently got the fixup below waiting for the next posting.
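
To make the mismatch concrete, with illustrative numbers only (assuming, 
hypothetically, that the IOVA granule equals a 4KB PAGE_SIZE and the 
caller asks for 6KB):

	/*
	 * count           = PAGE_ALIGN(6144) >> PAGE_SHIFT  -> 2 pages
	 * __alloc_iova()  rounds up to the granule          -> 8192 bytes
	 * sg table length = the raw size                    -> 6144 bytes
	 *
	 * iommu_map_sg() can only map whole granules, so it covers at
	 * most 4096 bytes of the 6144-byte list and the "< size" check
	 * trips. With size = iova_align(iovad, size) applied first, the
	 * sg table spans the full 8192 bytes and the mapping succeeds.
	 */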

Robin.

>> +
>> +	/* The IOMMU can map any pages, so highmem can also be used here */
>> +	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
>> +	pages = __iommu_dma_alloc_pages(count, gfp);
>> +	if (!pages)
>> +		return NULL;
>> +
>> +	iova = __alloc_iova(dev, size, coherent);
>> +	if (!iova)
>> +		goto out_free_pages;
>> +

+	size = iova_align(iovad, size);

>> +	if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
>> +		goto out_free_iova;
>> +
>> +	dma_addr = iova_dma_addr(iovad, iova);
>> +	if (iommu_map_sg(dom->domain, dma_addr, sgt.sgl, sgt.orig_nents, prot)
>> +			< size)
>> +		goto out_free_sg;
>> +
>> +	/* Using the non-flushing flag since we're doing our own */
>> +	sg_miter_start(&miter, sgt.sgl, sgt.orig_nents, SG_MITER_FROM_SG);
>> +	while (sg_miter_next(&miter)) {
>> +		memset(miter.addr, 0, PAGE_SIZE);
>> +		if (flush_page)
>> +			flush_page(miter.addr, page_to_phys(miter.page));
>> +	}
>> +	sg_miter_stop(&miter);
>> +	sg_free_table(&sgt);
>> +
>> +	*handle = dma_addr;
>> +	return pages;
>> +
>> +out_free_sg:
>> +	sg_free_table(&sgt);
>> +out_free_iova:
>> +	__free_iova(iovad, iova);
>> +out_free_pages:
>> +	__iommu_dma_free_pages(pages, count);
>> +	return NULL;
>> +}
>> +
> [...]
>> +
>> +#endif	/* __KERNEL__ */
>> +#endif	/* __DMA_IOMMU_H */
>
>
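
For context, a caller of the above might look roughly like the sketch 
below. It is loosely modelled on the arm64 dma_ops wired up later in 
this series, but the helper names (example_flush_page, example_alloc) 
and the vmap()-based CPU remapping are assumptions for illustration, 
not the authoritative implementation:

	#include <linux/device.h>
	#include <linux/dma-iommu.h>	/* header added by this patch */
	#include <linux/iommu.h>
	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	static void example_flush_page(const void *virt, phys_addr_t phys)
	{
		/* arch cache maintenance for one freshly zeroed page */
	}

	static void *example_alloc(struct device *dev, size_t size, gfp_t gfp,
				   dma_addr_t *handle, bool coherent)
	{
		int prot = IOMMU_READ | IOMMU_WRITE;
		struct page **pages;

		/*
		 * Non-coherent devices need each zeroed page cleaned from
		 * the CPU caches, so pass a flush callback; coherent ones
		 * may pass NULL, as the kerneldoc above describes.
		 */
		pages = iommu_dma_alloc(dev, size, gfp, prot, coherent, handle,
					coherent ? NULL : example_flush_page);
		if (!pages)
			return NULL;

		/* Remap the page array to get a contiguous CPU address. */
		return vmap(pages, PAGE_ALIGN(size) >> PAGE_SHIFT, VM_MAP,
			    coherent ? PAGE_KERNEL
				     : pgprot_writecombine(PAGE_KERNEL));
	}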

Thread overview:

2015-06-11 15:54 [PATCH v2 0/4] arm64: IOMMU-backed DMA mapping Robin Murphy
2015-06-11 15:54 ` [PATCH v2 1/4] iommu/iova: Avoid over-allocating when size-aligned Robin Murphy
2015-06-11 15:54 ` [PATCH v2 2/4] iommu: Implement common IOMMU ops for DMA mapping Robin Murphy
2015-06-18 15:00   ` Yong Wu
2015-06-18 16:54     ` Robin Murphy
2015-06-19  1:06       ` Yong Wu
2015-07-03  9:27   ` Yong Wu
2015-07-03 16:44     ` Robin Murphy [this message]
2015-07-07  7:37       ` Yong Wu
2015-06-11 15:54 ` [PATCH v2 3/4] arm64: Add IOMMU dma_ops Robin Murphy
2015-06-18 15:00   ` Yong Wu
2015-06-11 15:54 ` [PATCH v2 4/4] arm64: Hook up IOMMU dma_ops Robin Murphy
