All the mail mirrored from lore.kernel.org
* [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap
@ 2021-03-15 22:38 Johannes Berg
  2021-03-16  8:46 ` Anton Ivanov
  2021-03-16 11:10 ` Anton Ivanov
  0 siblings, 2 replies; 5+ messages in thread
From: Johannes Berg @ 2021-03-15 22:38 UTC (permalink / raw)
  To: linux-um; +Cc: Johannes Berg

From: Johannes Berg <johannes.berg@intel.com>

vmalloc() heavy workloads in UML are extremely slow, due to
flushing the entire kernel VM space (flush_tlb_kernel_vm())
on the first segfault.

Implement flush_cache_vmap() to avoid that, and while at it
also add flush_cache_vunmap() since it's trivial.

This speeds up my vmalloc() heavy test of copying files out
from /sys/kernel/debug/gcov/ by 30x (from 30s to 1s).

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
 arch/um/include/asm/cacheflush.h | 9 +++++++++
 arch/um/include/asm/tlb.h        | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)
 create mode 100644 arch/um/include/asm/cacheflush.h

diff --git a/arch/um/include/asm/cacheflush.h b/arch/um/include/asm/cacheflush.h
new file mode 100644
index 000000000000..4c9858cd36ec
--- /dev/null
+++ b/arch/um/include/asm/cacheflush.h
@@ -0,0 +1,9 @@
+#ifndef __UM_ASM_CACHEFLUSH_H
+#define __UM_ASM_CACHEFLUSH_H
+
+#include <asm/tlbflush.h>
+#define flush_cache_vmap flush_tlb_kernel_range
+#define flush_cache_vunmap flush_tlb_kernel_range
+
+#include <asm-generic/cacheflush.h>
+#endif /* __UM_ASM_CACHEFLUSH_H */
diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
index ff9c62828962..0422467bda5b 100644
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -5,7 +5,7 @@
 #include <linux/mm.h>
 
 #include <asm/tlbflush.h>
-#include <asm-generic/cacheflush.h>
+#include <asm/cacheflush.h>
 #include <asm-generic/tlb.h>
 
 #endif
-- 
2.30.2


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um



* Re: [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap
  2021-03-15 22:38 [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap Johannes Berg
@ 2021-03-16  8:46 ` Anton Ivanov
  2021-03-16  8:55   ` Johannes Berg
  2021-03-16 11:10 ` Anton Ivanov
  1 sibling, 1 reply; 5+ messages in thread
From: Anton Ivanov @ 2021-03-16  8:46 UTC (permalink / raw)
  To: Johannes Berg, linux-um; +Cc: Johannes Berg



On 15/03/2021 22:38, Johannes Berg wrote:
> From: Johannes Berg <johannes.berg@intel.com>
> 
> vmalloc() heavy workloads in UML are extremely slow, due to
> flushing the entire kernel VM space (flush_tlb_kernel_vm())
> on the first segfault.
> 
> Implement flush_cache_vmap() to avoid that, and while at it
> also add flush_cache_vunmap() since it's trivial.
> 
> This speeds up my vmalloc() heavy test of copying files out
> from /sys/kernel/debug/gcov/ by 30x (from 30s to 1s.)
> 
> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
> ---
>   arch/um/include/asm/cacheflush.h | 9 +++++++++
>   arch/um/include/asm/tlb.h        | 2 +-
>   2 files changed, 10 insertions(+), 1 deletion(-)
>   create mode 100644 arch/um/include/asm/cacheflush.h
> 
> diff --git a/arch/um/include/asm/cacheflush.h b/arch/um/include/asm/cacheflush.h
> new file mode 100644
> index 000000000000..4c9858cd36ec
> --- /dev/null
> +++ b/arch/um/include/asm/cacheflush.h
> @@ -0,0 +1,9 @@
> +#ifndef __UM_ASM_CACHEFLUSH_H
> +#define __UM_ASM_CACHEFLUSH_H
> +
> +#include <asm/tlbflush.h>
> +#define flush_cache_vmap flush_tlb_kernel_range
> +#define flush_cache_vunmap flush_tlb_kernel_range
> +
> +#include <asm-generic/cacheflush.h>
> +#endif /* __UM_ASM_CACHEFLUSH_H */
> diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
> index ff9c62828962..0422467bda5b 100644
> --- a/arch/um/include/asm/tlb.h
> +++ b/arch/um/include/asm/tlb.h
> @@ -5,7 +5,7 @@
>   #include <linux/mm.h>
>   
>   #include <asm/tlbflush.h>
> -#include <asm-generic/cacheflush.h>
> +#include <asm/cacheflush.h>
>   #include <asm-generic/tlb.h>
>   
>   #endif
> 

Well spotted.

Unless I am mistaken, there may be a slightly better way of doing it.

We can implement arch_sync_kernel_mappings() and sync only where the page-modified mask (set via ARCH_PAGE_TABLE_SYNC_MASK) says so.

That way the flush_cache_* helpers can remain no-ops, as in asm-generic.

I am going to give that a spin; if it works, I will post it to the list by lunchtime GMT.
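Roughly, I have in mind something like the following (an untested sketch only; the exact mask bits and file placement still need to be confirmed against how mm/vmalloc.c invokes the hook):

```c
/* Untested sketch of the arch_sync_kernel_mappings() idea.
 *
 * mm/vmalloc.c calls arch_sync_kernel_mappings(start, end) whenever it
 * modifies a kernel page-table level listed in ARCH_PAGE_TABLE_SYNC_MASK,
 * so the sync happens only when the page tables actually changed.
 */

/* e.g. in arch/um/include/asm/pgtable.h */
#define ARCH_PAGE_TABLE_SYNC_MASK \
	(PGTBL_PTE_MODIFIED | PGTBL_PMD_MODIFIED | PGTBL_PUD_MODIFIED | \
	 PGTBL_P4D_MODIFIED | PGTBL_PGD_MODIFIED)

/* e.g. in arch/um/kernel/tlb.c */
void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	/* Sync just the modified range to the host mappings,
	 * instead of flushing the whole kernel VM space. */
	flush_tlb_kernel_range(start, end);
}
```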



-- 
Anton R. Ivanov
https://www.kot-begemot.co.uk/




* Re: [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap
  2021-03-16  8:46 ` Anton Ivanov
@ 2021-03-16  8:55   ` Johannes Berg
  2021-03-16  8:59     ` Anton Ivanov
  0 siblings, 1 reply; 5+ messages in thread
From: Johannes Berg @ 2021-03-16  8:55 UTC (permalink / raw)
  To: Anton Ivanov, linux-um

On Tue, 2021-03-16 at 08:46 +0000, Anton Ivanov wrote:
> 
> Well spotted.
> 
> Unless I am mistaken, there may be a slightly better way of doing it.
> 
> We can implement arch_sync_kernel_mappings() and sync only where the
> page-modified mask (set via ARCH_PAGE_TABLE_SYNC_MASK) says so.
> 
> That way the flush_cache_* helpers can remain no-ops, as in asm-generic.

Would that actually buy us anything?

Not that I mind, or even understand the TLB code well, but it seems
fairly similar?

> I am going to give that a spin, if it works, I will post it to the list by lunchtime GMT

Sounds good to me :)

I also made these patches:

https://lore.kernel.org/lkml/20210315235453.e3fbb86e99a0.I08a3ee6dbe47ea3e8024956083f162884a958e40@changeid/T/#u


so my "vmalloc-heavy workload" is no longer vmalloc-heavy, since it now
uses kvmalloc() and always ends up in kmalloc() rather than vmalloc() :)
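(In case the mechanism isn't obvious: kvmalloc() attempts a regular kmalloc() first and only falls back to vmalloc() when that fails. A sketch of the usage pattern, not the actual patch:)

```c
/* kvmalloc() tries a physically contiguous kmalloc() first and falls
 * back to vmalloc() only if that fails, so allocations that fit stay
 * on the fast kmalloc path; kvfree() handles either case. */
static int do_copy(size_t size)
{
	void *buf = kvmalloc(size, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* ... use buf ... */
	kvfree(buf);
	return 0;
}
```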

johannes





* Re: [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap
  2021-03-16  8:55   ` Johannes Berg
@ 2021-03-16  8:59     ` Anton Ivanov
  0 siblings, 0 replies; 5+ messages in thread
From: Anton Ivanov @ 2021-03-16  8:59 UTC (permalink / raw)
  To: Johannes Berg, linux-um



On 16/03/2021 08:55, Johannes Berg wrote:
> On Tue, 2021-03-16 at 08:46 +0000, Anton Ivanov wrote:
>>
>> Well spotted.
>>
>> Unless I am mistaken, there may be a slightly better way of doing it.
>>
>> We can implement arch_sync_kernel_mappings() and sync only where the
>> page-modified mask (set via ARCH_PAGE_TABLE_SYNC_MASK) says so.
>>
>> That way the flush_cache_* helpers can remain no-ops, as in asm-generic.
> 
> Would that actually buy us anything?

It makes flushing conditional on a specific mask returned by the mapping op. In theory it should be better; in practice, probably more of the same.

> 
> Not that I mind, or even understand the TLB code well, but it seems
> fairly similar?
> 
>> I am going to give that a spin, if it works, I will post it to the list by lunchtime GMT
> 
> Sounds good to me :)
> 
> I also made these patches:
> 
> https://lore.kernel.org/lkml/20210315235453.e3fbb86e99a0.I08a3ee6dbe47ea3e8024956083f162884a958e40@changeid/T/#u
> 
> 
> so my "vmalloc-heavy workload" no longer is vmalloc heavy since it now
> uses kvmalloc and never hits vmalloc, always kmalloc :)

:)

> 
> johannes
> 
> 

-- 
Anton R. Ivanov
https://www.kot-begemot.co.uk/




* Re: [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap
  2021-03-15 22:38 [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap Johannes Berg
  2021-03-16  8:46 ` Anton Ivanov
@ 2021-03-16 11:10 ` Anton Ivanov
  1 sibling, 0 replies; 5+ messages in thread
From: Anton Ivanov @ 2021-03-16 11:10 UTC (permalink / raw)
  To: Johannes Berg, linux-um; +Cc: Johannes Berg



On 15/03/2021 22:38, Johannes Berg wrote:
> From: Johannes Berg <johannes.berg@intel.com>
> 
> vmalloc() heavy workloads in UML are extremely slow, due to
> flushing the entire kernel VM space (flush_tlb_kernel_vm())
> on the first segfault.
> 
> Implement flush_cache_vmap() to avoid that, and while at it
> also add flush_cache_vunmap() since it's trivial.
> 
> This speeds up my vmalloc() heavy test of copying files out
> from /sys/kernel/debug/gcov/ by 30x (from 30s to 1s.)
> 
> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
> ---
>   arch/um/include/asm/cacheflush.h | 9 +++++++++
>   arch/um/include/asm/tlb.h        | 2 +-
>   2 files changed, 10 insertions(+), 1 deletion(-)
>   create mode 100644 arch/um/include/asm/cacheflush.h
> 
> diff --git a/arch/um/include/asm/cacheflush.h b/arch/um/include/asm/cacheflush.h
> new file mode 100644
> index 000000000000..4c9858cd36ec
> --- /dev/null
> +++ b/arch/um/include/asm/cacheflush.h
> @@ -0,0 +1,9 @@
> +#ifndef __UM_ASM_CACHEFLUSH_H
> +#define __UM_ASM_CACHEFLUSH_H
> +
> +#include <asm/tlbflush.h>
> +#define flush_cache_vmap flush_tlb_kernel_range
> +#define flush_cache_vunmap flush_tlb_kernel_range
> +
> +#include <asm-generic/cacheflush.h>
> +#endif /* __UM_ASM_CACHEFLUSH_H */
> diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
> index ff9c62828962..0422467bda5b 100644
> --- a/arch/um/include/asm/tlb.h
> +++ b/arch/um/include/asm/tlb.h
> @@ -5,7 +5,7 @@
>   #include <linux/mm.h>
>   
>   #include <asm/tlbflush.h>
> -#include <asm-generic/cacheflush.h>
> +#include <asm/cacheflush.h>
>   #include <asm-generic/tlb.h>
>   
>   #endif
> 
Acked-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
-- 
Anton R. Ivanov
https://www.kot-begemot.co.uk/





Thread overview: 5+ messages
2021-03-15 22:38 [PATCH] um: implement flush_cache_vmap/flush_cache_vunmap Johannes Berg
2021-03-16  8:46 ` Anton Ivanov
2021-03-16  8:55   ` Johannes Berg
2021-03-16  8:59     ` Anton Ivanov
2021-03-16 11:10 ` Anton Ivanov
