BPF Archive mirror
* [LSF/MM/BPF TOPIC] SLUB: what's next?
@ 2024-04-30 15:42 Vlastimil Babka
  2024-05-01  9:23 ` Alexei Starovoitov
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Vlastimil Babka @ 2024-04-30 15:42 UTC
  To: lsf-pc, linux-mm, bpf

Hi,

I'd like to propose a session about the next steps for SLUB. This is
different from the BOF about sheaves that Matthew suggested, which would
not be suitable for the whole group as it is not yet fleshed out enough.
But the session could be scheduled after the BOF, so if we do brainstorm
something promising there, the result could be discussed as part of the
full session.

Aside from that my preliminary plan is to discuss:

- what was made possible by reducing the slab allocator implementations to
a single one, and what else could be done now with a single implementation

- the work in progress (for now in the context of the maple tree) on SLUB
per-cpu array caches and preallocation

- what functionality would SLUB need to gain so that the extra caching done
by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)

- similar wrt lib/objpool.c (did you even notice it was added? :)

- maybe the mempool functionality could be better integrated as well?

- are there more cases where people have invented caching layers outside mm
that could be integrated with some effort? IIRC io_uring also has some
caching on top currently...

- better/more efficient memcg integration?

- any other features people would like SLUB to have?
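
To illustrate the kind of layering the mempool item above refers to, here
is a minimal sketch of how a mempool is typically stacked on top of a
kmem_cache today (the "io_request" cache and element type below are made
up for illustration):

#include <linux/mempool.h>
#include <linux/slab.h>

struct io_request {		/* made-up element type */
	int data;
};

static struct kmem_cache *req_cache;
static mempool_t *req_pool;

static int req_pool_init(void)
{
	/* a regular slab cache... */
	req_cache = kmem_cache_create("io_request", sizeof(struct io_request),
				      0, 0, NULL);
	if (!req_cache)
		return -ENOMEM;

	/* ...wrapped by a mempool guaranteeing 16 preallocated objects */
	req_pool = mempool_create_slab_pool(16, req_cache);
	if (!req_pool) {
		kmem_cache_destroy(req_cache);
		return -ENOMEM;
	}
	return 0;
}

static struct io_request *req_alloc(void)
{
	/* falls back to the preallocated reserve if the slab allocation fails */
	return mempool_alloc(req_pool, GFP_NOIO);
}

static void req_free(struct io_request *req)
{
	mempool_free(req, req_pool);
}

Pulling this kind of guaranteed reserve closer to SLUB itself (e.g. via the
per-cpu array caches / preallocation work above) is roughly the integration
question I have in mind.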

Thanks,
Vlastimil


* Re: [LSF/MM/BPF TOPIC] SLUB: what's next?
  2024-04-30 15:42 [LSF/MM/BPF TOPIC] SLUB: what's next? Vlastimil Babka
@ 2024-05-01  9:23 ` Alexei Starovoitov
  2024-05-02  7:59 ` [Lsf-pc] " Michal Hocko
  2024-05-20 20:52 ` Vlastimil Babka
  2 siblings, 0 replies; 6+ messages in thread
From: Alexei Starovoitov @ 2024-05-01  9:23 UTC
  To: Vlastimil Babka; +Cc: lsf-pc, linux-mm, bpf

On Tue, Apr 30, 2024 at 8:42 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> Hi,
>
> I'd like to propose a session about the next steps for SLUB. This is
> different from the BOF about sheaves that Matthew suggested, which would
> not be suitable for the whole group as it is not yet fleshed out enough.
> But the session could be scheduled after the BOF, so if we do brainstorm
> something promising there, the result could be discussed as part of the
> full session.
>
> Aside from that my preliminary plan is to discuss:
>
> - what was made possible by reducing the slab allocator implementations to
> a single one, and what else could be done now with a single implementation
>
> - the work in progress (for now in the context of the maple tree) on SLUB
> per-cpu array caches and preallocation
>
> - what functionality would SLUB need to gain so that the extra caching done
> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)

+1 to have this discussion.
Would be great to have it as part of slub.
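
For reference, the layer in question is used roughly like this today (a
sketch from memory; the my_elem struct is made up, the real declarations
live in include/linux/bpf_mem_alloc.h):

#include <linux/bpf_mem_alloc.h>

struct my_elem {			/* made-up element type */
	long payload;
};

static struct bpf_mem_alloc ma;

static int my_init(void)
{
	/* builds per-cpu freelists on top of a kmem_cache of this size */
	return bpf_mem_alloc_init(&ma, sizeof(struct my_elem), false);
}

static struct my_elem *my_alloc(void)
{
	/* safe from any context (incl. NMI/tracing); refilled via irq_work */
	return bpf_mem_cache_alloc(&ma);
}

static void my_free(struct my_elem *e)
{
	/* returned to the per-cpu freelist, flushed back to slab later */
	bpf_mem_cache_free(&ma, e);
}

The per-cpu freelists and the deferred refill/flush are what make it usable
from any context, which is the part SLUB would need to provide natively for
this layer to go away.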

> - similar wrt lib/objpool.c (did you even notice it was added? :)
>
> - maybe the mempool functionality could be better integrated as well?
>
> - are there more cases where people have invented caching layers outside mm
> that could be integrated with some effort? IIRC io_uring also has some
> caching on top currently...
>
> - better/more efficient memcg integration?
>
> - any other features people would like SLUB to have?
>
> Thanks,
> Vlastimil
>


* Re: [Lsf-pc] [LSF/MM/BPF TOPIC] SLUB: what's next?
  2024-04-30 15:42 [LSF/MM/BPF TOPIC] SLUB: what's next? Vlastimil Babka
  2024-05-01  9:23 ` Alexei Starovoitov
@ 2024-05-02  7:59 ` Michal Hocko
  2024-05-02  9:26   ` Vlastimil Babka
  2024-05-20 20:52 ` Vlastimil Babka
  2 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2024-05-02  7:59 UTC
  To: Vlastimil Babka; +Cc: lsf-pc, linux-mm, bpf

On Tue 30-04-24 17:42:18, Vlastimil Babka wrote:
> Hi,
> 
> I'd like to propose a session about the next steps for SLUB. This is
> different from the BOF about sheaves that Matthew suggested, which would
> not be suitable for the whole group as it is not yet fleshed out enough.
> But the session could be scheduled after the BOF, so if we do brainstorm
> something promising there, the result could be discussed as part of the
> full session.
> 
> Aside from that my preliminary plan is to discuss:
> 
> - what was made possible by reducing the slab allocator implementations to
> a single one, and what else could be done now with a single implementation
>
> - the work in progress (for now in the context of the maple tree) on SLUB
> per-cpu array caches and preallocation
>
> - what functionality would SLUB need to gain so that the extra caching done
> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)
>
> - similar wrt lib/objpool.c (did you even notice it was added? :)
>
> - maybe the mempool functionality could be better integrated as well?
>
> - are there more cases where people have invented caching layers outside mm
> that could be integrated with some effort? IIRC io_uring also has some
> caching on top currently...
> 
> - better/more efficient memcg integration?
> 
> - any other features people would like SLUB to have?

Thanks a lot, Vlastimil. This is quite a list. Do you think this fits
into a single time slot, or would it benefit from being split into 2
slots?
-- 
Michal Hocko
SUSE Labs


* Re: [Lsf-pc] [LSF/MM/BPF TOPIC] SLUB: what's next?
  2024-05-02  7:59 ` [Lsf-pc] " Michal Hocko
@ 2024-05-02  9:26   ` Vlastimil Babka
  2024-05-06 21:04     ` Roman Gushchin
  0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2024-05-02  9:26 UTC
  To: Michal Hocko; +Cc: lsf-pc, linux-mm, bpf



On 5/2/24 09:59, Michal Hocko wrote:
> On Tue 30-04-24 17:42:18, Vlastimil Babka wrote:
>> Hi,
>>
>> I'd like to propose a session about the next steps for SLUB. This is
>> different from the BOF about sheaves that Matthew suggested, which would
>> not be suitable for the whole group as it is not yet fleshed out enough.
>> But the session could be scheduled after the BOF, so if we do brainstorm
>> something promising there, the result could be discussed as part of the
>> full session.
>>
>> Aside from that my preliminary plan is to discuss:
>>
>> - what was made possible by reducing the slab allocator implementations to
>> a single one, and what else could be done now with a single implementation
>>
>> - the work in progress (for now in the context of the maple tree) on SLUB
>> per-cpu array caches and preallocation
>>
>> - what functionality would SLUB need to gain so that the extra caching done
>> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)
>>
>> - similar wrt lib/objpool.c (did you even notice it was added? :)
>>
>> - maybe the mempool functionality could be better integrated as well?
>>
>> - are there more cases where people have invented caching layers outside mm
>> that could be integrated with some effort? IIRC io_uring also has some
>> caching on top currently...
>>
>> - better/more efficient memcg integration?
>>
>> - any other features people would like SLUB to have?
> 
> Thanks a lot, Vlastimil. This is quite a list. Do you think this fits
> into a single time slot, or would it benefit from being split into 2
> slots?

I think a single slot is fine; we could schedule another one later if we
don't fit?


* Re: [Lsf-pc] [LSF/MM/BPF TOPIC] SLUB: what's next?
  2024-05-02  9:26   ` Vlastimil Babka
@ 2024-05-06 21:04     ` Roman Gushchin
  0 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2024-05-06 21:04 UTC
  To: Vlastimil Babka; +Cc: Michal Hocko, lsf-pc, linux-mm, bpf


> On May 2, 2024, at 2:26 AM, Vlastimil Babka <vbabka@suse.cz> wrote:
> 
> 
> 
>> On 5/2/24 09:59, Michal Hocko wrote:
>>> On Tue 30-04-24 17:42:18, Vlastimil Babka wrote:
>>> Hi,
>>> 
>>> I'd like to propose a session about the next steps for SLUB. This is
>>> different from the BOF about sheaves that Matthew suggested, which would
>>> not be suitable for the whole group as it is not yet fleshed out enough.
>>> But the session could be scheduled after the BOF, so if we do brainstorm
>>> something promising there, the result could be discussed as part of the
>>> full session.
>>> 
>>> Aside from that my preliminary plan is to discuss:
>>> 
>>> - what was made possible by reducing the slab allocator implementations to
>>> a single one, and what else could be done now with a single implementation
>>>
>>> - the work in progress (for now in the context of the maple tree) on SLUB
>>> per-cpu array caches and preallocation
>>>
>>> - what functionality would SLUB need to gain so that the extra caching done
>>> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)
>>>
>>> - similar wrt lib/objpool.c (did you even notice it was added? :)
>>>
>>> - maybe the mempool functionality could be better integrated as well?
>>>
>>> - are there more cases where people have invented caching layers outside mm
>>> that could be integrated with some effort? IIRC io_uring also has some
>>> caching on top currently...
>>> 
>>> - better/more efficient memcg integration?

This is definitely an interesting topic, especially in light of the recent slab accounting performance conversations with Linus. Unfortunately I’m not attending in person this year, but I’m happy to join virtually if it’s possible.

It’s not yet entirely clear to me if the kmem accounting performance problem exists outside of some micro-benchmarks.

Additionally, Linus proposed to optimize for cases when allocations might be short-lived. In the proposed form it would complicate call sites significantly, but maybe we need some sort of transactional API, e.g.:

memcg_kmem_local_accounting_start();
p1 = kmalloc(size1, GFP_ACCOUNT_LOCAL);
p2 = kmalloc(size2, GFP_ACCOUNT_LOCAL);
…
kfree(p1);
memcg_kmem_local_accounting_commit();

In this case all allocations within the transaction will be saved to some temporary buffer and not fully accounted until memcg_kmem_local_accounting_commit(). This will make them way faster. But the user must guarantee that these allocations won’t be freed from any other context until memcg_kmem_local_accounting_commit().
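
To sketch the implementation side (purely illustrative; none of these
helpers exist today, and preemption/nesting/remote frees are ignored), the
start/commit pair could batch charges in a per-cpu scratch area and apply
them to the memcg only once:

#include <linux/percpu.h>
#include <linux/types.h>

struct kmem_txn {
	unsigned long deferred_bytes;	/* charged locally, not yet in memcg */
	bool active;
};

static DEFINE_PER_CPU(struct kmem_txn, kmem_txn);

void memcg_kmem_local_accounting_start(void)
{
	struct kmem_txn *txn = this_cpu_ptr(&kmem_txn);

	txn->deferred_bytes = 0;
	txn->active = true;
}

/* called from the slab hot path for GFP_ACCOUNT_LOCAL allocations */
void memcg_kmem_defer_charge(size_t bytes)
{
	struct kmem_txn *txn = this_cpu_ptr(&kmem_txn);

	if (txn->active)
		txn->deferred_bytes += bytes;	/* plain add, no page_counter */
}

/* likewise for kfree() of a locally accounted object before commit */
void memcg_kmem_defer_uncharge(size_t bytes)
{
	struct kmem_txn *txn = this_cpu_ptr(&kmem_txn);

	if (txn->active)
		txn->deferred_bytes -= bytes;
}

void memcg_kmem_local_accounting_commit(void)
{
	struct kmem_txn *txn = this_cpu_ptr(&kmem_txn);

	txn->active = false;
	/*
	 * A single real charge of txn->deferred_bytes against the current
	 * task's memcg would go here, amortizing the accounting cost over
	 * the whole transaction.
	 */
}

The hot path then only does the plain per-cpu add and skips the
page_counter/atomic updates until commit time.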

Thanks!


* Re: [LSF/MM/BPF TOPIC] SLUB: what's next?
  2024-04-30 15:42 [LSF/MM/BPF TOPIC] SLUB: what's next? Vlastimil Babka
  2024-05-01  9:23 ` Alexei Starovoitov
  2024-05-02  7:59 ` [Lsf-pc] " Michal Hocko
@ 2024-05-20 20:52 ` Vlastimil Babka
  2 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2024-05-20 20:52 UTC
  To: lsf-pc, linux-mm, bpf

On 4/30/24 5:42 PM, Vlastimil Babka wrote:
> Hi,
> 
> I'd like to propose a session about the next steps for SLUB. This is
> different from the BOF about sheaves that Matthew suggested, which would
> not be suitable for the whole group as it is not yet fleshed out enough.
> But the session could be scheduled after the BOF, so if we do brainstorm
> something promising there, the result could be discussed as part of the
> full session.

Since I've been asked if the slides could be shared, here goes:
https://drive.google.com/file/d/1fHozm2y97Biceh19e_aL5PrHLAZFntZW/view

> Aside from that my preliminary plan is to discuss:
> 
> - what was made possible by reducing the slab allocator implementations to
> a single one, and what else could be done now with a single implementation
>
> - the work in progress (for now in the context of the maple tree) on SLUB
> per-cpu array caches and preallocation
>
> - what functionality would SLUB need to gain so that the extra caching done
> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)
>
> - similar wrt lib/objpool.c (did you even notice it was added? :)
>
> - maybe the mempool functionality could be better integrated as well?
>
> - are there more cases where people have invented caching layers outside mm
> that could be integrated with some effort? IIRC io_uring also has some
> caching on top currently...
> 
> - better/more efficient memcg integration?
> 
> - any other features people would like SLUB to have?
> 
> Thanks,
> Vlastimil



end of thread

Thread overview: 6+ messages
2024-04-30 15:42 [LSF/MM/BPF TOPIC] SLUB: what's next? Vlastimil Babka
2024-05-01  9:23 ` Alexei Starovoitov
2024-05-02  7:59 ` [Lsf-pc] " Michal Hocko
2024-05-02  9:26   ` Vlastimil Babka
2024-05-06 21:04     ` Roman Gushchin
2024-05-20 20:52 ` Vlastimil Babka
