linux/lib
Linus Torvalds fb1dd1403c A set of changes for debugobjects:

Merge tag 'core-debugobjects-2024-11-18' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull debugobjects updates from Thomas Gleixner:

 - Prevent destroying the kmem_cache on early failure.

   Destroying a kmem_cache requires work queues to be set up, but in the
   early failure case they are not yet initialized. So leak the cache
   rather than triggering a BUG.

 - Reduce parallel pool fill attempts.

   Refilling the object pool requires taking the global pool lock, which
   causes a massive performance issue when a large number of CPUs
   attempt to refill concurrently. It turns out that it's sufficient to
   let one CPU handle the refill from the to-be-freed list and, only if
   that does not provide enough objects, allocate new objects from the
   kmem cache (see the refill sketch after this list).

   This also splits the free list handling from the actual allocation
   path, as that yields better results on RT where allocation is
   restricted to preemptible code paths. The refill from the to-be-freed
   list has no such restriction.

 - Consolidate the global and the per CPU pools to use the same data
   structure, so all helper functions can be shared.

 - Simplify the object allocation/free logic.

   The allocation/free logic is an incomprehensible maze, which tries to
   utilize the to-be-freed list and the global pool in the best way. All
   of this can be simplified into a straightforward, comprehensible code
   flow.

 - Convert the allocation/free mechanism to batch mode.

   Transferring objects from the global pool to the per CPU pools or
   vice versa is done by walking the hlist and moving object by object.
   That not only increases the pool lock hold time, it also dirties up
   to 17 cache lines.

   This can be avoided by storing the pointer to the first object of a
   batch of 16 objects in the objects themselves and propagating it
   through the batch when an object is enqueued into a pool or onto a
   temporary hlist head on allocation (a batching sketch follows the
   list).

   This allows moving batches of objects with at most four cache lines
   dirtied and reduces the pool lock hold time and therefore contention
   significantly.

 - Improve object reuse.

   The current implementation frees unused objects too aggressively,
   which is counterproductive on bursty workloads like a kernel compile.

   Address this by:

      * increasing the per CPU pool size

      * refilling the per CPU pool from the to-be-freed pool when the
        per CPU pool has emptied a batch

      * keeping track of object usage with an exponentially weighted
        moving average, which prevents the work queue callback from
        freeing objects prematurely (see the EWMA sketch after this
        list).

   Combined, this reduces the allocation/free rate for a full kernel
   compile significantly:

                  kmem_cache_alloc()  kmem_cache_free()
      Baseline:   380k                330k
      Improved:   170k                117k

 - A few cleanups and a more cache line friendly layout of debug
   information on top.
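
A minimal userspace refill sketch for the single-refiller scheme in the
second item above: one atomic flag lets exactly one CPU perform the
refill while all others back off instead of piling onto the global pool
lock. The flag and function names (pool_refill_busy, try_fill_pool) are
illustrative assumptions, not identifiers from lib/debugobjects.c.

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Set while one thread performs the expensive refill. */
  static atomic_bool pool_refill_busy;

  static bool try_fill_pool(void)
  {
          bool expected = false;

          /* Only the thread that flips the flag does the refill work;
           * everyone else returns immediately and retries allocation. */
          if (!atomic_compare_exchange_strong(&pool_refill_busy, &expected, true))
                  return false;

          /*
           * Refill under the global pool lock: pull objects from the
           * to-be-freed list first and only allocate fresh objects from
           * the backing cache if that is not enough.
           */

          atomic_store(&pool_refill_busy, false);
          return true;
  }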
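
The batching item can be modeled in userspace with a singly linked pool
where every pushed object records the first object of its 16-object
batch; a full batch can then be detached by rewriting two pointers
instead of walking 16 nodes. The types and names in this batching sketch
(struct obj, batch_first, pool_push, pool_pop_batch) are assumptions,
not the kernel's data structures.

  #include <stddef.h>

  #define BATCH_SIZE 16

  struct obj {
          struct obj *next;
          struct obj *batch_first;  /* oldest object of this batch */
  };

  struct pool {
          struct obj *head;
          unsigned int cnt;
  };

  /* Push at the head and propagate the batch-first pointer: an object
   * that starts a new batch points at itself, later objects inherit
   * the pointer from the current head. */
  static void pool_push(struct pool *p, struct obj *o)
  {
          o->next = p->head;
          o->batch_first = (p->cnt % BATCH_SIZE) ? p->head->batch_first : o;
          p->head = o;
          p->cnt++;
  }

  /* Detach the newest complete batch in O(1): the head's batch_first
   * is the oldest object of that batch, so cutting the list right
   * after it moves 16 objects while touching only two of them. */
  static struct obj *pool_pop_batch(struct pool *p)
  {
          struct obj *first = p->head;

          if (!p->cnt || (p->cnt % BATCH_SIZE))
                  return NULL;

          p->head = first->batch_first->next;
          first->batch_first->next = NULL;
          p->cnt -= BATCH_SIZE;
          return first;
  }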
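
The usage tracking in the reuse item is, at its core, an integer
exponentially weighted moving average: the delayed free worker folds the
number of allocations seen since its last run into a running average and
only frees objects beyond that expected demand. This EWMA sketch uses an
assumed weight of 1/16 and invented names (note_usage, surplus); the
actual constants and helpers live in lib/debugobjects.c.

  #include <stdint.h>

  /* Running average of allocations per measurement interval. */
  static uint64_t avg_usage;

  /* Fold a new sample in with weight 1/16:
   * avg = avg * 15/16 + sample * 1/16 */
  static void note_usage(uint64_t allocs_since_last_run)
  {
          avg_usage = (avg_usage * 15 + allocs_since_last_run) / 16;
  }

  /* The free worker only hands back objects beyond the pool minimum
   * plus the recent average demand, so a bursty workload such as a
   * kernel compile still finds cached objects on its next burst. */
  static uint64_t surplus(uint64_t pool_cnt, uint64_t pool_min)
  {
          uint64_t keep = pool_min + avg_usage;

          return pool_cnt > keep ? pool_cnt - keep : 0;
  }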

* tag 'core-debugobjects-2024-11-18' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
  debugobjects: Track object usage to avoid premature freeing of objects
  debugobjects: Refill per CPU pool more agressively
  debugobjects: Double the per CPU slots
  debugobjects: Move pool statistics into global_pool struct
  debugobjects: Implement batch processing
  debugobjects: Prepare kmem_cache allocations for batching
  debugobjects: Prepare for batching
  debugobjects: Use static key for boot pool selection
  debugobjects: Rework free_object_work()
  debugobjects: Rework object freeing
  debugobjects: Rework object allocation
  debugobjects: Move min/max count into pool struct
  debugobjects: Rename and tidy up per CPU pools
  debugobjects: Use separate list head for boot pool
  debugobjects: Move pools into a datastructure
  debugobjects: Reduce parallel pool fill attempts
  debugobjects: Make debug_objects_enabled bool
  debugobjects: Provide and use free_object_list()
  debugobjects: Remove pointless debug printk
  debugobjects: Reuse put_objects() on OOM
  ...
2024-11-19 15:20:04 -08:00
842
crypto
dim
fonts
kunit
lz4
lzo
math
pldmfw
raid6
reed_solomon
test_fortify
vdso
xz
zlib_deflate
zlib_dfltcc
zlib_inflate
zstd
.gitignore
alloc_tag.c
argv_split.c
ashldi3.c
ashrdi3.c
asn1_decoder.c
asn1_encoder.c
assoc_array.c
atomic64_test.c
atomic64.c
audit.c
base64.c
bcd.c
bch.c
bitfield_kunit.c
bitmap-str.c
bitmap.c
bitrev.c
bootconfig-data.S
bootconfig.c
bsearch.c
btree.c
bucket_locks.c
bug.c
build_OID_registry
buildid.c
bust_spinlocks.c
check_signature.c
checksum_kunit.c
checksum.c
closure.c
clz_ctz.c
clz_tab.c
cmdline_kunit.c
cmdline.c
cmpdi2.c
cmpxchg-emu.c
codetag.c
compat_audit.c
cpu_rmap.c
cpumask_kunit.c
cpumask.c
crc4.c
crc7.c
crc8.c
crc16.c
crc32.c
crc32defs.h
crc32test.c
crc64-rocksoft.c
crc64.c
crc-ccitt.c
crc-itu-t.c
crc-t10dif.c
ctype.c
debug_info.c
debug_locks.c
debugobjects.c
dec_and_lock.c
decompress_bunzip2.c
decompress_inflate.c
decompress_unlz4.c
decompress_unlzma.c
decompress_unlzo.c
decompress_unxz.c
decompress_unzstd.c
decompress.c
devmem_is_allowed.c
devres.c
dhry_1.c
dhry_2.c
dhry_run.c
dhry.h
digsig.c
dump_stack.c
dynamic_debug.c
dynamic_queue_limits.c
earlycpio.c
errname.c
error-inject.c
errseq.c
extable.c
fault-inject-usercopy.c
fault-inject.c
fdt_addresses.c
fdt_empty_tree.c
fdt_ro.c
fdt_rw.c
fdt_strerror.c
fdt_sw.c
fdt_wip.c
fdt.c
find_bit_benchmark.c
find_bit.c
flex_proportions.c
fortify_kunit.c
fw_table.c
gen_crc32table.c
gen_crc64table.c
genalloc.c
generic-radix-tree.c
glob.c
globtest.c
group_cpus.c
hashtable_test.c
hexdump.c
hweight.c
idr.c
inflate.c
interval_tree_test.c
interval_tree.c
iomap_copy.c
iomap.c
iommu-helper.c
iov_iter.c
irq_poll.c
irq_regs.c
is_signed_type_kunit.c
is_single_threaded.c
kasprintf.c
Kconfig
Kconfig.debug
Kconfig.kasan
Kconfig.kcsan
Kconfig.kfence
Kconfig.kgdb
Kconfig.kmsan
Kconfig.ubsan
kfifo.c
klist.c
kobject_uevent.c
kobject.c
kstrtox.c
kstrtox.h
kunit_iov_iter.c
libcrc32c.c
linear_ranges.c
list_debug.c
list_sort.c
list-test.c
llist.c
locking-selftest-hardirq.h
locking-selftest-mutex.h
locking-selftest-rlock-hardirq.h
locking-selftest-rlock-softirq.h
locking-selftest-rlock.h
locking-selftest-rsem.h
locking-selftest-rtmutex.h
locking-selftest-softirq.h
locking-selftest-spin-hardirq.h
locking-selftest-spin-softirq.h
locking-selftest-spin.h
locking-selftest-wlock-hardirq.h
locking-selftest-wlock-softirq.h
locking-selftest-wlock.h
locking-selftest-wsem.h
locking-selftest.c
lockref.c
logic_iomem.c
logic_pio.c
lru_cache.c
lshrdi3.c
lwq.c
Makefile
maple_tree.c
memcat_p.c
memcpy_kunit.c
memory-notifier-error-inject.c
memregion.c
memweight.c
muldi3.c
net_utils.c
netdev-notifier-error-inject.c
nlattr.c
nmi_backtrace.c
notifier-error-inject.c
notifier-error-inject.h
objagg.c
objpool.c
of-reconfig-notifier-error-inject.c
oid_registry.c
once.c
overflow_kunit.c
packing.c
parman.c
parser.c
percpu_counter.c
percpu_test.c
percpu-refcount.c
plist.c
pm-notifier-error-inject.c
polynomial.c
radix-tree.c
radix-tree.h
random32.c
ratelimit.c
rbtree_test.c
rbtree.c
rcuref.c
ref_tracker.c
refcount.c
rhashtable.c
sbitmap.c
scatterlist.c
seq_buf.c
sg_pool.c
sg_split.c
siphash_kunit.c
siphash.c
slub_kunit.c
smp_processor_id.c
sort.c
stackdepot.c
stackinit_kunit.c
stmp_device.c
string_helpers_kunit.c
string_helpers.c
string_kunit.c
string.c
strncpy_from_user.c
strnlen_user.c
syscall.c
test_bitmap.c
test_bitops.c
test_bits.c
test_blackhole_dev.c
test_bpf.c
test_debug_virtual.c
test_dynamic_debug.c
test_firmware.c
test_fprobe.c
test_fpu_glue.c
test_fpu_impl.c
test_fpu.h
test_free_pages.c
test_hash.c
test_hexdump.c
test_hmm_uapi.h
test_hmm.c
test_ida.c
test_kmod.c
test_kprobes.c
test_linear_ranges.c
test_list_sort.c
test_lockup.c
test_maple_tree.c
test_memcat_p.c
test_meminit.c
test_min_heap.c
test_module.c
test_objagg.c
test_objpool.c
test_parman.c
test_printf.c
test_ref_tracker.c
test_rhashtable.c
test_scanf.c
test_sort.c
test_static_key_base.c
test_static_keys.c
test_sysctl.c
test_ubsan.c
test_uuid.c
test_vmalloc.c
test_xarray.c
test-kstrtox.c
textsearch.c
timerqueue.c
trace_readwrite.c
ts_bm.c
ts_fsm.c
ts_kmp.c
ubsan.c
ubsan.h
ucmpdi2.c
ucs2_string.c
union_find.c
usercopy_kunit.c
usercopy.c
uuid.c
vsprintf.c
win_minmax.c
xarray.c
xxhash.c