Commit 6d23d3ac authored by Oran Agra

Squashed 'deps/jemalloc/' changes from ea6b3e973..54eaed1d8

54eaed1d8 Merge branch 'dev'
304c91982 Update ChangeLog for 5.3.0.
8cb814629 Make the default option of zero realloc match the system allocator.
66c889500 Make test/unit/background_thread_enable more conservative.
a7d73dd4c Update TUNING.md to include the new tcache_max option.
254b01191 Small doc tweak of opt.trust_madvise.
f5e840bbf Minor typo fix in doc.
ceca07d2c Correct the name of stats.mutexes.prof_thds_data in doc.
391bad4b9 Avoid abort() in test/integration/cpp/infallible_new_true.
9a242f16d fix some typos
0e29ad4ef Rename zero_realloc option "strict" to "alloc".
5841b6dbe Update FreeBSD image to 12.3 for cirrus ci.
ed5fc14b2 Use volatile to workaround buffer overflow false positives.
25517b852 Reoreder TravisCI jobs to optimize CI time
8a49b62e7 Enable TravisCI for Windows
fdb6c1016 Add FreeBSD to TravisCI
a93931537 Do not disable SEC by default for 64k pages platforms
eaaa368ba Add comments and use meaningful vars in sz_psz2ind.
5bf03f8ce Implement PAGE_FLOOR macro
52631c90f Fix size class calculation for sec
7ae0f15c5 Add a default page size when cross-compile for Apple M1.
eb65d1b07 Fix FreeBSD system jemalloc TSD cleanup
78b58379c Fix possible "nmalloc >= ndalloc" assertion.
ca709c313 Fix failed assertion due to racy memory access
063d134ae Properly detect background thread support on Darwin.
a4e81221c Document 'make uninstall'
20f9802e4 Avoid overflow warnings in test/unit/safety_check.
8c59c44ff Add a dependency checking step at the end of malloc_conf_init.
efc539c04 Initialize prof_leak during prof init.
002f0e939 Disable TravisCI jobs generation for Windows
01a293fc0 Add Windows to TravisCI
b798fabdf Add prof_leak_error option
eafd2ac39 Forbid spaces in prefix and exec_prefix
36a09ba2c Forbid spaces in install suffix
640c3c72e Add support for 'make uninstall'
f15d8f3b4 Echo installed files via verbose 'install' command
eb196815d Avoid calculating size of size class twice & delete sc_data_global.
011449f17 Fix doc build with install-suffix.
8b49eb132 Fix the HELP_STRING of --enable-doc.
ddb170b1d Simplify arena_migrate() to take arena_t* instead of indices.
648b3b9f7 Lower the num_threads in the stress test of test/unit/prof_recent
d66162e03 Fix the extent state checking on the merge error path.
c9946fa7e FreeBSD also needs the OS-X "don't declare system functions as nothrow" fix since it also has jemalloc in the base system
89fe8ee6b Use the isb instruction instead of yield for spin locks on arm
6230cc88b Add background thread sleep retry in test/unit/hpa_background_thread
61978bbe6 Purge all if the last thread migrated away from an arena.
c91e62dd3 #include <features.h> as requested
18510020e Fix symbol conflict with musl libc
f509703af Fix two conversion warnings in tcache.
067c2da07 Fix unnecessary returns in san_(un)guard_pages_two_sided.
d660683d3 Fix test config of lg_san_uaf_align.
eabe88916 Rename full_position to low_bound in cache_bin.h.
dfdd7562f Rename san_enabled() to san_guard_enabled().
01d61a3c6 Fix a conversion warning.
8b34a788b Fix an used-uninitialized warning (false positive).
e491cef9a Add stats for stashed bytes in tcache.
b75822bc6 Implement use-after-free detection using junk and stash.
06aac61c4 Split the core logic of tcache flush into a separate function.
d038160f3 Fix shadowed variable usage.
bd70d8fc0 Add the profiling settings for tests explicit.
e491df1d2 Fix warnings when using autoheader.
60b9637cc Only invoke malloc_cpu_count_is_deterministic() when necessary.
837b37c4c Fix the time-since computation in HPA.
310af725b Add nstime_ns_since which obtains the duration since the input time.
cafe9a315 Disable percpu arena in case of non deterministic CPU count
bb5052ce9 Fix base_ehooks_get_for_metadata
9015e129b Update visual studio projects
d90655390 San: Create a function for committing and zeroing
800ce49c1 San: Bump alloc frequently reused guarded allocations
f56f5b993 Pass 'frequent_reuse' hint to PAI
2c70e8d35 Rename 'arena_decay' to 'arena_util'
0f6da1257 San: Implement bump alloc
34b00f896 San: Avoid running san tests with prof enabled
62f9c54d2 San: Rename 'guard' to 'san'
d9bbf539f CI: Refactor gen_travis.py
7dcf77809 Mark slab as true on sized dealloc fast path.
af6ee27c0 Enforce abort_conf:true when malloc_conf is not fully recognized.
113e8e68e freebsd 14 build fix proposal.
3b3257a70 Correct opt.prof_leak documentation
cdabe908d Track the initialized state of nstime_t on debug build.
400c59895 Fix uninitialized nstime reading / updating on the stack in hpa.
8b81d3f21 Fix the initialization of last_event in thread event init.
6bdb4f5ab Check prof_active in addtion to opt_prof during batch_alloc().
37342a4d3 Add ctl interface for experimental_infallible_new.
6cb585b13 San: Unguard guarded slabs during arena destruction
b6a7a535b Optimize away a branch on the free fastpath.
4d56aaeca Optimize away the tsd_fast() check on free fastpath.
26f5257b8 Remove declaration of an undefined function
215961541 Add new architecture loongarch.
8daac7958 Redefine functions with test hooks only for tests
c9ebff0fd Initialize deferred_work_generated
912324a1a Add debug check outside of the loop in hpa_alloc_batch.
cf9724531 Darwin malloc_size override support proposal.
ab0f1604b Delay the atexit call to prof_log_start().
11b6db744 CPU affinity on BSD platforms support.
83f329402 Small refactors around 7bb05e0.
3c4b717ff Remove unused header base_structs.h.
deb8e62a8 Implement guard pages.
7bb05e04b add experimental.arenas_create_ext mallctl
a9031a097 Allow setting a dump hook
f7d46b811 Allow setting custom backtrace hook
523cfa55c Guard prof related mallctl with opt_prof.
6e848a005 Remove opt_background_thread_hpa_interval_max_ms
8229cc77c Wake up background threads on demand
97da57c13 HPA: Add min_purge_interval_ms option
b8b8027f1 Allow PAI to calculate time until deferred work
26140dd24 Reject --enable-prof-libunwind without --enable-prof
e5062e9fb Makefile.in: make sure doc generated before install
8b24cb8fd Don't assume initialized arena in the default alloc hook.
c01a885e9 HPA: Correctly calculate retained pages
2c625d5cd Fix warnings when compiled with clang
9d02bdc88 Port gen_run_tests.py to python3
5884a076f Rename prof.dump_prefix to prof.prefix
6a0160071 Add Cirrus CI testing matrix
f58064b93 Verify that HPA is used before calling its functions
27f71242b Mutex: Tweak internal spin count.
6f41ba55e Mutex: Make spin count configurable.
dae24589b PH: Insert-below-min fast-path.
40d53e007 ph: Add aux-list counting and pre-merging.
dcb7b83fa Eset: Cache summary information for heap edatas.
252e0942d Eset: Pull per-pszind data into structs.
dc0a4b8b2 Edata: Pull out comparison fields into a summary.
0170dd198 Edata: Fix a couple typos.
08a4cc096 Pairing heap: inline functions instead of macros.
92a1e38f5 edata_cache: Allow unbounded fast caching.
d93eef2f4 HPA: Introduce a redesigned hpa_central_t.
e09eac1d4 Remove hpa_central.
c88fe355e Add unit tests for decay
aaea4fd1e Add more documentation to decay.c
4b633b9a8 Clean up background thread sleep computation
6630c5989 HPA: Hugification hysteresis.
113938b6f HPA: Pull out a hooks type.
1d4a7666d HPA: Do deferred operations on background threads.
583284f2d Add HPA deferral functionality.
ace329d11 HPA batch dalloc: Just do one deferred work check.
47d8a7e6b psset: Purge empty slabs first.
41fd56605 HPA: Purge across retained extents.
347523517 PAI: Fix a typo.
9c42ed2d1 Travis: Don't test "clang" on OS X.
d202218e8 HPA: Fix typos with big performance implications.
de033f56c mpsc_queue: Add module.
4452a4812 Add opt.experimental_infallible_new.
0689448b1 Travis: Unbreak the builds.
4fb93a18e extent_can_acquire_neighbor typo fix
2381efab5 ARC: add Minimum allocation alignment
2c0f4c2ac Fix typo in configure.ac: experimetal -> experimental
36c6bfb96 SEC: Allow arbitrarily many shards, cached sizes.
11beab38b Added --debug-syms-by-id option
08089589f Fix an interaction between the oversize_threshold test and bgthds.
541793821 Red-black tree: add summarize/filter.
b2c08ef2e RB unit tests: don't test reentrantly.
aea91b8c3 Clean up some minor data structure inconsistencies
1f688490e Stats: Fix a printing bug when hpa_dirty_mult = -1
4f7cb3a41 Sized deallocation: fix a typo.
12cd13cd4 Fix thread.name/prof_sys_thread_name interaction
304cdbb13 Fix a prof_recent/prof_sys_thread_name interaction
9b523c6c1 Refactor the locking in extent_recycle().
ce68f326b Avoid the release & re-acquire of the ecache locks around the merge hook.
7dc77527b Delete the mutex_pool module.
03d95cba8 Remove the unnecessary arena_ind_set in base_alloc_edata().
3093d9455 Move the edata mergeability related functions to extent.h.
7c964b035 Add rtree_write_range(): writing the same content to multiple leaf elements.
add636596 Stop checking head state in the merge hook.
49b7d7f0a Passing down the original edata on the expand path.
178493968 Use rtree tracked states to protect edata outside of ecache locks.
9ea235f8f Add witness_assert_positive_depth_to_rank().
4d8c22f9a Store edata->state in rtree leaf and make edata_t 128B aligned.
70d1541c5 Track extent is_head state in rtree leaf.
862219e46 Add quiescence sync before deleting base during arena_destroy.
a137a6825 Remove redundant declaration, pac_retain_grow_limit_get_set was declared twice in pac.h
2ae1ef7db Fix doc large size 54 KiB error
61afb6a40 Fix locking on arena_i_destroy_ctl().
9193ea224 Cirrus: fix build.
391307714 Mark head state during dss alloc.
11127240c Remove redundant enable-debug definition in configure.
22be724af Set is_head in extent_alloc_wrapper w/ retain.
73ca4b8ef HPA: Use dirtiest-first purging.
0f6c420f8 HPA: Make purging/hugifying more principled.
6bddb92ad psset: Rename "bitmap" to "pageslab_bitmap".
154aa5fcc Use the flat bitmap for eset and psset bitmaps.
271a676dc hpdata: early bailout for longest free range.
d21d5b46b Edata: Move sn into its own field.
fb327368d SEC: Expand option configurability.
ce9386370 HPA: Implement batch allocation.
cdae6706a SEC: Use batch fills.
480f3b11c Add a batch allocation interface to the PAI.
bf448d7a5 SEC: Reduce lock hold times.
1944ebbe7 HPA: Implement batch deallocation.
f47b4c2cd PAI/SEC: Add a dalloc_batch function.
4b8870c7d SEC: Fix a comment typo.
cde7097ec Update INSTALL.md to mention 'autoconf'
a11be5033 Implement opt.cache_oblivious.
8c5e5f50a Fix stats for "tcache_max" (was "lg_tcache_max")
041145c27 Report the correct and wrong sizes on sized dealloc bug detection.
f3b2668b3 Report the offending pointer on sized dealloc bug detection.
edbfe6912 Inline malloc fastpath into operator new.
79f81a373 HPA: Make dirty_mult configurable.
32dd15379 HPA: Make dehugification threshold configurable.
4790db15e HPA: make the hugification threshold configurable.
b3df80bc7 Pull HPA options into a containing struct.
bdb7307ff fxp: Add FXP_INIT_PERCENT
caef4c286 FXP: add fxp_mul_frac.
56e85c0e4 HPA: Use a whole-shard purging heuristic.
dc886e560 hpdata: Return the number of pages to be purged.
9fd9c876b psset: keep aggregate stats.
da63f23e6 HPA: Track pending purges/hugifies in the psset.
0ea3d6307 CTL, Stats: report HPA empty slab stats.
bf64557ed Move empty slab tracking to the psset.
99fc0717e psset: Reconceptualize insertion/removal.
061cabb71 HPA stats: report retained instead of inactive.
d3e5ea03c HPA: Track dirty stats.
68a1666e9 hpdata: Rename "dirty" to "touched".
be0d7a53f HPA: Don't track inactive pages.
55e0f60ca psset stats: Simplify handling.
94cd9444c HPA: Some minor reformattings.
b25ee5d88 HPA: Add purge stats.
746ea3de6 HPA stats: Allow some derived stats.
30b9e8162 HPA: Generalize purging.
70692cfb1 hpdata: Add state changing helpers.
9b75808be flat bitmap: Add a bitwise and/or/not.
2ae966222 hpdata: track per-page dirty state.
ff4086aa6 hpdata: count active pages instead of free ones.
3624dd42f hpdata: Add a comment for hpdata_consistent.
20140629b Bin: Move stats closer to the mutex.
c259323ab Use ticker_geom_t for arena tcache decay.
8edfc5b17 Add ticker_geom_t.
396732981 Arena: share bin offsets in a global.
2fcbd1811 Cache bin: Don't reverse flush order.
4c46e1136 Cache an arena's index in the arena.
229994a20 Tcache flush: keep common path state in registers.
31a629c3d Tcache flush: prefetch edata contents.
9f9247a62 Tcache fluhing: increase cache miss parallelism.
181ba7fd4 Tcache flush: Add an emap "batch lookup" path.
c007c537f Tcache flush: Unify edata lookup path.
35a855260 Mac OS: Tag mapped pages.
f6699803e Fix duration in prof log
a943172b7 Add runtime detection for MADV_DONTNEED zeroes pages (mostly for qemu)
2e3104ba0 Update config.{sub,guess} to support support-aarch64-apple-darwin as a target
a011c4c22 cache_bin: Separate out local and remote accesses.
14d689c0f Add prof stats mutex stats
9f71b5779 Output prof stats in stats print
1f1a0231e Split macros for initializing stats headers
4352cbc21 Add alignment tests for prof stats
54f3351f1 Add mallctl for prof stats fetching
40fa4d29d Track per size class internal fragmentation
afa489c3c Record request size in prof info
f9bb8dede Un-force-inline do_rallocx.
a9fa2defd Add JEMALLOC_COLD, and mark some functions cold.
5d8e70ab2 prof_recent: cassert(config_prof) more often.
83cad746a prof_log: cassert(config_prof) in public functions
526180b76 Extent.c: Avoid an rtree NULL-check.
b35ac00d5 Do not bump to large size for page aligned request
8a56d6b63 Add last-N mutex stats
22d62d8cb Handle ending gap properly for HPA stats
6c5a3a24d Omit bin stats rows with no data
ea013d8fa Enforce realloc sizing stability
74bd63b20 Optimize stats print using partial name-to-mib
4557c0a67 Enable ctl on partial mib and partial name
006dd0414 Add partial name-to-mib functionality
f2e1a5be7 Do not fail on partial ctl path for ctl_nametomib()
6ab181d2b Extract node lookup given mib input
3a627b967 No need to record all nodes in ctl_lookup()
91e006c4c Enable ctl_lookup() to start from arbitrary node
063a767ff Define JEMALLOC_HAS_ALLOCA_H for QNX
4e3fe218e Use posix_madvise to purge pages when available
26c1dc5a3 Support AutoConf for posix_madvise and POSIX_MADV_DONTNEED
96a59c3bb Fix recursive malloc during bootstrap on QNX
986cbe488 Disable JEMALLOC_TLS for QNX
1e3b8636f HPA: Remove unused malloc_conf options.
e82771807 Cache mallctl mib for batch allocation stress test
0dfdd31e0 Add tiny batch size to batch allocation stress test
9522ae41d Move n_search outside of assert as reported by static analyzer
a559caf74 hpdata: Strengthen assertions.
f51948d9e psset unit test: fix a bug.
54c94c167 flat bitmap: add scount / ucount functions.
e6c057ad3 fb: implement assign in terms of a visitor.
734e72ce8 bit_util: Guarantee popcount's presence.
d9f7e6c66 hpdata: Add a test.
3ed0b4e8a HPA: Add an nevictions counter.
fffcefed3 malloc_conf: Clarify HPA options.
f7cf23aa4 psset: Relegate alloc/dalloc to test code.
f9299ca57 HPA: Use psset fit/insert/remove.
0971e1e4e hpdata: Use addr/size instead of begin/npages.
5228d869e psset: Use fit/insert/remove as basis functions.
089f8fa44 Move hpdata bitmap logic out of the psset.
ca30b5db2 Introduce hpdata_t.
4a15008cf HPA unit test: skip if unsupported.
43af63fff HPA: Manage whole hugepages at a time.
63677dde6 Pages: Statically detect if pages_huge may succeed
c1b2a7793 psset: Move in stats.
d0a991d47 psset: Add insert/remove functions.
d438296b1 narenas_ratio: Accept fractional values.
ecd39418a Add fxp: A fixed-point math library.
99c2d6c23 Backport jeprof --collapse for flamegraph generation
520b75fa2 utrace support with label based signature.
92e189be8 Add some comments to the batch allocation logic flow
d96e4525a Route batch allocation of small batch size to tcache
ac480136d Split out locality checking in batch allocation tests
be5e49f4f Add a batch mode for cache_bin_alloc()
4a65f3493 Fix a cache bin test
566c4a859 Slight changes to cache bin internal functions
9545c2cd3 Add sample interval to prof last-N dump
cf2549a14 Add a per-arena oversize_threshold.
4ca3d91e9 Rename geom_grow -> exp_grow.
b4c37a6e8 Rename edata_tree_t -> edata_avail_t.
95f0a77fd Detect pthread_getname_np explicitly.
b3c5690b7 Update config.{guess,sub} to 2020-11-07@77632d9
589638182 Use the edata_cache_small_t in the HPA.
03a604711 Edata cache small: rewrite.
c9757d9e3 HPA: Don't disable shards that were never started.
1b3ee7566 Add experimental.thread.activity_callback.
27ef02ca9 Android build fix proposal.
d2d941017 MADV_DO[NOT]DUMP support equivalence on FreeBSD.
180b84315 Appveyor: fix 404 errors.
ef6d51ed4 DragonFlyBSD build support.
bf72188f8 Allow opt.tcache_max to accept small size classes.
ea32060f9 SEC: Implement thread affinity.
d16849c91 psset: Do first-fit based on slab age.
634ec6f50 Edata: add an "age" field.
6599651ae PA: Use an SEC in fron of the HPA shard.
ea51e97bb Add SEC module: a small extent cache.
1964b0839 HPA: Add stats for the hpa_shard.
534504d4a HPA: add size-exclusion functionality.
484f04733 HPA: Add central mutex contention stats.
bf025d2ec HPA: Make slab sizes and maxes configurable.
1c7da3331 HPA: Tie components into a PAI implementation.
c8209150f Switch from opt.lg_tcache_max to opt.tcache_max
5ba861715 Add thread name in prof last-N records
4ef5b8b4d Add a logo to doc_internal.
5e41ff9b7 Add a hard limit on tcache max size class.
3de19ba40 Eagerly detect double free and sized dealloc bugs for large sizes.
be9548f2b Tcaches: Fix a subtle race condition.
a9aa6f6d0 Fix the alloc_ctx check in free_fastpath.
b971f7c4d Add "default" option to slab sizes.
21b70cb54 Add hpa_central module
1ed7ec369 Emap: Add emap_assert_not_mapped.
2a6ba121b PRNG test: cleanups.
9e6aa77ab PRNG: Remove atomic functionality.
051304717 PRNG: Allow a a range argument of 1.
bdb60a805 Appveyor: don't update msys2 keyring.
025d8c37c Add a script to check for clang-formattedness.
f6bbfc1e9 Add a .clang-format file.
259c5e3e8 psset: Add stats
018b162d6 Add psset: a set of pageslabs.
ed99d300b Flat bitmap: Add longest-range computation.
e03450069 Edata: rename "ranged" bit to "pai".
7ad2f7866 Avoid a -Wundef warning on LG_SLAB_MAXREGS.
40cf71a06 Remove --with-slab-maxregs options from INSTALL.md
36ebb5abe CI support for PPC64LE architecture
1541ffc76 configure: add --with-lg-slab-maxregs configure option.
d243b4ec4 Add PROFILING_INTERNALS.md
09eda2c9b Add unit tests for usize in prof recent records
b549389e4 Correct usize in prof last-N record
202f01d4f Fix szind computation in profiling
866231fc6 Do not repeat reentrancy test in profiling
20f2479ed Do not create size class tables for non-prof builds
8efcdc3f9 Move unbias data to prof_data
5e90fd006 Geom_grow: Don't keep the mutex internal.
c57494879 Geom_grow: Don't take tsdn at init.
ffe552223 Geom_grow: Move in advancing logic.
131b1b533 Rename ecache_grow -> geom_grow.
b399463fb flat_bitmap unit test: Silence a warning.
b0ffa39ca Mallctl stress test: fix a type.
753bbf184 Benchmarks: Also print ns / iter.
7b187360e IO: Support 0-padding for unsigned numbers.
32d467322 Add a mallctl speed stress test.
38867c5c1 Makefile: alphabetize stress/analyze utilities.
ab274a23b Add narenas_ratio.
9e18ae639 Config: safety checks don't imply size checks.
8f9e958e1 Add alignment stress test for rallocx
743021b63 Fix size miscalculation bug in reallocation
eaed1e39b Add sized-delete size-checking functionality.
53084cc5c Safety check: Don't directly abort.
60993697d Prof: Add prof_unbias.
81c2f841e Add a simple utility to detect profiling bias.
e032a1a1d Add a stress test for batch allocation
f6cf5eb38 Add mallctl for batch allocation API
978f830ee Add batch allocation API
c6f59e9bb Add surplus reading API for thread event lookahead
f80546895 Add zero option to arena batch allocation
49e5c2fe7 Add batch allocation from fresh slabs
2bb8060d5 Add empty test and concat for typed list
f28cc2bc8 Extract bin shard selection out of bin locking
ddb8dc4ad FB: Add range iteration support.
ceee82351 Add flat_bitmap.
7fde6ac49 Nbits: Add a couple more interesting sizes.
efeab1f49 bitset test: Pull NBITS_TAB into its own file.
22da83609 bit_util: Add fls_ functions; "find last set".
1ed0288d9 bit_util: Change ffs functions indexing.
786a27b9e CI: Update keyring.
fb347dc61 Verify output space before doing heavy work in mallctl
f5fb4e5a9 Modify mallctl output length when needed
425840204 Corrections for prof_log_start()
e6cb7a1c9 Shorten wait time for peak events
6107857b7 PA->PAC: Move in PAI implementation.
6041aaba9 PA -> PAC: Move in destruction functions.
cbf096b05 Arena: remove redundant bg inactivity check.
471eb5913 PAC: Move in decay rate setting.
6a2774719 PA->PAC: Move in decay functions.
4ee75be3a PA -> PAC: Move in decay_purge enum.
72435b0ab PA->PAC: Make extent.c forget about PA.
dee5d1c42 PA->PAC: Move in extent_sn.
739138234 PA->PAC: Move in stats.
db211eefb PAC: Move in decay.
c81e38999 PAC: Move in ecache_grow.
65803171a PAC: move in emap
7efcb946c PAC: Add an init function.
722652222 PAC: Move in edata_cache accesses.
777b0ba96 Add PAC: Page allocator classic.
1b5f632e0 Introduce PAI: Page allocator interface
3cf19c6e5 atomic: add atomic_load_sub_store
f1f4ec315 Tcache: Tweak nslots_max tuning parameter.
ae541d3fa Edata: Reserve some space for hugepages.
392f645f4 Edata: split up different list linkage uses.
129b72705 Add typed-list module.
00f06c9be enabling mpss on solaris/illumos.
c2e7a0639 No need to intercept prof_dump_header() in tests
f58ebdff7 Generalize prof_cnt_all() for testing
80d18c18c Pass prof dump parameters explicitly in prof_sys
d4259ea53 Simplify signatures for prof dump functions
5d823f3a9 Consolidate struct definitions for prof dump parameters
1f5fe3a3e Pass write callback explicitly in prof_data
4556d3c0c Define structures for prof dump parameters
1c6742e6a Migrate prof dumping to use buffered writer
dad821bb2 Move unwind to prof_sys
d128efcb6 Relocate a few prof utilities to the right modules
4736fb4fc Move file handling logic in prof_data to prof_sys
767a2e179 Move file handling logic in prof to prof_sys
03ae509f3 Create prof_sys module for reading system thread name
adfd9d7b1 Change tsdn to tsd for thread name allocation
841af2b42 Move thread name handling to prof_data module
8118056c0 Expose prof_data testing internals only in prof tests
f43ac8543 Correct prof header macro namings
c8683bee8 Unify printing for prof counts object
5d292b566 Push error handling logic out of core dumping logic
f541871f5 Reduce prof dump buffer size in debug build
354183b10 Define prof dump buffer size centrally
7455813e5 Make dump file writing replaceable in test
21e44c45d Make maps file opening replaceable in test
4bb4037db Extract utility function for opening maps file
f307b2580 Only replace the dump file opening function in test
d8cea8756 Move size inspections to test/analyze
537a4bedb Add a tool to examine random number distributions
d460333ef Improve naming for prof system thread name option
25e43c602 Witness: Make ranks an enum.
092fcac0b Remove unnecessary source files
a795b1932 Remove beginning define in source files
24bbf376c Unify arena flag reading and selection
e128b170a Do not fallback to auto arena when manual arena is requested
95a59d2f7 Unify tcache flag reading and selection
4b0c00848 Unify zero flag reading and setting
2a84f9b8f Unify alignment flag reading and computation
b7858abfc Expose prof testing internal functions
40fa6674a Fix prof timestamp conf reading
7e09a57b3 stress/sizes: Fix an off-by-one issue.
dcfa6fd50 stress/sizes: Add a couple more types.
40672b0b7 Remove duplicate logging in malloc.
4aea74327 High Resolution Timestamps for Profiling
d82a164d0 Add thread.peak.[read|reset] mallctls.
fe7108305 Add peak_t, for tracking allocator net max.
17a64fe91 Add a small program to print data structure sizes.
3e19ebd2e Add lock to protect prof last-N dumping
a835d9cf8 Make prof last-N dumping non-blocking
fc8bc4b5c Increase dump buffer for prof last-N list
264d89d64 Extract restore and async cleanup functions for prof last-N list
857ebd3da Make edata pointer on prof recent record an atomic fence
b8bdea6b2 Fix: prof_recent_alloc_max_ctl_read() does not take tsd
730658f72 Extract alloc/dalloc utility for last-N nodes
035be4486 Separate out dumping for each prof recent record
8da0896b7 Tcache: Make an integer conversion explicit.
cd28e6033 Don't warn on uniform initialization.
6cdac3c57 Tcache: Make flush fractions configurable.
7503b5b33 Stats, CTL: Expose new tcache settings.
ee72bf1cf Tcache: Add tcache gc delay option.
d338dd45d Tcache: Make incremental gc bytes configurable.
ec0b57956 Tcache: Privatize opt_lg_tcache_max default.
10b96f635 Tcache: Remove some unused gc constants.
181093173 Tcache: make slot sizing configurable.
b58dea8d1 Cache bin: expose ncached_max publicly.
634afc412 Tcache: Make size computation configurable.
97b7a9cf7 Add a fill/flush microbenchmark.
33372cbd4 cpu instruction spin wait for arm32/64
27f29e424 LQ_QUANTUM should be 4 on mips64 hardware.
eda9c2858 Edata: zero stack edatas before initializing.
5dead37a9 Allow narenas:default.
dcea2c0f8 Get rid of TSD -> thread event dependency
75dae934a Always initialize TE counters in TSD init
b06dfb9cc Push event handlers to constituent modules
381c97caa Treat postponed prof sample event as new event
abd467493 Extract out per event postponed wait time fetching
f72014d09 Only compute thread event threshold once per trigger
7324c4f85 Break down event init and handler functions
6de77799d Move thread event wait time update to local
733ae918f Extract out per event new wait time fetching
1e2524e15 Do not reset sample wait time when re-initing tdata
855d20f6f Remove outdated comments in thread event
fc052ff72 Migrate counter to use locked int
b543c20a9 Minor update to locked int
f533ab6da Add forking handling for stats
508303077 Add forking handling for prof idump counter
4d970f8bf Add forking handling for counter module
2097e1945 Unify write callback signature
fef9abdcc Cleanup tcache allocation logic
e6cb6919c Consolidate prof inline function headers
d454af90f Remove unused prof_accum field from arena
8be558449 Initialize prof idump counter once rather than once per arena
e10e5059e Make prof_idump_accum() non-inline
039bfd4e3 Do not rollback prof idump counter in arena_prof_promote()
0295aa38a Deduplicate entries in witness error message
f1f8a7549 Let opt.zero propagate to core allocation.
2c09d4349 Add a benchmark of large allocations.
46471ea32 SC: Name the max lookup constant.
79dd0c04e SC: Simplify SC_NPSIZES computation.
fb6cfffd3 Configure: Get rid of LG_QUANTA.
4f8efba82 TSD: Make rtree_ctx a slow-path field.
cd29ebefd Tcache: treat small and large cache bins uniformly
a13fbad37 Tcache: split up fast and slow path data.
7099c6620 Arena: fill in terms of cache_bins.
40e7aed59 TSD: Move in some of the tcache fields.
58a00df23 TSD: Put all fast-path data together.
3589571bf SC: use SC_LG_NGROUP instead of its value.
877af247a QL, QR: Add documentation.
79ae7f921 Rtree: Remove the per-field accessors.
26e9a3103 PA: Simple decay test.
bb6a41852 Emap: Drop szind/slab splitting parameters.
50289750b Extent: Remove szind/slab knowledge.
dc26b3009 Rtree: Clean up compact/non-compact split.
93b99dd14 Extent: Stop passing an edata_cache everywhere.
a4759a191 Ehooks: avoid touching arena_emap_global in tests.
11c47cb13 Extent: Take "bool zero" over "bool *zero".
1a1124462 PA: Take zero as a bool rather than as a bool *.
294b276fc PA: Parameterize emap.  Move emap_global to arena.
f73057727 Eset: Parameterize last globals accesses.
7bb6e2dc0 Eset: take opt_lg_max_active_fit as a parameter.
883ab327c Emap: Move out last edata state touching.
0c96a2f03 Emap: Move out remaining edata modifications.
dfef0df71 Emap: Move edata modification out of emap_remap.
12eb888e5 Edata: Add a ranged bit.
bd4fdf295 Rtree: Pull leaf contents into their own struct.
faec7219b PA: Move in decay initialization.
45671e4a2 PA: Move in retain growth limit setting.
daefde88f PA: Move in mutex stats reading.
07675840a PA: Move in some more internals accesses.
238f3c743 PA: Move in full stats merging.
81c602759 Arena stats: Give it its own "mapped".
506d907e4 PA: Move in basic stats merging.
f29f6090f PA: Add pa_extra.c and put PA forking there.
8164fad40 Stats: Fix edata_cache size merging.
565045ef7 Arena: Make more derived stats non-atomic/locked.
d0c43217b Arena stats: Move retained to PA, use plain ints.
e2cf3fb1a PA: Move in all modifications of mapped.
436789ad9 PA: Make mapped stat atomic.
3c28aa6f1 PA: Move edata_avail stat in, make it non-atomic.
f6bfa3dcc Move extent stats to the PA module.
527dd4cdb PA: Move in nactive counter.
c075fd0bc PA: Minor cleanups and comment fixes.
46a9d7fc0 PA: Move in rest of purging.
2d6eec7b5 PA: Move in decay-all pathway.
65698b7f2 PA: Remove public visibility of some internals.
f012c43be PA: Move in decay_to_limit
103f5feda Move bg thread activity check out of purging core.
3034f4a50 PA: Move in decay_stashed.
aef28b2f8 PA: Move in stash_decayed.
655a09634 Move bg inactivity check out of purge inner loop.
71fc0dc96 PA: Move in remaining page allocation functions.
74958567a PA: have expand take sizes instead of new usize.
5bcc2c2ab PA: Have expand take szind and slab.
0880c2ab9 PA: Have large expands use it.
7be3dea82 PA: Have slab allocations use it.
9f93625c1 PA: Move in arena large allocation functionality.
7624043a4 PA: Add ehook-getting support.
eba35e2e4 Remove extent knowledge of arena.
e77f47a85 Move arena decay getters to PA.
48a2cd6d7 Decay: Add a (mostly stub) test case.
f77cec311 Decay: Take current time as an argument.
bf55e58e6 Rename test/unit/decay -> test/unit/arena_decay.
d1d7e1076 Decay: move in some background_thread accesses.
cdb916ed3 Decay: Add comments for the public API.
8f2193dc8 Decay: Move in arena decay functions.
4d090d23f Decay: Introduce a stub .c file.
7b6288547 Introduce decay module and put decay objects in PA
497836dbc Arena stats: mark edata_avail as derived.
3192d6b77 Extents: Have extent_dalloc_gap take ehooks.
22a0a7b93 Move arena_decay_extent to extent module.
70d12ffa0 PA: Move mapped into pa stats.
6ca918d0c PA: Add a stats comment.
ce8c0d6c0 PA: Move in arena extent_sn counter.
1ada4aef8 PA: Get rid of arena_ind_get calls.
1ad368c8b PA: Move in decay stats.
356aaa7dc Introduce lockedint module.
acd0bf6a2 PA: move in ecache_grow.
32cb7c2f0 PA: Add a stats type.
688fb3eb8 PA: Move in the arena edata_cache.
8433ad84e PA: move in shard initialization.
a24faed56 PA: Move in the ecache_t objects.
585f92505 Move cache index randomization out of extent.
12be9f572 Add a stub PA module -- a page allocator.
c4e9ea8cc Get rid of locks in prof recent test
2deabac07 Get rid of custom iterator for last-N records
a5ddfa7d9 Use ql for prof last-N list
8da6676a0 Don't do reentrant testing in junk tests.
ce17af422 Better structure ql module
4b66297ea Add move constructor to ql module
a62b7ed92 Add emptiness checking to ql module
1dd24ca6d Add rotate functionality to ql module
0dc95a882 Add concat and split functionality to ql module
1ad06aa53 deduplicate insert and delete logic in qr module
c9d56cddf Optimize meld in qr module
0d6d9e858 configure.ac: Put public symbols on one line.
f9aad7a49 Add piping API to buffered writer
09cd79495 Encapsulate buffer allocation failure in buffered writer
a166c2081 Make prof_tctx_t pointer a true prof atomic fence
d936b46d3 Add malloc_conf_2_conf_harder
3b4a03b92 Mac: don't declare system functions as nothrow.
2256ef896 Add option to fetch system thread name on each prof sample
ccdc70a5c Fix: assertion could abort on past failures
b30a5c2f9 Reorganize cpp APIs and suppress unused function warnings
2e5899c12 Stats: Fix tcache_bytes reporting.
a5780598b Remove thread_event_rollback()
ba783b3a0 Remove prof -> thread_event dependency
441d88d1c Rewrite profiling thread event
0dcd57660 Edata cache: atomic fetch-add -> load-store.
99b1291d1 Edata cache: add edata_cache_small_t.
734109d9c Edata cache: add a unit test.
e732344ef Inspect test: Reduce checks when profiling is on.
92485032b Cache bin: improve comments.
d701a085c Fast path: allow low-water mark changes.
397da0386 Cache bin: rewrite to track more state.
fef0b1ffe Cache bin: Remove last internals accesses.
0a2fcfac0 Tcache: Hold cache bin allocation explicitly.
d498a4bb0 Cache bin: Add an emptiness assertion.
6a7aa46ef Cache bin: Add a debug method for init checking.
370c1ea00 Cache bin: Write the unit test in terms of the API
7f5ebd211 Cache bin: set low-water internally.
60113dfe3 Cache bin: Move in initialization code.
44529da85 Cache-bin: Make flush modifications internal
ff6acc6ed Cache bin: simplify names and argument ordering.
e1dcc557d Cache bin: Only take the relevant cache_bin_info_t
1b00d808d cache_bin: Don't let arena see empty position.
d303f3079 cache_bin nflush -> n.
74d36d78e Cache bin: Make ncached_max a query on the info_t.
b66c0973c cache_bin: Don't allow direct internals access.
da68f7329 Move percpu_arena_update.
909c501b0 Cache_bin: Shouldn't know about tcache.
79f1ee2fc Move junking out of arena/tcache code.
b428dceea Config: Warn on void * pointer arithmetic.
22657a5e6 Extents: Silence the "potentially unused" warning.
4a78c6d81 Correct thread event unit test
305b1f6d9 Correction on geometric sampling
6c3491ad3 Tcache: Unify bin flush logic.
9f4fc2738 Ehooks: Fix a build warning.
bc31041ed Cirrus-CI: test on new freebsd releases.
51bd14742 Make use of assert_* in test/unit/thread_event.c
9d2cc3b0f Make use of assert_* in test/unit/prof_recent.c
a88d22ea1 Make use of assert_* in test/unit/inspect.c
0ceb31184 Make use of assert_* in test/unit/buf_writer.c
fa6157938 Add assert_* functionality to tests
21dfa4300 Change assert_* to expect_* in tests
162c2bcf3 Background thread: take base as a parameter.
29436fa05 Break prof and tcache knowledge of b0.
a0c1f4ac5 Rtree: take the base allocator as a parameter.
7013716aa Emap: Take (and propagate) a zeroed parameter.
182192f83 Base: Pull into a single header.
34b7165fd Put szind_t, pszind_t in sz.h.
7e6c8a728 Emap: Standardize naming.
ac50c1e44 Emap: Remove direct access to emap internals.
06e42090f Make jemalloc.c use the emap interface.
f7d9c6c42 Emap: Move in alloc_ctx lookup functionality.
65a54d771 Emap: Move in szind and slab modifications.
9b5d105fc Emap: Move in iealloc.
1d449bd9a Emap: Internal rtree context setting.
08eb1e6c3 Emap: Comments and cleanup
231d1477e Rename emap_split_prepare_t -> emap_prepare_t.
0586a56f3 Emap: Move in merge functionality.
040eac77c Tell edatas their creation arena immediately.
7c7b70206 Emap: Move over metadata splitting logic.
44f5f5360 Emap: Move over deregistration functions.
6513d9d92 Emap: Move over deregistration boundary functions.
9b5ca0b09 Emap: Move in slab interior registration.
d05b61db4 Emap: Move extent boundary registration in.
ca21ce407 Emap: Move in write_acquired from extent.
01f255161 Add emap, for tracking extent locking.
0f686e82a Avoid variable length array with length 0.
68e8ddcaf Add mallctl for dumping last-N profiling records
bc05ecebf Add const qualifier in assert_cmp()
ba0e35411 Rework the bin locking around tcache refill / flush.
7fd22f7b2 Fix Undefined Behavior in hash.h
ca1f08225 Disallow merge across mmap regions to preserve SN / first-fit.
7014f81e1 Add ASSURED_WRITE in mallctl
247688919 Add inspect.c to MSVC filters
9cac3fa8f Encapsulate buffer allocation in buffered writer
bdc08b515 Better naming buffered writer
c6bfe5585 Update the tsd description.
e89652261 Abbreviate thread-event to te.
5e500523a Remove thread_event_boot().
97dd79db6 Implement deallocation events.
536ea6858 NetBSD specific changes: - NetBSD overcommits - When mapping pages, use the maximum of the alignment requested and the   compiled-in PAGE constant which might be greater than the current kernel   pagesize, since we compile binaries with the maximum page size supported   by the architecture (so that they work with all kernels).
974222c62 Add safety check on sdallocx slow / sampled path.
88d9eca84 Enforce page alignment for sampled allocations.
0f552ed67 Don't purge huge extents when decay is off.
38a48e574 Set reentrancy to 1 for tsd_state_purgatory.
88b0e03a4 Implement opt.stats_interval and the _opts options.
d71a145ec Chagne prof_accum_t to counter_accum_t for general purpose.
ea351a7b5 Fix syntax errors in doc for thread.idle.
d92f0175c Introduce NEITHER_READ_NOR_WRITE in ctl.
6a622867c Add "thread.idle" mallctl.
f81341a48 Fallback to unbuffered printing if OOM
cd6e90824 Add stress test for last-N profiling mode
84b28c6a1 Properly handle tdata deletion race
d33120856 Get rid of redundant logic in prof
a72ea0db6 Restructure and correct sleep utility for testing
7b67ed0b5 Get rid of lock overlap in prof_recent_alloc_reset
bd3be8e0b Remove commit parameter to ecache functions.
b8df719d5 No tdata creation for backtracing on dying thread
dab81bd31 Rework and fix the assertions on malloc fastpath.
ad3f3fc56 Fetch time after tctx and only for samples
a5d3dd405 Fix an assertion on extent head state with dss.
2b604a301 Record request size in prof recent entries
40a391408 Define constructor for buffered writer argument
6d8e61690 Make buffered writer an independent module
6b6b4709b Unify buffered writer naming
9a60cf54e Last-N profiling mode
7a27a0594 Delete tdata states used for cleanup
e98ddf798 Fix unlikely condition in arena_prof_info_get()
3fa142cf3 Remove _externs from prof internal header names
112dc36dd Handle log_mtx during forking
ea42174d0 Refactor profiling headers
6342da097 Ehooks: Further optimize default merge case.
f2f2084e7 Ehooks: Assert alloc isn't NULL
e210ccc57 Move extent2 -> extent.
2f4fa8041 Rename extents -> ecache.
56cc56b69 Break extent split dependence on arena.
0aa9769fb Break commit functions' arena dependence
48ec5d435 Break extent_coalesce arena dependence
282a38232 Extent: Break [de]activation's arena dependence.
576d7047a Ecache: Should know its arena_ind.
372042a08 Remove merge dependence on the arena.
439219be7 Remove extent_can_coalesce arena dependency.
9cad5639f Ehooks: remove arena_ind parameter.
57fe99d4b Move relevant index into the ehooks_t itself.
c792f3e4a edata_cache: Remember the associated base_t.
ae23e5f42 Unify extent_alloc_wrapper with the other wrappers.
d8b0b66c6 Put extent_state_t into ecache as well as eset.
98eb40e56 Move delay_coalesce from the eset to the ecache.
bb70df8e5 Extent refactor: Introduce ecache module.
070451624 Ehooks: Add head tracking.
09475bf8a extent_may_dalloc -> ehooks_dalloc_will_fail
785918417 Pull out edata_t caching into its own module.
a7862df61 Rename extent_t to edata_t.
865debda2 Rename extent.h -> edata.h.
a738a66b5 Ehooks: Add some debug zero and addr checks.
4b2e5ee8b Ehooks: Add a "zero" ehook.
d0f187ad3 Arena: Loosen arena_may_have_muzzy restrictions.
ebbb97327 Base: Remove some unnecessary reentrancy guards.
403f2d166 Extents: Split out introspection functionality.
92a511d38 Make extent module hermetic.
e08c581cf Extent: Get rid of extent-specific pre/post reentrancy calls.
39fdc690a Ehooks comments and cleanup.
c8dae890c Extent -> Ehooks: Move over default hooks.
2fe510826 Extent -> Ehooks: Move merge hook.
1fff4d2ee Extent -> Ehooks: Move split hook.
a5b42a1a1 Extent -> Ehooks: Move purge_forced hook.
368baa42e Extent -> Ehooks: Move purge_lazy hook.
f83fdf533 Extent: Clean up a comma
d78fe241a Extent -> Ehooks: Move commit and decommit hooks.
5459ec9da Extent -> Ehooks: Move destroy hook.
bac8e2e5a Extent -> Ehooks: Move dalloc hook.
dc8b4e6e1 Extent -> Ehooks: Move alloc hook.
703fbc0ff Introduce unsafe reentrancy guards.
ae0d8e859 Move extent ehook calls into ehooks
ba8b9ecbc Add ehooks module
837119a94 base_structs.h: Remove some mid-line tabs.
9f6eb0958 Extents: Eagerly initialize extent hooks.
4278f8460 Move extent hook getters/setters to arena.c
9226e1f0d fix opt.thp:never still use THP with base_new
d5031ea82 Allow dallocx and sdallocx after tsd destruction.
4afd709d1 Restructure setters for profiling info
1d01e4c77 Initialization utilities for nstime
dd649c948 Optimize away the tsd_fast() check on fastpath.
1decf958d Fix incorrect usage of cassert.
45836d7fd Pass nstime_t pointer for profiling
7d2bac5a3 Refactor destroy code path for prof_tctx
055478cca Threshold is no longer updated before prof_realloc()
7e3671911 Get rid of old indentation style for prof
dfdd46f6c Refactor prof_tctx_t creation
aa1d71fb7 Rename prof_tctx to alloc_tctx in prof_info_t
5e0b09099 No need to pass usize to prof_tctx_set()
1b1e76acf Disable some spuriously-triggering warnings
a70909b13 Test on all supported release of FreeBSD
5c47a3022 Guard C++ aligned APIs
694537177 Change tsdn to tsd for profiling code path
b55419f9b Restructure profiling
8b2c2a596 Support C++17 over-aligned allocation
9a3c73800 Refactor arena_bin_malloc_hard().
9a7ae3c97 Reduce footprint of bin_t.
cb1a1f4ad Remove the unnecessary alloc_ctx on free_fastpath.
716061710 Add branch hints to free_fastpath.
a787d2f5b Prefer getaffinity() to detect number of CPUs.
04cb7d4d6 Bail out early for muzzy decay.
73510dfd1 Revert "Fix bug in prof_realloc"
3b5eecf10 Fix bug in prof_realloc
e4c36a6f3 Emphasize no modification through thread.allocatedp allowed.
c462753cc Use __forceinline for JEMALLOC_ALWAYS_INLINE on msvc
836d7a7e6 Check for large size first in the uncommon case of malloc.
9c59abe42 Fix a typo in Makefile.
da50d8ce8 Refactor and optimize prof sampling initialization.
bc774a351 Rename tsd->offset_state to tsd->prng_state.
19a51abf3 Avoid arena->offset_state when tsd not available for prng.
d01b425e5 Add -Wimplicit-fallthrough checks if supported
a8b578d53 Remove mallctl test for zero_realloc
43f0ce92d Define general purpose tsd_thread_event_init()
97f93fa0f Pull tcache GC events into thread event handler
198f02e79 Pull prof_accumbytes into thread event handler
152c0ef95 Build a general purpose thread event handler
6924f83cb use SYS_openat when available
de81a4ead Add stats counters for number of zero reallocs
9cfa80594 Realloc: Make behavior of realloc(ptr, 0) configurable.
ee961c231 Merge realloc and rallocx pathways.
bd6e28d6a Guard slabcur fetching in extent_util
4786099a3 Increase column width for global malloc/free rate
05681e387 Optimize cache_bin_alloc_easy for malloc fast path
4fe50bc7d Fix amd64 MSVC warning
4fbbc817c Simplify time setting and getting for prof log
4094b7c03 Limit # of iters of test_bitmap_xfu.
66e07f986 Suppress tdata creation in reentrancy
beb7c16e9 Guard prof_active reset by opt_prof
1df9dd351 Fix je_ prefix issue in test
3d84bd57f Arena: Add helper function arena_get_from_extent.
c97d25575 Eset: Remove temporary declaration.
ce5b128f1 Remove the undefined extent_size_quantize declarations.
821dd53a1 Extent -> Eset: Rename arena members.
e144b21e4 Extent -> Eset: Move fork handling.
77bbb35a9 Extent -> Eset: Move extent fit functions.
1210af9a4 Extent -> Eset: Move insertion and removal.
a42861540 Extents -> Eset: Convert some stats getters.
820f070c6 Move page quantization to sz module.
63d1b7a7a Extents -> Eset: move extents_state_get.
b416b96a3 Extents -> Eset: rename/move extents_init.
e6180fe1b Eset: Add a source file.
4e5e43f22 Rename extents_t -> eset_t.
723ccc6c2 Extents: Split out extent struct.
41187bdfb Extents: Break extent-struct/arena interactions
529cfe2ab Arena: rename arena_structs_b.h -> arena_structs.h
e7cf84a8d Rearrange slab data and constants
d1be488cd Add --with-lg-page=16 to CI.
ac5185f73 Fix tcache bin stack alignment.
b7c7df24b Add max_per_bg_thd stats for per background thread mutexes.
4b76c684b Add "prof.dump_prefix" to override filename prefixes for dumps.
242af439b Rename "prof_dump_seq_mtx" to "prof_dump_filename_mtx".
e06658cb2 check GNU make exists in path
22bc75ee3 Workaround the stringop-overflow check false positives.
93d615180 Pass tsd down to prof_backtrace()
671f120e2 Fix prof_backtrace() reentrancy level
785b84e60 Make cache_bin_sz_t unsigned.
23dc7a7fb Fix index type for cache_bin_alloc_easy.
2abb02ecd Fix MSVC 2015 build, as proposed by @christianaguilera-foundry.
719583f14 Fix large.nflushes in the merged stats.
adce29c88 Optimize for prof_active off
49e6fbce7 Always adjust thread_(de)allocated
57b81c078 Pull thread_(de)allocated out of config_stats
9e031c1d1 Bug fix for prof_active switch
0043e68d4 Track low_water == -1 case explicitly.
937ca1db9 Store ncached_max * ptr_size in tcache_bin_info.
7599c82d4 Redesign the cache bin metadata for fast path.
d2dddfb82 Add hint in the bogus version string.
d6b7995c1 Update INSTALL.md about the default doc build.
e2c758436 Simplify / refactor tcache_dalloc_large.
9c5c2a2c8 Unify the signature of tcache_flush small and large.
28ed9b9a5 Buffer stats printing
eb70fef8c Make compact json format as default
a219cfcda Clear tcache prof_accumbytes in tcache_flush_cache
ad3f7dbfa Buffer prof_log_stop
593484661 Fix large bin index accessed through cache bin descriptor.
22746d3c9 Properly dalloc prof nodes with idalloctm.
8c8466fa6 Add compact json option for emitter
7fc6b1b25 Add buffered writer
39343555d Report stats for tdatas_mtx and prof_dump_mtx
87e2400cb Fix tcaches mutex pre- / post-fork handling.
07ce2434b Refactor profiling
56126d0d2 Refactor prof log
56c8ecffc Correct tsd layout graph

git-subtree-dir: deps/jemalloc
git-subtree-split: 54eaed1d8b56b1aa528be3bdd1877e59c56fa90c
parent 220a0f08
@@ -4,20 +4,26 @@
 #include "jemalloc/internal/assert.h"
 #include "jemalloc/internal/atomic.h"
+#include "jemalloc/internal/buf_writer.h"
 #include "jemalloc/internal/ctl.h"
+#include "jemalloc/internal/emap.h"
 #include "jemalloc/internal/extent_dss.h"
 #include "jemalloc/internal/extent_mmap.h"
+#include "jemalloc/internal/fxp.h"
+#include "jemalloc/internal/san.h"
 #include "jemalloc/internal/hook.h"
 #include "jemalloc/internal/jemalloc_internal_types.h"
 #include "jemalloc/internal/log.h"
 #include "jemalloc/internal/malloc_io.h"
 #include "jemalloc/internal/mutex.h"
+#include "jemalloc/internal/nstime.h"
 #include "jemalloc/internal/rtree.h"
 #include "jemalloc/internal/safety_check.h"
 #include "jemalloc/internal/sc.h"
 #include "jemalloc/internal/spin.h"
 #include "jemalloc/internal/sz.h"
 #include "jemalloc/internal/ticker.h"
+#include "jemalloc/internal/thread_event.h"
 #include "jemalloc/internal/util.h"

 /******************************************************************************/
@@ -29,6 +35,29 @@ const char *je_malloc_conf
     JEMALLOC_ATTR(weak)
 #endif
     ;
+
+/*
+ * The usual rule is that the closer to runtime you are, the higher priority
+ * your configuration settings are (so the jemalloc config options get lower
+ * priority than the per-binary setting, which gets lower priority than the /etc
+ * setting, which gets lower priority than the environment settings).
+ *
+ * But it's a fairly common use case in some testing environments for a user to
+ * be able to control the binary, but nothing else (e.g. a performancy canary
+ * uses the production OS and environment variables, but can run any binary in
+ * those circumstances). For these use cases, it's handy to have an in-binary
+ * mechanism for overriding environment variable settings, with the idea that if
+ * the results are positive they get promoted to the official settings, and
+ * moved from the binary to the environment variable.
+ *
+ * We don't actually want this to be widespread, so we'll give it a silly name
+ * and not mention it in headers or documentation.
+ */
+const char *je_malloc_conf_2_conf_harder
+#ifndef _WIN32
+    JEMALLOC_ATTR(weak)
+#endif
+    ;
+
 bool opt_abort =
 #ifdef JEMALLOC_DEBUG
     true
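
The weak je_malloc_conf_2_conf_harder symbol added in the hunk above is meant to be defined by the embedding binary, which then overrides even environment-variable settings. A minimal sketch of such an override; the particular option string is illustrative, not taken from this commit:

    /* In the test binary, not in jemalloc: a strong definition overrides the
     * weak symbol declared above. The options shown are only an example. */
    const char *je_malloc_conf_2_conf_harder = "background_thread:true,narenas:2";
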
@@ -66,16 +95,73 @@ bool opt_junk_free =
     false
 #endif
     ;
+
+bool opt_trust_madvise =
+#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS
+    false
+#else
+    true
+#endif
+    ;
+
+bool opt_cache_oblivious =
+#ifdef JEMALLOC_CACHE_OBLIVIOUS
+    true
+#else
+    false
+#endif
+    ;
+
+zero_realloc_action_t opt_zero_realloc_action =
+#ifdef JEMALLOC_ZERO_REALLOC_DEFAULT_FREE
+    zero_realloc_action_free
+#else
+    zero_realloc_action_alloc
+#endif
+    ;
+
+atomic_zu_t zero_realloc_count = ATOMIC_INIT(0);
+
+const char *zero_realloc_mode_names[] = {
+    "alloc",
+    "free",
+    "abort",
+};
+
+/*
+ * These are the documented values for junk fill debugging facilities -- see the
+ * man page.
+ */
+static const uint8_t junk_alloc_byte = 0xa5;
+static const uint8_t junk_free_byte = 0x5a;
+
+static void default_junk_alloc(void *ptr, size_t usize) {
+    memset(ptr, junk_alloc_byte, usize);
+}
+static void default_junk_free(void *ptr, size_t usize) {
+    memset(ptr, junk_free_byte, usize);
+}
+
+void (*junk_alloc_callback)(void *ptr, size_t size) = &default_junk_alloc;
+void (*junk_free_callback)(void *ptr, size_t size) = &default_junk_free;
+
 bool opt_utrace = false;
 bool opt_xmalloc = false;
+bool opt_experimental_infallible_new = false;
 bool opt_zero = false;
 unsigned opt_narenas = 0;
+fxp_t opt_narenas_ratio = FXP_INIT_INT(4);

 unsigned ncpus;

 /* Protects arenas initialization. */
 malloc_mutex_t arenas_lock;
+
+/* The global hpa, and whether it's on. */
+bool opt_hpa = false;
+hpa_shard_opts_t opt_hpa_opts = HPA_SHARD_OPTS_DEFAULT;
+sec_opts_t opt_hpa_sec_opts = SEC_OPTS_DEFAULT;
+
 /*
  * Arenas that are used to service external requests.  Not all elements of the
  * arenas array are necessarily used; arenas are created lazily as needed.
 */
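
The hunk above only declares the new zero_realloc option and its mode names ("alloc", "free", "abort"); the behavior itself lives elsewhere in the allocator. As a rough orientation only, the sketch below (not jemalloc source; the function name is hypothetical) maps the three modes to what realloc(ptr, 0) does, per the jemalloc 5.3 opt.zero_realloc documentation:

    /* Illustrative only -- not jemalloc source. */
    #include <stdlib.h>
    #include <string.h>

    void *realloc_size_zero_sketch(void *ptr, const char *mode) {
        if (strcmp(mode, "alloc") == 0) {
            /* Behave as if the requested size were 1 (a minimal allocation). */
            return realloc(ptr, 1);
        } else if (strcmp(mode, "free") == 0) {
            /* Free the pointer and return NULL, like the glibc allocator. */
            free(ptr);
            return NULL;
        } else { /* "abort" */
            /* Treat realloc(ptr, 0) as a caller bug. */
            abort();
        }
    }
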
@@ -94,13 +180,7 @@ static arena_t *a0; /* arenas[0]. */
 unsigned narenas_auto;
 unsigned manual_arena_base;

-typedef enum {
-    malloc_init_uninitialized = 3,
-    malloc_init_a0_initialized = 2,
-    malloc_init_recursible = 1,
-    malloc_init_initialized = 0 /* Common case --> jnz. */
-} malloc_init_t;
-static malloc_init_t malloc_init_state = malloc_init_uninitialized;
+malloc_init_t malloc_init_state = malloc_init_uninitialized;

 /* False should be the common case.  Set to true to trigger initialization. */
 bool malloc_slow = true;
@@ -180,7 +260,7 @@ typedef struct {
         ut.p = (a); \
         ut.s = (b); \
         ut.r = (c); \
-        utrace(&ut, sizeof(ut)); \
+        UTRACE_CALL(&ut, sizeof(ut)); \
         errno = utrace_serrno; \
     } \
 } while (0)
@@ -205,11 +285,6 @@ static bool malloc_init_hard(void);
  * Begin miscellaneous support functions.
  */

-bool
-malloc_initialized(void) {
-    return (malloc_init_state == malloc_init_initialized);
-}
-
 JEMALLOC_ALWAYS_INLINE bool
 malloc_init_a0(void) {
     if (unlikely(malloc_init_state == malloc_init_uninitialized)) {
@@ -257,7 +332,7 @@ a0dalloc(void *ptr) {
 }

 /*
- * FreeBSD's libc uses the bootstrap_*() functions in bootstrap-senstive
+ * FreeBSD's libc uses the bootstrap_*() functions in bootstrap-sensitive
  * situations that cannot tolerate TLS variable access (TLS allocation and very
  * early internal data structure initialization).
  */
@@ -315,7 +390,7 @@ narenas_total_get(void) {
 /* Create a new arena and insert it into the arenas array at index ind. */
 static arena_t *
-arena_init_locked(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) {
+arena_init_locked(tsdn_t *tsdn, unsigned ind, const arena_config_t *config) {
     arena_t *arena;

     assert(ind <= narenas_total_get());
@@ -337,7 +412,7 @@ arena_init_locked(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) {
     }

     /* Actually initialize the arena. */
-    arena = arena_new(tsdn, ind, extent_hooks);
+    arena = arena_new(tsdn, ind, config);

     return arena;
 }
@@ -361,11 +436,11 @@ arena_new_create_background_thread(tsdn_t *tsdn, unsigned ind) {
 }

 arena_t *
-arena_init(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) {
+arena_init(tsdn_t *tsdn, unsigned ind, const arena_config_t *config) {
     arena_t *arena;

     malloc_mutex_lock(tsdn, &arenas_lock);
-    arena = arena_init_locked(tsdn, ind, extent_hooks);
+    arena = arena_init_locked(tsdn, ind, config);
     malloc_mutex_unlock(tsdn, &arenas_lock);

     arena_new_create_background_thread(tsdn, ind);
@@ -394,14 +469,19 @@ arena_bind(tsd_t *tsd, unsigned ind, bool internal) {
 }

 void
-arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind) {
-    arena_t *oldarena, *newarena;
+arena_migrate(tsd_t *tsd, arena_t *oldarena, arena_t *newarena) {
+    assert(oldarena != NULL);
+    assert(newarena != NULL);

-    oldarena = arena_get(tsd_tsdn(tsd), oldind, false);
-    newarena = arena_get(tsd_tsdn(tsd), newind, false);
     arena_nthreads_dec(oldarena, false);
     arena_nthreads_inc(newarena, false);
     tsd_arena_set(tsd, newarena);
+
+    if (arena_nthreads_get(oldarena, false) == 0) {
+        /* Purge if the old arena has no associated threads anymore. */
+        arena_decay(tsd_tsdn(tsd), oldarena,
+            /* is_background_thread */ false, /* all */ true);
+    }
 }

 static void
@@ -418,82 +498,6 @@ arena_unbind(tsd_t *tsd, unsigned ind, bool internal) {
     }
 }

-arena_tdata_t *
-arena_tdata_get_hard(tsd_t *tsd, unsigned ind) {
-    arena_tdata_t *tdata, *arenas_tdata_old;
-    arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd);
-    unsigned narenas_tdata_old, i;
-    unsigned narenas_tdata = tsd_narenas_tdata_get(tsd);
-    unsigned narenas_actual = narenas_total_get();
-
-    /*
-     * Dissociate old tdata array (and set up for deallocation upon return)
-     * if it's too small.
-     */
-    if (arenas_tdata != NULL && narenas_tdata < narenas_actual) {
-        arenas_tdata_old = arenas_tdata;
-        narenas_tdata_old = narenas_tdata;
-        arenas_tdata = NULL;
-        narenas_tdata = 0;
-        tsd_arenas_tdata_set(tsd, arenas_tdata);
-        tsd_narenas_tdata_set(tsd, narenas_tdata);
-    } else {
-        arenas_tdata_old = NULL;
-        narenas_tdata_old = 0;
-    }
-
-    /* Allocate tdata array if it's missing. */
-    if (arenas_tdata == NULL) {
-        bool *arenas_tdata_bypassp = tsd_arenas_tdata_bypassp_get(tsd);
-        narenas_tdata = (ind < narenas_actual) ? narenas_actual : ind+1;
-
-        if (tsd_nominal(tsd) && !*arenas_tdata_bypassp) {
-            *arenas_tdata_bypassp = true;
-            arenas_tdata = (arena_tdata_t *)a0malloc(
-                sizeof(arena_tdata_t) * narenas_tdata);
-            *arenas_tdata_bypassp = false;
-        }
-        if (arenas_tdata == NULL) {
-            tdata = NULL;
-            goto label_return;
-        }
-        assert(tsd_nominal(tsd) && !*arenas_tdata_bypassp);
-        tsd_arenas_tdata_set(tsd, arenas_tdata);
-        tsd_narenas_tdata_set(tsd, narenas_tdata);
-    }
-
-    /*
-     * Copy to tdata array.  It's possible that the actual number of arenas
-     * has increased since narenas_total_get() was called above, but that
-     * causes no correctness issues unless two threads concurrently execute
-     * the arenas.create mallctl, which we trust mallctl synchronization to
-     * prevent.
-     */
-    /* Copy/initialize tickers. */
-    for (i = 0; i < narenas_actual; i++) {
-        if (i < narenas_tdata_old) {
-            ticker_copy(&arenas_tdata[i].decay_ticker,
-                &arenas_tdata_old[i].decay_ticker);
-        } else {
-            ticker_init(&arenas_tdata[i].decay_ticker,
-                DECAY_NTICKS_PER_UPDATE);
-        }
-    }
-    if (narenas_tdata > narenas_actual) {
-        memset(&arenas_tdata[narenas_actual], 0, sizeof(arena_tdata_t)
-            * (narenas_tdata - narenas_actual));
-    }
-
-    /* Read the refreshed tdata array. */
-    tdata = &arenas_tdata[ind];
-label_return:
-    if (arenas_tdata_old != NULL) {
-        a0dalloc(arenas_tdata_old);
-    }
-    return tdata;
-}
-
 /* Slow path, called only by arena_choose(). */
 arena_t *
 arena_choose_hard(tsd_t *tsd, bool internal) {
@@ -576,8 +580,7 @@ arena_choose_hard(tsd_t *tsd, bool internal) {
                 /* Initialize a new arena. */
                 choose[j] = first_null;
                 arena = arena_init_locked(tsd_tsdn(tsd),
-                    choose[j],
-                    (extent_hooks_t *)&extent_hooks_default);
+                    choose[j], &arena_config_default);
                 if (arena == NULL) {
                     malloc_mutex_unlock(tsd_tsdn(tsd),
                         &arenas_lock);
@@ -629,20 +632,6 @@ arena_cleanup(tsd_t *tsd) {
     }
 }

-void
-arenas_tdata_cleanup(tsd_t *tsd) {
-    arena_tdata_t *arenas_tdata;
-
-    /* Prevent tsd->arenas_tdata from being (re)created. */
-    *tsd_arenas_tdata_bypassp_get(tsd) = true;
-
-    arenas_tdata = tsd_arenas_tdata_get(tsd);
-    if (arenas_tdata != NULL) {
-        tsd_arenas_tdata_set(tsd, NULL);
-        a0dalloc(arenas_tdata);
-    }
-}
-
 static void
 stats_print_atexit(void) {
     if (config_stats) {
@@ -661,11 +650,13 @@ stats_print_atexit(void) {
         for (i = 0, narenas = narenas_total_get(); i < narenas; i++) {
             arena_t *arena = arena_get(tsdn, i, false);
             if (arena != NULL) {
-                tcache_t *tcache;
+                tcache_slow_t *tcache_slow;

                 malloc_mutex_lock(tsdn, &arena->tcache_ql_mtx);
-                ql_foreach(tcache, &arena->tcache_ql, link) {
-                    tcache_stats_merge(tsdn, tcache, arena);
+                ql_foreach(tcache_slow, &arena->tcache_ql,
+                    link) {
+                    tcache_stats_merge(tsdn,
+                        tcache_slow->tcache, arena);
                 }
                 malloc_mutex_unlock(tsdn,
                     &arena->tcache_ql_mtx);
@@ -730,18 +721,28 @@ malloc_ncpus(void) {
     SYSTEM_INFO si;
     GetSystemInfo(&si);
     result = si.dwNumberOfProcessors;
-#elif defined(JEMALLOC_GLIBC_MALLOC_HOOK) && defined(CPU_COUNT)
+#elif defined(CPU_COUNT)
     /*
      * glibc >= 2.6 has the CPU_COUNT macro.
      *
      * glibc's sysconf() uses isspace().  glibc allocates for the first time
      * *before* setting up the isspace tables.  Therefore we need a
      * different method to get the number of CPUs.
+     *
+     * The getaffinity approach is also preferred when only a subset of CPUs
+     * is available, to avoid using more arenas than necessary.
     */
     {
+# if defined(__FreeBSD__) || defined(__DragonFly__)
cpuset_t set;
# else
cpu_set_t set; cpu_set_t set;
# endif
# if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)
sched_getaffinity(0, sizeof(set), &set);
# else
pthread_getaffinity_np(pthread_self(), sizeof(set), &set); pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
# endif
result = CPU_COUNT(&set); result = CPU_COUNT(&set);
} }
#else #else
...@@ -750,9 +751,47 @@ malloc_ncpus(void) { ...@@ -750,9 +751,47 @@ malloc_ncpus(void) {
return ((result == -1) ? 1 : (unsigned)result); return ((result == -1) ? 1 : (unsigned)result);
} }
/*
* Ensure that the number of CPUs is deterministic, i.e. it is the same based on:
* - sched_getaffinity()
* - _SC_NPROCESSORS_ONLN
* - _SC_NPROCESSORS_CONF
* Otherwise, tricky things are possible with percpu arenas in use.
*/
static bool
malloc_cpu_count_is_deterministic()
{
#ifdef _WIN32
return true;
#else
long cpu_onln = sysconf(_SC_NPROCESSORS_ONLN);
long cpu_conf = sysconf(_SC_NPROCESSORS_CONF);
if (cpu_onln != cpu_conf) {
return false;
}
# if defined(CPU_COUNT)
# if defined(__FreeBSD__) || defined(__DragonFly__)
cpuset_t set;
# else
cpu_set_t set;
# endif /* __FreeBSD__ */
# if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)
sched_getaffinity(0, sizeof(set), &set);
# else /* !JEMALLOC_HAVE_SCHED_SETAFFINITY */
pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
# endif /* JEMALLOC_HAVE_SCHED_SETAFFINITY */
long cpu_affinity = CPU_COUNT(&set);
if (cpu_affinity != cpu_conf) {
return false;
}
# endif /* CPU_COUNT */
return true;
#endif
}
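/*
 * Editor's note: the sketch below is an illustration only (not part of this
 * commit).  It performs the same three-way comparison as
 * malloc_cpu_count_is_deterministic() as a standalone program, assuming a
 * Linux/glibc environment where sched_getaffinity() and CPU_COUNT are
 * available.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int
main(void) {
	long onln = sysconf(_SC_NPROCESSORS_ONLN);
	long conf = sysconf(_SC_NPROCESSORS_CONF);
	cpu_set_t set;
	CPU_ZERO(&set);
	if (sched_getaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_getaffinity");
		return 1;
	}
	long affinity = CPU_COUNT(&set);
	/* Any mismatch is what jemalloc treats as "not deterministic". */
	printf("onln=%ld conf=%ld affinity=%ld -> %s\n", onln, conf, affinity,
	    (onln == conf && affinity == conf) ?
	    "deterministic" : "not deterministic");
	return 0;
}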
static void static void
init_opt_stats_print_opts(const char *v, size_t vlen) { init_opt_stats_opts(const char *v, size_t vlen, char *dest) {
size_t opts_len = strlen(opt_stats_print_opts); size_t opts_len = strlen(dest);
assert(opts_len <= stats_print_tot_num_options); assert(opts_len <= stats_print_tot_num_options);
for (size_t i = 0; i < vlen; i++) { for (size_t i = 0; i < vlen; i++) {
...@@ -763,16 +802,16 @@ init_opt_stats_print_opts(const char *v, size_t vlen) { ...@@ -763,16 +802,16 @@ init_opt_stats_print_opts(const char *v, size_t vlen) {
default: continue; default: continue;
} }
if (strchr(opt_stats_print_opts, v[i]) != NULL) { if (strchr(dest, v[i]) != NULL) {
/* Ignore repeated. */ /* Ignore repeated. */
continue; continue;
} }
opt_stats_print_opts[opts_len++] = v[i]; dest[opts_len++] = v[i];
opt_stats_print_opts[opts_len] = '\0'; dest[opts_len] = '\0';
assert(opts_len <= stats_print_tot_num_options); assert(opts_len <= stats_print_tot_num_options);
} }
assert(opts_len == strlen(opt_stats_print_opts)); assert(opts_len == strlen(dest));
} }
/* Reads the next size pair in a multi-sized option. */ /* Reads the next size pair in a multi-sized option. */
...@@ -854,10 +893,12 @@ malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p, ...@@ -854,10 +893,12 @@ malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p,
if (opts != *opts_p) { if (opts != *opts_p) {
malloc_write("<jemalloc>: Conf string ends " malloc_write("<jemalloc>: Conf string ends "
"with key\n"); "with key\n");
had_conf_error = true;
} }
return true; return true;
default: default:
malloc_write("<jemalloc>: Malformed conf string\n"); malloc_write("<jemalloc>: Malformed conf string\n");
had_conf_error = true;
return true; return true;
} }
} }
...@@ -876,6 +917,7 @@ malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p, ...@@ -876,6 +917,7 @@ malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p,
if (*opts == '\0') { if (*opts == '\0') {
malloc_write("<jemalloc>: Conf string ends " malloc_write("<jemalloc>: Conf string ends "
"with comma\n"); "with comma\n");
had_conf_error = true;
} }
*vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p; *vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p;
accept = true; accept = true;
...@@ -932,7 +974,7 @@ malloc_slow_flag_init(void) { ...@@ -932,7 +974,7 @@ malloc_slow_flag_init(void) {
} }
/* Number of sources for initializing malloc_conf */ /* Number of sources for initializing malloc_conf */
#define MALLOC_CONF_NSOURCES 4 #define MALLOC_CONF_NSOURCES 5
static const char * static const char *
obtain_malloc_conf(unsigned which_source, char buf[PATH_MAX + 1]) { obtain_malloc_conf(unsigned which_source, char buf[PATH_MAX + 1]) {
...@@ -1010,6 +1052,9 @@ obtain_malloc_conf(unsigned which_source, char buf[PATH_MAX + 1]) { ...@@ -1010,6 +1052,9 @@ obtain_malloc_conf(unsigned which_source, char buf[PATH_MAX + 1]) {
ret = NULL; ret = NULL;
} }
break; break;
} case 4: {
ret = je_malloc_conf_2_conf_harder;
break;
} default: } default:
not_reached(); not_reached();
ret = NULL; ret = NULL;
...@@ -1026,7 +1071,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1026,7 +1071,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
"string pointed to by the global variable malloc_conf", "string pointed to by the global variable malloc_conf",
"\"name\" of the file referenced by the symbolic link named " "\"name\" of the file referenced by the symbolic link named "
"/etc/malloc.conf", "/etc/malloc.conf",
"value of the environment variable MALLOC_CONF" "value of the environment variable MALLOC_CONF",
"string pointed to by the global variable "
"malloc_conf_2_conf_harder",
}; };
unsigned i; unsigned i;
const char *opts, *k, *v; const char *opts, *k, *v;
...@@ -1094,39 +1141,50 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1094,39 +1141,50 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
#define CONF_CHECK_MIN(um, min) ((um) < (min)) #define CONF_CHECK_MIN(um, min) ((um) < (min))
#define CONF_DONT_CHECK_MAX(um, max) false #define CONF_DONT_CHECK_MAX(um, max) false
#define CONF_CHECK_MAX(um, max) ((um) > (max)) #define CONF_CHECK_MAX(um, max) ((um) > (max))
#define CONF_HANDLE_T_U(t, o, n, min, max, check_min, check_max, clip) \
if (CONF_MATCH(n)) { \ #define CONF_VALUE_READ(max_t, result) \
uintmax_t um; \
char *end; \ char *end; \
\
set_errno(0); \ set_errno(0); \
um = malloc_strtoumax(v, &end, 0); \ result = (max_t)malloc_strtoumax(v, &end, 0);
if (get_errno() != 0 || (uintptr_t)end -\ #define CONF_VALUE_READ_FAIL() \
(uintptr_t)v != vlen) { \ (get_errno() != 0 || (uintptr_t)end - (uintptr_t)v != vlen)
#define CONF_HANDLE_T(t, max_t, o, n, min, max, check_min, check_max, clip) \
if (CONF_MATCH(n)) { \
max_t mv; \
CONF_VALUE_READ(max_t, mv) \
if (CONF_VALUE_READ_FAIL()) { \
CONF_ERROR("Invalid conf value",\ CONF_ERROR("Invalid conf value",\
k, klen, v, vlen); \ k, klen, v, vlen); \
} else if (clip) { \ } else if (clip) { \
if (check_min(um, (t)(min))) { \ if (check_min(mv, (t)(min))) { \
o = (t)(min); \ o = (t)(min); \
} else if ( \ } else if ( \
check_max(um, (t)(max))) { \ check_max(mv, (t)(max))) { \
o = (t)(max); \ o = (t)(max); \
} else { \ } else { \
o = (t)um; \ o = (t)mv; \
} \ } \
} else { \ } else { \
if (check_min(um, (t)(min)) || \ if (check_min(mv, (t)(min)) || \
check_max(um, (t)(max))) { \ check_max(mv, (t)(max))) { \
CONF_ERROR( \ CONF_ERROR( \
"Out-of-range " \ "Out-of-range " \
"conf value", \ "conf value", \
k, klen, v, vlen); \ k, klen, v, vlen); \
} else { \ } else { \
o = (t)um; \ o = (t)mv; \
} \ } \
} \ } \
CONF_CONTINUE; \ CONF_CONTINUE; \
} }
#define CONF_HANDLE_T_U(t, o, n, min, max, check_min, check_max, clip) \
CONF_HANDLE_T(t, uintmax_t, o, n, min, max, check_min, \
check_max, clip)
#define CONF_HANDLE_T_SIGNED(t, o, n, min, max, check_min, check_max, clip)\
CONF_HANDLE_T(t, intmax_t, o, n, min, max, check_min, \
check_max, clip)
#define CONF_HANDLE_UNSIGNED(o, n, min, max, check_min, check_max, \ #define CONF_HANDLE_UNSIGNED(o, n, min, max, check_min, check_max, \
clip) \ clip) \
CONF_HANDLE_T_U(unsigned, o, n, min, max, \ CONF_HANDLE_T_U(unsigned, o, n, min, max, \
...@@ -1134,27 +1192,15 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1134,27 +1192,15 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
#define CONF_HANDLE_SIZE_T(o, n, min, max, check_min, check_max, clip) \ #define CONF_HANDLE_SIZE_T(o, n, min, max, check_min, check_max, clip) \
CONF_HANDLE_T_U(size_t, o, n, min, max, \ CONF_HANDLE_T_U(size_t, o, n, min, max, \
check_min, check_max, clip) check_min, check_max, clip)
#define CONF_HANDLE_INT64_T(o, n, min, max, check_min, check_max, clip) \
CONF_HANDLE_T_SIGNED(int64_t, o, n, min, max, \
check_min, check_max, clip)
#define CONF_HANDLE_UINT64_T(o, n, min, max, check_min, check_max, clip)\
CONF_HANDLE_T_U(uint64_t, o, n, min, max, \
check_min, check_max, clip)
#define CONF_HANDLE_SSIZE_T(o, n, min, max) \ #define CONF_HANDLE_SSIZE_T(o, n, min, max) \
if (CONF_MATCH(n)) { \ CONF_HANDLE_T_SIGNED(ssize_t, o, n, min, max, \
long l; \ CONF_CHECK_MIN, CONF_CHECK_MAX, false)
char *end; \
\
set_errno(0); \
l = strtol(v, &end, 0); \
if (get_errno() != 0 || (uintptr_t)end -\
(uintptr_t)v != vlen) { \
CONF_ERROR("Invalid conf value",\
k, klen, v, vlen); \
} else if (l < (ssize_t)(min) || l > \
(ssize_t)(max)) { \
CONF_ERROR( \
"Out-of-range conf value", \
k, klen, v, vlen); \
} else { \
o = l; \
} \
CONF_CONTINUE; \
}
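/*
 * Editor's note (illustration only, not part of this commit): with the macros
 * above, CONF_HANDLE_SSIZE_T(opt_dirty_decay_ms, "dirty_decay_ms", min, max)
 * now expands to roughly:
 *
 *	if (CONF_MATCH("dirty_decay_ms")) {
 *		intmax_t mv;
 *		char *end;
 *		set_errno(0);
 *		mv = (intmax_t)malloc_strtoumax(v, &end, 0);
 *		if (get_errno() != 0 ||
 *		    (uintptr_t)end - (uintptr_t)v != vlen) {
 *			CONF_ERROR("Invalid conf value", k, klen, v, vlen);
 *		} else if (mv < (ssize_t)(min) || mv > (ssize_t)(max)) {
 *			CONF_ERROR("Out-of-range conf value", k, klen, v, vlen);
 *		} else {
 *			opt_dirty_decay_ms = (ssize_t)mv;
 *		}
 *		CONF_CONTINUE;
 *	}
 *
 * i.e. signed options now share the same strtoumax-based reader as the
 * unsigned ones instead of the old strtol-based CONF_HANDLE_SSIZE_T body.
 */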
#define CONF_HANDLE_CHAR_P(o, n, d) \ #define CONF_HANDLE_CHAR_P(o, n, d) \
if (CONF_MATCH(n)) { \ if (CONF_MATCH(n)) { \
size_t cpylen = (vlen <= \ size_t cpylen = (vlen <= \
...@@ -1174,13 +1220,14 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1174,13 +1220,14 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
CONF_HANDLE_BOOL(opt_abort, "abort") CONF_HANDLE_BOOL(opt_abort, "abort")
CONF_HANDLE_BOOL(opt_abort_conf, "abort_conf") CONF_HANDLE_BOOL(opt_abort_conf, "abort_conf")
CONF_HANDLE_BOOL(opt_trust_madvise, "trust_madvise")
if (strncmp("metadata_thp", k, klen) == 0) { if (strncmp("metadata_thp", k, klen) == 0) {
int i; int m;
bool match = false; bool match = false;
for (i = 0; i < metadata_thp_mode_limit; i++) { for (m = 0; m < metadata_thp_mode_limit; m++) {
if (strncmp(metadata_thp_mode_names[i], if (strncmp(metadata_thp_mode_names[m],
v, vlen) == 0) { v, vlen) == 0) {
opt_metadata_thp = i; opt_metadata_thp = m;
match = true; match = true;
break; break;
} }
...@@ -1193,18 +1240,18 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1193,18 +1240,18 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
} }
CONF_HANDLE_BOOL(opt_retain, "retain") CONF_HANDLE_BOOL(opt_retain, "retain")
if (strncmp("dss", k, klen) == 0) { if (strncmp("dss", k, klen) == 0) {
int i; int m;
bool match = false; bool match = false;
for (i = 0; i < dss_prec_limit; i++) { for (m = 0; m < dss_prec_limit; m++) {
if (strncmp(dss_prec_names[i], v, vlen) if (strncmp(dss_prec_names[m], v, vlen)
== 0) { == 0) {
if (extent_dss_prec_set(i)) { if (extent_dss_prec_set(m)) {
CONF_ERROR( CONF_ERROR(
"Error setting dss", "Error setting dss",
k, klen, v, vlen); k, klen, v, vlen);
} else { } else {
opt_dss = opt_dss =
dss_prec_names[i]; dss_prec_names[m];
match = true; match = true;
break; break;
} }
...@@ -1216,9 +1263,27 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1216,9 +1263,27 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
} }
CONF_CONTINUE; CONF_CONTINUE;
} }
CONF_HANDLE_UNSIGNED(opt_narenas, "narenas", 1, if (CONF_MATCH("narenas")) {
UINT_MAX, CONF_CHECK_MIN, CONF_DONT_CHECK_MAX, if (CONF_MATCH_VALUE("default")) {
false) opt_narenas = 0;
CONF_CONTINUE;
} else {
CONF_HANDLE_UNSIGNED(opt_narenas,
"narenas", 1, UINT_MAX,
CONF_CHECK_MIN, CONF_DONT_CHECK_MAX,
/* clip */ false)
}
}
if (CONF_MATCH("narenas_ratio")) {
char *end;
bool err = fxp_parse(&opt_narenas_ratio, v,
&end);
if (err || (size_t)(end - v) != vlen) {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
}
CONF_CONTINUE;
}
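/*
 * Editor's note (illustration only, not part of this commit): narenas_ratio
 * is a fixed-point value consumed by malloc_narenas_default() further below,
 * and it only matters when "narenas" itself is left at its default.  Assuming
 * the documented compile-time option string, it could be set with e.g.:
 *
 *	const char *malloc_conf = "narenas_ratio:2.5";
 *
 * or at run time via MALLOC_CONF="narenas_ratio:2.5".
 */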
if (CONF_MATCH("bin_shards")) { if (CONF_MATCH("bin_shards")) {
const char *bin_shards_segment_cur = v; const char *bin_shards_segment_cur = v;
size_t vlen_left = vlen; size_t vlen_left = vlen;
...@@ -1241,6 +1306,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1241,6 +1306,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
} while (vlen_left > 0); } while (vlen_left > 0);
CONF_CONTINUE; CONF_CONTINUE;
} }
CONF_HANDLE_INT64_T(opt_mutex_max_spin,
"mutex_max_spin", -1, INT64_MAX, CONF_CHECK_MIN,
CONF_DONT_CHECK_MAX, false);
CONF_HANDLE_SSIZE_T(opt_dirty_decay_ms, CONF_HANDLE_SSIZE_T(opt_dirty_decay_ms,
"dirty_decay_ms", -1, NSTIME_SEC_MAX * KQU(1000) < "dirty_decay_ms", -1, NSTIME_SEC_MAX * KQU(1000) <
QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) : QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) :
...@@ -1251,7 +1319,16 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1251,7 +1319,16 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
SSIZE_MAX); SSIZE_MAX);
CONF_HANDLE_BOOL(opt_stats_print, "stats_print") CONF_HANDLE_BOOL(opt_stats_print, "stats_print")
if (CONF_MATCH("stats_print_opts")) { if (CONF_MATCH("stats_print_opts")) {
init_opt_stats_print_opts(v, vlen); init_opt_stats_opts(v, vlen,
opt_stats_print_opts);
CONF_CONTINUE;
}
CONF_HANDLE_INT64_T(opt_stats_interval,
"stats_interval", -1, INT64_MAX,
CONF_CHECK_MIN, CONF_DONT_CHECK_MAX, false)
if (CONF_MATCH("stats_interval_opts")) {
init_opt_stats_opts(v, vlen,
opt_stats_interval_opts);
CONF_CONTINUE; CONF_CONTINUE;
} }
if (config_fill) { if (config_fill) {
...@@ -1287,9 +1364,61 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1287,9 +1364,61 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
if (config_xmalloc) { if (config_xmalloc) {
CONF_HANDLE_BOOL(opt_xmalloc, "xmalloc") CONF_HANDLE_BOOL(opt_xmalloc, "xmalloc")
} }
if (config_enable_cxx) {
CONF_HANDLE_BOOL(
opt_experimental_infallible_new,
"experimental_infallible_new")
}
CONF_HANDLE_BOOL(opt_tcache, "tcache") CONF_HANDLE_BOOL(opt_tcache, "tcache")
CONF_HANDLE_SSIZE_T(opt_lg_tcache_max, "lg_tcache_max", CONF_HANDLE_SIZE_T(opt_tcache_max, "tcache_max",
-1, (sizeof(size_t) << 3) - 1) 0, TCACHE_MAXCLASS_LIMIT, CONF_DONT_CHECK_MIN,
CONF_CHECK_MAX, /* clip */ true)
if (CONF_MATCH("lg_tcache_max")) {
size_t m;
CONF_VALUE_READ(size_t, m)
if (CONF_VALUE_READ_FAIL()) {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
} else {
/* clip if necessary */
if (m > TCACHE_LG_MAXCLASS_LIMIT) {
m = TCACHE_LG_MAXCLASS_LIMIT;
}
opt_tcache_max = (size_t)1 << m;
}
CONF_CONTINUE;
}
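/*
 * Editor's note (illustration only): lg_tcache_max is kept as a backward-
 * compatible alias for the new tcache_max option, so e.g. "lg_tcache_max:16"
 * and "tcache_max:65536" both set opt_tcache_max to 64 KiB (subject to the
 * TCACHE_LG_MAXCLASS_LIMIT clamp just above).
 */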
/*
* Anyone trying to set a value outside -16 to 16 is
* deeply confused.
*/
CONF_HANDLE_SSIZE_T(opt_lg_tcache_nslots_mul,
"lg_tcache_nslots_mul", -16, 16)
/* Ditto with values past 2048. */
CONF_HANDLE_UNSIGNED(opt_tcache_nslots_small_min,
"tcache_nslots_small_min", 1, 2048,
CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)
CONF_HANDLE_UNSIGNED(opt_tcache_nslots_small_max,
"tcache_nslots_small_max", 1, 2048,
CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)
CONF_HANDLE_UNSIGNED(opt_tcache_nslots_large,
"tcache_nslots_large", 1, 2048,
CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)
CONF_HANDLE_SIZE_T(opt_tcache_gc_incr_bytes,
"tcache_gc_incr_bytes", 1024, SIZE_T_MAX,
CONF_CHECK_MIN, CONF_DONT_CHECK_MAX,
/* clip */ true)
CONF_HANDLE_SIZE_T(opt_tcache_gc_delay_bytes,
"tcache_gc_delay_bytes", 0, SIZE_T_MAX,
CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX,
/* clip */ false)
CONF_HANDLE_UNSIGNED(opt_lg_tcache_flush_small_div,
"lg_tcache_flush_small_div", 1, 16,
CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)
CONF_HANDLE_UNSIGNED(opt_lg_tcache_flush_large_div,
"lg_tcache_flush_large_div", 1, 16,
CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)
/* /*
* The runtime option of oversize_threshold remains * The runtime option of oversize_threshold remains
...@@ -1309,16 +1438,16 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1309,16 +1438,16 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
if (strncmp("percpu_arena", k, klen) == 0) { if (strncmp("percpu_arena", k, klen) == 0) {
bool match = false; bool match = false;
for (int i = percpu_arena_mode_names_base; i < for (int m = percpu_arena_mode_names_base; m <
percpu_arena_mode_names_limit; i++) { percpu_arena_mode_names_limit; m++) {
if (strncmp(percpu_arena_mode_names[i], if (strncmp(percpu_arena_mode_names[m],
v, vlen) == 0) { v, vlen) == 0) {
if (!have_percpu_arena) { if (!have_percpu_arena) {
CONF_ERROR( CONF_ERROR(
"No getcpu support", "No getcpu support",
k, klen, v, vlen); k, klen, v, vlen);
} }
opt_percpu_arena = i; opt_percpu_arena = m;
match = true; match = true;
break; break;
} }
...@@ -1336,7 +1465,83 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1336,7 +1465,83 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
opt_max_background_threads, opt_max_background_threads,
CONF_CHECK_MIN, CONF_CHECK_MAX, CONF_CHECK_MIN, CONF_CHECK_MAX,
true); true);
CONF_HANDLE_BOOL(opt_hpa, "hpa")
CONF_HANDLE_SIZE_T(opt_hpa_opts.slab_max_alloc,
"hpa_slab_max_alloc", PAGE, HUGEPAGE,
CONF_CHECK_MIN, CONF_CHECK_MAX, true);
/*
* Accept either a ratio-based or an exact hugification
* threshold.
*/
CONF_HANDLE_SIZE_T(opt_hpa_opts.hugification_threshold,
"hpa_hugification_threshold", PAGE, HUGEPAGE,
CONF_CHECK_MIN, CONF_CHECK_MAX, true);
if (CONF_MATCH("hpa_hugification_threshold_ratio")) {
fxp_t ratio;
char *end;
bool err = fxp_parse(&ratio, v,
&end);
if (err || (size_t)(end - v) != vlen
|| ratio > FXP_INIT_INT(1)) {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
} else {
opt_hpa_opts.hugification_threshold =
fxp_mul_frac(HUGEPAGE, ratio);
}
CONF_CONTINUE;
}
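/*
 * Editor's note (illustration only, not part of this commit): assuming a
 * 2 MiB HUGEPAGE, a setting such as
 *
 *	const char *malloc_conf = "hpa_hugification_threshold_ratio:0.95";
 *
 * resolves to fxp_mul_frac(HUGEPAGE, ratio), i.e. roughly 1.9 MiB of active
 * bytes before a pageslab is considered for hugification; ratios above 1.0
 * are rejected by the check above.
 */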
CONF_HANDLE_UINT64_T(
opt_hpa_opts.hugify_delay_ms, "hpa_hugify_delay_ms",
0, 0, CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX,
false);
CONF_HANDLE_UINT64_T(
opt_hpa_opts.min_purge_interval_ms,
"hpa_min_purge_interval_ms", 0, 0,
CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false);
if (CONF_MATCH("hpa_dirty_mult")) {
if (CONF_MATCH_VALUE("-1")) {
opt_hpa_opts.dirty_mult = (fxp_t)-1;
CONF_CONTINUE;
}
fxp_t ratio;
char *end;
bool err = fxp_parse(&ratio, v,
&end);
if (err || (size_t)(end - v) != vlen) {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
} else {
opt_hpa_opts.dirty_mult = ratio;
}
CONF_CONTINUE;
}
CONF_HANDLE_SIZE_T(opt_hpa_sec_opts.nshards,
"hpa_sec_nshards", 0, 0, CONF_CHECK_MIN,
CONF_DONT_CHECK_MAX, true);
CONF_HANDLE_SIZE_T(opt_hpa_sec_opts.max_alloc,
"hpa_sec_max_alloc", PAGE, 0, CONF_CHECK_MIN,
CONF_DONT_CHECK_MAX, true);
CONF_HANDLE_SIZE_T(opt_hpa_sec_opts.max_bytes,
"hpa_sec_max_bytes", PAGE, 0, CONF_CHECK_MIN,
CONF_DONT_CHECK_MAX, true);
CONF_HANDLE_SIZE_T(opt_hpa_sec_opts.bytes_after_flush,
"hpa_sec_bytes_after_flush", PAGE, 0,
CONF_CHECK_MIN, CONF_DONT_CHECK_MAX, true);
CONF_HANDLE_SIZE_T(opt_hpa_sec_opts.batch_fill_extra,
"hpa_sec_batch_fill_extra", 0, HUGEPAGE_PAGES,
CONF_CHECK_MIN, CONF_CHECK_MAX, true);
if (CONF_MATCH("slab_sizes")) { if (CONF_MATCH("slab_sizes")) {
if (CONF_MATCH_VALUE("default")) {
sc_data_init(sc_data);
CONF_CONTINUE;
}
bool err; bool err;
const char *slab_size_segment_cur = v; const char *slab_size_segment_cur = v;
size_t vlen_left = vlen; size_t vlen_left = vlen;
...@@ -1378,7 +1583,44 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1378,7 +1583,44 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
CONF_HANDLE_BOOL(opt_prof_gdump, "prof_gdump") CONF_HANDLE_BOOL(opt_prof_gdump, "prof_gdump")
CONF_HANDLE_BOOL(opt_prof_final, "prof_final") CONF_HANDLE_BOOL(opt_prof_final, "prof_final")
CONF_HANDLE_BOOL(opt_prof_leak, "prof_leak") CONF_HANDLE_BOOL(opt_prof_leak, "prof_leak")
CONF_HANDLE_BOOL(opt_prof_leak_error,
"prof_leak_error")
CONF_HANDLE_BOOL(opt_prof_log, "prof_log") CONF_HANDLE_BOOL(opt_prof_log, "prof_log")
CONF_HANDLE_SSIZE_T(opt_prof_recent_alloc_max,
"prof_recent_alloc_max", -1, SSIZE_MAX)
CONF_HANDLE_BOOL(opt_prof_stats, "prof_stats")
CONF_HANDLE_BOOL(opt_prof_sys_thread_name,
"prof_sys_thread_name")
if (CONF_MATCH("prof_time_resolution")) {
if (CONF_MATCH_VALUE("default")) {
opt_prof_time_res =
prof_time_res_default;
} else if (CONF_MATCH_VALUE("high")) {
if (!config_high_res_timer) {
CONF_ERROR(
"No high resolution"
" timer support",
k, klen, v, vlen);
} else {
opt_prof_time_res =
prof_time_res_high;
}
} else {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
}
CONF_CONTINUE;
}
/*
* Undocumented. When set to false, don't
* correct for an unbiasing bug in jeprof
* attribution. This can be handy if you want
* to get consistent numbers from your binary
* across different jemalloc versions, even if
* those numbers are incorrect. The default is
* true.
*/
CONF_HANDLE_BOOL(opt_prof_unbias, "prof_unbias")
} }
if (config_log) { if (config_log) {
if (CONF_MATCH("log")) { if (CONF_MATCH("log")) {
...@@ -1392,15 +1634,15 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1392,15 +1634,15 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
} }
if (CONF_MATCH("thp")) { if (CONF_MATCH("thp")) {
bool match = false; bool match = false;
for (int i = 0; i < thp_mode_names_limit; i++) { for (int m = 0; m < thp_mode_names_limit; m++) {
if (strncmp(thp_mode_names[i],v, vlen) if (strncmp(thp_mode_names[m],v, vlen)
== 0) { == 0) {
if (!have_madvise_huge) { if (!have_madvise_huge && !have_memcntl) {
CONF_ERROR( CONF_ERROR(
"No THP support", "No THP support",
k, klen, v, vlen); k, klen, v, vlen);
} }
opt_thp = i; opt_thp = m;
match = true; match = true;
break; break;
} }
...@@ -1411,6 +1653,55 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1411,6 +1653,55 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
} }
CONF_CONTINUE; CONF_CONTINUE;
} }
if (CONF_MATCH("zero_realloc")) {
if (CONF_MATCH_VALUE("alloc")) {
opt_zero_realloc_action
= zero_realloc_action_alloc;
} else if (CONF_MATCH_VALUE("free")) {
opt_zero_realloc_action
= zero_realloc_action_free;
} else if (CONF_MATCH_VALUE("abort")) {
opt_zero_realloc_action
= zero_realloc_action_abort;
} else {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
}
CONF_CONTINUE;
}
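/*
 * Editor's note (illustration only, not part of this commit): the three
 * accepted values determine what realloc(ptr, 0) does -- "alloc" behaves like
 * an ordinary (minimum-size) reallocation, "free" deallocates ptr and returns
 * NULL, and "abort" terminates the process.  For example:
 *
 *	const char *malloc_conf = "zero_realloc:free";
 */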
if (config_uaf_detection &&
CONF_MATCH("lg_san_uaf_align")) {
ssize_t a;
CONF_VALUE_READ(ssize_t, a)
if (CONF_VALUE_READ_FAIL() || a < -1) {
CONF_ERROR("Invalid conf value",
k, klen, v, vlen);
}
if (a == -1) {
opt_lg_san_uaf_align = -1;
CONF_CONTINUE;
}
/* clip if necessary */
ssize_t max_allowed = (sizeof(size_t) << 3) - 1;
ssize_t min_allowed = LG_PAGE;
if (a > max_allowed) {
a = max_allowed;
} else if (a < min_allowed) {
a = min_allowed;
}
opt_lg_san_uaf_align = a;
CONF_CONTINUE;
}
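/*
 * Editor's note (worked example, not part of this commit): on a 64-bit build
 * with 4 KiB pages, max_allowed is 63 and min_allowed (LG_PAGE) is 12, so
 * "lg_san_uaf_align:8" is clipped up to 12 (4 KiB alignment), while
 * "lg_san_uaf_align:-1" disables the use-after-free alignment entirely via
 * the early-out above.
 */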
CONF_HANDLE_SIZE_T(opt_san_guard_small,
"san_guard_small", 0, SIZE_T_MAX,
CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false)
CONF_HANDLE_SIZE_T(opt_san_guard_large,
"san_guard_large", 0, SIZE_T_MAX,
CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false)
CONF_ERROR("Invalid conf pair", k, klen, v, vlen); CONF_ERROR("Invalid conf pair", k, klen, v, vlen);
#undef CONF_ERROR #undef CONF_ERROR
#undef CONF_CONTINUE #undef CONF_CONTINUE
...@@ -1421,7 +1712,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1421,7 +1712,9 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
#undef CONF_CHECK_MIN #undef CONF_CHECK_MIN
#undef CONF_DONT_CHECK_MAX #undef CONF_DONT_CHECK_MAX
#undef CONF_CHECK_MAX #undef CONF_CHECK_MAX
#undef CONF_HANDLE_T
#undef CONF_HANDLE_T_U #undef CONF_HANDLE_T_U
#undef CONF_HANDLE_T_SIGNED
#undef CONF_HANDLE_UNSIGNED #undef CONF_HANDLE_UNSIGNED
#undef CONF_HANDLE_SIZE_T #undef CONF_HANDLE_SIZE_T
#undef CONF_HANDLE_SSIZE_T #undef CONF_HANDLE_SSIZE_T
...@@ -1436,15 +1729,33 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS], ...@@ -1436,15 +1729,33 @@ malloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],
atomic_store_b(&log_init_done, true, ATOMIC_RELEASE); atomic_store_b(&log_init_done, true, ATOMIC_RELEASE);
} }
static bool
malloc_conf_init_check_deps(void) {
if (opt_prof_leak_error && !opt_prof_final) {
malloc_printf("<jemalloc>: prof_leak_error is set w/o "
"prof_final.\n");
return true;
}
return false;
}
static void static void
malloc_conf_init(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS]) { malloc_conf_init(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS]) {
const char *opts_cache[MALLOC_CONF_NSOURCES] = {NULL, NULL, NULL, NULL}; const char *opts_cache[MALLOC_CONF_NSOURCES] = {NULL, NULL, NULL, NULL,
NULL};
char buf[PATH_MAX + 1]; char buf[PATH_MAX + 1];
/* The first call only sets the confirm_conf option and opts_cache */ /* The first call only sets the confirm_conf option and opts_cache */
malloc_conf_init_helper(NULL, NULL, true, opts_cache, buf); malloc_conf_init_helper(NULL, NULL, true, opts_cache, buf);
malloc_conf_init_helper(sc_data, bin_shard_sizes, false, opts_cache, malloc_conf_init_helper(sc_data, bin_shard_sizes, false, opts_cache,
NULL); NULL);
if (malloc_conf_init_check_deps()) {
/* check_deps does warning msg only; abort below if needed. */
if (opt_abort_conf) {
malloc_abort_invalid_conf();
}
}
} }
#undef MALLOC_CONF_NSOURCES #undef MALLOC_CONF_NSOURCES
...@@ -1488,8 +1799,8 @@ malloc_init_hard_a0_locked() { ...@@ -1488,8 +1799,8 @@ malloc_init_hard_a0_locked() {
* Ordering here is somewhat tricky; we need sc_boot() first, since that * Ordering here is somewhat tricky; we need sc_boot() first, since that
* determines what the size classes will be, and then * determines what the size classes will be, and then
* malloc_conf_init(), since any slab size tweaking will need to be done * malloc_conf_init(), since any slab size tweaking will need to be done
* before sz_boot and bin_boot, which assume that the values they read * before sz_boot and bin_info_boot, which assume that the values they
* out of sc_data_global are final. * read out of sc_data_global are final.
*/ */
sc_boot(&sc_data); sc_boot(&sc_data);
unsigned bin_shard_sizes[SC_NBINS]; unsigned bin_shard_sizes[SC_NBINS];
...@@ -1503,8 +1814,9 @@ malloc_init_hard_a0_locked() { ...@@ -1503,8 +1814,9 @@ malloc_init_hard_a0_locked() {
prof_boot0(); prof_boot0();
} }
malloc_conf_init(&sc_data, bin_shard_sizes); malloc_conf_init(&sc_data, bin_shard_sizes);
sz_boot(&sc_data); san_init(opt_lg_san_uaf_align);
bin_boot(&sc_data, bin_shard_sizes); sz_boot(&sc_data, opt_cache_oblivious);
bin_info_boot(&sc_data, bin_shard_sizes);
if (opt_stats_print) { if (opt_stats_print) {
/* Print statistics at exit. */ /* Print statistics at exit. */
...@@ -1515,12 +1827,20 @@ malloc_init_hard_a0_locked() { ...@@ -1515,12 +1827,20 @@ malloc_init_hard_a0_locked() {
} }
} }
} }
if (stats_boot()) {
return true;
}
if (pages_boot()) { if (pages_boot()) {
return true; return true;
} }
if (base_boot(TSDN_NULL)) { if (base_boot(TSDN_NULL)) {
return true; return true;
} }
/* emap_global is static, hence zeroed. */
if (emap_init(&arena_emap_global, b0get(), /* zeroed */ true)) {
return true;
}
if (extent_boot()) { if (extent_boot()) {
return true; return true;
} }
...@@ -1530,8 +1850,20 @@ malloc_init_hard_a0_locked() { ...@@ -1530,8 +1850,20 @@ malloc_init_hard_a0_locked() {
if (config_prof) { if (config_prof) {
prof_boot1(); prof_boot1();
} }
arena_boot(&sc_data); if (opt_hpa && !hpa_supported()) {
if (tcache_boot(TSDN_NULL)) { malloc_printf("<jemalloc>: HPA not supported in the current "
"configuration; %s.",
opt_abort_conf ? "aborting" : "disabling");
if (opt_abort_conf) {
malloc_abort_invalid_conf();
} else {
opt_hpa = false;
}
}
if (arena_boot(&sc_data, b0get(), opt_hpa)) {
return true;
}
if (tcache_boot(TSDN_NULL, b0get())) {
return true; return true;
} }
if (malloc_mutex_init(&arenas_lock, "arenas", WITNESS_RANK_ARENAS, if (malloc_mutex_init(&arenas_lock, "arenas", WITNESS_RANK_ARENAS,
...@@ -1550,11 +1882,29 @@ malloc_init_hard_a0_locked() { ...@@ -1550,11 +1882,29 @@ malloc_init_hard_a0_locked() {
* Initialize one arena here. The rest are lazily created in * Initialize one arena here. The rest are lazily created in
* arena_choose_hard(). * arena_choose_hard().
*/ */
if (arena_init(TSDN_NULL, 0, (extent_hooks_t *)&extent_hooks_default) if (arena_init(TSDN_NULL, 0, &arena_config_default) == NULL) {
== NULL) {
return true; return true;
} }
a0 = arena_get(TSDN_NULL, 0, false); a0 = arena_get(TSDN_NULL, 0, false);
if (opt_hpa && !hpa_supported()) {
malloc_printf("<jemalloc>: HPA not supported in the current "
"configuration; %s.",
opt_abort_conf ? "aborting" : "disabling");
if (opt_abort_conf) {
malloc_abort_invalid_conf();
} else {
opt_hpa = false;
}
} else if (opt_hpa) {
hpa_shard_opts_t hpa_shard_opts = opt_hpa_opts;
hpa_shard_opts.deferral_allowed = background_thread_enabled();
if (pa_shard_enable_hpa(TSDN_NULL, &a0->pa_shard,
&hpa_shard_opts, &opt_hpa_sec_opts)) {
return true;
}
}
malloc_init_state = malloc_init_a0_initialized; malloc_init_state = malloc_init_a0_initialized;
return false; return false;
...@@ -1576,6 +1926,29 @@ malloc_init_hard_recursible(void) { ...@@ -1576,6 +1926,29 @@ malloc_init_hard_recursible(void) {
malloc_init_state = malloc_init_recursible; malloc_init_state = malloc_init_recursible;
ncpus = malloc_ncpus(); ncpus = malloc_ncpus();
if (opt_percpu_arena != percpu_arena_disabled) {
bool cpu_count_is_deterministic =
malloc_cpu_count_is_deterministic();
if (!cpu_count_is_deterministic) {
/*
* If the # of CPUs is not deterministic and narenas is not
* specified, disable the per-CPU arena mode, since CPU IDs
* may not be detected properly.
*/
if (opt_narenas == 0) {
opt_percpu_arena = percpu_arena_disabled;
malloc_write("<jemalloc>: Number of CPUs "
"detected is not deterministic. Per-CPU "
"arena disabled.\n");
if (opt_abort_conf) {
malloc_abort_invalid_conf();
}
if (opt_abort) {
abort();
}
}
}
}
#if (defined(JEMALLOC_HAVE_PTHREAD_ATFORK) && !defined(JEMALLOC_MUTEX_INIT_CB) \ #if (defined(JEMALLOC_HAVE_PTHREAD_ATFORK) && !defined(JEMALLOC_MUTEX_INIT_CB) \
&& !defined(JEMALLOC_ZONE) && !defined(_WIN32) && \ && !defined(JEMALLOC_ZONE) && !defined(_WIN32) && \
...@@ -1606,7 +1979,13 @@ malloc_narenas_default(void) { ...@@ -1606,7 +1979,13 @@ malloc_narenas_default(void) {
* default. * default.
*/ */
if (ncpus > 1) { if (ncpus > 1) {
return ncpus << 2; fxp_t fxp_ncpus = FXP_INIT_INT(ncpus);
fxp_t goal = fxp_mul(fxp_ncpus, opt_narenas_ratio);
uint32_t int_goal = fxp_round_nearest(goal);
if (int_goal == 0) {
return 1;
}
return int_goal;
} else { } else {
return 1; return 1;
} }
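/*
 * Editor's note (worked example, not part of this commit): assuming the
 * default narenas_ratio of 4 -- chosen to preserve the previous
 * "ncpus << 2" behavior -- ncpus == 6 gives goal == FXP_INIT_INT(24) and
 * int_goal == 24; with e.g. narenas_ratio:0.1 and ncpus == 2 the product
 * rounds to 0, and the int_goal == 0 guard returns 1 instead.
 */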
...@@ -1765,10 +2144,11 @@ malloc_init_hard(void) { ...@@ -1765,10 +2144,11 @@ malloc_init_hard(void) {
/* Set reentrancy level to 1 during init. */ /* Set reentrancy level to 1 during init. */
pre_reentrancy(tsd, NULL); pre_reentrancy(tsd, NULL);
/* Initialize narenas before prof_boot2 (for allocation). */ /* Initialize narenas before prof_boot2 (for allocation). */
if (malloc_init_narenas() || background_thread_boot1(tsd_tsdn(tsd))) { if (malloc_init_narenas()
|| background_thread_boot1(tsd_tsdn(tsd), b0get())) {
UNLOCK_RETURN(tsd_tsdn(tsd), true, true) UNLOCK_RETURN(tsd_tsdn(tsd), true, true)
} }
if (config_prof && prof_boot2(tsd)) { if (config_prof && prof_boot2(tsd, b0get())) {
UNLOCK_RETURN(tsd_tsdn(tsd), true, true) UNLOCK_RETURN(tsd_tsdn(tsd), true, true)
} }
...@@ -1907,38 +2287,107 @@ dynamic_opts_init(dynamic_opts_t *dynamic_opts) { ...@@ -1907,38 +2287,107 @@ dynamic_opts_init(dynamic_opts_t *dynamic_opts) {
dynamic_opts->arena_ind = ARENA_IND_AUTOMATIC; dynamic_opts->arena_ind = ARENA_IND_AUTOMATIC;
} }
/* ind is ignored if dopts->alignment > 0. */ /*
JEMALLOC_ALWAYS_INLINE void * * ind parameter is optional and is only checked and filled if alignment == 0;
imalloc_no_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd, * return true if result is out of range.
size_t size, size_t usize, szind_t ind) { */
tcache_t *tcache; JEMALLOC_ALWAYS_INLINE bool
arena_t *arena; aligned_usize_get(size_t size, size_t alignment, size_t *usize, szind_t *ind,
bool bump_empty_aligned_alloc) {
/* Fill in the tcache. */ assert(usize != NULL);
if (dopts->tcache_ind == TCACHE_IND_AUTOMATIC) { if (alignment == 0) {
if (likely(!sopts->slow)) { if (ind != NULL) {
/* Getting tcache ptr unconditionally. */ *ind = sz_size2index(size);
tcache = tsd_tcachep_get(tsd); if (unlikely(*ind >= SC_NSIZES)) {
assert(tcache == tcache_get(tsd)); return true;
} else { }
tcache = tcache_get(tsd); *usize = sz_index2size(*ind);
assert(*usize > 0 && *usize <= SC_LARGE_MAXCLASS);
return false;
}
*usize = sz_s2u(size);
} else {
if (bump_empty_aligned_alloc && unlikely(size == 0)) {
size = 1;
}
*usize = sz_sa2u(size, alignment);
}
if (unlikely(*usize == 0 || *usize > SC_LARGE_MAXCLASS)) {
return true;
}
return false;
}
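/*
 * Editor's note (usage sketch, not part of this commit): callers treat a true
 * return as an overflow/OOM condition, mirroring the call in imalloc_body()
 * below:
 *
 *	size_t usize;
 *	szind_t ind = 0;
 *	if (aligned_usize_get(size, dopts->alignment, &usize, &ind,
 *	    sopts->bump_empty_aligned_alloc)) {
 *		goto label_oom;
 *	}
 *
 * When alignment != 0, ind is left untouched and only usize is filled in.
 */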
JEMALLOC_ALWAYS_INLINE bool
zero_get(bool guarantee, bool slow) {
if (config_fill && slow && unlikely(opt_zero)) {
return true;
} else {
return guarantee;
}
}
JEMALLOC_ALWAYS_INLINE tcache_t *
tcache_get_from_ind(tsd_t *tsd, unsigned tcache_ind, bool slow, bool is_alloc) {
tcache_t *tcache;
if (tcache_ind == TCACHE_IND_AUTOMATIC) {
if (likely(!slow)) {
/* Getting tcache ptr unconditionally. */
tcache = tsd_tcachep_get(tsd);
assert(tcache == tcache_get(tsd));
} else if (is_alloc ||
likely(tsd_reentrancy_level_get(tsd) == 0)) {
tcache = tcache_get(tsd);
} else {
tcache = NULL;
} }
} else if (dopts->tcache_ind == TCACHE_IND_NONE) { } else {
/*
* Should not specify tcache on deallocation path when being
* reentrant.
*/
assert(is_alloc || tsd_reentrancy_level_get(tsd) == 0 ||
tsd_state_nocleanup(tsd));
if (tcache_ind == TCACHE_IND_NONE) {
tcache = NULL; tcache = NULL;
} else { } else {
tcache = tcaches_get(tsd, dopts->tcache_ind); tcache = tcaches_get(tsd, tcache_ind);
}
} }
return tcache;
}
/* Fill in the arena. */ /* Return true if a manual arena is specified and arena_get() OOMs. */
if (dopts->arena_ind == ARENA_IND_AUTOMATIC) { JEMALLOC_ALWAYS_INLINE bool
arena_get_from_ind(tsd_t *tsd, unsigned arena_ind, arena_t **arena_p) {
if (arena_ind == ARENA_IND_AUTOMATIC) {
/* /*
* In case of automatic arena management, we defer arena * In case of automatic arena management, we defer arena
* computation until as late as we can, hoping to fill the * computation until as late as we can, hoping to fill the
* allocation out of the tcache. * allocation out of the tcache.
*/ */
arena = NULL; *arena_p = NULL;
} else { } else {
arena = arena_get(tsd_tsdn(tsd), dopts->arena_ind, true); *arena_p = arena_get(tsd_tsdn(tsd), arena_ind, true);
if (unlikely(*arena_p == NULL) && arena_ind >= narenas_auto) {
return true;
}
}
return false;
}
/* ind is ignored if dopts->alignment > 0. */
JEMALLOC_ALWAYS_INLINE void *
imalloc_no_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd,
size_t size, size_t usize, szind_t ind) {
/* Fill in the tcache. */
tcache_t *tcache = tcache_get_from_ind(tsd, dopts->tcache_ind,
sopts->slow, /* is_alloc */ true);
/* Fill in the arena. */
arena_t *arena;
if (arena_get_from_ind(tsd, dopts->arena_ind, &arena)) {
return NULL;
} }
if (unlikely(dopts->alignment != 0)) { if (unlikely(dopts->alignment != 0)) {
...@@ -1962,6 +2411,7 @@ imalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd, ...@@ -1962,6 +2411,7 @@ imalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd,
szind_t ind_large; szind_t ind_large;
size_t bumped_usize = usize; size_t bumped_usize = usize;
dopts->alignment = prof_sample_align(dopts->alignment);
if (usize <= SC_SMALL_MAXCLASS) { if (usize <= SC_SMALL_MAXCLASS) {
assert(((dopts->alignment == 0) ? assert(((dopts->alignment == 0) ?
sz_s2u(SC_LARGE_MINCLASS) : sz_s2u(SC_LARGE_MINCLASS) :
...@@ -1978,6 +2428,7 @@ imalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd, ...@@ -1978,6 +2428,7 @@ imalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd,
} else { } else {
ret = imalloc_no_sample(sopts, dopts, tsd, usize, usize, ind); ret = imalloc_no_sample(sopts, dopts, tsd, usize, usize, ind);
} }
assert(prof_sample_aligned(ret));
return ret; return ret;
} }
...@@ -2031,16 +2482,14 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { ...@@ -2031,16 +2482,14 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {
/* Filled in by compute_size_with_overflow below. */ /* Filled in by compute_size_with_overflow below. */
size_t size = 0; size_t size = 0;
/* /*
* For unaligned allocations, we need only ind. For aligned * The zero initialization for ind is actually a dead store, in that its
* allocations, or in case of stats or profiling we need usize. * value is reset before any branch on its value is taken. Sometimes
* * though, it's convenient to pass it as an argument before this point.
* These are actually dead stores, in that their values are reset before * To avoid undefined behavior then, we initialize it with dummy stores.
* any branch on their value is taken. Sometimes though, it's
* convenient to pass them as arguments before this point. To avoid
* undefined behavior then, we initialize them with dummy stores.
*/ */
szind_t ind = 0; szind_t ind = 0;
size_t usize = 0; /* usize will always be properly initialized. */
size_t usize;
/* Reentrancy is only checked on slow path. */ /* Reentrancy is only checked on slow path. */
int8_t reentrancy_level; int8_t reentrancy_level;
...@@ -2057,31 +2506,12 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { ...@@ -2057,31 +2506,12 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {
} }
/* This is the beginning of the "core" algorithm. */ /* This is the beginning of the "core" algorithm. */
dopts->zero = zero_get(dopts->zero, sopts->slow);
if (dopts->alignment == 0) { if (aligned_usize_get(size, dopts->alignment, &usize, &ind,
ind = sz_size2index(size); sopts->bump_empty_aligned_alloc)) {
if (unlikely(ind >= SC_NSIZES)) {
goto label_oom; goto label_oom;
} }
if (config_stats || (config_prof && opt_prof) || sopts->usize) {
usize = sz_index2size(ind);
dopts->usize = usize;
assert(usize > 0 && usize
<= SC_LARGE_MAXCLASS);
}
} else {
if (sopts->bump_empty_aligned_alloc) {
if (unlikely(size == 0)) {
size = 1;
}
}
usize = sz_sa2u(size, dopts->alignment);
dopts->usize = usize; dopts->usize = usize;
if (unlikely(usize == 0
|| usize > SC_LARGE_MAXCLASS)) {
goto label_oom;
}
}
/* Validate the user input. */ /* Validate the user input. */
if (sopts->assert_nonempty_alloc) { if (sopts->assert_nonempty_alloc) {
assert (size != 0); assert (size != 0);
...@@ -2107,26 +2537,25 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { ...@@ -2107,26 +2537,25 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {
dopts->arena_ind = 0; dopts->arena_ind = 0;
} }
/* If profiling is on, get our profiling context. */
if (config_prof && opt_prof) {
/* /*
* Note that if we're going down this path, usize must have been * If dopts->alignment > 0, then ind is still 0, but usize was computed
* initialized in the previous if statement. * in the previous if statement. Down the positive alignment path,
* imalloc_no_sample and imalloc_sample will ignore ind.
*/ */
prof_tctx_t *tctx = prof_alloc_prep(
tsd, usize, prof_active_get_unlocked(), true);
alloc_ctx_t alloc_ctx; /* If profiling is on, get our profiling context. */
if (config_prof && opt_prof) {
bool prof_active = prof_active_get_unlocked();
bool sample_event = te_prof_sample_event_lookahead(tsd, usize);
prof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active,
sample_event);
emap_alloc_ctx_t alloc_ctx;
if (likely((uintptr_t)tctx == (uintptr_t)1U)) { if (likely((uintptr_t)tctx == (uintptr_t)1U)) {
alloc_ctx.slab = (usize alloc_ctx.slab = (usize <= SC_SMALL_MAXCLASS);
<= SC_SMALL_MAXCLASS);
allocation = imalloc_no_sample( allocation = imalloc_no_sample(
sopts, dopts, tsd, usize, usize, ind); sopts, dopts, tsd, usize, usize, ind);
} else if ((uintptr_t)tctx > (uintptr_t)1U) { } else if ((uintptr_t)tctx > (uintptr_t)1U) {
/*
* Note that ind might still be 0 here. This is fine;
* imalloc_sample ignores ind if dopts->alignment > 0.
*/
allocation = imalloc_sample( allocation = imalloc_sample(
sopts, dopts, tsd, usize, ind); sopts, dopts, tsd, usize, ind);
alloc_ctx.slab = false; alloc_ctx.slab = false;
...@@ -2135,17 +2564,12 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { ...@@ -2135,17 +2564,12 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {
} }
if (unlikely(allocation == NULL)) { if (unlikely(allocation == NULL)) {
prof_alloc_rollback(tsd, tctx, true); prof_alloc_rollback(tsd, tctx);
goto label_oom; goto label_oom;
} }
prof_malloc(tsd_tsdn(tsd), allocation, usize, &alloc_ctx, tctx); prof_malloc(tsd, allocation, size, usize, &alloc_ctx, tctx);
} else { } else {
/* assert(!opt_prof);
* If dopts->alignment > 0, then ind is still 0, but usize was
* computed in the previous if statement. Down the positive
* alignment path, imalloc_no_sample ignores ind and size
* (relying only on usize).
*/
allocation = imalloc_no_sample(sopts, dopts, tsd, size, usize, allocation = imalloc_no_sample(sopts, dopts, tsd, size, usize,
ind); ind);
if (unlikely(allocation == NULL)) { if (unlikely(allocation == NULL)) {
...@@ -2157,12 +2581,17 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { ...@@ -2157,12 +2581,17 @@ imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {
* Allocation has been done at this point. We still have some * Allocation has been done at this point. We still have some
* post-allocation work to do though. * post-allocation work to do though.
*/ */
thread_alloc_event(tsd, usize);
assert(dopts->alignment == 0 assert(dopts->alignment == 0
|| ((uintptr_t)allocation & (dopts->alignment - 1)) == ZU(0)); || ((uintptr_t)allocation & (dopts->alignment - 1)) == ZU(0));
if (config_stats) {
assert(usize == isalloc(tsd_tsdn(tsd), allocation)); assert(usize == isalloc(tsd_tsdn(tsd), allocation));
*tsd_thread_allocatedp_get(tsd) += usize;
if (config_fill && sopts->slow && !dopts->zero
&& unlikely(opt_junk_alloc)) {
junk_alloc_callback(allocation, usize);
} }
if (sopts->slow) { if (sopts->slow) {
...@@ -2273,7 +2702,11 @@ malloc_default(size_t size) { ...@@ -2273,7 +2702,11 @@ malloc_default(size_t size) {
static_opts_t sopts; static_opts_t sopts;
dynamic_opts_t dopts; dynamic_opts_t dopts;
LOG("core.malloc.entry", "size: %zu", size); /*
* This variant has a logging hook on exit but not on entry. It's called
* only by je_malloc, below, which emits the entry one for us (and, if
* it calls us, does so only via tail call).
*/
static_opts_init(&sopts); static_opts_init(&sopts);
dynamic_opts_init(&dopts); dynamic_opts_init(&dopts);
...@@ -2306,86 +2739,11 @@ malloc_default(size_t size) { ...@@ -2306,86 +2739,11 @@ malloc_default(size_t size) {
* Begin malloc(3)-compatible functions. * Begin malloc(3)-compatible functions.
*/ */
/*
* malloc() fastpath.
*
* Fastpath assumes size <= SC_LOOKUP_MAXCLASS, and that we hit
* tcache. If either of these is false, we tail-call to the slowpath,
* malloc_default(). Tail-calling is used to avoid any caller-saved
* registers.
*
* fastpath supports ticker and profiling, both of which will also
* tail-call to the slowpath if they fire.
*/
JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN
void JEMALLOC_NOTHROW * void JEMALLOC_NOTHROW *
JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1) JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1)
je_malloc(size_t size) { je_malloc(size_t size) {
LOG("core.malloc.entry", "size: %zu", size); return imalloc_fastpath(size, &malloc_default);
if (tsd_get_allocates() && unlikely(!malloc_initialized())) {
return malloc_default(size);
}
tsd_t *tsd = tsd_get(false);
if (unlikely(!tsd || !tsd_fast(tsd) || (size > SC_LOOKUP_MAXCLASS))) {
return malloc_default(size);
}
tcache_t *tcache = tsd_tcachep_get(tsd);
if (unlikely(ticker_trytick(&tcache->gc_ticker))) {
return malloc_default(size);
}
szind_t ind = sz_size2index_lookup(size);
size_t usize;
if (config_stats || config_prof) {
usize = sz_index2size(ind);
}
/* Fast path relies on size being a bin. I.e. SC_LOOKUP_MAXCLASS < SC_SMALL_MAXCLASS */
assert(ind < SC_NBINS);
assert(size <= SC_SMALL_MAXCLASS);
if (config_prof) {
int64_t bytes_until_sample = tsd_bytes_until_sample_get(tsd);
bytes_until_sample -= usize;
tsd_bytes_until_sample_set(tsd, bytes_until_sample);
if (unlikely(bytes_until_sample < 0)) {
/*
* Avoid a prof_active check on the fastpath.
* If prof_active is false, set bytes_until_sample to
* a large value. If prof_active is set to true,
* bytes_until_sample will be reset.
*/
if (!prof_active) {
tsd_bytes_until_sample_set(tsd, SSIZE_MAX);
}
return malloc_default(size);
}
}
cache_bin_t *bin = tcache_small_bin_get(tcache, ind);
bool tcache_success;
void* ret = cache_bin_alloc_easy(bin, &tcache_success);
if (tcache_success) {
if (config_stats) {
*tsd_thread_allocatedp_get(tsd) += usize;
bin->tstats.nrequests++;
}
if (config_prof) {
tcache->prof_accumbytes += usize;
}
LOG("core.malloc.exit", "result: %p", ret);
/* Fastpath success */
return ret;
}
return malloc_default(size);
} }
JEMALLOC_EXPORT int JEMALLOC_NOTHROW JEMALLOC_EXPORT int JEMALLOC_NOTHROW
...@@ -2502,56 +2860,6 @@ je_calloc(size_t num, size_t size) { ...@@ -2502,56 +2860,6 @@ je_calloc(size_t num, size_t size) {
return ret; return ret;
} }
static void *
irealloc_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize,
prof_tctx_t *tctx, hook_ralloc_args_t *hook_args) {
void *p;
if (tctx == NULL) {
return NULL;
}
if (usize <= SC_SMALL_MAXCLASS) {
p = iralloc(tsd, old_ptr, old_usize,
SC_LARGE_MINCLASS, 0, false, hook_args);
if (p == NULL) {
return NULL;
}
arena_prof_promote(tsd_tsdn(tsd), p, usize);
} else {
p = iralloc(tsd, old_ptr, old_usize, usize, 0, false,
hook_args);
}
return p;
}
JEMALLOC_ALWAYS_INLINE void *
irealloc_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize,
alloc_ctx_t *alloc_ctx, hook_ralloc_args_t *hook_args) {
void *p;
bool prof_active;
prof_tctx_t *old_tctx, *tctx;
prof_active = prof_active_get_unlocked();
old_tctx = prof_tctx_get(tsd_tsdn(tsd), old_ptr, alloc_ctx);
tctx = prof_alloc_prep(tsd, usize, prof_active, true);
if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {
p = irealloc_prof_sample(tsd, old_ptr, old_usize, usize, tctx,
hook_args);
} else {
p = iralloc(tsd, old_ptr, old_usize, usize, 0, false,
hook_args);
}
if (unlikely(p == NULL)) {
prof_alloc_rollback(tsd, tctx, true);
return NULL;
}
prof_realloc(tsd, p, usize, tctx, prof_active, true, old_ptr, old_usize,
old_tctx);
return p;
}
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) { ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) {
if (!slow_path) { if (!slow_path) {
...@@ -2565,30 +2873,50 @@ ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) { ...@@ -2565,30 +2873,50 @@ ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) {
assert(ptr != NULL); assert(ptr != NULL);
assert(malloc_initialized() || IS_INITIALIZER); assert(malloc_initialized() || IS_INITIALIZER);
alloc_ctx_t alloc_ctx; emap_alloc_ctx_t alloc_ctx;
rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); emap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,
rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, &alloc_ctx);
(uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab);
assert(alloc_ctx.szind != SC_NSIZES); assert(alloc_ctx.szind != SC_NSIZES);
size_t usize; size_t usize = sz_index2size(alloc_ctx.szind);
if (config_prof && opt_prof) { if (config_prof && opt_prof) {
usize = sz_index2size(alloc_ctx.szind);
prof_free(tsd, ptr, usize, &alloc_ctx); prof_free(tsd, ptr, usize, &alloc_ctx);
} else if (config_stats) {
usize = sz_index2size(alloc_ctx.szind);
}
if (config_stats) {
*tsd_thread_deallocatedp_get(tsd) += usize;
} }
if (likely(!slow_path)) { if (likely(!slow_path)) {
idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false, idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false,
false); false);
} else { } else {
if (config_fill && slow_path && opt_junk_free) {
junk_free_callback(ptr, usize);
}
idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false, idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false,
true); true);
} }
thread_dalloc_event(tsd, usize);
}
JEMALLOC_ALWAYS_INLINE bool
maybe_check_alloc_ctx(tsd_t *tsd, void *ptr, emap_alloc_ctx_t *alloc_ctx) {
if (config_opt_size_checks) {
emap_alloc_ctx_t dbg_ctx;
emap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,
&dbg_ctx);
if (alloc_ctx->szind != dbg_ctx.szind) {
safety_check_fail_sized_dealloc(
/* current_dealloc */ true, ptr,
/* true_size */ sz_size2index(dbg_ctx.szind),
/* input_size */ sz_size2index(alloc_ctx->szind));
return true;
}
if (alloc_ctx->slab != dbg_ctx.slab) {
safety_check_fail(
"Internal heap corruption detected: "
"mismatch in slab bit");
return true;
}
}
return false;
} }
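/*
 * Editor's note (illustration only, not part of this commit): when the build
 * enables the opt-size-checks feature, this is the check that turns a
 * mis-sized deallocation such as
 *
 *	void *p = malloc(100);
 *	sdallocx(p, 200, 0);	/- size does not match the allocation -/
 *
 * into a safety_check_fail_sized_dealloc() report instead of silent heap
 * corruption (such a call is undefined behavior per the sdallocx contract
 * regardless).
 */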
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
...@@ -2604,166 +2932,63 @@ isfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache, bool slow_path) { ...@@ -2604,166 +2932,63 @@ isfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache, bool slow_path) {
assert(ptr != NULL); assert(ptr != NULL);
assert(malloc_initialized() || IS_INITIALIZER); assert(malloc_initialized() || IS_INITIALIZER);
alloc_ctx_t alloc_ctx, *ctx; emap_alloc_ctx_t alloc_ctx;
if (!config_cache_oblivious && ((uintptr_t)ptr & PAGE_MASK) != 0) { if (!config_prof) {
alloc_ctx.szind = sz_size2index(usize);
alloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);
} else {
		if (likely(!prof_sample_aligned(ptr))) {
			/*
			 * When the ptr is not page aligned, it was not sampled.
			 * usize can be trusted to determine szind and slab.
			 */
			alloc_ctx.szind = sz_size2index(usize);
			alloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);
		} else if (opt_prof) {
			emap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global,
			    ptr, &alloc_ctx);

			if (config_opt_safety_checks) {
				/* Small alloc may have !slab (sampled). */
				if (unlikely(alloc_ctx.szind !=
				    sz_size2index(usize))) {
					safety_check_fail_sized_dealloc(
					    /* current_dealloc */ true, ptr,
					    /* true_size */ sz_index2size(
					    alloc_ctx.szind),
					    /* input_size */ usize);
				}
			}
		} else {
			alloc_ctx.szind = sz_size2index(usize);
			alloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);
		}
	}

	bool fail = maybe_check_alloc_ctx(tsd, ptr, &alloc_ctx);
	if (fail) {
		/*
		 * This is a heap corruption bug.  In real life we'll crash; for
		 * the unit test we just want to avoid breaking anything too
		 * badly to get a test result out.  Let's leak instead of trying
		 * to free.
		 */
		return;
	}

	if (config_prof && opt_prof) {
		prof_free(tsd, ptr, usize, &alloc_ctx);
	}
	if (likely(!slow_path)) {
		isdalloct(tsd_tsdn(tsd), ptr, usize, tcache, &alloc_ctx,
		    false);
	} else {
		if (config_fill && slow_path && opt_junk_free) {
			junk_free_callback(ptr, usize);
		}
		isdalloct(tsd_tsdn(tsd), ptr, usize, tcache, &alloc_ctx,
		    true);
	}
	thread_dalloc_event(tsd, usize);
}
JEMALLOC_NOINLINE
@@ -2782,79 +3007,149 @@ free_default(void *ptr) {
		tsd_t *tsd = tsd_fetch_min();
		check_entry_exit_locking(tsd_tsdn(tsd));

		if (likely(tsd_fast(tsd))) {
			tcache_t *tcache = tcache_get_from_ind(tsd,
			    TCACHE_IND_AUTOMATIC, /* slow */ false,
			    /* is_alloc */ false);
			ifree(tsd, ptr, tcache, /* slow */ false);
		} else {
			tcache_t *tcache = tcache_get_from_ind(tsd,
			    TCACHE_IND_AUTOMATIC, /* slow */ true,
			    /* is_alloc */ false);
			uintptr_t args_raw[3] = {(uintptr_t)ptr};
			hook_invoke_dalloc(hook_dalloc_free, ptr, args_raw);
			ifree(tsd, ptr, tcache, /* slow */ true);
		}
		check_entry_exit_locking(tsd_tsdn(tsd));
	}
}
JEMALLOC_ALWAYS_INLINE bool
free_fastpath_nonfast_aligned(void *ptr, bool check_prof) {
/*
* free_fastpath do not handle two uncommon cases: 1) sampled profiled
* objects and 2) sampled junk & stash for use-after-free detection.
* Both have special alignments which are used to escape the fastpath.
*
* prof_sample is page-aligned, which covers the UAF check when both
* are enabled (the assertion below). Avoiding redundant checks since
* this is on the fastpath -- at most one runtime branch from this.
*/
if (config_debug && cache_bin_nonfast_aligned(ptr)) {
assert(prof_sample_aligned(ptr));
}
if (config_prof && check_prof) {
/* When prof is enabled, the prof_sample alignment is enough. */
if (prof_sample_aligned(ptr)) {
return true;
} else {
return false;
}
}
if (config_uaf_detection) {
if (cache_bin_nonfast_aligned(ptr)) {
return true;
} else {
return false;
}
}
return false;
}
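/*
 * Editor's illustrative sketch, not part of the diff: the escape hatch above
 * is nothing more than cheap pointer-mask tests on "special" alignments.  The
 * helper and the 4 KiB / 16-byte constants below are hypothetical stand-ins
 * for prof_sample_aligned() and cache_bin_nonfast_aligned(), whose real
 * definitions live elsewhere in jemalloc.
 */
#include <stdbool.h>
#include <stdint.h>

static bool
example_special_aligned(const void *ptr, uintptr_t alignment) {
	/* True when ptr sits on an alignment-byte boundary (or is NULL). */
	return ((uintptr_t)ptr & (alignment - 1)) == 0;
}
/*
 * example_special_aligned(p, 4096) would play the role of the page-aligned
 * prof-sample check; a smaller power of two (e.g. 16) would stand in for the
 * use-after-free stash alignment.
 */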
/* Returns whether or not the free attempt was successful. */
JEMALLOC_ALWAYS_INLINE
bool free_fastpath(void *ptr, size_t size, bool size_hint) {
	tsd_t *tsd = tsd_get(false);
	/* The branch gets optimized away unless tsd_get_allocates(). */
	if (unlikely(tsd == NULL)) {
		return false;
	}
	/*
	 *  The tsd_fast() / initialized checks are folded into the branch
	 *  testing (deallocated_after >= threshold) later in this function.
	 *  The threshold will be set to 0 when !tsd_fast.
	 */
	assert(tsd_fast(tsd) ||
	    *tsd_thread_deallocated_next_event_fastp_get_unsafe(tsd) == 0);

	emap_alloc_ctx_t alloc_ctx;
	if (!size_hint) {
		bool err = emap_alloc_ctx_try_lookup_fast(tsd,
		    &arena_emap_global, ptr, &alloc_ctx);

		/* Note: profiled objects will have alloc_ctx.slab set */
		if (unlikely(err || !alloc_ctx.slab ||
		    free_fastpath_nonfast_aligned(ptr,
		    /* check_prof */ false))) {
			return false;
		}
		assert(alloc_ctx.szind != SC_NSIZES);
	} else {
		/*
		 * Check for both sizes that are too large, and for sampled /
		 * special aligned objects.  The alignment check will also check
		 * for null ptr.
		 */
		if (unlikely(size > SC_LOOKUP_MAXCLASS ||
		    free_fastpath_nonfast_aligned(ptr,
		    /* check_prof */ true))) {
			return false;
		}
		alloc_ctx.szind = sz_size2index_lookup(size);
		/* Max lookup class must be small. */
		assert(alloc_ctx.szind < SC_NBINS);
		/* This is a dead store, except when opt size checking is on. */
		alloc_ctx.slab = true;
	}
	/*
	 * Currently the fastpath only handles small sizes.  The branch on
	 * SC_LOOKUP_MAXCLASS makes sure of it.  This lets us avoid checking
	 * tcache szind upper limit (i.e. tcache_maxclass) as well.
	 */
	assert(alloc_ctx.slab);

	uint64_t deallocated, threshold;
	te_free_fastpath_ctx(tsd, &deallocated, &threshold);

	size_t usize = sz_index2size(alloc_ctx.szind);
	uint64_t deallocated_after = deallocated + usize;
	/*
	 * Check for events and tsd non-nominal (fast_threshold will be set to
	 * 0) in a single branch.  Note that this handles the uninitialized case
	 * as well (TSD init will be triggered on the non-fastpath).  Therefore
	 * anything depends on a functional TSD (e.g. the alloc_ctx sanity check
	 * below) needs to be after this branch.
	 */
	if (unlikely(deallocated_after >= threshold)) {
		return false;
	}
	assert(tsd_fast(tsd));
	bool fail = maybe_check_alloc_ctx(tsd, ptr, &alloc_ctx);
	if (fail) {
		/* See the comment in isfree. */
		return true;
	}

	tcache_t *tcache = tcache_get_from_ind(tsd, TCACHE_IND_AUTOMATIC,
	    /* slow */ false, /* is_alloc */ false);
	cache_bin_t *bin = &tcache->bins[alloc_ctx.szind];

	/*
	 * If junking were enabled, this is where we would do it.  It's not
	 * though, since we ensured above that we're on the fast path.  Assert
	 * that to double-check.
	 */
	assert(!opt_junk_free);

	if (!cache_bin_dalloc_easy(bin, ptr)) {
		return false;
	}

	*tsd_thread_deallocatedp_get(tsd) = deallocated_after;

	return true;
}
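/*
 * Editor's illustrative sketch, not part of the diff: folding the "is the
 * thread state fast?" check into the event-threshold comparison works because
 * the slow state simply publishes a threshold of 0, so any deallocation trips
 * the single branch.  All names below are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
	uint64_t deallocated;	/* running byte counter for this thread */
	uint64_t threshold;	/* 0 whenever the thread must go slow */
} example_thread_state_t;

static bool
example_can_stay_on_fastpath(const example_thread_state_t *ts, uint64_t usize) {
	/* One branch covers both "event is due" and "slow state" cases. */
	return ts->deallocated + usize < ts->threshold;
}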
@@ -2965,6 +3260,8 @@ je_valloc(size_t size) {
 * passed an extra argument for the caller return address, which will be
 * ignored.
 */
#include <features.h> // defines __GLIBC__ if we are compiling against glibc
JEMALLOC_EXPORT void (*__free_hook)(void *ptr) = je_free;
JEMALLOC_EXPORT void *(*__malloc_hook)(size_t size) = je_malloc;
JEMALLOC_EXPORT void *(*__realloc_hook)(void *ptr, size_t size) = je_realloc;
@@ -2973,7 +3270,7 @@ JEMALLOC_EXPORT void *(*__memalign_hook)(size_t alignment, size_t size) =
    je_memalign;
# endif

# ifdef __GLIBC__
/*
 * To enable static linking with glibc, the libc specific malloc interface must
 * be implemented also, so none of glibc's malloc.o functions are added to the
@@ -3016,6 +3313,26 @@ int __posix_memalign(void** r, size_t a, size_t s) PREALIAS(je_posix_memalign);
 * Begin non-standard functions.
 */
JEMALLOC_ALWAYS_INLINE unsigned
mallocx_tcache_get(int flags) {
if (likely((flags & MALLOCX_TCACHE_MASK) == 0)) {
return TCACHE_IND_AUTOMATIC;
} else if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) {
return TCACHE_IND_NONE;
} else {
return MALLOCX_TCACHE_GET(flags);
}
}
JEMALLOC_ALWAYS_INLINE unsigned
mallocx_arena_get(int flags) {
if (unlikely((flags & MALLOCX_ARENA_MASK) != 0)) {
return MALLOCX_ARENA_GET(flags);
} else {
return ARENA_IND_AUTOMATIC;
}
}
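/*
 * Usage sketch (editor's addition, not part of the diff): the two helpers
 * above only decode the public MALLOCX_* flag encodings.  Application code
 * builds the same flags and hands them to the *allocx() entry points from
 * <jemalloc/jemalloc.h>:
 */
#include <jemalloc/jemalloc.h>

static void *
example_aligned_zeroed_alloc(size_t size) {
	/* 64-byte aligned, zero-filled, bypassing the thread cache. */
	return mallocx(size, MALLOCX_ALIGN(64) | MALLOCX_ZERO
	    | MALLOCX_TCACHE_NONE);
}
/* The matching release would be dallocx(p, MALLOCX_TCACHE_NONE). */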
#ifdef JEMALLOC_EXPERIMENTAL_SMALLOCX_API
#define JEMALLOC_SMALLOCX_CONCAT_HELPER(x, y) x ## y
@@ -3060,25 +3377,10 @@ JEMALLOC_SMALLOCX_CONCAT_HELPER2(je_smallocx_, JEMALLOC_VERSION_GID_IDENT)
	dopts.num_items = 1;
	dopts.item_size = size;
	if (unlikely(flags != 0)) {
		dopts.alignment = MALLOCX_ALIGN_GET(flags);
		dopts.zero = MALLOCX_ZERO_GET(flags);
		dopts.tcache_ind = mallocx_tcache_get(flags);
		dopts.arena_ind = mallocx_arena_get(flags);
	}

	imalloc(&sopts, &dopts);
@@ -3113,25 +3415,10 @@ je_mallocx(size_t size, int flags) {
	dopts.num_items = 1;
	dopts.item_size = size;
	if (unlikely(flags != 0)) {
		dopts.alignment = MALLOCX_ALIGN_GET(flags);
		dopts.zero = MALLOCX_ZERO_GET(flags);
		dopts.tcache_ind = mallocx_tcache_get(flags);
		dopts.arena_ind = mallocx_arena_get(flags);
	}

	imalloc(&sopts, &dopts);
@@ -3154,6 +3441,8 @@ irallocx_prof_sample(tsdn_t *tsdn, void *old_ptr, size_t old_usize,
	if (tctx == NULL) {
		return NULL;
	}

	alignment = prof_sample_align(alignment);
	if (usize <= SC_SMALL_MAXCLASS) {
		p = iralloct(tsdn, old_ptr, old_usize,
		    SC_LARGE_MINCLASS, alignment, zero, tcache,
@@ -3166,66 +3455,48 @@ irallocx_prof_sample(tsdn_t *tsdn, void *old_ptr, size_t old_usize,
		p = iralloct(tsdn, old_ptr, old_usize, usize, alignment, zero,
		    tcache, arena, hook_args);
	}
	assert(prof_sample_aligned(p));

	return p;
}
JEMALLOC_ALWAYS_INLINE void *
irallocx_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t size,
    size_t alignment, size_t usize, bool zero, tcache_t *tcache,
    arena_t *arena, emap_alloc_ctx_t *alloc_ctx,
    hook_ralloc_args_t *hook_args) {
	prof_info_t old_prof_info;
	prof_info_get_and_reset_recent(tsd, old_ptr, alloc_ctx, &old_prof_info);
	bool prof_active = prof_active_get_unlocked();
	bool sample_event = te_prof_sample_event_lookahead(tsd, usize);
	prof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active, sample_event);
	void *p;
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {
		p = irallocx_prof_sample(tsd_tsdn(tsd), old_ptr, old_usize,
		    usize, alignment, zero, tcache, arena, tctx, hook_args);
	} else {
		p = iralloct(tsd_tsdn(tsd), old_ptr, old_usize, size, alignment,
		    zero, tcache, arena, hook_args);
	}
	if (unlikely(p == NULL)) {
		prof_alloc_rollback(tsd, tctx);
		return NULL;
	}
	assert(usize == isalloc(tsd_tsdn(tsd), p));
	prof_realloc(tsd, p, size, usize, tctx, prof_active, old_ptr,
	    old_usize, &old_prof_info, sample_event);

	return p;
}
static void *
do_rallocx(void *ptr, size_t size, int flags, bool is_realloc) {
	void *p;
	tsd_t *tsd;
	size_t usize;
	size_t old_usize;
	size_t alignment = MALLOCX_ALIGN_GET(flags);
	arena_t *arena;

	assert(ptr != NULL);
	assert(size != 0);
@@ -3233,44 +3504,31 @@ je_rallocx(void *ptr, size_t size, int flags) {
	tsd = tsd_fetch();
	check_entry_exit_locking(tsd_tsdn(tsd));

	bool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);

	unsigned arena_ind = mallocx_arena_get(flags);
	if (arena_get_from_ind(tsd, arena_ind, &arena)) {
		goto label_oom;
	}

	unsigned tcache_ind = mallocx_tcache_get(flags);
	tcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind,
	    /* slow */ true, /* is_alloc */ true);

	emap_alloc_ctx_t alloc_ctx;
	emap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,
	    &alloc_ctx);
	assert(alloc_ctx.szind != SC_NSIZES);
	old_usize = sz_index2size(alloc_ctx.szind);
	assert(old_usize == isalloc(tsd_tsdn(tsd), ptr));
	if (aligned_usize_get(size, alignment, &usize, NULL, false)) {
		goto label_oom;
	}

	hook_ralloc_args_t hook_args = {is_realloc, {(uintptr_t)ptr, size,
	    flags, 0}};
	if (config_prof && opt_prof) {
		p = irallocx_prof(tsd, ptr, old_usize, size, alignment, usize,
		    zero, tcache, arena, &alloc_ctx, &hook_args);
		if (unlikely(p == NULL)) {
			goto label_oom;
@@ -3281,20 +3539,22 @@ je_rallocx(void *ptr, size_t size, int flags) {
		if (unlikely(p == NULL)) {
			goto label_oom;
		}
		assert(usize == isalloc(tsd_tsdn(tsd), p));
	}
	assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0));
	thread_alloc_event(tsd, usize);
	thread_dalloc_event(tsd, old_usize);

	UTRACE(ptr, size, p);
	check_entry_exit_locking(tsd_tsdn(tsd));

	if (config_fill && unlikely(opt_junk_alloc) && usize > old_usize
	    && !zero) {
		size_t excess_len = usize - old_usize;
		void *excess_start = (void *)((uintptr_t)p + old_usize);
		junk_alloc_callback(excess_start, excess_len);
	}

	return p;
label_oom:
	if (config_xmalloc && unlikely(opt_xmalloc)) {
@@ -3304,10 +3564,103 @@ label_oom:
	UTRACE(ptr, size, 0);
	check_entry_exit_locking(tsd_tsdn(tsd));

	return NULL;
}
JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN
void JEMALLOC_NOTHROW *
JEMALLOC_ALLOC_SIZE(2)
je_rallocx(void *ptr, size_t size, int flags) {
LOG("core.rallocx.entry", "ptr: %p, size: %zu, flags: %d", ptr,
size, flags);
void *ret = do_rallocx(ptr, size, flags, false);
LOG("core.rallocx.exit", "result: %p", ret);
return ret;
}
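/*
 * Usage sketch (editor's addition, not part of the diff): je_rallocx() is the
 * exported rallocx().  Unlike realloc(), it requires a non-NULL pointer and a
 * non-zero size, and alignment must be restated via MALLOCX_* flags on every
 * resize:
 */
#include <jemalloc/jemalloc.h>

static void *
example_grow(void *p, size_t new_size) {
	/* Keep 64-byte alignment across the resize; the object may move. */
	return rallocx(p, new_size, MALLOCX_ALIGN(64));
}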
static void *
do_realloc_nonnull_zero(void *ptr) {
if (config_stats) {
atomic_fetch_add_zu(&zero_realloc_count, 1, ATOMIC_RELAXED);
}
if (opt_zero_realloc_action == zero_realloc_action_alloc) {
/*
* The user might have gotten an alloc setting while expecting a
* free setting. If that's the case, we at least try to
* reduce the harm, and turn off the tcache while allocating, so
* that we'll get a true first fit.
*/
return do_rallocx(ptr, 1, MALLOCX_TCACHE_NONE, true);
} else if (opt_zero_realloc_action == zero_realloc_action_free) {
UTRACE(ptr, 0, 0);
tsd_t *tsd = tsd_fetch();
check_entry_exit_locking(tsd_tsdn(tsd));
tcache_t *tcache = tcache_get_from_ind(tsd,
TCACHE_IND_AUTOMATIC, /* slow */ true,
/* is_alloc */ false);
uintptr_t args[3] = {(uintptr_t)ptr, 0};
hook_invoke_dalloc(hook_dalloc_realloc, ptr, args);
ifree(tsd, ptr, tcache, true);
check_entry_exit_locking(tsd_tsdn(tsd));
return NULL;
} else {
safety_check_fail("Called realloc(non-null-ptr, 0) with "
"zero_realloc:abort set\n");
/* In real code, this will never run; the safety check failure
* will call abort. In the unit test, we just want to bail out
* without corrupting internal state that the test needs to
* finish.
*/
return NULL;
}
}
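/*
 * Usage sketch (editor's addition, not part of the diff): which branch above
 * runs is selected by the opt.zero_realloc option ("alloc", "free" or
 * "abort").  Assuming an unprefixed build that honors the application-provided
 * malloc_conf symbol, it can be set at startup like this:
 */
#include <jemalloc/jemalloc.h>

/* Make realloc(ptr, 0) behave like free(ptr), matching the system allocator. */
const char *malloc_conf = "zero_realloc:free";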
JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN
void JEMALLOC_NOTHROW *
JEMALLOC_ALLOC_SIZE(2)
je_realloc(void *ptr, size_t size) {
LOG("core.realloc.entry", "ptr: %p, size: %zu\n", ptr, size);
if (likely(ptr != NULL && size != 0)) {
void *ret = do_rallocx(ptr, size, 0, true);
LOG("core.realloc.exit", "result: %p", ret);
return ret;
} else if (ptr != NULL && size == 0) {
void *ret = do_realloc_nonnull_zero(ptr);
LOG("core.realloc.exit", "result: %p", ret);
return ret;
} else {
/* realloc(NULL, size) is equivalent to malloc(size). */
void *ret;
static_opts_t sopts;
dynamic_opts_t dopts;
static_opts_init(&sopts);
dynamic_opts_init(&dopts);
sopts.null_out_result_on_error = true;
sopts.set_errno_on_error = true;
sopts.oom_string =
"<jemalloc>: Error in realloc(): out of memory\n";
dopts.result = &ret;
dopts.num_items = 1;
dopts.item_size = size;
imalloc(&sopts, &dopts);
if (sopts.slow) {
uintptr_t args[3] = {(uintptr_t)ptr, size};
hook_invoke_alloc(hook_alloc_realloc, ret,
(uintptr_t)ret, args);
}
LOG("core.realloc.exit", "result: %p", ret);
return ret;
}
}
JEMALLOC_ALWAYS_INLINE size_t
ixallocx_helper(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size,
    size_t extra, size_t alignment, bool zero) {
@@ -3324,51 +3677,46 @@ ixallocx_helper(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size,
static size_t
ixallocx_prof_sample(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size,
    size_t extra, size_t alignment, bool zero, prof_tctx_t *tctx) {
	/* Sampled allocation needs to be page aligned. */
	if (tctx == NULL || !prof_sample_aligned(ptr)) {
		return old_usize;
	}

	return ixallocx_helper(tsdn, ptr, old_usize, size, extra, alignment,
	    zero);
}

JEMALLOC_ALWAYS_INLINE size_t
ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size,
    size_t extra, size_t alignment, bool zero, emap_alloc_ctx_t *alloc_ctx) {
	/*
	 * old_prof_info is only used for asserting that the profiling info
	 * isn't changed by the ixalloc() call.
	 */
	prof_info_t old_prof_info;
	prof_info_get(tsd, ptr, alloc_ctx, &old_prof_info);

	/*
	 * usize isn't knowable before ixalloc() returns when extra is non-zero.
	 * Therefore, compute its maximum possible value and use that in
	 * prof_alloc_prep() to decide whether to capture a backtrace.
	 * prof_realloc() will use the actual usize to decide whether to sample.
	 */
	size_t usize_max;
	if (aligned_usize_get(size + extra, alignment, &usize_max, NULL,
	    false)) {
		/*
		 * usize_max is out of range, and chances are that allocation
		 * will fail, but use the maximum possible value and carry on
		 * with prof_alloc_prep(), just in case allocation succeeds.
		 */
		usize_max = SC_LARGE_MAXCLASS;
	}
	bool prof_active = prof_active_get_unlocked();
	bool sample_event = te_prof_sample_event_lookahead(tsd, usize_max);
	prof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active, sample_event);

	size_t usize;
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {
		usize = ixallocx_prof_sample(tsd_tsdn(tsd), ptr, old_usize,
		    size, extra, alignment, zero, tctx);
@@ -3376,13 +3724,28 @@ ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size,
		usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size,
		    extra, alignment, zero);
	}

	/*
	 * At this point we can still safely get the original profiling
	 * information associated with the ptr, because (a) the edata_t object
	 * associated with the ptr still lives and (b) the profiling info
	 * fields are not touched.  "(a)" is asserted in the outer je_xallocx()
	 * function, and "(b)" is indirectly verified below by checking that
	 * the alloc_tctx field is unchanged.
	 */
	prof_info_t prof_info;
	if (usize == old_usize) {
		prof_info_get(tsd, ptr, alloc_ctx, &prof_info);
		prof_alloc_rollback(tsd, tctx);
	} else {
		prof_info_get_and_reset_recent(tsd, ptr, alloc_ctx, &prof_info);
		assert(usize <= usize_max);
		sample_event = te_prof_sample_event_lookahead(tsd, usize);
		prof_realloc(tsd, ptr, size, usize, tctx, prof_active, ptr,
		    old_usize, &prof_info, sample_event);
	}

	assert(old_prof_info.alloc_tctx == prof_info.alloc_tctx);
	return usize;
}
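/*
 * Usage sketch (editor's addition, not part of the diff): the profiling
 * wrapper above backs je_xallocx(), the in-place-only resize.  xallocx()
 * never moves the object; it returns the resulting usable size, which may be
 * unchanged if the extent could not be grown enough:
 */
#include <jemalloc/jemalloc.h>

static int
example_try_grow_in_place(void *p, size_t want) {
	size_t got = xallocx(p, want, 0, 0);
	return got >= want;	/* 1 on success, 0 if p was not resized enough */
}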
@@ -3391,7 +3754,7 @@ je_xallocx(void *ptr, size_t size, size_t extra, int flags) {
	tsd_t *tsd;
	size_t usize, old_usize;
	size_t alignment = MALLOCX_ALIGN_GET(flags);
	bool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);

	LOG("core.xallocx.entry", "ptr: %p, size: %zu, extra: %zu, "
	    "flags: %d", ptr, size, extra, flags);
@@ -3403,10 +3766,17 @@ je_xallocx(void *ptr, size_t size, size_t extra, int flags) {
	tsd = tsd_fetch();
	check_entry_exit_locking(tsd_tsdn(tsd));

	/*
	 * old_edata is only for verifying that xallocx() keeps the edata_t
	 * object associated with the ptr (though the content of the edata_t
	 * object can be changed).
	 */
	edata_t *old_edata = emap_edata_lookup(tsd_tsdn(tsd),
	    &arena_emap_global, ptr);

	emap_alloc_ctx_t alloc_ctx;
	emap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,
	    &alloc_ctx);
	assert(alloc_ctx.szind != SC_NSIZES);
	old_usize = sz_index2size(alloc_ctx.szind);
	assert(old_usize == isalloc(tsd_tsdn(tsd), ptr));
@@ -3434,13 +3804,25 @@ je_xallocx(void *ptr, size_t size, size_t extra, int flags) {
		usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size,
		    extra, alignment, zero);
	}

	/*
	 * xallocx() should keep using the same edata_t object (though its
	 * content can be changed).
	 */
	assert(emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr)
	    == old_edata);

	if (unlikely(usize == old_usize)) {
		goto label_not_resized;
	}
	thread_alloc_event(tsd, usize);
	thread_dalloc_event(tsd, old_usize);

	if (config_fill && unlikely(opt_junk_alloc) && usize > old_usize &&
	    !zero) {
		size_t excess_len = usize - old_usize;
		void *excess_start = (void *)((uintptr_t)ptr + old_usize);
		junk_alloc_callback(excess_start, excess_len);
	}
label_not_resized:
	if (unlikely(!tsd_fast(tsd))) {
@@ -3490,31 +3872,13 @@ je_dallocx(void *ptr, int flags) {
	assert(ptr != NULL);
	assert(malloc_initialized() || IS_INITIALIZER);

	tsd_t *tsd = tsd_fetch_min();
	bool fast = tsd_fast(tsd);
	check_entry_exit_locking(tsd_tsdn(tsd));

	unsigned tcache_ind = mallocx_tcache_get(flags);
	tcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind, !fast,
	    /* is_alloc */ false);

	UTRACE(ptr, 0, 0);
	if (likely(fast)) {
@@ -3533,13 +3897,9 @@ je_dallocx(void *ptr, int flags) {
JEMALLOC_ALWAYS_INLINE size_t
inallocx(tsdn_t *tsdn, size_t size, int flags) {
	check_entry_exit_locking(tsdn);
	size_t usize;
	/* In case of out of range, let the user see it rather than fail. */
	aligned_usize_get(size, MALLOCX_ALIGN_GET(flags), &usize, NULL, false);
	check_entry_exit_locking(tsdn);
	return usize;
}
@@ -3549,33 +3909,14 @@ sdallocx_default(void *ptr, size_t size, int flags) {
	assert(ptr != NULL);
	assert(malloc_initialized() || IS_INITIALIZER);

	tsd_t *tsd = tsd_fetch_min();
	bool fast = tsd_fast(tsd);
	size_t usize = inallocx(tsd_tsdn(tsd), size, flags);
	check_entry_exit_locking(tsd_tsdn(tsd));

	unsigned tcache_ind = mallocx_tcache_get(flags);
	tcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind, !fast,
	    /* is_alloc */ false);

	UTRACE(ptr, 0, 0);
	if (likely(fast)) {
@@ -3587,7 +3928,6 @@ sdallocx_default(void *ptr, size_t size, int flags) {
		isfree(tsd, ptr, usize, tcache, true);
	}
	check_entry_exit_locking(tsd_tsdn(tsd));
}
JEMALLOC_EXPORT void JEMALLOC_NOTHROW
@@ -3595,7 +3935,7 @@ je_sdallocx(void *ptr, size_t size, int flags) {
	LOG("core.sdallocx.entry", "ptr: %p, size: %zu, flags: %d", ptr,
	    size, flags);

	if (flags != 0 || !free_fastpath(ptr, size, true)) {
		sdallocx_default(ptr, size, flags);
	}
@@ -3704,6 +4044,7 @@ je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
	return ret;
}
#define STATS_PRINT_BUFSIZE 65536
JEMALLOC_EXPORT void JEMALLOC_NOTHROW
je_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque,
    const char *opts) {
@@ -3713,23 +4054,30 @@ je_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque,
	tsdn = tsdn_fetch();
	check_entry_exit_locking(tsdn);

	if (config_debug) {
		stats_print(write_cb, cbopaque, opts);
	} else {
		buf_writer_t buf_writer;
		buf_writer_init(tsdn, &buf_writer, write_cb, cbopaque, NULL,
		    STATS_PRINT_BUFSIZE);
		stats_print(buf_writer_cb, &buf_writer, opts);
		buf_writer_terminate(tsdn, &buf_writer);
	}

	check_entry_exit_locking(tsdn);
	LOG("core.malloc_stats_print.exit", "");
}
#undef STATS_PRINT_BUFSIZE
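/*
 * Usage sketch (editor's addition, not part of the diff): the buffered writer
 * above only changes how the report is flushed; the public interface is
 * unchanged.  A caller can pass NULL to write to stderr, or supply its own
 * callback:
 */
#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
example_write_cb(void *opaque, const char *msg) {
	fputs(msg, (FILE *)opaque);
}

static void
example_dump_stats(FILE *f) {
	/* "J" selects JSON output; NULL opts gives the human-readable form. */
	malloc_stats_print(example_write_cb, f, "J");
}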
JEMALLOC_ALWAYS_INLINE size_t
je_malloc_usable_size_impl(JEMALLOC_USABLE_SIZE_CONST void *ptr) {
	assert(malloc_initialized() || IS_INITIALIZER);

	tsdn_t *tsdn = tsdn_fetch();
	check_entry_exit_locking(tsdn);

	size_t ret;
	if (unlikely(ptr == NULL)) {
		ret = 0;
	} else {
@@ -3740,12 +4088,211 @@ je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) {
			ret = isalloc(tsdn, ptr);
		}
	}
	check_entry_exit_locking(tsdn);

	return ret;
}

JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW
je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) {
	LOG("core.malloc_usable_size.entry", "ptr: %p", ptr);

	size_t ret = je_malloc_usable_size_impl(ptr);

	LOG("core.malloc_usable_size.exit", "result: %zu", ret);
	return ret;
}
#ifdef JEMALLOC_HAVE_MALLOC_SIZE
JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW
je_malloc_size(const void *ptr) {
LOG("core.malloc_size.entry", "ptr: %p", ptr);
size_t ret = je_malloc_usable_size_impl(ptr);
LOG("core.malloc_size.exit", "result: %zu", ret);
return ret;
}
#endif
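/*
 * Usage sketch (editor's addition, not part of the diff): both entry points
 * above report the usable size of a live allocation, which can exceed the
 * requested size because of size-class rounding.  Assuming an unprefixed
 * jemalloc build where <jemalloc/jemalloc.h> declares malloc_usable_size():
 */
#include <stdio.h>
#include <stdlib.h>
#include <jemalloc/jemalloc.h>

int
main(void) {
	void *p = malloc(100);
	if (p != NULL) {
		/* Prints the size class actually backing p (e.g. 112). */
		printf("usable: %zu\n", malloc_usable_size(p));
		free(p);
	}
	return 0;
}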
static void
batch_alloc_prof_sample_assert(tsd_t *tsd, size_t batch, size_t usize) {
assert(config_prof && opt_prof);
bool prof_sample_event = te_prof_sample_event_lookahead(tsd,
batch * usize);
assert(!prof_sample_event);
size_t surplus;
prof_sample_event = te_prof_sample_event_lookahead_surplus(tsd,
(batch + 1) * usize, &surplus);
assert(prof_sample_event);
assert(surplus < usize);
}
size_t
batch_alloc(void **ptrs, size_t num, size_t size, int flags) {
LOG("core.batch_alloc.entry",
"ptrs: %p, num: %zu, size: %zu, flags: %d", ptrs, num, size, flags);
tsd_t *tsd = tsd_fetch();
check_entry_exit_locking(tsd_tsdn(tsd));
size_t filled = 0;
if (unlikely(tsd == NULL || tsd_reentrancy_level_get(tsd) > 0)) {
goto label_done;
}
size_t alignment = MALLOCX_ALIGN_GET(flags);
size_t usize;
if (aligned_usize_get(size, alignment, &usize, NULL, false)) {
goto label_done;
}
szind_t ind = sz_size2index(usize);
bool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);
/*
* The cache bin and arena will be lazily initialized; it's hard to
* know in advance whether each of them needs to be initialized.
*/
cache_bin_t *bin = NULL;
arena_t *arena = NULL;
size_t nregs = 0;
if (likely(ind < SC_NBINS)) {
nregs = bin_infos[ind].nregs;
assert(nregs > 0);
}
while (filled < num) {
size_t batch = num - filled;
size_t surplus = SIZE_MAX; /* Dead store. */
bool prof_sample_event = config_prof && opt_prof
&& prof_active_get_unlocked()
&& te_prof_sample_event_lookahead_surplus(tsd,
batch * usize, &surplus);
if (prof_sample_event) {
/*
* Adjust so that the batch does not trigger prof
* sampling.
*/
batch -= surplus / usize + 1;
batch_alloc_prof_sample_assert(tsd, batch, usize);
}
size_t progress = 0;
if (likely(ind < SC_NBINS) && batch >= nregs) {
if (arena == NULL) {
unsigned arena_ind = mallocx_arena_get(flags);
if (arena_get_from_ind(tsd, arena_ind,
&arena)) {
goto label_done;
}
if (arena == NULL) {
arena = arena_choose(tsd, NULL);
}
if (unlikely(arena == NULL)) {
goto label_done;
}
}
size_t arena_batch = batch - batch % nregs;
size_t n = arena_fill_small_fresh(tsd_tsdn(tsd), arena,
ind, ptrs + filled, arena_batch, zero);
progress += n;
filled += n;
}
if (likely(ind < nhbins) && progress < batch) {
if (bin == NULL) {
unsigned tcache_ind = mallocx_tcache_get(flags);
tcache_t *tcache = tcache_get_from_ind(tsd,
tcache_ind, /* slow */ true,
/* is_alloc */ true);
if (tcache != NULL) {
bin = &tcache->bins[ind];
}
}
/*
* If we don't have a tcache bin, we don't want to
* immediately give up, because there's the possibility
* that the user explicitly requested to bypass the
* tcache, or that the user explicitly turned off the
* tcache; in such cases, we go through the slow path,
* i.e. the mallocx() call at the end of the while loop.
*/
if (bin != NULL) {
size_t bin_batch = batch - progress;
/*
* n can be less than bin_batch, meaning that
* the cache bin does not have enough memory.
* In such cases, we rely on the slow path,
* i.e. the mallocx() call at the end of the
* while loop, to fill in the cache, and in the
* next iteration of the while loop, the tcache
* will contain a lot of memory, and we can
* harvest them here. Compared to the
* alternative approach where we directly go to
* the arena bins here, the overhead of our
* current approach should usually be minimal,
* since we never try to fetch more memory than
* what a slab contains via the tcache. An
* additional benefit is that the tcache will
* not be empty for the next allocation request.
*/
size_t n = cache_bin_alloc_batch(bin, bin_batch,
ptrs + filled);
if (config_stats) {
bin->tstats.nrequests += n;
}
if (zero) {
for (size_t i = 0; i < n; ++i) {
memset(ptrs[filled + i], 0,
usize);
}
}
if (config_prof && opt_prof
&& unlikely(ind >= SC_NBINS)) {
for (size_t i = 0; i < n; ++i) {
prof_tctx_reset_sampled(tsd,
ptrs[filled + i]);
}
}
progress += n;
filled += n;
}
}
/*
* For thread events other than prof sampling, trigger them as
* if there's a single allocation of size (n * usize). This is
* fine because:
* (a) these events do not alter the allocation itself, and
* (b) it's possible that some event would have been triggered
* multiple times, instead of only once, if the allocations
* were handled individually, but it would do no harm (or
* even be beneficial) to coalesce the triggerings.
*/
thread_alloc_event(tsd, progress * usize);
if (progress < batch || prof_sample_event) {
void *p = je_mallocx(size, flags);
if (p == NULL) { /* OOM */
break;
}
if (progress == batch) {
assert(prof_sampled(tsd, p));
}
ptrs[filled++] = p;
}
}
label_done:
check_entry_exit_locking(tsd_tsdn(tsd));
LOG("core.batch_alloc.exit", "result: %zu", filled);
return filled;
}
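/*
 * Usage sketch (editor's addition, not part of the diff): batch_alloc() fills
 * an array with up to `num` same-sized objects in one call, falling back to
 * mallocx() whenever the cache-bin / arena fast paths run dry.  It is an
 * internal entry point (normally reached through jemalloc's experimental
 * mallctl machinery), so the direct call below is purely illustrative and
 * assumes it is compiled inside this translation unit, where batch_alloc()
 * and MALLOCX_ZERO are already visible:
 */
static size_t
example_fill_pointers(void **ptrs, size_t num) {
	/* Returns how many slots were filled; may be < num on OOM. */
	return batch_alloc(ptrs, num, 64, MALLOCX_ZERO);
}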
/*
 * End non-standard functions.
 */
@@ -3812,7 +4359,7 @@ _malloc_prefork(void)
		background_thread_prefork1(tsd_tsdn(tsd));
	}
	/* Break arena prefork into stages to preserve lock order. */
	for (i = 0; i < 9; i++) {
		for (j = 0; j < narenas; j++) {
			if ((arena = arena_get(tsd_tsdn(tsd), j, false)) !=
			    NULL) {
@@ -3841,12 +4388,17 @@ _malloc_prefork(void)
				case 7:
					arena_prefork7(tsd_tsdn(tsd), arena);
					break;
				case 8:
					arena_prefork8(tsd_tsdn(tsd), arena);
					break;
				default: not_reached();
				}
			}
		}
	}
	prof_prefork1(tsd_tsdn(tsd));
	stats_prefork(tsd_tsdn(tsd));
	tsd_prefork(tsd);
}
@@ -3874,6 +4426,7 @@ _malloc_postfork(void)
	witness_postfork_parent(tsd_witness_tsdp_get(tsd));
	/* Release all mutexes, now that fork() has completed. */
	stats_postfork_parent(tsd_tsdn(tsd));
	for (i = 0, narenas = narenas_total_get(); i < narenas; i++) {
		arena_t *arena;
@@ -3903,6 +4456,7 @@ jemalloc_postfork_child(void) {
	witness_postfork_child(tsd_witness_tsdp_get(tsd));
	/* Release all mutexes, now that fork() has completed. */
	stats_postfork_child(tsd_tsdn(tsd));
	for (i = 0, narenas = narenas_total_get(); i < narenas; i++) {
		arena_t *arena;
@@ -39,9 +39,29 @@ void operator delete(void *ptr, std::size_t size) noexcept;
void operator delete[](void *ptr, std::size_t size) noexcept;
#endif
#if __cpp_aligned_new >= 201606
/* C++17's over-aligned operators. */
void *operator new(std::size_t size, std::align_val_t);
void *operator new(std::size_t size, std::align_val_t, const std::nothrow_t &) noexcept;
void *operator new[](std::size_t size, std::align_val_t);
void *operator new[](std::size_t size, std::align_val_t, const std::nothrow_t &) noexcept;
void operator delete(void* ptr, std::align_val_t) noexcept;
void operator delete(void* ptr, std::align_val_t, const std::nothrow_t &) noexcept;
void operator delete(void* ptr, std::size_t size, std::align_val_t al) noexcept;
void operator delete[](void* ptr, std::align_val_t) noexcept;
void operator delete[](void* ptr, std::align_val_t, const std::nothrow_t &) noexcept;
void operator delete[](void* ptr, std::size_t size, std::align_val_t al) noexcept;
#endif
JEMALLOC_NOINLINE
static void *
handleOOM(std::size_t size, bool nothrow) {
	if (opt_experimental_infallible_new) {
		safety_check_fail("<jemalloc>: Allocation failed and "
		    "opt.experimental_infallible_new is true. Aborting.\n");
		return nullptr;
	}

	void *ptr = nullptr;

	while (ptr == nullptr) {
@@ -71,15 +91,22 @@ handleOOM(std::size_t size, bool nothrow) {
	return ptr;
}
template <bool IsNoExcept>
JEMALLOC_NOINLINE
static void *
fallback_impl(std::size_t size) noexcept(IsNoExcept) {
void *ptr = malloc_default(size);
if (likely(ptr != nullptr)) {
return ptr;
}
return handleOOM(size, IsNoExcept);
}
template <bool IsNoExcept>
JEMALLOC_ALWAYS_INLINE
void *
newImpl(std::size_t size) noexcept(IsNoExcept) {
	return imalloc_fastpath(size, &fallback_impl<IsNoExcept>);
}

void *
@@ -102,6 +129,42 @@ operator new[](std::size_t size, const std::nothrow_t &) noexcept {
	return newImpl<true>(size);
}
#if __cpp_aligned_new >= 201606
template <bool IsNoExcept>
JEMALLOC_ALWAYS_INLINE
void *
alignedNewImpl(std::size_t size, std::align_val_t alignment) noexcept(IsNoExcept) {
void *ptr = je_aligned_alloc(static_cast<std::size_t>(alignment), size);
if (likely(ptr != nullptr)) {
return ptr;
}
return handleOOM(size, IsNoExcept);
}
void *
operator new(std::size_t size, std::align_val_t alignment) {
return alignedNewImpl<false>(size, alignment);
}
void *
operator new[](std::size_t size, std::align_val_t alignment) {
return alignedNewImpl<false>(size, alignment);
}
void *
operator new(std::size_t size, std::align_val_t alignment, const std::nothrow_t &) noexcept {
return alignedNewImpl<true>(size, alignment);
}
void *
operator new[](std::size_t size, std::align_val_t alignment, const std::nothrow_t &) noexcept {
return alignedNewImpl<true>(size, alignment);
}
#endif // __cpp_aligned_new
void
operator delete(void *ptr) noexcept {
	je_free(ptr);
@@ -123,19 +186,69 @@ void operator delete[](void *ptr, const std::nothrow_t &) noexcept {
#if __cpp_sized_deallocation >= 201309

JEMALLOC_ALWAYS_INLINE
void
sizedDeleteImpl(void* ptr, std::size_t size) noexcept {
	if (unlikely(ptr == nullptr)) {
		return;
	}
	je_sdallocx_noflags(ptr, size);
}

void
operator delete(void *ptr, std::size_t size) noexcept {
	sizedDeleteImpl(ptr, size);
}

void
operator delete[](void *ptr, std::size_t size) noexcept {
	sizedDeleteImpl(ptr, size);
}

#endif  // __cpp_sized_deallocation

#if __cpp_aligned_new >= 201606

JEMALLOC_ALWAYS_INLINE
void
alignedSizedDeleteImpl(void* ptr, std::size_t size, std::align_val_t alignment) noexcept {
	if (config_debug) {
		assert(((size_t)alignment & ((size_t)alignment - 1)) == 0);
	}
	if (unlikely(ptr == nullptr)) {
		return;
	}
	je_sdallocx(ptr, size, MALLOCX_ALIGN(alignment));
}

void
operator delete(void* ptr, std::align_val_t) noexcept {
je_free(ptr);
}
void
operator delete[](void* ptr, std::align_val_t) noexcept {
je_free(ptr);
}
void
operator delete(void* ptr, std::align_val_t, const std::nothrow_t&) noexcept {
je_free(ptr);
}
void
operator delete[](void* ptr, std::align_val_t, const std::nothrow_t&) noexcept {
je_free(ptr);
}
void
operator delete(void* ptr, std::size_t size, std::align_val_t alignment) noexcept {
alignedSizedDeleteImpl(ptr, size, alignment);
}
void
operator delete[](void* ptr, std::size_t size, std::align_val_t alignment) noexcept {
alignedSizedDeleteImpl(ptr, size, alignment);
}
#endif // __cpp_aligned_new
#define JEMALLOC_LARGE_C_
#include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/assert.h" #include "jemalloc/internal/assert.h"
#include "jemalloc/internal/emap.h"
#include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/extent_mmap.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/rtree.h" #include "jemalloc/internal/prof_recent.h"
#include "jemalloc/internal/util.h" #include "jemalloc/internal/util.h"
/******************************************************************************/ /******************************************************************************/
...@@ -21,8 +21,7 @@ void * ...@@ -21,8 +21,7 @@ void *
large_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, large_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,
bool zero) { bool zero) {
size_t ausize; size_t ausize;
extent_t *extent; edata_t *edata;
bool is_zeroed;
UNUSED bool idump JEMALLOC_CC_SILENCE_INIT(false); UNUSED bool idump JEMALLOC_CC_SILENCE_INIT(false);
assert(!tsdn_null(tsdn) || arena != NULL); assert(!tsdn_null(tsdn) || arena != NULL);
...@@ -32,163 +31,80 @@ large_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, ...@@ -32,163 +31,80 @@ large_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,
return NULL; return NULL;
} }
if (config_fill && unlikely(opt_zero)) {
zero = true;
}
/*
* Copy zero into is_zeroed and pass the copy when allocating the
* extent, so that it is possible to make correct junk/zero fill
* decisions below, even if is_zeroed ends up true when zero is false.
*/
is_zeroed = zero;
if (likely(!tsdn_null(tsdn))) { if (likely(!tsdn_null(tsdn))) {
arena = arena_choose_maybe_huge(tsdn_tsd(tsdn), arena, usize); arena = arena_choose_maybe_huge(tsdn_tsd(tsdn), arena, usize);
} }
if (unlikely(arena == NULL) || (extent = arena_extent_alloc_large(tsdn, if (unlikely(arena == NULL) || (edata = arena_extent_alloc_large(tsdn,
arena, usize, alignment, &is_zeroed)) == NULL) { arena, usize, alignment, zero)) == NULL) {
return NULL; return NULL;
} }
/* See comments in arena_bin_slabs_full_insert(). */ /* See comments in arena_bin_slabs_full_insert(). */
if (!arena_is_auto(arena)) { if (!arena_is_auto(arena)) {
/* Insert extent into large. */ /* Insert edata into large. */
malloc_mutex_lock(tsdn, &arena->large_mtx); malloc_mutex_lock(tsdn, &arena->large_mtx);
extent_list_append(&arena->large, extent); edata_list_active_append(&arena->large, edata);
malloc_mutex_unlock(tsdn, &arena->large_mtx); malloc_mutex_unlock(tsdn, &arena->large_mtx);
} }
if (config_prof && arena_prof_accum(tsdn, arena, usize)) {
prof_idump(tsdn);
}
if (zero) {
assert(is_zeroed);
} else if (config_fill && unlikely(opt_junk_alloc)) {
memset(extent_addr_get(extent), JEMALLOC_ALLOC_JUNK,
extent_usize_get(extent));
}
arena_decay_tick(tsdn, arena); arena_decay_tick(tsdn, arena);
return extent_addr_get(extent); return edata_addr_get(edata);
} }
static void
large_dalloc_junk_impl(void *ptr, size_t size) {
memset(ptr, JEMALLOC_FREE_JUNK, size);
}
large_dalloc_junk_t *JET_MUTABLE large_dalloc_junk = large_dalloc_junk_impl;
static void
large_dalloc_maybe_junk_impl(void *ptr, size_t size) {
if (config_fill && have_dss && unlikely(opt_junk_free)) {
/*
* Only bother junk filling if the extent isn't about to be
* unmapped.
*/
if (opt_retain || (have_dss && extent_in_dss(ptr))) {
large_dalloc_junk(ptr, size);
}
}
}
large_dalloc_maybe_junk_t *JET_MUTABLE large_dalloc_maybe_junk =
large_dalloc_maybe_junk_impl;
static bool static bool
large_ralloc_no_move_shrink(tsdn_t *tsdn, extent_t *extent, size_t usize) { large_ralloc_no_move_shrink(tsdn_t *tsdn, edata_t *edata, size_t usize) {
arena_t *arena = extent_arena_get(extent); arena_t *arena = arena_get_from_edata(edata);
size_t oldusize = extent_usize_get(extent); ehooks_t *ehooks = arena_get_ehooks(arena);
extent_hooks_t *extent_hooks = extent_hooks_get(arena); size_t old_size = edata_size_get(edata);
size_t diff = extent_size_get(extent) - (usize + sz_large_pad); size_t old_usize = edata_usize_get(edata);
assert(oldusize > usize); assert(old_usize > usize);
if (extent_hooks->split == NULL) { if (ehooks_split_will_fail(ehooks)) {
return true; return true;
} }
/* Split excess pages. */ bool deferred_work_generated = false;
if (diff != 0) { bool err = pa_shrink(tsdn, &arena->pa_shard, edata, old_size,
extent_t *trail = extent_split_wrapper(tsdn, arena, usize + sz_large_pad, sz_size2index(usize),
&extent_hooks, extent, usize + sz_large_pad, &deferred_work_generated);
sz_size2index(usize), false, diff, SC_NSIZES, false); if (err) {
if (trail == NULL) {
return true; return true;
} }
if (deferred_work_generated) {
if (config_fill && unlikely(opt_junk_free)) { arena_handle_deferred_work(tsdn, arena);
large_dalloc_maybe_junk(extent_addr_get(trail),
extent_size_get(trail));
}
arena_extents_dirty_dalloc(tsdn, arena, &extent_hooks, trail);
} }
arena_extent_ralloc_large_shrink(tsdn, arena, edata, old_usize);
arena_extent_ralloc_large_shrink(tsdn, arena, extent, oldusize);
return false; return false;
} }
static bool
large_ralloc_no_move_expand(tsdn_t *tsdn, edata_t *edata, size_t usize,
bool zero) {
arena_t *arena = arena_get_from_edata(edata);
size_t old_size = edata_size_get(edata);
size_t old_usize = edata_usize_get(edata);
size_t new_size = usize + sz_large_pad;
szind_t szind = sz_size2index(usize);
bool deferred_work_generated = false;
bool err = pa_expand(tsdn, &arena->pa_shard, edata, old_size, new_size,
szind, zero, &deferred_work_generated);
if (deferred_work_generated) {
arena_handle_deferred_work(tsdn, arena);
}
if (err) {
return true;
}
if (zero) {
if (opt_cache_oblivious) {
assert(sz_large_pad == PAGE);
/*
* Zero the trailing bytes of the original allocation's
* last page, since they are in an indeterminate state.
@@ -197,28 +113,23 @@ large_ralloc_no_move_expand(tsdn_t *tsdn, edata_t *edata, size_t usize,
* of CACHELINE in [0 .. PAGE).
*/
void *zbase = (void *)
((uintptr_t)edata_addr_get(edata) + old_usize);
void *zpast = PAGE_ADDR2BASE((void *)((uintptr_t)zbase +
PAGE));
size_t nzero = (uintptr_t)zpast - (uintptr_t)zbase;
assert(nzero > 0);
memset(zbase, 0, nzero);
}
}
arena_extent_ralloc_large_expand(tsdn, arena, edata, old_usize);
return false;
}
bool
large_ralloc_no_move(tsdn_t *tsdn, edata_t *edata, size_t usize_min,
size_t usize_max, bool zero) {
size_t oldusize = edata_usize_get(edata);
/* The following should have been caught by callers. */
assert(usize_min > 0 && usize_max <= SC_LARGE_MAXCLASS);
@@ -228,16 +139,15 @@ large_ralloc_no_move(tsdn_t *tsdn, edata_t *edata, size_t usize_min,
if (usize_max > oldusize) {
/* Attempt to expand the allocation in-place. */
if (!large_ralloc_no_move_expand(tsdn, edata, usize_max,
zero)) {
arena_decay_tick(tsdn, arena_get_from_edata(edata));
return false;
}
/* Try again, this time with usize_min. */
if (usize_min < usize_max && usize_min > oldusize &&
large_ralloc_no_move_expand(tsdn, edata, usize_min, zero)) {
arena_decay_tick(tsdn, arena_get_from_edata(edata));
return false;
}
}
@@ -247,14 +157,14 @@ large_ralloc_no_move(tsdn_t *tsdn, edata_t *edata, size_t usize_min,
* the new size.
*/
if (oldusize >= usize_min && oldusize <= usize_max) {
arena_decay_tick(tsdn, arena_get_from_edata(edata));
return false;
}
/* Attempt to shrink the allocation in-place. */
if (oldusize > usize_max) {
if (!large_ralloc_no_move_shrink(tsdn, edata, usize_max)) {
arena_decay_tick(tsdn, arena_get_from_edata(edata));
return false;
}
}
@@ -274,9 +184,9 @@ void *
large_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t usize,
size_t alignment, bool zero, tcache_t *tcache,
hook_ralloc_args_t *hook_args) {
edata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);
size_t oldusize = edata_usize_get(edata);
/* The following should have been caught by callers. */
assert(usize > 0 && usize <= SC_LARGE_MAXCLASS);
/* Both allocation sizes must be large to avoid a move. */
@@ -284,11 +194,11 @@ large_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t usize,
&& usize >= SC_LARGE_MINCLASS);
/* Try to avoid moving the allocation. */
if (!large_ralloc_no_move(tsdn, edata, usize, usize, zero)) {
hook_invoke_expand(hook_args->is_realloc
? hook_expand_realloc : hook_expand_rallocx, ptr, oldusize,
usize, (uintptr_t)ptr, hook_args->args);
return edata_addr_get(edata);
}
/*
@@ -309,87 +219,104 @@ large_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t usize,
? hook_dalloc_realloc : hook_dalloc_rallocx, ptr, hook_args->args);
size_t copysize = (usize < oldusize) ? usize : oldusize;
memcpy(ret, edata_addr_get(edata), copysize);
isdalloct(tsdn, edata_addr_get(edata), oldusize, tcache, NULL, true);
return ret;
}
/*
* locked indicates whether the arena's large_mtx is currently held.
*/
static void
large_dalloc_prep_impl(tsdn_t *tsdn, arena_t *arena, edata_t *edata,
bool locked) {
if (!locked) {
/* See comments in arena_bin_slabs_full_insert(). */
if (!arena_is_auto(arena)) {
malloc_mutex_lock(tsdn, &arena->large_mtx);
edata_list_active_remove(&arena->large, edata);
malloc_mutex_unlock(tsdn, &arena->large_mtx);
}
} else {
/* Only hold the large_mtx if necessary. */
if (!arena_is_auto(arena)) {
malloc_mutex_assert_owner(tsdn, &arena->large_mtx);
edata_list_active_remove(&arena->large, edata);
}
}
arena_extent_dalloc_large_prep(tsdn, arena, edata);
}
static void
large_dalloc_finish_impl(tsdn_t *tsdn, arena_t *arena, edata_t *edata) {
bool deferred_work_generated = false;
pa_dalloc(tsdn, &arena->pa_shard, edata, &deferred_work_generated);
if (deferred_work_generated) {
arena_handle_deferred_work(tsdn, arena);
}
}
void
large_dalloc_prep_locked(tsdn_t *tsdn, edata_t *edata) {
large_dalloc_prep_impl(tsdn, arena_get_from_edata(edata), edata, true);
}
void
large_dalloc_finish(tsdn_t *tsdn, edata_t *edata) {
large_dalloc_finish_impl(tsdn, arena_get_from_edata(edata), edata);
}
void
large_dalloc(tsdn_t *tsdn, edata_t *edata) {
arena_t *arena = arena_get_from_edata(edata);
large_dalloc_prep_impl(tsdn, arena, edata, false);
large_dalloc_finish_impl(tsdn, arena, edata);
arena_decay_tick(tsdn, arena);
}
size_t
large_salloc(tsdn_t *tsdn, const edata_t *edata) {
return edata_usize_get(edata);
}
void
large_prof_info_get(tsd_t *tsd, edata_t *edata, prof_info_t *prof_info,
bool reset_recent) {
assert(prof_info != NULL);
prof_tctx_t *alloc_tctx = edata_prof_tctx_get(edata);
prof_info->alloc_tctx = alloc_tctx;
if ((uintptr_t)alloc_tctx > (uintptr_t)1U) {
nstime_copy(&prof_info->alloc_time,
edata_prof_alloc_time_get(edata));
prof_info->alloc_size = edata_prof_alloc_size_get(edata);
if (reset_recent) {
/*
* Reset the pointer on the recent allocation record,
* so that this allocation is recorded as released.
*/
prof_recent_alloc_reset(tsd, edata);
}
}
}
static void
large_prof_tctx_set(edata_t *edata, prof_tctx_t *tctx) {
edata_prof_tctx_set(edata, tctx);
}
void
large_prof_tctx_reset(edata_t *edata) {
large_prof_tctx_set(edata, (prof_tctx_t *)(uintptr_t)1U);
}
void
large_prof_info_set(edata_t *edata, prof_tctx_t *tctx, size_t size) {
nstime_t t;
nstime_prof_init_update(&t);
edata_prof_alloc_time_set(edata, &t);
edata_prof_alloc_size_set(edata, size);
edata_prof_recent_alloc_init(edata);
large_prof_tctx_set(edata, tctx);
}
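/*
 * The profiling helpers above use (prof_tctx_t *)(uintptr_t)1U as a sentinel
 * meaning "not sampled": large_prof_tctx_reset() stores it, and
 * large_prof_info_get() only copies the allocation time/size and resets the
 * recent-allocation record when the stored tctx is greater than that
 * sentinel.
 */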
#define JEMALLOC_MALLOC_IO_C_
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
@@ -53,7 +52,6 @@
/******************************************************************************/
/* Function prototypes for non-inline static functions. */
#define U2S_BUFSIZE ((1U << (LG_SIZEOF_INTMAX_T + 3)) + 1)
static char *u2s(uintmax_t x, unsigned base, bool uppercase, char *s,
size_t *slen_p);
@@ -68,7 +66,7 @@ static char *x2s(uintmax_t x, bool alt_form, bool uppercase, char *s,
/******************************************************************************/
/* malloc_message() setup. */
void
wrtmessage(void *cbopaque, const char *s) {
malloc_write_fd(STDERR_FILENO, s, strlen(s));
}
@@ -135,10 +133,10 @@ malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base) {
break;
case '-':
neg = true;
JEMALLOC_FALLTHROUGH;
case '+':
p++;
JEMALLOC_FALLTHROUGH;
default:
goto label_prefix;
}
@@ -289,7 +287,7 @@ d2s(intmax_t x, char sign, char *s, size_t *slen_p) {
if (!neg) {
break;
}
JEMALLOC_FALLTHROUGH;
case ' ':
case '+':
s--;
@@ -323,6 +321,7 @@ x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p) {
return s;
}
JEMALLOC_COLD
size_t
malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
size_t i;
@@ -348,9 +347,13 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
if (!left_justify && pad_len != 0) { \
size_t j; \
for (j = 0; j < pad_len; j++) { \
if (pad_zero) { \
APPEND_C('0'); \
} else { \
APPEND_C(' '); \
} \
} \
} \
/* Value. */ \
APPEND_S(s, slen); \
/* Right padding. */ \
@@ -420,6 +423,8 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
unsigned char len = '?';
char *s;
size_t slen;
bool first_width_digit = true;
bool pad_zero = false;
f++;
/* Flags. */
@@ -456,7 +461,12 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
width = -width;
}
break;
case '0':
if (first_width_digit) {
pad_zero = true;
}
JEMALLOC_FALLTHROUGH;
case '1': case '2': case '3': case '4':
case '5': case '6': case '7': case '8': case '9': {
uintmax_t uwidth;
set_errno(0);
@@ -464,6 +474,7 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
assert(uwidth != UINTMAX_MAX || get_errno() !=
ERANGE);
width = (int)uwidth;
first_width_digit = false;
break;
} default:
break;
@@ -521,6 +532,18 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {
intmax_t val JEMALLOC_CC_SILENCE_INIT(0);
char buf[D2S_BUFSIZE];
/*
* Outputting negative, zero-padded numbers
* would require a nontrivial rework of the
* interaction between the width and padding
* (since 0 padding goes between the '-' and the
* number, while ' ' padding goes either before
* the - or after the number. Since we
* currently don't ever need 0-padded negative
* numbers, just don't bother supporting it.
*/
assert(!pad_zero);
GET_ARG_NUMERIC(val, len);
s = d2s(val, (plus_plus ? '+' : (plus_space ?
' ' : '-')), buf, &slen);
@@ -620,8 +643,8 @@ malloc_snprintf(char *str, size_t size, const char *format, ...) {
}
void
malloc_vcprintf(write_cb_t *write_cb, void *cbopaque, const char *format,
va_list ap) {
char buf[MALLOC_PRINTF_BUFSIZE];
if (write_cb == NULL) {
@@ -644,8 +667,7 @@ malloc_vcprintf(write_cb_t *write_cb, void *cbopaque, const char *format,
*/
JEMALLOC_FORMAT_PRINTF(3, 4)
void
malloc_cprintf(write_cb_t *write_cb, void *cbopaque, const char *format, ...) {
va_list ap;
va_start(ap, format);
...
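/*
 * Illustration (not part of this file) of the new pad_zero handling in
 * malloc_vsnprintf(): a leading '0' in the width now requests zero padding
 * for unsigned conversions, e.g.
 *
 *   char buf[16];
 *   malloc_snprintf(buf, sizeof(buf), "%08zu", (size_t)42);
 *   // buf == "00000042"
 *
 * Signed conversions assert(!pad_zero), since the zeros would have to go
 * between the sign and the digits, which this formatter does not support.
 */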
#define JEMALLOC_MUTEX_C_
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
@@ -10,6 +9,12 @@
#define _CRT_SPINCOUNT 4000
#endif
/*
* Based on benchmark results, a fixed spin with this amount of retries works
* well for our critical sections.
*/
int64_t opt_mutex_max_spin = 600;
/******************************************************************************/
/* Data. */
@@ -46,13 +51,13 @@ JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
void
malloc_mutex_lock_slow(malloc_mutex_t *mutex) {
mutex_prof_data_t *data = &mutex->prof_data;
nstime_t before;
if (ncpus == 1) {
goto label_spin_done;
}
int cnt = 0;
do {
spin_cpu_spinwait();
if (!atomic_load_b(&mutex->locked, ATOMIC_RELAXED)
@@ -60,7 +65,7 @@ malloc_mutex_lock_slow(malloc_mutex_t *mutex) {
data->n_spin_acquired++;
return;
}
} while (cnt++ < opt_mutex_max_spin || opt_mutex_max_spin == -1);
if (!config_stats) {
/* Only spin is useful when stats is off. */
@@ -68,7 +73,7 @@ malloc_mutex_lock_slow(malloc_mutex_t *mutex) {
return;
}
label_spin_done:
nstime_init_update(&before);
/* Copy before to after to avoid clock skews. */
nstime_t after;
nstime_copy(&after, &before);
@@ -104,8 +109,8 @@ label_spin_done:
static void
mutex_prof_data_init(mutex_prof_data_t *data) {
memset(data, 0, sizeof(mutex_prof_data_t));
nstime_init_zero(&data->max_wait_time);
nstime_init_zero(&data->tot_wait_time);
data->prev_owner = NULL;
}
...
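/*
 * opt_mutex_max_spin replaces the old fixed MALLOC_MUTEX_MAX_SPIN bound: the
 * slow path spins at most that many iterations before it starts measuring
 * wait time and blocking, and a value of -1 (cnt++ < opt_mutex_max_spin ||
 * opt_mutex_max_spin == -1) makes it spin indefinitely.
 */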
#define JEMALLOC_MUTEX_POOL_C_
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mutex_pool.h"
bool
mutex_pool_init(mutex_pool_t *pool, const char *name, witness_rank_t rank) {
for (int i = 0; i < MUTEX_POOL_SIZE; ++i) {
if (malloc_mutex_init(&pool->mutexes[i], name, rank,
malloc_mutex_address_ordered)) {
return true;
}
}
return false;
}
@@ -8,96 +8,169 @@
#define BILLION UINT64_C(1000000000)
#define MILLION UINT64_C(1000000)
static void
nstime_set_initialized(nstime_t *time) {
#ifdef JEMALLOC_DEBUG
time->magic = NSTIME_MAGIC;
#endif
}
static void
nstime_assert_initialized(const nstime_t *time) {
#ifdef JEMALLOC_DEBUG
/*
* Some parts (e.g. stats) rely on memset to zero initialize. Treat
* these as valid initialization.
*/
assert(time->magic == NSTIME_MAGIC ||
(time->magic == 0 && time->ns == 0));
#endif
}
static void
nstime_pair_assert_initialized(const nstime_t *t1, const nstime_t *t2) {
nstime_assert_initialized(t1);
nstime_assert_initialized(t2);
}
static void
nstime_initialize_operand(nstime_t *time) {
/*
* Operations like nstime_add may have the initial operand being zero
* initialized (covered by the assert below). Full-initialize needed
* before changing it to non-zero.
*/
nstime_assert_initialized(time);
nstime_set_initialized(time);
}
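/*
 * The helpers above implement a debug-only initialization check: under
 * JEMALLOC_DEBUG every nstime_t carries a magic value that nstime_init*()
 * sets and the accessors assert on, with an all-zero struct (e.g. produced by
 * memset in stats code) also accepted as initialized.
 */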
void
nstime_init(nstime_t *time, uint64_t ns) {
nstime_set_initialized(time);
time->ns = ns;
}
void
nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {
nstime_set_initialized(time);
time->ns = sec * BILLION + nsec;
}
uint64_t
nstime_ns(const nstime_t *time) {
nstime_assert_initialized(time);
return time->ns;
}
uint64_t
nstime_msec(const nstime_t *time) {
nstime_assert_initialized(time);
return time->ns / MILLION;
}
uint64_t
nstime_sec(const nstime_t *time) {
nstime_assert_initialized(time);
return time->ns / BILLION;
}
uint64_t
nstime_nsec(const nstime_t *time) {
nstime_assert_initialized(time);
return time->ns % BILLION;
}
void
nstime_copy(nstime_t *time, const nstime_t *source) {
/* Source is required to be initialized. */
nstime_assert_initialized(source);
*time = *source;
nstime_assert_initialized(time);
}
int
nstime_compare(const nstime_t *a, const nstime_t *b) {
nstime_pair_assert_initialized(a, b);
return (a->ns > b->ns) - (a->ns < b->ns);
}
void
nstime_add(nstime_t *time, const nstime_t *addend) {
nstime_pair_assert_initialized(time, addend);
assert(UINT64_MAX - time->ns >= addend->ns);
nstime_initialize_operand(time);
time->ns += addend->ns;
}
void
nstime_iadd(nstime_t *time, uint64_t addend) {
nstime_assert_initialized(time);
assert(UINT64_MAX - time->ns >= addend);
nstime_initialize_operand(time);
time->ns += addend;
}
void
nstime_subtract(nstime_t *time, const nstime_t *subtrahend) {
nstime_pair_assert_initialized(time, subtrahend);
assert(nstime_compare(time, subtrahend) >= 0);
/* No initialize operand -- subtraction must be initialized. */
time->ns -= subtrahend->ns;
}
void
nstime_isubtract(nstime_t *time, uint64_t subtrahend) {
nstime_assert_initialized(time);
assert(time->ns >= subtrahend);
/* No initialize operand -- subtraction must be initialized. */
time->ns -= subtrahend;
}
void
nstime_imultiply(nstime_t *time, uint64_t multiplier) {
nstime_assert_initialized(time);
assert((((time->ns | multiplier) & (UINT64_MAX << (sizeof(uint64_t) <<
2))) == 0) || ((time->ns * multiplier) / multiplier == time->ns));
nstime_initialize_operand(time);
time->ns *= multiplier;
}
void
nstime_idivide(nstime_t *time, uint64_t divisor) {
nstime_assert_initialized(time);
assert(divisor != 0);
nstime_initialize_operand(time);
time->ns /= divisor;
}
uint64_t
nstime_divide(const nstime_t *time, const nstime_t *divisor) {
nstime_pair_assert_initialized(time, divisor);
assert(divisor->ns != 0);
/* No initialize operand -- *time itself remains unchanged. */
return time->ns / divisor->ns;
}
/* Returns time since *past, w/o updating *past. */
uint64_t
nstime_ns_since(const nstime_t *past) {
nstime_assert_initialized(past);
nstime_t now;
nstime_copy(&now, past);
nstime_update(&now);
assert(nstime_compare(&now, past) >= 0);
return now.ns - past->ns;
}
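/*
 * Typical use of nstime_ns_since() (illustrative only, not part of this
 * file): take a snapshot, do some work, then read the elapsed nanoseconds
 * without advancing the snapshot.
 *
 *   nstime_t start;
 *   nstime_init_update(&start);
 *   // ... work ...
 *   uint64_t elapsed_ns = nstime_ns_since(&start);
 */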
#ifdef _WIN32
# define NSTIME_MONOTONIC true
static void
@@ -152,7 +225,42 @@ nstime_monotonic_impl(void) {
}
nstime_monotonic_t *JET_MUTABLE nstime_monotonic = nstime_monotonic_impl;
prof_time_res_t opt_prof_time_res =
prof_time_res_default;
const char *prof_time_res_mode_names[] = {
"default",
"high",
};
static void
nstime_get_realtime(nstime_t *time) {
#if defined(JEMALLOC_HAVE_CLOCK_REALTIME) && !defined(_WIN32)
struct timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
nstime_init2(time, ts.tv_sec, ts.tv_nsec);
#else
unreachable();
#endif
}
static void
nstime_prof_update_impl(nstime_t *time) {
nstime_t old_time;
nstime_copy(&old_time, time);
if (opt_prof_time_res == prof_time_res_high) {
nstime_get_realtime(time);
} else {
nstime_get(time);
}
}
nstime_prof_update_t *JET_MUTABLE nstime_prof_update = nstime_prof_update_impl;
static void
nstime_update_impl(nstime_t *time) {
nstime_t old_time;
@@ -162,9 +270,20 @@ nstime_update_impl(nstime_t *time) {
/* Handle non-monotonic clocks. */
if (unlikely(nstime_compare(&old_time, time) > 0)) {
nstime_copy(time, &old_time);
}
}
nstime_update_t *JET_MUTABLE nstime_update = nstime_update_impl;
void
nstime_init_update(nstime_t *time) {
nstime_init_zero(time);
nstime_update(time);
}
void
nstime_prof_init_update(nstime_t *time) {
nstime_init_zero(time);
nstime_prof_update(time);
}
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/san.h"
#include "jemalloc/internal/hpa.h"
static void
pa_nactive_add(pa_shard_t *shard, size_t add_pages) {
atomic_fetch_add_zu(&shard->nactive, add_pages, ATOMIC_RELAXED);
}
static void
pa_nactive_sub(pa_shard_t *shard, size_t sub_pages) {
assert(atomic_load_zu(&shard->nactive, ATOMIC_RELAXED) >= sub_pages);
atomic_fetch_sub_zu(&shard->nactive, sub_pages, ATOMIC_RELAXED);
}
bool
pa_central_init(pa_central_t *central, base_t *base, bool hpa,
hpa_hooks_t *hpa_hooks) {
bool err;
if (hpa) {
err = hpa_central_init(&central->hpa, base, hpa_hooks);
if (err) {
return true;
}
}
return false;
}
bool
pa_shard_init(tsdn_t *tsdn, pa_shard_t *shard, pa_central_t *central,
emap_t *emap, base_t *base, unsigned ind, pa_shard_stats_t *stats,
malloc_mutex_t *stats_mtx, nstime_t *cur_time,
size_t pac_oversize_threshold, ssize_t dirty_decay_ms,
ssize_t muzzy_decay_ms) {
/* This will change eventually, but for now it should hold. */
assert(base_ind_get(base) == ind);
if (edata_cache_init(&shard->edata_cache, base)) {
return true;
}
if (pac_init(tsdn, &shard->pac, base, emap, &shard->edata_cache,
cur_time, pac_oversize_threshold, dirty_decay_ms, muzzy_decay_ms,
&stats->pac_stats, stats_mtx)) {
return true;
}
shard->ind = ind;
shard->ever_used_hpa = false;
atomic_store_b(&shard->use_hpa, false, ATOMIC_RELAXED);
atomic_store_zu(&shard->nactive, 0, ATOMIC_RELAXED);
shard->stats_mtx = stats_mtx;
shard->stats = stats;
memset(shard->stats, 0, sizeof(*shard->stats));
shard->central = central;
shard->emap = emap;
shard->base = base;
return false;
}
bool
pa_shard_enable_hpa(tsdn_t *tsdn, pa_shard_t *shard,
const hpa_shard_opts_t *hpa_opts, const sec_opts_t *hpa_sec_opts) {
if (hpa_shard_init(&shard->hpa_shard, &shard->central->hpa, shard->emap,
shard->base, &shard->edata_cache, shard->ind, hpa_opts)) {
return true;
}
if (sec_init(tsdn, &shard->hpa_sec, shard->base, &shard->hpa_shard.pai,
hpa_sec_opts)) {
return true;
}
shard->ever_used_hpa = true;
atomic_store_b(&shard->use_hpa, true, ATOMIC_RELAXED);
return false;
}
void
pa_shard_disable_hpa(tsdn_t *tsdn, pa_shard_t *shard) {
atomic_store_b(&shard->use_hpa, false, ATOMIC_RELAXED);
if (shard->ever_used_hpa) {
sec_disable(tsdn, &shard->hpa_sec);
hpa_shard_disable(tsdn, &shard->hpa_shard);
}
}
void
pa_shard_reset(tsdn_t *tsdn, pa_shard_t *shard) {
atomic_store_zu(&shard->nactive, 0, ATOMIC_RELAXED);
if (shard->ever_used_hpa) {
sec_flush(tsdn, &shard->hpa_sec);
}
}
static bool
pa_shard_uses_hpa(pa_shard_t *shard) {
return atomic_load_b(&shard->use_hpa, ATOMIC_RELAXED);
}
void
pa_shard_destroy(tsdn_t *tsdn, pa_shard_t *shard) {
pac_destroy(tsdn, &shard->pac);
if (shard->ever_used_hpa) {
sec_flush(tsdn, &shard->hpa_sec);
hpa_shard_disable(tsdn, &shard->hpa_shard);
}
}
static pai_t *
pa_get_pai(pa_shard_t *shard, edata_t *edata) {
return (edata_pai_get(edata) == EXTENT_PAI_PAC
? &shard->pac.pai : &shard->hpa_sec.pai);
}
edata_t *
pa_alloc(tsdn_t *tsdn, pa_shard_t *shard, size_t size, size_t alignment,
bool slab, szind_t szind, bool zero, bool guarded,
bool *deferred_work_generated) {
witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),
WITNESS_RANK_CORE, 0);
assert(!guarded || alignment <= PAGE);
edata_t *edata = NULL;
if (!guarded && pa_shard_uses_hpa(shard)) {
edata = pai_alloc(tsdn, &shard->hpa_sec.pai, size, alignment,
zero, /* guarded */ false, slab, deferred_work_generated);
}
/*
* Fall back to the PAC if the HPA is off or couldn't serve the given
* allocation request.
*/
if (edata == NULL) {
edata = pai_alloc(tsdn, &shard->pac.pai, size, alignment, zero,
guarded, slab, deferred_work_generated);
}
if (edata != NULL) {
assert(edata_size_get(edata) == size);
pa_nactive_add(shard, size >> LG_PAGE);
emap_remap(tsdn, shard->emap, edata, szind, slab);
edata_szind_set(edata, szind);
edata_slab_set(edata, slab);
if (slab && (size > 2 * PAGE)) {
emap_register_interior(tsdn, shard->emap, edata, szind);
}
assert(edata_arena_ind_get(edata) == shard->ind);
}
return edata;
}
bool
pa_expand(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,
size_t new_size, szind_t szind, bool zero, bool *deferred_work_generated) {
assert(new_size > old_size);
assert(edata_size_get(edata) == old_size);
assert((new_size & PAGE_MASK) == 0);
if (edata_guarded_get(edata)) {
return true;
}
size_t expand_amount = new_size - old_size;
pai_t *pai = pa_get_pai(shard, edata);
bool error = pai_expand(tsdn, pai, edata, old_size, new_size, zero,
deferred_work_generated);
if (error) {
return true;
}
pa_nactive_add(shard, expand_amount >> LG_PAGE);
edata_szind_set(edata, szind);
emap_remap(tsdn, shard->emap, edata, szind, /* slab */ false);
return false;
}
bool
pa_shrink(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,
size_t new_size, szind_t szind, bool *deferred_work_generated) {
assert(new_size < old_size);
assert(edata_size_get(edata) == old_size);
assert((new_size & PAGE_MASK) == 0);
if (edata_guarded_get(edata)) {
return true;
}
size_t shrink_amount = old_size - new_size;
pai_t *pai = pa_get_pai(shard, edata);
bool error = pai_shrink(tsdn, pai, edata, old_size, new_size,
deferred_work_generated);
if (error) {
return true;
}
pa_nactive_sub(shard, shrink_amount >> LG_PAGE);
edata_szind_set(edata, szind);
emap_remap(tsdn, shard->emap, edata, szind, /* slab */ false);
return false;
}
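/*
 * Guarded edatas are never resized in place: both pa_expand() and pa_shrink()
 * return true (failure) immediately when edata_guarded_get() is set, so
 * callers treat guarded allocations as requiring a moving reallocation.
 */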
void
pa_dalloc(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata,
bool *deferred_work_generated) {
emap_remap(tsdn, shard->emap, edata, SC_NSIZES, /* slab */ false);
if (edata_slab_get(edata)) {
emap_deregister_interior(tsdn, shard->emap, edata);
/*
* The slab state of the extent isn't cleared. It may be used
* by the pai implementation, e.g. to make caching decisions.
*/
}
edata_addr_set(edata, edata_base_get(edata));
edata_szind_set(edata, SC_NSIZES);
pa_nactive_sub(shard, edata_size_get(edata) >> LG_PAGE);
pai_t *pai = pa_get_pai(shard, edata);
pai_dalloc(tsdn, pai, edata, deferred_work_generated);
}
bool
pa_shard_retain_grow_limit_get_set(tsdn_t *tsdn, pa_shard_t *shard,
size_t *old_limit, size_t *new_limit) {
return pac_retain_grow_limit_get_set(tsdn, &shard->pac, old_limit,
new_limit);
}
bool
pa_decay_ms_set(tsdn_t *tsdn, pa_shard_t *shard, extent_state_t state,
ssize_t decay_ms, pac_purge_eagerness_t eagerness) {
return pac_decay_ms_set(tsdn, &shard->pac, state, decay_ms, eagerness);
}
ssize_t
pa_decay_ms_get(pa_shard_t *shard, extent_state_t state) {
return pac_decay_ms_get(&shard->pac, state);
}
void
pa_shard_set_deferral_allowed(tsdn_t *tsdn, pa_shard_t *shard,
bool deferral_allowed) {
if (pa_shard_uses_hpa(shard)) {
hpa_shard_set_deferral_allowed(tsdn, &shard->hpa_shard,
deferral_allowed);
}
}
void
pa_shard_do_deferred_work(tsdn_t *tsdn, pa_shard_t *shard) {
if (pa_shard_uses_hpa(shard)) {
hpa_shard_do_deferred_work(tsdn, &shard->hpa_shard);
}
}
/*
* Get time until next deferred work ought to happen. If there are multiple
* things that have been deferred, this function calculates the time until
* the soonest of those things.
*/
uint64_t
pa_shard_time_until_deferred_work(tsdn_t *tsdn, pa_shard_t *shard) {
uint64_t time = pai_time_until_deferred_work(tsdn, &shard->pac.pai);
if (time == BACKGROUND_THREAD_DEFERRED_MIN) {
return time;
}
if (pa_shard_uses_hpa(shard)) {
uint64_t hpa =
pai_time_until_deferred_work(tsdn, &shard->hpa_shard.pai);
if (hpa < time) {
time = hpa;
}
}
return time;
}
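/*
 * BACKGROUND_THREAD_DEFERRED_MIN is the smallest possible deferral, which is
 * why both this function and the PAC's implementation return as soon as it is
 * seen -- nothing else can ask to run sooner. A caller might use the result
 * like this (illustrative only):
 *
 *   uint64_t ns = pa_shard_time_until_deferred_work(tsdn, shard);
 *   // sleep at most ns nanoseconds before the next maintenance pass
 */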
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
/*
* This file is logically part of the PA module. While pa.c contains the core
* allocator functionality, this file contains boring integration functionality;
* things like the pre- and post- fork handlers, and stats merging for CTL
* refreshes.
*/
void
pa_shard_prefork0(tsdn_t *tsdn, pa_shard_t *shard) {
malloc_mutex_prefork(tsdn, &shard->pac.decay_dirty.mtx);
malloc_mutex_prefork(tsdn, &shard->pac.decay_muzzy.mtx);
}
void
pa_shard_prefork2(tsdn_t *tsdn, pa_shard_t *shard) {
if (shard->ever_used_hpa) {
sec_prefork2(tsdn, &shard->hpa_sec);
}
}
void
pa_shard_prefork3(tsdn_t *tsdn, pa_shard_t *shard) {
malloc_mutex_prefork(tsdn, &shard->pac.grow_mtx);
if (shard->ever_used_hpa) {
hpa_shard_prefork3(tsdn, &shard->hpa_shard);
}
}
void
pa_shard_prefork4(tsdn_t *tsdn, pa_shard_t *shard) {
ecache_prefork(tsdn, &shard->pac.ecache_dirty);
ecache_prefork(tsdn, &shard->pac.ecache_muzzy);
ecache_prefork(tsdn, &shard->pac.ecache_retained);
if (shard->ever_used_hpa) {
hpa_shard_prefork4(tsdn, &shard->hpa_shard);
}
}
void
pa_shard_prefork5(tsdn_t *tsdn, pa_shard_t *shard) {
edata_cache_prefork(tsdn, &shard->edata_cache);
}
void
pa_shard_postfork_parent(tsdn_t *tsdn, pa_shard_t *shard) {
edata_cache_postfork_parent(tsdn, &shard->edata_cache);
ecache_postfork_parent(tsdn, &shard->pac.ecache_dirty);
ecache_postfork_parent(tsdn, &shard->pac.ecache_muzzy);
ecache_postfork_parent(tsdn, &shard->pac.ecache_retained);
malloc_mutex_postfork_parent(tsdn, &shard->pac.grow_mtx);
malloc_mutex_postfork_parent(tsdn, &shard->pac.decay_dirty.mtx);
malloc_mutex_postfork_parent(tsdn, &shard->pac.decay_muzzy.mtx);
if (shard->ever_used_hpa) {
sec_postfork_parent(tsdn, &shard->hpa_sec);
hpa_shard_postfork_parent(tsdn, &shard->hpa_shard);
}
}
void
pa_shard_postfork_child(tsdn_t *tsdn, pa_shard_t *shard) {
edata_cache_postfork_child(tsdn, &shard->edata_cache);
ecache_postfork_child(tsdn, &shard->pac.ecache_dirty);
ecache_postfork_child(tsdn, &shard->pac.ecache_muzzy);
ecache_postfork_child(tsdn, &shard->pac.ecache_retained);
malloc_mutex_postfork_child(tsdn, &shard->pac.grow_mtx);
malloc_mutex_postfork_child(tsdn, &shard->pac.decay_dirty.mtx);
malloc_mutex_postfork_child(tsdn, &shard->pac.decay_muzzy.mtx);
if (shard->ever_used_hpa) {
sec_postfork_child(tsdn, &shard->hpa_sec);
hpa_shard_postfork_child(tsdn, &shard->hpa_shard);
}
}
void
pa_shard_basic_stats_merge(pa_shard_t *shard, size_t *nactive, size_t *ndirty,
size_t *nmuzzy) {
*nactive += atomic_load_zu(&shard->nactive, ATOMIC_RELAXED);
*ndirty += ecache_npages_get(&shard->pac.ecache_dirty);
*nmuzzy += ecache_npages_get(&shard->pac.ecache_muzzy);
}
void
pa_shard_stats_merge(tsdn_t *tsdn, pa_shard_t *shard,
pa_shard_stats_t *pa_shard_stats_out, pac_estats_t *estats_out,
hpa_shard_stats_t *hpa_stats_out, sec_stats_t *sec_stats_out,
size_t *resident) {
cassert(config_stats);
pa_shard_stats_out->pac_stats.retained +=
ecache_npages_get(&shard->pac.ecache_retained) << LG_PAGE;
pa_shard_stats_out->edata_avail += atomic_load_zu(
&shard->edata_cache.count, ATOMIC_RELAXED);
size_t resident_pgs = 0;
resident_pgs += atomic_load_zu(&shard->nactive, ATOMIC_RELAXED);
resident_pgs += ecache_npages_get(&shard->pac.ecache_dirty);
*resident += (resident_pgs << LG_PAGE);
/* Dirty decay stats */
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_dirty.npurge,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_dirty.npurge));
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_dirty.nmadvise,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_dirty.nmadvise));
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_dirty.purged,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_dirty.purged));
/* Muzzy decay stats */
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_muzzy.npurge,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_muzzy.npurge));
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_muzzy.nmadvise,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_muzzy.nmadvise));
locked_inc_u64_unsynchronized(
&pa_shard_stats_out->pac_stats.decay_muzzy.purged,
locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),
&shard->pac.stats->decay_muzzy.purged));
atomic_load_add_store_zu(&pa_shard_stats_out->pac_stats.abandoned_vm,
atomic_load_zu(&shard->pac.stats->abandoned_vm, ATOMIC_RELAXED));
for (pszind_t i = 0; i < SC_NPSIZES; i++) {
size_t dirty, muzzy, retained, dirty_bytes, muzzy_bytes,
retained_bytes;
dirty = ecache_nextents_get(&shard->pac.ecache_dirty, i);
muzzy = ecache_nextents_get(&shard->pac.ecache_muzzy, i);
retained = ecache_nextents_get(&shard->pac.ecache_retained, i);
dirty_bytes = ecache_nbytes_get(&shard->pac.ecache_dirty, i);
muzzy_bytes = ecache_nbytes_get(&shard->pac.ecache_muzzy, i);
retained_bytes = ecache_nbytes_get(&shard->pac.ecache_retained,
i);
estats_out[i].ndirty = dirty;
estats_out[i].nmuzzy = muzzy;
estats_out[i].nretained = retained;
estats_out[i].dirty_bytes = dirty_bytes;
estats_out[i].muzzy_bytes = muzzy_bytes;
estats_out[i].retained_bytes = retained_bytes;
}
if (shard->ever_used_hpa) {
hpa_shard_stats_merge(tsdn, &shard->hpa_shard, hpa_stats_out);
sec_stats_merge(tsdn, &shard->hpa_sec, sec_stats_out);
}
}
static void
pa_shard_mtx_stats_read_single(tsdn_t *tsdn, mutex_prof_data_t *mutex_prof_data,
malloc_mutex_t *mtx, int ind) {
malloc_mutex_lock(tsdn, mtx);
malloc_mutex_prof_read(tsdn, &mutex_prof_data[ind], mtx);
malloc_mutex_unlock(tsdn, mtx);
}
void
pa_shard_mtx_stats_read(tsdn_t *tsdn, pa_shard_t *shard,
mutex_prof_data_t mutex_prof_data[mutex_prof_num_arena_mutexes]) {
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->edata_cache.mtx, arena_prof_mutex_extent_avail);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->pac.ecache_dirty.mtx, arena_prof_mutex_extents_dirty);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->pac.ecache_muzzy.mtx, arena_prof_mutex_extents_muzzy);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->pac.ecache_retained.mtx, arena_prof_mutex_extents_retained);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->pac.decay_dirty.mtx, arena_prof_mutex_decay_dirty);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->pac.decay_muzzy.mtx, arena_prof_mutex_decay_muzzy);
if (shard->ever_used_hpa) {
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->hpa_shard.mtx, arena_prof_mutex_hpa_shard);
pa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,
&shard->hpa_shard.grow_mtx,
arena_prof_mutex_hpa_shard_grow);
sec_mutex_stats_read(tsdn, &shard->hpa_sec,
&mutex_prof_data[arena_prof_mutex_hpa_sec]);
}
}
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/pac.h"
#include "jemalloc/internal/san.h"
static edata_t *pac_alloc_impl(tsdn_t *tsdn, pai_t *self, size_t size,
size_t alignment, bool zero, bool guarded, bool frequent_reuse,
bool *deferred_work_generated);
static bool pac_expand_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,
size_t old_size, size_t new_size, bool zero, bool *deferred_work_generated);
static bool pac_shrink_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,
size_t old_size, size_t new_size, bool *deferred_work_generated);
static void pac_dalloc_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,
bool *deferred_work_generated);
static uint64_t pac_time_until_deferred_work(tsdn_t *tsdn, pai_t *self);
static inline void
pac_decay_data_get(pac_t *pac, extent_state_t state,
decay_t **r_decay, pac_decay_stats_t **r_decay_stats, ecache_t **r_ecache) {
switch(state) {
case extent_state_dirty:
*r_decay = &pac->decay_dirty;
*r_decay_stats = &pac->stats->decay_dirty;
*r_ecache = &pac->ecache_dirty;
return;
case extent_state_muzzy:
*r_decay = &pac->decay_muzzy;
*r_decay_stats = &pac->stats->decay_muzzy;
*r_ecache = &pac->ecache_muzzy;
return;
default:
unreachable();
}
}
bool
pac_init(tsdn_t *tsdn, pac_t *pac, base_t *base, emap_t *emap,
edata_cache_t *edata_cache, nstime_t *cur_time,
size_t pac_oversize_threshold, ssize_t dirty_decay_ms,
ssize_t muzzy_decay_ms, pac_stats_t *pac_stats, malloc_mutex_t *stats_mtx) {
unsigned ind = base_ind_get(base);
/*
* Delay coalescing for dirty extents despite the disruptive effect on
* memory layout for best-fit extent allocation, since cached extents
* are likely to be reused soon after deallocation, and the cost of
* merging/splitting extents is non-trivial.
*/
if (ecache_init(tsdn, &pac->ecache_dirty, extent_state_dirty, ind,
/* delay_coalesce */ true)) {
return true;
}
/*
* Coalesce muzzy extents immediately, because operations on them are in
* the critical path much less often than for dirty extents.
*/
if (ecache_init(tsdn, &pac->ecache_muzzy, extent_state_muzzy, ind,
/* delay_coalesce */ false)) {
return true;
}
/*
* Coalesce retained extents immediately, in part because they will
* never be evicted (and therefore there's no opportunity for delayed
* coalescing), but also because operations on retained extents are not
* in the critical path.
*/
if (ecache_init(tsdn, &pac->ecache_retained, extent_state_retained,
ind, /* delay_coalesce */ false)) {
return true;
}
exp_grow_init(&pac->exp_grow);
if (malloc_mutex_init(&pac->grow_mtx, "extent_grow",
WITNESS_RANK_EXTENT_GROW, malloc_mutex_rank_exclusive)) {
return true;
}
atomic_store_zu(&pac->oversize_threshold, pac_oversize_threshold,
ATOMIC_RELAXED);
if (decay_init(&pac->decay_dirty, cur_time, dirty_decay_ms)) {
return true;
}
if (decay_init(&pac->decay_muzzy, cur_time, muzzy_decay_ms)) {
return true;
}
if (san_bump_alloc_init(&pac->sba)) {
return true;
}
pac->base = base;
pac->emap = emap;
pac->edata_cache = edata_cache;
pac->stats = pac_stats;
pac->stats_mtx = stats_mtx;
atomic_store_zu(&pac->extent_sn_next, 0, ATOMIC_RELAXED);
pac->pai.alloc = &pac_alloc_impl;
pac->pai.alloc_batch = &pai_alloc_batch_default;
pac->pai.expand = &pac_expand_impl;
pac->pai.shrink = &pac_shrink_impl;
pac->pai.dalloc = &pac_dalloc_impl;
pac->pai.dalloc_batch = &pai_dalloc_batch_default;
pac->pai.time_until_deferred_work = &pac_time_until_deferred_work;
return false;
}
static inline bool
pac_may_have_muzzy(pac_t *pac) {
return pac_decay_ms_get(pac, extent_state_muzzy) != 0;
}
static edata_t *
pac_alloc_real(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, size_t size,
size_t alignment, bool zero, bool guarded) {
assert(!guarded || alignment <= PAGE);
edata_t *edata = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_dirty,
NULL, size, alignment, zero, guarded);
if (edata == NULL && pac_may_have_muzzy(pac)) {
edata = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_muzzy,
NULL, size, alignment, zero, guarded);
}
if (edata == NULL) {
edata = ecache_alloc_grow(tsdn, pac, ehooks,
&pac->ecache_retained, NULL, size, alignment, zero,
guarded);
if (config_stats && edata != NULL) {
atomic_fetch_add_zu(&pac->stats->pac_mapped, size,
ATOMIC_RELAXED);
}
}
return edata;
}
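/*
 * pac_alloc_real() tries progressively colder sources: dirty extents first,
 * muzzy extents only when muzzy decay is enabled, and finally the retained
 * ecache via ecache_alloc_grow(), the one path that can create a new mapping
 * and therefore the only one that bumps pac_mapped.
 */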
static edata_t *
pac_alloc_new_guarded(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, size_t size,
size_t alignment, bool zero, bool frequent_reuse) {
assert(alignment <= PAGE);
edata_t *edata;
if (san_bump_enabled() && frequent_reuse) {
edata = san_bump_alloc(tsdn, &pac->sba, pac, ehooks, size,
zero);
} else {
size_t size_with_guards = san_two_side_guarded_sz(size);
/* Alloc a non-guarded extent first.*/
edata = pac_alloc_real(tsdn, pac, ehooks, size_with_guards,
/* alignment */ PAGE, zero, /* guarded */ false);
if (edata != NULL) {
/* Add guards around it. */
assert(edata_size_get(edata) == size_with_guards);
san_guard_pages_two_sided(tsdn, ehooks, edata,
pac->emap, true);
}
}
assert(edata == NULL || (edata_guarded_get(edata) &&
edata_size_get(edata) == size));
return edata;
}
static edata_t *
pac_alloc_impl(tsdn_t *tsdn, pai_t *self, size_t size, size_t alignment,
bool zero, bool guarded, bool frequent_reuse,
bool *deferred_work_generated) {
pac_t *pac = (pac_t *)self;
ehooks_t *ehooks = pac_ehooks_get(pac);
edata_t *edata = NULL;
/*
* The condition is an optimization - not frequently reused guarded
* allocations are never put in the ecache. pac_alloc_real also
* doesn't grow retained for guarded allocations. So pac_alloc_real
* for such allocations would always return NULL.
* */
if (!guarded || frequent_reuse) {
edata = pac_alloc_real(tsdn, pac, ehooks, size, alignment,
zero, guarded);
}
if (edata == NULL && guarded) {
/* No cached guarded extents; creating a new one. */
edata = pac_alloc_new_guarded(tsdn, pac, ehooks, size,
alignment, zero, frequent_reuse);
}
return edata;
}
static bool
pac_expand_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,
size_t new_size, bool zero, bool *deferred_work_generated) {
pac_t *pac = (pac_t *)self;
ehooks_t *ehooks = pac_ehooks_get(pac);
size_t mapped_add = 0;
size_t expand_amount = new_size - old_size;
if (ehooks_merge_will_fail(ehooks)) {
return true;
}
edata_t *trail = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_dirty,
edata, expand_amount, PAGE, zero, /* guarded*/ false);
if (trail == NULL) {
trail = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_muzzy,
edata, expand_amount, PAGE, zero, /* guarded*/ false);
}
if (trail == NULL) {
trail = ecache_alloc_grow(tsdn, pac, ehooks,
&pac->ecache_retained, edata, expand_amount, PAGE, zero,
/* guarded */ false);
mapped_add = expand_amount;
}
if (trail == NULL) {
return true;
}
if (extent_merge_wrapper(tsdn, pac, ehooks, edata, trail)) {
extent_dalloc_wrapper(tsdn, pac, ehooks, trail);
return true;
}
if (config_stats && mapped_add > 0) {
atomic_fetch_add_zu(&pac->stats->pac_mapped, mapped_add,
ATOMIC_RELAXED);
}
return false;
}
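/*
 * Expansion allocates a trailing extent adjacent to the existing one (dirty,
 * then muzzy, then retained/grow) and merges it in; if the merge fails the
 * trail is deallocated again and the expansion reports failure, and
 * pac_mapped is only bumped when the trail came from the grow path.
 */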
static bool
pac_shrink_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,
size_t new_size, bool *deferred_work_generated) {
pac_t *pac = (pac_t *)self;
ehooks_t *ehooks = pac_ehooks_get(pac);
size_t shrink_amount = old_size - new_size;
if (ehooks_split_will_fail(ehooks)) {
return true;
}
edata_t *trail = extent_split_wrapper(tsdn, pac, ehooks, edata,
new_size, shrink_amount, /* holding_core_locks */ false);
if (trail == NULL) {
return true;
}
ecache_dalloc(tsdn, pac, ehooks, &pac->ecache_dirty, trail);
*deferred_work_generated = true;
return false;
}
static void
pac_dalloc_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,
bool *deferred_work_generated) {
pac_t *pac = (pac_t *)self;
ehooks_t *ehooks = pac_ehooks_get(pac);
if (edata_guarded_get(edata)) {
/*
* Because cached guarded extents do exact fit only, large
* guarded extents are restored on dalloc eagerly (otherwise
* they will not be reused efficiently). Slab sizes have a
* limited number of size classes, and tend to cycle faster.
*
* In the case where coalesce is restrained (VirtualFree on
* Windows), guarded extents are also not cached -- otherwise
* during arena destroy / reset, the retained extents would not
* be whole regions (i.e. they are split between regular and
* guarded).
*/
if (!edata_slab_get(edata) || !maps_coalesce) {
assert(edata_size_get(edata) >= SC_LARGE_MINCLASS ||
!maps_coalesce);
san_unguard_pages_two_sided(tsdn, ehooks, edata,
pac->emap);
}
}
ecache_dalloc(tsdn, pac, ehooks, &pac->ecache_dirty, edata);
/* Purging of deallocated pages is deferred */
*deferred_work_generated = true;
}
static inline uint64_t
pac_ns_until_purge(tsdn_t *tsdn, decay_t *decay, size_t npages) {
if (malloc_mutex_trylock(tsdn, &decay->mtx)) {
/* Use minimal interval if decay is contended. */
return BACKGROUND_THREAD_DEFERRED_MIN;
}
uint64_t result = decay_ns_until_purge(decay, npages,
ARENA_DEFERRED_PURGE_NPAGES_THRESHOLD);
malloc_mutex_unlock(tsdn, &decay->mtx);
return result;
}
static uint64_t
pac_time_until_deferred_work(tsdn_t *tsdn, pai_t *self) {
uint64_t time;
pac_t *pac = (pac_t *)self;
time = pac_ns_until_purge(tsdn,
&pac->decay_dirty,
ecache_npages_get(&pac->ecache_dirty));
if (time == BACKGROUND_THREAD_DEFERRED_MIN) {
return time;
}
uint64_t muzzy = pac_ns_until_purge(tsdn,
&pac->decay_muzzy,
ecache_npages_get(&pac->ecache_muzzy));
if (muzzy < time) {
time = muzzy;
}
return time;
}
bool
pac_retain_grow_limit_get_set(tsdn_t *tsdn, pac_t *pac, size_t *old_limit,
size_t *new_limit) {
pszind_t new_ind JEMALLOC_CC_SILENCE_INIT(0);
if (new_limit != NULL) {
size_t limit = *new_limit;
/* Grow no more than the new limit. */
if ((new_ind = sz_psz2ind(limit + 1) - 1) >= SC_NPSIZES) {
return true;
}
}
malloc_mutex_lock(tsdn, &pac->grow_mtx);
if (old_limit != NULL) {
*old_limit = sz_pind2sz(pac->exp_grow.limit);
}
if (new_limit != NULL) {
pac->exp_grow.limit = new_ind;
}
malloc_mutex_unlock(tsdn, &pac->grow_mtx);
return false;
}
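/*
 * When a new limit is supplied, sz_psz2ind(limit + 1) - 1 rounds it down to
 * the largest page size class that does not exceed the requested limit; the
 * call fails if that index falls outside the supported size classes.
 */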
static size_t
pac_stash_decayed(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache,
size_t npages_limit, size_t npages_decay_max,
edata_list_inactive_t *result) {
witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),
WITNESS_RANK_CORE, 0);
ehooks_t *ehooks = pac_ehooks_get(pac);
/* Stash extents according to npages_limit. */
size_t nstashed = 0;
while (nstashed < npages_decay_max) {
edata_t *edata = ecache_evict(tsdn, pac, ehooks, ecache,
npages_limit);
if (edata == NULL) {
break;
}
edata_list_inactive_append(result, edata);
nstashed += edata_size_get(edata) >> LG_PAGE;
}
return nstashed;
}
static size_t
pac_decay_stashed(tsdn_t *tsdn, pac_t *pac, decay_t *decay,
pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay,
edata_list_inactive_t *decay_extents) {
bool err;
size_t nmadvise = 0;
size_t nunmapped = 0;
size_t npurged = 0;
ehooks_t *ehooks = pac_ehooks_get(pac);
bool try_muzzy = !fully_decay
&& pac_decay_ms_get(pac, extent_state_muzzy) != 0;
for (edata_t *edata = edata_list_inactive_first(decay_extents); edata !=
NULL; edata = edata_list_inactive_first(decay_extents)) {
edata_list_inactive_remove(decay_extents, edata);
size_t size = edata_size_get(edata);
size_t npages = size >> LG_PAGE;
nmadvise++;
npurged += npages;
switch (ecache->state) {
case extent_state_active:
not_reached();
case extent_state_dirty:
if (try_muzzy) {
err = extent_purge_lazy_wrapper(tsdn, ehooks,
edata, /* offset */ 0, size);
if (!err) {
ecache_dalloc(tsdn, pac, ehooks,
&pac->ecache_muzzy, edata);
break;
}
}
JEMALLOC_FALLTHROUGH;
case extent_state_muzzy:
extent_dalloc_wrapper(tsdn, pac, ehooks, edata);
nunmapped += npages;
break;
case extent_state_retained:
default:
not_reached();
}
}
if (config_stats) {
LOCKEDINT_MTX_LOCK(tsdn, *pac->stats_mtx);
locked_inc_u64(tsdn, LOCKEDINT_MTX(*pac->stats_mtx),
&decay_stats->npurge, 1);
locked_inc_u64(tsdn, LOCKEDINT_MTX(*pac->stats_mtx),
&decay_stats->nmadvise, nmadvise);
locked_inc_u64(tsdn, LOCKEDINT_MTX(*pac->stats_mtx),
&decay_stats->purged, npurged);
LOCKEDINT_MTX_UNLOCK(tsdn, *pac->stats_mtx);
atomic_fetch_sub_zu(&pac->stats->pac_mapped,
nunmapped << LG_PAGE, ATOMIC_RELAXED);
}
return npurged;
}
/*
* npages_limit: Decay at most npages_decay_max pages without violating the
* invariant: (ecache_npages_get(ecache) >= npages_limit). We need an upper
* bound on number of pages in order to prevent unbounded growth (namely in
* stashed), otherwise unbounded new pages could be added to extents during the
* current decay run, so that the purging thread never finishes.
*/
static void
pac_decay_to_limit(tsdn_t *tsdn, pac_t *pac, decay_t *decay,
pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay,
size_t npages_limit, size_t npages_decay_max) {
witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),
WITNESS_RANK_CORE, 1);
if (decay->purging || npages_decay_max == 0) {
return;
}
decay->purging = true;
malloc_mutex_unlock(tsdn, &decay->mtx);
edata_list_inactive_t decay_extents;
edata_list_inactive_init(&decay_extents);
size_t npurge = pac_stash_decayed(tsdn, pac, ecache, npages_limit,
npages_decay_max, &decay_extents);
if (npurge != 0) {
size_t npurged = pac_decay_stashed(tsdn, pac, decay,
decay_stats, ecache, fully_decay, &decay_extents);
assert(npurged == npurge);
}
malloc_mutex_lock(tsdn, &decay->mtx);
decay->purging = false;
}
void
pac_decay_all(tsdn_t *tsdn, pac_t *pac, decay_t *decay,
pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay) {
malloc_mutex_assert_owner(tsdn, &decay->mtx);
pac_decay_to_limit(tsdn, pac, decay, decay_stats, ecache, fully_decay,
/* npages_limit */ 0, ecache_npages_get(ecache));
}
static void
pac_decay_try_purge(tsdn_t *tsdn, pac_t *pac, decay_t *decay,
pac_decay_stats_t *decay_stats, ecache_t *ecache,
size_t current_npages, size_t npages_limit) {
if (current_npages > npages_limit) {
pac_decay_to_limit(tsdn, pac, decay, decay_stats, ecache,
/* fully_decay */ false, npages_limit,
current_npages - npages_limit);
}
}
bool
pac_maybe_decay_purge(tsdn_t *tsdn, pac_t *pac, decay_t *decay,
pac_decay_stats_t *decay_stats, ecache_t *ecache,
pac_purge_eagerness_t eagerness) {
malloc_mutex_assert_owner(tsdn, &decay->mtx);
/* Purge all or nothing if the option is disabled. */
ssize_t decay_ms = decay_ms_read(decay);
if (decay_ms <= 0) {
if (decay_ms == 0) {
pac_decay_to_limit(tsdn, pac, decay, decay_stats,
ecache, /* fully_decay */ false,
/* npages_limit */ 0, ecache_npages_get(ecache));
}
return false;
}
/*
 * If the deadline has been reached, advance to the current epoch and
 * purge to the new limit if necessary.  Note that dirty pages created
 * during the current epoch are not subject to purging until a future
 * epoch; as a result, purging only happens when the epoch advances or
 * when it is triggered by a background thread (as a scheduled event).
 */
nstime_t time;
nstime_init_update(&time);
size_t npages_current = ecache_npages_get(ecache);
bool epoch_advanced = decay_maybe_advance_epoch(decay, &time,
npages_current);
if (eagerness == PAC_PURGE_ALWAYS
|| (epoch_advanced && eagerness == PAC_PURGE_ON_EPOCH_ADVANCE)) {
size_t npages_limit = decay_npages_limit_get(decay);
pac_decay_try_purge(tsdn, pac, decay, decay_stats, ecache,
npages_current, npages_limit);
}
return epoch_advanced;
}
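/*
 * Illustrative call pattern for the function above (hypothetical caller; the
 * real call sites live in the arena decay path): the decay mutex must already
 * be held, and the return value reports whether the decay epoch advanced.
 *
 *	malloc_mutex_lock(tsdn, &decay->mtx);
 *	bool advanced = pac_maybe_decay_purge(tsdn, pac, decay, decay_stats,
 *	    ecache, PAC_PURGE_ON_EPOCH_ADVANCE);
 *	malloc_mutex_unlock(tsdn, &decay->mtx);
 */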
bool
pac_decay_ms_set(tsdn_t *tsdn, pac_t *pac, extent_state_t state,
ssize_t decay_ms, pac_purge_eagerness_t eagerness) {
decay_t *decay;
pac_decay_stats_t *decay_stats;
ecache_t *ecache;
pac_decay_data_get(pac, state, &decay, &decay_stats, &ecache);
if (!decay_ms_valid(decay_ms)) {
return true;
}
malloc_mutex_lock(tsdn, &decay->mtx);
/*
* Restart decay backlog from scratch, which may cause many dirty pages
* to be immediately purged. It would conceptually be possible to map
* the old backlog onto the new backlog, but there is no justification
* for such complexity since decay_ms changes are intended to be
* infrequent, either between the {-1, 0, >0} states, or a one-time
* arbitrary change during initial arena configuration.
*/
nstime_t cur_time;
nstime_init_update(&cur_time);
decay_reinit(decay, &cur_time, decay_ms);
pac_maybe_decay_purge(tsdn, pac, decay, decay_stats, ecache, eagerness);
malloc_mutex_unlock(tsdn, &decay->mtx);
return false;
}
ssize_t
pac_decay_ms_get(pac_t *pac, extent_state_t state) {
decay_t *decay;
pac_decay_stats_t *decay_stats;
ecache_t *ecache;
pac_decay_data_get(pac, state, &decay, &decay_stats, &ecache);
return decay_ms_read(decay);
}
void
pac_reset(tsdn_t *tsdn, pac_t *pac) {
/*
* No-op for now; purging is still done at the arena-level. It should
* get moved in here, though.
*/
(void)tsdn;
(void)pac;
}
void
pac_destroy(tsdn_t *tsdn, pac_t *pac) {
assert(ecache_npages_get(&pac->ecache_dirty) == 0);
assert(ecache_npages_get(&pac->ecache_muzzy) == 0);
/*
* Iterate over the retained extents and destroy them. This gives the
* extent allocator underlying the extent hooks an opportunity to unmap
* all retained memory without having to keep its own metadata
* structures. In practice, virtual memory for dss-allocated extents is
* leaked here, so best practice is to avoid dss for arenas to be
* destroyed, or provide custom extent hooks that track retained
* dss-based extents for later reuse.
*/
ehooks_t *ehooks = pac_ehooks_get(pac);
edata_t *edata;
while ((edata = ecache_evict(tsdn, pac, ehooks,
&pac->ecache_retained, 0)) != NULL) {
extent_destroy_wrapper(tsdn, pac, ehooks, edata);
}
}
#define JEMALLOC_PAGES_C_
#include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/pages.h" #include "jemalloc/internal/pages.h"
...@@ -14,6 +13,14 @@ ...@@ -14,6 +13,14 @@
#include <vm/vm_param.h> #include <vm/vm_param.h>
#endif #endif
#endif #endif
#ifdef __NetBSD__
#include <sys/bitops.h> /* ilog2 */
#endif
#ifdef JEMALLOC_HAVE_VM_MAKE_TAG
#define PAGES_FD_TAG VM_MAKE_TAG(101U)
#else
#define PAGES_FD_TAG -1
#endif
/******************************************************************************/
/* Data. */
@@ -40,6 +47,57 @@ thp_mode_t init_system_thp_mode;
/* Runtime support for lazy purge. Irrelevant when !pages_can_purge_lazy. */
static bool pages_can_purge_lazy_runtime = true;
#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS
static int madvise_dont_need_zeros_is_faulty = -1;
/**
 * Check that MADV_DONTNEED will actually zero pages on subsequent access.
 *
 * qemu does not support this yet [1], so running a program that uses jemalloc
 * under qemu can trip a hard-to-diagnose assertion such as:
 *
 * <jemalloc>: ../contrib/jemalloc/src/extent.c:1195: Failed assertion: "p[i] == 0"
 *
 * [1]: https://patchwork.kernel.org/patch/10576637/
 */
static int madvise_MADV_DONTNEED_zeroes_pages()
{
int works = -1;
size_t size = PAGE;
void * addr = mmap(NULL, size, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
if (addr == MAP_FAILED) {
malloc_write("<jemalloc>: Cannot allocate memory for "
"MADV_DONTNEED check\n");
if (opt_abort) {
abort();
}
}
memset(addr, 'A', size);
if (madvise(addr, size, MADV_DONTNEED) == 0) {
works = memchr(addr, 'A', size) == NULL;
} else {
/*
 * If madvise() does not support MADV_DONTNEED, we can still
 * call it at purge time and rely on its return code instead.
 */
works = 1;
}
if (munmap(addr, size) != 0) {
malloc_write("<jemalloc>: Cannot deallocate memory for "
"MADV_DONTNEED check\n");
if (opt_abort) {
abort();
}
}
return works;
}
#endif
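/*
 * How the result above is consumed (summarized from the code further below):
 * unless opt_trust_madvise is set, pages_boot() runs this check once at
 * startup; if MADV_DONTNEED turns out not to zero pages (e.g. under qemu),
 * madvise_dont_need_zeros_is_faulty stays nonzero and pages_purge_forced()
 * reports failure, so callers fall back to zeroing the range themselves.
 */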
/******************************************************************************/
/*
 * Function prototypes for static functions that are referenced prior to
@@ -74,9 +132,21 @@ os_pages_map(void *addr, size_t size, size_t alignment, bool *commit) {
 * of existing mappings, and we only want to create new mappings.
 */
{
#ifdef __NetBSD__
/*
 * On NetBSD, PAGE is defined as the maximum page
 * size of all machine architectures supported by
 * the platform, so that the same binaries can be
 * used across all of those architectures.
 */
if (alignment > os_page || PAGE > os_page) {
unsigned int a = ilog2(MAX(alignment, PAGE));
mmap_flags |= MAP_ALIGNED(a);
}
#endif
int prot = *commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
ret = mmap(addr, size, prot, mmap_flags, PAGES_FD_TAG, 0);
}
assert(ret != NULL);
@@ -197,8 +267,8 @@ pages_map(void *addr, size_t size, size_t alignment, bool *commit) {
flags |= MAP_FIXED | MAP_EXCL;
} else {
unsigned alignment_bits = ffs_zu(alignment);
assert(alignment_bits > 0);
flags |= MAP_ALIGNED(alignment_bits);
}
void *ret = mmap(addr, size, prot, flags, -1, 0);
@@ -246,14 +316,10 @@ pages_unmap(void *addr, size_t size) {
}
static bool
os_pages_commit(void *addr, size_t size, bool commit) {
assert(PAGE_ADDR2BASE(addr) == addr);
assert(PAGE_CEILING(size) == size);
#ifdef _WIN32
return (commit ? (addr != VirtualAlloc(addr, size, MEM_COMMIT,
PAGE_READWRITE)) : (!VirtualFree(addr, size, MEM_DECOMMIT)));
@@ -261,7 +327,7 @@ pages_commit_impl(void *addr, size_t size, bool commit) {
{
int prot = commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
void *result = mmap(addr, size, prot, mmap_flags | MAP_FIXED,
PAGES_FD_TAG, 0);
if (result == MAP_FAILED) {
return true;
}
@@ -278,6 +344,15 @@ pages_commit_impl(void *addr, size_t size, bool commit) {
#endif
}
static bool
pages_commit_impl(void *addr, size_t size, bool commit) {
if (os_overcommits) {
return true;
}
return os_pages_commit(addr, size, commit);
}
bool
pages_commit(void *addr, size_t size) {
return pages_commit_impl(addr, size, true);
@@ -288,6 +363,66 @@ pages_decommit(void *addr, size_t size) {
return pages_commit_impl(addr, size, false);
}
void
pages_mark_guards(void *head, void *tail) {
assert(head != NULL || tail != NULL);
assert(head == NULL || tail == NULL ||
(uintptr_t)head < (uintptr_t)tail);
#ifdef JEMALLOC_HAVE_MPROTECT
if (head != NULL) {
mprotect(head, PAGE, PROT_NONE);
}
if (tail != NULL) {
mprotect(tail, PAGE, PROT_NONE);
}
#else
/* Decommit sets to PROT_NONE / MEM_DECOMMIT. */
if (head != NULL) {
os_pages_commit(head, PAGE, false);
}
if (tail != NULL) {
os_pages_commit(tail, PAGE, false);
}
#endif
}
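/*
 * Sketch of the intended use (inferred from the guard-page feature rather
 * than spelled out here): a guarded extent keeps one PROT_NONE page before
 * and/or after its usable range, so a stray access into a neighboring page
 * faults immediately instead of silently corrupting other data;
 * pages_unmark_guards() below restores normal permissions when the guard is
 * retired.
 */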
void
pages_unmark_guards(void *head, void *tail) {
assert(head != NULL || tail != NULL);
assert(head == NULL || tail == NULL ||
(uintptr_t)head < (uintptr_t)tail);
#ifdef JEMALLOC_HAVE_MPROTECT
bool head_and_tail = (head != NULL) && (tail != NULL);
size_t range = head_and_tail ?
(uintptr_t)tail - (uintptr_t)head + PAGE :
SIZE_T_MAX;
/*
 * The amount of work that the kernel does in mprotect depends on the
 * range argument.  SC_LARGE_MINCLASS is an arbitrary threshold chosen
 * to prevent the kernel from doing so much extra work that it would
 * outweigh the savings of performing one less system call.
 */
bool ranged_mprotect = head_and_tail && range <= SC_LARGE_MINCLASS;
if (ranged_mprotect) {
mprotect(head, range, PROT_READ | PROT_WRITE);
} else {
if (head != NULL) {
mprotect(head, PAGE, PROT_READ | PROT_WRITE);
}
if (tail != NULL) {
mprotect(tail, PAGE, PROT_READ | PROT_WRITE);
}
}
#else
if (head != NULL) {
os_pages_commit(head, PAGE, true);
}
if (tail != NULL) {
os_pages_commit(tail, PAGE, true);
}
#endif
}
bool
pages_purge_lazy(void *addr, size_t size) {
assert(ALIGNMENT_ADDR2BASE(addr, os_page) == addr);
@@ -318,6 +453,9 @@ pages_purge_lazy(void *addr, size_t size) {
#elif defined(JEMALLOC_PURGE_MADVISE_DONTNEED) && \
!defined(JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS)
return (madvise(addr, size, MADV_DONTNEED) != 0);
#elif defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED) && \
!defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED_ZEROS)
return (posix_madvise(addr, size, POSIX_MADV_DONTNEED) != 0);
#else
not_reached();
#endif
@@ -334,7 +472,12 @@ pages_purge_forced(void *addr, size_t size) {
#if defined(JEMALLOC_PURGE_MADVISE_DONTNEED) && \
defined(JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS)
return (unlikely(madvise_dont_need_zeros_is_faulty) ||
madvise(addr, size, MADV_DONTNEED) != 0);
#elif defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED) && \
defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED_ZEROS)
return (unlikely(madvise_dont_need_zeros_is_faulty) ||
posix_madvise(addr, size, POSIX_MADV_DONTNEED) != 0);
#elif defined(JEMALLOC_MAPS_COALESCE)
/* Try to overlay a new demand-zeroed mapping. */
return pages_commit(addr, size);
@@ -349,8 +492,13 @@ pages_huge_impl(void *addr, size_t size, bool aligned) {
assert(HUGEPAGE_ADDR2BASE(addr) == addr);
assert(HUGEPAGE_CEILING(size) == size);
}
#if defined(JEMALLOC_HAVE_MADVISE_HUGE)
return (madvise(addr, size, MADV_HUGEPAGE) != 0);
#elif defined(JEMALLOC_HAVE_MEMCNTL)
struct memcntl_mha m = {0};
m.mha_cmd = MHA_MAPSIZE_VA;
m.mha_pagesize = HUGEPAGE;
return (memcntl(addr, size, MC_HAT_ADVISE, (caddr_t)&m, 0, 0) == 0);
#else
return true;
#endif
@@ -394,8 +542,10 @@ bool
pages_dontdump(void *addr, size_t size) {
assert(PAGE_ADDR2BASE(addr) == addr);
assert(PAGE_CEILING(size) == size);
#if defined(JEMALLOC_MADVISE_DONTDUMP)
return madvise(addr, size, MADV_DONTDUMP) != 0;
#elif defined(JEMALLOC_MADVISE_NOCORE)
return madvise(addr, size, MADV_NOCORE) != 0;
#else
return false;
#endif
@@ -405,8 +555,10 @@ bool
pages_dodump(void *addr, size_t size) {
assert(PAGE_ADDR2BASE(addr) == addr);
assert(PAGE_CEILING(size) == size);
#if defined(JEMALLOC_MADVISE_DONTDUMP)
return madvise(addr, size, MADV_DODUMP) != 0;
#elif defined(JEMALLOC_MADVISE_NOCORE)
return madvise(addr, size, MADV_CORE) != 0;
#else
return false;
#endif
@@ -547,14 +699,14 @@ pages_set_thp_state (void *ptr, size_t size) {
static void
init_thp_state(void) {
if (!have_madvise_huge && !have_memcntl) {
if (metadata_thp_enabled() && opt_abort) {
malloc_write("<jemalloc>: no MADV_HUGEPAGE support\n");
abort();
}
goto label_error;
}
#if defined(JEMALLOC_HAVE_MADVISE_HUGE)
static const char sys_state_madvise[] = "always [madvise] never\n";
static const char sys_state_always[] = "[always] madvise never\n";
static const char sys_state_never[] = "always madvise [never]\n";
@@ -563,6 +715,9 @@ init_thp_state(void) {
#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_open)
int fd = (int)syscall(SYS_open,
"/sys/kernel/mm/transparent_hugepage/enabled", O_RDONLY);
#elif defined(JEMALLOC_USE_SYSCALL) && defined(SYS_openat)
int fd = (int)syscall(SYS_openat,
AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/enabled", O_RDONLY);
#else
int fd = open("/sys/kernel/mm/transparent_hugepage/enabled", O_RDONLY);
#endif
@@ -591,6 +746,10 @@ init_thp_state(void) {
goto label_error;
}
return;
#elif defined(JEMALLOC_HAVE_MEMCNTL)
init_system_thp_mode = thp_mode_default;
return;
#endif
label_error:
opt_thp = init_system_thp_mode = thp_mode_not_supported;
}
@@ -606,6 +765,20 @@ pages_boot(void) {
return true;
}
#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS
if (!opt_trust_madvise) {
madvise_dont_need_zeros_is_faulty = !madvise_MADV_DONTNEED_zeroes_pages();
if (madvise_dont_need_zeros_is_faulty) {
malloc_write("<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)\n");
malloc_write("<jemalloc>: (This is the expected behaviour if you are running under QEMU)\n");
}
} else {
/*
 * opt_trust_madvise is enabled, so skip the
 * runtime check and trust MADV_DONTNEED.
 */
madvise_dont_need_zeros_is_faulty = 0;
}
#endif
#ifndef _WIN32
mmap_flags = MAP_PRIVATE | MAP_ANON;
#endif
@@ -619,6 +792,8 @@ pages_boot(void) {
mmap_flags |= MAP_NORESERVE;
}
# endif
#elif defined(__NetBSD__)
os_overcommits = true;
#else
os_overcommits = false;
#endif
...
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
size_t
pai_alloc_batch_default(tsdn_t *tsdn, pai_t *self, size_t size, size_t nallocs,
edata_list_active_t *results, bool *deferred_work_generated) {
for (size_t i = 0; i < nallocs; i++) {
bool deferred_by_alloc = false;
edata_t *edata = pai_alloc(tsdn, self, size, PAGE,
/* zero */ false, /* guarded */ false,
/* frequent_reuse */ false, &deferred_by_alloc);
*deferred_work_generated |= deferred_by_alloc;
if (edata == NULL) {
return i;
}
edata_list_active_append(results, edata);
}
return nallocs;
}
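/*
 * Note on the contract implied by the loop above: the return value is the
 * number of extents actually allocated; on failure the function stops early
 * and returns i < nallocs, with the extents allocated so far already appended
 * to the results list for the caller to use or release.
 */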
void
pai_dalloc_batch_default(tsdn_t *tsdn, pai_t *self,
edata_list_active_t *list, bool *deferred_work_generated) {
edata_t *edata;
while ((edata = edata_list_active_first(list)) != NULL) {
bool deferred_by_dalloc = false;
edata_list_active_remove(list, edata);
pai_dalloc(tsdn, self, edata, &deferred_by_dalloc);
*deferred_work_generated |= deferred_by_dalloc;
}
}
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/peak_event.h"
#include "jemalloc/internal/activity_callback.h"
#include "jemalloc/internal/peak.h"
/*
* Update every 64K by default. We're not exposing this as a configuration
* option for now; we don't want to bind ourselves too tightly to any particular
* performance requirements for small values, or guarantee that we'll even be
* able to provide fine-grained accuracy.
*/
#define PEAK_EVENT_WAIT (64 * 1024)
/* Update the peak with current tsd state. */
void
peak_event_update(tsd_t *tsd) {
uint64_t alloc = tsd_thread_allocated_get(tsd);
uint64_t dalloc = tsd_thread_deallocated_get(tsd);
peak_t *peak = tsd_peakp_get(tsd);
peak_update(peak, alloc, dalloc);
}
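/*
 * Illustrative numbers (not from the code): peak_update() tracks the
 * high-water mark of (alloc - dalloc) for the thread since the last
 * peak_event_zero().  After allocating 3 MB and freeing 1 MB the candidate
 * peak is 2 MB; if the thread later sits at 5 MB allocated / 4 MB freed
 * (1 MB live), the recorded peak stays at 2 MB, which is what
 * peak_event_max() returns.
 */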
static void
peak_event_activity_callback(tsd_t *tsd) {
activity_callback_thunk_t *thunk = tsd_activity_callback_thunkp_get(
tsd);
uint64_t alloc = tsd_thread_allocated_get(tsd);
uint64_t dalloc = tsd_thread_deallocated_get(tsd);
if (thunk->callback != NULL) {
thunk->callback(thunk->uctx, alloc, dalloc);
}
}
/* Set current state to zero. */
void
peak_event_zero(tsd_t *tsd) {
uint64_t alloc = tsd_thread_allocated_get(tsd);
uint64_t dalloc = tsd_thread_deallocated_get(tsd);
peak_t *peak = tsd_peakp_get(tsd);
peak_set_zero(peak, alloc, dalloc);
}
uint64_t
peak_event_max(tsd_t *tsd) {
peak_t *peak = tsd_peakp_get(tsd);
return peak_max(peak);
}
uint64_t
peak_alloc_new_event_wait(tsd_t *tsd) {
return PEAK_EVENT_WAIT;
}
uint64_t
peak_alloc_postponed_event_wait(tsd_t *tsd) {
return TE_MIN_START_WAIT;
}
void
peak_alloc_event_handler(tsd_t *tsd, uint64_t elapsed) {
peak_event_update(tsd);
peak_event_activity_callback(tsd);
}
uint64_t
peak_dalloc_new_event_wait(tsd_t *tsd) {
return PEAK_EVENT_WAIT;
}
uint64_t
peak_dalloc_postponed_event_wait(tsd_t *tsd) {
return TE_MIN_START_WAIT;
}
void
peak_dalloc_event_handler(tsd_t *tsd, uint64_t elapsed) {
peak_event_update(tsd);
peak_event_activity_callback(tsd);
}
#define JEMALLOC_PROF_C_
#include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/ctl.h"
#include "jemalloc/internal/assert.h" #include "jemalloc/internal/assert.h"
#include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/hash.h"
#include "jemalloc/internal/malloc_io.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/emitter.h" #include "jemalloc/internal/counter.h"
#include "jemalloc/internal/prof_data.h"
#include "jemalloc/internal/prof_log.h"
#include "jemalloc/internal/prof_recent.h"
#include "jemalloc/internal/prof_stats.h"
#include "jemalloc/internal/prof_sys.h"
#include "jemalloc/internal/prof_hook.h"
#include "jemalloc/internal/thread_event.h"
/******************************************************************************/
#ifdef JEMALLOC_PROF_LIBUNWIND
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#endif
#ifdef JEMALLOC_PROF_LIBGCC
/*
 * We have a circular dependency -- jemalloc_internal.h tells us if we should
 * use libgcc's unwinding functionality, but after we've included that, we've
 * already hooked _Unwind_Backtrace.  We'll temporarily disable hooking.
 */
#undef _Unwind_Backtrace
#include <unwind.h>
#define _Unwind_Backtrace JEMALLOC_HOOK(_Unwind_Backtrace, test_hooks_libc_hook)
#endif
/*
 * This file implements the profiling "APIs" needed by other parts of jemalloc,
 * and also manages the relevant "operational" data, mainly options and mutexes;
 * the core profiling data structures are encapsulated in prof_data.c.
 */
/******************************************************************************/
/* Data. */
bool opt_prof = false;
@@ -38,20 +31,20 @@ ssize_t opt_lg_prof_interval = LG_PROF_INTERVAL_DEFAULT;
bool opt_prof_gdump = false;
bool opt_prof_final = false;
bool opt_prof_leak = false;
bool opt_prof_leak_error = false;
bool opt_prof_accum = false;
char opt_prof_prefix[PROF_DUMP_FILENAME_LEN];
bool opt_prof_sys_thread_name = false;
bool opt_prof_unbias = true;
/* Accessed via prof_sample_event_handler(). */
static counter_accum_t prof_idump_accumulated;
/*
 * Initialized as opt_prof_active, and accessed via
 * prof_active_[gs]et{_unlocked,}().
 */
bool prof_active_state;
static malloc_mutex_t prof_active_mtx;
/*
@@ -72,1080 +65,155 @@ uint64_t prof_interval = 0;
size_t lg_prof_sample;
typedef enum prof_logging_state_e prof_logging_state_t;
enum prof_logging_state_e {
prof_logging_state_stopped,
prof_logging_state_started,
prof_logging_state_dumping
};
/*
* - stopped: log_start never called, or previous log_stop has completed.
* - started: log_start called, log_stop not called yet. Allocations are logged.
* - dumping: log_stop called but not finished; samples are not logged anymore.
*/
prof_logging_state_t prof_logging_state = prof_logging_state_stopped;
#ifdef JEMALLOC_JET
static bool prof_log_dummy = false;
#endif
/* Incremented for every log file that is output. */
static uint64_t log_seq = 0;
static char log_filename[
/* Minimize memory bloat for non-prof builds. */
#ifdef JEMALLOC_PROF
PATH_MAX +
#endif
1];
/* Timestamp for most recent call to log_start(). */
static nstime_t log_start_timestamp = NSTIME_ZERO_INITIALIZER;
/* Increment these when adding to the log_bt and log_thr linked lists. */
static size_t log_bt_index = 0;
static size_t log_thr_index = 0;
/* Linked list node definitions. These are only used in prof.c. */
typedef struct prof_bt_node_s prof_bt_node_t;
struct prof_bt_node_s {
prof_bt_node_t *next;
size_t index;
prof_bt_t bt;
/* Variable size backtrace vector pointed to by bt. */
void *vec[1];
};
typedef struct prof_thr_node_s prof_thr_node_t;
struct prof_thr_node_s {
prof_thr_node_t *next;
size_t index;
uint64_t thr_uid;
/* Variable size based on thr_name_sz. */
char name[1];
};
typedef struct prof_alloc_node_s prof_alloc_node_t;
/* This is output when logging sampled allocations. */
struct prof_alloc_node_s {
prof_alloc_node_t *next;
/* Indices into an array of thread data. */
size_t alloc_thr_ind;
size_t free_thr_ind;
/* Indices into an array of backtraces. */
size_t alloc_bt_ind;
size_t free_bt_ind;
uint64_t alloc_time_ns;
uint64_t free_time_ns;
size_t usize;
};
/*
* Created on the first call to prof_log_start and deleted on prof_log_stop.
* These are the backtraces and threads that have already been logged by an
* allocation.
*/
static bool log_tables_initialized = false;
static ckh_t log_bt_node_set;
static ckh_t log_thr_node_set;
/* Store linked lists for logged data. */
static prof_bt_node_t *log_bt_first = NULL;
static prof_bt_node_t *log_bt_last = NULL;
static prof_thr_node_t *log_thr_first = NULL;
static prof_thr_node_t *log_thr_last = NULL;
static prof_alloc_node_t *log_alloc_first = NULL;
static prof_alloc_node_t *log_alloc_last = NULL;
/* Protects the prof_logging_state and any log_{...} variable. */
static malloc_mutex_t log_mtx;
/*
* Table of mutexes that are shared among gctx's. These are leaf locks, so
* there is no problem with using them for more than one gctx at the same time.
* The primary motivation for this sharing though is that gctx's are ephemeral,
* and destroying mutexes causes complications for systems that allocate when
* creating/destroying mutexes.
*/
static malloc_mutex_t *gctx_locks;
static atomic_u_t cum_gctxs; /* Atomic counter. */
/*
* Table of mutexes that are shared among tdata's. No operations require
* holding multiple tdata locks, so there is no problem with using them for more
* than one tdata at the same time, even though a gctx lock may be acquired
* while holding a tdata lock.
*/
static malloc_mutex_t *tdata_locks;
/*
* Global hash of (prof_bt_t *)-->(prof_gctx_t *). This is the master data
* structure that knows about all backtraces currently captured.
*/
static ckh_t bt2gctx;
/* Non static to enable profiling. */
malloc_mutex_t bt2gctx_mtx;
/*
* Tree of all extant prof_tdata_t structures, regardless of state,
* {attached,detached,expired}.
*/
static prof_tdata_tree_t tdatas;
static malloc_mutex_t tdatas_mtx;
static uint64_t next_thr_uid; static uint64_t next_thr_uid;
static malloc_mutex_t next_thr_uid_mtx; static malloc_mutex_t next_thr_uid_mtx;
static malloc_mutex_t prof_dump_seq_mtx;
static uint64_t prof_dump_seq;
static uint64_t prof_dump_iseq;
static uint64_t prof_dump_mseq;
static uint64_t prof_dump_useq;
/*
* This buffer is rather large for stack allocation, so use a single buffer for
* all profile dumps.
*/
static malloc_mutex_t prof_dump_mtx;
static char prof_dump_buf[
/* Minimize memory bloat for non-prof builds. */
#ifdef JEMALLOC_PROF
PROF_DUMP_BUFSIZE
#else
1
#endif
];
static size_t prof_dump_buf_end;
static int prof_dump_fd;
/* Do not dump any profiles until bootstrapping is complete. */
bool prof_booted = false;
/******************************************************************************/
/*
* Function prototypes for static functions that are referenced prior to
* definition.
*/
static bool prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx);
static void prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx);
static bool prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata,
bool even_if_attached);
static void prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata,
bool even_if_attached);
static char *prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name);
/* Hashtable functions for log_bt_node_set and log_thr_node_set. */
static void prof_thr_node_hash(const void *key, size_t r_hash[2]);
static bool prof_thr_node_keycomp(const void *k1, const void *k2);
static void prof_bt_node_hash(const void *key, size_t r_hash[2]);
static bool prof_bt_node_keycomp(const void *k1, const void *k2);
/******************************************************************************/
/* Red-black trees. */
static int
prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b) {
uint64_t a_thr_uid = a->thr_uid;
uint64_t b_thr_uid = b->thr_uid;
int ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid);
if (ret == 0) {
uint64_t a_thr_discrim = a->thr_discrim;
uint64_t b_thr_discrim = b->thr_discrim;
ret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim <
b_thr_discrim);
if (ret == 0) {
uint64_t a_tctx_uid = a->tctx_uid;
uint64_t b_tctx_uid = b->tctx_uid;
ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid <
b_tctx_uid);
}
}
return ret;
}
rb_gen(static UNUSED, tctx_tree_, prof_tctx_tree_t, prof_tctx_t,
tctx_link, prof_tctx_comp)
static int
prof_gctx_comp(const prof_gctx_t *a, const prof_gctx_t *b) {
unsigned a_len = a->bt.len;
unsigned b_len = b->bt.len;
unsigned comp_len = (a_len < b_len) ? a_len : b_len;
int ret = memcmp(a->bt.vec, b->bt.vec, comp_len * sizeof(void *));
if (ret == 0) {
ret = (a_len > b_len) - (a_len < b_len);
}
return ret;
}
rb_gen(static UNUSED, gctx_tree_, prof_gctx_tree_t, prof_gctx_t, dump_link,
prof_gctx_comp)
static int
prof_tdata_comp(const prof_tdata_t *a, const prof_tdata_t *b) {
int ret;
uint64_t a_uid = a->thr_uid;
uint64_t b_uid = b->thr_uid;
ret = ((a_uid > b_uid) - (a_uid < b_uid));
if (ret == 0) {
uint64_t a_discrim = a->thr_discrim;
uint64_t b_discrim = b->thr_discrim;
ret = ((a_discrim > b_discrim) - (a_discrim < b_discrim));
}
return ret;
}
rb_gen(static UNUSED, tdata_tree_, prof_tdata_tree_t, prof_tdata_t, tdata_link,
prof_tdata_comp)
/* Logically a prof_backtrace_hook_t. */
atomic_p_t prof_backtrace_hook;
/* Logically a prof_dump_hook_t. */
atomic_p_t prof_dump_hook;
/******************************************************************************/
void
prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx) {
cassert(config_prof);
if (tsd_reentrancy_level_get(tsd) > 0) {
assert((uintptr_t)tctx == (uintptr_t)1U);
return;
}
if ((uintptr_t)tctx > (uintptr_t)1U) {
malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);
tctx->prepared = false;
prof_tctx_try_destroy(tsd, tctx);
}
}
void
prof_malloc_sample_object(tsd_t *tsd, const void *ptr, size_t size,
size_t usize, prof_tctx_t *tctx) {
cassert(config_prof);
if (opt_prof_sys_thread_name) {
prof_sys_thread_name_fetch(tsd);
}
edata_t *edata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global,
ptr);
prof_info_set(tsd, edata, tctx, size);
szind_t szind = sz_size2index(usize);
malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);
/*
 * We need to do these map lookups while holding the lock, to avoid the
 * possibility of races with prof_reset calls, which update the map and
 * then acquire the lock.  This actually still leaves a data race on the
 * contents of the unbias map, but we have not yet gone through and
 * atomic-ified the prof module, and compilers are not yet causing us
 * issues.  The key thing is to make sure that, if we read garbage data,
 * the prof_reset call is about to mark our tctx as expired before any
 * dumping of our corrupted output is attempted.
 */
size_t shifted_unbiased_cnt = prof_shifted_unbiased_cnt[szind];
size_t unbiased_bytes = prof_unbiased_sz[szind];
tctx->cnts.curobjs++;
tctx->cnts.curobjs_shifted_unbiased += shifted_unbiased_cnt;
tctx->cnts.curbytes += usize;
tctx->cnts.curbytes_unbiased += unbiased_bytes;
if (opt_prof_accum) {
tctx->cnts.accumobjs++;
tctx->cnts.accumobjs_shifted_unbiased += shifted_unbiased_cnt;
tctx->cnts.accumbytes += usize;
tctx->cnts.accumbytes_unbiased += unbiased_bytes;
}
bool record_recent = prof_recent_alloc_prepare(tsd, tctx);
tctx->prepared = false;
malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock);
if (record_recent) {
assert(tctx == edata_prof_tctx_get(edata));
prof_recent_alloc(tsd, edata, size, usize);
}
if (opt_prof_stats) {
prof_stats_inc(tsd, szind, size);
}
}
static size_t
prof_log_bt_index(tsd_t *tsd, prof_bt_t *bt) {
assert(prof_logging_state == prof_logging_state_started);
malloc_mutex_assert_owner(tsd_tsdn(tsd), &log_mtx);
prof_bt_node_t dummy_node;
dummy_node.bt = *bt;
prof_bt_node_t *node;
/* See if this backtrace is already cached in the table. */
if (ckh_search(&log_bt_node_set, (void *)(&dummy_node),
(void **)(&node), NULL)) {
size_t sz = offsetof(prof_bt_node_t, vec) +
(bt->len * sizeof(void *));
prof_bt_node_t *new_node = (prof_bt_node_t *)
iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL,
true, arena_get(TSDN_NULL, 0, true), true);
if (log_bt_first == NULL) {
log_bt_first = new_node;
log_bt_last = new_node;
} else {
log_bt_last->next = new_node;
log_bt_last = new_node;
}
new_node->next = NULL;
new_node->index = log_bt_index;
/*
* Copy the backtrace: bt is inside a tdata or gctx, which
* might die before prof_log_stop is called.
*/
new_node->bt.len = bt->len;
memcpy(new_node->vec, bt->vec, bt->len * sizeof(void *));
new_node->bt.vec = new_node->vec;
log_bt_index++;
ckh_insert(tsd, &log_bt_node_set, (void *)new_node, NULL);
return new_node->index;
} else {
return node->index;
}
}
static size_t
prof_log_thr_index(tsd_t *tsd, uint64_t thr_uid, const char *name) {
assert(prof_logging_state == prof_logging_state_started);
malloc_mutex_assert_owner(tsd_tsdn(tsd), &log_mtx);
prof_thr_node_t dummy_node;
dummy_node.thr_uid = thr_uid;
prof_thr_node_t *node;
/* See if this thread is already cached in the table. */
if (ckh_search(&log_thr_node_set, (void *)(&dummy_node),
(void **)(&node), NULL)) {
size_t sz = offsetof(prof_thr_node_t, name) + strlen(name) + 1;
prof_thr_node_t *new_node = (prof_thr_node_t *)
iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL,
true, arena_get(TSDN_NULL, 0, true), true);
if (log_thr_first == NULL) {
log_thr_first = new_node;
log_thr_last = new_node;
} else {
log_thr_last->next = new_node;
log_thr_last = new_node;
}
new_node->next = NULL;
new_node->index = log_thr_index;
new_node->thr_uid = thr_uid;
strcpy(new_node->name, name);
log_thr_index++;
ckh_insert(tsd, &log_thr_node_set, (void *)new_node, NULL);
return new_node->index;
} else {
return node->index;
} }
} }
static void
prof_try_log(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);
prof_tdata_t *cons_tdata = prof_tdata_get(tsd, false);
if (cons_tdata == NULL) {
/*
* We decide not to log these allocations. cons_tdata will be
* NULL only when the current thread is in a weird state (e.g.
* it's being destroyed).
*/
return;
}
malloc_mutex_lock(tsd_tsdn(tsd), &log_mtx);
if (prof_logging_state != prof_logging_state_started) {
goto label_done;
}
if (!log_tables_initialized) {
bool err1 = ckh_new(tsd, &log_bt_node_set, PROF_CKH_MINITEMS,
prof_bt_node_hash, prof_bt_node_keycomp);
bool err2 = ckh_new(tsd, &log_thr_node_set, PROF_CKH_MINITEMS,
prof_thr_node_hash, prof_thr_node_keycomp);
if (err1 || err2) {
goto label_done;
}
log_tables_initialized = true;
}
nstime_t alloc_time = prof_alloc_time_get(tsd_tsdn(tsd), ptr,
(alloc_ctx_t *)NULL);
nstime_t free_time = NSTIME_ZERO_INITIALIZER;
nstime_update(&free_time);
size_t sz = sizeof(prof_alloc_node_t);
prof_alloc_node_t *new_node = (prof_alloc_node_t *)
iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL, true,
arena_get(TSDN_NULL, 0, true), true);
const char *prod_thr_name = (tctx->tdata->thread_name == NULL)?
"" : tctx->tdata->thread_name;
const char *cons_thr_name = prof_thread_name_get(tsd);
prof_bt_t bt;
/* Initialize the backtrace, using the buffer in tdata to store it. */
bt_init(&bt, cons_tdata->vec);
prof_backtrace(&bt);
prof_bt_t *cons_bt = &bt;
/* We haven't destroyed tctx yet, so gctx should be good to read. */
prof_bt_t *prod_bt = &tctx->gctx->bt;
new_node->next = NULL;
new_node->alloc_thr_ind = prof_log_thr_index(tsd, tctx->tdata->thr_uid,
prod_thr_name);
new_node->free_thr_ind = prof_log_thr_index(tsd, cons_tdata->thr_uid,
cons_thr_name);
new_node->alloc_bt_ind = prof_log_bt_index(tsd, prod_bt);
new_node->free_bt_ind = prof_log_bt_index(tsd, cons_bt);
new_node->alloc_time_ns = nstime_ns(&alloc_time);
new_node->free_time_ns = nstime_ns(&free_time);
new_node->usize = usize;
if (log_alloc_first == NULL) {
log_alloc_first = new_node;
log_alloc_last = new_node;
} else {
log_alloc_last->next = new_node;
log_alloc_last = new_node;
}
label_done:
malloc_mutex_unlock(tsd_tsdn(tsd), &log_mtx);
}
void
prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_info_t *prof_info) {
cassert(config_prof);
assert(prof_info != NULL);
prof_tctx_t *tctx = prof_info->alloc_tctx;
assert((uintptr_t)tctx > (uintptr_t)1U);
szind_t szind = sz_size2index(usize);
malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);
assert(tctx->cnts.curobjs > 0);
assert(tctx->cnts.curbytes >= usize);
/*
 * It's not correct to do equivalent asserts for unbiased bytes, because
 * of the potential for races with prof.reset calls.  The map contents
 * should really be atomic, but we have not atomic-ified the prof module
 * yet.
 */
tctx->cnts.curobjs--;
tctx->cnts.curobjs_shifted_unbiased -= prof_shifted_unbiased_cnt[szind];
tctx->cnts.curbytes -= usize;
tctx->cnts.curbytes_unbiased -= prof_unbiased_sz[szind];
prof_try_log(tsd, usize, prof_info);
prof_tctx_try_destroy(tsd, tctx);
if (opt_prof_stats) {
prof_stats_dec(tsd, szind, prof_info->alloc_size);
}
}
void
bt_init(prof_bt_t *bt, void **vec) {
cassert(config_prof);
bt->vec = vec;
bt->len = 0;
}
static void
prof_enter(tsd_t *tsd, prof_tdata_t *tdata) {
cassert(config_prof);
assert(tdata == prof_tdata_get(tsd, false));
if (tdata != NULL) {
assert(!tdata->enq);
tdata->enq = true;
}
malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);
}
static void
prof_leave(tsd_t *tsd, prof_tdata_t *tdata) {
cassert(config_prof);
assert(tdata == prof_tdata_get(tsd, false));
malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);
if (tdata != NULL) {
bool idump, gdump;
assert(tdata->enq);
tdata->enq = false;
idump = tdata->enq_idump;
tdata->enq_idump = false;
gdump = tdata->enq_gdump;
tdata->enq_gdump = false;
if (idump) {
prof_idump(tsd_tsdn(tsd));
}
if (gdump) {
prof_gdump(tsd_tsdn(tsd));
}
}
}
#ifdef JEMALLOC_PROF_LIBUNWIND
void
prof_backtrace(prof_bt_t *bt) {
int nframes;
cassert(config_prof);
assert(bt->len == 0);
assert(bt->vec != NULL);
nframes = unw_backtrace(bt->vec, PROF_BT_MAX);
if (nframes <= 0) {
return;
}
bt->len = nframes;
}
#elif (defined(JEMALLOC_PROF_LIBGCC))
static _Unwind_Reason_Code
prof_unwind_init_callback(struct _Unwind_Context *context, void *arg) {
cassert(config_prof);
return _URC_NO_REASON;
}
static _Unwind_Reason_Code
prof_unwind_callback(struct _Unwind_Context *context, void *arg) {
prof_unwind_data_t *data = (prof_unwind_data_t *)arg;
void *ip;
cassert(config_prof);
ip = (void *)_Unwind_GetIP(context);
if (ip == NULL) {
return _URC_END_OF_STACK;
}
data->bt->vec[data->bt->len] = ip;
data->bt->len++;
if (data->bt->len == data->max) {
return _URC_END_OF_STACK;
}
return _URC_NO_REASON;
}
void
prof_backtrace(prof_bt_t *bt) {
prof_unwind_data_t data = {bt, PROF_BT_MAX};
cassert(config_prof);
_Unwind_Backtrace(prof_unwind_callback, &data);
}
#elif (defined(JEMALLOC_PROF_GCC))
void
prof_backtrace(prof_bt_t *bt) {
#define BT_FRAME(i) \
if ((i) < PROF_BT_MAX) { \
void *p; \
if (__builtin_frame_address(i) == 0) { \
return; \
} \
p = __builtin_return_address(i); \
if (p == NULL) { \
return; \
} \
bt->vec[(i)] = p; \
bt->len = (i) + 1; \
} else { \
return; \
} }
cassert(config_prof);
BT_FRAME(0)
BT_FRAME(1)
BT_FRAME(2)
BT_FRAME(3)
BT_FRAME(4)
BT_FRAME(5)
BT_FRAME(6)
BT_FRAME(7)
BT_FRAME(8)
BT_FRAME(9)
BT_FRAME(10)
BT_FRAME(11)
BT_FRAME(12)
BT_FRAME(13)
BT_FRAME(14)
BT_FRAME(15)
BT_FRAME(16)
BT_FRAME(17)
BT_FRAME(18)
BT_FRAME(19)
BT_FRAME(20)
BT_FRAME(21)
BT_FRAME(22)
BT_FRAME(23)
BT_FRAME(24)
BT_FRAME(25)
BT_FRAME(26)
BT_FRAME(27)
BT_FRAME(28)
BT_FRAME(29)
BT_FRAME(30)
BT_FRAME(31)
BT_FRAME(32)
BT_FRAME(33)
BT_FRAME(34)
BT_FRAME(35)
BT_FRAME(36)
BT_FRAME(37)
BT_FRAME(38)
BT_FRAME(39)
BT_FRAME(40)
BT_FRAME(41)
BT_FRAME(42)
BT_FRAME(43)
BT_FRAME(44)
BT_FRAME(45)
BT_FRAME(46)
BT_FRAME(47)
BT_FRAME(48)
BT_FRAME(49)
BT_FRAME(50)
BT_FRAME(51)
BT_FRAME(52)
BT_FRAME(53)
BT_FRAME(54)
BT_FRAME(55)
BT_FRAME(56)
BT_FRAME(57)
BT_FRAME(58)
BT_FRAME(59)
BT_FRAME(60)
BT_FRAME(61)
BT_FRAME(62)
BT_FRAME(63)
BT_FRAME(64)
BT_FRAME(65)
BT_FRAME(66)
BT_FRAME(67)
BT_FRAME(68)
BT_FRAME(69)
BT_FRAME(70)
BT_FRAME(71)
BT_FRAME(72)
BT_FRAME(73)
BT_FRAME(74)
BT_FRAME(75)
BT_FRAME(76)
BT_FRAME(77)
BT_FRAME(78)
BT_FRAME(79)
BT_FRAME(80)
BT_FRAME(81)
BT_FRAME(82)
BT_FRAME(83)
BT_FRAME(84)
BT_FRAME(85)
BT_FRAME(86)
BT_FRAME(87)
BT_FRAME(88)
BT_FRAME(89)
BT_FRAME(90)
BT_FRAME(91)
BT_FRAME(92)
BT_FRAME(93)
BT_FRAME(94)
BT_FRAME(95)
BT_FRAME(96)
BT_FRAME(97)
BT_FRAME(98)
BT_FRAME(99)
BT_FRAME(100)
BT_FRAME(101)
BT_FRAME(102)
BT_FRAME(103)
BT_FRAME(104)
BT_FRAME(105)
BT_FRAME(106)
BT_FRAME(107)
BT_FRAME(108)
BT_FRAME(109)
BT_FRAME(110)
BT_FRAME(111)
BT_FRAME(112)
BT_FRAME(113)
BT_FRAME(114)
BT_FRAME(115)
BT_FRAME(116)
BT_FRAME(117)
BT_FRAME(118)
BT_FRAME(119)
BT_FRAME(120)
BT_FRAME(121)
BT_FRAME(122)
BT_FRAME(123)
BT_FRAME(124)
BT_FRAME(125)
BT_FRAME(126)
BT_FRAME(127)
#undef BT_FRAME
}
#else
void
prof_backtrace(prof_bt_t *bt) {
cassert(config_prof);
not_reached();
}
#endif
static malloc_mutex_t *
prof_gctx_mutex_choose(void) {
unsigned ngctxs = atomic_fetch_add_u(&cum_gctxs, 1, ATOMIC_RELAXED);
return &gctx_locks[(ngctxs - 1) % PROF_NCTX_LOCKS];
}
static malloc_mutex_t *
prof_tdata_mutex_choose(uint64_t thr_uid) {
return &tdata_locks[thr_uid % PROF_NTDATA_LOCKS];
} }
static prof_gctx_t * prof_tctx_t *
prof_gctx_create(tsdn_t *tsdn, prof_bt_t *bt) { prof_tctx_create(tsd_t *tsd) {
/* if (!tsd_nominal(tsd) || tsd_reentrancy_level_get(tsd) > 0) {
* Create a single allocation that has space for vec of length bt->len.
*/
size_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *));
prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsdn, size,
sz_size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true),
true);
if (gctx == NULL) {
return NULL; return NULL;
} }
gctx->lock = prof_gctx_mutex_choose();
/*
* Set nlimbo to 1, in order to avoid a race condition with
* prof_tctx_destroy()/prof_gctx_try_destroy().
*/
gctx->nlimbo = 1;
tctx_tree_new(&gctx->tctxs);
/* Duplicate bt. */
memcpy(gctx->vec, bt->vec, bt->len * sizeof(void *));
gctx->bt.vec = gctx->vec;
gctx->bt.len = bt->len;
return gctx;
}
static void
prof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self, prof_gctx_t *gctx,
prof_tdata_t *tdata) {
cassert(config_prof);
/* prof_tdata_t *tdata = prof_tdata_get(tsd, true);
* Check that gctx is still unused by any thread cache before destroying if (tdata == NULL) {
* it. prof_lookup() increments gctx->nlimbo in order to avoid a race return NULL;
* condition with this function, as does prof_tctx_destroy() in order to
* avoid a race between the main body of prof_tctx_destroy() and entry
* into this function.
*/
prof_enter(tsd, tdata_self);
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
assert(gctx->nlimbo != 0);
if (tctx_tree_empty(&gctx->tctxs) && gctx->nlimbo == 1) {
/* Remove gctx from bt2gctx. */
if (ckh_remove(tsd, &bt2gctx, &gctx->bt, NULL, NULL)) {
not_reached();
}
prof_leave(tsd, tdata_self);
/* Destroy gctx. */
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
idalloctm(tsd_tsdn(tsd), gctx, NULL, NULL, true, true);
} else {
/*
* Compensate for increment in prof_tctx_destroy() or
* prof_lookup().
*/
gctx->nlimbo--;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
prof_leave(tsd, tdata_self);
} }
}
static bool prof_bt_t bt;
prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx) { bt_init(&bt, tdata->vec);
malloc_mutex_assert_owner(tsdn, tctx->tdata->lock); prof_backtrace(tsd, &bt);
return prof_lookup(tsd, &bt);
if (opt_prof_accum) {
return false;
}
if (tctx->cnts.curobjs != 0) {
return false;
}
if (tctx->prepared) {
return false;
}
return true;
} }
static bool
prof_gctx_should_destroy(prof_gctx_t *gctx) {
if (opt_prof_accum) {
return false;
}
if (!tctx_tree_empty(&gctx->tctxs)) {
return false;
}
if (gctx->nlimbo != 0) {
return false;
}
return true;
}
uint64_t
static void prof_sample_new_event_wait(tsd_t *tsd) {
prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx) { #ifdef JEMALLOC_PROF
prof_tdata_t *tdata = tctx->tdata; if (lg_prof_sample == 0) {
prof_gctx_t *gctx = tctx->gctx; return TE_MIN_START_WAIT;
bool destroy_tdata, destroy_tctx, destroy_gctx;
malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);
assert(tctx->cnts.curobjs == 0);
assert(tctx->cnts.curbytes == 0);
assert(!opt_prof_accum);
assert(tctx->cnts.accumobjs == 0);
assert(tctx->cnts.accumbytes == 0);
ckh_remove(tsd, &tdata->bt2tctx, &gctx->bt, NULL, NULL);
destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata, false);
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
tctx_tree_remove(&gctx->tctxs, tctx);
destroy_tctx = true;
if (prof_gctx_should_destroy(gctx)) {
/*
* Increment gctx->nlimbo in order to keep another
* thread from winning the race to destroy gctx while
* this one has gctx->lock dropped. Without this, it
* would be possible for another thread to:
*
* 1) Sample an allocation associated with gctx.
* 2) Deallocate the sampled object.
* 3) Successfully prof_gctx_try_destroy(gctx).
*
* The result would be that gctx no longer exists by the
* time this thread accesses it in
* prof_gctx_try_destroy().
*/
gctx->nlimbo++;
destroy_gctx = true;
} else {
destroy_gctx = false;
}
break;
case prof_tctx_state_dumping:
/*
* A dumping thread needs tctx to remain valid until dumping
* has finished. Change state such that the dumping thread will
* complete destruction during a late dump iteration phase.
*/
tctx->state = prof_tctx_state_purgatory;
destroy_tctx = false;
destroy_gctx = false;
break;
default:
not_reached();
destroy_tctx = false;
destroy_gctx = false;
}
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
if (destroy_gctx) {
prof_gctx_try_destroy(tsd, prof_tdata_get(tsd, false), gctx,
tdata);
}
malloc_mutex_assert_not_owner(tsd_tsdn(tsd), tctx->tdata->lock);
if (destroy_tdata) {
prof_tdata_destroy(tsd, tdata, false);
}
if (destroy_tctx) {
idalloctm(tsd_tsdn(tsd), tctx, NULL, NULL, true, true);
}
}
static bool
prof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata,
void **p_btkey, prof_gctx_t **p_gctx, bool *p_new_gctx) {
union {
prof_gctx_t *p;
void *v;
} gctx, tgctx;
union {
prof_bt_t *p;
void *v;
} btkey;
bool new_gctx;
prof_enter(tsd, tdata);
if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {
/* bt has never been seen before. Insert it. */
prof_leave(tsd, tdata);
tgctx.p = prof_gctx_create(tsd_tsdn(tsd), bt);
if (tgctx.v == NULL) {
return true;
}
prof_enter(tsd, tdata);
if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {
gctx.p = tgctx.p;
btkey.p = &gctx.p->bt;
if (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) {
/* OOM. */
prof_leave(tsd, tdata);
idalloctm(tsd_tsdn(tsd), gctx.v, NULL, NULL,
true, true);
return true;
}
new_gctx = true;
} else {
new_gctx = false;
}
} else {
tgctx.v = NULL;
new_gctx = false;
}
if (!new_gctx) {
/*
* Increment nlimbo, in order to avoid a race condition with
* prof_tctx_destroy()/prof_gctx_try_destroy().
*/
malloc_mutex_lock(tsd_tsdn(tsd), gctx.p->lock);
gctx.p->nlimbo++;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx.p->lock);
new_gctx = false;
if (tgctx.v != NULL) {
/* Lost race to insert. */
idalloctm(tsd_tsdn(tsd), tgctx.v, NULL, NULL, true,
true);
}
}
prof_leave(tsd, tdata);
*p_btkey = btkey.v;
*p_gctx = gctx.p;
*p_new_gctx = new_gctx;
return false;
}
prof_tctx_t *
prof_lookup(tsd_t *tsd, prof_bt_t *bt) {
union {
prof_tctx_t *p;
void *v;
} ret;
prof_tdata_t *tdata;
bool not_found;
cassert(config_prof);
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
return NULL;
}
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
not_found = ckh_search(&tdata->bt2tctx, bt, NULL, &ret.v);
if (!not_found) { /* Note double negative! */
ret.p->prepared = true;
}
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (not_found) {
void *btkey;
prof_gctx_t *gctx;
bool new_gctx, error;
/*
* This thread's cache lacks bt. Look for it in the global
* cache.
*/
if (prof_lookup_global(tsd, bt, tdata, &btkey, &gctx,
&new_gctx)) {
return NULL;
}
/* Link a prof_tctx_t into gctx for this thread. */
ret.v = iallocztm(tsd_tsdn(tsd), sizeof(prof_tctx_t),
sz_size2index(sizeof(prof_tctx_t)), false, NULL, true,
arena_ichoose(tsd, NULL), true);
if (ret.p == NULL) {
if (new_gctx) {
prof_gctx_try_destroy(tsd, tdata, gctx, tdata);
}
return NULL;
}
ret.p->tdata = tdata;
ret.p->thr_uid = tdata->thr_uid;
ret.p->thr_discrim = tdata->thr_discrim;
memset(&ret.p->cnts, 0, sizeof(prof_cnt_t));
ret.p->gctx = gctx;
ret.p->tctx_uid = tdata->tctx_uid_next++;
ret.p->prepared = true;
ret.p->state = prof_tctx_state_initializing;
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
error = ckh_insert(tsd, &tdata->bt2tctx, btkey, ret.v);
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (error) {
if (new_gctx) {
prof_gctx_try_destroy(tsd, tdata, gctx, tdata);
}
idalloctm(tsd_tsdn(tsd), ret.v, NULL, NULL, true, true);
return NULL;
}
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
ret.p->state = prof_tctx_state_nominal;
tctx_tree_insert(&gctx->tctxs, ret.p);
gctx->nlimbo--;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
}
return ret.p;
}
/*
* The bodies of this function and prof_leakcheck() are compiled out unless heap
* profiling is enabled, so that it is possible to compile jemalloc with
* floating point support completely disabled. Avoiding floating point code is
* important on memory-constrained systems, but it also enables a workaround for
* versions of glibc that don't properly save/restore floating point registers
* during dynamic lazy symbol loading (which internally calls into whatever
* malloc implementation happens to be integrated into the application). Note
* that some compilers (e.g. gcc 4.8) may use floating point registers for fast
* memory moves, so jemalloc must be compiled with such optimizations disabled
* (e.g.
* -mno-sse) in order for the workaround to be complete.
*/
void
prof_sample_threshold_update(prof_tdata_t *tdata) {
#ifdef JEMALLOC_PROF
if (!config_prof) {
return;
}
if (lg_prof_sample == 0) {
tsd_bytes_until_sample_set(tsd_fetch(), 0);
return;
} }
/*
@@ -1154,7 +222,7 @@ prof_sample_threshold_update(prof_tdata_t *tdata) {
 *
 *                          __        __
 *                         |  log(u)  |                     1
 *   bytes_until_sample = | -------- |, where p = ---------------
 *                         | log(1-p) |             lg_prof_sample
 *                                                 2
 *
@@ -1164,1623 +232,213 @@ prof_sample_threshold_update(prof_tdata_t *tdata) {
 *   Luc Devroye
 *   Springer-Verlag, New York, 1986
 *   pp 500
 *   (http://luc.devroye.org/rnbookindex.html)
 *
 * In the actual computation, there's a non-zero probability that our
 * pseudo random number generator generates an exact 0, and to avoid
 * log(0), we set u to 1.0 in case r is 0.  Therefore u effectively is
 * uniformly distributed in (0, 1] instead of [0, 1).  Further, rather
 * than taking the ceiling, we take the floor and then add 1, since
 * otherwise bytes_until_sample would be 0 if u is exactly 1.0.
 */
uint64_t r = prng_lg_range_u64(tsd_prng_statep_get(tsd), 53);
double u = (r == 0U) ? 1.0 : (double)r * (1.0/9007199254740992.0L);
return (uint64_t)(log(u) /
log(1.0 - (1.0 / (double)((uint64_t)1U << lg_prof_sample))))
+ (uint64_t)1U;
#else
not_reached();
return TE_MAX_START_WAIT;
#endif
}
#ifdef JEMALLOC_JET
static prof_tdata_t *
prof_tdata_count_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata,
void *arg) {
size_t *tdata_count = (size_t *)arg;
(*tdata_count)++;
return NULL;
}
size_t
prof_tdata_count(void) {
size_t tdata_count = 0;
tsdn_t *tsdn;
tsdn = tsdn_fetch();
malloc_mutex_lock(tsdn, &tdatas_mtx);
tdata_tree_iter(&tdatas, NULL, prof_tdata_count_iter,
(void *)&tdata_count);
malloc_mutex_unlock(tsdn, &tdatas_mtx);
return tdata_count;
}
size_t
prof_bt_count(void) {
size_t bt_count;
tsd_t *tsd;
prof_tdata_t *tdata;
tsd = tsd_fetch();
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
return 0;
}
malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);
bt_count = ckh_count(&bt2gctx);
malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);
return bt_count;
}
#endif
static int
prof_dump_open_impl(bool propagate_err, const char *filename) {
int fd;
fd = creat(filename, 0644);
if (fd == -1 && !propagate_err) {
malloc_printf("<jemalloc>: creat(\"%s\"), 0644) failed\n",
filename);
if (opt_abort) {
abort();
}
}
return fd;
}
prof_dump_open_t *JET_MUTABLE prof_dump_open = prof_dump_open_impl;
static bool
prof_dump_flush(bool propagate_err) {
bool ret = false;
ssize_t err;
cassert(config_prof);
err = malloc_write_fd(prof_dump_fd, prof_dump_buf, prof_dump_buf_end);
if (err == -1) {
if (!propagate_err) {
malloc_write("<jemalloc>: write() failed during heap "
"profile flush\n");
if (opt_abort) {
abort();
}
}
ret = true;
}
prof_dump_buf_end = 0;
return ret;
}
static bool
prof_dump_close(bool propagate_err) {
bool ret;
assert(prof_dump_fd != -1);
ret = prof_dump_flush(propagate_err);
close(prof_dump_fd);
prof_dump_fd = -1;
return ret;
}
static bool
prof_dump_write(bool propagate_err, const char *s) {
size_t i, slen, n;
cassert(config_prof);
i = 0;
slen = strlen(s);
while (i < slen) {
/* Flush the buffer if it is full. */
if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) {
if (prof_dump_flush(propagate_err) && propagate_err) {
return true;
}
}
if (prof_dump_buf_end + slen - i <= PROF_DUMP_BUFSIZE) {
/* Finish writing. */
n = slen - i;
} else {
/* Write as much of s as will fit. */
n = PROF_DUMP_BUFSIZE - prof_dump_buf_end;
}
memcpy(&prof_dump_buf[prof_dump_buf_end], &s[i], n);
prof_dump_buf_end += n;
i += n;
}
assert(i == slen);
return false;
}
JEMALLOC_FORMAT_PRINTF(2, 3)
static bool
prof_dump_printf(bool propagate_err, const char *format, ...) {
bool ret;
va_list ap;
char buf[PROF_PRINTF_BUFSIZE];
va_start(ap, format);
malloc_vsnprintf(buf, sizeof(buf), format, ap);
va_end(ap);
ret = prof_dump_write(propagate_err, buf);
return ret;
}
static void
prof_tctx_merge_tdata(tsdn_t *tsdn, prof_tctx_t *tctx, prof_tdata_t *tdata) {
malloc_mutex_assert_owner(tsdn, tctx->tdata->lock);
malloc_mutex_lock(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_initializing:
malloc_mutex_unlock(tsdn, tctx->gctx->lock);
return;
case prof_tctx_state_nominal:
tctx->state = prof_tctx_state_dumping;
malloc_mutex_unlock(tsdn, tctx->gctx->lock);
memcpy(&tctx->dump_cnts, &tctx->cnts, sizeof(prof_cnt_t));
tdata->cnt_summed.curobjs += tctx->dump_cnts.curobjs;
tdata->cnt_summed.curbytes += tctx->dump_cnts.curbytes;
if (opt_prof_accum) {
tdata->cnt_summed.accumobjs +=
tctx->dump_cnts.accumobjs;
tdata->cnt_summed.accumbytes +=
tctx->dump_cnts.accumbytes;
}
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
not_reached();
}
}
static void
prof_tctx_merge_gctx(tsdn_t *tsdn, prof_tctx_t *tctx, prof_gctx_t *gctx) {
malloc_mutex_assert_owner(tsdn, gctx->lock);
gctx->cnt_summed.curobjs += tctx->dump_cnts.curobjs;
gctx->cnt_summed.curbytes += tctx->dump_cnts.curbytes;
if (opt_prof_accum) {
gctx->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs;
gctx->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes;
}
}
static prof_tctx_t *
prof_tctx_merge_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
malloc_mutex_assert_owner(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
/* New since dumping started; ignore. */
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
prof_tctx_merge_gctx(tsdn, tctx, tctx->gctx);
break;
default:
not_reached();
}
return NULL;
}
struct prof_tctx_dump_iter_arg_s {
tsdn_t *tsdn;
bool propagate_err;
};
static prof_tctx_t *
prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *opaque) {
struct prof_tctx_dump_iter_arg_s *arg =
(struct prof_tctx_dump_iter_arg_s *)opaque;
malloc_mutex_assert_owner(arg->tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_initializing:
case prof_tctx_state_nominal:
/* Not captured by this dump. */
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
if (prof_dump_printf(arg->propagate_err,
" t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": "
"%"FMTu64"]\n", tctx->thr_uid, tctx->dump_cnts.curobjs,
tctx->dump_cnts.curbytes, tctx->dump_cnts.accumobjs,
tctx->dump_cnts.accumbytes)) {
return tctx;
}
break;
default:
not_reached();
}
return NULL;
}
static prof_tctx_t *
prof_tctx_finish_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
prof_tctx_t *ret;
malloc_mutex_assert_owner(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
/* New since dumping started; ignore. */
break;
case prof_tctx_state_dumping:
tctx->state = prof_tctx_state_nominal;
break;
case prof_tctx_state_purgatory:
ret = tctx;
goto label_return;
default:
not_reached();
}
ret = NULL;
label_return:
return ret;
}
static void
prof_dump_gctx_prep(tsdn_t *tsdn, prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) {
cassert(config_prof);
malloc_mutex_lock(tsdn, gctx->lock);
/*
* Increment nlimbo so that gctx won't go away before dump.
* Additionally, link gctx into the dump list so that it is included in
* prof_dump()'s second pass.
*/
gctx->nlimbo++;
gctx_tree_insert(gctxs, gctx);
memset(&gctx->cnt_summed, 0, sizeof(prof_cnt_t));
malloc_mutex_unlock(tsdn, gctx->lock);
}
struct prof_gctx_merge_iter_arg_s {
tsdn_t *tsdn;
size_t leak_ngctx;
};
static prof_gctx_t *
prof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {
struct prof_gctx_merge_iter_arg_s *arg =
(struct prof_gctx_merge_iter_arg_s *)opaque;
malloc_mutex_lock(arg->tsdn, gctx->lock);
tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter,
(void *)arg->tsdn);
if (gctx->cnt_summed.curobjs != 0) {
arg->leak_ngctx++;
}
malloc_mutex_unlock(arg->tsdn, gctx->lock);
return NULL;
}
static void
prof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs) {
prof_tdata_t *tdata = prof_tdata_get(tsd, false);
prof_gctx_t *gctx;
/*
* Standard tree iteration won't work here, because as soon as we
* decrement gctx->nlimbo and unlock gctx, another thread can
* concurrently destroy it, which will corrupt the tree. Therefore,
* tear down the tree one node at a time during iteration.
*/
while ((gctx = gctx_tree_first(gctxs)) != NULL) {
gctx_tree_remove(gctxs, gctx);
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
{
prof_tctx_t *next;
next = NULL;
do {
prof_tctx_t *to_destroy =
tctx_tree_iter(&gctx->tctxs, next,
prof_tctx_finish_iter,
(void *)tsd_tsdn(tsd));
if (to_destroy != NULL) {
next = tctx_tree_next(&gctx->tctxs,
to_destroy);
tctx_tree_remove(&gctx->tctxs,
to_destroy);
idalloctm(tsd_tsdn(tsd), to_destroy,
NULL, NULL, true, true);
} else {
next = NULL;
}
} while (next != NULL);
}
gctx->nlimbo--;
if (prof_gctx_should_destroy(gctx)) {
gctx->nlimbo++;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
prof_gctx_try_destroy(tsd, tdata, gctx, tdata);
} else {
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
}
}
}
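The teardown loop above has to fetch a node's successor before that node can be destroyed. A much simplified, single-threaded analogue of that ordering (illustration only: a plain linked list rather than jemalloc's rbtree, and without the locking the real code needs) is:

/* Illustration only: capture the successor before destroying the node. */
#include <stdlib.h>

struct node {
	struct node *next;
};

void
destroy_all(struct node *head) {
	while (head != NULL) {
		struct node *next = head->next;	/* read next before free() */
		free(head);
		head = next;
	}
}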
struct prof_tdata_merge_iter_arg_s {
tsdn_t *tsdn;
prof_cnt_t cnt_all;
};
static prof_tdata_t *
prof_tdata_merge_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata,
void *opaque) {
struct prof_tdata_merge_iter_arg_s *arg =
(struct prof_tdata_merge_iter_arg_s *)opaque;
malloc_mutex_lock(arg->tsdn, tdata->lock);
if (!tdata->expired) {
size_t tabind;
union {
prof_tctx_t *p;
void *v;
} tctx;
tdata->dumping = true;
memset(&tdata->cnt_summed, 0, sizeof(prof_cnt_t));
for (tabind = 0; !ckh_iter(&tdata->bt2tctx, &tabind, NULL,
&tctx.v);) {
prof_tctx_merge_tdata(arg->tsdn, tctx.p, tdata);
}
arg->cnt_all.curobjs += tdata->cnt_summed.curobjs;
arg->cnt_all.curbytes += tdata->cnt_summed.curbytes;
if (opt_prof_accum) {
arg->cnt_all.accumobjs += tdata->cnt_summed.accumobjs;
arg->cnt_all.accumbytes += tdata->cnt_summed.accumbytes;
}
} else {
tdata->dumping = false;
}
malloc_mutex_unlock(arg->tsdn, tdata->lock);
return NULL;
}
static prof_tdata_t *
prof_tdata_dump_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata,
void *arg) {
bool propagate_err = *(bool *)arg;
if (!tdata->dumping) {
return NULL;
}
if (prof_dump_printf(propagate_err,
" t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]%s%s\n",
tdata->thr_uid, tdata->cnt_summed.curobjs,
tdata->cnt_summed.curbytes, tdata->cnt_summed.accumobjs,
tdata->cnt_summed.accumbytes,
(tdata->thread_name != NULL) ? " " : "",
(tdata->thread_name != NULL) ? tdata->thread_name : "")) {
return tdata;
}
return NULL;
}
static bool
prof_dump_header_impl(tsdn_t *tsdn, bool propagate_err,
const prof_cnt_t *cnt_all) {
bool ret;
if (prof_dump_printf(propagate_err,
"heap_v2/%"FMTu64"\n"
" t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n",
((uint64_t)1U << lg_prof_sample), cnt_all->curobjs,
cnt_all->curbytes, cnt_all->accumobjs, cnt_all->accumbytes)) {
return true;
}
malloc_mutex_lock(tsdn, &tdatas_mtx);
ret = (tdata_tree_iter(&tdatas, NULL, prof_tdata_dump_iter,
(void *)&propagate_err) != NULL);
malloc_mutex_unlock(tsdn, &tdatas_mtx);
return ret;
}
prof_dump_header_t *JET_MUTABLE prof_dump_header = prof_dump_header_impl;
static bool
prof_dump_gctx(tsdn_t *tsdn, bool propagate_err, prof_gctx_t *gctx,
const prof_bt_t *bt, prof_gctx_tree_t *gctxs) {
bool ret;
unsigned i;
struct prof_tctx_dump_iter_arg_s prof_tctx_dump_iter_arg;
cassert(config_prof);
malloc_mutex_assert_owner(tsdn, gctx->lock);
/* Avoid dumping such gctx's that have no useful data. */
if ((!opt_prof_accum && gctx->cnt_summed.curobjs == 0) ||
(opt_prof_accum && gctx->cnt_summed.accumobjs == 0)) {
assert(gctx->cnt_summed.curobjs == 0);
assert(gctx->cnt_summed.curbytes == 0);
assert(gctx->cnt_summed.accumobjs == 0);
assert(gctx->cnt_summed.accumbytes == 0);
ret = false;
goto label_return;
}
if (prof_dump_printf(propagate_err, "@")) {
ret = true;
goto label_return;
}
for (i = 0; i < bt->len; i++) {
if (prof_dump_printf(propagate_err, " %#"FMTxPTR,
(uintptr_t)bt->vec[i])) {
ret = true;
goto label_return;
}
}
if (prof_dump_printf(propagate_err,
"\n"
" t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n",
gctx->cnt_summed.curobjs, gctx->cnt_summed.curbytes,
gctx->cnt_summed.accumobjs, gctx->cnt_summed.accumbytes)) {
ret = true;
goto label_return;
}
prof_tctx_dump_iter_arg.tsdn = tsdn;
prof_tctx_dump_iter_arg.propagate_err = propagate_err;
if (tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_dump_iter,
(void *)&prof_tctx_dump_iter_arg) != NULL) {
ret = true;
goto label_return;
}
ret = false;
label_return:
return ret;
}
#ifndef _WIN32
JEMALLOC_FORMAT_PRINTF(1, 2)
static int
prof_open_maps(const char *format, ...) {
int mfd;
va_list ap;
char filename[PATH_MAX + 1];
va_start(ap, format);
malloc_vsnprintf(filename, sizeof(filename), format, ap);
va_end(ap);
#if defined(O_CLOEXEC)
mfd = open(filename, O_RDONLY | O_CLOEXEC);
#else
mfd = open(filename, O_RDONLY);
if (mfd != -1) {
fcntl(mfd, F_SETFD, fcntl(mfd, F_GETFD) | FD_CLOEXEC);
}
#endif
return mfd;
}
#endif
static int
prof_getpid(void) {
#ifdef _WIN32
return GetCurrentProcessId();
#else
return getpid();
#endif
}
static bool
prof_dump_maps(bool propagate_err) {
bool ret;
int mfd;
cassert(config_prof);
#ifdef __FreeBSD__
mfd = prof_open_maps("/proc/curproc/map");
#elif defined(_WIN32)
mfd = -1; // Not implemented
#else
{
int pid = prof_getpid();
mfd = prof_open_maps("/proc/%d/task/%d/maps", pid, pid);
if (mfd == -1) {
mfd = prof_open_maps("/proc/%d/maps", pid);
}
}
#endif
if (mfd != -1) {
ssize_t nread;
if (prof_dump_write(propagate_err, "\nMAPPED_LIBRARIES:\n") &&
propagate_err) {
ret = true;
goto label_return;
}
nread = 0;
do {
prof_dump_buf_end += nread;
if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) {
/* Make space in prof_dump_buf before read(). */
if (prof_dump_flush(propagate_err) &&
propagate_err) {
ret = true;
goto label_return;
}
}
nread = malloc_read_fd(mfd,
&prof_dump_buf[prof_dump_buf_end], PROF_DUMP_BUFSIZE
- prof_dump_buf_end);
} while (nread > 0);
} else {
ret = true;
goto label_return;
}
ret = false;
label_return:
if (mfd != -1) {
close(mfd);
}
return ret;
}
/*
* See prof_sample_threshold_update() comment for why the body of this function
* is conditionally compiled.
*/
static void
prof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_ngctx,
const char *filename) {
#ifdef JEMALLOC_PROF
/*
* Scaling is equivalent AdjustSamples() in jeprof, but the result may
* differ slightly from what jeprof reports, because here we scale the
* summary values, whereas jeprof scales each context individually and
* reports the sums of the scaled values.
*/
if (cnt_all->curbytes != 0) {
double sample_period = (double)((uint64_t)1 << lg_prof_sample);
double ratio = (((double)cnt_all->curbytes) /
(double)cnt_all->curobjs) / sample_period;
double scale_factor = 1.0 / (1.0 - exp(-ratio));
uint64_t curbytes = (uint64_t)round(((double)cnt_all->curbytes)
* scale_factor);
uint64_t curobjs = (uint64_t)round(((double)cnt_all->curobjs) *
scale_factor);
malloc_printf("<jemalloc>: Leak approximation summary: ~%"FMTu64
" byte%s, ~%"FMTu64" object%s, >= %zu context%s\n",
curbytes, (curbytes != 1) ? "s" : "", curobjs, (curobjs !=
1) ? "s" : "", leak_ngctx, (leak_ngctx != 1) ? "s" : "");
malloc_printf(
"<jemalloc>: Run jeprof on \"%s\" for leak detail\n",
filename);
}
#endif
}
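To make the unbiasing above concrete, the sketch below reruns the same arithmetic with made-up sampled counts; it is an illustration only and assumes the default 2^19-byte sample period.

/*
 * Illustration only (not jemalloc code): rerun the prof_leakcheck()
 * unbiasing arithmetic with made-up sampled counts.
 */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int
main(void) {
	double sample_period = (double)((uint64_t)1 << 19);	/* 524288 */
	uint64_t curobjs = 10;			/* sampled live objects */
	uint64_t curbytes = 10 * 4096;		/* sampled live bytes */
	double ratio = (((double)curbytes) / (double)curobjs) / sample_period;
	double scale_factor = 1.0 / (1.0 - exp(-ratio));
	/*
	 * With 4096-byte objects, ratio is 1/128 and scale_factor comes out
	 * near 128.5, so the 10 sampled objects stand for roughly 1285
	 * objects (about 5.3 MB) of estimated live allocations.
	 */
	printf("scale_factor = %.1f, est. bytes = %.0f\n",
	    scale_factor, (double)curbytes * scale_factor);
	return 0;
}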
struct prof_gctx_dump_iter_arg_s {
tsdn_t *tsdn;
bool propagate_err;
};
static prof_gctx_t *
prof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {
prof_gctx_t *ret;
struct prof_gctx_dump_iter_arg_s *arg =
(struct prof_gctx_dump_iter_arg_s *)opaque;
malloc_mutex_lock(arg->tsdn, gctx->lock);
if (prof_dump_gctx(arg->tsdn, arg->propagate_err, gctx, &gctx->bt,
gctxs)) {
ret = gctx;
goto label_return;
}
ret = NULL;
label_return:
malloc_mutex_unlock(arg->tsdn, gctx->lock);
return ret;
}
static void
prof_dump_prep(tsd_t *tsd, prof_tdata_t *tdata,
struct prof_tdata_merge_iter_arg_s *prof_tdata_merge_iter_arg,
struct prof_gctx_merge_iter_arg_s *prof_gctx_merge_iter_arg,
prof_gctx_tree_t *gctxs) {
size_t tabind;
union {
prof_gctx_t *p;
void *v;
} gctx;
prof_enter(tsd, tdata);
/*
* Put gctx's in limbo and clear their counters in preparation for
* summing.
*/
gctx_tree_new(gctxs);
for (tabind = 0; !ckh_iter(&bt2gctx, &tabind, NULL, &gctx.v);) {
prof_dump_gctx_prep(tsd_tsdn(tsd), gctx.p, gctxs);
}
/*
* Iterate over tdatas, and for the non-expired ones snapshot their tctx
* stats and merge them into the associated gctx's.
*/
prof_tdata_merge_iter_arg->tsdn = tsd_tsdn(tsd);
memset(&prof_tdata_merge_iter_arg->cnt_all, 0, sizeof(prof_cnt_t));
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
tdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter,
(void *)prof_tdata_merge_iter_arg);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
/* Merge tctx stats into gctx's. */
prof_gctx_merge_iter_arg->tsdn = tsd_tsdn(tsd);
prof_gctx_merge_iter_arg->leak_ngctx = 0;
gctx_tree_iter(gctxs, NULL, prof_gctx_merge_iter,
(void *)prof_gctx_merge_iter_arg);
prof_leave(tsd, tdata);
}
static bool
prof_dump_file(tsd_t *tsd, bool propagate_err, const char *filename,
bool leakcheck, prof_tdata_t *tdata,
struct prof_tdata_merge_iter_arg_s *prof_tdata_merge_iter_arg,
struct prof_gctx_merge_iter_arg_s *prof_gctx_merge_iter_arg,
struct prof_gctx_dump_iter_arg_s *prof_gctx_dump_iter_arg,
prof_gctx_tree_t *gctxs) {
/* Create dump file. */
if ((prof_dump_fd = prof_dump_open(propagate_err, filename)) == -1) {
return true;
}
/* Dump profile header. */
if (prof_dump_header(tsd_tsdn(tsd), propagate_err,
&prof_tdata_merge_iter_arg->cnt_all)) {
goto label_write_error;
}
/* Dump per gctx profile stats. */
prof_gctx_dump_iter_arg->tsdn = tsd_tsdn(tsd);
prof_gctx_dump_iter_arg->propagate_err = propagate_err;
if (gctx_tree_iter(gctxs, NULL, prof_gctx_dump_iter,
(void *)prof_gctx_dump_iter_arg) != NULL) {
goto label_write_error;
}
/* Dump /proc/<pid>/maps if possible. */
if (prof_dump_maps(propagate_err)) {
goto label_write_error;
}
if (prof_dump_close(propagate_err)) {
return true;
}
return false;
label_write_error:
prof_dump_close(propagate_err);
return true;
}
static bool
prof_dump(tsd_t *tsd, bool propagate_err, const char *filename,
bool leakcheck) {
cassert(config_prof);
assert(tsd_reentrancy_level_get(tsd) == 0);
prof_tdata_t * tdata = prof_tdata_get(tsd, true);
if (tdata == NULL) {
return true;
}
pre_reentrancy(tsd, NULL);
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx);
prof_gctx_tree_t gctxs;
struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg;
struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg;
struct prof_gctx_dump_iter_arg_s prof_gctx_dump_iter_arg;
prof_dump_prep(tsd, tdata, &prof_tdata_merge_iter_arg,
&prof_gctx_merge_iter_arg, &gctxs);
bool err = prof_dump_file(tsd, propagate_err, filename, leakcheck, tdata,
&prof_tdata_merge_iter_arg, &prof_gctx_merge_iter_arg,
&prof_gctx_dump_iter_arg, &gctxs);
prof_gctx_finish(tsd, &gctxs);
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx);
post_reentrancy(tsd);
if (err) {
return true;
}
if (leakcheck) {
prof_leakcheck(&prof_tdata_merge_iter_arg.cnt_all,
prof_gctx_merge_iter_arg.leak_ngctx, filename);
}
return false;
}
#ifdef JEMALLOC_JET
void
prof_cnt_all(uint64_t *curobjs, uint64_t *curbytes, uint64_t *accumobjs,
uint64_t *accumbytes) {
tsd_t *tsd;
prof_tdata_t *tdata;
struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg;
struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg;
prof_gctx_tree_t gctxs;
tsd = tsd_fetch();
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
if (curobjs != NULL) {
*curobjs = 0;
}
if (curbytes != NULL) {
*curbytes = 0;
}
if (accumobjs != NULL) {
*accumobjs = 0;
}
if (accumbytes != NULL) {
*accumbytes = 0;
}
return;
}
prof_dump_prep(tsd, tdata, &prof_tdata_merge_iter_arg,
&prof_gctx_merge_iter_arg, &gctxs);
prof_gctx_finish(tsd, &gctxs);
if (curobjs != NULL) {
*curobjs = prof_tdata_merge_iter_arg.cnt_all.curobjs;
}
if (curbytes != NULL) {
*curbytes = prof_tdata_merge_iter_arg.cnt_all.curbytes;
}
if (accumobjs != NULL) {
*accumobjs = prof_tdata_merge_iter_arg.cnt_all.accumobjs;
}
if (accumbytes != NULL) {
*accumbytes = prof_tdata_merge_iter_arg.cnt_all.accumbytes;
}
}
#endif
#define DUMP_FILENAME_BUFSIZE (PATH_MAX + 1)
#define VSEQ_INVALID UINT64_C(0xffffffffffffffff)
static void
prof_dump_filename(char *filename, char v, uint64_t vseq) {
cassert(config_prof);
if (vseq != VSEQ_INVALID) {
/* "<prefix>.<pid>.<seq>.v<vseq>.heap" */
malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE,
"%s.%d.%"FMTu64".%c%"FMTu64".heap",
opt_prof_prefix, prof_getpid(), prof_dump_seq, v, vseq);
} else {
/* "<prefix>.<pid>.<seq>.<v>.heap" */
malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE,
"%s.%d.%"FMTu64".%c.heap",
opt_prof_prefix, prof_getpid(), prof_dump_seq, v);
}
prof_dump_seq++;
}
static void
prof_fdump(void) {
tsd_t *tsd;
char filename[DUMP_FILENAME_BUFSIZE];
cassert(config_prof);
assert(opt_prof_final);
assert(opt_prof_prefix[0] != '\0');
if (!prof_booted) {
return;
}
tsd = tsd_fetch();
assert(tsd_reentrancy_level_get(tsd) == 0);
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
prof_dump_filename(filename, 'f', VSEQ_INVALID);
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
prof_dump(tsd, false, filename, opt_prof_leak);
}
bool
prof_accum_init(tsdn_t *tsdn, prof_accum_t *prof_accum) {
cassert(config_prof);
#ifndef JEMALLOC_ATOMIC_U64
if (malloc_mutex_init(&prof_accum->mtx, "prof_accum",
WITNESS_RANK_PROF_ACCUM, malloc_mutex_rank_exclusive)) {
return true;
}
prof_accum->accumbytes = 0;
#else
atomic_store_u64(&prof_accum->accumbytes, 0, ATOMIC_RELAXED);
#endif
return false;
}
void
prof_idump(tsdn_t *tsdn) {
tsd_t *tsd;
prof_tdata_t *tdata;
cassert(config_prof);
if (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {
return;
}
tsd = tsdn_tsd(tsdn);
if (tsd_reentrancy_level_get(tsd) > 0) {
return;
}
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
return;
}
if (tdata->enq) {
tdata->enq_idump = true;
return;
}
if (opt_prof_prefix[0] != '\0') {
char filename[PATH_MAX + 1];
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
prof_dump_filename(filename, 'i', prof_dump_iseq);
prof_dump_iseq++;
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
prof_dump(tsd, false, filename, false);
}
}
bool
prof_mdump(tsd_t *tsd, const char *filename) {
cassert(config_prof);
assert(tsd_reentrancy_level_get(tsd) == 0);
if (!opt_prof || !prof_booted) {
return true;
}
char filename_buf[DUMP_FILENAME_BUFSIZE];
if (filename == NULL) {
/* No filename specified, so automatically generate one. */
if (opt_prof_prefix[0] == '\0') {
return true;
}
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
prof_dump_filename(filename_buf, 'm', prof_dump_mseq);
prof_dump_mseq++;
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx);
filename = filename_buf;
}
return prof_dump(tsd, true, filename, false);
}
void
prof_gdump(tsdn_t *tsdn) {
tsd_t *tsd;
prof_tdata_t *tdata;
cassert(config_prof);
if (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {
return;
}
tsd = tsdn_tsd(tsdn);
if (tsd_reentrancy_level_get(tsd) > 0) {
return;
}
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
return;
}
if (tdata->enq) {
tdata->enq_gdump = true;
return;
}
if (opt_prof_prefix[0] != '\0') {
char filename[DUMP_FILENAME_BUFSIZE];
malloc_mutex_lock(tsdn, &prof_dump_seq_mtx);
prof_dump_filename(filename, 'u', prof_dump_useq);
prof_dump_useq++;
malloc_mutex_unlock(tsdn, &prof_dump_seq_mtx);
prof_dump(tsd, false, filename, false);
}
}
static void
prof_bt_hash(const void *key, size_t r_hash[2]) {
prof_bt_t *bt = (prof_bt_t *)key;
cassert(config_prof);
hash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash);
}
static bool
prof_bt_keycomp(const void *k1, const void *k2) {
const prof_bt_t *bt1 = (prof_bt_t *)k1;
const prof_bt_t *bt2 = (prof_bt_t *)k2;
cassert(config_prof);
if (bt1->len != bt2->len) {
return false;
}
return (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0);
}
static void
prof_bt_node_hash(const void *key, size_t r_hash[2]) {
const prof_bt_node_t *bt_node = (prof_bt_node_t *)key;
prof_bt_hash((void *)(&bt_node->bt), r_hash);
}
static bool
prof_bt_node_keycomp(const void *k1, const void *k2) {
const prof_bt_node_t *bt_node1 = (prof_bt_node_t *)k1;
const prof_bt_node_t *bt_node2 = (prof_bt_node_t *)k2;
return prof_bt_keycomp((void *)(&bt_node1->bt),
(void *)(&bt_node2->bt));
}
static void
prof_thr_node_hash(const void *key, size_t r_hash[2]) {
const prof_thr_node_t *thr_node = (prof_thr_node_t *)key;
hash(&thr_node->thr_uid, sizeof(uint64_t), 0x94122f35U, r_hash);
}
static bool
prof_thr_node_keycomp(const void *k1, const void *k2) {
const prof_thr_node_t *thr_node1 = (prof_thr_node_t *)k1;
const prof_thr_node_t *thr_node2 = (prof_thr_node_t *)k2;
return thr_node1->thr_uid == thr_node2->thr_uid;
}
static uint64_t
prof_thr_uid_alloc(tsdn_t *tsdn) {
uint64_t thr_uid;
malloc_mutex_lock(tsdn, &next_thr_uid_mtx);
thr_uid = next_thr_uid;
next_thr_uid++;
malloc_mutex_unlock(tsdn, &next_thr_uid_mtx);
return thr_uid;
}
static prof_tdata_t *
prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim,
char *thread_name, bool active) {
prof_tdata_t *tdata;
cassert(config_prof);
/* Initialize an empty cache for this thread. */
tdata = (prof_tdata_t *)iallocztm(tsd_tsdn(tsd), sizeof(prof_tdata_t),
sz_size2index(sizeof(prof_tdata_t)), false, NULL, true,
arena_get(TSDN_NULL, 0, true), true);
if (tdata == NULL) {
return NULL;
}
tdata->lock = prof_tdata_mutex_choose(thr_uid);
tdata->thr_uid = thr_uid;
tdata->thr_discrim = thr_discrim;
tdata->thread_name = thread_name;
tdata->attached = true;
tdata->expired = false;
tdata->tctx_uid_next = 0;
if (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS, prof_bt_hash,
prof_bt_keycomp)) {
idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true);
return NULL;
}
tdata->prng_state = (uint64_t)(uintptr_t)tdata;
prof_sample_threshold_update(tdata);
tdata->enq = false;
tdata->enq_idump = false;
tdata->enq_gdump = false;
tdata->dumping = false;
tdata->active = active;
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
tdata_tree_insert(&tdatas, tdata);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
return tdata;
}
prof_tdata_t *
prof_tdata_init(tsd_t *tsd) {
return prof_tdata_init_impl(tsd, prof_thr_uid_alloc(tsd_tsdn(tsd)), 0,
NULL, prof_thread_active_init_get(tsd_tsdn(tsd)));
}
static bool
prof_tdata_should_destroy_unlocked(prof_tdata_t *tdata, bool even_if_attached) {
if (tdata->attached && !even_if_attached) {
return false;
}
if (ckh_count(&tdata->bt2tctx) != 0) {
return false;
}
return true;
}
static bool
prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata,
bool even_if_attached) {
malloc_mutex_assert_owner(tsdn, tdata->lock);
return prof_tdata_should_destroy_unlocked(tdata, even_if_attached);
}
static void
prof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata,
bool even_if_attached) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), &tdatas_mtx);
tdata_tree_remove(&tdatas, tdata);
assert(prof_tdata_should_destroy_unlocked(tdata, even_if_attached));
if (tdata->thread_name != NULL) {
idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,
true);
}
ckh_delete(tsd, &tdata->bt2tctx);
idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true);
}
static void
prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) {
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
prof_tdata_destroy_locked(tsd, tdata, even_if_attached);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
}
static void
prof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata) {
bool destroy_tdata;
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
if (tdata->attached) {
destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata,
true);
/*
* Only detach if !destroy_tdata, because detaching would allow
* another thread to win the race to destroy tdata.
*/
if (!destroy_tdata) {
tdata->attached = false;
}
tsd_prof_tdata_set(tsd, NULL);
} else {
destroy_tdata = false;
}
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (destroy_tdata) {
prof_tdata_destroy(tsd, tdata, true);
}
}
prof_tdata_t *
prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata) {
uint64_t thr_uid = tdata->thr_uid;
uint64_t thr_discrim = tdata->thr_discrim + 1;
char *thread_name = (tdata->thread_name != NULL) ?
prof_thread_name_alloc(tsd_tsdn(tsd), tdata->thread_name) : NULL;
bool active = tdata->active;
prof_tdata_detach(tsd, tdata);
return prof_tdata_init_impl(tsd, thr_uid, thr_discrim, thread_name,
active);
}
static bool
prof_tdata_expire(tsdn_t *tsdn, prof_tdata_t *tdata) {
bool destroy_tdata;
malloc_mutex_lock(tsdn, tdata->lock);
if (!tdata->expired) {
tdata->expired = true;
destroy_tdata = tdata->attached ? false :
prof_tdata_should_destroy(tsdn, tdata, false);
} else {
destroy_tdata = false;
}
malloc_mutex_unlock(tsdn, tdata->lock);
return destroy_tdata;
}
static prof_tdata_t *
prof_tdata_reset_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata,
void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
return (prof_tdata_expire(tsdn, tdata) ? tdata : NULL);
}
void
prof_reset(tsd_t *tsd, size_t lg_sample) {
prof_tdata_t *next;
assert(lg_sample < (sizeof(uint64_t) << 3));
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx);
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
lg_prof_sample = lg_sample;
next = NULL;
do {
prof_tdata_t *to_destroy = tdata_tree_iter(&tdatas, next,
prof_tdata_reset_iter, (void *)tsd);
if (to_destroy != NULL) {
next = tdata_tree_next(&tdatas, to_destroy);
prof_tdata_destroy_locked(tsd, to_destroy, false);
} else {
next = NULL;
}
} while (next != NULL);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx);
}
void
prof_tdata_cleanup(tsd_t *tsd) {
prof_tdata_t *tdata;
if (!config_prof) {
return;
}
tdata = tsd_prof_tdata_get(tsd);
if (tdata != NULL) {
prof_tdata_detach(tsd, tdata);
}
}
bool
prof_active_get(tsdn_t *tsdn) {
bool prof_active_current;
malloc_mutex_lock(tsdn, &prof_active_mtx);
prof_active_current = prof_active;
malloc_mutex_unlock(tsdn, &prof_active_mtx);
return prof_active_current;
}
bool
prof_active_set(tsdn_t *tsdn, bool active) {
bool prof_active_old;
malloc_mutex_lock(tsdn, &prof_active_mtx);
prof_active_old = prof_active;
prof_active = active;
malloc_mutex_unlock(tsdn, &prof_active_mtx);
return prof_active_old;
}
#ifdef JEMALLOC_JET
size_t
prof_log_bt_count(void) {
size_t cnt = 0;
prof_bt_node_t *node = log_bt_first;
while (node != NULL) {
cnt++;
node = node->next;
}
return cnt;
}
size_t
prof_log_alloc_count(void) {
size_t cnt = 0;
prof_alloc_node_t *node = log_alloc_first;
while (node != NULL) {
cnt++;
node = node->next;
}
return cnt;
}
size_t
prof_log_thr_count(void) {
size_t cnt = 0;
prof_thr_node_t *node = log_thr_first;
while (node != NULL) {
cnt++;
node = node->next;
}
return cnt;
}
bool
prof_log_is_logging(void) {
return prof_logging_state == prof_logging_state_started;
}
bool
prof_log_rep_check(void) {
if (prof_logging_state == prof_logging_state_stopped
&& log_tables_initialized) {
return true;
}
if (log_bt_last != NULL && log_bt_last->next != NULL) {
return true;
}
if (log_thr_last != NULL && log_thr_last->next != NULL) {
return true;
}
if (log_alloc_last != NULL && log_alloc_last->next != NULL) {
return true;
}
size_t bt_count = prof_log_bt_count();
size_t thr_count = prof_log_thr_count();
size_t alloc_count = prof_log_alloc_count();
if (prof_logging_state == prof_logging_state_stopped) {
		if (bt_count != 0 || thr_count != 0 || alloc_count != 0) {
return true;
}
}
prof_alloc_node_t *node = log_alloc_first;
while (node != NULL) {
if (node->alloc_bt_ind >= bt_count) {
return true;
}
if (node->free_bt_ind >= bt_count) {
return true;
}
if (node->alloc_thr_ind >= thr_count) {
return true;
}
if (node->free_thr_ind >= thr_count) {
return true;
}
if (node->alloc_time_ns > node->free_time_ns) {
return true;
}
node = node->next;
}
return false;
}

void
prof_log_dummy_set(bool new_value) {
	prof_log_dummy = new_value;
}
#endif

}

uint64_t
prof_sample_postponed_event_wait(tsd_t *tsd) {
	/*
	 * The postponed wait time for prof sample event is computed as if we
	 * want a new wait time (i.e. as if the event were triggered). If we
	 * instead postpone to the immediate next allocation, like how we're
	 * handling the other events, then we can have sampling bias, if e.g.
	 * the allocation immediately following a reentrancy always comes from
	 * the same stack trace.
	 */
	return prof_sample_new_event_wait(tsd);
}
bool
prof_log_start(tsdn_t *tsdn, const char *filename) {
if (!opt_prof || !prof_booted) {
return true;
}
bool ret = false; void
size_t buf_size = PATH_MAX + 1; prof_sample_event_handler(tsd_t *tsd, uint64_t elapsed) {
cassert(config_prof);
malloc_mutex_lock(tsdn, &log_mtx); assert(elapsed > 0 && elapsed != TE_INVALID_ELAPSED);
if (prof_interval == 0 || !prof_active_get_unlocked()) {
if (prof_logging_state != prof_logging_state_stopped) { return;
ret = true;
} else if (filename == NULL) {
/* Make default name. */
malloc_snprintf(log_filename, buf_size, "%s.%d.%"FMTu64".json",
opt_prof_prefix, prof_getpid(), log_seq);
log_seq++;
prof_logging_state = prof_logging_state_started;
} else if (strlen(filename) >= buf_size) {
ret = true;
} else {
strcpy(log_filename, filename);
prof_logging_state = prof_logging_state_started;
} }
if (counter_accum(tsd_tsdn(tsd), &prof_idump_accumulated, elapsed)) {
if (!ret) { prof_idump(tsd_tsdn(tsd));
nstime_update(&log_start_timestamp);
} }
malloc_mutex_unlock(tsdn, &log_mtx);
return ret;
} }
/* Used as an atexit function to stop logging on exit. */
static void static void
prof_log_stop_final(void) { prof_fdump(void) {
tsd_t *tsd = tsd_fetch(); tsd_t *tsd;
prof_log_stop(tsd_tsdn(tsd));
}
struct prof_emitter_cb_arg_s { cassert(config_prof);
int fd; assert(opt_prof_final);
ssize_t ret;
};
static void if (!prof_booted) {
prof_emitter_write_cb(void *opaque, const char *to_write) {
struct prof_emitter_cb_arg_s *arg =
(struct prof_emitter_cb_arg_s *)opaque;
size_t bytes = strlen(to_write);
#ifdef JEMALLOC_JET
if (prof_log_dummy) {
return; return;
} }
#endif tsd = tsd_fetch();
arg->ret = write(arg->fd, (void *)to_write, bytes); assert(tsd_reentrancy_level_get(tsd) == 0);
prof_fdump_impl(tsd);
} }
/* static bool
* prof_log_emit_{...} goes through the appropriate linked list, emitting each prof_idump_accum_init(void) {
* node to the json and deallocating it. cassert(config_prof);
*/
static void
prof_log_emit_threads(tsd_t *tsd, emitter_t *emitter) {
emitter_json_array_kv_begin(emitter, "threads");
prof_thr_node_t *thr_node = log_thr_first;
prof_thr_node_t *thr_old_node;
while (thr_node != NULL) {
emitter_json_object_begin(emitter);
emitter_json_kv(emitter, "thr_uid", emitter_type_uint64, return counter_accum_init(&prof_idump_accumulated, prof_interval);
&thr_node->thr_uid); }
char *thr_name = thr_node->name; void
prof_idump(tsdn_t *tsdn) {
tsd_t *tsd;
prof_tdata_t *tdata;
emitter_json_kv(emitter, "thr_name", emitter_type_string, cassert(config_prof);
&thr_name);
emitter_json_object_end(emitter); if (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {
thr_old_node = thr_node; return;
thr_node = thr_node->next;
idalloc(tsd, thr_old_node);
} }
emitter_json_array_end(emitter); tsd = tsdn_tsd(tsdn);
} if (tsd_reentrancy_level_get(tsd) > 0) {
return;
static void
prof_log_emit_traces(tsd_t *tsd, emitter_t *emitter) {
emitter_json_array_kv_begin(emitter, "stack_traces");
prof_bt_node_t *bt_node = log_bt_first;
prof_bt_node_t *bt_old_node;
/*
* Calculate how many hex digits we need: twice number of bytes, two for
* "0x", and then one more for terminating '\0'.
*/
char buf[2 * sizeof(intptr_t) + 3];
size_t buf_sz = sizeof(buf);
while (bt_node != NULL) {
emitter_json_array_begin(emitter);
size_t i;
for (i = 0; i < bt_node->bt.len; i++) {
malloc_snprintf(buf, buf_sz, "%p", bt_node->bt.vec[i]);
char *trace_str = buf;
emitter_json_value(emitter, emitter_type_string,
&trace_str);
} }
emitter_json_array_end(emitter);
bt_old_node = bt_node; tdata = prof_tdata_get(tsd, true);
bt_node = bt_node->next; if (tdata == NULL) {
idalloc(tsd, bt_old_node); return;
}
if (tdata->enq) {
tdata->enq_idump = true;
return;
} }
emitter_json_array_end(emitter);
}
static void
prof_log_emit_allocs(tsd_t *tsd, emitter_t *emitter) {
emitter_json_array_kv_begin(emitter, "allocations");
prof_alloc_node_t *alloc_node = log_alloc_first;
prof_alloc_node_t *alloc_old_node;
while (alloc_node != NULL) {
emitter_json_object_begin(emitter);
emitter_json_kv(emitter, "alloc_thread", emitter_type_size,
&alloc_node->alloc_thr_ind);
emitter_json_kv(emitter, "free_thread", emitter_type_size, prof_idump_impl(tsd);
&alloc_node->free_thr_ind); }
emitter_json_kv(emitter, "alloc_trace", emitter_type_size, bool
&alloc_node->alloc_bt_ind); prof_mdump(tsd_t *tsd, const char *filename) {
cassert(config_prof);
assert(tsd_reentrancy_level_get(tsd) == 0);
emitter_json_kv(emitter, "free_trace", emitter_type_size, if (!opt_prof || !prof_booted) {
&alloc_node->free_bt_ind); return true;
}
emitter_json_kv(emitter, "alloc_timestamp", return prof_mdump_impl(tsd, filename);
emitter_type_uint64, &alloc_node->alloc_time_ns); }
emitter_json_kv(emitter, "free_timestamp", emitter_type_uint64, void
&alloc_node->free_time_ns); prof_gdump(tsdn_t *tsdn) {
tsd_t *tsd;
prof_tdata_t *tdata;
emitter_json_kv(emitter, "usize", emitter_type_uint64, cassert(config_prof);
&alloc_node->usize);
emitter_json_object_end(emitter); if (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {
return;
}
tsd = tsdn_tsd(tsdn);
if (tsd_reentrancy_level_get(tsd) > 0) {
return;
}
alloc_old_node = alloc_node; tdata = prof_tdata_get(tsd, false);
alloc_node = alloc_node->next; if (tdata == NULL) {
idalloc(tsd, alloc_old_node); return;
}
if (tdata->enq) {
tdata->enq_gdump = true;
return;
} }
emitter_json_array_end(emitter);
}
static void prof_gdump_impl(tsd);
prof_log_emit_metadata(emitter_t *emitter) { }
emitter_json_object_kv_begin(emitter, "info");
nstime_t now = NSTIME_ZERO_INITIALIZER; static uint64_t
prof_thr_uid_alloc(tsdn_t *tsdn) {
uint64_t thr_uid;
nstime_update(&now); malloc_mutex_lock(tsdn, &next_thr_uid_mtx);
uint64_t ns = nstime_ns(&now) - nstime_ns(&log_start_timestamp); thr_uid = next_thr_uid;
emitter_json_kv(emitter, "duration", emitter_type_uint64, &ns); next_thr_uid++;
malloc_mutex_unlock(tsdn, &next_thr_uid_mtx);
char *vers = JEMALLOC_VERSION; return thr_uid;
emitter_json_kv(emitter, "version", }
emitter_type_string, &vers);
emitter_json_kv(emitter, "lg_sample_rate", prof_tdata_t *
emitter_type_int, &lg_prof_sample); prof_tdata_init(tsd_t *tsd) {
return prof_tdata_init_impl(tsd, prof_thr_uid_alloc(tsd_tsdn(tsd)), 0,
NULL, prof_thread_active_init_get(tsd_tsdn(tsd)));
}
int pid = prof_getpid(); prof_tdata_t *
emitter_json_kv(emitter, "pid", emitter_type_int, &pid); prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata) {
uint64_t thr_uid = tdata->thr_uid;
uint64_t thr_discrim = tdata->thr_discrim + 1;
char *thread_name = (tdata->thread_name != NULL) ?
prof_thread_name_alloc(tsd, tdata->thread_name) : NULL;
bool active = tdata->active;
emitter_json_object_end(emitter); prof_tdata_detach(tsd, tdata);
return prof_tdata_init_impl(tsd, thr_uid, thr_discrim, thread_name,
active);
} }
void
prof_tdata_cleanup(tsd_t *tsd) {
prof_tdata_t *tdata;
bool if (!config_prof) {
prof_log_stop(tsdn_t *tsdn) { return;
if (!opt_prof || !prof_booted) {
return true;
} }
tsd_t *tsd = tsdn_tsd(tsdn); tdata = tsd_prof_tdata_get(tsd);
malloc_mutex_lock(tsdn, &log_mtx); if (tdata != NULL) {
prof_tdata_detach(tsd, tdata);
if (prof_logging_state != prof_logging_state_started) {
malloc_mutex_unlock(tsdn, &log_mtx);
return true;
} }
}
/* bool
* Set the state to dumping. We'll set it to stopped when we're done. prof_active_get(tsdn_t *tsdn) {
* Since other threads won't be able to start/stop/log when the state is bool prof_active_current;
* dumping, we don't have to hold the lock during the whole method.
*/
prof_logging_state = prof_logging_state_dumping;
malloc_mutex_unlock(tsdn, &log_mtx);
emitter_t emitter;
/* Create a file. */
int fd; prof_active_assert();
#ifdef JEMALLOC_JET malloc_mutex_lock(tsdn, &prof_active_mtx);
if (prof_log_dummy) { prof_active_current = prof_active_state;
fd = 0; malloc_mutex_unlock(tsdn, &prof_active_mtx);
} else { return prof_active_current;
fd = creat(log_filename, 0644); }
}
#else
fd = creat(log_filename, 0644);
#endif
if (fd == -1) { bool
malloc_printf("<jemalloc>: creat() for log file \"%s\" " prof_active_set(tsdn_t *tsdn, bool active) {
" failed with %d\n", log_filename, errno); bool prof_active_old;
if (opt_abort) {
abort();
}
return true;
}
/* Emit to json. */ prof_active_assert();
struct prof_emitter_cb_arg_s arg; malloc_mutex_lock(tsdn, &prof_active_mtx);
arg.fd = fd; prof_active_old = prof_active_state;
emitter_init(&emitter, emitter_output_json, &prof_emitter_write_cb, prof_active_state = active;
(void *)(&arg)); malloc_mutex_unlock(tsdn, &prof_active_mtx);
prof_active_assert();
emitter_begin(&emitter); return prof_active_old;
prof_log_emit_metadata(&emitter);
prof_log_emit_threads(tsd, &emitter);
prof_log_emit_traces(tsd, &emitter);
prof_log_emit_allocs(tsd, &emitter);
emitter_end(&emitter);
/* Reset global state. */
if (log_tables_initialized) {
ckh_delete(tsd, &log_bt_node_set);
ckh_delete(tsd, &log_thr_node_set);
}
log_tables_initialized = false;
log_bt_index = 0;
log_thr_index = 0;
log_bt_first = NULL;
log_bt_last = NULL;
log_thr_first = NULL;
log_thr_last = NULL;
log_alloc_first = NULL;
log_alloc_last = NULL;
malloc_mutex_lock(tsdn, &log_mtx);
prof_logging_state = prof_logging_state_stopped;
malloc_mutex_unlock(tsdn, &log_mtx);
#ifdef JEMALLOC_JET
if (prof_log_dummy) {
return false;
}
#endif
return close(fd);
} }
const char *
prof_thread_name_get(tsd_t *tsd) {
	assert(tsd_reentrancy_level_get(tsd) == 0);

	prof_tdata_t *tdata;

	tdata = prof_tdata_get(tsd, true);
@@ -2790,69 +448,19 @@ prof_thread_name_get(tsd_t *tsd) {
	return (tdata->thread_name != NULL ? tdata->thread_name : "");
}
static char *
prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name) {
char *ret;
size_t size;
if (thread_name == NULL) {
return NULL;
}
size = strlen(thread_name) + 1;
if (size == 1) {
return "";
}
ret = iallocztm(tsdn, size, sz_size2index(size), false, NULL, true,
arena_get(TSDN_NULL, 0, true), true);
if (ret == NULL) {
return NULL;
}
memcpy(ret, thread_name, size);
return ret;
}
int
prof_thread_name_set(tsd_t *tsd, const char *thread_name) {
	prof_tdata_t *tdata;
	unsigned i;
	char *s;

	tdata = prof_tdata_get(tsd, true);
	if (tdata == NULL) {
		return EAGAIN;
	}

	/* Validate input. */
	if (thread_name == NULL) {
		return EFAULT;
	}
	for (i = 0; thread_name[i] != '\0'; i++) {
		char c = thread_name[i];
		if (!isgraph(c) && !isblank(c)) {
			return EFAULT;
		}
	}

	s = prof_thread_name_alloc(tsd_tsdn(tsd), thread_name);
	if (s == NULL) {
		return EAGAIN;
	}

	if (tdata->thread_name != NULL) {
		idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,
		    true);
		tdata->thread_name = NULL;
	}
	if (strlen(s) > 0) {
		tdata->thread_name = s;
	}
	return 0;
}

int
prof_thread_name_set(tsd_t *tsd, const char *thread_name) {
	if (opt_prof_sys_thread_name) {
		return ENOENT;
	} else {
		return prof_thread_name_set_impl(tsd, thread_name);
	}
}
bool
prof_thread_active_get(tsd_t *tsd) {
	assert(tsd_reentrancy_level_get(tsd) == 0);

	prof_tdata_t *tdata;

	tdata = prof_tdata_get(tsd, true);
@@ -2864,6 +472,8 @@ prof_thread_active_get(tsd_t *tsd) {

bool
prof_thread_active_set(tsd_t *tsd, bool active) {
	assert(tsd_reentrancy_level_get(tsd) == 0);

	prof_tdata_t *tdata;

	tdata = prof_tdata_get(tsd, true);
@@ -2916,6 +526,28 @@ prof_gdump_set(tsdn_t *tsdn, bool gdump) {
	return prof_gdump_old;
}
void
prof_backtrace_hook_set(prof_backtrace_hook_t hook) {
atomic_store_p(&prof_backtrace_hook, hook, ATOMIC_RELEASE);
}
prof_backtrace_hook_t
prof_backtrace_hook_get() {
return (prof_backtrace_hook_t)atomic_load_p(&prof_backtrace_hook,
ATOMIC_ACQUIRE);
}
void
prof_dump_hook_set(prof_dump_hook_t hook) {
atomic_store_p(&prof_dump_hook, hook, ATOMIC_RELEASE);
}
prof_dump_hook_t
prof_dump_hook_get() {
return (prof_dump_hook_t)atomic_load_p(&prof_dump_hook,
ATOMIC_ACQUIRE);
}
void
prof_boot0(void) {
	cassert(config_prof);
@@ -2932,6 +564,9 @@ prof_boot1(void) {
	 * opt_prof must be in its final state before any arenas are
	 * initialized, so this function must be executed early.
	 */
	if (opt_prof_leak_error && !opt_prof_leak) {
		opt_prof_leak = true;
	}

	if (opt_prof_leak && !opt_prof) {
		/*
@@ -2949,57 +584,45 @@ prof_boot1(void) {
}
bool bool
prof_boot2(tsd_t *tsd) { prof_boot2(tsd_t *tsd, base_t *base) {
cassert(config_prof); cassert(config_prof);
if (opt_prof) { /*
unsigned i; * Initialize the global mutexes unconditionally to maintain correct
* stats when opt_prof is false.
lg_prof_sample = opt_lg_prof_sample; */
prof_active = opt_prof_active;
if (malloc_mutex_init(&prof_active_mtx, "prof_active", if (malloc_mutex_init(&prof_active_mtx, "prof_active",
WITNESS_RANK_PROF_ACTIVE, malloc_mutex_rank_exclusive)) { WITNESS_RANK_PROF_ACTIVE, malloc_mutex_rank_exclusive)) {
return true; return true;
} }
prof_gdump_val = opt_prof_gdump;
if (malloc_mutex_init(&prof_gdump_mtx, "prof_gdump", if (malloc_mutex_init(&prof_gdump_mtx, "prof_gdump",
WITNESS_RANK_PROF_GDUMP, malloc_mutex_rank_exclusive)) { WITNESS_RANK_PROF_GDUMP, malloc_mutex_rank_exclusive)) {
return true; return true;
} }
prof_thread_active_init = opt_prof_thread_active_init;
if (malloc_mutex_init(&prof_thread_active_init_mtx, if (malloc_mutex_init(&prof_thread_active_init_mtx,
"prof_thread_active_init", "prof_thread_active_init", WITNESS_RANK_PROF_THREAD_ACTIVE_INIT,
WITNESS_RANK_PROF_THREAD_ACTIVE_INIT,
malloc_mutex_rank_exclusive)) { malloc_mutex_rank_exclusive)) {
return true; return true;
} }
if (ckh_new(tsd, &bt2gctx, PROF_CKH_MINITEMS, prof_bt_hash,
prof_bt_keycomp)) {
return true;
}
if (malloc_mutex_init(&bt2gctx_mtx, "prof_bt2gctx", if (malloc_mutex_init(&bt2gctx_mtx, "prof_bt2gctx",
WITNESS_RANK_PROF_BT2GCTX, malloc_mutex_rank_exclusive)) { WITNESS_RANK_PROF_BT2GCTX, malloc_mutex_rank_exclusive)) {
return true; return true;
} }
tdata_tree_new(&tdatas);
if (malloc_mutex_init(&tdatas_mtx, "prof_tdatas", if (malloc_mutex_init(&tdatas_mtx, "prof_tdatas",
WITNESS_RANK_PROF_TDATAS, malloc_mutex_rank_exclusive)) { WITNESS_RANK_PROF_TDATAS, malloc_mutex_rank_exclusive)) {
return true; return true;
} }
next_thr_uid = 0;
if (malloc_mutex_init(&next_thr_uid_mtx, "prof_next_thr_uid", if (malloc_mutex_init(&next_thr_uid_mtx, "prof_next_thr_uid",
WITNESS_RANK_PROF_NEXT_THR_UID, malloc_mutex_rank_exclusive)) { WITNESS_RANK_PROF_NEXT_THR_UID, malloc_mutex_rank_exclusive)) {
return true; return true;
} }
if (malloc_mutex_init(&prof_stats_mtx, "prof_stats",
if (malloc_mutex_init(&prof_dump_seq_mtx, "prof_dump_seq", WITNESS_RANK_PROF_STATS, malloc_mutex_rank_exclusive)) {
WITNESS_RANK_PROF_DUMP_SEQ, malloc_mutex_rank_exclusive)) { return true;
}
if (malloc_mutex_init(&prof_dump_filename_mtx,
"prof_dump_filename", WITNESS_RANK_PROF_DUMP_FILENAME,
malloc_mutex_rank_exclusive)) {
return true; return true;
} }
if (malloc_mutex_init(&prof_dump_mtx, "prof_dump", if (malloc_mutex_init(&prof_dump_mtx, "prof_dump",
...@@ -3007,50 +630,46 @@ prof_boot2(tsd_t *tsd) { ...@@ -3007,50 +630,46 @@ prof_boot2(tsd_t *tsd) {
return true; return true;
} }
if (opt_prof_final && opt_prof_prefix[0] != '\0' && if (opt_prof) {
atexit(prof_fdump) != 0) { lg_prof_sample = opt_lg_prof_sample;
malloc_write("<jemalloc>: Error in atexit()\n"); prof_unbias_map_init();
if (opt_abort) { prof_active_state = opt_prof_active;
abort(); prof_gdump_val = opt_prof_gdump;
} prof_thread_active_init = opt_prof_thread_active_init;
if (prof_data_init(tsd)) {
return true;
} }
if (opt_prof_log) { next_thr_uid = 0;
prof_log_start(tsd_tsdn(tsd), NULL); if (prof_idump_accum_init()) {
return true;
} }
if (atexit(prof_log_stop_final) != 0) { if (opt_prof_final && opt_prof_prefix[0] != '\0' &&
malloc_write("<jemalloc>: Error in atexit() " atexit(prof_fdump) != 0) {
"for logging\n"); malloc_write("<jemalloc>: Error in atexit()\n");
if (opt_abort) { if (opt_abort) {
abort(); abort();
} }
} }
if (malloc_mutex_init(&log_mtx, "prof_log", if (prof_log_init(tsd)) {
WITNESS_RANK_PROF_LOG, malloc_mutex_rank_exclusive)) {
return true;
}
if (ckh_new(tsd, &log_bt_node_set, PROF_CKH_MINITEMS,
prof_bt_node_hash, prof_bt_node_keycomp)) {
return true; return true;
} }
if (ckh_new(tsd, &log_thr_node_set, PROF_CKH_MINITEMS, if (prof_recent_init()) {
prof_thr_node_hash, prof_thr_node_keycomp)) {
return true; return true;
} }
log_tables_initialized = true; prof_base = base;
gctx_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), gctx_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), base,
b0get(), PROF_NCTX_LOCKS * sizeof(malloc_mutex_t), PROF_NCTX_LOCKS * sizeof(malloc_mutex_t), CACHELINE);
CACHELINE);
if (gctx_locks == NULL) { if (gctx_locks == NULL) {
return true; return true;
} }
for (i = 0; i < PROF_NCTX_LOCKS; i++) { for (unsigned i = 0; i < PROF_NCTX_LOCKS; i++) {
if (malloc_mutex_init(&gctx_locks[i], "prof_gctx", if (malloc_mutex_init(&gctx_locks[i], "prof_gctx",
WITNESS_RANK_PROF_GCTX, WITNESS_RANK_PROF_GCTX,
malloc_mutex_rank_exclusive)) { malloc_mutex_rank_exclusive)) {
...@@ -3058,26 +677,21 @@ prof_boot2(tsd_t *tsd) { ...@@ -3058,26 +677,21 @@ prof_boot2(tsd_t *tsd) {
} }
} }
tdata_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), tdata_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), base,
b0get(), PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t), PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t), CACHELINE);
CACHELINE);
if (tdata_locks == NULL) { if (tdata_locks == NULL) {
return true; return true;
} }
for (i = 0; i < PROF_NTDATA_LOCKS; i++) { for (unsigned i = 0; i < PROF_NTDATA_LOCKS; i++) {
if (malloc_mutex_init(&tdata_locks[i], "prof_tdata", if (malloc_mutex_init(&tdata_locks[i], "prof_tdata",
WITNESS_RANK_PROF_TDATA, WITNESS_RANK_PROF_TDATA,
malloc_mutex_rank_exclusive)) { malloc_mutex_rank_exclusive)) {
return true; return true;
} }
} }
#ifdef JEMALLOC_PROF_LIBGCC
/* prof_unwind_init();
* Cause the backtracing machinery to allocate its internal prof_hooks_init();
* state before enabling profiling.
*/
_Unwind_Backtrace(prof_unwind_init_callback, NULL);
#endif
} }
prof_booted = true; prof_booted = true;
...@@ -3095,18 +709,23 @@ prof_prefork0(tsdn_t *tsdn) { ...@@ -3095,18 +709,23 @@ prof_prefork0(tsdn_t *tsdn) {
for (i = 0; i < PROF_NTDATA_LOCKS; i++) { for (i = 0; i < PROF_NTDATA_LOCKS; i++) {
malloc_mutex_prefork(tsdn, &tdata_locks[i]); malloc_mutex_prefork(tsdn, &tdata_locks[i]);
} }
malloc_mutex_prefork(tsdn, &log_mtx);
for (i = 0; i < PROF_NCTX_LOCKS; i++) { for (i = 0; i < PROF_NCTX_LOCKS; i++) {
malloc_mutex_prefork(tsdn, &gctx_locks[i]); malloc_mutex_prefork(tsdn, &gctx_locks[i]);
} }
malloc_mutex_prefork(tsdn, &prof_recent_dump_mtx);
} }
} }
void void
prof_prefork1(tsdn_t *tsdn) { prof_prefork1(tsdn_t *tsdn) {
if (config_prof && opt_prof) { if (config_prof && opt_prof) {
counter_prefork(tsdn, &prof_idump_accumulated);
malloc_mutex_prefork(tsdn, &prof_active_mtx); malloc_mutex_prefork(tsdn, &prof_active_mtx);
malloc_mutex_prefork(tsdn, &prof_dump_seq_mtx); malloc_mutex_prefork(tsdn, &prof_dump_filename_mtx);
malloc_mutex_prefork(tsdn, &prof_gdump_mtx); malloc_mutex_prefork(tsdn, &prof_gdump_mtx);
malloc_mutex_prefork(tsdn, &prof_recent_alloc_mtx);
malloc_mutex_prefork(tsdn, &prof_stats_mtx);
malloc_mutex_prefork(tsdn, &next_thr_uid_mtx); malloc_mutex_prefork(tsdn, &next_thr_uid_mtx);
malloc_mutex_prefork(tsdn, &prof_thread_active_init_mtx); malloc_mutex_prefork(tsdn, &prof_thread_active_init_mtx);
} }
...@@ -3120,12 +739,17 @@ prof_postfork_parent(tsdn_t *tsdn) { ...@@ -3120,12 +739,17 @@ prof_postfork_parent(tsdn_t *tsdn) {
malloc_mutex_postfork_parent(tsdn, malloc_mutex_postfork_parent(tsdn,
&prof_thread_active_init_mtx); &prof_thread_active_init_mtx);
malloc_mutex_postfork_parent(tsdn, &next_thr_uid_mtx); malloc_mutex_postfork_parent(tsdn, &next_thr_uid_mtx);
malloc_mutex_postfork_parent(tsdn, &prof_stats_mtx);
malloc_mutex_postfork_parent(tsdn, &prof_recent_alloc_mtx);
malloc_mutex_postfork_parent(tsdn, &prof_gdump_mtx); malloc_mutex_postfork_parent(tsdn, &prof_gdump_mtx);
malloc_mutex_postfork_parent(tsdn, &prof_dump_seq_mtx); malloc_mutex_postfork_parent(tsdn, &prof_dump_filename_mtx);
malloc_mutex_postfork_parent(tsdn, &prof_active_mtx); malloc_mutex_postfork_parent(tsdn, &prof_active_mtx);
counter_postfork_parent(tsdn, &prof_idump_accumulated);
malloc_mutex_postfork_parent(tsdn, &prof_recent_dump_mtx);
for (i = 0; i < PROF_NCTX_LOCKS; i++) { for (i = 0; i < PROF_NCTX_LOCKS; i++) {
malloc_mutex_postfork_parent(tsdn, &gctx_locks[i]); malloc_mutex_postfork_parent(tsdn, &gctx_locks[i]);
} }
malloc_mutex_postfork_parent(tsdn, &log_mtx);
for (i = 0; i < PROF_NTDATA_LOCKS; i++) { for (i = 0; i < PROF_NTDATA_LOCKS; i++) {
malloc_mutex_postfork_parent(tsdn, &tdata_locks[i]); malloc_mutex_postfork_parent(tsdn, &tdata_locks[i]);
} }
...@@ -3142,12 +766,17 @@ prof_postfork_child(tsdn_t *tsdn) { ...@@ -3142,12 +766,17 @@ prof_postfork_child(tsdn_t *tsdn) {
malloc_mutex_postfork_child(tsdn, &prof_thread_active_init_mtx); malloc_mutex_postfork_child(tsdn, &prof_thread_active_init_mtx);
malloc_mutex_postfork_child(tsdn, &next_thr_uid_mtx); malloc_mutex_postfork_child(tsdn, &next_thr_uid_mtx);
malloc_mutex_postfork_child(tsdn, &prof_stats_mtx);
malloc_mutex_postfork_child(tsdn, &prof_recent_alloc_mtx);
malloc_mutex_postfork_child(tsdn, &prof_gdump_mtx); malloc_mutex_postfork_child(tsdn, &prof_gdump_mtx);
malloc_mutex_postfork_child(tsdn, &prof_dump_seq_mtx); malloc_mutex_postfork_child(tsdn, &prof_dump_filename_mtx);
malloc_mutex_postfork_child(tsdn, &prof_active_mtx); malloc_mutex_postfork_child(tsdn, &prof_active_mtx);
counter_postfork_child(tsdn, &prof_idump_accumulated);
malloc_mutex_postfork_child(tsdn, &prof_recent_dump_mtx);
for (i = 0; i < PROF_NCTX_LOCKS; i++) { for (i = 0; i < PROF_NCTX_LOCKS; i++) {
malloc_mutex_postfork_child(tsdn, &gctx_locks[i]); malloc_mutex_postfork_child(tsdn, &gctx_locks[i]);
} }
malloc_mutex_postfork_child(tsdn, &log_mtx);
for (i = 0; i < PROF_NTDATA_LOCKS; i++) { for (i = 0; i < PROF_NTDATA_LOCKS; i++) {
malloc_mutex_postfork_child(tsdn, &tdata_locks[i]); malloc_mutex_postfork_child(tsdn, &tdata_locks[i]);
} }
......
#include "jemalloc/internal/jemalloc_preamble.h"
#include "jemalloc/internal/jemalloc_internal_includes.h"
#include "jemalloc/internal/assert.h"
#include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/hash.h"
#include "jemalloc/internal/malloc_io.h"
#include "jemalloc/internal/prof_data.h"
/*
* This file defines and manages the core profiling data structures.
*
* Conceptually, profiling data can be imagined as a table with three columns:
* thread, stack trace, and current allocation size. (When prof_accum is on,
* there's one additional column which is the cumulative allocation size.)
*
* Implementation wise, each thread maintains a hash recording the stack trace
* to allocation size correspondences, which are basically the individual rows
* in the table. In addition, two global "indices" are built to make data
* aggregation efficient (for dumping): bt2gctx and tdatas, which are basically
* the "grouped by stack trace" and "grouped by thread" views of the same table,
* respectively. Note that the allocation size is only aggregated to the two
* indices at dumping time, so as to optimize for performance.
*/
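Purely as a mental model of the "table" described above, one row can be pictured like the hypothetical struct below. The fields are illustrative only; the real rows are prof_tctx_t entries in each thread's bt2tctx hash.

/*
 * Conceptual illustration only: this type is hypothetical and does not
 * exist in jemalloc.
 */
#include <stddef.h>
#include <stdint.h>

struct profile_row {
	uint64_t thread_uid;	/* "thread" column (cf. prof_tdata_t) */
	void **stack_trace;	/* "stack trace" column (cf. prof_bt_t) */
	size_t trace_len;
	size_t cur_bytes;	/* "current allocation size" column */
	size_t accum_bytes;	/* extra column when prof_accum is on */
};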
/******************************************************************************/
malloc_mutex_t bt2gctx_mtx;
malloc_mutex_t tdatas_mtx;
malloc_mutex_t prof_dump_mtx;
/*
* Table of mutexes that are shared among gctx's. These are leaf locks, so
* there is no problem with using them for more than one gctx at the same time.
* The primary motivation for this sharing though is that gctx's are ephemeral,
* and destroying mutexes causes complications for systems that allocate when
* creating/destroying mutexes.
*/
malloc_mutex_t *gctx_locks;
static atomic_u_t cum_gctxs; /* Atomic counter. */
/*
* Table of mutexes that are shared among tdata's. No operations require
* holding multiple tdata locks, so there is no problem with using them for more
* than one tdata at the same time, even though a gctx lock may be acquired
* while holding a tdata lock.
*/
malloc_mutex_t *tdata_locks;
/*
* Global hash of (prof_bt_t *)-->(prof_gctx_t *). This is the master data
* structure that knows about all backtraces currently captured.
*/
static ckh_t bt2gctx;
/*
* Tree of all extant prof_tdata_t structures, regardless of state,
* {attached,detached,expired}.
*/
static prof_tdata_tree_t tdatas;
size_t prof_unbiased_sz[PROF_SC_NSIZES];
size_t prof_shifted_unbiased_cnt[PROF_SC_NSIZES];
/******************************************************************************/
/* Red-black trees. */
static int
prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b) {
uint64_t a_thr_uid = a->thr_uid;
uint64_t b_thr_uid = b->thr_uid;
int ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid);
if (ret == 0) {
uint64_t a_thr_discrim = a->thr_discrim;
uint64_t b_thr_discrim = b->thr_discrim;
ret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim <
b_thr_discrim);
if (ret == 0) {
uint64_t a_tctx_uid = a->tctx_uid;
uint64_t b_tctx_uid = b->tctx_uid;
ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid <
b_tctx_uid);
}
}
return ret;
}
rb_gen(static UNUSED, tctx_tree_, prof_tctx_tree_t, prof_tctx_t,
tctx_link, prof_tctx_comp)
static int
prof_gctx_comp(const prof_gctx_t *a, const prof_gctx_t *b) {
unsigned a_len = a->bt.len;
unsigned b_len = b->bt.len;
unsigned comp_len = (a_len < b_len) ? a_len : b_len;
int ret = memcmp(a->bt.vec, b->bt.vec, comp_len * sizeof(void *));
if (ret == 0) {
ret = (a_len > b_len) - (a_len < b_len);
}
return ret;
}
rb_gen(static UNUSED, gctx_tree_, prof_gctx_tree_t, prof_gctx_t, dump_link,
prof_gctx_comp)
static int
prof_tdata_comp(const prof_tdata_t *a, const prof_tdata_t *b) {
int ret;
uint64_t a_uid = a->thr_uid;
uint64_t b_uid = b->thr_uid;
ret = ((a_uid > b_uid) - (a_uid < b_uid));
if (ret == 0) {
uint64_t a_discrim = a->thr_discrim;
uint64_t b_discrim = b->thr_discrim;
ret = ((a_discrim > b_discrim) - (a_discrim < b_discrim));
}
return ret;
}
rb_gen(static UNUSED, tdata_tree_, prof_tdata_tree_t, prof_tdata_t, tdata_link,
prof_tdata_comp)
/******************************************************************************/
static malloc_mutex_t *
prof_gctx_mutex_choose(void) {
unsigned ngctxs = atomic_fetch_add_u(&cum_gctxs, 1, ATOMIC_RELAXED);
return &gctx_locks[(ngctxs - 1) % PROF_NCTX_LOCKS];
}
static malloc_mutex_t *
prof_tdata_mutex_choose(uint64_t thr_uid) {
return &tdata_locks[thr_uid % PROF_NTDATA_LOCKS];
}
bool
prof_data_init(tsd_t *tsd) {
tdata_tree_new(&tdatas);
return ckh_new(tsd, &bt2gctx, PROF_CKH_MINITEMS,
prof_bt_hash, prof_bt_keycomp);
}
static void
prof_enter(tsd_t *tsd, prof_tdata_t *tdata) {
cassert(config_prof);
assert(tdata == prof_tdata_get(tsd, false));
if (tdata != NULL) {
assert(!tdata->enq);
tdata->enq = true;
}
malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);
}
static void
prof_leave(tsd_t *tsd, prof_tdata_t *tdata) {
cassert(config_prof);
assert(tdata == prof_tdata_get(tsd, false));
malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);
if (tdata != NULL) {
bool idump, gdump;
assert(tdata->enq);
tdata->enq = false;
idump = tdata->enq_idump;
tdata->enq_idump = false;
gdump = tdata->enq_gdump;
tdata->enq_gdump = false;
if (idump) {
prof_idump(tsd_tsdn(tsd));
}
if (gdump) {
prof_gdump(tsd_tsdn(tsd));
}
}
}
static prof_gctx_t *
prof_gctx_create(tsdn_t *tsdn, prof_bt_t *bt) {
/*
* Create a single allocation that has space for vec of length bt->len.
*/
size_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *));
prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsdn, size,
sz_size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true),
true);
if (gctx == NULL) {
return NULL;
}
gctx->lock = prof_gctx_mutex_choose();
/*
* Set nlimbo to 1, in order to avoid a race condition with
* prof_tctx_destroy()/prof_gctx_try_destroy().
*/
gctx->nlimbo = 1;
tctx_tree_new(&gctx->tctxs);
/* Duplicate bt. */
memcpy(gctx->vec, bt->vec, bt->len * sizeof(void *));
gctx->bt.vec = gctx->vec;
gctx->bt.len = bt->len;
return gctx;
}
static void
prof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self,
prof_gctx_t *gctx) {
cassert(config_prof);
/*
* Check that gctx is still unused by any thread cache before destroying
* it. prof_lookup() increments gctx->nlimbo in order to avoid a race
* condition with this function, as does prof_tctx_destroy() in order to
* avoid a race between the main body of prof_tctx_destroy() and entry
* into this function.
*/
prof_enter(tsd, tdata_self);
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
assert(gctx->nlimbo != 0);
if (tctx_tree_empty(&gctx->tctxs) && gctx->nlimbo == 1) {
/* Remove gctx from bt2gctx. */
if (ckh_remove(tsd, &bt2gctx, &gctx->bt, NULL, NULL)) {
not_reached();
}
prof_leave(tsd, tdata_self);
/* Destroy gctx. */
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
idalloctm(tsd_tsdn(tsd), gctx, NULL, NULL, true, true);
} else {
/*
* Compensate for increment in prof_tctx_destroy() or
* prof_lookup().
*/
gctx->nlimbo--;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
prof_leave(tsd, tdata_self);
}
}
static bool
prof_gctx_should_destroy(prof_gctx_t *gctx) {
if (opt_prof_accum) {
return false;
}
if (!tctx_tree_empty(&gctx->tctxs)) {
return false;
}
if (gctx->nlimbo != 0) {
return false;
}
return true;
}
static bool
prof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata,
void **p_btkey, prof_gctx_t **p_gctx, bool *p_new_gctx) {
union {
prof_gctx_t *p;
void *v;
} gctx, tgctx;
union {
prof_bt_t *p;
void *v;
} btkey;
bool new_gctx;
prof_enter(tsd, tdata);
if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {
/* bt has never been seen before. Insert it. */
prof_leave(tsd, tdata);
tgctx.p = prof_gctx_create(tsd_tsdn(tsd), bt);
if (tgctx.v == NULL) {
return true;
}
prof_enter(tsd, tdata);
if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {
gctx.p = tgctx.p;
btkey.p = &gctx.p->bt;
if (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) {
/* OOM. */
prof_leave(tsd, tdata);
idalloctm(tsd_tsdn(tsd), gctx.v, NULL, NULL,
true, true);
return true;
}
new_gctx = true;
} else {
new_gctx = false;
}
} else {
tgctx.v = NULL;
new_gctx = false;
}
if (!new_gctx) {
/*
* Increment nlimbo, in order to avoid a race condition with
* prof_tctx_destroy()/prof_gctx_try_destroy().
*/
malloc_mutex_lock(tsd_tsdn(tsd), gctx.p->lock);
gctx.p->nlimbo++;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx.p->lock);
new_gctx = false;
if (tgctx.v != NULL) {
/* Lost race to insert. */
idalloctm(tsd_tsdn(tsd), tgctx.v, NULL, NULL, true,
true);
}
}
prof_leave(tsd, tdata);
*p_btkey = btkey.v;
*p_gctx = gctx.p;
*p_new_gctx = new_gctx;
return false;
}
prof_tctx_t *
prof_lookup(tsd_t *tsd, prof_bt_t *bt) {
union {
prof_tctx_t *p;
void *v;
} ret;
prof_tdata_t *tdata;
bool not_found;
cassert(config_prof);
tdata = prof_tdata_get(tsd, false);
assert(tdata != NULL);
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
not_found = ckh_search(&tdata->bt2tctx, bt, NULL, &ret.v);
if (!not_found) { /* Note double negative! */
ret.p->prepared = true;
}
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (not_found) {
void *btkey;
prof_gctx_t *gctx;
bool new_gctx, error;
/*
* This thread's cache lacks bt. Look for it in the global
* cache.
*/
if (prof_lookup_global(tsd, bt, tdata, &btkey, &gctx,
&new_gctx)) {
return NULL;
}
/* Link a prof_tctx_t into gctx for this thread. */
ret.v = iallocztm(tsd_tsdn(tsd), sizeof(prof_tctx_t),
sz_size2index(sizeof(prof_tctx_t)), false, NULL, true,
arena_ichoose(tsd, NULL), true);
if (ret.p == NULL) {
if (new_gctx) {
prof_gctx_try_destroy(tsd, tdata, gctx);
}
return NULL;
}
ret.p->tdata = tdata;
ret.p->thr_uid = tdata->thr_uid;
ret.p->thr_discrim = tdata->thr_discrim;
ret.p->recent_count = 0;
memset(&ret.p->cnts, 0, sizeof(prof_cnt_t));
ret.p->gctx = gctx;
ret.p->tctx_uid = tdata->tctx_uid_next++;
ret.p->prepared = true;
ret.p->state = prof_tctx_state_initializing;
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
error = ckh_insert(tsd, &tdata->bt2tctx, btkey, ret.v);
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (error) {
if (new_gctx) {
prof_gctx_try_destroy(tsd, tdata, gctx);
}
idalloctm(tsd_tsdn(tsd), ret.v, NULL, NULL, true, true);
return NULL;
}
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
ret.p->state = prof_tctx_state_nominal;
tctx_tree_insert(&gctx->tctxs, ret.p);
gctx->nlimbo--;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
}
return ret.p;
}
/* Used in unit tests. */
static prof_tdata_t *
prof_tdata_count_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,
void *arg) {
size_t *tdata_count = (size_t *)arg;
(*tdata_count)++;
return NULL;
}
/* Used in unit tests. */
size_t
prof_tdata_count(void) {
size_t tdata_count = 0;
tsdn_t *tsdn;
tsdn = tsdn_fetch();
malloc_mutex_lock(tsdn, &tdatas_mtx);
tdata_tree_iter(&tdatas, NULL, prof_tdata_count_iter,
(void *)&tdata_count);
malloc_mutex_unlock(tsdn, &tdatas_mtx);
return tdata_count;
}
/* Used in unit tests. */
size_t
prof_bt_count(void) {
size_t bt_count;
tsd_t *tsd;
prof_tdata_t *tdata;
tsd = tsd_fetch();
tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
return 0;
}
malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);
bt_count = ckh_count(&bt2gctx);
malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);
return bt_count;
}
char *
prof_thread_name_alloc(tsd_t *tsd, const char *thread_name) {
char *ret;
size_t size;
if (thread_name == NULL) {
return NULL;
}
size = strlen(thread_name) + 1;
if (size == 1) {
return "";
}
ret = iallocztm(tsd_tsdn(tsd), size, sz_size2index(size), false, NULL,
true, arena_get(TSDN_NULL, 0, true), true);
if (ret == NULL) {
return NULL;
}
memcpy(ret, thread_name, size);
return ret;
}
int
prof_thread_name_set_impl(tsd_t *tsd, const char *thread_name) {
assert(tsd_reentrancy_level_get(tsd) == 0);
prof_tdata_t *tdata;
unsigned i;
char *s;
tdata = prof_tdata_get(tsd, true);
if (tdata == NULL) {
return EAGAIN;
}
/* Validate input. */
if (thread_name == NULL) {
return EFAULT;
}
for (i = 0; thread_name[i] != '\0'; i++) {
char c = thread_name[i];
if (!isgraph(c) && !isblank(c)) {
return EFAULT;
}
}
s = prof_thread_name_alloc(tsd, thread_name);
if (s == NULL) {
return EAGAIN;
}
if (tdata->thread_name != NULL) {
idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,
true);
tdata->thread_name = NULL;
}
if (strlen(s) > 0) {
tdata->thread_name = s;
}
return 0;
}
JEMALLOC_FORMAT_PRINTF(3, 4)
static void
prof_dump_printf(write_cb_t *prof_dump_write, void *cbopaque,
const char *format, ...) {
va_list ap;
char buf[PROF_PRINTF_BUFSIZE];
va_start(ap, format);
malloc_vsnprintf(buf, sizeof(buf), format, ap);
va_end(ap);
prof_dump_write(cbopaque, buf);
}
/*
 * A double cast to a uint64_t may be out of the representable range, which is
 * UB. I don't think this is practically possible with the cur counters, but
 * it plausibly could be with the accum counters.
*/
#ifdef JEMALLOC_PROF
static uint64_t
prof_double_uint64_cast(double d) {
/*
* Note: UINT64_MAX + 1 is exactly representable as a double on all
* reasonable platforms (certainly those we'll support). Writing this
* as !(a < b) instead of (a >= b) means that we're NaN-safe.
*/
double rounded = round(d);
if (!(rounded < (double)UINT64_MAX)) {
return UINT64_MAX;
}
return (uint64_t)rounded;
}
#endif
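/*
 * Illustrative edge cases (a standalone sketch, not part of the build; the
 * helper name below is hypothetical): any comparison against NaN is false,
 * so !(rounded < (double)UINT64_MAX) is true for NaN as well as for
 * out-of-range values, and both get clamped rather than hitting UB.
 */
#if 0
#include <assert.h>
#include <math.h>
#include <stdint.h>
static void
prof_double_uint64_cast_example(void) {
	assert(prof_double_uint64_cast(42.6) == 43);           /* Normal rounding. */
	assert(prof_double_uint64_cast(1e300) == UINT64_MAX);  /* Clamped, no UB. */
	assert(prof_double_uint64_cast(NAN) == UINT64_MAX);    /* NaN clamped too. */
}
#endif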
void
prof_unbias_map_init(void) {
	/* See the comment in prof_sample_new_event_wait(). */
#ifdef JEMALLOC_PROF
for (szind_t i = 0; i < SC_NSIZES; i++) {
double sz = (double)sz_index2size(i);
double rate = (double)(ZU(1) << lg_prof_sample);
double div_val = 1.0 - exp(-sz / rate);
double unbiased_sz = sz / div_val;
/*
* The "true" right value for the unbiased count is
* 1.0/(1 - exp(-sz/rate)). The problem is, we keep the counts
* as integers (for a variety of reasons -- rounding errors
* could trigger asserts, and not all libcs can properly handle
* floating point arithmetic during malloc calls inside libc).
* Rounding to an integer, though, can lead to rounding errors
* of over 30% for sizes close to the sampling rate. So
* instead, we multiply by a constant, dividing the maximum
* possible roundoff error by that constant. To avoid overflow
* in summing up size_t values, the largest safe constant we can
* pick is the size of the smallest allocation.
*/
double cnt_shift = (double)(ZU(1) << SC_LG_TINY_MIN);
double shifted_unbiased_cnt = cnt_shift / div_val;
prof_unbiased_sz[i] = (size_t)round(unbiased_sz);
prof_shifted_unbiased_cnt[i] = (size_t)round(
shifted_unbiased_cnt);
}
#else
unreachable();
#endif
}
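/*
 * Worked example for the computation above (illustrative numbers, assuming
 * the default lg_prof_sample of 19, i.e. rate R = 524288, and
 * SC_LG_TINY_MIN = 3): for the 16-byte size class,
 *   div_val              = 1 - exp(-16/524288) ~= 16/524288
 *   unbiased_sz          = 16 / div_val        ~= 524296  (about R + sz/2)
 *   shifted_unbiased_cnt = 8 / div_val         ~= 262148  (about 8 * R/sz)
 * i.e. each sampled 16-byte object stands in for roughly one sampling
 * period's worth of bytes, with the count kept scaled by 2^SC_LG_TINY_MIN.
 */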
/*
* The unbiasing story is long. The jeprof unbiasing logic was copied from
* pprof. Both shared an issue: they unbiased using the average size of the
* allocations at a particular stack trace. This can work out OK if allocations
* are mostly of the same size given some stack, but not otherwise. We now
* internally track what the unbiased results ought to be. We can't just report
* them as they are though; they'll still go through the jeprof unbiasing
* process. Instead, we figure out what values we can feed *into* jeprof's
* unbiasing mechanism that will lead to getting the right values out.
*
* It'll unbias count and aggregate size as:
*
 * c_out = c_in * 1/(1-exp(-s_in/c_in/R))
 * s_out = s_in * 1/(1-exp(-s_in/c_in/R))
*
* We want to solve for the values of c_in and s_in that will
* give the c_out and s_out that we've computed internally.
*
* Let's do a change of variables (both to make the math easier and to make it
* easier to write):
* x = s_in / c_in
* y = s_in
* k = 1/R.
*
* Then
* c_out = y/x * 1/(1-exp(-k*x))
* s_out = y * 1/(1-exp(-k*x))
*
* The first equation gives:
* y = x * c_out * (1-exp(-k*x))
* The second gives:
* y = s_out * (1-exp(-k*x))
* So we have
* x = s_out / c_out.
* And all the other values fall out from that.
*
* This is all a fair bit of work. The thing we get out of it is that we don't
* break backwards compatibility with jeprof (and the various tools that have
* copied its unbiasing logic). Eventually, we anticipate a v3 heap profile
* dump format based on JSON, at which point I think much of this logic can get
* cleaned up (since we'll be taking a compatibility break there anyways).
*/
static void
prof_do_unbias(uint64_t c_out_shifted_i, uint64_t s_out_i, uint64_t *r_c_in,
uint64_t *r_s_in) {
#ifdef JEMALLOC_PROF
if (c_out_shifted_i == 0 || s_out_i == 0) {
*r_c_in = 0;
*r_s_in = 0;
return;
}
/*
* See the note in prof_unbias_map_init() to see why we take c_out in a
* shifted form.
*/
double c_out = (double)c_out_shifted_i
/ (double)(ZU(1) << SC_LG_TINY_MIN);
double s_out = (double)s_out_i;
double R = (double)(ZU(1) << lg_prof_sample);
double x = s_out / c_out;
double y = s_out * (1.0 - exp(-x / R));
double c_in = y / x;
double s_in = y;
*r_c_in = prof_double_uint64_cast(c_in);
*r_s_in = prof_double_uint64_cast(s_in);
#else
unreachable();
#endif
}
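/*
 * Round-trip sketch (illustrative, not part of the build; the function name
 * is hypothetical): feeding the c_in/s_in computed above back through
 * jeprof's unbiasing formula 1/(1 - exp(-(s_in/c_in)/R)) should reproduce
 * the internally tracked c_out/s_out.
 */
#if 0
#include <assert.h>
#include <math.h>
static void
prof_unbias_round_trip_example(double c_out, double s_out, double R) {
	/* Invert, as prof_do_unbias() does. */
	double x = s_out / c_out;
	double y = s_out * (1.0 - exp(-x / R));
	double c_in = y / x;
	double s_in = y;
	/* Re-apply jeprof's unbiasing and check the outputs come back. */
	double scale = 1.0 / (1.0 - exp(-(s_in / c_in) / R));
	assert(fabs(c_in * scale - c_out) < 1e-6 * c_out);
	assert(fabs(s_in * scale - s_out) < 1e-6 * s_out);
}
#endif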
static void
prof_dump_print_cnts(write_cb_t *prof_dump_write, void *cbopaque,
const prof_cnt_t *cnts) {
uint64_t curobjs;
uint64_t curbytes;
uint64_t accumobjs;
uint64_t accumbytes;
if (opt_prof_unbias) {
prof_do_unbias(cnts->curobjs_shifted_unbiased,
cnts->curbytes_unbiased, &curobjs, &curbytes);
prof_do_unbias(cnts->accumobjs_shifted_unbiased,
cnts->accumbytes_unbiased, &accumobjs, &accumbytes);
} else {
curobjs = cnts->curobjs;
curbytes = cnts->curbytes;
accumobjs = cnts->accumobjs;
accumbytes = cnts->accumbytes;
}
prof_dump_printf(prof_dump_write, cbopaque,
"%"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]",
curobjs, curbytes, accumobjs, accumbytes);
}
static void
prof_tctx_merge_tdata(tsdn_t *tsdn, prof_tctx_t *tctx, prof_tdata_t *tdata) {
malloc_mutex_assert_owner(tsdn, tctx->tdata->lock);
malloc_mutex_lock(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_initializing:
malloc_mutex_unlock(tsdn, tctx->gctx->lock);
return;
case prof_tctx_state_nominal:
tctx->state = prof_tctx_state_dumping;
malloc_mutex_unlock(tsdn, tctx->gctx->lock);
memcpy(&tctx->dump_cnts, &tctx->cnts, sizeof(prof_cnt_t));
tdata->cnt_summed.curobjs += tctx->dump_cnts.curobjs;
tdata->cnt_summed.curobjs_shifted_unbiased
+= tctx->dump_cnts.curobjs_shifted_unbiased;
tdata->cnt_summed.curbytes += tctx->dump_cnts.curbytes;
tdata->cnt_summed.curbytes_unbiased
+= tctx->dump_cnts.curbytes_unbiased;
if (opt_prof_accum) {
tdata->cnt_summed.accumobjs +=
tctx->dump_cnts.accumobjs;
tdata->cnt_summed.accumobjs_shifted_unbiased +=
tctx->dump_cnts.accumobjs_shifted_unbiased;
tdata->cnt_summed.accumbytes +=
tctx->dump_cnts.accumbytes;
tdata->cnt_summed.accumbytes_unbiased +=
tctx->dump_cnts.accumbytes_unbiased;
}
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
not_reached();
}
}
static void
prof_tctx_merge_gctx(tsdn_t *tsdn, prof_tctx_t *tctx, prof_gctx_t *gctx) {
malloc_mutex_assert_owner(tsdn, gctx->lock);
gctx->cnt_summed.curobjs += tctx->dump_cnts.curobjs;
gctx->cnt_summed.curobjs_shifted_unbiased
+= tctx->dump_cnts.curobjs_shifted_unbiased;
gctx->cnt_summed.curbytes += tctx->dump_cnts.curbytes;
gctx->cnt_summed.curbytes_unbiased += tctx->dump_cnts.curbytes_unbiased;
if (opt_prof_accum) {
gctx->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs;
gctx->cnt_summed.accumobjs_shifted_unbiased
+= tctx->dump_cnts.accumobjs_shifted_unbiased;
gctx->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes;
gctx->cnt_summed.accumbytes_unbiased
+= tctx->dump_cnts.accumbytes_unbiased;
}
}
static prof_tctx_t *
prof_tctx_merge_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
malloc_mutex_assert_owner(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
/* New since dumping started; ignore. */
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
prof_tctx_merge_gctx(tsdn, tctx, tctx->gctx);
break;
default:
not_reached();
}
return NULL;
}
typedef struct prof_dump_iter_arg_s prof_dump_iter_arg_t;
struct prof_dump_iter_arg_s {
tsdn_t *tsdn;
write_cb_t *prof_dump_write;
void *cbopaque;
};
static prof_tctx_t *
prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *opaque) {
prof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;
malloc_mutex_assert_owner(arg->tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_initializing:
case prof_tctx_state_nominal:
/* Not captured by this dump. */
break;
case prof_tctx_state_dumping:
case prof_tctx_state_purgatory:
prof_dump_printf(arg->prof_dump_write, arg->cbopaque,
" t%"FMTu64": ", tctx->thr_uid);
prof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,
&tctx->dump_cnts);
arg->prof_dump_write(arg->cbopaque, "\n");
break;
default:
not_reached();
}
return NULL;
}
static prof_tctx_t *
prof_tctx_finish_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
prof_tctx_t *ret;
malloc_mutex_assert_owner(tsdn, tctx->gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
/* New since dumping started; ignore. */
break;
case prof_tctx_state_dumping:
tctx->state = prof_tctx_state_nominal;
break;
case prof_tctx_state_purgatory:
ret = tctx;
goto label_return;
default:
not_reached();
}
ret = NULL;
label_return:
return ret;
}
static void
prof_dump_gctx_prep(tsdn_t *tsdn, prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) {
cassert(config_prof);
malloc_mutex_lock(tsdn, gctx->lock);
/*
* Increment nlimbo so that gctx won't go away before dump.
* Additionally, link gctx into the dump list so that it is included in
* prof_dump()'s second pass.
*/
gctx->nlimbo++;
gctx_tree_insert(gctxs, gctx);
memset(&gctx->cnt_summed, 0, sizeof(prof_cnt_t));
malloc_mutex_unlock(tsdn, gctx->lock);
}
typedef struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg_t;
struct prof_gctx_merge_iter_arg_s {
tsdn_t *tsdn;
size_t *leak_ngctx;
};
static prof_gctx_t *
prof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {
prof_gctx_merge_iter_arg_t *arg = (prof_gctx_merge_iter_arg_t *)opaque;
malloc_mutex_lock(arg->tsdn, gctx->lock);
tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter,
(void *)arg->tsdn);
if (gctx->cnt_summed.curobjs != 0) {
(*arg->leak_ngctx)++;
}
malloc_mutex_unlock(arg->tsdn, gctx->lock);
return NULL;
}
static void
prof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs) {
prof_tdata_t *tdata = prof_tdata_get(tsd, false);
prof_gctx_t *gctx;
/*
* Standard tree iteration won't work here, because as soon as we
* decrement gctx->nlimbo and unlock gctx, another thread can
* concurrently destroy it, which will corrupt the tree. Therefore,
* tear down the tree one node at a time during iteration.
*/
while ((gctx = gctx_tree_first(gctxs)) != NULL) {
gctx_tree_remove(gctxs, gctx);
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
{
prof_tctx_t *next;
next = NULL;
do {
prof_tctx_t *to_destroy =
tctx_tree_iter(&gctx->tctxs, next,
prof_tctx_finish_iter,
(void *)tsd_tsdn(tsd));
if (to_destroy != NULL) {
next = tctx_tree_next(&gctx->tctxs,
to_destroy);
tctx_tree_remove(&gctx->tctxs,
to_destroy);
idalloctm(tsd_tsdn(tsd), to_destroy,
NULL, NULL, true, true);
} else {
next = NULL;
}
} while (next != NULL);
}
gctx->nlimbo--;
if (prof_gctx_should_destroy(gctx)) {
gctx->nlimbo++;
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
prof_gctx_try_destroy(tsd, tdata, gctx);
} else {
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
}
}
}
typedef struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg_t;
struct prof_tdata_merge_iter_arg_s {
tsdn_t *tsdn;
prof_cnt_t *cnt_all;
};
static prof_tdata_t *
prof_tdata_merge_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,
void *opaque) {
prof_tdata_merge_iter_arg_t *arg =
(prof_tdata_merge_iter_arg_t *)opaque;
malloc_mutex_lock(arg->tsdn, tdata->lock);
if (!tdata->expired) {
size_t tabind;
union {
prof_tctx_t *p;
void *v;
} tctx;
tdata->dumping = true;
memset(&tdata->cnt_summed, 0, sizeof(prof_cnt_t));
for (tabind = 0; !ckh_iter(&tdata->bt2tctx, &tabind, NULL,
&tctx.v);) {
prof_tctx_merge_tdata(arg->tsdn, tctx.p, tdata);
}
arg->cnt_all->curobjs += tdata->cnt_summed.curobjs;
arg->cnt_all->curobjs_shifted_unbiased
+= tdata->cnt_summed.curobjs_shifted_unbiased;
arg->cnt_all->curbytes += tdata->cnt_summed.curbytes;
arg->cnt_all->curbytes_unbiased
+= tdata->cnt_summed.curbytes_unbiased;
if (opt_prof_accum) {
arg->cnt_all->accumobjs += tdata->cnt_summed.accumobjs;
arg->cnt_all->accumobjs_shifted_unbiased
+= tdata->cnt_summed.accumobjs_shifted_unbiased;
arg->cnt_all->accumbytes +=
tdata->cnt_summed.accumbytes;
arg->cnt_all->accumbytes_unbiased +=
tdata->cnt_summed.accumbytes_unbiased;
}
} else {
tdata->dumping = false;
}
malloc_mutex_unlock(arg->tsdn, tdata->lock);
return NULL;
}
static prof_tdata_t *
prof_tdata_dump_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,
void *opaque) {
if (!tdata->dumping) {
return NULL;
}
prof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;
prof_dump_printf(arg->prof_dump_write, arg->cbopaque, " t%"FMTu64": ",
tdata->thr_uid);
prof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,
&tdata->cnt_summed);
if (tdata->thread_name != NULL) {
arg->prof_dump_write(arg->cbopaque, " ");
arg->prof_dump_write(arg->cbopaque, tdata->thread_name);
}
arg->prof_dump_write(arg->cbopaque, "\n");
return NULL;
}
static void
prof_dump_header(prof_dump_iter_arg_t *arg, const prof_cnt_t *cnt_all) {
prof_dump_printf(arg->prof_dump_write, arg->cbopaque,
"heap_v2/%"FMTu64"\n t*: ", ((uint64_t)1U << lg_prof_sample));
prof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque, cnt_all);
arg->prof_dump_write(arg->cbopaque, "\n");
malloc_mutex_lock(arg->tsdn, &tdatas_mtx);
tdata_tree_iter(&tdatas, NULL, prof_tdata_dump_iter, arg);
malloc_mutex_unlock(arg->tsdn, &tdatas_mtx);
}
static void
prof_dump_gctx(prof_dump_iter_arg_t *arg, prof_gctx_t *gctx,
const prof_bt_t *bt, prof_gctx_tree_t *gctxs) {
cassert(config_prof);
malloc_mutex_assert_owner(arg->tsdn, gctx->lock);
	/* Avoid dumping gctx's that have no useful data. */
if ((!opt_prof_accum && gctx->cnt_summed.curobjs == 0) ||
(opt_prof_accum && gctx->cnt_summed.accumobjs == 0)) {
assert(gctx->cnt_summed.curobjs == 0);
assert(gctx->cnt_summed.curbytes == 0);
/*
* These asserts would not be correct -- see the comment on races
* in prof.c
		 * assert(gctx->cnt_summed.curobjs_shifted_unbiased == 0);
* assert(gctx->cnt_summed.curbytes_unbiased == 0);
*/
assert(gctx->cnt_summed.accumobjs == 0);
assert(gctx->cnt_summed.accumobjs_shifted_unbiased == 0);
assert(gctx->cnt_summed.accumbytes == 0);
assert(gctx->cnt_summed.accumbytes_unbiased == 0);
return;
}
arg->prof_dump_write(arg->cbopaque, "@");
for (unsigned i = 0; i < bt->len; i++) {
prof_dump_printf(arg->prof_dump_write, arg->cbopaque,
" %#"FMTxPTR, (uintptr_t)bt->vec[i]);
}
arg->prof_dump_write(arg->cbopaque, "\n t*: ");
prof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,
&gctx->cnt_summed);
arg->prof_dump_write(arg->cbopaque, "\n");
tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_dump_iter, arg);
}
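/*
 * Shape of the emitted records (values are made up; the exact spacing comes
 * from the format strings above).  prof_dump_header() writes the "heap_v2"
 * line plus the global and per-thread counter lines; prof_dump_gctx() then
 * writes one "@" block per backtrace:
 *
 *   heap_v2/524288
 *    t*: 10: 5242880 [0: 0]
 *    t3: 6: 3145728 [0: 0]
 *   @ 0x7f3a12345678 0x7f3a12345abc
 *    t*: 4: 2097152 [0: 0]
 *    t3: 4: 2097152 [0: 0]
 *
 * Each counter group reads "curobjs: curbytes [accumobjs: accumbytes]".
 */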
/*
* See prof_sample_new_event_wait() comment for why the body of this function
* is conditionally compiled.
*/
static void
prof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_ngctx) {
#ifdef JEMALLOC_PROF
/*
	 * Scaling is equivalent to AdjustSamples() in jeprof, but the result may
* differ slightly from what jeprof reports, because here we scale the
* summary values, whereas jeprof scales each context individually and
* reports the sums of the scaled values.
*/
if (cnt_all->curbytes != 0) {
double sample_period = (double)((uint64_t)1 << lg_prof_sample);
double ratio = (((double)cnt_all->curbytes) /
(double)cnt_all->curobjs) / sample_period;
double scale_factor = 1.0 / (1.0 - exp(-ratio));
uint64_t curbytes = (uint64_t)round(((double)cnt_all->curbytes)
* scale_factor);
uint64_t curobjs = (uint64_t)round(((double)cnt_all->curobjs) *
scale_factor);
malloc_printf("<jemalloc>: Leak approximation summary: ~%"FMTu64
" byte%s, ~%"FMTu64" object%s, >= %zu context%s\n",
curbytes, (curbytes != 1) ? "s" : "", curobjs, (curobjs !=
1) ? "s" : "", leak_ngctx, (leak_ngctx != 1) ? "s" : "");
malloc_printf(
"<jemalloc>: Run jeprof on dump output for leak detail\n");
if (opt_prof_leak_error) {
malloc_printf(
"<jemalloc>: Exiting with error code because memory"
" leaks were detected\n");
/*
			 * Use _exit() (note the underscore) to avoid invoking
			 * atexit() handlers and entering an endless cycle.
*/
_exit(1);
}
}
#endif
}
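/*
 * Worked example for the scaling above (illustrative numbers): with
 * lg_prof_sample = 19 (sample_period = 524288), curbytes = 1048576 and
 * curobjs = 4 give ratio = (1048576/4)/524288 = 0.5 and
 * scale_factor = 1/(1 - exp(-0.5)) ~= 2.54, so the summary would report
 * roughly 2.66 million bytes and ~10 objects.
 */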
static prof_gctx_t *
prof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {
prof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;
malloc_mutex_lock(arg->tsdn, gctx->lock);
prof_dump_gctx(arg, gctx, &gctx->bt, gctxs);
malloc_mutex_unlock(arg->tsdn, gctx->lock);
return NULL;
}
static void
prof_dump_prep(tsd_t *tsd, prof_tdata_t *tdata, prof_cnt_t *cnt_all,
size_t *leak_ngctx, prof_gctx_tree_t *gctxs) {
size_t tabind;
union {
prof_gctx_t *p;
void *v;
} gctx;
prof_enter(tsd, tdata);
/*
* Put gctx's in limbo and clear their counters in preparation for
* summing.
*/
gctx_tree_new(gctxs);
for (tabind = 0; !ckh_iter(&bt2gctx, &tabind, NULL, &gctx.v);) {
prof_dump_gctx_prep(tsd_tsdn(tsd), gctx.p, gctxs);
}
/*
* Iterate over tdatas, and for the non-expired ones snapshot their tctx
* stats and merge them into the associated gctx's.
*/
memset(cnt_all, 0, sizeof(prof_cnt_t));
prof_tdata_merge_iter_arg_t prof_tdata_merge_iter_arg = {tsd_tsdn(tsd),
cnt_all};
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
tdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter,
&prof_tdata_merge_iter_arg);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
/* Merge tctx stats into gctx's. */
*leak_ngctx = 0;
prof_gctx_merge_iter_arg_t prof_gctx_merge_iter_arg = {tsd_tsdn(tsd),
leak_ngctx};
gctx_tree_iter(gctxs, NULL, prof_gctx_merge_iter,
&prof_gctx_merge_iter_arg);
prof_leave(tsd, tdata);
}
void
prof_dump_impl(tsd_t *tsd, write_cb_t *prof_dump_write, void *cbopaque,
prof_tdata_t *tdata, bool leakcheck) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_dump_mtx);
prof_cnt_t cnt_all;
size_t leak_ngctx;
prof_gctx_tree_t gctxs;
prof_dump_prep(tsd, tdata, &cnt_all, &leak_ngctx, &gctxs);
prof_dump_iter_arg_t prof_dump_iter_arg = {tsd_tsdn(tsd),
prof_dump_write, cbopaque};
prof_dump_header(&prof_dump_iter_arg, &cnt_all);
gctx_tree_iter(&gctxs, NULL, prof_gctx_dump_iter, &prof_dump_iter_arg);
prof_gctx_finish(tsd, &gctxs);
if (leakcheck) {
prof_leakcheck(&cnt_all, leak_ngctx);
}
}
/* Used in unit tests. */
void
prof_cnt_all(prof_cnt_t *cnt_all) {
tsd_t *tsd = tsd_fetch();
prof_tdata_t *tdata = prof_tdata_get(tsd, false);
if (tdata == NULL) {
memset(cnt_all, 0, sizeof(prof_cnt_t));
} else {
size_t leak_ngctx;
prof_gctx_tree_t gctxs;
prof_dump_prep(tsd, tdata, cnt_all, &leak_ngctx, &gctxs);
prof_gctx_finish(tsd, &gctxs);
}
}
void
prof_bt_hash(const void *key, size_t r_hash[2]) {
prof_bt_t *bt = (prof_bt_t *)key;
cassert(config_prof);
hash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash);
}
bool
prof_bt_keycomp(const void *k1, const void *k2) {
const prof_bt_t *bt1 = (prof_bt_t *)k1;
const prof_bt_t *bt2 = (prof_bt_t *)k2;
cassert(config_prof);
if (bt1->len != bt2->len) {
return false;
}
return (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0);
}
prof_tdata_t *
prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim,
char *thread_name, bool active) {
assert(tsd_reentrancy_level_get(tsd) == 0);
prof_tdata_t *tdata;
cassert(config_prof);
/* Initialize an empty cache for this thread. */
tdata = (prof_tdata_t *)iallocztm(tsd_tsdn(tsd), sizeof(prof_tdata_t),
sz_size2index(sizeof(prof_tdata_t)), false, NULL, true,
arena_get(TSDN_NULL, 0, true), true);
if (tdata == NULL) {
return NULL;
}
tdata->lock = prof_tdata_mutex_choose(thr_uid);
tdata->thr_uid = thr_uid;
tdata->thr_discrim = thr_discrim;
tdata->thread_name = thread_name;
tdata->attached = true;
tdata->expired = false;
tdata->tctx_uid_next = 0;
if (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS, prof_bt_hash,
prof_bt_keycomp)) {
idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true);
return NULL;
}
tdata->enq = false;
tdata->enq_idump = false;
tdata->enq_gdump = false;
tdata->dumping = false;
tdata->active = active;
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
tdata_tree_insert(&tdatas, tdata);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
return tdata;
}
static bool
prof_tdata_should_destroy_unlocked(prof_tdata_t *tdata, bool even_if_attached) {
if (tdata->attached && !even_if_attached) {
return false;
}
if (ckh_count(&tdata->bt2tctx) != 0) {
return false;
}
return true;
}
static bool
prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata,
bool even_if_attached) {
malloc_mutex_assert_owner(tsdn, tdata->lock);
return prof_tdata_should_destroy_unlocked(tdata, even_if_attached);
}
static void
prof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata,
bool even_if_attached) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), &tdatas_mtx);
malloc_mutex_assert_not_owner(tsd_tsdn(tsd), tdata->lock);
tdata_tree_remove(&tdatas, tdata);
assert(prof_tdata_should_destroy_unlocked(tdata, even_if_attached));
if (tdata->thread_name != NULL) {
idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,
true);
}
ckh_delete(tsd, &tdata->bt2tctx);
idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true);
}
static void
prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) {
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
prof_tdata_destroy_locked(tsd, tdata, even_if_attached);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
}
void
prof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata) {
bool destroy_tdata;
malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);
if (tdata->attached) {
destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata,
true);
/*
* Only detach if !destroy_tdata, because detaching would allow
* another thread to win the race to destroy tdata.
*/
if (!destroy_tdata) {
tdata->attached = false;
}
tsd_prof_tdata_set(tsd, NULL);
} else {
destroy_tdata = false;
}
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (destroy_tdata) {
prof_tdata_destroy(tsd, tdata, true);
}
}
static bool
prof_tdata_expire(tsdn_t *tsdn, prof_tdata_t *tdata) {
bool destroy_tdata;
malloc_mutex_lock(tsdn, tdata->lock);
if (!tdata->expired) {
tdata->expired = true;
destroy_tdata = prof_tdata_should_destroy(tsdn, tdata, false);
} else {
destroy_tdata = false;
}
malloc_mutex_unlock(tsdn, tdata->lock);
return destroy_tdata;
}
static prof_tdata_t *
prof_tdata_reset_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,
void *arg) {
tsdn_t *tsdn = (tsdn_t *)arg;
return (prof_tdata_expire(tsdn, tdata) ? tdata : NULL);
}
void
prof_reset(tsd_t *tsd, size_t lg_sample) {
prof_tdata_t *next;
assert(lg_sample < (sizeof(uint64_t) << 3));
malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx);
malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);
lg_prof_sample = lg_sample;
prof_unbias_map_init();
next = NULL;
do {
prof_tdata_t *to_destroy = tdata_tree_iter(&tdatas, next,
prof_tdata_reset_iter, (void *)tsd);
if (to_destroy != NULL) {
next = tdata_tree_next(&tdatas, to_destroy);
prof_tdata_destroy_locked(tsd, to_destroy, false);
} else {
next = NULL;
}
} while (next != NULL);
malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);
malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx);
}
static bool
prof_tctx_should_destroy(tsd_t *tsd, prof_tctx_t *tctx) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);
if (opt_prof_accum) {
return false;
}
if (tctx->cnts.curobjs != 0) {
return false;
}
if (tctx->prepared) {
return false;
}
if (tctx->recent_count != 0) {
return false;
}
return true;
}
static void
prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);
assert(tctx->cnts.curobjs == 0);
assert(tctx->cnts.curbytes == 0);
/*
* These asserts are not correct -- see the comment about races in
* prof.c
*
* assert(tctx->cnts.curobjs_shifted_unbiased == 0);
* assert(tctx->cnts.curbytes_unbiased == 0);
*/
assert(!opt_prof_accum);
assert(tctx->cnts.accumobjs == 0);
assert(tctx->cnts.accumbytes == 0);
/*
	 * These asserts, by contrast, are correct, since accum counts never go
	 * down. Either prof_accum is off (in which case these should never have
	 * changed from their initial value of zero), or it's on (in which case
	 * we shouldn't be destroying this tctx).
*/
assert(tctx->cnts.accumobjs_shifted_unbiased == 0);
assert(tctx->cnts.accumbytes_unbiased == 0);
prof_gctx_t *gctx = tctx->gctx;
{
prof_tdata_t *tdata = tctx->tdata;
tctx->tdata = NULL;
ckh_remove(tsd, &tdata->bt2tctx, &gctx->bt, NULL, NULL);
bool destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd),
tdata, false);
malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);
if (destroy_tdata) {
prof_tdata_destroy(tsd, tdata, false);
}
}
bool destroy_tctx, destroy_gctx;
malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);
switch (tctx->state) {
case prof_tctx_state_nominal:
tctx_tree_remove(&gctx->tctxs, tctx);
destroy_tctx = true;
if (prof_gctx_should_destroy(gctx)) {
/*
* Increment gctx->nlimbo in order to keep another
* thread from winning the race to destroy gctx while
* this one has gctx->lock dropped. Without this, it
* would be possible for another thread to:
*
* 1) Sample an allocation associated with gctx.
* 2) Deallocate the sampled object.
* 3) Successfully prof_gctx_try_destroy(gctx).
*
* The result would be that gctx no longer exists by the
* time this thread accesses it in
* prof_gctx_try_destroy().
*/
gctx->nlimbo++;
destroy_gctx = true;
} else {
destroy_gctx = false;
}
break;
case prof_tctx_state_dumping:
/*
* A dumping thread needs tctx to remain valid until dumping
* has finished. Change state such that the dumping thread will
* complete destruction during a late dump iteration phase.
*/
tctx->state = prof_tctx_state_purgatory;
destroy_tctx = false;
destroy_gctx = false;
break;
default:
not_reached();
destroy_tctx = false;
destroy_gctx = false;
}
malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);
if (destroy_gctx) {
prof_gctx_try_destroy(tsd, prof_tdata_get(tsd, false), gctx);
}
if (destroy_tctx) {
idalloctm(tsd_tsdn(tsd), tctx, NULL, NULL, true, true);
}
}
void
prof_tctx_try_destroy(tsd_t *tsd, prof_tctx_t *tctx) {
malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);
if (prof_tctx_should_destroy(tsd, tctx)) {
/* tctx->tdata->lock will be released in prof_tctx_destroy(). */
prof_tctx_destroy(tsd, tctx);
} else {
malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock);
}
}
/******************************************************************************/