Commit a9951b1b authored by antirez

Jemalloc updated to 4.0.3.

parent e3ded027
begin-language: "Autoconf-without-aclocal-m4"
args: --no-cache
end-language: "Autoconf-without-aclocal-m4"
/*.gcov.*
/bin/jemalloc-config
/bin/jemalloc.sh
/bin/jeprof
/config.stamp
/config.log
@@ -15,6 +15,8 @@
/doc/jemalloc.html
/doc/jemalloc.3
/jemalloc.pc
/lib/
/Makefile
@@ -35,6 +37,7 @@
/include/jemalloc/jemalloc_protos.h
/include/jemalloc/jemalloc_protos_jet.h
/include/jemalloc/jemalloc_rename.h
/include/jemalloc/jemalloc_typedefs.h
/src/*.[od]
/src/*.gcda
...
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
Copyright (C) 2002-2015 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved.
Copyright (C) 2009-2015 Facebook, Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
...

Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity. Much more detail can be found in the git revision history:

https://github.com/jemalloc/jemalloc
* 4.0.3 (September 24, 2015)
This bugfix release continues the trend of xallocx() and heap profiling fixes.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
allocations when the --enable-cache-oblivious configure option is enabled.
- Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
when resizing from/to a size class that is not a multiple of the chunk size.
- Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
profile dumping started.
- Work around a potentially bad thread-specific data initialization
interaction with NPTL (glibc's pthreads implementation).
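(Editor's illustration, not part of the release notes: a minimal C sketch of
the xallocx()/MALLOCX_ZERO pattern the fixes above concern. It assumes a
jemalloc build that exports the unprefixed non-standard API, i.e. one
configured without --with-jemalloc-prefix.)

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* Start with a zeroed large allocation. */
        void *p = mallocx(4096, MALLOCX_ZERO);
        if (p == NULL)
            return 1;
        /* Try to grow in place; xallocx() never moves the allocation, and
         * returns the resulting usable size. The 4.0.3 fixes ensure that
         * any trailing bytes it adds are actually zeroed. */
        size_t usable = xallocx(p, 8192, 0, MALLOCX_ZERO);
        printf("usable size after xallocx: %zu\n", usable);
        dallocx(p, 0);
        return 0;
    }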
* 4.0.2 (September 21, 2015)
This bugfix release addresses a few bugs specific to heap profiling.
Bug fixes:
- Fix ixallocx_prof_sample() to never modify nor create sampled small
allocations. xallocx() is in general incapable of moving small allocations,
so this fix removes buggy code without loss of generality.
- Fix irallocx_prof_sample() to always allocate large regions, even when
alignment is non-zero.
- Fix prof_alloc_rollback() to read tdata from thread-specific data rather
than dereferencing a potentially invalid tctx.
* 4.0.1 (September 15, 2015)
This is a bugfix release that is somewhat high risk due to the amount of
refactoring required to address deep xallocx() problems. As a side effect of
these fixes, xallocx() now tries harder to partially fulfill requests for
optional extra space. Note that a couple of minor heap profiling
optimizations are included, but these are better thought of as performance
fixes that were integral to discovering most of the other bugs.
Optimizations:
- Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
fast path when heap profiling is enabled. Additionally, split a special
case out into arena_prof_tctx_reset(), which also avoids chunk metadata
reads.
- Optimize irallocx_prof() to optimistically update the sampler state. The
prior implementation appears to have been a holdover from when
rallocx()/xallocx() functionality was combined as rallocm().
Bug fixes:
- Fix TLS configuration such that it is enabled by default for platforms on
which it works correctly.
- Fix arenas_cache_cleanup() and arena_get_hard() to handle
allocation/deallocation within the application's thread-specific data
cleanup functions even after arenas_cache is torn down.
- Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
- Fix chunk purge hook calls for in-place huge shrinking reallocation to
specify the old chunk size rather than the new chunk size. This bug caused
no correctness issues for the default chunk purge function, but was
visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
- Fix heap profiling bugs:
+ Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This bug
could cause data structure corruption that would most likely result in a
segfault.
+ Fix irealloc_prof() to call prof_alloc_rollback() on OOM.
+ Make one call to prof_active_get_unlocked() per allocation event, and use
the result throughout the relevant functions that handle an allocation
event. Also add a missing check in prof_realloc(). These fixes protect
allocation events against concurrent prof_active changes.
+ Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
in the correct order.
+ Fix prof_realloc() to call prof_free_sampled_object() after calling
prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
the same, the tctx could have been prematurely destroyed.
- Fix portability bugs:
+ Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps. This affected systems with page sizes greater than 8
KiB.
+ Rename index_t to szind_t to avoid an existing type on Solaris.
+ Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
match glibc and avoid compilation errors when including both
jemalloc/jemalloc.h and malloc.h in C++ code.
+ Don't assume that /bin/sh is appropriate when running size_classes.sh
during configuration.
+ Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
+ Link tests to librt if it contains clock_gettime(2).
* 4.0.0 (August 17, 2015)
This version contains many speed and space optimizations, both minor and
major. The major themes are generalization, unification, and simplification.
Although many of these optimizations cause no visible behavior change, their
cumulative effect is substantial.
New features:
- Normalize size class spacing to be consistent across the complete size
range. By default there are four size classes per size doubling, but this
is now configurable via the --with-lg-size-class-group option. Also add the
--with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
--with-lg-tiny-min options, which can be used to tweak page and size class
settings. Impacts:
+ Worst case performance for incrementally growing/shrinking reallocation
is improved because there are far fewer size classes, and therefore
copying happens less often.
+ Internal fragmentation is limited to 20% for all but the smallest size
classes (those less than four times the quantum). (1B + 4 KiB)
and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.
+ Chunk fragmentation tends to be lower because there are fewer distinct run
sizes to pack.
- Add support for explicit tcaches. The "tcache.create", "tcache.flush", and
"tcache.destroy" mallctls control tcache lifetime and flushing, and the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
control which tcache is used for each operation.
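(Editor's illustration, not part of the release notes: a minimal C sketch of
the explicit-tcache workflow described above, again assuming the unprefixed
jemalloc API.)

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        unsigned tc;
        size_t sz = sizeof(tc);
        /* "tcache.create" returns the id of a fresh explicit tcache. */
        if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
            return 1;
        /* Route an allocation/deallocation pair through that tcache... */
        void *p = mallocx(128, MALLOCX_TCACHE(tc));
        dallocx(p, MALLOCX_TCACHE(tc));
        /* ...or bypass thread caching entirely for one operation. */
        void *q = mallocx(128, MALLOCX_TCACHE_NONE);
        dallocx(q, MALLOCX_TCACHE_NONE);
        /* Destroying (like flushing) takes the tcache id as input. */
        mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
        return 0;
    }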
- Implement per thread heap profiling, as well as the ability to
enable/disable heap profiling on a per thread basis. Add the "prof.reset",
"prof.lg_sample", "thread.prof.name", "thread.prof.active",
"opt.prof_thread_active_init", "prof.thread_active_init", and
"thread.prof.active" mallctls.
- Add support for per arena application-specified chunk allocators, configured
via the "arena.<i>.chunk_hooks" mallctl.
- Refactor huge allocation to be managed by arenas, so that arenas now
function as general purpose independent allocators. This is important in
the context of user-specified chunk allocators, aside from the scalability
benefits. Related new statistics:
+ The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
"stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
mallctls provide high level per arena huge allocation statistics.
+ The "arenas.nhchunks", "arenas.hchunk.<i>.size",
"stats.arenas.<i>.hchunks.<j>.nmalloc",
"stats.arenas.<i>.hchunks.<j>.ndalloc",
"stats.arenas.<i>.hchunks.<j>.nrequests", and
"stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
statistics.
- Add the 'util' column to malloc_stats_print() output, which reports the
proportion of available regions that are currently in use for each small
size class.
- Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
mallctl), so that it is possible to separately enable junk filling for
allocation versus deallocation.
- Add the jemalloc-config script, which provides information about how
jemalloc was configured, and how to integrate it into application builds.
- Add metadata statistics, which are accessible via the "stats.metadata",
"stats.arenas.<i>.metadata.mapped", and
"stats.arenas.<i>.metadata.allocated" mallctls.
- Add the "stats.resident" mallctl, which reports the upper limit of
physically resident memory mapped by the allocator.
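(Editor's illustration, not part of the release notes: a minimal C sketch
reading the new statistics mallctls. Statistics are only collected when
jemalloc is built with --enable-stats, and reads should be preceded by a
write to the "epoch" mallctl to refresh the snapshot.)

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* Refresh the statistics snapshot. */
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

        size_t metadata, resident;
        sz = sizeof(size_t);
        if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) == 0)
            printf("metadata: %zu bytes\n", metadata);
        if (mallctl("stats.resident", &resident, &sz, NULL, 0) == 0)
            printf("resident: %zu bytes\n", resident);
        return 0;
    }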
- Add per arena control over unused dirty page purging, via the
"arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
"stats.arenas.<i>.lg_dirty_mult" mallctls.
- Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
feature on/off during program execution.
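(Editor's illustration, not part of the release notes: a minimal C sketch of
toggling gdump at runtime. This assumes a build configured with --enable-prof
and profiling activated, e.g. via MALLOC_CONF="prof:true"; otherwise the
mallctl simply fails.)

    #include <stdbool.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        bool active = true;
        /* Enable gdump; while active, jemalloc dumps a heap profile each
         * time total chunk memory reaches a new high-water mark. */
        if (mallctl("prof.gdump", NULL, NULL, &active, sizeof(active)) != 0)
            return 1;
        /* ... allocation-heavy phase worth profiling ... */
        active = false;
        mallctl("prof.gdump", NULL, NULL, &active, sizeof(active));
        return 0;
    }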
- Add sdallocx(), which implements sized deallocation. The primary
optimization over dallocx() is the removal of a metadata read, which often
suffers an L1 cache miss.
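(Editor's illustration, not part of the release notes: a minimal C sketch of
sized deallocation. The size passed to sdallocx() must be the size requested
at allocation time, which is what lets jemalloc skip the metadata read.)

    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t n = 1000;
        void *p = mallocx(n, 0);
        if (p == NULL)
            return 1;
        /* ... use p ... */
        sdallocx(p, n, 0); /* sized free: no metadata lookup of p's size */
        return 0;
    }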
- Add missing header includes in jemalloc/jemalloc.h, so that applications
only have to #include <jemalloc/jemalloc.h>.
- Add support for additional platforms:
+ Bitrig
+ Cygwin
+ DragonFlyBSD
+ iOS
+ OpenBSD
+ OpenRISC/or1k
Optimizations:
- Maintain dirty runs in per arena LRUs rather than in per arena trees of
dirty-run-containing chunks. In practice this change significantly reduces
dirty page purging volume.
- Integrate whole chunks into the unused dirty page purging machinery. This
reduces the cost of repeated huge allocation/deallocation, because it
effectively introduces a cache of chunks.
- Split the arena chunk map into two separate arrays, in order to increase
cache locality for the frequently accessed bits.
- Move small run metadata out of runs, into arena chunk headers. This reduces
run fragmentation; smaller runs reduce external fragmentation for small size
classes, and the packed (less uniformly aligned) metadata layout improves CPU
cache set distribution.
- Randomly distribute large allocation base pointer alignment relative to page
boundaries in order to more uniformly utilize CPU cache sets. This can be
disabled via the --disable-cache-oblivious configure option, and queried via
the "config.cache_oblivious" mallctl.
- Micro-optimize the fast paths for the public API functions.
- Refactor thread-specific data to reside in a single structure. This assures
that only a single TLS read is necessary per call into the public API.
- Implement in-place huge allocation growing and shrinking.
- Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
additional optimizations that reduce maximum lookup depth to one or two
levels. This resolves what was a concurrency bottleneck for per arena huge
allocation, because a global data structure is critical for determining
which arenas own which huge allocations.
Incompatible changes:
- Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
warnings by default.
- Assure that the constness of malloc_usable_size()'s return type matches that
of the system implementation.
- Change the heap profile dump format to support per thread heap profiling,
rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
result, the bundled jeprof must now be used rather than the upstream
(gperftools) pprof.
- Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
internally deadlock on some platforms.
- Change the "arenas.nlruns" mallctl type from size_t to unsigned.
- Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
"stats.arenas.<i>.bins.<j>.curregs".
- Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
- Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.
Removed features:
- Remove the *allocm() API, which is superseded by the *allocx() API.
- Remove the --enable-dss option, and make dss non-optional on all platforms
which support sbrk(2).
- Remove the "arenas.purge" mallctl, which was obsoleted by the
"arena.<i>.purge" mallctl in 3.1.0.
- Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
detects whether it is running inside Valgrind.
- Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
"stats.huge.ndalloc" mallctls.
- Remove the --enable-mremap option.
- Remove the "stats.chunks.current", "stats.chunks.total", and
"stats.chunks.high" mallctls.
Bug fixes:
- Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was first released in 3.5.0.
- Fix OOM handling in memalign() and valloc(). A variant of this bug existed
in all releases since 2.0.0, which introduced these functions.
- Fix an OOM-related regression in arena_tcache_fill_small(), which could
cause cache corruption on OOM. This regression was present in all releases
from 2.2.0 through 3.6.0.
- Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
calloc(), and realloc() when profiling is enabled.
- Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
- Fix fallback lg_floor() implementations to handle extremely large inputs.
- Ensure the default purgeable zone is after the default zone on OS X.
- Fix latent bugs in atomic_*().
- Fix the "arena.<i>.dss" mallctl to handle read-only calls.
- Fix tls_model configuration to enable the initial-exec model when possible.
- Mark malloc_conf as a weak symbol so that the application can override it.
- Correctly detect glibc's adaptive pthread mutexes.
- Fix the --without-export configure option.
* 3.6.0 (March 31, 2014)
This version contains a critical bug fix for a regression present in 3.5.0 and
@@ -21,7 +261,7 @@ found in the git revision history:
backtracing to be reliable.
- Use dss allocation precedence for huge allocations as well as small/large
allocations.
- Fix test assertion failure message formatting. This bug did not manifest on
x86_64 systems because of implementation subtleties in va_list.
- Fix inconsequential test failures for hash and SFMT code.
@@ -516,7 +756,7 @@ found in the git revision history:
- Make it possible for the application to manually flush a thread's cache, via
the "tcache.flush" mallctl.
- Base maximum dirty page count on proportion of active memory.
- Compute various additional run-time statistics, including per size class
statistics for large objects.
- Expose malloc_stats_print(), which can be called repeatedly by the
application.
...
Building and installing a packaged release of jemalloc can be as simple as
typing the following while in the root directory of the source tree:

./configure
make
make install
If building from unpackaged developer sources, the simplest command sequence
that might work is:
./autogen.sh
make dist
make
make install
Note that documentation is not built by the default target because doing so
would create a dependency on xsltproc in packaged releases, hence the
requirement to either run 'make dist' or avoid installing docs via the various
install_* targets documented below.
=== Advanced configuration =====================================================

The 'configure' script supports numerous options that allow control of which
@@ -71,10 +84,10 @@ any of the following arguments (not a definitive list) to 'configure':
versions of jemalloc can coexist in the same installation directory. For
example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.

--disable-cc-silence
Disable code that silences non-useful compiler warnings. This is mainly
useful during development when auditing the set of warnings that are being
silenced.

--enable-debug
Enable assertions and validation code. This incurs a substantial
@@ -94,15 +107,15 @@ any of the following arguments (not a definitive list) to 'configure':
there are interactions between the various coverage targets, so it is
usually advisable to run 'make clean' between repeated code coverage runs.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a substantial
performance hit.
--disable-stats
Disable statistics gathering functionality. See the "opt.stats_print"
option documentation for usage details.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a minor
performance hit.
--enable-prof
Enable heap profiling and leak detection functionality. See the "opt.prof"
option documentation for usage details. When enabled, there are several
@@ -132,12 +145,6 @@ any of the following arguments (not a definitive list) to 'configure':
released in bulk, thus reducing the total number of mutex operations. See
the "opt.tcache" option for usage details.
--enable-mremap
Enable huge realloc() via mremap(2). mremap() is disabled by default
because the flavor used is specific to Linux, which has a quirk in its
virtual memory allocation algorithm that causes semi-permanent VM map holes
under normal jemalloc operation.
--disable-munmap
Disable virtual memory deallocation via munmap(2); instead keep track of
the virtual memory for later use. munmap() is disabled by default (i.e.
@@ -145,10 +152,6 @@ any of the following arguments (not a definitive list) to 'configure':
memory allocation algorithm that causes semi-permanent VM map holes under
normal jemalloc operation.
--enable-dss
Enable support for page allocation/deallocation via sbrk(2), in addition to
mmap(2).
--disable-fill
Disable support for junk/zero filling of memory, quarantine, and redzones.
See the "opt.junk", "opt.zero", "opt.quarantine", and "opt.redzone" option
@@ -157,9 +160,6 @@ any of the following arguments (not a definitive list) to 'configure':
--disable-valgrind
Disable support for Valgrind.
--disable-experimental
Disable support for the experimental API (*allocm()).
--disable-zone-allocator
Disable zone allocator for Darwin. This means jemalloc won't be hooked as
the default allocator on OSX/iOS.
@@ -185,10 +185,106 @@ any of the following arguments (not a definitive list) to 'configure':
thread-local variables via the __thread keyword. If TLS is available,
jemalloc uses it for several purposes.
--disable-cache-oblivious
Disable cache-oblivious large allocation alignment for large allocation
requests with no alignment constraints. If this feature is disabled, all
large allocations are page-aligned as an implementation artifact, which can
severely harm CPU cache utilization. However, the cache-oblivious layout
comes at the cost of one extra page per large allocation, which in the
most extreme case increases physical memory usage for the 16 KiB size class
to 20 KiB.
--with-xslroot=<path>
Specify where to find DocBook XSL stylesheets when building the
documentation.
--with-lg-page=<lg-page>
Specify the base 2 log of the system page size. This option is only useful
when cross compiling, since the configure script automatically determines
the host's page size by default.
--with-lg-page-sizes=<lg-page-sizes>
Specify the comma-separated base 2 logs of the page sizes to support. This
option may be useful when cross-compiling in combination with
--with-lg-page, but its primary use case is for integration with FreeBSD's
libc, wherein jemalloc is embedded.
--with-lg-size-class-group=<lg-size-class-group>
Specify the base 2 log of how many size classes to use for each doubling in
size. By default jemalloc uses <lg-size-class-group>=2, which results in
e.g. the following size classes:
[...], 64,
80, 96, 112, 128,
160, [...]
<lg-size-class-group>=3 results in e.g. the following size classes:
[...], 64,
72, 80, 88, 96, 104, 112, 120, 128,
144, [...]
The minimal <lg-size-class-group>=0 causes jemalloc to only provide size
classes that are powers of 2:
[...],
64,
128,
256,
[...]
An implementation detail currently limits the total number of small size
classes to 255, and a compilation error will result if the
<lg-size-class-group> you specify cannot be supported. The limit is
roughly <lg-size-class-group>=4, depending on page size.
--with-lg-quantum=<lg-quantum>
Specify the base 2 log of the minimum allocation alignment. jemalloc needs
to know the minimum alignment that meets the following C standard
requirement (quoted from the April 12, 2011 draft of the C11 standard):
The pointer returned if the allocation succeeds is suitably aligned so
that it may be assigned to a pointer to any type of object with a
fundamental alignment requirement and then used to access such an object
or an array of such objects in the space allocated [...]
This setting is architecture-specific, and although jemalloc includes known
safe values for the most commonly used modern architectures, there is a
wrinkle related to GNU libc (glibc) that may impact your choice of
<lg-quantum>. On most modern architectures, this mandates 16-byte alignment
(<lg-quantum>=4), but the glibc developers chose not to meet this
requirement for performance reasons. An old discussion can be found at
https://sourceware.org/bugzilla/show_bug.cgi?id=206 . Unlike glibc,
jemalloc does follow the C standard by default (caveat: jemalloc
technically cheats if --with-lg-tiny-min is smaller than
--with-lg-quantum), but the fact that Linux systems already work around
this allocator noncompliance means that it is generally safe in practice to
let jemalloc's minimum alignment follow glibc's lead. If you specify
--with-lg-quantum=3 during configuration, jemalloc will provide additional
size classes that are not 16-byte-aligned (24, 40, and 56, assuming
--with-lg-size-class-group=2).
--with-lg-tiny-min=<lg-tiny-min>
Specify the base 2 log of the minimum tiny size class to support. Tiny
size classes are powers of 2 less than the quantum, and are only
incorporated if <lg-tiny-min> is less than <lg-quantum> (see
--with-lg-quantum). Tiny size classes technically violate the C standard
requirement for minimum alignment, and crashes could conceivably result if
the compiler were to generate instructions that made alignment assumptions,
both because illegal instruction traps could result, and because accesses
could straddle page boundaries and cause segmentation faults due to
accessing unmapped addresses.
The default of <lg-tiny-min>=3 works well in practice even on architectures
that technically require 16-byte alignment, probably for the same reason
--with-lg-quantum=3 works. Smaller tiny size classes can, and will, cause
crashes (see https://bugzilla.mozilla.org/show_bug.cgi?id=691003 for an
example).
This option is rarely useful, and is mainly provided as documentation of a
subtle implementation detail. If you do use this option, specify a
value in [3, ..., <lg-quantum>].
The following environment variables (not a definitive list) impact configure's
behavior:
...
@@ -28,6 +28,7 @@ CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
@@ -42,14 +43,16 @@ XSLTPROC := @XSLTPROC@
AUTOCONF := @AUTOCONF@
_RPATH = @RPATH@
RPATH = $(if $(1),$(call _RPATH,$(1)))
cfghdrs_in := $(addprefix $(srcroot),@cfghdrs_in@)
cfghdrs_out := @cfghdrs_out@
cfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@)
cfgoutputs_out := @cfgoutputs_out@
enable_autogen := @enable_autogen@
enable_code_coverage := @enable_code_coverage@
enable_prof := @enable_prof@
enable_valgrind := @enable_valgrind@
enable_zone_allocator := @enable_zone_allocator@
MALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF
DSO_LDFLAGS = @DSO_LDFLAGS@
SOREV = @SOREV@
PIC_CFLAGS = @PIC_CFLAGS@
@@ -73,16 +76,20 @@ endif

LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)

# Lists of files.
BINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof
C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h
C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c \
$(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \
$(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c \
$(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \
$(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \
$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/pages.c \
$(srcroot)src/prof.c $(srcroot)src/quarantine.c $(srcroot)src/rtree.c \
$(srcroot)src/stats.c $(srcroot)src/tcache.c $(srcroot)src/util.c \
$(srcroot)src/tsd.c
ifeq ($(enable_valgrind), 1)
C_SRCS += $(srcroot)src/valgrind.c
endif
ifeq ($(enable_zone_allocator), 1)
C_SRCS += $(srcroot)src/zone.c
endif
@@ -98,53 +105,60 @@ DSOS := $(objroot)lib/$(LIBJEMALLOC).$(SOREV)
ifneq ($(SOREV),$(SO))
DSOS += $(objroot)lib/$(LIBJEMALLOC).$(SO)
endif
PC := $(objroot)jemalloc.pc
MAN3 := $(objroot)doc/jemalloc$(install_suffix).3
DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.3)
DOCS := $(DOCS_HTML) $(DOCS_MAN3)
C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
$(srcroot)test/src/btalloc_1.c $(srcroot)test/src/math.c \
$(srcroot)test/src/mtx.c $(srcroot)test/src/mq.c \
$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c
TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/bitmap.c \
$(srcroot)test/unit/ckh.c \
$(srcroot)test/unit/hash.c \
$(srcroot)test/unit/junk.c \
$(srcroot)test/unit/junk_alloc.c \
$(srcroot)test/unit/junk_free.c \
$(srcroot)test/unit/lg_chunk.c \
$(srcroot)test/unit/mallctl.c \
$(srcroot)test/unit/math.c \
$(srcroot)test/unit/mq.c \
$(srcroot)test/unit/mtx.c \
$(srcroot)test/unit/prof_accum.c \
$(srcroot)test/unit/prof_active.c \
$(srcroot)test/unit/prof_gdump.c \
$(srcroot)test/unit/prof_idump.c \
$(srcroot)test/unit/prof_reset.c \
$(srcroot)test/unit/prof_thread_name.c \
$(srcroot)test/unit/ql.c \
$(srcroot)test/unit/qr.c \
$(srcroot)test/unit/quarantine.c \
$(srcroot)test/unit/rb.c \
$(srcroot)test/unit/rtree.c \
$(srcroot)test/unit/SFMT.c \
$(srcroot)test/unit/size_classes.c \
$(srcroot)test/unit/stats.c \
$(srcroot)test/unit/tsd.c \
$(srcroot)test/unit/util.c \
$(srcroot)test/unit/zero.c
TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
$(srcroot)test/integration/allocated.c \
$(srcroot)test/integration/sdallocx.c \
$(srcroot)test/integration/mallocx.c \
$(srcroot)test/integration/MALLOCX_ARENA.c \
$(srcroot)test/integration/overflow.c \
$(srcroot)test/integration/posix_memalign.c \
$(srcroot)test/integration/rallocx.c \
$(srcroot)test/integration/thread_arena.c \
$(srcroot)test/integration/thread_tcache_enabled.c \
$(srcroot)test/integration/xallocx.c \
$(srcroot)test/integration/chunk.c
TESTS_STRESS := $(srcroot)test/stress/microbench.c
TESTS := $(TESTS_UNIT) $(TESTS_INTEGRATION) $(TESTS_STRESS)

C_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.$(O))
@@ -157,10 +171,9 @@ C_TESTLIB_STRESS_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.stress.$(O))
C_TESTLIB_OBJS := $(C_TESTLIB_UNIT_OBJS) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(C_TESTLIB_STRESS_OBJS)

TESTS_UNIT_OBJS := $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_INTEGRATION_OBJS := $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_STRESS_OBJS := $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_OBJS := $(TESTS_UNIT_OBJS) $(TESTS_INTEGRATION_OBJS) $(TESTS_STRESS_OBJS)

.PHONY: all dist build_doc_html build_doc_man build_doc
.PHONY: install_bin install_include install_lib
@@ -174,10 +187,10 @@ all: build_lib
dist: build_doc

$(objroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(XSLTPROC) -o $@ $(objroot)doc/html.xsl $<

$(objroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $<

build_doc_html: $(DOCS_HTML)
@@ -209,18 +222,12 @@ $(C_TESTLIB_STRESS_OBJS): $(objroot)test/src/%.stress.$(O): $(srcroot)test/src/%
$(C_TESTLIB_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST -DJEMALLOC_STRESS_TESTLIB
$(C_TESTLIB_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
$(TESTS_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST
$(TESTS_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST
$(TESTS_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST
$(TESTS_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.c
$(TESTS_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
ifneq ($(IMPORTLIB),$(SO))
$(C_OBJS) $(C_JET_OBJS): CPPFLAGS += -DDLLEXPORT
endif

ifndef CC_MM
@@ -229,7 +236,7 @@ HEADER_DIRS = $(srcroot)include/jemalloc/internal \
$(objroot)include/jemalloc $(objroot)include/jemalloc/internal
HEADERS = $(wildcard $(foreach dir,$(HEADER_DIRS),$(dir)/*.h))
$(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): $(HEADERS)
$(TESTS_OBJS): $(objroot)test/include/test/jemalloc_test.h
endif

$(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): %.$(O):
@@ -259,15 +266,15 @@ $(STATIC_LIBS):
$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
@@ -301,7 +308,14 @@ install_lib_static: $(STATIC_LIBS)
install -m 755 $$l $(LIBDIR); \
done

install_lib_pc: $(PC)
install -d $(LIBDIR)/pkgconfig
@for l in $(PC); do \
echo "install -m 644 $$l $(LIBDIR)/pkgconfig"; \
install -m 644 $$l $(LIBDIR)/pkgconfig; \
done

install_lib: install_lib_shared install_lib_static install_lib_pc

install_doc_html:
install -d $(DATADIR)/doc/jemalloc$(install_suffix)
@@ -330,18 +344,23 @@ check_unit_dir:
@mkdir -p $(objroot)test/unit
check_integration_dir:
@mkdir -p $(objroot)test/integration
stress_dir:
@mkdir -p $(objroot)test/stress
check_dir: check_unit_dir check_integration_dir

check_unit: tests_unit check_unit_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
check_integration: tests_integration check_integration_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
stress: tests_stress stress_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir check_integration_prof
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)

ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit
@@ -355,7 +374,7 @@ coverage_integration: check_integration
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)
coverage_stress: stress
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
@@ -400,8 +419,9 @@ clean:
rm -f $(objroot)*.gcov.*

distclean: clean
rm -f $(objroot)bin/jemalloc-config
rm -f $(objroot)bin/jemalloc.sh
rm -f $(objroot)bin/jeprof
rm -f $(objroot)config.log
rm -f $(objroot)config.status
rm -f $(objroot)config.stamp
@@ -410,7 +430,7 @@ distclean: clean
relclean: distclean
rm -f $(objroot)configure
rm -f $(objroot)VERSION
rm -f $(DOCS_HTML)
rm -f $(DOCS_MAN3)
...
4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c
#!/bin/sh
usage() {
cat <<EOF
Usage:
@BINDIR@/jemalloc-config <option>
Options:
--help | -h : Print usage.
--version : Print jemalloc version.
--revision : Print shared library revision number.
--config : Print configure options used to build jemalloc.
--prefix : Print installation directory prefix.
--bindir : Print binary installation directory.
--datadir : Print data installation directory.
--includedir : Print include installation directory.
--libdir : Print library installation directory.
--mandir : Print manual page installation directory.
--cc : Print compiler used to build jemalloc.
--cflags : Print compiler flags used to build jemalloc.
--cppflags : Print preprocessor flags used to build jemalloc.
--ldflags : Print library flags used to build jemalloc.
--libs : Print libraries jemalloc was linked against.
EOF
}
prefix="@prefix@"
exec_prefix="@exec_prefix@"
case "$1" in
--help | -h)
usage
exit 0
;;
--version)
echo "@jemalloc_version@"
;;
--revision)
echo "@rev@"
;;
--config)
echo "@CONFIG@"
;;
--prefix)
echo "@PREFIX@"
;;
--bindir)
echo "@BINDIR@"
;;
--datadir)
echo "@DATADIR@"
;;
--includedir)
echo "@INCLUDEDIR@"
;;
--libdir)
echo "@LIBDIR@"
;;
--mandir)
echo "@MANDIR@"
;;
--cc)
echo "@CC@"
;;
--cflags)
echo "@CFLAGS@"
;;
--cppflags)
echo "@CPPFLAGS@"
;;
--ldflags)
echo "@LDFLAGS@ @EXTRA_LDFLAGS@"
;;
--libs)
echo "@LIBS@"
;;
*)
usage
exit 1
esac
...@@ -40,28 +40,28 @@ ...@@ -40,28 +40,28 @@
# #
# Examples: # Examples:
# #
# % tools/pprof "program" "profile" # % tools/jeprof "program" "profile"
# Enters "interactive" mode # Enters "interactive" mode
# #
# % tools/pprof --text "program" "profile" # % tools/jeprof --text "program" "profile"
# Generates one line per procedure # Generates one line per procedure
# #
# % tools/pprof --gv "program" "profile" # % tools/jeprof --gv "program" "profile"
# Generates annotated call-graph and displays via "gv" # Generates annotated call-graph and displays via "gv"
# #
# % tools/pprof --gv --focus=Mutex "program" "profile" # % tools/jeprof --gv --focus=Mutex "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex" # Restrict to code paths that involve an entry that matches "Mutex"
# #
# % tools/pprof --gv --focus=Mutex --ignore=string "program" "profile" # % tools/jeprof --gv --focus=Mutex --ignore=string "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex" # Restrict to code paths that involve an entry that matches "Mutex"
# and does not match "string" # and does not match "string"
# #
# % tools/pprof --list=IBF_CheckDocid "program" "profile" # % tools/jeprof --list=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one # Generates disassembly listing of all routines with at least one
# sample that match the --list=<regexp> pattern. The listing is # sample that match the --list=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each line. # annotated with the flat and cumulative sample counts at each line.
# #
# % tools/pprof --disasm=IBF_CheckDocid "program" "profile" # % tools/jeprof --disasm=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one # Generates disassembly listing of all routines with at least one
# sample that match the --disasm=<regexp> pattern. The listing is # sample that match the --disasm=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each PC value. # annotated with the flat and cumulative sample counts at each PC value.
...@@ -72,10 +72,11 @@ use strict; ...@@ -72,10 +72,11 @@ use strict;
use warnings; use warnings;
use Getopt::Long; use Getopt::Long;
my $JEPROF_VERSION = "@jemalloc_version@";
my $PPROF_VERSION = "2.0"; my $PPROF_VERSION = "2.0";
# These are the object tools we use which can come from a # These are the object tools we use which can come from a
# user-specified location using --tools, from the PPROF_TOOLS # user-specified location using --tools, from the JEPROF_TOOLS
# environment variable, or from the environment. # environment variable, or from the environment.
my %obj_tool_map = ( my %obj_tool_map = (
"objdump" => "objdump", "objdump" => "objdump",
...@@ -144,13 +145,13 @@ my $sep_address = undef; ...@@ -144,13 +145,13 @@ my $sep_address = undef;
sub usage_string { sub usage_string {
return <<EOF; return <<EOF;
Usage: Usage:
pprof [options] <program> <profiles> jeprof [options] <program> <profiles>
<profiles> is a space separated list of profile names. <profiles> is a space separated list of profile names.
pprof [options] <symbolized-profiles> jeprof [options] <symbolized-profiles>
<symbolized-profiles> is a list of profile files where each file contains <symbolized-profiles> is a list of profile files where each file contains
the necessary symbol mappings as well as profile data (likely generated the necessary symbol mappings as well as profile data (likely generated
with --raw). with --raw).
pprof [options] <profile> jeprof [options] <profile>
<profile> is a remote form. Symbols are obtained from host:port$SYMBOL_PAGE <profile> is a remote form. Symbols are obtained from host:port$SYMBOL_PAGE
Each name can be: Each name can be:
...@@ -161,9 +162,9 @@ pprof [options] <profile> ...@@ -161,9 +162,9 @@ pprof [options] <profile>
$GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall, $GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall,
$CENSUSPROFILE_PAGE, or /pprof/filteredprofile. $CENSUSPROFILE_PAGE, or /pprof/filteredprofile.
For instance: For instance:
pprof http://myserver.com:80$HEAP_PAGE jeprof http://myserver.com:80$HEAP_PAGE
If /<service> is omitted, the service defaults to $PROFILE_PAGE (cpu profiling). If /<service> is omitted, the service defaults to $PROFILE_PAGE (cpu profiling).
pprof --symbols <program> jeprof --symbols <program>
Maps addresses to symbol names. In this mode, stdin should be a Maps addresses to symbol names. In this mode, stdin should be a
list of library mappings, in the same format as is found in the heap- list of library mappings, in the same format as is found in the heap-
and cpu-profile files (this loosely matches that of /proc/self/maps and cpu-profile files (this loosely matches that of /proc/self/maps
...@@ -202,7 +203,7 @@ Output type: ...@@ -202,7 +203,7 @@ Output type:
--pdf Generate PDF to stdout --pdf Generate PDF to stdout
--svg Generate SVG to stdout --svg Generate SVG to stdout
--gif Generate GIF to stdout --gif Generate GIF to stdout
--raw Generate symbolized pprof data (useful with remote fetch) --raw Generate symbolized jeprof data (useful with remote fetch)
Heap-Profile Options: Heap-Profile Options:
--inuse_space Display in-use (mega)bytes [default] --inuse_space Display in-use (mega)bytes [default]
...@@ -223,6 +224,7 @@ Call-graph Options: ...@@ -223,6 +224,7 @@ Call-graph Options:
--edgefraction=<f> Hide edges below <f>*total [default=.001] --edgefraction=<f> Hide edges below <f>*total [default=.001]
--maxdegree=<n> Max incoming/outgoing edges per node [default=8] --maxdegree=<n> Max incoming/outgoing edges per node [default=8]
--focus=<regexp> Focus on nodes matching <regexp> --focus=<regexp> Focus on nodes matching <regexp>
--thread=<n> Show profile for thread <n>
--ignore=<regexp> Ignore nodes matching <regexp> --ignore=<regexp> Ignore nodes matching <regexp>
--scale=<n> Set GV scaling [default=0] --scale=<n> Set GV scaling [default=0]
--heapcheck Make nodes with non-0 object counts --heapcheck Make nodes with non-0 object counts
...@@ -235,34 +237,34 @@ Miscellaneous: ...@@ -235,34 +237,34 @@ Miscellaneous:
--version Version information --version Version information
Environment Variables: Environment Variables:
PPROF_TMPDIR Profiles directory. Defaults to \$HOME/pprof JEPROF_TMPDIR Profiles directory. Defaults to \$HOME/jeprof
PPROF_TOOLS Prefix for object tools pathnames JEPROF_TOOLS Prefix for object tools pathnames
Examples: Examples:
pprof /bin/ls ls.prof jeprof /bin/ls ls.prof
Enters "interactive" mode Enters "interactive" mode
pprof --text /bin/ls ls.prof jeprof --text /bin/ls ls.prof
Outputs one line per procedure Outputs one line per procedure
pprof --web /bin/ls ls.prof jeprof --web /bin/ls ls.prof
Displays annotated call-graph in web browser Displays annotated call-graph in web browser
pprof --gv /bin/ls ls.prof jeprof --gv /bin/ls ls.prof
Displays annotated call-graph via 'gv' Displays annotated call-graph via 'gv'
pprof --gv --focus=Mutex /bin/ls ls.prof jeprof --gv --focus=Mutex /bin/ls ls.prof
Restricts to code paths including a .*Mutex.* entry Restricts to code paths including a .*Mutex.* entry
pprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof jeprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof
Code paths including Mutex but not string Code paths including Mutex but not string
pprof --list=getdir /bin/ls ls.prof jeprof --list=getdir /bin/ls ls.prof
(Per-line) annotated source listing for getdir() (Per-line) annotated source listing for getdir()
pprof --disasm=getdir /bin/ls ls.prof jeprof --disasm=getdir /bin/ls ls.prof
(Per-PC) annotated disassembly for getdir() (Per-PC) annotated disassembly for getdir()
pprof http://localhost:1234/ jeprof http://localhost:1234/
Enters "interactive" mode Enters "interactive" mode
pprof --text localhost:1234 jeprof --text localhost:1234
Outputs one line per procedure for localhost:1234 Outputs one line per procedure for localhost:1234
pprof --raw localhost:1234 > ./local.raw jeprof --raw localhost:1234 > ./local.raw
pprof --text ./local.raw jeprof --text ./local.raw
Fetches a remote profile for later analysis and then Fetches a remote profile for later analysis and then
analyzes it in text mode. analyzes it in text mode.
EOF EOF
...@@ -270,7 +272,8 @@ EOF ...@@ -270,7 +272,8 @@ EOF
sub version_string { sub version_string {
return <<EOF return <<EOF
pprof (part of gperftools $PPROF_VERSION) jeprof (part of jemalloc $JEPROF_VERSION)
based on pprof (part of gperftools $PPROF_VERSION)
Copyright 1998-2007 Google Inc. Copyright 1998-2007 Google Inc.
...@@ -293,8 +296,8 @@ sub Init() { ...@@ -293,8 +296,8 @@ sub Init() {
# Setup tmp-file name and handler to clean it up. # Setup tmp-file name and handler to clean it up.
# We do this in the very beginning so that we can use # We do this in the very beginning so that we can use
# error() and cleanup() functions anytime hereafter. # error() and cleanup() functions anytime hereafter.
$main::tmpfile_sym = "/tmp/pprof$$.sym"; $main::tmpfile_sym = "/tmp/jeprof$$.sym";
$main::tmpfile_ps = "/tmp/pprof$$"; $main::tmpfile_ps = "/tmp/jeprof$$";
$main::next_tmpfile = 0; $main::next_tmpfile = 0;
$SIG{'INT'} = \&sighandler; $SIG{'INT'} = \&sighandler;
...@@ -332,6 +335,7 @@ sub Init() { ...@@ -332,6 +335,7 @@ sub Init() {
$main::opt_edgefraction = 0.001; $main::opt_edgefraction = 0.001;
$main::opt_maxdegree = 8; $main::opt_maxdegree = 8;
$main::opt_focus = ''; $main::opt_focus = '';
$main::opt_thread = undef;
$main::opt_ignore = ''; $main::opt_ignore = '';
$main::opt_scale = 0; $main::opt_scale = 0;
$main::opt_heapcheck = 0; $main::opt_heapcheck = 0;
...@@ -402,6 +406,7 @@ sub Init() { ...@@ -402,6 +406,7 @@ sub Init() {
"edgefraction=f" => \$main::opt_edgefraction, "edgefraction=f" => \$main::opt_edgefraction,
"maxdegree=i" => \$main::opt_maxdegree, "maxdegree=i" => \$main::opt_maxdegree,
"focus=s" => \$main::opt_focus, "focus=s" => \$main::opt_focus,
"thread=s" => \$main::opt_thread,
"ignore=s" => \$main::opt_ignore, "ignore=s" => \$main::opt_ignore,
"scale=i" => \$main::opt_scale, "scale=i" => \$main::opt_scale,
"heapcheck" => \$main::opt_heapcheck, "heapcheck" => \$main::opt_heapcheck,
...@@ -562,66 +567,12 @@ sub Init() { ...@@ -562,66 +567,12 @@ sub Init() {
} }
} }
sub Main() { sub FilterAndPrint {
Init(); my ($profile, $symbols, $libs, $thread) = @_;
$main::collected_profile = undef;
@main::profile_files = ();
$main::op_time = time();
# Printing symbols is special and requires a lot less info than most. # Printing symbols is special and requires a lot less info than most.
if ($main::opt_symbols) {
PrintSymbols(*STDIN); # Get /proc/maps and symbols output from stdin
return;
}
# Fetch all profile data
FetchDynamicProfiles();
# this will hold symbols that we read from the profile files
my $symbol_map = {};
# Read one profile, pick the last item on the list
my $data = ReadProfile($main::prog, pop(@main::profile_files));
my $profile = $data->{profile};
my $pcs = $data->{pcs};
my $libs = $data->{libs}; # Info about main program and shared libraries
$symbol_map = MergeSymbols($symbol_map, $data->{symbols});
# Add additional profiles, if available.
if (scalar(@main::profile_files) > 0) {
foreach my $pname (@main::profile_files) {
my $data2 = ReadProfile($main::prog, $pname);
$profile = AddProfile($profile, $data2->{profile});
$pcs = AddPcs($pcs, $data2->{pcs});
$symbol_map = MergeSymbols($symbol_map, $data2->{symbols});
}
}
# Subtract base from profile, if specified
if ($main::opt_base ne '') {
my $base = ReadProfile($main::prog, $main::opt_base);
$profile = SubtractProfile($profile, $base->{profile});
$pcs = AddPcs($pcs, $base->{pcs});
$symbol_map = MergeSymbols($symbol_map, $base->{symbols});
}
# Get total data in profile # Get total data in profile
my $total = TotalProfile($profile); my $total = TotalProfile($profile);
# Collect symbols
my $symbols;
if ($main::use_symbolized_profile) {
$symbols = FetchSymbols($pcs, $symbol_map);
} elsif ($main::use_symbol_page) {
$symbols = FetchSymbols($pcs);
} else {
# TODO(csilvers): $libs uses the /proc/self/maps data from profile1,
# which may differ from the data from subsequent profiles, especially
# if they were run on different machines. Use appropriate libs for
# each pc somehow.
$symbols = ExtractSymbols($libs, $pcs);
}
# Remove uninteresting stack items # Remove uninteresting stack items
$profile = RemoveUninterestingFrames($symbols, $profile); $profile = RemoveUninterestingFrames($symbols, $profile);
...@@ -656,7 +607,9 @@ sub Main() { ...@@ -656,7 +607,9 @@ sub Main() {
# (only matters when --heapcheck is given but we must be # (only matters when --heapcheck is given but we must be
# compatible with old branches that did not pass --heapcheck always): # compatible with old branches that did not pass --heapcheck always):
if ($total != 0) { if ($total != 0) {
printf("Total: %s %s\n", Unparse($total), Units()); printf("Total%s: %s %s\n",
(defined($thread) ? " (t$thread)" : ""),
Unparse($total), Units());
} }
PrintText($symbols, $flat, $cumulative, -1); PrintText($symbols, $flat, $cumulative, -1);
} elsif ($main::opt_raw) { } elsif ($main::opt_raw) {
...@@ -692,6 +645,77 @@ sub Main() { ...@@ -692,6 +645,77 @@ sub Main() {
} else { } else {
InteractiveMode($profile, $symbols, $libs, $total); InteractiveMode($profile, $symbols, $libs, $total);
} }
}
sub Main() {
Init();
$main::collected_profile = undef;
@main::profile_files = ();
$main::op_time = time();
# Printing symbols is special and requires a lot less info than most.
if ($main::opt_symbols) {
PrintSymbols(*STDIN); # Get /proc/maps and symbols output from stdin
return;
}
# Fetch all profile data
FetchDynamicProfiles();
# this will hold symbols that we read from the profile files
my $symbol_map = {};
# Read one profile, pick the last item on the list
my $data = ReadProfile($main::prog, pop(@main::profile_files));
my $profile = $data->{profile};
my $pcs = $data->{pcs};
my $libs = $data->{libs}; # Info about main program and shared libraries
$symbol_map = MergeSymbols($symbol_map, $data->{symbols});
# Add additional profiles, if available.
if (scalar(@main::profile_files) > 0) {
foreach my $pname (@main::profile_files) {
my $data2 = ReadProfile($main::prog, $pname);
$profile = AddProfile($profile, $data2->{profile});
$pcs = AddPcs($pcs, $data2->{pcs});
$symbol_map = MergeSymbols($symbol_map, $data2->{symbols});
}
}
# Subtract base from profile, if specified
if ($main::opt_base ne '') {
my $base = ReadProfile($main::prog, $main::opt_base);
$profile = SubtractProfile($profile, $base->{profile});
$pcs = AddPcs($pcs, $base->{pcs});
$symbol_map = MergeSymbols($symbol_map, $base->{symbols});
}
# Collect symbols
my $symbols;
if ($main::use_symbolized_profile) {
$symbols = FetchSymbols($pcs, $symbol_map);
} elsif ($main::use_symbol_page) {
$symbols = FetchSymbols($pcs);
} else {
# TODO(csilvers): $libs uses the /proc/self/maps data from profile1,
# which may differ from the data from subsequent profiles, especially
# if they were run on different machines. Use appropriate libs for
# each pc somehow.
$symbols = ExtractSymbols($libs, $pcs);
}
if (!defined($main::opt_thread)) {
FilterAndPrint($profile, $symbols, $libs);
}
if (defined($data->{threads})) {
foreach my $thread (sort { $a <=> $b } keys(%{$data->{threads}})) {
if (defined($main::opt_thread) &&
($main::opt_thread eq '*' || $main::opt_thread == $thread)) {
my $thread_profile = $data->{threads}{$thread};
FilterAndPrint($thread_profile, $symbols, $libs, $thread);
}
}
}
cleanup(); cleanup();
exit(0); exit(0);
...@@ -780,14 +804,14 @@ sub InteractiveMode { ...@@ -780,14 +804,14 @@ sub InteractiveMode {
$| = 1; # Make output unbuffered for interactive mode $| = 1; # Make output unbuffered for interactive mode
my ($orig_profile, $symbols, $libs, $total) = @_; my ($orig_profile, $symbols, $libs, $total) = @_;
print STDERR "Welcome to pprof! For help, type 'help'.\n"; print STDERR "Welcome to jeprof! For help, type 'help'.\n";
# Use ReadLine if it's installed and input comes from a console. # Use ReadLine if it's installed and input comes from a console.
if ( -t STDIN && if ( -t STDIN &&
!ReadlineMightFail() && !ReadlineMightFail() &&
defined(eval {require Term::ReadLine}) ) { defined(eval {require Term::ReadLine}) ) {
my $term = new Term::ReadLine 'pprof'; my $term = new Term::ReadLine 'jeprof';
while ( defined ($_ = $term->readline('(pprof) '))) { while ( defined ($_ = $term->readline('(jeprof) '))) {
$term->addhistory($_) if /\S/; $term->addhistory($_) if /\S/;
if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) { if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) {
last; # exit when we get an interactive command to quit last; # exit when we get an interactive command to quit
...@@ -795,7 +819,7 @@ sub InteractiveMode { ...@@ -795,7 +819,7 @@ sub InteractiveMode {
} }
} else { # don't have readline } else { # don't have readline
while (1) { while (1) {
print STDERR "(pprof) "; print STDERR "(jeprof) ";
$_ = <STDIN>; $_ = <STDIN>;
last if ! defined $_ ; last if ! defined $_ ;
s/\r//g; # turn windows-looking lines into unix-looking lines s/\r//g; # turn windows-looking lines into unix-looking lines
...@@ -988,7 +1012,7 @@ sub ProcessProfile { ...@@ -988,7 +1012,7 @@ sub ProcessProfile {
sub InteractiveHelpMessage { sub InteractiveHelpMessage {
print STDERR <<ENDOFHELP; print STDERR <<ENDOFHELP;
Interactive pprof mode Interactive jeprof mode
Commands: Commands:
gv gv
...@@ -1031,7 +1055,7 @@ Commands: ...@@ -1031,7 +1055,7 @@ Commands:
Generates callgrind file. If no filename is given, kcachegrind is called. Generates callgrind file. If no filename is given, kcachegrind is called.
help - This listing help - This listing
quit or ^D - End pprof quit or ^D - End jeprof
For commands that accept optional -ignore tags, samples where any routine in For commands that accept optional -ignore tags, samples where any routine in
the stack trace matches the regular expression in any of the -ignore the stack trace matches the regular expression in any of the -ignore
...@@ -1476,7 +1500,7 @@ h1 { ...@@ -1476,7 +1500,7 @@ h1 {
} }
</style> </style>
<script type="text/javascript"> <script type="text/javascript">
function pprof_toggle_asm(e) { function jeprof_toggle_asm(e) {
var target; var target;
if (!e) e = window.event; if (!e) e = window.event;
if (e.target) target = e.target; if (e.target) target = e.target;
...@@ -1745,7 +1769,7 @@ sub PrintSource { ...@@ -1745,7 +1769,7 @@ sub PrintSource {
if ($html) { if ($html) {
printf $output ( printf $output (
"<h1>%s</h1>%s\n<pre onClick=\"pprof_toggle_asm()\">\n" . "<h1>%s</h1>%s\n<pre onClick=\"jeprof_toggle_asm()\">\n" .
"Total:%6s %6s (flat / cumulative %s)\n", "Total:%6s %6s (flat / cumulative %s)\n",
HtmlEscape(ShortFunctionName($routine)), HtmlEscape(ShortFunctionName($routine)),
HtmlEscape(CleanFileName($filename)), HtmlEscape(CleanFileName($filename)),
...@@ -2811,9 +2835,15 @@ sub RemoveUninterestingFrames { ...@@ -2811,9 +2835,15 @@ sub RemoveUninterestingFrames {
'free', 'free',
'memalign', 'memalign',
'posix_memalign', 'posix_memalign',
'aligned_alloc',
'pvalloc', 'pvalloc',
'valloc', 'valloc',
'realloc', 'realloc',
'mallocx', # jemalloc
'rallocx', # jemalloc
'xallocx', # jemalloc
'dallocx', # jemalloc
'sdallocx', # jemalloc
'tc_calloc', 'tc_calloc',
'tc_cfree', 'tc_cfree',
'tc_malloc', 'tc_malloc',
...@@ -2923,6 +2953,10 @@ sub RemoveUninterestingFrames { ...@@ -2923,6 +2953,10 @@ sub RemoveUninterestingFrames {
if (exists($symbols->{$a})) { if (exists($symbols->{$a})) {
my $func = $symbols->{$a}->[0]; my $func = $symbols->{$a}->[0];
if ($skip{$func} || ($func =~ m/$skip_regexp/)) { if ($skip{$func} || ($func =~ m/$skip_regexp/)) {
# Throw away the portion of the backtrace seen so far, under the
# assumption that previous frames were for functions internal to the
# allocator.
@path = ();
next; next;
} }
} }
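Concretely, with hypothetical frame names listed leaf-first, a sampled stack such as

  je_internal_helper  mallocx  do_work  main

is reduced to "do_work main": when the walk reaches mallocx (a skip-list entry), the allocator-internal frame collected so far is discarded along with it, leaving only the application-side callers.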
...@@ -3401,7 +3435,7 @@ sub FetchDynamicProfile { ...@@ -3401,7 +3435,7 @@ sub FetchDynamicProfile {
$profile_file .= $suffix; $profile_file .= $suffix;
} }
my $profile_dir = $ENV{"PPROF_TMPDIR"} || ($ENV{HOME} . "/pprof"); my $profile_dir = $ENV{"JEPROF_TMPDIR"} || ($ENV{HOME} . "/jeprof");
if (! -d $profile_dir) { if (! -d $profile_dir) {
mkdir($profile_dir) mkdir($profile_dir)
|| die("Unable to create profile directory $profile_dir: $!\n"); || die("Unable to create profile directory $profile_dir: $!\n");
...@@ -3617,7 +3651,7 @@ BEGIN { ...@@ -3617,7 +3651,7 @@ BEGIN {
# Reads the top, 'header' section of a profile, and returns the last # Reads the top, 'header' section of a profile, and returns the last
# line of the header, commonly called a 'header line'. The header # line of the header, commonly called a 'header line'. The header
# section of a profile consists of zero or more 'command' lines that # section of a profile consists of zero or more 'command' lines that
# are instructions to pprof, which pprof executes when reading the # are instructions to jeprof, which jeprof executes when reading the
# header. All 'command' lines start with a %. After the command # header. All 'command' lines start with a %. After the command
# lines is the 'header line', which is a profile-specific line that # lines is the 'header line', which is a profile-specific line that
# indicates what type of profile it is, and perhaps other global # indicates what type of profile it is, and perhaps other global
...@@ -3680,6 +3714,7 @@ sub IsSymbolizedProfileFile { ...@@ -3680,6 +3714,7 @@ sub IsSymbolizedProfileFile {
# $result->{version} Version number of profile file # $result->{version} Version number of profile file
# $result->{period} Sampling period (in microseconds) # $result->{period} Sampling period (in microseconds)
# $result->{profile} Profile object # $result->{profile} Profile object
# $result->{threads} Map of thread IDs to profile objects
# $result->{map} Memory map info from profile # $result->{map} Memory map info from profile
# $result->{pcs} Hash of all PC values seen, key is hex address # $result->{pcs} Hash of all PC values seen, key is hex address
sub ReadProfile { sub ReadProfile {
...@@ -3728,6 +3763,9 @@ sub ReadProfile { ...@@ -3728,6 +3763,9 @@ sub ReadProfile {
} elsif ($header =~ m/^heap profile:/) { } elsif ($header =~ m/^heap profile:/) {
$main::profile_type = 'heap'; $main::profile_type = 'heap';
$result = ReadHeapProfile($prog, *PROFILE, $header); $result = ReadHeapProfile($prog, *PROFILE, $header);
} elsif ($header =~ m/^heap/) {
$main::profile_type = 'heap';
$result = ReadThreadedHeapProfile($prog, $fname, $header);
} elsif ($header =~ m/^--- *$contention_marker/o) { } elsif ($header =~ m/^--- *$contention_marker/o) {
$main::profile_type = 'contention'; $main::profile_type = 'contention';
$result = ReadSynchProfile($prog, *PROFILE); $result = ReadSynchProfile($prog, *PROFILE);
...@@ -3870,11 +3908,7 @@ sub ReadCPUProfile { ...@@ -3870,11 +3908,7 @@ sub ReadCPUProfile {
return $r; return $r;
} }
sub ReadHeapProfile { sub HeapProfileIndex {
my $prog = shift;
local *PROFILE = shift;
my $header = shift;
my $index = 1; my $index = 1;
if ($main::opt_inuse_space) { if ($main::opt_inuse_space) {
$index = 1; $index = 1;
...@@ -3885,6 +3919,84 @@ sub ReadHeapProfile { ...@@ -3885,6 +3919,84 @@ sub ReadHeapProfile {
} elsif ($main::opt_alloc_objects) { } elsif ($main::opt_alloc_objects) {
$index = 2; $index = 2;
} }
return $index;
}
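The returned index selects one column of each entry's counts ($n1, $s1, $n2, $s2): 1 (--inuse_space, the default) and 2 (--alloc_objects) are visible here; the branches elided above presumably select 0 for --inuse_objects and 3 for --alloc_space.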
sub ReadMappedLibraries {
my $fh = shift;
my $map = "";
# Read the /proc/self/maps data
while (<$fh>) {
s/\r//g; # turn windows-looking lines into unix-looking lines
$map .= $_;
}
return $map;
}
sub ReadMemoryMap {
my $fh = shift;
my $map = "";
# Read /proc/self/maps data as formatted by DumpAddressMap()
my $buildvar = "";
while (<PROFILE>) {
s/\r//g; # turn windows-looking lines into unix-looking lines
# Parse "build=<dir>" specification if supplied
if (m/^\s*build=(.*)\n/) {
$buildvar = $1;
}
# Expand "$build" variable if available
$_ =~ s/\$build\b/$buildvar/g;
$map .= $_;
}
return $map;
}
sub AdjustSamples {
my ($sample_adjustment, $sampling_algorithm, $n1, $s1, $n2, $s2) = @_;
if ($sample_adjustment) {
if ($sampling_algorithm == 2) {
# Remote-heap version 2
# The sampling frequency is the rate of a Poisson process.
# This means that the probability of sampling an allocation of
# size X with sampling rate Y is 1 - exp(-X/Y)
if ($n1 != 0) {
my $ratio = (($s1*1.0)/$n1)/($sample_adjustment);
my $scale_factor = 1/(1 - exp(-$ratio));
$n1 *= $scale_factor;
$s1 *= $scale_factor;
}
if ($n2 != 0) {
my $ratio = (($s2*1.0)/$n2)/($sample_adjustment);
my $scale_factor = 1/(1 - exp(-$ratio));
$n2 *= $scale_factor;
$s2 *= $scale_factor;
}
} else {
# Remote-heap version 1
my $ratio;
$ratio = (($s1*1.0)/$n1)/($sample_adjustment);
if ($ratio < 1) {
$n1 /= $ratio;
$s1 /= $ratio;
}
$ratio = (($s2*1.0)/$n2)/($sample_adjustment);
if ($ratio < 1) {
$n2 /= $ratio;
$s2 /= $ratio;
}
}
}
return ($n1, $s1, $n2, $s2);
}
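For intuition, a standalone sketch of the v2 adjustment above with made-up numbers (not taken from any real profile):

  /* Hypothetical: 10 live sampled allocations totaling 40960 bytes,
   * captured at a 512 KiB sampling rate (a "heap_v2/524288" profile). */
  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double n = 10, s = 40960, rate = 524288;
      double ratio = (s / n) / rate;        /* avg size / rate = 0.0078125 */
      double scale = 1 / (1 - exp(-ratio)); /* ~128.5 */
      printf("estimated: %.0f objects, %.0f bytes\n", n * scale, s * scale);
      return 0;
  }

Small allocations are scaled up aggressively, since each one is unlikely to be sampled, while allocations much larger than the sampling rate get a scale factor near 1.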
sub ReadHeapProfile {
my $prog = shift;
local *PROFILE = shift;
my $header = shift;
my $index = HeapProfileIndex();
# Find the type of this profile. The header line looks like: # Find the type of this profile. The header line looks like:
# heap profile: 1246: 8800744 [ 1246: 8800744] @ <heap-url>/266053 # heap profile: 1246: 8800744 [ 1246: 8800744] @ <heap-url>/266053
...@@ -3974,29 +4086,12 @@ sub ReadHeapProfile { ...@@ -3974,29 +4086,12 @@ sub ReadHeapProfile {
while (<PROFILE>) { while (<PROFILE>) {
s/\r//g; # turn windows-looking lines into unix-looking lines s/\r//g; # turn windows-looking lines into unix-looking lines
if (/^MAPPED_LIBRARIES:/) { if (/^MAPPED_LIBRARIES:/) {
# Read the /proc/self/maps data $map .= ReadMappedLibraries(*PROFILE);
while (<PROFILE>) {
s/\r//g; # turn windows-looking lines into unix-looking lines
$map .= $_;
}
last; last;
} }
if (/^--- Memory map:/) { if (/^--- Memory map:/) {
# Read /proc/self/maps data as formatted by DumpAddressMap() $map .= ReadMemoryMap(*PROFILE);
my $buildvar = "";
while (<PROFILE>) {
s/\r//g; # turn windows-looking lines into unix-looking lines
# Parse "build=<dir>" specification if supplied
if (m/^\s*build=(.*)\n/) {
$buildvar = $1;
}
# Expand "$build" variable if available
$_ =~ s/\$build\b/$buildvar/g;
$map .= $_;
}
last; last;
} }
...@@ -4007,43 +4102,85 @@ sub ReadHeapProfile { ...@@ -4007,43 +4102,85 @@ sub ReadHeapProfile {
if (m/^\s*(\d+):\s+(\d+)\s+\[\s*(\d+):\s+(\d+)\]\s+@\s+(.*)$/) { if (m/^\s*(\d+):\s+(\d+)\s+\[\s*(\d+):\s+(\d+)\]\s+@\s+(.*)$/) {
my $stack = $5; my $stack = $5;
my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4); my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4);
my @counts = AdjustSamples($sample_adjustment, $sampling_algorithm,
if ($sample_adjustment) { $n1, $s1, $n2, $s2);
if ($sampling_algorithm == 2) { AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]);
# Remote-heap version 2
# The sampling frequency is the rate of a Poisson process.
# This means that the probability of sampling an allocation of
# size X with sampling rate Y is 1 - exp(-X/Y)
if ($n1 != 0) {
my $ratio = (($s1*1.0)/$n1)/($sample_adjustment);
my $scale_factor = 1/(1 - exp(-$ratio));
$n1 *= $scale_factor;
$s1 *= $scale_factor;
} }
if ($n2 != 0) {
my $ratio = (($s2*1.0)/$n2)/($sample_adjustment);
my $scale_factor = 1/(1 - exp(-$ratio));
$n2 *= $scale_factor;
$s2 *= $scale_factor;
} }
} else {
# Remote-heap version 1 my $r = {};
my $ratio; $r->{version} = "heap";
$ratio = (($s1*1.0)/$n1)/($sample_adjustment); $r->{period} = 1;
if ($ratio < 1) { $r->{profile} = $profile;
$n1 /= $ratio; $r->{libs} = ParseLibraries($prog, $map, $pcs);
$s1 /= $ratio; $r->{pcs} = $pcs;
return $r;
}
sub ReadThreadedHeapProfile {
my ($prog, $fname, $header) = @_;
my $index = HeapProfileIndex();
my $sampling_algorithm = 0;
my $sample_adjustment = 0;
chomp($header);
my $type = "unknown";
# Assuming a very specific type of header for now.
if ($header =~ m"^heap_v2/(\d+)") {
$type = "_v2";
$sampling_algorithm = 2;
$sample_adjustment = int($1);
} }
$ratio = (($s2*1.0)/$n2)/($sample_adjustment); if ($type ne "_v2" || !defined($sample_adjustment)) {
if ($ratio < 1) { die "Threaded heap profiles require v2 sampling with a sample rate\n";
$n2 /= $ratio;
$s2 /= $ratio;
} }
my $profile = {};
my $thread_profiles = {};
my $pcs = {};
my $map = "";
my $stack = "";
while (<PROFILE>) {
s/\r//g;
if (/^MAPPED_LIBRARIES:/) {
$map .= ReadMappedLibraries(*PROFILE);
last;
} }
if (/^--- Memory map:/) {
$map .= ReadMemoryMap(*PROFILE);
last;
} }
my @counts = ($n1, $s1, $n2, $s2); # Read entry of the form:
# @ a1 a2 ... an
# t*: <count1>: <bytes1> [<count2>: <bytes2>]
# t1: <count1>: <bytes1> [<count2>: <bytes2>]
# ...
# tn: <count1>: <bytes1> [<count2>: <bytes2>]
s/^\s*//;
s/\s*$//;
if (m/^@\s+(.*)$/) {
$stack = $1;
} elsif (m/^\s*(t(\*|\d+)):\s+(\d+):\s+(\d+)\s+\[\s*(\d+):\s+(\d+)\]$/) {
if ($stack eq "") {
# Still in the header, so this is just a per-thread summary.
next;
}
my $thread = $2;
my ($n1, $s1, $n2, $s2) = ($3, $4, $5, $6);
my @counts = AdjustSamples($sample_adjustment, $sampling_algorithm,
$n1, $s1, $n2, $s2);
if ($thread eq "*") {
AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]); AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]);
} else {
if (!exists($thread_profiles->{$thread})) {
$thread_profiles->{$thread} = {};
}
AddEntries($thread_profiles->{$thread}, $pcs,
FixCallerAddresses($stack), $counts[$index]);
}
} }
} }
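A threaded profile accepted by this loop looks like the following (addresses and counts are invented for illustration):

  heap_v2/524288
    t*: 28: 56000 [0: 0]
    t0: 13: 26000 [0: 0]
    t1: 15: 30000 [0: 0]
  @ 0x1234 0x5678
    t*: 2: 8192 [0: 0]
    t1: 2: 8192 [0: 0]

The t-lines before the first "@" line are per-thread summaries and are skipped ($stack is still empty); after that, "t*" entries feed the combined profile and "tN" entries feed $thread_profiles->{N}.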
...@@ -4051,6 +4188,7 @@ sub ReadHeapProfile { ...@@ -4051,6 +4188,7 @@ sub ReadHeapProfile {
$r->{version} = "heap"; $r->{version} = "heap";
$r->{period} = 1; $r->{period} = 1;
$r->{profile} = $profile; $r->{profile} = $profile;
$r->{threads} = $thread_profiles;
$r->{libs} = ParseLibraries($prog, $map, $pcs); $r->{libs} = ParseLibraries($prog, $map, $pcs);
$r->{pcs} = $pcs; $r->{pcs} = $pcs;
return $r; return $r;
...@@ -4120,10 +4258,10 @@ sub ReadSynchProfile { ...@@ -4120,10 +4258,10 @@ sub ReadSynchProfile {
} elsif ($variable eq "sampling period") { } elsif ($variable eq "sampling period") {
$sampling_period = $value; $sampling_period = $value;
} elsif ($variable eq "ms since reset") { } elsif ($variable eq "ms since reset") {
# Currently nothing is done with this value in pprof # Currently nothing is done with this value in jeprof
# So we just silently ignore it for now # So we just silently ignore it for now
} elsif ($variable eq "discarded samples") { } elsif ($variable eq "discarded samples") {
# Currently nothing is done with this value in pprof # Currently nothing is done with this value in jeprof
# So we just silently ignore it for now # So we just silently ignore it for now
} else { } else {
printf STDERR ("Ignoring unnknown variable in /contention output: " . printf STDERR ("Ignoring unnknown variable in /contention output: " .
...@@ -4429,7 +4567,7 @@ sub ParseLibraries { ...@@ -4429,7 +4567,7 @@ sub ParseLibraries {
} }
# Add two hex addresses of length $address_length. # Add two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed. # Run jeprof --test for unit test if this is changed.
sub AddressAdd { sub AddressAdd {
my $addr1 = shift; my $addr1 = shift;
my $addr2 = shift; my $addr2 = shift;
...@@ -4483,7 +4621,7 @@ sub AddressAdd { ...@@ -4483,7 +4621,7 @@ sub AddressAdd {
# Subtract two hex addresses of length $address_length. # Subtract two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed. # Run jeprof --test for unit test if this is changed.
sub AddressSub { sub AddressSub {
my $addr1 = shift; my $addr1 = shift;
my $addr2 = shift; my $addr2 = shift;
...@@ -4535,7 +4673,7 @@ sub AddressSub { ...@@ -4535,7 +4673,7 @@ sub AddressSub {
} }
# Increment a hex address of length $address_length. # Increment a hex address of length $address_length.
# Run pprof --test for unit test if this is changed. # Run jeprof --test for unit test if this is changed.
sub AddressInc { sub AddressInc {
my $addr = shift; my $addr = shift;
my $sum; my $sum;
...@@ -4853,7 +4991,7 @@ sub UnparseAddress { ...@@ -4853,7 +4991,7 @@ sub UnparseAddress {
# 32-bit or ELF 64-bit executable file. The location of the tools # 32-bit or ELF 64-bit executable file. The location of the tools
# is determined by considering the following options in this order: # is determined by considering the following options in this order:
# 1) --tools option, if set # 1) --tools option, if set
# 2) PPROF_TOOLS environment variable, if set # 2) JEPROF_TOOLS environment variable, if set
# 3) the environment # 3) the environment
sub ConfigureObjTools { sub ConfigureObjTools {
my $prog_file = shift; my $prog_file = shift;
...@@ -4886,7 +5024,7 @@ sub ConfigureObjTools { ...@@ -4886,7 +5024,7 @@ sub ConfigureObjTools {
# For windows, we provide a version of nm and addr2line as part of # For windows, we provide a version of nm and addr2line as part of
# the opensource release, which is capable of parsing # the opensource release, which is capable of parsing
# Windows-style PDB executables. It should live in the path, or # Windows-style PDB executables. It should live in the path, or
# in the same directory as pprof. # in the same directory as jeprof.
$obj_tool_map{"nm_pdb"} = "nm-pdb"; $obj_tool_map{"nm_pdb"} = "nm-pdb";
$obj_tool_map{"addr2line_pdb"} = "addr2line-pdb"; $obj_tool_map{"addr2line_pdb"} = "addr2line-pdb";
} }
...@@ -4905,20 +5043,20 @@ sub ConfigureObjTools { ...@@ -4905,20 +5043,20 @@ sub ConfigureObjTools {
} }
# Returns the path of a caller-specified object tool. If --tools or # Returns the path of a caller-specified object tool. If --tools or
# PPROF_TOOLS are specified, then returns the full path to the tool # JEPROF_TOOLS are specified, then returns the full path to the tool
# with that prefix. Otherwise, returns the path unmodified (which # with that prefix. Otherwise, returns the path unmodified (which
# means we will look for it on PATH). # means we will look for it on PATH).
sub ConfigureTool { sub ConfigureTool {
my $tool = shift; my $tool = shift;
my $path; my $path;
# --tools (or $PPROF_TOOLS) is a comma separated list, where each # --tools (or $JEPROF_TOOLS) is a comma separated list, where each
# item is either a) a pathname prefix, or b) a map of the form # item is either a) a pathname prefix, or b) a map of the form
# <tool>:<path>. First we look for an entry of type (b) for our # <tool>:<path>. First we look for an entry of type (b) for our
# tool. If one is found, we use it. Otherwise, we consider all the # tool. If one is found, we use it. Otherwise, we consider all the
# pathname prefixes in turn, until one yields an existing file. If # pathname prefixes in turn, until one yields an existing file. If
# none does, we use a default path. # none does, we use a default path.
my $tools = $main::opt_tools || $ENV{"PPROF_TOOLS"} || ""; my $tools = $main::opt_tools || $ENV{"JEPROF_TOOLS"} || "";
if ($tools =~ m/(,|^)\Q$tool\E:([^,]*)/) { if ($tools =~ m/(,|^)\Q$tool\E:([^,]*)/) {
$path = $2; $path = $2;
# TODO(csilvers): sanity-check that $path exists? Hard if it's relative. # TODO(csilvers): sanity-check that $path exists? Hard if it's relative.
...@@ -4932,11 +5070,11 @@ sub ConfigureTool { ...@@ -4932,11 +5070,11 @@ sub ConfigureTool {
} }
if (!$path) { if (!$path) {
error("No '$tool' found with prefix specified by " . error("No '$tool' found with prefix specified by " .
"--tools (or \$PPROF_TOOLS) '$tools'\n"); "--tools (or \$JEPROF_TOOLS) '$tools'\n");
} }
} else { } else {
# ... otherwise use the version that exists in the same directory as # ... otherwise use the version that exists in the same directory as
# pprof. If there's nothing there, use $PATH. # jeprof. If there's nothing there, use $PATH.
$0 =~ m,[^/]*$,; # this is everything after the last slash $0 =~ m,[^/]*$,; # this is everything after the last slash
my $dirname = $`; # this is everything up to and including the last slash my $dirname = $`; # this is everything up to and including the last slash
if (-x "$dirname$tool") { if (-x "$dirname$tool") {
...@@ -4966,7 +5104,7 @@ sub cleanup { ...@@ -4966,7 +5104,7 @@ sub cleanup {
unlink($main::tmpfile_sym); unlink($main::tmpfile_sym);
unlink(keys %main::tempnames); unlink(keys %main::tempnames);
# We leave any collected profiles in $HOME/pprof in case the user wants # We leave any collected profiles in $HOME/jeprof in case the user wants
# to look at them later. We print a message informing them of this. # to look at them later. We print a message informing them of this.
if ((scalar(@main::profile_files) > 0) && if ((scalar(@main::profile_files) > 0) &&
defined($main::collected_profile)) { defined($main::collected_profile)) {
...@@ -4975,7 +5113,7 @@ sub cleanup { ...@@ -4975,7 +5113,7 @@ sub cleanup {
} }
print STDERR "If you want to investigate this profile further, you can do:\n"; print STDERR "If you want to investigate this profile further, you can do:\n";
print STDERR "\n"; print STDERR "\n";
print STDERR " pprof \\\n"; print STDERR " jeprof \\\n";
print STDERR " $main::prog \\\n"; print STDERR " $main::prog \\\n";
print STDERR " $main::collected_profile\n"; print STDERR " $main::collected_profile\n";
print STDERR "\n"; print STDERR "\n";
...@@ -5160,7 +5298,7 @@ sub GetProcedureBoundaries { ...@@ -5160,7 +5298,7 @@ sub GetProcedureBoundaries {
# The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings. # The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings.
# To make them more readable, we add underscores at interesting places. # To make them more readable, we add underscores at interesting places.
# This routine removes the underscores, producing the canonical representation # This routine removes the underscores, producing the canonical representation
# used by pprof to represent addresses, particularly in the tested routines. # used by jeprof to represent addresses, particularly in the tested routines.
sub CanonicalHex { sub CanonicalHex {
my $arg = shift; my $arg = shift;
return join '', (split '_',$arg); return join '', (split '_',$arg);
......
#! /bin/sh #! /bin/sh
# Attempt to guess a canonical system name. # Attempt to guess a canonical system name.
# Copyright 1992-2013 Free Software Foundation, Inc. # Copyright 1992-2014 Free Software Foundation, Inc.
timestamp='2013-06-10' timestamp='2014-03-23'
# This file is free software; you can redistribute it and/or modify it # This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by # under the terms of the GNU General Public License as published by
...@@ -50,7 +50,7 @@ version="\ ...@@ -50,7 +50,7 @@ version="\
GNU config.guess ($timestamp) GNU config.guess ($timestamp)
Originally written by Per Bothner. Originally written by Per Bothner.
Copyright 1992-2013 Free Software Foundation, Inc. Copyright 1992-2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
...@@ -149,7 +149,7 @@ Linux|GNU|GNU/*) ...@@ -149,7 +149,7 @@ Linux|GNU|GNU/*)
LIBC=gnu LIBC=gnu
#endif #endif
EOF EOF
eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC'` eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC' | sed 's, ,,g'`
;; ;;
esac esac
...@@ -826,7 +826,7 @@ EOF ...@@ -826,7 +826,7 @@ EOF
*:MINGW*:*) *:MINGW*:*)
echo ${UNAME_MACHINE}-pc-mingw32 echo ${UNAME_MACHINE}-pc-mingw32
exit ;; exit ;;
i*:MSYS*:*) *:MSYS*:*)
echo ${UNAME_MACHINE}-pc-msys echo ${UNAME_MACHINE}-pc-msys
exit ;; exit ;;
i*:windows32*:*) i*:windows32*:*)
...@@ -969,10 +969,10 @@ EOF ...@@ -969,10 +969,10 @@ EOF
eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'` eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'`
test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; } test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; }
;; ;;
or1k:Linux:*:*) openrisc*:Linux:*:*)
echo ${UNAME_MACHINE}-unknown-linux-${LIBC} echo or1k-unknown-linux-${LIBC}
exit ;; exit ;;
or32:Linux:*:*) or32:Linux:*:* | or1k*:Linux:*:*)
echo ${UNAME_MACHINE}-unknown-linux-${LIBC} echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
exit ;; exit ;;
padre:Linux:*:*) padre:Linux:*:*)
...@@ -1260,6 +1260,7 @@ EOF ...@@ -1260,6 +1260,7 @@ EOF
if test "$UNAME_PROCESSOR" = unknown ; then if test "$UNAME_PROCESSOR" = unknown ; then
UNAME_PROCESSOR=powerpc UNAME_PROCESSOR=powerpc
fi fi
if test `echo "$UNAME_RELEASE" | sed -e 's/\..*//'` -le 10 ; then
if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
(CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
...@@ -1271,6 +1272,15 @@ EOF ...@@ -1271,6 +1272,15 @@ EOF
esac esac
fi fi
fi fi
elif test "$UNAME_PROCESSOR" = i386 ; then
# Avoid executing cc on OS X 10.9, as it ships with a stub
# that puts up a graphical alert prompting to install
# developer tools. Any system running Mac OS X 10.7 or
# later (Darwin 11 and later) is required to have a 64-bit
# processor. This is not true of the ARM version of Darwin
# that Apple uses in portable devices.
UNAME_PROCESSOR=x86_64
fi
echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE} echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE}
exit ;; exit ;;
*:procnto*:*:* | *:QNX:[0123456789]*:*) *:procnto*:*:* | *:QNX:[0123456789]*:*)
...@@ -1361,154 +1371,6 @@ EOF ...@@ -1361,154 +1371,6 @@ EOF
exit ;; exit ;;
esac esac
eval $set_cc_for_build
cat >$dummy.c <<EOF
#ifdef _SEQUENT_
# include <sys/types.h>
# include <sys/utsname.h>
#endif
main ()
{
#if defined (sony)
#if defined (MIPSEB)
/* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed,
I don't know.... */
printf ("mips-sony-bsd\n"); exit (0);
#else
#include <sys/param.h>
printf ("m68k-sony-newsos%s\n",
#ifdef NEWSOS4
"4"
#else
""
#endif
); exit (0);
#endif
#endif
#if defined (__arm) && defined (__acorn) && defined (__unix)
printf ("arm-acorn-riscix\n"); exit (0);
#endif
#if defined (hp300) && !defined (hpux)
printf ("m68k-hp-bsd\n"); exit (0);
#endif
#if defined (NeXT)
#if !defined (__ARCHITECTURE__)
#define __ARCHITECTURE__ "m68k"
#endif
int version;
version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`;
if (version < 4)
printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version);
else
printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version);
exit (0);
#endif
#if defined (MULTIMAX) || defined (n16)
#if defined (UMAXV)
printf ("ns32k-encore-sysv\n"); exit (0);
#else
#if defined (CMU)
printf ("ns32k-encore-mach\n"); exit (0);
#else
printf ("ns32k-encore-bsd\n"); exit (0);
#endif
#endif
#endif
#if defined (__386BSD__)
printf ("i386-pc-bsd\n"); exit (0);
#endif
#if defined (sequent)
#if defined (i386)
printf ("i386-sequent-dynix\n"); exit (0);
#endif
#if defined (ns32000)
printf ("ns32k-sequent-dynix\n"); exit (0);
#endif
#endif
#if defined (_SEQUENT_)
struct utsname un;
uname(&un);
if (strncmp(un.version, "V2", 2) == 0) {
printf ("i386-sequent-ptx2\n"); exit (0);
}
if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */
printf ("i386-sequent-ptx1\n"); exit (0);
}
printf ("i386-sequent-ptx\n"); exit (0);
#endif
#if defined (vax)
# if !defined (ultrix)
# include <sys/param.h>
# if defined (BSD)
# if BSD == 43
printf ("vax-dec-bsd4.3\n"); exit (0);
# else
# if BSD == 199006
printf ("vax-dec-bsd4.3reno\n"); exit (0);
# else
printf ("vax-dec-bsd\n"); exit (0);
# endif
# endif
# else
printf ("vax-dec-bsd\n"); exit (0);
# endif
# else
printf ("vax-dec-ultrix\n"); exit (0);
# endif
#endif
#if defined (alliant) && defined (i860)
printf ("i860-alliant-bsd\n"); exit (0);
#endif
exit (1);
}
EOF
$CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && SYSTEM_NAME=`$dummy` &&
{ echo "$SYSTEM_NAME"; exit; }
# Apollos put the system type in the environment.
test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit; }
# Convex versions that predate uname can use getsysinfo(1)
if [ -x /usr/convex/getsysinfo ]
then
case `getsysinfo -f cpu_type` in
c1*)
echo c1-convex-bsd
exit ;;
c2*)
if getsysinfo -f scalar_acc
then echo c32-convex-bsd
else echo c2-convex-bsd
fi
exit ;;
c34*)
echo c34-convex-bsd
exit ;;
c38*)
echo c38-convex-bsd
exit ;;
c4*)
echo c4-convex-bsd
exit ;;
esac
fi
cat >&2 <<EOF cat >&2 <<EOF
$0: unable to guess system type $0: unable to guess system type
......
#! /bin/sh #! /bin/sh
# Configuration validation subroutine script. # Configuration validation subroutine script.
# Copyright 1992-2013 Free Software Foundation, Inc. # Copyright 1992-2014 Free Software Foundation, Inc.
timestamp='2013-10-01' timestamp='2014-05-01'
# This file is free software; you can redistribute it and/or modify it # This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by # under the terms of the GNU General Public License as published by
...@@ -68,7 +68,7 @@ Report bugs and patches to <config-patches@gnu.org>." ...@@ -68,7 +68,7 @@ Report bugs and patches to <config-patches@gnu.org>."
version="\ version="\
GNU config.sub ($timestamp) GNU config.sub ($timestamp)
Copyright 1992-2013 Free Software Foundation, Inc. Copyright 1992-2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
...@@ -283,8 +283,10 @@ case $basic_machine in ...@@ -283,8 +283,10 @@ case $basic_machine in
| mips64vr5900 | mips64vr5900el \ | mips64vr5900 | mips64vr5900el \
| mipsisa32 | mipsisa32el \ | mipsisa32 | mipsisa32el \
| mipsisa32r2 | mipsisa32r2el \ | mipsisa32r2 | mipsisa32r2el \
| mipsisa32r6 | mipsisa32r6el \
| mipsisa64 | mipsisa64el \ | mipsisa64 | mipsisa64el \
| mipsisa64r2 | mipsisa64r2el \ | mipsisa64r2 | mipsisa64r2el \
| mipsisa64r6 | mipsisa64r6el \
| mipsisa64sb1 | mipsisa64sb1el \ | mipsisa64sb1 | mipsisa64sb1el \
| mipsisa64sr71k | mipsisa64sr71kel \ | mipsisa64sr71k | mipsisa64sr71kel \
| mipsr5900 | mipsr5900el \ | mipsr5900 | mipsr5900el \
...@@ -296,8 +298,7 @@ case $basic_machine in ...@@ -296,8 +298,7 @@ case $basic_machine in
| nds32 | nds32le | nds32be \ | nds32 | nds32le | nds32be \
| nios | nios2 | nios2eb | nios2el \ | nios | nios2 | nios2eb | nios2el \
| ns16k | ns32k \ | ns16k | ns32k \
| open8 \ | open8 | or1k | or1knd | or32 \
| or1k | or32 \
| pdp10 | pdp11 | pj | pjl \ | pdp10 | pdp11 | pj | pjl \
| powerpc | powerpc64 | powerpc64le | powerpcle \ | powerpc | powerpc64 | powerpc64le | powerpcle \
| pyramid \ | pyramid \
...@@ -402,8 +403,10 @@ case $basic_machine in ...@@ -402,8 +403,10 @@ case $basic_machine in
| mips64vr5900-* | mips64vr5900el-* \ | mips64vr5900-* | mips64vr5900el-* \
| mipsisa32-* | mipsisa32el-* \ | mipsisa32-* | mipsisa32el-* \
| mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa32r2-* | mipsisa32r2el-* \
| mipsisa32r6-* | mipsisa32r6el-* \
| mipsisa64-* | mipsisa64el-* \ | mipsisa64-* | mipsisa64el-* \
| mipsisa64r2-* | mipsisa64r2el-* \ | mipsisa64r2-* | mipsisa64r2el-* \
| mipsisa64r6-* | mipsisa64r6el-* \
| mipsisa64sb1-* | mipsisa64sb1el-* \ | mipsisa64sb1-* | mipsisa64sb1el-* \
| mipsisa64sr71k-* | mipsisa64sr71kel-* \ | mipsisa64sr71k-* | mipsisa64sr71kel-* \
| mipsr5900-* | mipsr5900el-* \ | mipsr5900-* | mipsr5900el-* \
...@@ -415,6 +418,7 @@ case $basic_machine in ...@@ -415,6 +418,7 @@ case $basic_machine in
| nios-* | nios2-* | nios2eb-* | nios2el-* \ | nios-* | nios2-* | nios2eb-* | nios2el-* \
| none-* | np1-* | ns16k-* | ns32k-* \ | none-* | np1-* | ns16k-* | ns32k-* \
| open8-* \ | open8-* \
| or1k*-* \
| orion-* \ | orion-* \
| pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
| powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \
...@@ -1376,7 +1380,7 @@ case $os in ...@@ -1376,7 +1380,7 @@ case $os in
| -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \
| -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \
| -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \
| -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es*) | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* | -tirtos*)
# Remember, each alternative MUST END IN *, to match a version number. # Remember, each alternative MUST END IN *, to match a version number.
;; ;;
-qnx*) -qnx*)
...@@ -1400,6 +1404,9 @@ case $os in ...@@ -1400,6 +1404,9 @@ case $os in
-mac*) -mac*)
os=`echo $os | sed -e 's|mac|macos|'` os=`echo $os | sed -e 's|mac|macos|'`
;; ;;
# Apple iOS
-ios*)
;;
-linux-dietlibc) -linux-dietlibc)
os=-linux-dietlibc os=-linux-dietlibc
;; ;;
...@@ -1594,9 +1601,6 @@ case $basic_machine in ...@@ -1594,9 +1601,6 @@ case $basic_machine in
mips*-*) mips*-*)
os=-elf os=-elf
;; ;;
or1k-*)
os=-elf
;;
or32-*) or32-*)
os=-coff os=-coff
;; ;;
......
...@@ -628,19 +628,19 @@ cfghdrs_in ...@@ -628,19 +628,19 @@ cfghdrs_in
enable_zone_allocator enable_zone_allocator
enable_tls enable_tls
enable_lazy_lock enable_lazy_lock
TESTLIBS
jemalloc_version_gid jemalloc_version_gid
jemalloc_version_nrev jemalloc_version_nrev
jemalloc_version_bugfix jemalloc_version_bugfix
jemalloc_version_minor jemalloc_version_minor
jemalloc_version_major jemalloc_version_major
jemalloc_version jemalloc_version
enable_cache_oblivious
enable_xmalloc enable_xmalloc
enable_valgrind enable_valgrind
enable_utrace enable_utrace
enable_fill enable_fill
enable_dss
enable_munmap enable_munmap
enable_mremap
enable_tcache enable_tcache
enable_prof enable_prof
enable_stats enable_stats
...@@ -648,8 +648,8 @@ enable_debug ...@@ -648,8 +648,8 @@ enable_debug
je_ je_
install_suffix install_suffix
private_namespace private_namespace
JEMALLOC_CPREFIX
enable_code_coverage enable_code_coverage
enable_experimental
AUTOCONF AUTOCONF
LD LD
RANLIB RANLIB
...@@ -709,6 +709,7 @@ objroot ...@@ -709,6 +709,7 @@ objroot
abs_srcroot abs_srcroot
srcroot srcroot
rev rev
CONFIG
target_alias target_alias
host_alias host_alias
build_alias build_alias
...@@ -753,7 +754,6 @@ enable_option_checking ...@@ -753,7 +754,6 @@ enable_option_checking
with_xslroot with_xslroot
with_rpath with_rpath
enable_autogen enable_autogen
enable_experimental
enable_code_coverage enable_code_coverage
with_mangling with_mangling
with_jemalloc_prefix with_jemalloc_prefix
...@@ -770,13 +770,17 @@ with_static_libunwind ...@@ -770,13 +770,17 @@ with_static_libunwind
enable_prof_libgcc enable_prof_libgcc
enable_prof_gcc enable_prof_gcc
enable_tcache enable_tcache
enable_mremap
enable_munmap enable_munmap
enable_dss
enable_fill enable_fill
enable_utrace enable_utrace
enable_valgrind enable_valgrind
enable_xmalloc enable_xmalloc
enable_cache_oblivious
with_lg_tiny_min
with_lg_quantum
with_lg_page
with_lg_page_sizes
with_lg_size_class_group
enable_lazy_lock enable_lazy_lock
enable_tls enable_tls
enable_zone_allocator enable_zone_allocator
...@@ -1402,9 +1406,8 @@ Optional Features: ...@@ -1402,9 +1406,8 @@ Optional Features:
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-FEATURE[=ARG] include FEATURE [ARG=yes]
--enable-autogen Automatically regenerate configure output --enable-autogen Automatically regenerate configure output
--disable-experimental Disable support for the experimental API
--enable-code-coverage Enable code coverage --enable-code-coverage Enable code coverage
--enable-cc-silence Silence irrelevant compiler warnings --disable-cc-silence Do not silence irrelevant compiler warnings
--enable-debug Build debugging code (implies --enable-ivsalloc) --enable-debug Build debugging code (implies --enable-ivsalloc)
--enable-ivsalloc Validate pointers passed through the public API --enable-ivsalloc Validate pointers passed through the public API
--disable-stats Disable statistics calculation/reporting --disable-stats Disable statistics calculation/reporting
...@@ -1413,14 +1416,15 @@ Optional Features: ...@@ -1413,14 +1416,15 @@ Optional Features:
--disable-prof-libgcc Do not use libgcc for backtracing --disable-prof-libgcc Do not use libgcc for backtracing
--disable-prof-gcc Do not use gcc intrinsics for backtracing --disable-prof-gcc Do not use gcc intrinsics for backtracing
--disable-tcache Disable per thread caches --disable-tcache Disable per thread caches
--enable-mremap Enable mremap(2) for huge realloc()
--disable-munmap Disable VM deallocation via munmap(2) --disable-munmap Disable VM deallocation via munmap(2)
--enable-dss Enable allocation from DSS
--disable-fill Disable support for junk/zero filling, quarantine, --disable-fill Disable support for junk/zero filling, quarantine,
and redzones and redzones
--enable-utrace Enable utrace(2)-based tracing --enable-utrace Enable utrace(2)-based tracing
--disable-valgrind Disable support for Valgrind --disable-valgrind Disable support for Valgrind
--enable-xmalloc Support xmalloc option --enable-xmalloc Support xmalloc option
--disable-cache-oblivious
Disable support for cache-oblivious allocation
alignment
--enable-lazy-lock Enable lazy locking (only lock when multi-threaded) --enable-lazy-lock Enable lazy locking (only lock when multi-threaded)
--disable-tls Disable thread-local storage (__thread keyword) --disable-tls Disable thread-local storage (__thread keyword)
--disable-zone-allocator --disable-zone-allocator
...@@ -1442,6 +1446,16 @@ Optional Packages: ...@@ -1442,6 +1446,16 @@ Optional Packages:
--with-static-libunwind=<libunwind.a> --with-static-libunwind=<libunwind.a>
Path to static libunwind library; use rather than Path to static libunwind library; use rather than
dynamically linking dynamically linking
--with-lg-tiny-min=<lg-tiny-min>
Base 2 log of minimum tiny size class to support
--with-lg-quantum=<lg-quantum>
Base 2 log of minimum allocation alignment
--with-lg-page=<lg-page>
Base 2 log of system page size
--with-lg-page-sizes=<lg-page-sizes>
Base 2 logs of system page sizes to support
--with-lg-size-class-group=<lg-size-class-group>
Base 2 log of size classes per doubling
Some influential environment variables: Some influential environment variables:
CC C compiler command CC C compiler command
...@@ -1910,73 +1924,6 @@ fi ...@@ -1910,73 +1924,6 @@ fi
} # ac_fn_c_try_link } # ac_fn_c_try_link
# ac_fn_c_check_func LINENO FUNC VAR
# ----------------------------------
# Tests whether FUNC exists, setting the cache variable VAR accordingly
ac_fn_c_check_func ()
{
as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5
$as_echo_n "checking for $2... " >&6; }
if eval \${$3+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
/* Define $2 to an innocuous variant, in case <limits.h> declares $2.
For example, HP-UX 11i <limits.h> declares gettimeofday. */
#define $2 innocuous_$2
/* System header to define __stub macros and hopefully few prototypes,
which can conflict with char $2 (); below.
Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
<limits.h> exists even on freestanding compilers. */
#ifdef __STDC__
# include <limits.h>
#else
# include <assert.h>
#endif
#undef $2
/* Override any GCC internal prototype to avoid an error.
Use char because int might match the return type of a GCC
builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char $2 ();
/* The GNU C library defines this for functions which it implements
to always fail with ENOSYS. Some functions are actually named
something starting with __ and the normal name is an alias. */
#if defined __stub_$2 || defined __stub___$2
choke me
#endif
int
main ()
{
return $2 ();
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
eval "$3=yes"
else
eval "$3=no"
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
eval ac_res=\$$3
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
$as_echo "$ac_res" >&6; }
eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
} # ac_fn_c_check_func
# ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES # ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES
# ------------------------------------------------------- # -------------------------------------------------------
# Tests whether HEADER exists, giving a warning if it cannot be compiled using # Tests whether HEADER exists, giving a warning if it cannot be compiled using
...@@ -2064,6 +2011,73 @@ fi ...@@ -2064,6 +2011,73 @@ fi
} # ac_fn_c_check_header_mongrel } # ac_fn_c_check_header_mongrel
# ac_fn_c_check_func LINENO FUNC VAR
# ----------------------------------
# Tests whether FUNC exists, setting the cache variable VAR accordingly
ac_fn_c_check_func ()
{
as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5
$as_echo_n "checking for $2... " >&6; }
if eval \${$3+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
/* Define $2 to an innocuous variant, in case <limits.h> declares $2.
For example, HP-UX 11i <limits.h> declares gettimeofday. */
#define $2 innocuous_$2
/* System header to define __stub macros and hopefully few prototypes,
which can conflict with char $2 (); below.
Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
<limits.h> exists even on freestanding compilers. */
#ifdef __STDC__
# include <limits.h>
#else
# include <assert.h>
#endif
#undef $2
/* Override any GCC internal prototype to avoid an error.
Use char because int might match the return type of a GCC
builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char $2 ();
/* The GNU C library defines this for functions which it implements
to always fail with ENOSYS. Some functions are actually named
something starting with __ and the normal name is an alias. */
#if defined __stub_$2 || defined __stub___$2
choke me
#endif
int
main ()
{
return $2 ();
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
eval "$3=yes"
else
eval "$3=no"
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
eval ac_res=\$$3
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
$as_echo "$ac_res" >&6; }
eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
} # ac_fn_c_check_func
# ac_fn_c_check_type LINENO TYPE VAR INCLUDES # ac_fn_c_check_type LINENO TYPE VAR INCLUDES
# ------------------------------------------- # -------------------------------------------
# Tests whether TYPE exists after having included INCLUDES, setting cache # Tests whether TYPE exists after having included INCLUDES, setting cache
...@@ -2476,7 +2490,10 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu ...@@ -2476,7 +2490,10 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
rev=1 CONFIG=`echo ${ac_configure_args} | sed -e 's#'"'"'\([^ ]*\)'"'"'#\1#g'`
rev=2
srcroot=$srcdir srcroot=$srcdir
...@@ -3488,6 +3505,42 @@ fi ...@@ -3488,6 +3505,42 @@ fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror=declaration-after-statement" >&5
$as_echo_n "checking whether compiler supports -Werror=declaration-after-statement... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror=declaration-after-statement"
else
CFLAGS="${CFLAGS} -Werror=declaration-after-statement"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror=declaration-after-statement
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
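For reference, the construct this newly probed flag turns into a hard error is the C90 "declaration after statement" (a minimal illustration, not code from jemalloc):

  int f(void) {
      int a = 1;
      a += 1;
      int b = 2;  /* error under -Werror=declaration-after-statement */
      return a + b;
  }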
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -pipe" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -pipe" >&5
$as_echo_n "checking whether compiler supports -pipe... " >&6; } $as_echo_n "checking whether compiler supports -pipe... " >&6; }
TCFLAGS="${CFLAGS}" TCFLAGS="${CFLAGS}"
...@@ -3669,7 +3722,43 @@ $as_echo "no" >&6; } ...@@ -3669,7 +3722,43 @@ $as_echo "no" >&6; }
fi fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
CPPFLAGS="$CPPFLAGS -I${srcroot}/include/msvc_compat"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -FS" >&5
$as_echo_n "checking whether compiler supports -FS... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-FS"
else
CFLAGS="${CFLAGS} -FS"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-FS
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat"
fi fi
fi fi
if test "x$EXTRA_CFLAGS" != "x" ; then if test "x$EXTRA_CFLAGS" != "x" ; then
...@@ -4338,6 +4427,10 @@ _ACEOF ...@@ -4338,6 +4427,10 @@ _ACEOF
fi fi
if test "x${je_cv_msvc}" = "xyes" -a "x${ac_cv_header_inttypes_h}" = "xno"; then
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99"
fi
# The cast to long int works around a bug in the HP C Compiler # The cast to long int works around a bug in the HP C Compiler
# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects # version HP92453-01 B.11.11.23709.GP, which incorrectly rejects
# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'. # declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.
...@@ -4622,9 +4715,10 @@ case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac ...@@ -4622,9 +4715,10 @@ case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac
CPU_SPINWAIT="" CPU_SPINWAIT=""
case "${host_cpu}" in case "${host_cpu}" in
i[345]86)
;;
i686|x86_64) i686|x86_64)
if ${je_cv_pause+:} false; then :
$as_echo_n "(cached) " >&6
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pause instruction is compilable" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pause instruction is compilable" >&5
$as_echo_n "checking whether pause instruction is compilable... " >&6; } $as_echo_n "checking whether pause instruction is compilable... " >&6; }
...@@ -4653,44 +4747,10 @@ fi ...@@ -4653,44 +4747,10 @@ fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_pause" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_pause" >&5
$as_echo "$je_cv_pause" >&6; } $as_echo "$je_cv_pause" >&6; }
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether SSE2 intrinsics is compilable" >&5
$as_echo_n "checking whether SSE2 intrinsics is compilable... " >&6; }
if ${je_cv_sse2+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <emmintrin.h>
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_sse2=yes
else
je_cv_sse2=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_sse2" >&5
$as_echo "$je_cv_sse2" >&6; }
if test "x${je_cv_sse2}" = "xyes" ; then
cat >>confdefs.h <<_ACEOF
#define HAVE_SSE2
_ACEOF
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi fi
;; ;;
powerpc) powerpc)
...@@ -4822,9 +4882,9 @@ fi ...@@ -4822,9 +4882,9 @@ fi
default_munmap="1" default_munmap="1"
JEMALLOC_USABLE_SIZE_CONST="const" maps_coalesce="1"
case "${host}" in case "${host}" in
*-*-darwin*) *-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS" CFLAGS="$CFLAGS"
abi="macho" abi="macho"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h $as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
...@@ -4834,7 +4894,7 @@ case "${host}" in ...@@ -4834,7 +4894,7 @@ case "${host}" in
so="dylib" so="dylib"
importlib="${so}" importlib="${so}"
force_tls="0" force_tls="0"
DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)' DSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'
SOREV="${rev}.${so}" SOREV="${rev}.${so}"
sbrk_deprecated="1" sbrk_deprecated="1"
;; ;;
...@@ -4844,6 +4904,25 @@ case "${host}" in ...@@ -4844,6 +4904,25 @@ case "${host}" in
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h $as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
force_lazy_lock="1" force_lazy_lock="1"
;;
*-*-dragonfly*)
CFLAGS="$CFLAGS"
abi="elf"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
;;
*-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
force_tls="0"
;;
*-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
;; ;;
*-*-linux*) *-*-linux*)
CFLAGS="$CFLAGS" CFLAGS="$CFLAGS"
...@@ -4855,7 +4934,8 @@ case "${host}" in ...@@ -4855,7 +4934,8 @@ case "${host}" in
$as_echo "#define JEMALLOC_THREADED_INIT " >>confdefs.h $as_echo "#define JEMALLOC_THREADED_INIT " >>confdefs.h
JEMALLOC_USABLE_SIZE_CONST="" $as_echo "#define JEMALLOC_USE_CXX_THROW " >>confdefs.h
default_munmap="0" default_munmap="0"
;; ;;
*-*-netbsd*) *-*-netbsd*)
...@@ -4905,9 +4985,11 @@ $as_echo "$abi" >&6; } ...@@ -4905,9 +4985,11 @@ $as_echo "$abi" >&6; }
fi fi
abi="xcoff" abi="xcoff"
;; ;;
*-*-mingw*) *-*-mingw* | *-*-cygwin*)
abi="pecoff" abi="pecoff"
force_tls="0" force_tls="0"
force_lazy_lock="1"
maps_coalesce="0"
RPATH="" RPATH=""
so="dll" so="dll"
if test "x$je_cv_msvc" = "xyes" ; then if test "x$je_cv_msvc" = "xyes" ; then
...@@ -4935,8 +5017,52 @@ $as_echo "Unsupported operating system: ${host}" >&6; } ...@@ -4935,8 +5017,52 @@ $as_echo "Unsupported operating system: ${host}" >&6; }
abi="elf" abi="elf"
;; ;;
esac esac
cat >>confdefs.h <<_ACEOF
#define JEMALLOC_USABLE_SIZE_CONST $JEMALLOC_USABLE_SIZE_CONST JEMALLOC_USABLE_SIZE_CONST=const
for ac_header in malloc.h
do :
ac_fn_c_check_header_mongrel "$LINENO" "malloc.h" "ac_cv_header_malloc_h" "$ac_includes_default"
if test "x$ac_cv_header_malloc_h" = xyes; then :
cat >>confdefs.h <<_ACEOF
#define HAVE_MALLOC_H 1
_ACEOF
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether malloc_usable_size definition can use const argument" >&5
$as_echo_n "checking whether malloc_usable_size definition can use const argument... " >&6; }
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <malloc.h>
#include <stddef.h>
size_t malloc_usable_size(const void *ptr);
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
JEMALLOC_USABLE_SIZE_CONST=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
fi
done
cat >>confdefs.h <<_ACEOF
#define JEMALLOC_USABLE_SIZE_CONST $JEMALLOC_USABLE_SIZE_CONST
_ACEOF _ACEOF
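
This replaces the old per-OS hardcoding of JEMALLOC_USABLE_SIZE_CONST (removed from the Linux case above) with a feature probe: if the malloc.h in use declares malloc_usable_size without const, as glibc's does, redeclaring it with a const argument fails to compile and the qualifier is dropped. The macro is then spliced into the public prototype, roughly as below (a sketch, not the verbatim generated header):

    #include <stddef.h>

    /* Expands to `const` when the probe succeeds; to nothing when the
     * system header (e.g. glibc's) declares the argument non-const. */
    #define JEMALLOC_USABLE_SIZE_CONST const

    size_t malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr);
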
...@@ -5079,7 +5205,7 @@ int ...@@ -5079,7 +5205,7 @@ int
main () main ()
{ {
static __thread int static __thread int
__attribute__((tls_model("initial-exec"))) foo; __attribute__((tls_model("initial-exec"), unused)) foo;
foo = 0; foo = 0;
; ;
return 0; return 0;
...@@ -5104,6 +5230,216 @@ else ...@@ -5104,6 +5230,216 @@ else
$as_echo "#define JEMALLOC_TLS_MODEL " >>confdefs.h $as_echo "#define JEMALLOC_TLS_MODEL " >>confdefs.h
fi fi
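
The initial-exec TLS model resolves thread-local slots at load time, avoiding the __tls_get_addr call of the default global-dynamic model on every access; that matters for an allocator that consults TLS on each malloc. The `unused` attribute is new in this probe, presumably so -Werror builds do not fail on the otherwise unreferenced variable. A sketch of the resulting declaration (variable name illustrative):

    /* Load-time-resolved TLS; appropriate for libraries loaded at
     * startup rather than dlopen'ed arbitrarily late. */
    static __thread int tls_data
        __attribute__((tls_model("initial-exec"), unused));
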
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether alloc_size attribute is compilable" >&5
$as_echo_n "checking whether alloc_size attribute is compilable... " >&6; }
if ${je_cv_alloc_size+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(size_t size) __attribute__((alloc_size(1)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_alloc_size=yes
else
je_cv_alloc_size=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_alloc_size" >&5
$as_echo "$je_cv_alloc_size" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_alloc_size}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_ALLOC_SIZE " >>confdefs.h
fi
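
alloc_size(1) tells the compiler that the returned pointer addresses a buffer whose size is the first argument, which feeds __builtin_object_size and out-of-bounds diagnostics. A self-contained sketch (my_alloc is illustrative, not a jemalloc symbol):

    #include <stdlib.h>
    #include <string.h>

    void *my_alloc(size_t size) __attribute__((alloc_size(1)));

    void *
    my_alloc(size_t size)
    {
        return malloc(size);
    }

    int
    main(void)
    {
        char *p = my_alloc(8);
        memset(p, 0, 8);  /* fine */
        /* memset(p, 0, 16);  -- with the attribute, GCC can warn that
         * this writes 16 bytes into an 8-byte allocation */
        free(p);
        return 0;
    }
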
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether format(gnu_printf, ...) attribute is compilable" >&5
$as_echo_n "checking whether format(gnu_printf, ...) attribute is compilable... " >&6; }
if ${je_cv_format_gnu_printf+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_format_gnu_printf=yes
else
je_cv_format_gnu_printf=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_format_gnu_printf" >&5
$as_echo "$je_cv_format_gnu_printf" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_gnu_printf}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF " >>confdefs.h
fi
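
The gnu_printf archetype exists mainly for Windows targets, where GCC's default printf archetype follows the Microsoft runtime (no %zu, different long-long handling) while jemalloc's internal formatter implements glibc-style conversions. On compilers that accept it, the attribute makes -Wformat check arguments against GNU semantics; a sketch with an illustrative wrapper:

    #include <stdarg.h>
    #include <stdio.h>

    void my_printf(const char *format, ...)
        __attribute__((format(gnu_printf, 1, 2)));

    void
    my_printf(const char *format, ...)
    {
        va_list ap;
        va_start(ap, format);
        vprintf(format, ap);
        va_end(ap);
    }

    int
    main(void)
    {
        my_printf("%zu\n", sizeof(int)); /* checked as a GNU format */
        /* my_printf("%s\n", 42);           -Wformat would flag this */
        return 0;
    }
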
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether format(printf, ...) attribute is compilable" >&5
$as_echo_n "checking whether format(printf, ...) attribute is compilable... " >&6; }
if ${je_cv_format_printf+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_format_printf=yes
else
je_cv_format_printf=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_format_printf" >&5
$as_echo "$je_cv_format_printf" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_printf}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_FORMAT_PRINTF " >>confdefs.h
fi
# Check whether --with-rpath was given. # Check whether --with-rpath was given.
...@@ -5403,7 +5739,7 @@ fi ...@@ -5403,7 +5739,7 @@ fi
public_syms="malloc_conf malloc_message malloc calloc posix_memalign aligned_alloc realloc free mallocx rallocx xallocx sallocx dallocx nallocx mallctl mallctlnametomib mallctlbymib malloc_stats_print malloc_usable_size" public_syms="malloc_conf malloc_message malloc calloc posix_memalign aligned_alloc realloc free mallocx rallocx xallocx sallocx dallocx sdallocx nallocx mallctl mallctlnametomib mallctlbymib malloc_stats_print malloc_usable_size"
ac_fn_c_check_func "$LINENO" "memalign" "ac_cv_func_memalign" ac_fn_c_check_func "$LINENO" "memalign" "ac_cv_func_memalign"
if test "x$ac_cv_func_memalign" = xyes; then : if test "x$ac_cv_func_memalign" = xyes; then :
...@@ -5420,26 +5756,6 @@ if test "x$ac_cv_func_valloc" = xyes; then : ...@@ -5420,26 +5756,6 @@ if test "x$ac_cv_func_valloc" = xyes; then :
fi fi
# Check whether --enable-experimental was given.
if test "${enable_experimental+set}" = set; then :
enableval=$enable_experimental; if test "x$enable_experimental" = "xno" ; then
enable_experimental="0"
else
enable_experimental="1"
fi
else
enable_experimental="1"
fi
if test "x$enable_experimental" = "x1" ; then
$as_echo "#define JEMALLOC_EXPERIMENTAL " >>confdefs.h
public_syms="${public_syms} allocm dallocm nallocm rallocm sallocm"
fi
GCOV_FLAGS= GCOV_FLAGS=
# Check whether --enable-code-coverage was given. # Check whether --enable-code-coverage was given.
if test "${enable_code_coverage+set}" = set; then : if test "${enable_code_coverage+set}" = set; then :
...@@ -5572,6 +5888,7 @@ _ACEOF ...@@ -5572,6 +5888,7 @@ _ACEOF
fi fi
# Check whether --with-export was given. # Check whether --with-export was given.
if test "${with_export+set}" = set; then : if test "${with_export+set}" = set; then :
withval=$with_export; if test "x$with_export" = "xno"; then withval=$with_export; if test "x$with_export" = "xno"; then
...@@ -5613,48 +5930,54 @@ install_suffix="$INSTALL_SUFFIX" ...@@ -5613,48 +5930,54 @@ install_suffix="$INSTALL_SUFFIX"
je_="je_" je_="je_"
cfgoutputs_in="${srcroot}Makefile.in" cfgoutputs_in="Makefile.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/html.xsl.in" cfgoutputs_in="${cfgoutputs_in} jemalloc.pc.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/manpages.xsl.in" cfgoutputs_in="${cfgoutputs_in} doc/html.xsl.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/jemalloc.xml.in" cfgoutputs_in="${cfgoutputs_in} doc/manpages.xsl.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/jemalloc_macros.h.in" cfgoutputs_in="${cfgoutputs_in} doc/jemalloc.xml.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/jemalloc_protos.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_macros.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/internal/jemalloc_internal.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_protos.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}test/test.sh.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_typedefs.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}test/include/test/jemalloc_test.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/internal/jemalloc_internal.h.in"
cfgoutputs_in="${cfgoutputs_in} test/test.sh.in"
cfgoutputs_in="${cfgoutputs_in} test/include/test/jemalloc_test.h.in"
cfgoutputs_out="Makefile" cfgoutputs_out="Makefile"
cfgoutputs_out="${cfgoutputs_out} jemalloc.pc"
cfgoutputs_out="${cfgoutputs_out} doc/html.xsl" cfgoutputs_out="${cfgoutputs_out} doc/html.xsl"
cfgoutputs_out="${cfgoutputs_out} doc/manpages.xsl" cfgoutputs_out="${cfgoutputs_out} doc/manpages.xsl"
cfgoutputs_out="${cfgoutputs_out} doc/jemalloc.xml" cfgoutputs_out="${cfgoutputs_out} doc/jemalloc.xml"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_macros.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_macros.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_protos.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_protos.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_typedefs.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/internal/jemalloc_internal.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/internal/jemalloc_internal.h"
cfgoutputs_out="${cfgoutputs_out} test/test.sh" cfgoutputs_out="${cfgoutputs_out} test/test.sh"
cfgoutputs_out="${cfgoutputs_out} test/include/test/jemalloc_test.h" cfgoutputs_out="${cfgoutputs_out} test/include/test/jemalloc_test.h"
cfgoutputs_tup="Makefile" cfgoutputs_tup="Makefile"
cfgoutputs_tup="${cfgoutputs_tup} jemalloc.pc:jemalloc.pc.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in" cfgoutputs_tup="${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in" cfgoutputs_tup="${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in" cfgoutputs_tup="${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_typedefs.h:include/jemalloc/jemalloc_typedefs.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/internal/jemalloc_internal.h" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/internal/jemalloc_internal.h"
cfgoutputs_tup="${cfgoutputs_tup} test/test.sh:test/test.sh.in" cfgoutputs_tup="${cfgoutputs_tup} test/test.sh:test/test.sh.in"
cfgoutputs_tup="${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in" cfgoutputs_tup="${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in"
cfghdrs_in="${srcroot}include/jemalloc/jemalloc_defs.h.in" cfghdrs_in="include/jemalloc/jemalloc_defs.h.in"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/jemalloc_internal_defs.h.in" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_namespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_namespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_unnamespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_symbols.txt" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_symbols.txt"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/public_namespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_namespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/public_unnamespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/size_classes.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/size_classes.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc_rename.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_rename.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc_mangle.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_mangle.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}test/include/test/jemalloc_test_defs.h.in" cfghdrs_in="${cfghdrs_in} test/include/test/jemalloc_test_defs.h.in"
cfghdrs_out="include/jemalloc/jemalloc_defs.h" cfghdrs_out="include/jemalloc/jemalloc_defs.h"
cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h"
...@@ -5672,8 +5995,8 @@ cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h" ...@@ -5672,8 +5995,8 @@ cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h"
cfghdrs_out="${cfghdrs_out} test/include/test/jemalloc_test_defs.h" cfghdrs_out="${cfghdrs_out} test/include/test/jemalloc_test_defs.h"
cfghdrs_tup="include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in" cfghdrs_tup="include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:${srcroot}include/jemalloc/internal/jemalloc_internal_defs.h.in" cfghdrs_tup="${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:${srcroot}test/include/test/jemalloc_test_defs.h.in" cfghdrs_tup="${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:test/include/test/jemalloc_test_defs.h.in"
# Check whether --enable-cc-silence was given. # Check whether --enable-cc-silence was given.
if test "${enable_cc_silence+set}" = set; then : if test "${enable_cc_silence+set}" = set; then :
...@@ -5684,7 +6007,7 @@ else ...@@ -5684,7 +6007,7 @@ else
fi fi
else else
enable_cc_silence="0" enable_cc_silence="1"
fi fi
...@@ -5706,6 +6029,10 @@ else ...@@ -5706,6 +6029,10 @@ else
fi fi
if test "x$enable_debug" = "x1" ; then
$as_echo "#define JEMALLOC_DEBUG " >>confdefs.h
fi
if test "x$enable_debug" = "x1" ; then if test "x$enable_debug" = "x1" ; then
$as_echo "#define JEMALLOC_DEBUG " >>confdefs.h $as_echo "#define JEMALLOC_DEBUG " >>confdefs.h
...@@ -5969,9 +6296,9 @@ fi ...@@ -5969,9 +6296,9 @@ fi
done done
if test "x$LUNWIND" = "x-lunwind" ; then if test "x$LUNWIND" = "x-lunwind" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for backtrace in -lunwind" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for unw_backtrace in -lunwind" >&5
$as_echo_n "checking for backtrace in -lunwind... " >&6; } $as_echo_n "checking for unw_backtrace in -lunwind... " >&6; }
if ${ac_cv_lib_unwind_backtrace+:} false; then : if ${ac_cv_lib_unwind_unw_backtrace+:} false; then :
$as_echo_n "(cached) " >&6 $as_echo_n "(cached) " >&6
else else
ac_check_lib_save_LIBS=$LIBS ac_check_lib_save_LIBS=$LIBS
...@@ -5985,27 +6312,27 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext ...@@ -5985,27 +6312,27 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
#ifdef __cplusplus #ifdef __cplusplus
extern "C" extern "C"
#endif #endif
char backtrace (); char unw_backtrace ();
int int
main () main ()
{ {
return backtrace (); return unw_backtrace ();
; ;
return 0; return 0;
} }
_ACEOF _ACEOF
if ac_fn_c_try_link "$LINENO"; then : if ac_fn_c_try_link "$LINENO"; then :
ac_cv_lib_unwind_backtrace=yes ac_cv_lib_unwind_unw_backtrace=yes
else else
ac_cv_lib_unwind_backtrace=no ac_cv_lib_unwind_unw_backtrace=no
fi fi
rm -f core conftest.err conftest.$ac_objext \ rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext conftest$ac_exeext conftest.$ac_ext
LIBS=$ac_check_lib_save_LIBS LIBS=$ac_check_lib_save_LIBS
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_unwind_backtrace" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_unwind_unw_backtrace" >&5
$as_echo "$ac_cv_lib_unwind_backtrace" >&6; } $as_echo "$ac_cv_lib_unwind_unw_backtrace" >&6; }
if test "x$ac_cv_lib_unwind_backtrace" = xyes; then : if test "x$ac_cv_lib_unwind_unw_backtrace" = xyes; then :
LIBS="$LIBS $LUNWIND" LIBS="$LIBS $LUNWIND"
else else
enable_prof_libunwind="0" enable_prof_libunwind="0"
...@@ -6168,11 +6495,6 @@ $as_echo_n "checking configured backtracing method... " >&6; } ...@@ -6168,11 +6495,6 @@ $as_echo_n "checking configured backtracing method... " >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $backtrace_method" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $backtrace_method" >&5
$as_echo "$backtrace_method" >&6; } $as_echo "$backtrace_method" >&6; }
if test "x$enable_prof" = "x1" ; then if test "x$enable_prof" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
as_fn_error $? "Heap profiling requires TLS" "$LINENO" 5;
fi
force_tls="1"
if test "x$abi" != "xpecoff"; then if test "x$abi" != "xpecoff"; then
LIBS="$LIBS -lm" LIBS="$LIBS -lm"
fi fi
...@@ -6201,63 +6523,11 @@ if test "x$enable_tcache" = "x1" ; then ...@@ -6201,63 +6523,11 @@ if test "x$enable_tcache" = "x1" ; then
fi fi
# Check whether --enable-mremap was given. if test "x${maps_coalesce}" = "x1" ; then
if test "${enable_mremap+set}" = set; then : $as_echo "#define JEMALLOC_MAPS_COALESCE " >>confdefs.h
enableval=$enable_mremap; if test "x$enable_mremap" = "xno" ; then
enable_mremap="0"
else
enable_mremap="1"
fi
else
enable_mremap="0"
fi fi
if test "x$enable_mremap" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether mremap(...MREMAP_FIXED...) is compilable" >&5
$as_echo_n "checking whether mremap(...MREMAP_FIXED...) is compilable... " >&6; }
if ${je_cv_mremap_fixed+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#define _GNU_SOURCE
#include <sys/mman.h>
int
main ()
{
void *p = mremap((void *)0, 0, 0, MREMAP_MAYMOVE|MREMAP_FIXED, (void *)0);
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_mremap_fixed=yes
else
je_cv_mremap_fixed=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_mremap_fixed" >&5
$as_echo "$je_cv_mremap_fixed" >&6; }
if test "x${je_cv_mremap_fixed}" = "xno" ; then
enable_mremap="0"
fi
fi
if test "x$enable_mremap" = "x1" ; then
$as_echo "#define JEMALLOC_MREMAP " >>confdefs.h
fi
# Check whether --enable-munmap was given. # Check whether --enable-munmap was given.
if test "${enable_munmap+set}" = set; then : if test "${enable_munmap+set}" = set; then :
enableval=$enable_munmap; if test "x$enable_munmap" = "xno" ; then enableval=$enable_munmap; if test "x$enable_munmap" = "xno" ; then
...@@ -6277,19 +6547,7 @@ if test "x$enable_munmap" = "x1" ; then ...@@ -6277,19 +6547,7 @@ if test "x$enable_munmap" = "x1" ; then
fi fi
# Check whether --enable-dss was given. have_dss="1"
if test "${enable_dss+set}" = set; then :
enableval=$enable_dss; if test "x$enable_dss" = "xno" ; then
enable_dss="0"
else
enable_dss="1"
fi
else
enable_dss="0"
fi
ac_fn_c_check_func "$LINENO" "sbrk" "ac_cv_func_sbrk" ac_fn_c_check_func "$LINENO" "sbrk" "ac_cv_func_sbrk"
if test "x$ac_cv_func_sbrk" = xyes; then : if test "x$ac_cv_func_sbrk" = xyes; then :
have_sbrk="1" have_sbrk="1"
...@@ -6298,24 +6556,20 @@ else ...@@ -6298,24 +6556,20 @@ else
fi fi
if test "x$have_sbrk" = "x1" ; then if test "x$have_sbrk" = "x1" ; then
if test "x$sbrk_deprecated" == "x1" ; then if test "x$sbrk_deprecated" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Disabling dss allocation because sbrk is deprecated" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: Disabling dss allocation because sbrk is deprecated" >&5
$as_echo "Disabling dss allocation because sbrk is deprecated" >&6; } $as_echo "Disabling dss allocation because sbrk is deprecated" >&6; }
enable_dss="0" have_dss="0"
else
$as_echo "#define JEMALLOC_HAVE_SBRK " >>confdefs.h
fi fi
else else
enable_dss="0" have_dss="0"
fi fi
if test "x$enable_dss" = "x1" ; then if test "x$have_dss" = "x1" ; then
$as_echo "#define JEMALLOC_DSS " >>confdefs.h $as_echo "#define JEMALLOC_DSS " >>confdefs.h
fi fi
# Check whether --enable-fill was given. # Check whether --enable-fill was given.
if test "${enable_fill+set}" = set; then : if test "${enable_fill+set}" = set; then :
enableval=$enable_fill; if test "x$enable_fill" = "xno" ; then enableval=$enable_fill; if test "x$enable_fill" = "xno" ; then
...@@ -6471,16 +6725,159 @@ if test "x$enable_xmalloc" = "x1" ; then ...@@ -6471,16 +6725,159 @@ if test "x$enable_xmalloc" = "x1" ; then
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking STATIC_PAGE_SHIFT" >&5 # Check whether --enable-cache-oblivious was given.
$as_echo_n "checking STATIC_PAGE_SHIFT... " >&6; } if test "${enable_cache_oblivious+set}" = set; then :
if ${je_cv_static_page_shift+:} false; then : enableval=$enable_cache_oblivious; if test "x$enable_cache_oblivious" = "xno" ; then
enable_cache_oblivious="0"
else
enable_cache_oblivious="1"
fi
else
enable_cache_oblivious="1"
fi
if test "x$enable_cache_oblivious" = "x1" ; then
$as_echo "#define JEMALLOC_CACHE_OBLIVIOUS " >>confdefs.h
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program using __builtin_ffsl is compilable" >&5
$as_echo_n "checking whether a program using __builtin_ffsl is compilable... " >&6; }
if ${je_cv_gcc_builtin_ffsl+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdio.h>
#include <strings.h>
#include <string.h>
int
main ()
{
{
int rv = __builtin_ffsl(0x08);
printf("%d\n", rv);
}
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_gcc_builtin_ffsl=yes
else
je_cv_gcc_builtin_ffsl=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_builtin_ffsl" >&5
$as_echo "$je_cv_gcc_builtin_ffsl" >&6; }
if test "x${je_cv_gcc_builtin_ffsl}" = "xyes" ; then
$as_echo "#define JEMALLOC_INTERNAL_FFSL __builtin_ffsl" >>confdefs.h
$as_echo "#define JEMALLOC_INTERNAL_FFS __builtin_ffs" >>confdefs.h
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program using ffsl is compilable" >&5
$as_echo_n "checking whether a program using ffsl is compilable... " >&6; }
if ${je_cv_function_ffsl+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdio.h>
#include <strings.h>
#include <string.h>
int
main ()
{
{
int rv = ffsl(0x08);
printf("%d\n", rv);
}
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_function_ffsl=yes
else
je_cv_function_ffsl=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_function_ffsl" >&5
$as_echo "$je_cv_function_ffsl" >&6; }
if test "x${je_cv_function_ffsl}" = "xyes" ; then
$as_echo "#define JEMALLOC_INTERNAL_FFSL ffsl" >>confdefs.h
$as_echo "#define JEMALLOC_INTERNAL_FFS ffs" >>confdefs.h
else
as_fn_error $? "Cannot build without ffsl(3) or __builtin_ffsl()" "$LINENO" 5
fi
fi
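
Find-first-set underpins jemalloc's bit arithmetic: the block above resolves JEMALLOC_INTERNAL_FFSL to the builtin where available, falls back to ffsl(3), and aborts configuration when neither exists. For a power of two, ffsl(x) - 1 is exactly lg(x), which is how the LG_PAGE probe further down uses it. A worked example:

    #include <stdio.h>

    int
    main(void)
    {
        long page = 4096; /* 2^12 */
        /* ffsl returns the 1-based index of the least significant set
         * bit, so for powers of two it yields lg(x) + 1. */
        printf("lg(%ld) = %d\n", page, __builtin_ffsl(page) - 1); /* 12 */
        return 0;
    }
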
# Check whether --with-lg_tiny_min was given.
if test "${with_lg_tiny_min+set}" = set; then :
withval=$with_lg_tiny_min; LG_TINY_MIN="$with_lg_tiny_min"
else
LG_TINY_MIN="3"
fi
cat >>confdefs.h <<_ACEOF
#define LG_TINY_MIN $LG_TINY_MIN
_ACEOF
# Check whether --with-lg_quantum was given.
if test "${with_lg_quantum+set}" = set; then :
withval=$with_lg_quantum; LG_QUANTA="$with_lg_quantum"
else
LG_QUANTA="3 4"
fi
if test "x$with_lg_quantum" != "x" ; then
cat >>confdefs.h <<_ACEOF
#define LG_QUANTUM $with_lg_quantum
_ACEOF
fi
# Check whether --with-lg_page was given.
if test "${with_lg_page+set}" = set; then :
withval=$with_lg_page; LG_PAGE="$with_lg_page"
else
LG_PAGE="detect"
fi
if test "x$LG_PAGE" = "xdetect"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking LG_PAGE" >&5
$as_echo_n "checking LG_PAGE... " >&6; }
if ${je_cv_lg_page+:} false; then :
$as_echo_n "(cached) " >&6 $as_echo_n "(cached) " >&6
else else
if test "$cross_compiling" = yes; then : if test "$cross_compiling" = yes; then :
{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 je_cv_lg_page=12
$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
as_fn_error $? "cannot run test program while cross compiling
See \`config.log' for more details" "$LINENO" 5; }
else else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */ /* end confdefs.h. */
...@@ -6510,7 +6907,7 @@ main () ...@@ -6510,7 +6907,7 @@ main ()
if (result == -1) { if (result == -1) {
return 1; return 1;
} }
result = ffsl(result) - 1; result = JEMALLOC_INTERNAL_FFSL(result) - 1;
f = fopen("conftest.out", "w"); f = fopen("conftest.out", "w");
if (f == NULL) { if (f == NULL) {
...@@ -6525,33 +6922,77 @@ main () ...@@ -6525,33 +6922,77 @@ main ()
return 0; return 0;
} }
_ACEOF _ACEOF
if ac_fn_c_try_run "$LINENO"; then : if ac_fn_c_try_run "$LINENO"; then :
je_cv_static_page_shift=`cat conftest.out` je_cv_lg_page=`cat conftest.out`
else
je_cv_lg_page=undefined
fi
rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_lg_page" >&5
$as_echo "$je_cv_lg_page" >&6; }
fi
if test "x${je_cv_lg_page}" != "x" ; then
LG_PAGE="${je_cv_lg_page}"
fi
if test "x${LG_PAGE}" != "xundefined" ; then
cat >>confdefs.h <<_ACEOF
#define LG_PAGE $LG_PAGE
_ACEOF
else else
je_cv_static_page_shift=undefined as_fn_error $? "cannot determine value for LG_PAGE" "$LINENO" 5
fi
rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
conftest.$ac_objext conftest.beam conftest.$ac_ext
fi fi
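
When not cross-compiling, the probe runs a program roughly equivalent to the following to derive LG_PAGE (the real conftest also handles Windows and writes the result to conftest.out; under cross-compilation it assumes 4 KiB pages, i.e. je_cv_lg_page=12):

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        long result = sysconf(_SC_PAGESIZE);
        if (result == -1)
            return 1;
        /* Page sizes are powers of two, so ffsl gives lg directly. */
        printf("LG_PAGE = %d\n", __builtin_ffsl(result) - 1);
        return 0;
    }
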
# Check whether --with-lg_page_sizes was given.
if test "${with_lg_page_sizes+set}" = set; then :
withval=$with_lg_page_sizes; LG_PAGE_SIZES="$with_lg_page_sizes"
else
LG_PAGE_SIZES="$LG_PAGE"
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_static_page_shift" >&5
$as_echo "$je_cv_static_page_shift" >&6; }
if test "x$je_cv_static_page_shift" != "xundefined"; then
cat >>confdefs.h <<_ACEOF
#define STATIC_PAGE_SHIFT $je_cv_static_page_shift
_ACEOF
# Check whether --with-lg_size_class_group was given.
if test "${with_lg_size_class_group+set}" = set; then :
withval=$with_lg_size_class_group; LG_SIZE_CLASS_GROUP="$with_lg_size_class_group"
else else
as_fn_error $? "cannot determine value for STATIC_PAGE_SHIFT" "$LINENO" 5 LG_SIZE_CLASS_GROUP="2"
fi fi
if test -d "${srcroot}.git" ; then
git describe --long --abbrev=40 > ${srcroot}VERSION if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
rm -f "${objroot}VERSION"
for pattern in '[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]'; do
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi
done
fi
rm -f "${objroot}VERSION.tmp"
if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Missing VERSION file, and unable to generate it; creating bogus VERSION" >&5
$as_echo "Missing VERSION file, and unable to generate it; creating bogus VERSION" >&6; }
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${objroot}VERSION"
else
cp ${srcroot}VERSION ${objroot}VERSION
fi
fi fi
jemalloc_version=`cat ${srcroot}VERSION` jemalloc_version=`cat "${objroot}VERSION"`
jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $1}'` jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $1}'`
jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $2}'` jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $2}'`
jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $3}'` jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $3}'`
...@@ -6683,6 +7124,93 @@ fi ...@@ -6683,6 +7124,93 @@ fi
CPPFLAGS="$CPPFLAGS -D_REENTRANT" CPPFLAGS="$CPPFLAGS -D_REENTRANT"
SAVED_LIBS="${LIBS}"
LIBS=
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime" >&5
$as_echo_n "checking for library containing clock_gettime... " >&6; }
if ${ac_cv_search_clock_gettime+:} false; then :
$as_echo_n "(cached) " >&6
else
ac_func_search_save_LIBS=$LIBS
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
/* Override any GCC internal prototype to avoid an error.
Use char because int might match the return type of a GCC
builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char clock_gettime ();
int
main ()
{
return clock_gettime ();
;
return 0;
}
_ACEOF
for ac_lib in '' rt; do
if test -z "$ac_lib"; then
ac_res="none required"
else
ac_res=-l$ac_lib
LIBS="-l$ac_lib $ac_func_search_save_LIBS"
fi
if ac_fn_c_try_link "$LINENO"; then :
ac_cv_search_clock_gettime=$ac_res
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext
if ${ac_cv_search_clock_gettime+:} false; then :
break
fi
done
if ${ac_cv_search_clock_gettime+:} false; then :
else
ac_cv_search_clock_gettime=no
fi
rm conftest.$ac_ext
LIBS=$ac_func_search_save_LIBS
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime" >&5
$as_echo "$ac_cv_search_clock_gettime" >&6; }
ac_res=$ac_cv_search_clock_gettime
if test "$ac_res" != no; then :
test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
TESTLIBS="${LIBS}"
fi
LIBS="${SAVED_LIBS}"
ac_fn_c_check_func "$LINENO" "secure_getenv" "ac_cv_func_secure_getenv"
if test "x$ac_cv_func_secure_getenv" = xyes; then :
have_secure_getenv="1"
else
have_secure_getenv="0"
fi
if test "x$have_secure_getenv" = "x1" ; then
$as_echo "#define JEMALLOC_HAVE_SECURE_GETENV " >>confdefs.h
fi
ac_fn_c_check_func "$LINENO" "issetugid" "ac_cv_func_issetugid"
if test "x$ac_cv_func_issetugid" = xyes; then :
have_issetugid="1"
else
have_issetugid="0"
fi
if test "x$have_issetugid" = "x1" ; then
$as_echo "#define JEMALLOC_HAVE_ISSETUGID " >>confdefs.h
fi
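
Both functions serve the same purpose: letting the allocator ignore environment-driven configuration in setuid/setgid processes, where the environment is attacker-controlled. secure_getenv is the glibc (2.17+) spelling, issetugid the BSD one. A sketch of the guarded lookup — MALLOC_CONF is jemalloc's real option variable, but the surrounding function is illustrative:

    #define _GNU_SOURCE
    #include <stdlib.h>

    const char *
    read_conf(void)
    {
    #ifdef JEMALLOC_HAVE_SECURE_GETENV
        /* Returns NULL in privileged processes. */
        return secure_getenv("MALLOC_CONF");
    #else
        return getenv("MALLOC_CONF");
    #endif
    }
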
ac_fn_c_check_func "$LINENO" "_malloc_thread_cleanup" "ac_cv_func__malloc_thread_cleanup" ac_fn_c_check_func "$LINENO" "_malloc_thread_cleanup" "ac_cv_func__malloc_thread_cleanup"
if test "x$ac_cv_func__malloc_thread_cleanup" = xyes; then : if test "x$ac_cv_func__malloc_thread_cleanup" = xyes; then :
have__malloc_thread_cleanup="1" have__malloc_thread_cleanup="1"
...@@ -6719,11 +7247,11 @@ else ...@@ -6719,11 +7247,11 @@ else
fi fi
else else
enable_lazy_lock="0" enable_lazy_lock=""
fi fi
if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&6; } $as_echo "Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&6; }
enable_lazy_lock="1" enable_lazy_lock="1"
...@@ -6796,6 +7324,8 @@ fi ...@@ -6796,6 +7324,8 @@ fi
fi fi
$as_echo "#define JEMALLOC_LAZY_LOCK " >>confdefs.h $as_echo "#define JEMALLOC_LAZY_LOCK " >>confdefs.h
else
enable_lazy_lock="0"
fi fi
...@@ -6808,19 +7338,22 @@ else ...@@ -6808,19 +7338,22 @@ else
fi fi
else else
enable_tls="1" enable_tls=""
fi fi
if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then if test "x${enable_tls}" = "x" ; then
if test "x${force_tls}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing TLS to avoid allocator/threading bootstrap issues" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing TLS to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing TLS to avoid allocator/threading bootstrap issues" >&6; } $as_echo "Forcing TLS to avoid allocator/threading bootstrap issues" >&6; }
enable_tls="1" enable_tls="1"
fi elif test "x${force_tls}" = "x0" ; then
if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing no TLS to avoid allocator/threading bootstrap issues" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing no TLS to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing no TLS to avoid allocator/threading bootstrap issues" >&6; } $as_echo "Forcing no TLS to avoid allocator/threading bootstrap issues" >&6; }
enable_tls="0" enable_tls="0"
else
enable_tls="1"
fi
fi fi
if test "x${enable_tls}" = "x1" ; then if test "x${enable_tls}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for TLS" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for TLS" >&5
...@@ -6851,56 +7384,69 @@ $as_echo "no" >&6; } ...@@ -6851,56 +7384,69 @@ $as_echo "no" >&6; }
enable_tls="0" enable_tls="0"
fi fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
else
enable_tls="0"
fi fi
if test "x${enable_tls}" = "x1" ; then if test "x${enable_tls}" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: TLS enabled despite being marked unusable on this platform" >&5
$as_echo "$as_me: WARNING: TLS enabled despite being marked unusable on this platform" >&2;}
fi
cat >>confdefs.h <<_ACEOF cat >>confdefs.h <<_ACEOF
#define JEMALLOC_TLS #define JEMALLOC_TLS
_ACEOF _ACEOF
elif test "x${force_tls}" = "x1" ; then elif test "x${force_tls}" = "x1" ; then
as_fn_error $? "Failed to configure TLS, which is mandatory for correct function" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: TLS disabled despite being marked critical on this platform" >&5
$as_echo "$as_me: WARNING: TLS disabled despite being marked critical on this platform" >&2;}
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program using ffsl is compilable" >&5
$as_echo_n "checking whether a program using ffsl is compilable... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C11 atomics is compilable" >&5
if ${je_cv_function_ffsl+:} false; then : $as_echo_n "checking whether C11 atomics is compilable... " >&6; }
if ${je_cv_c11atomics+:} false; then :
$as_echo_n "(cached) " >&6 $as_echo_n "(cached) " >&6
else else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */ /* end confdefs.h. */
#include <stdio.h> #include <stdint.h>
#include <strings.h> #if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <string.h> #include <stdatomic.h>
#else
#error Atomics not available
#endif
int int
main () main ()
{ {
{ uint64_t *p = (uint64_t *)0;
int rv = ffsl(0x08); uint64_t x = 1;
printf("%d\n", rv); volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
} uint64_t r = atomic_fetch_add(a, x) + x;
return (r == 0);
; ;
return 0; return 0;
} }
_ACEOF _ACEOF
if ac_fn_c_try_link "$LINENO"; then : if ac_fn_c_try_link "$LINENO"; then :
je_cv_function_ffsl=yes je_cv_c11atomics=yes
else else
je_cv_function_ffsl=no je_cv_c11atomics=no
fi fi
rm -f core conftest.err conftest.$ac_objext \ rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext conftest$ac_exeext conftest.$ac_ext
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_function_ffsl" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_c11atomics" >&5
$as_echo "$je_cv_function_ffsl" >&6; } $as_echo "$je_cv_c11atomics" >&6; }
if test "x${je_cv_c11atomics}" = "xyes" ; then
$as_echo "#define JEMALLOC_C11ATOMICS 1" >>confdefs.h
if test "x${je_cv_function_ffsl}" != "xyes" ; then
as_fn_error $? "Cannot build without ffsl(3)" "$LINENO" 5
fi fi
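
This check replaces the ffsl probe that previously lived here (ffsl now has its own fallback chain earlier in the script). When it succeeds, jemalloc's atomic operations can use C11 stdatomic rather than compiler-specific builtins. The operation the probe exercises, in runnable form:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        atomic_uint_least64_t counter = ATOMIC_VAR_INIT(0);
        /* atomic_fetch_add returns the prior value; adding the addend
         * back gives the post-increment value, as the probe computes. */
        uint64_t after = atomic_fetch_add(&counter, 1) + 1;
        printf("%llu\n", (unsigned long long)after);
        return 0;
    }
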
...@@ -7002,6 +7548,46 @@ fi ...@@ -7002,6 +7548,46 @@ fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether madvise(2) is compilable" >&5
$as_echo_n "checking whether madvise(2) is compilable... " >&6; }
if ${je_cv_madvise+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <sys/mman.h>
int
main ()
{
{
madvise((void *)0, 0, 0);
}
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_madvise=yes
else
je_cv_madvise=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_madvise" >&5
$as_echo "$je_cv_madvise" >&6; }
if test "x${je_cv_madvise}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_MADVISE " >>confdefs.h
fi
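
madvise(2) is how jemalloc purges dirty pages without giving up the address range: the mapping stays valid, but the kernel may reclaim the physical pages. Which advice value is used is platform-dependent (the JEMALLOC_PURGE_MADVISE_FREE defines added for Darwin and the BSDs above; Linux uses MADV_DONTNEED). A Linux-flavored sketch:

    #include <sys/mman.h>

    int
    main(void)
    {
        size_t len = 1 << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        /* Contents are disposable; pages refault zero-filled on the
         * next touch, so the range need not be remapped. */
        madvise(p, len, MADV_DONTNEED);
        munmap(p, len);
        return 0;
    }
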
if test "x${je_cv_atomic9}" != "xyes" -a "x${je_cv_osatomic}" != "xyes" ; then if test "x${je_cv_atomic9}" != "xyes" -a "x${je_cv_osatomic}" != "xyes" ; then
...@@ -7097,6 +7683,48 @@ $as_echo "$je_cv_sync_compare_and_swap_8" >&6; } ...@@ -7097,6 +7683,48 @@ $as_echo "$je_cv_sync_compare_and_swap_8" >&6; }
fi fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_clz" >&5
$as_echo_n "checking for __builtin_clz... " >&6; }
if ${je_cv_builtin_clz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
{
unsigned x = 0;
int y = __builtin_clz(x);
}
{
unsigned long x = 0;
int y = __builtin_clzl(x);
}
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_builtin_clz=yes
else
je_cv_builtin_clz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_builtin_clz" >&5
$as_echo "$je_cv_builtin_clz" >&6; }
if test "x${je_cv_builtin_clz}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_BUILTIN_CLZ " >>confdefs.h
fi
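
Count-leading-zeros gives a constant-time floor(lg(x)) for nonzero x, useful in size-class math; without the builtin, slower portable code is needed. A worked example (the lg_floor helper name is illustrative):

    #include <stdio.h>

    /* floor(lg(x)) for x > 0: index of the most significant set bit. */
    static int
    lg_floor(unsigned long x)
    {
        return (int)(sizeof(unsigned long) * 8 - 1) - __builtin_clzl(x);
    }

    int
    main(void)
    {
        printf("%d %d %d\n", lg_floor(1), lg_floor(4096), lg_floor(5000));
        /* prints: 0 12 12 */
        return 0;
    }
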
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether Darwin OSSpin*() is compilable" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether Darwin OSSpin*() is compilable" >&5
$as_echo_n "checking whether Darwin OSSpin*() is compilable... " >&6; } $as_echo_n "checking whether Darwin OSSpin*() is compilable... " >&6; }
...@@ -7160,8 +7788,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then ...@@ -7160,8 +7788,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then
if test "x${abi}" != "xmacho"; then if test "x${abi}" != "xmacho"; then
as_fn_error $? "--enable-zone-allocator is only supported on Darwin" "$LINENO" 5 as_fn_error $? "--enable-zone-allocator is only supported on Darwin" "$LINENO" 5
fi fi
$as_echo "#define JEMALLOC_IVSALLOC " >>confdefs.h
$as_echo "#define JEMALLOC_ZONE " >>confdefs.h $as_echo "#define JEMALLOC_ZONE " >>confdefs.h
...@@ -7175,7 +7801,7 @@ $as_echo_n "checking malloc zone version... " >&6; } ...@@ -7175,7 +7801,7 @@ $as_echo_n "checking malloc zone version... " >&6; }
int int
main () main ()
{ {
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 14 ? 1 : -1] static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 14 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7191,7 +7817,7 @@ else ...@@ -7191,7 +7817,7 @@ else
int int
main () main ()
{ {
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 15 ? 1 : -1] static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 15 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7207,7 +7833,7 @@ else ...@@ -7207,7 +7833,7 @@ else
int int
main () main ()
{ {
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 16 ? 1 : -1] static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 16 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7221,7 +7847,7 @@ if ac_fn_c_try_compile "$LINENO"; then : ...@@ -7221,7 +7847,7 @@ if ac_fn_c_try_compile "$LINENO"; then :
int int
main () main ()
{ {
static foo[sizeof(malloc_introspection_t) == sizeof(void *) * 9 ? 1 : -1] static int foo[sizeof(malloc_introspection_t) == sizeof(void *) * 9 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7237,7 +7863,7 @@ else ...@@ -7237,7 +7863,7 @@ else
int int
main () main ()
{ {
static foo[sizeof(malloc_introspection_t) == sizeof(void *) * 13 ? 1 : -1] static int foo[sizeof(malloc_introspection_t) == sizeof(void *) * 13 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7260,7 +7886,7 @@ else ...@@ -7260,7 +7886,7 @@ else
int int
main () main ()
{ {
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 17 ? 1 : -1] static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 17 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7276,7 +7902,7 @@ else ...@@ -7276,7 +7902,7 @@ else
int int
main () main ()
{ {
static foo[sizeof(malloc_zone_t) > sizeof(void *) * 17 ? 1 : -1] static int foo[sizeof(malloc_zone_t) > sizeof(void *) * 17 ? 1 : -1]
; ;
return 0; return 0;
...@@ -7316,6 +7942,131 @@ _ACEOF ...@@ -7316,6 +7942,131 @@ _ACEOF
fi fi
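
Each of these malloc_zone_t probes relies on the same pre-C11 compile-time-assertion idiom, and the change from `static foo[...]` to `static int foo[...]` simply supplies the implicit int that modern compilers reject. The idiom in isolation (macro and array names illustrative):

    #include <stdint.h>

    /* The array size is 1 when the condition holds and -1 (a compile
     * error) when it does not -- a static assert without C11. */
    #define STATIC_ASSERT(cond) \
        static int assert_array_[(cond) ? 1 : -1] __attribute__((unused))

    STATIC_ASSERT(sizeof(uint64_t) == 8);  /* compiles */
    /* STATIC_ASSERT(sizeof(uint64_t) == 9);   would not compile */

    int
    main(void)
    {
        return 0;
    }
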
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether glibc malloc hook is compilable" >&5
$as_echo_n "checking whether glibc malloc hook is compilable... " >&6; }
if ${je_cv_glibc_malloc_hook+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stddef.h>
extern void (* __free_hook)(void *ptr);
extern void *(* __malloc_hook)(size_t size);
extern void *(* __realloc_hook)(void *ptr, size_t size);
int
main ()
{
void *ptr = 0L;
if (__malloc_hook) ptr = __malloc_hook(1);
if (__realloc_hook) ptr = __realloc_hook(ptr, 2);
if (__free_hook && ptr) __free_hook(ptr);
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_glibc_malloc_hook=yes
else
je_cv_glibc_malloc_hook=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_glibc_malloc_hook" >&5
$as_echo "$je_cv_glibc_malloc_hook" >&6; }
if test "x${je_cv_glibc_malloc_hook}" = "xyes" ; then
$as_echo "#define JEMALLOC_GLIBC_MALLOC_HOOK " >>confdefs.h
fi
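
The glibc malloc hooks let code interpose on allocations that flow through glibc's entry points; they are long-deprecated (and removed entirely in glibc 2.34), and the real variables take a trailing caller argument that the probe's simplified declarations omit. A sketch in the style of the old glibc manual example, only meaningful on glibc versions that still ship the hooks:

    #include <malloc.h>
    #include <stdio.h>

    static void *(*old_malloc_hook)(size_t, const void *);

    static void *
    my_malloc_hook(size_t size, const void *caller)
    {
        void *result;

        __malloc_hook = old_malloc_hook;   /* unhook to avoid recursion */
        result = malloc(size);
        fprintf(stderr, "malloc(%zu) from %p -> %p\n", size, caller, result);
        old_malloc_hook = __malloc_hook;
        __malloc_hook = my_malloc_hook;    /* re-arm */
        return result;
    }

    int
    main(void)
    {
        old_malloc_hook = __malloc_hook;
        __malloc_hook = my_malloc_hook;
        free(malloc(16));
        return 0;
    }
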
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether glibc memalign hook is compilable" >&5
$as_echo_n "checking whether glibc memalign hook is compilable... " >&6; }
if ${je_cv_glibc_memalign_hook+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stddef.h>
extern void *(* __memalign_hook)(size_t alignment, size_t size);
int
main ()
{
void *ptr = 0L;
if (__memalign_hook) ptr = __memalign_hook(16, 7);
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_glibc_memalign_hook=yes
else
je_cv_glibc_memalign_hook=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_glibc_memalign_hook" >&5
$as_echo "$je_cv_glibc_memalign_hook" >&6; }
if test "x${je_cv_glibc_memalign_hook}" = "xyes" ; then
$as_echo "#define JEMALLOC_GLIBC_MEMALIGN_HOOK " >>confdefs.h
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads adaptive mutexes is compilable" >&5
$as_echo_n "checking whether pthreads adaptive mutexes is compilable... " >&6; }
if ${je_cv_pthread_mutex_adaptive_np+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <pthread.h>
int
main ()
{
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
pthread_mutexattr_destroy(&attr);
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_pthread_mutex_adaptive_np=yes
else
je_cv_pthread_mutex_adaptive_np=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_mutex_adaptive_np" >&5
$as_echo "$je_cv_pthread_mutex_adaptive_np" >&6; }
if test "x${je_cv_pthread_mutex_adaptive_np}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP " >>confdefs.h
fi
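
PTHREAD_MUTEX_ADAPTIVE_NP is a glibc extension that spins briefly before sleeping in the kernel, a good fit for the short critical sections inside an allocator. An initialization sketch (the lock variable and init_lock function are illustrative):

    #define _GNU_SOURCE
    #include <pthread.h>

    static pthread_mutex_t lock;

    int
    init_lock(void)
    {
        pthread_mutexattr_t attr;
        int err;

        if (pthread_mutexattr_init(&attr) != 0)
            return 1;
        /* Spin a bounded number of times before sleeping, trading a
         * little CPU for lower wake-up latency under short holds. */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
        err = pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return err;
    }
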
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdbool.h that conforms to C99" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdbool.h that conforms to C99" >&5
$as_echo_n "checking for stdbool.h that conforms to C99... " >&6; } $as_echo_n "checking for stdbool.h that conforms to C99... " >&6; }
if ${ac_cv_header_stdbool_h+:} false; then : if ${ac_cv_header_stdbool_h+:} false; then :
...@@ -7440,7 +8191,7 @@ ac_config_headers="$ac_config_headers $cfghdrs_tup" ...@@ -7440,7 +8191,7 @@ ac_config_headers="$ac_config_headers $cfghdrs_tup"
ac_config_files="$ac_config_files $cfgoutputs_tup config.stamp bin/jemalloc.sh" ac_config_files="$ac_config_files $cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof"
...@@ -8158,8 +8909,13 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ...@@ -8158,8 +8909,13 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
objroot="${objroot}" objroot="${objroot}"
SHELL="${SHELL}"
srcdir="${srcdir}" srcdir="${srcdir}"
objroot="${objroot}" objroot="${objroot}"
LG_QUANTA="${LG_QUANTA}"
LG_TINY_MIN=${LG_TINY_MIN}
LG_PAGE_SIZES="${LG_PAGE_SIZES}"
LG_SIZE_CLASS_GROUP=${LG_SIZE_CLASS_GROUP}
srcdir="${srcdir}" srcdir="${srcdir}"
...@@ -8205,7 +8961,9 @@ do ...@@ -8205,7 +8961,9 @@ do
"$cfghdrs_tup") CONFIG_HEADERS="$CONFIG_HEADERS $cfghdrs_tup" ;; "$cfghdrs_tup") CONFIG_HEADERS="$CONFIG_HEADERS $cfghdrs_tup" ;;
"$cfgoutputs_tup") CONFIG_FILES="$CONFIG_FILES $cfgoutputs_tup" ;; "$cfgoutputs_tup") CONFIG_FILES="$CONFIG_FILES $cfgoutputs_tup" ;;
"config.stamp") CONFIG_FILES="$CONFIG_FILES config.stamp" ;; "config.stamp") CONFIG_FILES="$CONFIG_FILES config.stamp" ;;
"bin/jemalloc-config") CONFIG_FILES="$CONFIG_FILES bin/jemalloc-config" ;;
"bin/jemalloc.sh") CONFIG_FILES="$CONFIG_FILES bin/jemalloc.sh" ;; "bin/jemalloc.sh") CONFIG_FILES="$CONFIG_FILES bin/jemalloc.sh" ;;
"bin/jeprof") CONFIG_FILES="$CONFIG_FILES bin/jeprof" ;;
*) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;;
esac esac
...@@ -8795,7 +9553,7 @@ $as_echo "$as_me: executing $ac_file commands" >&6;} ...@@ -8795,7 +9553,7 @@ $as_echo "$as_me: executing $ac_file commands" >&6;}
;; ;;
"include/jemalloc/internal/size_classes.h":C) "include/jemalloc/internal/size_classes.h":C)
mkdir -p "${objroot}include/jemalloc/internal" mkdir -p "${objroot}include/jemalloc/internal"
"${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h" "${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
;; ;;
"include/jemalloc/jemalloc_protos_jet.h":C) "include/jemalloc/jemalloc_protos_jet.h":C)
mkdir -p "${objroot}include/jemalloc" mkdir -p "${objroot}include/jemalloc"
...@@ -8864,18 +9622,22 @@ $as_echo "jemalloc version : ${jemalloc_version}" >&6; } ...@@ -8864,18 +9622,22 @@ $as_echo "jemalloc version : ${jemalloc_version}" >&6; }
$as_echo "library revision : ${rev}" >&6; } $as_echo "library revision : ${rev}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
$as_echo "" >&6; } $as_echo "" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CONFIG : ${CONFIG}" >&5
$as_echo "CONFIG : ${CONFIG}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CC : ${CC}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: CC : ${CC}" >&5
$as_echo "CC : ${CC}" >&6; } $as_echo "CC : ${CC}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CPPFLAGS : ${CPPFLAGS}" >&5
$as_echo "CPPFLAGS : ${CPPFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CFLAGS : ${CFLAGS}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: CFLAGS : ${CFLAGS}" >&5
$as_echo "CFLAGS : ${CFLAGS}" >&6; } $as_echo "CFLAGS : ${CFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CPPFLAGS : ${CPPFLAGS}" >&5
$as_echo "CPPFLAGS : ${CPPFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LDFLAGS : ${LDFLAGS}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: LDFLAGS : ${LDFLAGS}" >&5
$as_echo "LDFLAGS : ${LDFLAGS}" >&6; } $as_echo "LDFLAGS : ${LDFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&5
$as_echo "EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&6; } $as_echo "EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBS : ${LIBS}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBS : ${LIBS}" >&5
$as_echo "LIBS : ${LIBS}" >&6; } $as_echo "LIBS : ${LIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: TESTLIBS : ${TESTLIBS}" >&5
$as_echo "TESTLIBS : ${TESTLIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: RPATH_EXTRA : ${RPATH_EXTRA}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: RPATH_EXTRA : ${RPATH_EXTRA}" >&5
$as_echo "RPATH_EXTRA : ${RPATH_EXTRA}" >&6; } $as_echo "RPATH_EXTRA : ${RPATH_EXTRA}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
...@@ -8890,12 +9652,12 @@ $as_echo "" >&6; } ...@@ -8890,12 +9652,12 @@ $as_echo "" >&6; }
$as_echo "PREFIX : ${PREFIX}" >&6; } $as_echo "PREFIX : ${PREFIX}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: BINDIR : ${BINDIR}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: BINDIR : ${BINDIR}" >&5
$as_echo "BINDIR : ${BINDIR}" >&6; } $as_echo "BINDIR : ${BINDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: DATADIR : ${DATADIR}" >&5
$as_echo "DATADIR : ${DATADIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: INCLUDEDIR : ${INCLUDEDIR}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: INCLUDEDIR : ${INCLUDEDIR}" >&5
$as_echo "INCLUDEDIR : ${INCLUDEDIR}" >&6; } $as_echo "INCLUDEDIR : ${INCLUDEDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBDIR : ${LIBDIR}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBDIR : ${LIBDIR}" >&5
$as_echo "LIBDIR : ${LIBDIR}" >&6; } $as_echo "LIBDIR : ${LIBDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: DATADIR : ${DATADIR}" >&5
$as_echo "DATADIR : ${DATADIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: MANDIR : ${MANDIR}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: MANDIR : ${MANDIR}" >&5
$as_echo "MANDIR : ${MANDIR}" >&6; } $as_echo "MANDIR : ${MANDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
...@@ -8920,8 +9682,6 @@ $as_echo " : ${JEMALLOC_PRIVATE_NAMESPACE}" >&6; } ...@@ -8920,8 +9682,6 @@ $as_echo " : ${JEMALLOC_PRIVATE_NAMESPACE}" >&6; }
$as_echo "install_suffix : ${install_suffix}" >&6; } $as_echo "install_suffix : ${install_suffix}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: autogen : ${enable_autogen}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: autogen : ${enable_autogen}" >&5
$as_echo "autogen : ${enable_autogen}" >&6; } $as_echo "autogen : ${enable_autogen}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: experimental : ${enable_experimental}" >&5
$as_echo "experimental : ${enable_experimental}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: cc-silence : ${enable_cc_silence}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: cc-silence : ${enable_cc_silence}" >&5
$as_echo "cc-silence : ${enable_cc_silence}" >&6; } $as_echo "cc-silence : ${enable_cc_silence}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: debug : ${enable_debug}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: debug : ${enable_debug}" >&5
...@@ -8948,15 +9708,13 @@ $as_echo "utrace : ${enable_utrace}" >&6; } ...@@ -8948,15 +9708,13 @@ $as_echo "utrace : ${enable_utrace}" >&6; }
$as_echo "valgrind : ${enable_valgrind}" >&6; } $as_echo "valgrind : ${enable_valgrind}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: xmalloc : ${enable_xmalloc}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: xmalloc : ${enable_xmalloc}" >&5
$as_echo "xmalloc : ${enable_xmalloc}" >&6; } $as_echo "xmalloc : ${enable_xmalloc}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: mremap : ${enable_mremap}" >&5
$as_echo "mremap : ${enable_mremap}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: munmap : ${enable_munmap}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: munmap : ${enable_munmap}" >&5
$as_echo "munmap : ${enable_munmap}" >&6; } $as_echo "munmap : ${enable_munmap}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: dss : ${enable_dss}" >&5
$as_echo "dss : ${enable_dss}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: lazy_lock : ${enable_lazy_lock}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: lazy_lock : ${enable_lazy_lock}" >&5
$as_echo "lazy_lock : ${enable_lazy_lock}" >&6; } $as_echo "lazy_lock : ${enable_lazy_lock}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: tls : ${enable_tls}" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: result: tls : ${enable_tls}" >&5
$as_echo "tls : ${enable_tls}" >&6; } $as_echo "tls : ${enable_tls}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: cache-oblivious : ${enable_cache_oblivious}" >&5
$as_echo "cache-oblivious : ${enable_cache_oblivious}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ===============================================================================" >&5
$as_echo "===============================================================================" >&6; }
@@ -43,8 +43,11 @@ AC_CACHE_CHECK([whether $1 is compilable],
dnl ============================================================================
CONFIG=`echo ${ac_configure_args} | sed -e 's#'"'"'\([^ ]*\)'"'"'#\1#g'`
AC_SUBST([CONFIG])
dnl Library revision.
rev=1
rev=2
AC_SUBST([rev])
srcroot=$srcdir
@@ -134,6 +137,7 @@ if test "x$CFLAGS" = "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT])
fi
JE_CFLAGS_APPEND([-Wall])
JE_CFLAGS_APPEND([-Werror=declaration-after-statement])
JE_CFLAGS_APPEND([-pipe])
JE_CFLAGS_APPEND([-g3])
elif test "x$je_cv_msvc" = "xyes" ; then
@@ -141,7 +145,8 @@ if test "x$CFLAGS" = "x" ; then
JE_CFLAGS_APPEND([-Zi])
JE_CFLAGS_APPEND([-MT])
JE_CFLAGS_APPEND([-W3])
CPPFLAGS="$CPPFLAGS -I${srcroot}/include/msvc_compat"
JE_CFLAGS_APPEND([-FS])
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat"
fi
fi
dnl Append EXTRA_CFLAGS to CFLAGS, if defined.
@@ -155,6 +160,10 @@ if test "x${ac_cv_big_endian}" = "x1" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_BIG_ENDIAN], [ ])
fi
if test "x${je_cv_msvc}" = "xyes" -a "x${ac_cv_header_inttypes_h}" = "xno"; then
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99"
fi
AC_CHECK_SIZEOF([void *])
if test "x${ac_cv_sizeof_void_p}" = "x8" ; then
LG_SIZEOF_PTR=3
@@ -201,23 +210,14 @@ AC_CANONICAL_HOST
dnl CPU-specific settings.
CPU_SPINWAIT=""
case "${host_cpu}" in
i[[345]]86)
;;
i686|x86_64)
JE_COMPILABLE([pause instruction], [],
AC_CACHE_VAL([je_cv_pause],
[JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])
[je_cv_pause])])
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
dnl emmintrin.h fails to compile unless MMX, SSE, and SSE2 are
dnl supported.
JE_COMPILABLE([SSE2 intrinsics], [
#include <emmintrin.h>
], [], [je_cv_sse2])
if test "x${je_cv_sse2}" = "xyes" ; then
AC_DEFINE_UNQUOTED([HAVE_SSE2], [ ])
fi
;;
powerpc)
AC_DEFINE_UNQUOTED([HAVE_ALTIVEC], [ ])
@@ -258,9 +258,9 @@ dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the
dnl definitions need to be seen before any headers are included, which is a pain
dnl to make happen otherwise.
default_munmap="1"
JEMALLOC_USABLE_SIZE_CONST="const"
maps_coalesce="1"
case "${host}" in
*-*-darwin*)
*-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS"
abi="macho"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
@@ -269,7 +269,7 @@ case "${host}" in
so="dylib"
importlib="${so}"
force_tls="0"
DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)'
DSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'
SOREV="${rev}.${so}"
sbrk_deprecated="1"
;;
@@ -279,6 +279,22 @@ case "${host}" in
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_lazy_lock="1"
;;
*-*-dragonfly*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;;
*-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_tls="0"
;;
*-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;;
*-*-linux*)
CFLAGS="$CFLAGS"
CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
@@ -286,7 +302,7 @@ case "${host}" in
AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
JEMALLOC_USABLE_SIZE_CONST=""
AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ])
default_munmap="0"
;;
*-*-netbsd*)
@@ -322,9 +338,11 @@ case "${host}" in
fi
abi="xcoff"
;;
*-*-mingw*)
*-*-mingw* | *-*-cygwin*)
abi="pecoff"
force_tls="0"
force_lazy_lock="1"
maps_coalesce="0"
RPATH=""
so="dll"
if test "x$je_cv_msvc" = "xyes" ; then
@@ -351,6 +369,22 @@ case "${host}" in
abi="elf"
;;
esac
JEMALLOC_USABLE_SIZE_CONST=const
AC_CHECK_HEADERS([malloc.h], [
AC_MSG_CHECKING([whether malloc_usable_size definition can use const argument])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include <malloc.h>
#include <stddef.h>
size_t malloc_usable_size(const void *ptr);
],
[])],[
AC_MSG_RESULT([yes])
],[
JEMALLOC_USABLE_SIZE_CONST=
AC_MSG_RESULT([no])
])
])
AC_DEFINE_UNQUOTED([JEMALLOC_USABLE_SIZE_CONST], [$JEMALLOC_USABLE_SIZE_CONST])
AC_SUBST([abi])
AC_SUBST([RPATH])
@@ -387,7 +421,7 @@ SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([tls_model attribute], [],
[static __thread int
__attribute__((tls_model("initial-exec"))) foo;
__attribute__((tls_model("initial-exec"), unused)) foo;
foo = 0;],
[je_cv_tls_model])
CFLAGS="${SAVED_CFLAGS}"
@@ -397,6 +431,36 @@ if test "x${je_cv_tls_model}" = "xyes" ; then
else
AC_DEFINE([JEMALLOC_TLS_MODEL], [ ])
fi
dnl Check for alloc_size attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>],
[void *foo(size_t size) __attribute__((alloc_size(1)));],
[je_cv_alloc_size])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_alloc_size}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_ALLOC_SIZE], [ ])
fi
dnl Check for format(gnu_printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));],
[je_cv_format_gnu_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_gnu_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF], [ ])
fi
dnl Check for format(printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));],
[je_cv_format_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_PRINTF], [ ])
fi
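These attribute checks matter for diagnostics rather than code generation. A hedged illustration with hypothetical declarations (not jemalloc's own prototypes):

    #include <stddef.h>

    /* alloc_size(1): parameter 1 is the size of the returned buffer, so
     * -Wstringop-overflow and __builtin_object_size() can check callers. */
    void *demo_alloc(size_t size) __attribute__((alloc_size(1)));

    /* format(printf, 1, 2): arguments from position 2 onward are checked
     * against the format string, e.g. demo_log("%s", 42) draws -Wformat. */
    void demo_log(const char *format, ...)
        __attribute__((format(printf, 1, 2)));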
dnl Support optional additions to rpath.
AC_ARG_WITH([rpath],
@@ -428,7 +492,7 @@ AC_PROG_RANLIB
AC_PATH_PROG([LD], [ld], [false], [$PATH])
AC_PATH_PROG([AUTOCONF], [autoconf], [false], [$PATH])
public_syms="malloc_conf malloc_message malloc calloc posix_memalign aligned_alloc realloc free mallocx rallocx xallocx sallocx dallocx nallocx mallctl mallctlnametomib mallctlbymib malloc_stats_print malloc_usable_size"
public_syms="malloc_conf malloc_message malloc calloc posix_memalign aligned_alloc realloc free mallocx rallocx xallocx sallocx dallocx sdallocx nallocx mallctl mallctlnametomib mallctlbymib malloc_stats_print malloc_usable_size"
dnl Check for allocator-related functions that should be wrapped.
AC_CHECK_FUNC([memalign],
@@ -438,24 +502,6 @@ AC_CHECK_FUNC([valloc],
[AC_DEFINE([JEMALLOC_OVERRIDE_VALLOC], [ ])
public_syms="${public_syms} valloc"])
dnl Support the experimental API by default.
AC_ARG_ENABLE([experimental],
[AS_HELP_STRING([--disable-experimental],
[Disable support for the experimental API])],
[if test "x$enable_experimental" = "xno" ; then
enable_experimental="0"
else
enable_experimental="1"
fi
],
[enable_experimental="1"]
)
if test "x$enable_experimental" = "x1" ; then
AC_DEFINE([JEMALLOC_EXPERIMENTAL], [ ])
public_syms="${public_syms} allocm dallocm nallocm rallocm sallocm"
fi
AC_SUBST([enable_experimental])
dnl Do not compute test code coverage by default.
GCOV_FLAGS=
AC_ARG_ENABLE([code-coverage],
@@ -501,6 +547,7 @@ if test "x$JEMALLOC_PREFIX" != "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_PREFIX], ["$JEMALLOC_PREFIX"])
AC_DEFINE_UNQUOTED([JEMALLOC_CPREFIX], ["$JEMALLOC_CPREFIX"])
fi
AC_SUBST([JEMALLOC_CPREFIX])
AC_ARG_WITH([export],
[AS_HELP_STRING([--without-export], [disable exporting jemalloc public APIs])],
@@ -533,48 +580,54 @@ dnl jemalloc_protos_jet.h easy.
je_="je_"
AC_SUBST([je_])
cfgoutputs_in="${srcroot}Makefile.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/html.xsl.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/manpages.xsl.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}doc/jemalloc.xml.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/jemalloc_macros.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/jemalloc_protos.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}include/jemalloc/internal/jemalloc_internal.h.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}test/test.sh.in"
cfgoutputs_in="${cfgoutputs_in} ${srcroot}test/include/test/jemalloc_test.h.in"
cfgoutputs_in="Makefile.in"
cfgoutputs_in="${cfgoutputs_in} jemalloc.pc.in"
cfgoutputs_in="${cfgoutputs_in} doc/html.xsl.in"
cfgoutputs_in="${cfgoutputs_in} doc/manpages.xsl.in"
cfgoutputs_in="${cfgoutputs_in} doc/jemalloc.xml.in"
cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_macros.h.in"
cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_protos.h.in"
cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_typedefs.h.in"
cfgoutputs_in="${cfgoutputs_in} include/jemalloc/internal/jemalloc_internal.h.in"
cfgoutputs_in="${cfgoutputs_in} test/test.sh.in"
cfgoutputs_in="${cfgoutputs_in} test/include/test/jemalloc_test.h.in"
cfgoutputs_out="Makefile"
cfgoutputs_out="${cfgoutputs_out} jemalloc.pc"
cfgoutputs_out="${cfgoutputs_out} doc/html.xsl"
cfgoutputs_out="${cfgoutputs_out} doc/manpages.xsl"
cfgoutputs_out="${cfgoutputs_out} doc/jemalloc.xml"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_macros.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_protos.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_typedefs.h"
cfgoutputs_out="${cfgoutputs_out} include/jemalloc/internal/jemalloc_internal.h"
cfgoutputs_out="${cfgoutputs_out} test/test.sh"
cfgoutputs_out="${cfgoutputs_out} test/include/test/jemalloc_test.h"
cfgoutputs_tup="Makefile"
cfgoutputs_tup="${cfgoutputs_tup} jemalloc.pc:jemalloc.pc.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in"
cfgoutputs_tup="${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_typedefs.h:include/jemalloc/jemalloc_typedefs.h.in"
cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/internal/jemalloc_internal.h"
cfgoutputs_tup="${cfgoutputs_tup} test/test.sh:test/test.sh.in"
cfgoutputs_tup="${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in"
cfghdrs_in="${srcroot}include/jemalloc/jemalloc_defs.h.in"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_namespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/private_symbols.txt"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/public_namespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/public_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/internal/size_classes.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc_rename.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc_mangle.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}include/jemalloc/jemalloc.sh"
cfghdrs_in="${cfghdrs_in} ${srcroot}test/include/test/jemalloc_test_defs.h.in"
cfghdrs_in="include/jemalloc/jemalloc_defs.h.in"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_namespace.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_symbols.txt"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_namespace.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_unnamespace.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/size_classes.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_rename.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_mangle.sh"
cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc.sh"
cfghdrs_in="${cfghdrs_in} test/include/test/jemalloc_test_defs.h.in"
cfghdrs_out="include/jemalloc/jemalloc_defs.h"
cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h"
@@ -592,21 +645,20 @@ cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h"
cfghdrs_out="${cfghdrs_out} test/include/test/jemalloc_test_defs.h"
cfghdrs_tup="include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:${srcroot}include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:include/jemalloc/internal/jemalloc_internal_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:${srcroot}test/include/test/jemalloc_test_defs.h.in"
cfghdrs_tup="${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:test/include/test/jemalloc_test_defs.h.in"
dnl Do not silence irrelevant compiler warnings by default, since enabling this
dnl option incurs a performance penalty.
dnl Silence irrelevant compiler warnings by default.
AC_ARG_ENABLE([cc-silence],
[AS_HELP_STRING([--enable-cc-silence],
[Silence irrelevant compiler warnings])],
[AS_HELP_STRING([--disable-cc-silence],
[Do not silence irrelevant compiler warnings])],
[if test "x$enable_cc_silence" = "xno" ; then
enable_cc_silence="0"
else
enable_cc_silence="1"
fi
],
[enable_cc_silence="0"]
[enable_cc_silence="1"]
)
if test "x$enable_cc_silence" = "x1" ; then
AC_DEFINE([JEMALLOC_CC_SILENCE], [ ])
@@ -614,7 +666,8 @@ fi
dnl Do not compile with debugging by default.
AC_ARG_ENABLE([debug],
[AS_HELP_STRING([--enable-debug], [Build debugging code (implies --enable-ivsalloc)])],
[AS_HELP_STRING([--enable-debug],
[Build debugging code (implies --enable-ivsalloc)])],
[if test "x$enable_debug" = "xno" ; then
enable_debug="0"
else
@@ -623,6 +676,9 @@ fi
],
[enable_debug="0"]
)
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
fi
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
enable_ivsalloc="1"
@@ -631,7 +687,8 @@ AC_SUBST([enable_debug])
dnl Do not validate pointers by default.
AC_ARG_ENABLE([ivsalloc],
[AS_HELP_STRING([--enable-ivsalloc], [Validate pointers passed through the public API])],
[AS_HELP_STRING([--enable-ivsalloc],
[Validate pointers passed through the public API])],
[if test "x$enable_ivsalloc" = "xno" ; then
enable_ivsalloc="0"
else
@@ -721,7 +778,7 @@ fi,
if test "x$backtrace_method" = "x" -a "x$enable_prof_libunwind" = "x1" ; then
AC_CHECK_HEADERS([libunwind.h], , [enable_prof_libunwind="0"])
if test "x$LUNWIND" = "x-lunwind" ; then
AC_CHECK_LIB([unwind], [backtrace], [LIBS="$LIBS $LUNWIND"],
AC_CHECK_LIB([unwind], [unw_backtrace], [LIBS="$LIBS $LUNWIND"],
[enable_prof_libunwind="0"])
else
LIBS="$LIBS $LUNWIND"
@@ -782,11 +839,6 @@ fi
AC_MSG_CHECKING([configured backtracing method])
AC_MSG_RESULT([$backtrace_method])
if test "x$enable_prof" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
AC_MSG_ERROR([Heap profiling requires TLS]);
fi
force_tls="1"
if test "x$abi" != "xpecoff"; then if test "x$abi" != "xpecoff"; then
dnl Heap profiling uses the log(3) function. dnl Heap profiling uses the log(3) function.
LIBS="$LIBS -lm" LIBS="$LIBS -lm"
...@@ -812,32 +864,11 @@ if test "x$enable_tcache" = "x1" ; then ...@@ -812,32 +864,11 @@ if test "x$enable_tcache" = "x1" ; then
fi fi
AC_SUBST([enable_tcache]) AC_SUBST([enable_tcache])
dnl Disable mremap() for huge realloc() by default.
AC_ARG_ENABLE([mremap],
[AS_HELP_STRING([--enable-mremap], [Enable mremap(2) for huge realloc()])],
[if test "x$enable_mremap" = "xno" ; then
dnl Indicate whether adjacent virtual memory mappings automatically coalesce
dnl (and fragment on demand).
if test "x${maps_coalesce}" = "x1" ; then
AC_DEFINE([JEMALLOC_MAPS_COALESCE], [ ])
enable_mremap="0"
else
enable_mremap="1"
fi
],
[enable_mremap="0"]
)
if test "x$enable_mremap" = "x1" ; then
JE_COMPILABLE([mremap(...MREMAP_FIXED...)], [
#define _GNU_SOURCE
#include <sys/mman.h>
], [
void *p = mremap((void *)0, 0, 0, MREMAP_MAYMOVE|MREMAP_FIXED, (void *)0);
], [je_cv_mremap_fixed])
if test "x${je_cv_mremap_fixed}" = "xno" ; then
enable_mremap="0"
fi
fi
if test "x$enable_mremap" = "x1" ; then
AC_DEFINE([JEMALLOC_MREMAP], [ ])
fi
AC_SUBST([enable_mremap])
dnl Enable VM deallocation via munmap() by default.
AC_ARG_ENABLE([munmap],
@@ -855,34 +886,22 @@ if test "x$enable_munmap" = "x1" ; then
fi
AC_SUBST([enable_munmap])
dnl Do not enable allocation from DSS by default.
AC_ARG_ENABLE([dss],
dnl Enable allocation from DSS if supported by the OS.
have_dss="1"
[AS_HELP_STRING([--enable-dss], [Enable allocation from DSS])],
[if test "x$enable_dss" = "xno" ; then
enable_dss="0"
else
enable_dss="1"
fi
],
[enable_dss="0"]
)
dnl Check whether the BSD/SUSv1 sbrk() exists. If not, disable DSS support.
AC_CHECK_FUNC([sbrk], [have_sbrk="1"], [have_sbrk="0"])
if test "x$have_sbrk" = "x1" ; then
if test "x$sbrk_deprecated" == "x1" ; then
if test "x$sbrk_deprecated" = "x1" ; then
AC_MSG_RESULT([Disabling dss allocation because sbrk is deprecated])
enable_dss="0"
have_dss="0"
else
AC_DEFINE([JEMALLOC_HAVE_SBRK], [ ])
fi
else
enable_dss="0"
have_dss="0"
fi
if test "x$enable_dss" = "x1" ; then
if test "x$have_dss" = "x1" ; then
AC_DEFINE([JEMALLOC_DSS], [ ])
fi
AC_SUBST([enable_dss])
dnl Support the junk/zero filling option by default.
AC_ARG_ENABLE([fill],
@@ -974,8 +993,83 @@ if test "x$enable_xmalloc" = "x1" ; then
fi
AC_SUBST([enable_xmalloc])
AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
[je_cv_static_page_shift],
dnl Support cache-oblivious allocation alignment by default.
AC_ARG_ENABLE([cache-oblivious],
[AS_HELP_STRING([--disable-cache-oblivious],
[Disable support for cache-oblivious allocation alignment])],
[if test "x$enable_cache_oblivious" = "xno" ; then
enable_cache_oblivious="0"
else
enable_cache_oblivious="1"
fi
],
[enable_cache_oblivious="1"]
)
if test "x$enable_cache_oblivious" = "x1" ; then
AC_DEFINE([JEMALLOC_CACHE_OBLIVIOUS], [ ])
fi
AC_SUBST([enable_cache_oblivious])
dnl ============================================================================
dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found.
dnl One of those two functions should (theoretically) exist on all platforms
dnl that jemalloc currently has a chance of functioning on without modification.
dnl We additionally assume ffs() or __builtin_ffs() are defined if
dnl ffsl() or __builtin_ffsl() are defined, respectively.
JE_COMPILABLE([a program using __builtin_ffsl], [
#include <stdio.h>
#include <strings.h>
#include <string.h>
], [
{
int rv = __builtin_ffsl(0x08);
printf("%d\n", rv);
}
], [je_cv_gcc_builtin_ffsl])
if test "x${je_cv_gcc_builtin_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs])
else
JE_COMPILABLE([a program using ffsl], [
#include <stdio.h>
#include <strings.h>
#include <string.h>
], [
{
int rv = ffsl(0x08);
printf("%d\n", rv);
}
], [je_cv_function_ffsl])
if test "x${je_cv_function_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs])
else
AC_MSG_ERROR([Cannot build without ffsl(3) or __builtin_ffsl()])
fi
fi
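JEMALLOC_INTERNAL_FFSL therefore resolves to either __builtin_ffsl() or ffsl(3); both return the one-based index of the least significant set bit. A small standalone example of the idiom that the LG_PAGE probe below relies on:

    #include <stdio.h>
    #include <strings.h> /* ffsl(3) */

    int main(void) {
        long page = 4096;
        /* For a power of two, the index of the lowest set bit minus one
         * is the base-2 log, so a 4 KiB page yields 12. */
        printf("lg(page) = %d\n", ffsl(page) - 1);
        return 0;
    }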
AC_ARG_WITH([lg_tiny_min],
[AS_HELP_STRING([--with-lg-tiny-min=<lg-tiny-min>],
[Base 2 log of minimum tiny size class to support])],
[LG_TINY_MIN="$with_lg_tiny_min"],
[LG_TINY_MIN="3"])
AC_DEFINE_UNQUOTED([LG_TINY_MIN], [$LG_TINY_MIN])
AC_ARG_WITH([lg_quantum],
[AS_HELP_STRING([--with-lg-quantum=<lg-quantum>],
[Base 2 log of minimum allocation alignment])],
[LG_QUANTA="$with_lg_quantum"],
[LG_QUANTA="3 4"])
if test "x$with_lg_quantum" != "x" ; then
AC_DEFINE_UNQUOTED([LG_QUANTUM], [$with_lg_quantum])
fi
AC_ARG_WITH([lg_page],
[AS_HELP_STRING([--with-lg-page=<lg-page>], [Base 2 log of system page size])],
[LG_PAGE="$with_lg_page"], [LG_PAGE="detect"])
if test "x$LG_PAGE" = "xdetect"; then
AC_CACHE_CHECK([LG_PAGE],
[je_cv_lg_page],
AC_RUN_IFELSE([AC_LANG_PROGRAM(
[[
#include <strings.h>
@@ -1000,7 +1094,7 @@ AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
if (result == -1) {
return 1;
}
result = ffsl(result) - 1;
result = JEMALLOC_INTERNAL_FFSL(result) - 1;
f = fopen("conftest.out", "w");
if (f == NULL) {
@@ -1011,24 +1105,65 @@ AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
return 0;
]])],
[je_cv_static_page_shift=`cat conftest.out`],
[je_cv_static_page_shift=undefined]))
[je_cv_lg_page=`cat conftest.out`],
[je_cv_lg_page=undefined],
[je_cv_lg_page=12]))
if test "x$je_cv_static_page_shift" != "xundefined"; then
AC_DEFINE_UNQUOTED([STATIC_PAGE_SHIFT], [$je_cv_static_page_shift])
fi
if test "x${je_cv_lg_page}" != "x" ; then
LG_PAGE="${je_cv_lg_page}"
fi
if test "x${LG_PAGE}" != "xundefined" ; then
AC_DEFINE_UNQUOTED([LG_PAGE], [$LG_PAGE])
else
AC_MSG_ERROR([cannot determine value for STATIC_PAGE_SHIFT])
AC_MSG_ERROR([cannot determine value for LG_PAGE])
fi
AC_ARG_WITH([lg_page_sizes],
[AS_HELP_STRING([--with-lg-page-sizes=<lg-page-sizes>],
[Base 2 logs of system page sizes to support])],
[LG_PAGE_SIZES="$with_lg_page_sizes"], [LG_PAGE_SIZES="$LG_PAGE"])
AC_ARG_WITH([lg_size_class_group],
[AS_HELP_STRING([--with-lg-size-class-group=<lg-size-class-group>],
[Base 2 log of size classes per doubling])],
[LG_SIZE_CLASS_GROUP="$with_lg_size_class_group"],
[LG_SIZE_CLASS_GROUP="2"])
dnl ============================================================================
dnl jemalloc configuration.
dnl
dnl Set VERSION if source directory has an embedded git repository.
if test -d "${srcroot}.git" ; then
git describe --long --abbrev=40 > ${srcroot}VERSION
dnl Set VERSION if source directory is inside a git repository.
if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
dnl Pattern globs aren't powerful enough to match both single- and
dnl double-digit version numbers, so iterate over patterns to support up to
dnl version 99.99.99 without any accidental matches.
rm -f "${objroot}VERSION"
for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi
done
fi
rm -f "${objroot}VERSION.tmp"
if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT(
[Missing VERSION file, and unable to generate it; creating bogus VERSION])
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${objroot}VERSION"
else
cp ${srcroot}VERSION ${objroot}VERSION
fi
fi
jemalloc_version=`cat ${srcroot}VERSION`
jemalloc_version=`cat "${objroot}VERSION"`
jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]1}'`
jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]2}'`
jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]3}'`
@@ -1055,6 +1190,32 @@ fi
CPPFLAGS="$CPPFLAGS -D_REENTRANT"
dnl Check whether clock_gettime(2) is in libc or librt. This function is only
dnl used in test code, so save the result to TESTLIBS to avoid polluting LIBS.
SAVED_LIBS="${LIBS}"
LIBS=
AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
AC_SUBST([TESTLIBS])
LIBS="${SAVED_LIBS}"
dnl Check if the GNU-specific secure_getenv function exists.
AC_CHECK_FUNC([secure_getenv],
[have_secure_getenv="1"],
[have_secure_getenv="0"]
)
if test "x$have_secure_getenv" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_SECURE_GETENV], [ ])
fi
dnl Check if the Solaris/BSD issetugid function exists.
AC_CHECK_FUNC([issetugid],
[have_issetugid="1"],
[have_issetugid="0"]
)
if test "x$have_issetugid" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_ISSETUGID], [ ])
fi
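A sketch of why an allocator cares about these two functions (the helper below is hypothetical; jemalloc's real option parsing is more involved): secure_getenv(3) returns NULL in setuid/setgid processes, so an untrusted environment cannot steer allocator behavior.

    #define _GNU_SOURCE /* exposes secure_getenv() on glibc */
    #include <stdlib.h>

    const char *get_malloc_conf(void) {
    #ifdef JEMALLOC_HAVE_SECURE_GETENV
        return secure_getenv("MALLOC_CONF"); /* NULL when privileges changed */
    #else
        return getenv("MALLOC_CONF");        /* fallback on other platforms */
    #endif
    }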
dnl Check whether the BSD-specific _malloc_thread_cleanup() exists. If so, use
dnl it rather than pthreads TSD cleanup functions to support cleanup during
dnl thread exit, in order to avoid pthreads library recursion during
@@ -1089,9 +1250,9 @@ else
enable_lazy_lock="1"
fi
],
[enable_lazy_lock="0"]
[enable_lazy_lock=""]
)
if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then
if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues])
enable_lazy_lock="1"
fi
@@ -1104,6 +1265,8 @@ if test "x$enable_lazy_lock" = "x1" ; then
])
fi
AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ])
else
enable_lazy_lock="0"
fi
AC_SUBST([enable_lazy_lock])
@@ -1115,15 +1278,18 @@ else
enable_tls="1"
fi
,
enable_tls="1"
enable_tls=""
)
if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then
if test "x${enable_tls}" = "x" ; then
if test "x${force_tls}" = "x1" ; then
AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues])
enable_tls="1"
fi
elif test "x${force_tls}" = "x0" ; then
if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then
AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues])
enable_tls="0"
else
enable_tls="1"
fi
fi
if test "x${enable_tls}" = "x1" ; then if test "x${enable_tls}" = "x1" ; then
AC_MSG_CHECKING([for TLS]) AC_MSG_CHECKING([for TLS])
...@@ -1138,30 +1304,38 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM( ...@@ -1138,30 +1304,38 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
AC_MSG_RESULT([yes]), AC_MSG_RESULT([yes]),
AC_MSG_RESULT([no]) AC_MSG_RESULT([no])
enable_tls="0") enable_tls="0")
else
enable_tls="0"
fi
AC_SUBST([enable_tls])
if test "x${enable_tls}" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
AC_MSG_WARN([TLS enabled despite being marked unusable on this platform])
fi
AC_DEFINE_UNQUOTED([JEMALLOC_TLS], [ ])
elif test "x${force_tls}" = "x1" ; then
AC_MSG_ERROR([Failed to configure TLS, which is mandatory for correct function])
AC_MSG_WARN([TLS disabled despite being marked critical on this platform])
fi
dnl ============================================================================
dnl Check for ffsl(3), and fail if not found. This function exists on all
dnl platforms that jemalloc currently has a chance of functioning on without
dnl modification.
JE_COMPILABLE([a program using ffsl], [
#include <stdio.h>
#include <strings.h>
#include <string.h>
], [
{
int rv = ffsl(0x08);
printf("%d\n", rv);
}
], [je_cv_function_ffsl])
if test "x${je_cv_function_ffsl}" != "xyes" ; then
AC_MSG_ERROR([Cannot build without ffsl(3)])
dnl Check for C11 atomics.
JE_COMPILABLE([C11 atomics], [
#include <stdint.h>
#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#else
#error Atomics not available
#endif
], [
uint64_t *p = (uint64_t *)0;
uint64_t x = 1;
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
uint64_t r = atomic_fetch_add(a, x) + x;
return (r == 0);
], [je_cv_c11atomics])
if test "x${je_cv_c11atomics}" = "xyes" ; then
AC_DEFINE([JEMALLOC_C11ATOMICS])
fi
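A standalone version of the new C11 atomics probe, for reference (assumes a C11 toolchain that does not define __STDC_NO_ATOMICS__): atomic_fetch_add() returns the value held before the addition.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        atomic_uint_least64_t counter = 0;
        uint64_t prev = atomic_fetch_add(&counter, 1); /* prev == 0 */
        uint64_t now = atomic_load(&counter);          /* now == 1 */
        printf("prev=%llu now=%llu\n", (unsigned long long)prev,
            (unsigned long long)now);
        return 0;
    }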
dnl ============================================================================
@@ -1209,6 +1383,20 @@ if test "x${je_cv_osatomic}" = "xyes" ; then
AC_DEFINE([JEMALLOC_OSATOMIC], [ ])
fi
dnl ============================================================================
dnl Check for madvise(2).
JE_COMPILABLE([madvise(2)], [
#include <sys/mman.h>
], [
{
madvise((void *)0, 0, 0);
}
], [je_cv_madvise])
if test "x${je_cv_madvise}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_MADVISE], [ ])
fi
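When JEMALLOC_HAVE_MADVISE is defined, dirty pages can be purged without tearing down the mapping. A hedged sketch of the purge idiom (flag availability varies by OS):

    #include <stddef.h>
    #include <sys/mman.h>

    static void purge(void *addr, size_t length) {
    #ifdef MADV_FREE
        madvise(addr, length, MADV_FREE);     /* lazy reuse, BSD/Darwin */
    #else
        madvise(addr, length, MADV_DONTNEED); /* immediate reclaim, Linux */
    #endif
    }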
dnl ============================================================================
dnl Check whether __sync_{add,sub}_and_fetch() are available despite
dnl __GCC_HAVE_SYNC_COMPARE_AND_SWAP_n macros being undefined.
@@ -1243,6 +1431,29 @@ if test "x${je_cv_atomic9}" != "xyes" -a "x${je_cv_osatomic}" != "xyes" ; then
JE_SYNC_COMPARE_AND_SWAP_CHECK(64, 8)
fi
dnl ============================================================================
dnl Check for __builtin_clz() and __builtin_clzl().
AC_CACHE_CHECK([for __builtin_clz],
[je_cv_builtin_clz],
[AC_LINK_IFELSE([AC_LANG_PROGRAM([],
[
{
unsigned x = 0;
int y = __builtin_clz(x);
}
{
unsigned long x = 0;
int y = __builtin_clzl(x);
}
])],
[je_cv_builtin_clz=yes],
[je_cv_builtin_clz=no])])
if test "x${je_cv_builtin_clz}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_BUILTIN_CLZ], [ ])
fi
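__builtin_clz() and __builtin_clzl() count leading zero bits, which yields floor(lg(x)) in constant time; an allocator can use this to map a request size to a size class. A sketch (the guard matters because the builtins are undefined for zero):

    static unsigned lg_floor(unsigned long x) {
        if (x == 0)
            return 0; /* __builtin_clzl(0) is undefined */
        return (unsigned)(sizeof(unsigned long) * 8 - 1) -
            (unsigned)__builtin_clzl(x);
    }
    /* lg_floor(4096) == 12, lg_floor(4097) == 12, lg_floor(8192) == 13 */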
dnl ============================================================================
dnl Check for spinlock(3) operations as provided on Darwin.
@@ -1281,7 +1492,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then
if test "x${abi}" != "xmacho"; then
AC_MSG_ERROR([--enable-zone-allocator is only supported on Darwin])
fi
AC_DEFINE([JEMALLOC_IVSALLOC], [ ])
AC_DEFINE([JEMALLOC_ZONE], [ ])
dnl The szone version jumped from 3 to 6 between the OS X 10.5.x and 10.6
@@ -1291,7 +1501,7 @@ if test "x${enable_zone_allocator}" = "x1" ; then
AC_DEFUN([JE_ZONE_PROGRAM],
[AC_LANG_PROGRAM(
[#include <malloc/malloc.h>],
[static foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
[static int foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
)])
AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,14)],[JEMALLOC_ZONE_VERSION=3],[
@@ -1316,6 +1526,49 @@ if test "x${enable_zone_allocator}" = "x1" ; then
AC_DEFINE_UNQUOTED(JEMALLOC_ZONE_VERSION, [$JEMALLOC_ZONE_VERSION])
fi
dnl ============================================================================
dnl Check for glibc malloc hooks
JE_COMPILABLE([glibc malloc hook], [
#include <stddef.h>
extern void (* __free_hook)(void *ptr);
extern void *(* __malloc_hook)(size_t size);
extern void *(* __realloc_hook)(void *ptr, size_t size);
], [
void *ptr = 0L;
if (__malloc_hook) ptr = __malloc_hook(1);
if (__realloc_hook) ptr = __realloc_hook(ptr, 2);
if (__free_hook && ptr) __free_hook(ptr);
], [je_cv_glibc_malloc_hook])
if test "x${je_cv_glibc_malloc_hook}" = "xyes" ; then
AC_DEFINE([JEMALLOC_GLIBC_MALLOC_HOOK], [ ])
fi
JE_COMPILABLE([glibc memalign hook], [
#include <stddef.h>
extern void *(* __memalign_hook)(size_t alignment, size_t size);
], [
void *ptr = 0L;
if (__memalign_hook) ptr = __memalign_hook(16, 7);
], [je_cv_glibc_memalign_hook])
if test "x${je_cv_glibc_memalign_hook}" = "xyes" ; then
AC_DEFINE([JEMALLOC_GLIBC_MEMALIGN_HOOK], [ ])
fi
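The two hook probes above detect glibc's long-deprecated allocation hooks; when available, pointing them at jemalloc catches allocations made before symbol interposition completes. An illustrative sketch with hypothetical replacement functions, mirroring the probe's simplified declarations:

    #include <stddef.h>

    extern void *(*__malloc_hook)(size_t size);
    extern void (*__free_hook)(void *ptr);

    static void *traced_malloc(size_t size); /* hypothetical */
    static void traced_free(void *ptr);      /* hypothetical */

    static void install_hooks(void) {
        __malloc_hook = traced_malloc; /* glibc now routes malloc() here */
        __free_hook = traced_free;
    }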
JE_COMPILABLE([pthreads adaptive mutexes], [
#include <pthread.h>
], [
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
pthread_mutexattr_destroy(&attr);
], [je_cv_pthread_mutex_adaptive_np])
if test "x${je_cv_pthread_mutex_adaptive_np}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP], [ ])
fi
dnl ============================================================================
dnl Check for typedefs, structures, and compiler characteristics.
AC_HEADER_STDBOOL
@@ -1376,10 +1629,15 @@ AC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [
])
AC_CONFIG_COMMANDS([include/jemalloc/internal/size_classes.h], [
mkdir -p "${objroot}include/jemalloc/internal"
"${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h"
"${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
], [
SHELL="${SHELL}"
srcdir="${srcdir}"
objroot="${objroot}"
LG_QUANTA="${LG_QUANTA}"
LG_TINY_MIN=${LG_TINY_MIN}
LG_PAGE_SIZES="${LG_PAGE_SIZES}"
LG_SIZE_CLASS_GROUP=${LG_SIZE_CLASS_GROUP}
])
AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_protos_jet.h], [
mkdir -p "${objroot}include/jemalloc"
@@ -1426,7 +1684,7 @@ AC_CONFIG_HEADERS([$cfghdrs_tup])
dnl ============================================================================
dnl Generate outputs.
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc.sh])
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof])
AC_SUBST([cfgoutputs_in])
AC_SUBST([cfgoutputs_out])
AC_OUTPUT
@@ -1437,12 +1695,14 @@ AC_MSG_RESULT([=================================================================
AC_MSG_RESULT([jemalloc version : ${jemalloc_version}])
AC_MSG_RESULT([library revision : ${rev}])
AC_MSG_RESULT([])
AC_MSG_RESULT([CONFIG : ${CONFIG}])
AC_MSG_RESULT([CC : ${CC}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([CFLAGS : ${CFLAGS}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
AC_MSG_RESULT([LIBS : ${LIBS}])
AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
AC_MSG_RESULT([])
AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
@@ -1450,9 +1710,9 @@ AC_MSG_RESULT([XSLROOT : ${XSLROOT}])
AC_MSG_RESULT([])
AC_MSG_RESULT([PREFIX : ${PREFIX}])
AC_MSG_RESULT([BINDIR : ${BINDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([INCLUDEDIR : ${INCLUDEDIR}])
AC_MSG_RESULT([LIBDIR : ${LIBDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([MANDIR : ${MANDIR}])
AC_MSG_RESULT([])
AC_MSG_RESULT([srcroot : ${srcroot}])
@@ -1465,7 +1725,6 @@ AC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE])
AC_MSG_RESULT([ : ${JEMALLOC_PRIVATE_NAMESPACE}])
AC_MSG_RESULT([install_suffix : ${install_suffix}])
AC_MSG_RESULT([autogen : ${enable_autogen}])
AC_MSG_RESULT([experimental : ${enable_experimental}])
AC_MSG_RESULT([cc-silence : ${enable_cc_silence}])
AC_MSG_RESULT([debug : ${enable_debug}])
AC_MSG_RESULT([code-coverage : ${enable_code_coverage}])
@@ -1479,9 +1738,8 @@ AC_MSG_RESULT([fill : ${enable_fill}])
AC_MSG_RESULT([utrace : ${enable_utrace}])
AC_MSG_RESULT([valgrind : ${enable_valgrind}])
AC_MSG_RESULT([xmalloc : ${enable_xmalloc}])
AC_MSG_RESULT([mremap : ${enable_mremap}])
AC_MSG_RESULT([munmap : ${enable_munmap}])
AC_MSG_RESULT([dss : ${enable_dss}])
AC_MSG_RESULT([lazy_lock : ${enable_lazy_lock}])
AC_MSG_RESULT([tls : ${enable_tls}])
AC_MSG_RESULT([cache-oblivious : ${enable_cache_oblivious}])
AC_MSG_RESULT([===============================================================================])
@@ -2,12 +2,12 @@
.\" Title: JEMALLOC
.\" Author: Jason Evans
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 03/31/2014
.\" Date: 09/24/2015
.\" Manual: User Manual
.\" Source: jemalloc 3.6.0-0-g46c0af68bd248b04df75e4f92d5fb804c3d75340
.\" Source: jemalloc 4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c
.\" Language: English
.\"
.TH "JEMALLOC" "3" "03/31/2014" "jemalloc 3.6.0-0-g46c0af68bd24" "User Manual"
.TH "JEMALLOC" "3" "09/24/2015" "jemalloc 4.0.3-0-ge9192eacf893" "User Manual"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@@ -31,13 +31,12 @@
jemalloc \- general purpose memory allocation functions
.SH "LIBRARY"
.PP
This manual describes jemalloc 3\&.6\&.0\-0\-g46c0af68bd248b04df75e4f92d5fb804c3d75340\&. More information can be found at the
This manual describes jemalloc 4\&.0\&.3\-0\-ge9192eacf8935e29fc62fddc2701f7942b1cc02c\&. More information can be found at the
\m[blue]\fBjemalloc website\fR\m[]\&\s-2\u[1]\d\s+2\&.
.SH "SYNOPSIS"
.sp
.ft B
.nf
#include <stdlib\&.h>
#include <jemalloc/jemalloc\&.h>
.fi
.ft
@@ -65,6 +64,8 @@ This manual describes jemalloc 3\&.6\&.0\-0\-g46c0af68bd248b04df75e4f92d5fb804c3
.BI "size_t sallocx(void\ *" "ptr" ", int\ " "flags" ");"
.HP \w'void\ dallocx('u
.BI "void dallocx(void\ *" "ptr" ", int\ " "flags" ");"
.HP \w'void\ sdallocx('u
.BI "void sdallocx(void\ *" "ptr" ", size_t\ " "size" ", int\ " "flags" ");"
.HP \w'size_t\ nallocx('u
.BI "size_t nallocx(size_t\ " "size" ", int\ " "flags" ");"
.HP \w'int\ mallctl('u
@@ -81,17 +82,6 @@ This manual describes jemalloc 3\&.6\&.0\-0\-g46c0af68bd248b04df75e4f92d5fb804c3
.BI "void (*malloc_message)(void\ *" "cbopaque" ", const\ char\ *" "s" ");"
.PP
const char *\fImalloc_conf\fR;
.SS "Experimental API"
.HP \w'int\ allocm('u
.BI "int allocm(void\ **" "ptr" ", size_t\ *" "rsize" ", size_t\ " "size" ", int\ " "flags" ");"
.HP \w'int\ rallocm('u
.BI "int rallocm(void\ **" "ptr" ", size_t\ *" "rsize" ", size_t\ " "size" ", size_t\ " "extra" ", int\ " "flags" ");"
.HP \w'int\ sallocm('u
.BI "int sallocm(const\ void\ *" "ptr" ", size_t\ *" "rsize" ", int\ " "flags" ");"
.HP \w'int\ dallocm('u
.BI "int dallocm(void\ *" "ptr" ", int\ " "flags" ");"
.HP \w'int\ nallocm('u
.BI "int nallocm(size_t\ *" "rsize" ", size_t\ " "size" ", int\ " "flags" ");"
.SH "DESCRIPTION" .SH "DESCRIPTION"
.SS "Standard API" .SS "Standard API"
.PP .PP
...@@ -118,7 +108,7 @@ The ...@@ -118,7 +108,7 @@ The
\fBposix_memalign\fR\fB\fR \fBposix_memalign\fR\fB\fR
function allocates function allocates
\fIsize\fR \fIsize\fR
bytes of memory such that the allocation\*(Aqs base address is an even multiple of bytes of memory such that the allocation\*(Aqs base address is a multiple of
\fIalignment\fR, and returns the allocation in the value pointed to by \fIalignment\fR, and returns the allocation in the value pointed to by
\fIptr\fR\&. The requested \fIptr\fR\&. The requested
\fIalignment\fR \fIalignment\fR
...@@ -129,7 +119,7 @@ The ...@@ -129,7 +119,7 @@ The
\fBaligned_alloc\fR\fB\fR \fBaligned_alloc\fR\fB\fR
function allocates function allocates
\fIsize\fR \fIsize\fR
bytes of memory such that the allocation\*(Aqs base address is an even multiple of bytes of memory such that the allocation\*(Aqs base address is a multiple of
\fIalignment\fR\&. The requested \fIalignment\fR\&. The requested
\fIalignment\fR \fIalignment\fR
must be a power of 2\&. Behavior is undefined if must be a power of 2\&. Behavior is undefined if
@@ -172,7 +162,8 @@ The
\fBrallocx\fR\fB\fR,
\fBxallocx\fR\fB\fR,
\fBsallocx\fR\fB\fR,
\fBdallocx\fR\fB\fR, and
\fBdallocx\fR\fB\fR,
\fBsdallocx\fR\fB\fR, and
\fBnallocx\fR\fB\fR
functions all have a
\fIflags\fR
@@ -201,11 +192,32 @@ is a power of 2\&.
Initialize newly allocated memory to contain zero bytes\&. In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes\&. If this macro is absent, newly allocated memory is uninitialized\&.
.RE
.PP
\fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB) \fR
.RS 4
Use the thread\-specific cache (tcache) specified by the identifier
\fItc\fR, which must have been acquired via the
"tcache\&.create"
mallctl\&. This macro does not validate that
\fItc\fR
specifies a valid identifier\&.
.RE
.PP
\fBMALLOCX_TCACHE_NONE\fR
.RS 4
Do not use a thread\-specific cache (tcache)\&. Unless
\fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR
or
\fBMALLOCX_TCACHE_NONE\fR
is specified, an automatically managed tcache will be used under many circumstances\&. This macro cannot be used in the same
\fIflags\fR
argument as
\fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR\&.
.RE
.PP
\fBMALLOCX_ARENA(\fR\fB\fIa\fR\fR\fB) \fR
.RS 4
Use the arena specified by the index
\fIa\fR
(and by necessity bypass the thread cache)\&. This macro has no effect for huge regions, nor for regions that were allocated via an arena other than the one specified\&. This macro does not validate that
\fIa\fR\&. This macro has no effect for regions that were allocated via an arena other than the one specified\&. This macro does not validate that
\fIa\fR
specifies an arena index in the valid range\&.
.RE
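A minimal sketch tying the flags above together (assumes jemalloc 4.x headers; error handling elided):

    #include <jemalloc/jemalloc.h>

    void flags_demo(void) {
        /* 64-byte aligned, zero-filled, served from arena 0, and
         * bypassing any thread-specific cache. */
        void *p = mallocx(256, MALLOCX_ALIGN(64) | MALLOCX_ZERO |
            MALLOCX_TCACHE_NONE | MALLOCX_ARENA(0));
        if (p != NULL)
            dallocx(p, MALLOCX_TCACHE_NONE);
    }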
@@ -258,6 +270,17 @@ function causes the memory referenced by
to be made available for future allocations\&.
.PP
The
\fBsdallocx\fR\fB\fR
function is an extension of
\fBdallocx\fR\fB\fR
with a
\fIsize\fR
parameter to allow the caller to pass in the allocation size as an optimization\&. The minimum valid input size is the original requested size of the allocation, and the maximum valid input size is the corresponding value returned by
\fBnallocx\fR\fB\fR
or
\fBsallocx\fR\fB\fR\&.
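A short usage sketch of the sized-deallocation path described above (assumes jemalloc 4.x):

    #include <jemalloc/jemalloc.h>

    void sized_free_demo(void) {
        size_t size = 100;
        void *p = mallocx(size, 0);
        if (p != NULL) {
            /* Any size in [100, sallocx(p, 0)] is valid here; supplying
             * it lets the free path skip its size lookup. */
            sdallocx(p, size, 0);
        }
    }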
.PP
The
\fBnallocx\fR\fB\fR
function allocates no memory, but it performs the same size computation as the
\fBmallocx\fR\fB\fR
@@ -351,7 +374,7 @@ uses the
\fBmallctl*\fR\fB\fR
functions internally, so inconsistent statistics can be reported if multiple threads use these functions simultaneously\&. If
\fB\-\-enable\-stats\fR
is specified during configuration, \(lqm\(rq and \(lqa\(rq can be specified to omit merged arena and per arena statistics, respectively; \(lqb\(rq and \(lql\(rq can be specified to omit per size class statistics for bins and large objects, respectively\&. Unrecognized characters are silently ignored\&. Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations\&.
is specified during configuration, \(lqm\(rq and \(lqa\(rq can be specified to omit merged arena and per arena statistics, respectively; \(lqb\(rq, \(lql\(rq, and \(lqh\(rq can be specified to omit per size class statistics for bins, large objects, and huge objects, respectively\&. Unrecognized characters are silently ignored\&. Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations\&.
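For example, a call that trims the report per the option characters just described (a sketch; output goes to stderr when the write callback is NULL):

    #include <jemalloc/jemalloc.h>

    void stats_demo(void) {
        /* "m" omits merged-arena statistics; "b" and "l" omit the
         * per-size-class bin and large-object tables. */
        malloc_stats_print(NULL, NULL, "mbl");
    }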
.PP
The
\fBmalloc_usable_size\fR\fB\fR
@@ -362,126 +385,6 @@ function is not a mechanism for in\-place
\fBrealloc\fR\fB\fR; rather it is provided solely as a tool for introspection purposes\&. Any discrepancy between the requested allocation size and the size reported by
\fBmalloc_usable_size\fR\fB\fR
should not be depended on, since such behavior is entirely implementation\-dependent\&.
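A small illustration of the introspection-only contract (the sizes in the comment are plausible size-class values, not guarantees):

    #include <jemalloc/jemalloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        void *p = malloc(100);
        if (p == NULL)
            return 1;
        /* Reports the size class actually backing the allocation
         * (e.g. 112 or 128), which may exceed the 100 bytes requested. */
        printf("usable: %zu\n", malloc_usable_size(p));
        free(p);
        return 0;
    }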
.SS "Experimental API"
.PP
The experimental API is subject to change or removal without regard for backward compatibility\&. If
\fB\-\-disable\-experimental\fR
is specified during configuration, the experimental API is omitted\&.
.PP
The
\fBallocm\fR\fB\fR,
\fBrallocm\fR\fB\fR,
\fBsallocm\fR\fB\fR,
\fBdallocm\fR\fB\fR, and
\fBnallocm\fR\fB\fR
functions all have a
\fIflags\fR
argument that can be used to specify options\&. The functions only check the options that are contextually relevant\&. Use bitwise or (|) operations to specify one or more of the following:
.PP
\fBALLOCM_LG_ALIGN(\fR\fB\fIla\fR\fR\fB) \fR
.RS 4
Align the memory allocation to start at an address that is a multiple of
(1 << \fIla\fR)\&. This macro does not validate that
\fIla\fR
is within the valid range\&.
.RE
.PP
\fBALLOCM_ALIGN(\fR\fB\fIa\fR\fR\fB) \fR
.RS 4
Align the memory allocation to start at an address that is a multiple of
\fIa\fR, where
\fIa\fR
is a power of two\&. This macro does not validate that
\fIa\fR
is a power of 2\&.
.RE
.PP
\fBALLOCM_ZERO\fR
.RS 4
Initialize newly allocated memory to contain zero bytes\&. In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes\&. If this macro is absent, newly allocated memory is uninitialized\&.
.RE
.PP
\fBALLOCM_NO_MOVE\fR
.RS 4
For reallocation, fail rather than moving the object\&. This constraint can apply to both growth and shrinkage\&.
.RE
.PP
\fBALLOCM_ARENA(\fR\fB\fIa\fR\fR\fB) \fR
.RS 4
Use the arena specified by the index
\fIa\fR
(and by necessity bypass the thread cache)\&. This macro has no effect for huge regions, nor for regions that were allocated via an arena other than the one specified\&. This macro does not validate that
\fIa\fR
specifies an arena index in the valid range\&.
.RE
.PP
The
\fBallocm\fR\fB\fR
function allocates at least
\fIsize\fR
bytes of memory, sets
\fI*ptr\fR
to the base address of the allocation, and sets
\fI*rsize\fR
to the real size of the allocation if
\fIrsize\fR
is not
\fBNULL\fR\&. Behavior is undefined if
\fIsize\fR
is
\fB0\fR, or if request size overflows due to size class and/or alignment constraints\&.
.PP
The
\fBrallocm\fR\fB\fR
function resizes the allocation at
\fI*ptr\fR
to be at least
\fIsize\fR
bytes, sets
\fI*ptr\fR
to the base address of the allocation if it moved, and sets
\fI*rsize\fR
to the real size of the allocation if
\fIrsize\fR
is not
\fBNULL\fR\&. If
\fIextra\fR
is non\-zero, an attempt is made to resize the allocation to be at least
(\fIsize\fR + \fIextra\fR)
bytes, though inability to allocate the extra byte(s) will not by itself result in failure\&. Behavior is undefined if
\fIsize\fR
is
\fB0\fR, if request size overflows due to size class and/or alignment constraints, or if
(\fIsize\fR + \fIextra\fR > \fBSIZE_T_MAX\fR)\&.
.PP
The
\fBsallocm\fR\fB\fR
function sets
\fI*rsize\fR
to the real size of the allocation\&.
.PP
The
\fBdallocm\fR\fB\fR
function causes the memory referenced by
\fIptr\fR
to be made available for future allocations\&.
.PP
The
\fBnallocm\fR\fB\fR
function allocates no memory, but it performs the same size computation as the
\fBallocm\fR\fB\fR
function, and if
\fIrsize\fR
is not
\fBNULL\fR
it sets
\fI*rsize\fR
to the real size of the allocation that would result from the equivalent
\fBallocm\fR\fB\fR
function call\&. Behavior is undefined if
\fIsize\fR
is
\fB0\fR, or if request size overflows due to size class and/or alignment constraints\&.
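.PP
The following editorial sketch (hypothetical function name; not from the manual) exercises this experimental API as described above, assuming the pre\-4\&.0 prototypes and error constants:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <jemalloc/jemalloc.h>

void
experimental_sketch(void)
{
    void *p;
    size_t rsize;

    if (allocm(&p, &rsize, 100, ALLOCM_ZERO) != ALLOCM_SUCCESS)
        return;
    /* Try to grow in place; ALLOCM_NO_MOVE forbids relocation. */
    if (rallocm(&p, &rsize, 200, 0, ALLOCM_NO_MOVE) == ALLOCM_ERR_NOT_MOVED) {
        /* p could not be resized without moving; it is unchanged. */
    }
    dallocm(p, 0);
}
.fi
.if n \{\
.RE
.\}
.sp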
.SH "TUNING" .SH "TUNING"
.PP .PP
Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile\- or run\-time\&. Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile\- or run\-time\&.
...@@ -519,8 +422,8 @@ options\&. Some options have boolean values (true/false), others have integer va ...@@ -519,8 +422,8 @@ options\&. Some options have boolean values (true/false), others have integer va
Traditionally, allocators have used Traditionally, allocators have used
\fBsbrk\fR(2) \fBsbrk\fR(2)
to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory\&. If to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory\&. If
\fB\-\-enable\-dss\fR \fBsbrk\fR(2)
is specified during configuration, this allocator uses both is supported by the operating system, this allocator uses both
\fBmmap\fR(2) \fBmmap\fR(2)
and and
\fBsbrk\fR(2), in that order of preference; otherwise only \fBsbrk\fR(2), in that order of preference; otherwise only
...@@ -535,18 +438,29 @@ is specified during configuration, this allocator supports thread\-specific cach ...@@ -535,18 +438,29 @@ is specified during configuration, this allocator supports thread\-specific cach
.PP .PP
Memory is conceptually broken into equal\-sized chunks, where the chunk size is a power of two that is greater than the page size\&. Chunks are always aligned to multiples of the chunk size\&. This alignment makes it possible to find metadata for user objects very quickly\&. Memory is conceptually broken into equal\-sized chunks, where the chunk size is a power of two that is greater than the page size\&. Chunks are always aligned to multiples of the chunk size\&. This alignment makes it possible to find metadata for user objects very quickly\&.
.PP .PP
User objects are broken into three categories according to size: small, large, and huge\&. Small objects are smaller than one page\&. Large objects are smaller than the chunk size\&. Huge objects are a multiple of the chunk size\&. Small and large objects are managed by arenas; huge objects are managed separately in a single data structure that is shared by all threads\&. Huge objects are used by applications infrequently enough that this single data structure is not a scalability issue\&. User objects are broken into three categories according to size: small, large, and huge\&. Small and large objects are managed entirely by arenas; huge objects are additionally aggregated in a single data structure that is shared by all threads\&. Huge objects are typically used by applications infrequently enough that this single data structure is not a scalability issue\&.
.PP .PP
Each chunk that is managed by an arena tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object)\&. The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time\&. Each chunk that is managed by an arena tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object)\&. The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time\&.
.PP .PP
Small objects are managed in groups by page runs\&. Each run maintains a frontier and free list to track which regions are in use\&. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least Small objects are managed in groups by page runs\&. Each run maintains a bitmap to track which regions are in use\&. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least
sizeof(\fBdouble\fR)\&. All other small object size classes are multiples of the quantum, spaced such that internal fragmentation is limited to approximately 25% for all but the smallest size classes\&. Allocation requests that are larger than the maximum small size class, but small enough to fit in an arena\-managed chunk (see the sizeof(\fBdouble\fR)\&. All other object size classes are multiples of the quantum, spaced such that there are four size classes for each doubling in size, which limits internal fragmentation to approximately 20% for all but the smallest size classes\&. Small size classes are smaller than four times the page size, large size classes are smaller than the chunk size (see the
"opt\&.lg_chunk" "opt\&.lg_chunk"
option), are rounded up to the nearest run size\&. Allocation requests that are too large to fit in an arena\-managed chunk are rounded up to the nearest multiple of the chunk size\&. option), and huge size classes extend from the chunk size up to one size class less than the full address space size\&.
.PP .PP
Allocations are packed tightly together, which can be an issue for multi\-threaded applications\&. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating\&. Allocations are packed tightly together, which can be an issue for multi\-threaded applications\&. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating\&.
.PP .PP
Assuming 4 MiB chunks, 4 KiB pages, and a 16\-byte quantum on a 64\-bit system, the size classes in each category are as shown in The
\fBrealloc\fR\fB\fR,
\fBrallocx\fR\fB\fR, and
\fBxallocx\fR\fB\fR
functions may resize allocations without moving them under limited circumstances\&. Unlike the
\fB*allocx\fR\fB\fR
API, the standard API does not officially round up the usable size of an allocation to the nearest size class, so technically it is necessary to call
\fBrealloc\fR\fB\fR
to grow e\&.g\&. a 9\-byte allocation to 16 bytes, or shrink a 16\-byte allocation to 9 bytes\&. Growth and shrinkage trivially succeeds in place as long as the pre\-size and post\-size both round up to the same size class\&. No other API guarantees are made regarding in\-place resizing, but the current implementation also tries to resize large and huge allocations in place, as long as the pre\-size and post\-size are both large or both huge\&. In such cases shrinkage always succeeds for large size classes, but for huge size classes the chunk allocator must support splitting (see
"arena\&.<i>\&.chunk_hooks")\&. Growth only succeeds if the trailing memory is currently available, and additionally for huge size classes the chunk allocator must support merging\&.
.PP
Assuming 2 MiB chunks, 4 KiB pages, and a 16\-byte quantum on a 64\-bit system, the size classes in each category are as shown in
Table 1\&. Table 1\&.
.sp .sp
.it 1 an-trap .it 1 an-trap
...@@ -572,8 +486,23 @@ l r l ...@@ -572,8 +486,23 @@ l r l
^ r l ^ r l
^ r l ^ r l
^ r l ^ r l
^ r l
^ r l
l r l l r l
l r l. ^ r l
^ r l
^ r l
^ r l
^ r l
^ r l
^ r l
l r l
^ r l
^ r l
^ r l
^ r l
^ r l
^ r l.
T{
Small
T}:T{
@@ -584,7 +513,7 @@ T}
:T{
16
T}:T{
[16, 32, 48, 64, 80, 96, 112, 128]
T}
:T{
32
@@ -609,21 +538,96 @@ T}
:T{
512
T}:T{
[2560, 3072, 3584, 4096]
T}
:T{
1 KiB
T}:T{
[5 KiB, 6 KiB, 7 KiB, 8 KiB]
T}
:T{
2 KiB
T}:T{
[10 KiB, 12 KiB, 14 KiB]
T}
T{
Large
T}:T{
2 KiB
T}:T{
[16 KiB]
T}
:T{
4 KiB
T}:T{
[20 KiB, 24 KiB, 28 KiB, 32 KiB]
T}
:T{
8 KiB
T}:T{
[40 KiB, 48 KiB, 56 KiB, 64 KiB]
T}
:T{
16 KiB
T}:T{
[80 KiB, 96 KiB, 112 KiB, 128 KiB]
T}
:T{
32 KiB
T}:T{
[160 KiB, 192 KiB, 224 KiB, 256 KiB]
T}
:T{
64 KiB
T}:T{
[320 KiB, 384 KiB, 448 KiB, 512 KiB]
T}
:T{
128 KiB
T}:T{
[640 KiB, 768 KiB, 896 KiB, 1 MiB]
T}
:T{
256 KiB
T}:T{
[1280 KiB, 1536 KiB, 1792 KiB]
T}
T{
Huge
T}:T{
256 KiB
T}:T{
[2 MiB]
T}
:T{
512 KiB
T}:T{
[2560 KiB, 3 MiB, 3584 KiB, 4 MiB]
T}
:T{
1 MiB
T}:T{
[5 MiB, 6 MiB, 7 MiB, 8 MiB]
T}
:T{
2 MiB
T}:T{
[10 MiB, 12 MiB, 14 MiB, 16 MiB]
T}
:T{
4 MiB
T}:T{
[20 MiB, 24 MiB, 28 MiB, 32 MiB]
T}
:T{
8 MiB
T}:T{
[40 MiB, 48 MiB, 56 MiB, 64 MiB]
T}
:T{
\&.\&.\&.
T}:T{
\&.\&.\&.
T}
.TE
.sp 1
@@ -660,15 +664,15 @@ If a value is passed in, refresh the data from which the
functions report values, and increment the epoch\&. Return the current epoch\&. This is useful for detecting whether another thread caused a refresh\&.
.RE
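.PP
An editorial sketch (hypothetical function name) of the usual pattern: refresh the epoch, then read a statistic:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

void
print_allocated_sketch(void)
{
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    size_t allocated, len = sizeof(allocated);

    /* Refresh the data that the stats.* mallctls report. */
    mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));
    if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
        printf("allocated: %zu\en", allocated);
}
.fi
.if n \{\
.RE
.\}
.sp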
.PP
"config\&.cache_oblivious" (\fBbool\fR) r\-
.RS 4
\fB\-\-enable\-cache\-oblivious\fR
was specified during build configuration\&.
.RE
.PP
"config\&.debug" (\fBbool\fR) r\-
.RS 4
\fB\-\-enable\-debug\fR
was specified during build configuration\&.
.RE
.PP
@@ -684,12 +688,6 @@ was specified during build configuration\&.
was specified during build configuration\&.
.RE
.PP
"config\&.mremap" (\fBbool\fR) r\-
.RS 4
\fB\-\-enable\-mremap\fR
was specified during build configuration\&.
.RE
.PP
"config\&.munmap" (\fBbool\fR) r\- "config\&.munmap" (\fBbool\fR) r\-
.RS 4 .RS 4
\fB\-\-enable\-munmap\fR \fB\-\-enable\-munmap\fR
...@@ -763,14 +761,16 @@ is specified during configuration, in which case it is enabled by default\&. ...@@ -763,14 +761,16 @@ is specified during configuration, in which case it is enabled by default\&.
.RS 4 .RS 4
dss (\fBsbrk\fR(2)) allocation precedence as related to dss (\fBsbrk\fR(2)) allocation precedence as related to
\fBmmap\fR(2) \fBmmap\fR(2)
allocation\&. The following settings are supported: \(lqdisabled\(rq, \(lqprimary\(rq, and \(lqsecondary\(rq\&. The default is \(lqsecondary\(rq if allocation\&. The following settings are supported if
"config\&.dss" \fBsbrk\fR(2)
is true, \(lqdisabled\(rq otherwise\&. is supported by the operating system: \(lqdisabled\(rq, \(lqprimary\(rq, and \(lqsecondary\(rq; otherwise only \(lqdisabled\(rq is supported\&. The default is \(lqsecondary\(rq if
\fBsbrk\fR(2)
is supported by the operating system; \(lqdisabled\(rq otherwise\&.
.RE .RE
.PP .PP
"opt\&.lg_chunk" (\fBsize_t\fR) r\- "opt\&.lg_chunk" (\fBsize_t\fR) r\-
.RS 4 .RS 4
Virtual memory chunk size (log base 2)\&. If a chunk size outside the supported size range is specified, the size is silently clipped to the minimum/maximum supported size\&. The default chunk size is 4 MiB (2^22)\&. Virtual memory chunk size (log base 2)\&. If a chunk size outside the supported size range is specified, the size is silently clipped to the minimum/maximum supported size\&. The default chunk size is 2 MiB (2^21)\&.
.RE .RE
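.PP
Run\-time options are conventionally set in the
\fBMALLOC_CONF\fR
environment variable (e\&.g\&. MALLOC_CONF=\(lqlg_chunk:22\(rq; see the ENVIRONMENT section) and can be read back through the opt\&.* mallctls, as in this editorial sketch (hypothetical function name):
.sp
.if n \{\
.RS 4
.\}
.nf
#include <stdio.h>
#include <jemalloc/jemalloc.h>

void
show_chunk_size_sketch(void)
{
    size_t lg_chunk, sz = sizeof(lg_chunk);

    /* opt.* mallctls are read-only views of the options in effect. */
    if (mallctl("opt.lg_chunk", &lg_chunk, &sz, NULL, 0) == 0)
        printf("chunk size: %zu bytes\en", (size_t)1 << lg_chunk);
}
.fi
.if n \{\
.RE
.\}
.sp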
.PP
"opt\&.narenas" (\fBsize_t\fR) r\-
@@ -782,7 +782,11 @@ Maximum number of arenas to use for automatic multiplexing of threads and arenas
.RS 4
Per\-arena minimum ratio (log base 2) of active to dirty pages\&. Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater), before informing the kernel about some of those pages via
\fBmadvise\fR(2)
or a similar system call\&. This provides the kernel with sufficient information to recycle dirty pages if physical memory becomes scarce and the pages remain unused\&. The default minimum ratio is 8:1 (2^3:1); an option value of \-1 will disable dirty page purging\&. See
"arenas\&.lg_dirty_mult"
and
"arena\&.<i>\&.lg_dirty_mult"
for related dynamic control options\&.
.RE
.PP
"opt\&.stats_print" (\fBbool\fR) r\-
@@ -793,16 +797,21 @@ function is called at program exit via an
\fBatexit\fR(3)
function\&. If
\fB\-\-enable\-stats\fR
is specified during configuration, this has the potential to cause deadlock for a multi\-threaded process that exits while one or more threads are executing in the memory allocation functions\&. Furthermore,
\fBatexit\fR\fB\fR
may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls
\fBatexit\fR\fB\fR, so this option is not universally usable (though the application can register its own
\fBatexit\fR\fB\fR
function with equivalent functionality)\&. Therefore, this option should only be used with care; it is primarily intended as a performance tuning aid during application development\&. This option is disabled by default\&.
.RE
.PP
"opt\&.junk" (\fBconst char *\fR) r\- [\fB\-\-enable\-fill\fR]
.RS 4
Junk filling\&. If set to "alloc", each byte of uninitialized allocated memory will be initialized to
0xa5\&. If set to "free", all deallocated memory will be initialized to
0x5a\&. If set to "true", both allocated and deallocated memory will be initialized, and if set to "false", junk filling will be disabled entirely\&. This is intended for debugging and will impact performance negatively\&. This option is "false" by default unless
\fB\-\-enable\-debug\fR
is specified during configuration, in which case it is "true" by default unless running inside
\m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2\&.
.RE
.PP
@@ -825,10 +834,9 @@ option is enabled, the redzones are checked for corruption during deallocation\&
"opt\&.zero" (\fBbool\fR) r\- [\fB\-\-enable\-fill\fR]
.RS 4
Zero filling enabled/disabled\&. If enabled, each byte of uninitialized allocated memory will be initialized to 0\&. Note that this initialization only happens once for each byte, so
\fBrealloc\fR\fB\fR
and
\fBrallocx\fR\fB\fR
calls do not zero memory that was previously allocated\&. This is intended for debugging and will impact performance negatively\&. This option is disabled by default\&.
.RE
.PP
@@ -839,12 +847,6 @@ Allocation tracing based on
enabled/disabled\&. This option is disabled by default\&.
.RE
.PP
"opt\&.valgrind" (\fBbool\fR) r\- [\fB\-\-enable\-valgrind\fR]
.RS 4
\m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2
support enabled/disabled\&. This option is vestigial because jemalloc auto\-detects whether it is running inside Valgrind\&. This option is disabled by default, unless running inside Valgrind\&.
.RE
.PP
"opt\&.xmalloc" (\fBbool\fR) r\- [\fB\-\-enable\-xmalloc\fR] "opt\&.xmalloc" (\fBbool\fR) r\- [\fB\-\-enable\-xmalloc\fR]
.RS 4 .RS 4
Abort\-on\-out\-of\-memory enabled/disabled\&. If enabled, rather than returning failure for any allocation function, display a diagnostic message on Abort\-on\-out\-of\-memory enabled/disabled\&. If enabled, rather than returning failure for any allocation function, display a diagnostic message on
...@@ -867,15 +869,15 @@ This option is disabled by default\&. ...@@ -867,15 +869,15 @@ This option is disabled by default\&.
.PP .PP
"opt\&.tcache" (\fBbool\fR) r\- [\fB\-\-enable\-tcache\fR] "opt\&.tcache" (\fBbool\fR) r\- [\fB\-\-enable\-tcache\fR]
.RS 4 .RS 4
Thread\-specific caching enabled/disabled\&. When there are multiple threads, each thread uses a thread\-specific cache for objects up to a certain size\&. Thread\-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use\&. See the Thread\-specific caching (tcache) enabled/disabled\&. When there are multiple threads, each thread uses a tcache for objects up to a certain size\&. Thread\-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use\&. See the
"opt\&.lg_tcache_max" "opt\&.lg_tcache_max"
option for related tuning information\&. This option is enabled by default unless running inside option for related tuning information\&. This option is enabled by default unless running inside
\m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2\&. \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2, in which case it is forcefully disabled\&.
.RE .RE
.PP .PP
"opt\&.lg_tcache_max" (\fBsize_t\fR) r\- [\fB\-\-enable\-tcache\fR] "opt\&.lg_tcache_max" (\fBsize_t\fR) r\- [\fB\-\-enable\-tcache\fR]
.RS 4 .RS 4
Maximum size class (log base 2) to cache in the thread\-specific cache\&. At a minimum, all small size classes are cached, and at a maximum all large size classes are cached\&. The default maximum is 32 KiB (2^15)\&. Maximum size class (log base 2) to cache in the thread\-specific cache (tcache)\&. At a minimum, all small size classes are cached, and at a maximum all large size classes are cached\&. The default maximum is 32 KiB (2^15)\&.
.RE .RE
.PP .PP
"opt\&.prof" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] "opt\&.prof" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
...@@ -892,9 +894,11 @@ option for information on interval\-triggered profile dumping, the ...@@ -892,9 +894,11 @@ option for information on interval\-triggered profile dumping, the
"opt\&.prof_gdump" "opt\&.prof_gdump"
option for information on high\-water\-triggered profile dumping, and the option for information on high\-water\-triggered profile dumping, and the
"opt\&.prof_final" "opt\&.prof_final"
option for final profile dumping\&. Profile output is compatible with the included option for final profile dumping\&. Profile output is compatible with the
\fBjeprof\fR
command, which is based on the
\fBpprof\fR \fBpprof\fR
Perl script, which originates from the that is developed as part of the
\m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2\&. \m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2\&.
.RE .RE
.PP .PP
...@@ -904,7 +908,7 @@ Filename prefix for profile dumps\&. If the prefix is set to the empty string, n ...@@ -904,7 +908,7 @@ Filename prefix for profile dumps\&. If the prefix is set to the empty string, n
jeprof\&. jeprof\&.
.RE .RE
.PP .PP
"opt\&.prof_active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] "opt\&.prof_active" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4 .RS 4
Profiling activated/deactivated\&. This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the Profiling activated/deactivated\&. This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the
"opt\&.prof" "opt\&.prof"
...@@ -913,7 +917,16 @@ option) but inactive, then toggle profiling at any time during program execution ...@@ -913,7 +917,16 @@ option) but inactive, then toggle profiling at any time during program execution
mallctl\&. This option is enabled by default\&. mallctl\&. This option is enabled by default\&.
.RE .RE
.PP .PP
"opt\&.lg_prof_sample" (\fBssize_t\fR) r\- [\fB\-\-enable\-prof\fR] "opt\&.prof_thread_active_init" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4
Initial setting for
"thread\&.prof\&.active"
in newly created threads\&. The initial setting for newly created threads can also be changed during execution via the
"prof\&.thread_active_init"
mallctl\&. This option is enabled by default\&.
.RE
.PP
"opt\&.lg_prof_sample" (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4 .RS 4
Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity\&. Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead\&. The default sample interval is 512 KiB (2^19 B)\&. Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity\&. Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead\&. The default sample interval is 512 KiB (2^19 B)\&.
.RE .RE
...@@ -935,12 +948,8 @@ option\&. By default, interval\-triggered profile dumping is disabled (encoded a ...@@ -935,12 +948,8 @@ option\&. By default, interval\-triggered profile dumping is disabled (encoded a
.PP .PP
"opt\&.prof_gdump" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] "opt\&.prof_gdump" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4 .RS 4
Trigger a memory profile dump every time the total virtual memory exceeds the previous maximum\&. Profiles are dumped to files named according to the pattern Set the initial state of
<prefix>\&.<pid>\&.<seq>\&.u<useq>\&.heap, where "prof\&.gdump", which when enabled triggers a memory profile dump every time the total virtual memory exceeds the previous maximum\&. This option is disabled by default\&.
<prefix>
is controlled by the
"opt\&.prof_prefix"
option\&. This option is disabled by default\&.
.RE .RE
.PP .PP
"opt\&.prof_final" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] "opt\&.prof_final" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
...@@ -952,7 +961,12 @@ function to dump final memory usage to a file named according to the pattern ...@@ -952,7 +961,12 @@ function to dump final memory usage to a file named according to the pattern
<prefix> <prefix>
is controlled by the is controlled by the
"opt\&.prof_prefix" "opt\&.prof_prefix"
option\&. This option is enabled by default\&. option\&. Note that
\fBatexit\fR\fB\fR
may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls
\fBatexit\fR\fB\fR, so this option is not universally usable (though the application can register its own
\fBatexit\fR\fB\fR
function with equivalent functionality)\&. This option is disabled by default\&.
.RE
.PP
"opt\&.prof_leak" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR]
@@ -1007,10 +1021,42 @@ Enable/disable calling thread\*(Aqs tcache\&. The tcache is implicitly flushed a
.PP
"thread\&.tcache\&.flush" (\fBvoid\fR) \-\- [\fB\-\-enable\-tcache\fR]
.RS 4
Flush calling thread\*(Aqs thread\-specific cache (tcache)\&. This interface releases all cached objects and internal data structures associated with the calling thread\*(Aqs tcache\&. Ordinarily, this interface need not be called, since automatic periodic incremental garbage collection occurs, and the thread cache is automatically discarded when a thread exits\&. However, garbage collection is triggered by allocation activity, so it is possible for a thread that stops allocating/deallocating to retain its cache indefinitely, in which case the developer may find manual flushing useful\&.
.RE
.PP
"thread\&.prof\&.name" (\fBconst char *\fR) r\- or \-w [\fB\-\-enable\-prof\fR]
.RS 4
Get/set the descriptive name associated with the calling thread in memory profile dumps\&. An internal copy of the name string is created, so the input string need not be maintained after this interface completes execution\&. The output string of this interface should be copied for non\-ephemeral uses, because multiple implementation details can cause asynchronous string deallocation\&. Furthermore, each invocation of this interface can only read or write; simultaneous read/write is not supported due to string lifetime limitations\&. The name string must be nil\-terminated and comprised only of characters in the sets recognized by
\fBisgraph\fR(3)
and
\fBisblank\fR(3)\&.
.RE
.PP
"thread\&.prof\&.active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR]
.RS 4
Control whether sampling is currently active for the calling thread\&. This is an activation mechanism in addition to
"prof\&.active"; both must be active for the calling thread to sample\&. This flag is enabled by default\&.
.RE
.PP
"tcache\&.create" (\fBunsigned\fR) r\- [\fB\-\-enable\-tcache\fR]
.RS 4
Create an explicit thread\-specific cache (tcache) and return an identifier that can be passed to the
\fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR
macro to explicitly use the specified cache rather than the automatically managed one that is used by default\&. Each explicit cache can be used by only one thread at a time; the application must assure that this constraint holds\&.
.RE
.PP
"tcache\&.flush" (\fBunsigned\fR) \-w [\fB\-\-enable\-tcache\fR]
.RS 4
Flush the specified thread\-specific cache (tcache)\&. The same considerations apply to this interface as to
"thread\&.tcache\&.flush", except that the tcache will never be automatically be discarded\&.
.RE .RE
.PP
"tcache\&.destroy" (\fBunsigned\fR) \-w [\fB\-\-enable\-tcache\fR]
.RS 4
Flush the specified thread\-specific cache (tcache) and make the identifier available for use during a future tcache creation\&.
.RE
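.PP
An editorial sketch (hypothetical function name) of the explicit tcache lifecycle described by these mallctls:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <jemalloc/jemalloc.h>

void
explicit_tcache_sketch(void)
{
    unsigned tc;
    size_t sz = sizeof(tc);
    void *p;

    /* Create an explicit tcache; it may be used by one thread at a time. */
    if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
        return;
    p = mallocx(128, MALLOCX_TCACHE(tc));
    if (p != NULL)
        dallocx(p, MALLOCX_TCACHE(tc));
    /* Release the cache and recycle its identifier. */
    mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
}
.fi
.if n \{\
.RE
.\}
.sp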
.PP
"arena\&.<i>\&.purge" (\fBvoid\fR) \-\-
.RS 4
Purge unused dirty pages for arena <i>, or for all arenas if <i> equals
"arenas\&.narenas"\&.
@@ -1019,11 +1065,237 @@ Purge unused dirty pages for arena <i>, or for all arenas if <i> equals
"arena\&.<i>\&.dss" (\fBconst char *\fR) rw
.RS 4
Set the precedence of dss allocation as related to mmap allocation for arena <i>, or for all arenas if <i> equals
"arenas\&.narenas"\&. See
"opt\&.dss"
for supported settings\&.
.RE
.PP .PP
"arena\&.<i>\&.lg_dirty_mult" (\fBssize_t\fR) rw
.RS 4
Current per\-arena minimum ratio (log base 2) of active to dirty pages for arena <i>\&. Each time this interface is set and the ratio is increased, pages are synchronously purged as necessary to impose the new ratio\&. See
"opt\&.lg_dirty_mult"
for additional information\&.
.RE
.PP
"arena\&.<i>\&.chunk_hooks" (\fBchunk_hooks_t\fR) rw
.RS 4
Get or set the chunk management hook functions for arena <i>\&. The functions must be capable of operating on all extant chunks associated with arena <i>, usually by passing unknown chunks to the replaced functions\&. In practice, it is feasible to control allocation for arenas created via
"arenas\&.extend"
such that all chunks originate from an application\-supplied chunk allocator (by setting custom chunk hook functions just after arena creation), but the automatically created arenas may have already created chunks prior to the application having an opportunity to take over chunk allocation\&.
.sp
.if n \{\
.RS 4
.\}
.nf
typedef struct {
chunk_alloc_t *alloc;
chunk_dalloc_t *dalloc;
chunk_commit_t *commit;
chunk_decommit_t *decommit;
chunk_purge_t *purge;
chunk_split_t *split;
chunk_merge_t *merge;
} chunk_hooks_t;
.fi
.if n \{\
.RE
.\}
.sp
The
\fBchunk_hooks_t\fR
structure comprises function pointers which are described individually below\&. jemalloc uses these functions to manage chunk lifetime, which starts off with allocation of mapped committed memory, in the simplest case followed by deallocation\&. However, there are performance and platform reasons to retain chunks for later reuse\&. Cleanup attempts cascade from deallocation to decommit to purging, which gives the chunk management functions opportunities to reject the most permanent cleanup operations in favor of less permanent (and often less costly) operations\&. The chunk splitting and merging operations can also be opted out of, but this is mainly intended to support platforms on which virtual memory mappings provided by the operating system kernel do not automatically coalesce and split, e\&.g\&. Windows\&.
.HP \w'typedef\ void\ *(chunk_alloc_t)('u
.BI "typedef void *(chunk_alloc_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "alignment" ", bool\ *" "zero" ", bool\ *" "commit" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk allocation function conforms to the
\fBchunk_alloc_t\fR
type and upon success returns a pointer to
\fIsize\fR
bytes of mapped memory on behalf of arena
\fIarena_ind\fR
such that the chunk\*(Aqs base address is a multiple of
\fIalignment\fR, as well as setting
\fI*zero\fR
to indicate whether the chunk is zeroed and
\fI*commit\fR
to indicate whether the chunk is committed\&. Upon error the function returns
\fBNULL\fR
and leaves
\fI*zero\fR
and
\fI*commit\fR
unmodified\&. The
\fIsize\fR
parameter is always a multiple of the chunk size\&. The
\fIalignment\fR
parameter is always a power of two at least as large as the chunk size\&. Zeroing is mandatory if
\fI*zero\fR
is true upon function entry\&. Committing is mandatory if
\fI*commit\fR
is true upon function entry\&. If
\fIchunk\fR
is not
\fBNULL\fR, the returned pointer must be
\fIchunk\fR
on success or
\fBNULL\fR
on error\&. Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. Note that replacing the default chunk allocation function makes the arena\*(Aqs
"arena\&.<i>\&.dss"
setting irrelevant\&.
.HP \w'typedef\ bool\ (chunk_dalloc_t)('u
.BI "typedef bool (chunk_dalloc_t)(void\ *" "chunk" ", size_t\ " "size" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk deallocation function conforms to the
\fBchunk_dalloc_t\fR
type and deallocates a
\fIchunk\fR
of given
\fIsize\fR
with
\fIcommitted\fR/decommitted memory as indicated, on behalf of arena
\fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates opt\-out from deallocation; the virtual memory mapping associated with the chunk remains mapped, in the same commit state, and available for future use, in which case it will be automatically retained for later reuse\&.
.HP \w'typedef\ bool\ (chunk_commit_t)('u
.BI "typedef bool (chunk_commit_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk commit function conforms to the
\fBchunk_commit_t\fR
type and commits zeroed physical memory to back pages within a
\fIchunk\fR
of given
\fIsize\fR
at
\fIoffset\fR
bytes, extending for
\fIlength\fR
on behalf of arena
\fIarena_ind\fR, returning false upon success\&. Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. If the function returns true, this indicates insufficient physical memory to satisfy the request\&.
.HP \w'typedef\ bool\ (chunk_decommit_t)('u
.BI "typedef bool (chunk_decommit_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk decommit function conforms to the
\fBchunk_decommit_t\fR
type and decommits any physical memory that is backing pages within a
\fIchunk\fR
of given
\fIsize\fR
at
\fIoffset\fR
bytes, extending for
\fIlength\fR
on behalf of arena
\fIarena_ind\fR, returning false upon success, in which case the pages will be committed via the chunk commit function before being reused\&. If the function returns true, this indicates opt\-out from decommit; the memory remains committed and available for future use, in which case it will be automatically retained for later reuse\&.
.HP \w'typedef\ bool\ (chunk_purge_t)('u
.BI "typedef bool (chunk_purge_t)(void\ *" "chunk" ", size_t" "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk purge function conforms to the
\fBchunk_purge_t\fR
type and optionally discards physical pages within the virtual memory mapping associated with
\fIchunk\fR
of given
\fIsize\fR
at
\fIoffset\fR
bytes, extending for
\fIlength\fR
on behalf of arena
\fIarena_ind\fR, returning false if pages within the purged virtual memory range will be zero\-filled the next time they are accessed\&.
.HP \w'typedef\ bool\ (chunk_split_t)('u
.BI "typedef bool (chunk_split_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "size_a" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk split function conforms to the
\fBchunk_split_t\fR
type and optionally splits
\fIchunk\fR
of given
\fIsize\fR
into two adjacent chunks, the first of
\fIsize_a\fR
bytes, and the second of
\fIsize_b\fR
bytes, operating on
\fIcommitted\fR/decommitted memory as indicated, on behalf of arena
\fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the chunk remains unsplit and therefore should continue to be operated on as a whole\&.
.HP \w'typedef\ bool\ (chunk_merge_t)('u
.BI "typedef bool (chunk_merge_t)(void\ *" "chunk_a" ", size_t\ " "size_a" ", void\ *" "chunk_b" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");"
.sp
.if n \{\
.RS 4
.\}
.nf
.fi
.if n \{\
.RE
.\}
.sp
A chunk merge function conforms to the
\fBchunk_merge_t\fR
type and optionally merges adjacent chunks,
\fIchunk_a\fR
of given
\fIsize_a\fR
and
\fIchunk_b\fR
of given
\fIsize_b\fR
into one contiguous chunk, operating on
\fIcommitted\fR/decommitted memory as indicated, on behalf of arena
\fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the chunks remain distinct mappings and therefore should continue to be operated on independently\&.
.RE
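.PP
An editorial sketch of hook replacement (the wrapper and function names are hypothetical; only the mallctl read/write protocol and the hook signatures come from the text above): read the current hooks, substitute a logging allocator that delegates to the originals, and write the modified structure back:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <stdbool.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Previously installed hooks, captured before replacement. */
static chunk_hooks_t orig_hooks;

static void *
logging_chunk_alloc(void *chunk, size_t size, size_t alignment, bool *zero,
    bool *commit, unsigned arena_ind)
{
    fprintf(stderr, "chunk_alloc: %zu bytes for arena %u\en", size,
        arena_ind);
    /* Delegate to the original allocation hook. */
    return orig_hooks.alloc(chunk, size, alignment, zero, commit,
        arena_ind);
}

void
install_chunk_hooks_sketch(unsigned arena_ind)
{
    char name[64];
    size_t sz = sizeof(orig_hooks);
    chunk_hooks_t new_hooks;

    snprintf(name, sizeof(name), "arena.%u.chunk_hooks", arena_ind);
    /* Read the current hooks, then write back a modified copy. */
    if (mallctl(name, &orig_hooks, &sz, NULL, 0) != 0)
        return;
    new_hooks = orig_hooks;
    new_hooks.alloc = logging_chunk_alloc;
    mallctl(name, NULL, NULL, &new_hooks, sizeof(new_hooks));
}
.fi
.if n \{\
.RE
.\}
.sp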
.PP
"arenas\&.narenas" (\fBunsigned\fR) r\- "arenas\&.narenas" (\fBunsigned\fR) r\-
.RS 4 .RS 4
Current limit on number of arenas\&. Current limit on number of arenas\&.
...@@ -1036,6 +1308,15 @@ An array of ...@@ -1036,6 +1308,15 @@ An array of
booleans\&. Each boolean indicates whether the corresponding arena is initialized\&. booleans\&. Each boolean indicates whether the corresponding arena is initialized\&.
.RE .RE
.PP .PP
"arenas\&.lg_dirty_mult" (\fBssize_t\fR) rw
.RS 4
Current default per\-arena minimum ratio (log base 2) of active to dirty pages, used to initialize
"arena\&.<i>\&.lg_dirty_mult"
during arena creation\&. See
"opt\&.lg_dirty_mult"
for additional information\&.
.RE
.PP
"arenas\&.quantum" (\fBsize_t\fR) r\- "arenas\&.quantum" (\fBsize_t\fR) r\-
.RS 4 .RS 4
Quantum size\&. Quantum size\&.
...@@ -1076,7 +1357,7 @@ Number of regions per page run\&. ...@@ -1076,7 +1357,7 @@ Number of regions per page run\&.
Number of bytes per page run\&. Number of bytes per page run\&.
.RE .RE
.PP .PP
"arenas\&.nlruns" (\fBsize_t\fR) r\- "arenas\&.nlruns" (\fBunsigned\fR) r\-
.RS 4 .RS 4
Total number of large size classes\&. Total number of large size classes\&.
.RE .RE
...@@ -1086,9 +1367,14 @@ Total number of large size classes\&. ...@@ -1086,9 +1367,14 @@ Total number of large size classes\&.
Maximum size supported by this large size class\&.
.RE
.PP
"arenas\&.nhchunks" (\fBunsigned\fR) r\-
.RS 4
Total number of huge size classes\&.
.RE
.PP
"arenas\&.hchunk\&.<i>\&.size" (\fBsize_t\fR) r\-
.RS 4
Maximum size supported by this huge size class\&.
.RE
.PP
"arenas\&.extend" (\fBunsigned\fR) r\-
@@ -1096,11 +1382,22 @@ Purge unused dirty pages for the specified arena, or for all arenas if none is s
Extend the array of arenas by appending a new arena, and returning the new arena index\&.
.RE
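.PP
An editorial sketch (hypothetical function name; the
\fBMALLOCX_TCACHE_NONE\fR
flag is an assumption taken from the non\-standard API flags) that creates a fresh arena and allocates from it:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <jemalloc/jemalloc.h>

void
private_arena_sketch(void)
{
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    void *p;

    /* Append a fresh arena and fetch its index. */
    if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
        return;
    /* Bypass the thread cache so the arena choice takes effect for
     * small requests too. */
    p = mallocx(1024, MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);
    if (p != NULL)
        dallocx(p, MALLOCX_TCACHE_NONE);
}
.fi
.if n \{\
.RE
.\}
.sp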
.PP
"prof\&.thread_active_init" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR]
.RS 4
Control the initial setting for
"thread\&.prof\&.active"
in newly created threads\&. See the
"opt\&.prof_thread_active_init"
option for additional information\&.
.RE
.PP
"prof\&.active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] "prof\&.active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR]
.RS 4 .RS 4
Control whether sampling is currently active\&. See the Control whether sampling is currently active\&. See the
"opt\&.prof_active" "opt\&.prof_active"
option for additional information\&. option for additional information, as well as the interrelated
"thread\&.prof\&.active"
mallctl\&.
.RE .RE
.PP
"prof\&.dump" (\fBconst char *\fR) \-w [\fB\-\-enable\-prof\fR]
@@ -1113,6 +1410,30 @@ is controlled by the
option\&.
.RE
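.PP
An editorial sketch (hypothetical function and file names; requires a build with \-\-enable\-prof and opt\&.prof enabled) of forcing a dump to an explicit filename:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <jemalloc/jemalloc.h>

void
dump_heap_profile_sketch(void)
{
    const char *fname = "/tmp/jeprof.out";

    /* Writing a filename triggers an immediate profile dump. */
    mallctl("prof.dump", NULL, NULL, &fname, sizeof(const char *));
}
.fi
.if n \{\
.RE
.\}
.sp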
.PP .PP
"prof\&.gdump" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR]
.RS 4
When enabled, trigger a memory profile dump every time the total virtual memory exceeds the previous maximum\&. Profiles are dumped to files named according to the pattern
<prefix>\&.<pid>\&.<seq>\&.u<useq>\&.heap, where
<prefix>
is controlled by the
"opt\&.prof_prefix"
option\&.
.RE
.PP
"prof\&.reset" (\fBsize_t\fR) \-w [\fB\-\-enable\-prof\fR]
.RS 4
Reset all memory profile statistics, and optionally update the sample rate (see
"opt\&.lg_prof_sample"
and
"prof\&.lg_sample")\&.
.RE
.PP
"prof\&.lg_sample" (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4
Get the current sample rate (see
"opt\&.lg_prof_sample")\&.
.RE
.PP
"prof\&.interval" (\fBuint64_t\fR) r\- [\fB\-\-enable\-prof\fR] "prof\&.interval" (\fBuint64_t\fR) r\- [\fB\-\-enable\-prof\fR]
.RS 4 .RS 4
Average number of bytes allocated between inverval\-based profile dumps\&. See the Average number of bytes allocated between inverval\-based profile dumps\&. See the
...@@ -1122,7 +1443,7 @@ option for additional information\&. ...@@ -1122,7 +1443,7 @@ option for additional information\&.
.PP .PP
"stats\&.cactive" (\fBsize_t *\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.cactive" (\fBsize_t *\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4 .RS 4
Pointer to a counter that contains an approximate count of the current number of bytes in active pages\&. The estimate may be high, but never low, because each arena rounds up to the nearest multiple of the chunk size when computing its contribution to the counter\&. Note that the Pointer to a counter that contains an approximate count of the current number of bytes in active pages\&. The estimate may be high, but never low, because each arena rounds up when computing its contribution to the counter\&. Note that the
"epoch" "epoch"
mallctl has no bearing on this counter\&. Furthermore, counter consistency is maintained via atomic operations, so it is necessary to use an atomic operation in order to guarantee a consistent read when dereferencing the pointer\&. mallctl has no bearing on this counter\&. Furthermore, counter consistency is maintained via atomic operations, so it is necessary to use an atomic operation in order to guarantee a consistent read when dereferencing the pointer\&.
.RE .RE
@@ -1136,44 +1457,27 @@ Total number of bytes allocated by the application\&.
.RS 4
Total number of bytes in active pages allocated by the application\&. This is a multiple of the page size, and greater than or equal to
"stats\&.allocated"\&. This does not include
"stats\&.arenas\&.<i>\&.pdirty", nor pages entirely devoted to allocator metadata\&.
.RE
.PP
"stats\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Total number of bytes in chunks mapped on behalf of the application\&. This is a multiple of the chunk size, and is at least as large as
"stats\&.active"\&. This does not include inactive chunks\&.
.RE
.PP
"stats\&.chunks\&.current" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Total number of chunks actively mapped on behalf of the application\&. This does not include inactive chunks\&.
.RE
.PP
"stats\&.chunks\&.total" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of chunks allocated\&.
.RE
.PP
"stats\&.chunks\&.high" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Maximum number of active chunks at any time thus far\&.
.RE .RE
.PP .PP
"stats\&.huge\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.metadata" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4 .RS 4
Number of bytes currently allocated by huge objects\&. Total number of bytes dedicated to metadata, which comprise base allocations used for bootstrap\-sensitive internal allocator data structures, arena chunk headers (see
"stats\&.arenas\&.<i>\&.metadata\&.mapped"), and internal allocations (see
"stats\&.arenas\&.<i>\&.metadata\&.allocated")\&.
.RE .RE
.PP .PP
"stats\&.huge\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.resident" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4 .RS 4
Cumulative number of huge allocation requests\&. Maximum number of bytes in physically resident data pages mapped by the allocator, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages\&. This is a maximum rather than precise because pages may not actually be physically resident if they correspond to demand\-zeroed virtual memory that has not yet been touched\&. This is a multiple of the page size, and is larger than
"stats\&.active"\&.
.RE .RE
.PP .PP
"stats\&.huge\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4 .RS 4
Cumulative number of huge deallocation requests\&. Total number of bytes in active chunks mapped by the allocator\&. This is a multiple of the chunk size, and is larger than
"stats\&.active"\&. This does not include inactive chunks, even those that contain unused dirty pages, which means that there is no strict ordering between this and
"stats\&.resident"\&.
.RE .RE
.PP
"stats\&.arenas\&.<i>\&.dss" (\fBconst char *\fR) r\-
@@ -1185,6 +1489,13 @@ allocation\&. See
for details\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.lg_dirty_mult" (\fBssize_t\fR) r\-
.RS 4
Minimum ratio (log base 2) of active to dirty pages\&. See
"opt\&.lg_dirty_mult"
for details\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.nthreads" (\fBunsigned\fR) r\- "stats\&.arenas\&.<i>\&.nthreads" (\fBunsigned\fR) r\-
.RS 4 .RS 4
Number of threads currently assigned to arena\&. Number of threads currently assigned to arena\&.
...@@ -1207,6 +1518,24 @@ or similar has not been called\&. ...@@ -1207,6 +1518,24 @@ or similar has not been called\&.
Number of mapped bytes\&. Number of mapped bytes\&.
.RE .RE
.PP .PP
"stats\&.arenas\&.<i>\&.metadata\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Number of mapped bytes in arena chunk headers, which track the states of the non\-metadata pages\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.metadata\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Number of bytes dedicated to internal allocations\&. Internal allocations differ from application\-originated allocations in that they are for internal use, and that they are omitted from heap profiles\&. This statistic is reported separately from
"stats\&.metadata"
and
"stats\&.arenas\&.<i>\&.metadata\&.mapped"
because it overlaps with e\&.g\&. the
"stats\&.allocated"
and
"stats\&.active"
statistics, whereas the other metadata statistics do not\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.npurge" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.arenas\&.<i>\&.npurge" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4 .RS 4
Number of dirty page purge sweeps performed\&. Number of dirty page purge sweeps performed\&.
...@@ -1264,9 +1593,24 @@ Cumulative number of large deallocation requests served directly by the arena\&. ...@@ -1264,9 +1593,24 @@ Cumulative number of large deallocation requests served directly by the arena\&.
Cumulative number of large allocation requests\&. Cumulative number of large allocation requests\&.
.RE .RE
.PP .PP
"stats\&.arenas\&.<i>\&.bins\&.<j>\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] "stats\&.arenas\&.<i>\&.huge\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Number of bytes currently allocated by huge objects\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.huge\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of huge allocation requests served directly by the arena\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.huge\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of huge deallocation requests served directly by the arena\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.huge\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of huge allocation requests\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.bins\&.<j>\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
@@ -1284,6 +1628,11 @@ Cumulative number of allocations returned to bin\&.
Cumulative number of allocation requests\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.bins\&.<j>\&.curregs" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Current number of regions for this size class\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.bins\&.<j>\&.nfills" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR \fB\-\-enable\-tcache\fR] "stats\&.arenas\&.<i>\&.bins\&.<j>\&.nfills" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR \fB\-\-enable\-tcache\fR]
.RS 4 .RS 4
Cumulative number of tcache fills\&. Cumulative number of tcache fills\&.
...@@ -1328,6 +1677,26 @@ Cumulative number of allocation requests for this size class\&. ...@@ -1328,6 +1677,26 @@ Cumulative number of allocation requests for this size class\&.
.RS 4 .RS 4
Current number of runs for this size class\&. Current number of runs for this size class\&.
.RE .RE
.PP
"stats\&.arenas\&.<i>\&.hchunks\&.<j>\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of allocation requests for this size class served directly by the arena\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.hchunks\&.<j>\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of deallocation requests for this size class served directly by the arena\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.hchunks\&.<j>\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Cumulative number of allocation requests for this size class\&.
.RE
.PP
"stats\&.arenas\&.<i>\&.hchunks\&.<j>\&.curhchunks" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR]
.RS 4
Current number of huge allocations for this size class\&.
.RE
.SH "DEBUGGING MALLOC PROBLEMS" .SH "DEBUGGING MALLOC PROBLEMS"
.PP .PP
When debugging, it is a good idea to configure/build jemalloc with the When debugging, it is a good idea to configure/build jemalloc with the
...@@ -1513,44 +1882,6 @@ The ...@@ -1513,44 +1882,6 @@ The
\fBmalloc_usable_size\fR\fB\fR \fBmalloc_usable_size\fR\fB\fR
function returns the usable size of the allocation pointed to by function returns the usable size of the allocation pointed to by
\fIptr\fR\&. \fIptr\fR\&.
.SS "Experimental API"
.PP
The
\fBallocm\fR\fB\fR,
\fBrallocm\fR\fB\fR,
\fBsallocm\fR\fB\fR,
\fBdallocm\fR\fB\fR, and
\fBnallocm\fR\fB\fR
functions return
\fBALLOCM_SUCCESS\fR
on success; otherwise they return an error value\&. The
\fBallocm\fR\fB\fR,
\fBrallocm\fR\fB\fR, and
\fBnallocm\fR\fB\fR
functions will fail if:
.PP
ALLOCM_ERR_OOM
.RS 4
Out of memory\&. Insufficient contiguous memory was available to service the allocation request\&. The
\fBallocm\fR\fB\fR
function additionally sets
\fI*ptr\fR
to
\fBNULL\fR, whereas the
\fBrallocm\fR\fB\fR
function leaves
\fB*ptr\fR
unmodified\&.
.RE
The
\fBrallocm\fR\fB\fR
function will also fail if:
.PP
ALLOCM_ERR_NOT_MOVED
.RS 4
\fBALLOCM_NO_MOVE\fR
was specified, but the reallocation request could not be serviced without moving the object\&.
.RE
.SH "ENVIRONMENT" .SH "ENVIRONMENT"
.PP .PP
The following environment variable affects the execution of the allocation functions: The following environment variable affects the execution of the allocation functions:
......
<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>JEMALLOC</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="idp45223136"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>jemalloc &#8212; general purpose memory allocation functions</p></div><div class="refsect1"><a name="library"></a><h2>LIBRARY</h2><p>This manual describes jemalloc 4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c. More information
can be found at the <a class="ulink" href="http://www.canonware.com/jemalloc/" target="_top">jemalloc website</a>.</p></div><div class="refsynopsisdiv"><h2>SYNOPSIS</h2><div class="funcsynopsis"><pre class="funcsynopsisinfo">#include &lt;<code class="filename">jemalloc/jemalloc.h</code>&gt;</pre><div class="refsect2"><a name="idp44244480"></a><h3>Standard API</h3><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">malloc</b>(</code></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">calloc</b>(</code></td><td>size_t <var class="pdparam">number</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">posix_memalign</b>(</code></td><td>void **<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">alignment</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">aligned_alloc</b>(</code></td><td>size_t <var class="pdparam">alignment</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">realloc</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">free</b>(</code></td><td>void *<var class="pdparam">ptr</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="refsect2"><a name="idp46062768"></a><h3>Non-standard API</h3><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">mallocx</b>(</code></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">rallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t 
<var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">xallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">extra</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">sallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">dallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">sdallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">nallocx</b>(</code></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctl</b>(</code></td><td>const char *<var class="pdparam">name</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">oldp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">oldlenp</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">newp</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">newlen</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctlnametomib</b>(</code></td><td>const char *<var class="pdparam">name</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">mibp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">miblenp</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctlbymib</b>(</code></td><td>const size_t *<var class="pdparam">mib</var>, </td></tr><tr><td></td><td>size_t <var 
class="pdparam">miblen</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">oldp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">oldlenp</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">newp</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">newlen</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">malloc_stats_print</b>(</code></td><td>void <var class="pdparam">(*write_cb)</var>
#include &lt;<code class="filename">jemalloc/jemalloc.h</code>&gt;</pre><div class="refsect2"><a name="idm316394002288"></a><h3>Standard API</h3><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">malloc</b>(</code></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">calloc</b>(</code></td><td>size_t <var class="pdparam">number</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">posix_memalign</b>(</code></td><td>void **<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">alignment</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">aligned_alloc</b>(</code></td><td>size_t <var class="pdparam">alignment</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">realloc</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">free</b>(</code></td><td>void *<var class="pdparam">ptr</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="refsect2"><a name="idm316393986160"></a><h3>Non-standard API</h3><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">mallocx</b>(</code></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void *<b class="fsfunc">rallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">xallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, 
</td></tr><tr><td></td><td>size_t <var class="pdparam">extra</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">sallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">dallocx</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">nallocx</b>(</code></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctl</b>(</code></td><td>const char *<var class="pdparam">name</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">oldp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">oldlenp</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">newp</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">newlen</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctlnametomib</b>(</code></td><td>const char *<var class="pdparam">name</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">mibp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">miblenp</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">mallctlbymib</b>(</code></td><td>const size_t *<var class="pdparam">mib</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">miblen</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">oldp</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">oldlenp</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">newp</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">newlen</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">malloc_stats_print</b>(</code></td><td>void <var class="pdparam">(*write_cb)</var>
<code>(</code>void *, const char *<code>)</code>
, </td></tr><tr><td></td><td>void *<var class="pdparam">cbopaque</var>, </td></tr><tr><td></td><td>const char *<var class="pdparam">opts</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">malloc_usable_size</b>(</code></td><td>const void *<var class="pdparam">ptr</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">(*malloc_message)</b>(</code></td><td>void *<var class="pdparam">cbopaque</var>, </td></tr><tr><td></td><td>const char *<var class="pdparam">s</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><p><span class="type">const char *</span><code class="varname">malloc_conf</code>;</p></div><div class="refsect2"><a name="idm316388684112"></a><h3>Experimental API</h3><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">allocm</b>(</code></td><td>void **<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">rsize</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">rallocm</b>(</code></td><td>void **<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">rsize</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">extra</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">sallocm</b>(</code></td><td>const void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>size_t *<var class="pdparam">rsize</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">dallocm</b>(</code></td><td>void *<var class="pdparam">ptr</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">int <b class="fsfunc">nallocm</b>(</code></td><td>size_t *<var class="pdparam">rsize</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>int <var class="pdparam">flags</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div></div></div><div class="refsect1"><a name="description"></a><h2>DESCRIPTION</h2><div class="refsect2"><a 
name="idm316388663504"></a><h3>Standard API</h3><p>The <code class="function">malloc</code>(<em class="parameter"><code></code></em>) function allocates , </td></tr><tr><td></td><td>void *<var class="pdparam">cbopaque</var>, </td></tr><tr><td></td><td>const char *<var class="pdparam">opts</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">size_t <b class="fsfunc">malloc_usable_size</b>(</code></td><td>const void *<var class="pdparam">ptr</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">void <b class="fsfunc">(*malloc_message)</b>(</code></td><td>void *<var class="pdparam">cbopaque</var>, </td></tr><tr><td></td><td>const char *<var class="pdparam">s</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div><p><span class="type">const char *</span><code class="varname">malloc_conf</code>;</p></div></div></div><div class="refsect1"><a name="description"></a><h2>DESCRIPTION</h2><div class="refsect2"><a name="idp46115952"></a><h3>Standard API</h3><p>The <code class="function">malloc</code>(<em class="parameter"><code></code></em>) function allocates
<em class="parameter"><code>size</code></em> bytes of uninitialized memory. The allocated <em class="parameter"><code>size</code></em> bytes of uninitialized memory. The allocated
space is suitably aligned (after possible pointer coercion) for storage space is suitably aligned (after possible pointer coercion) for storage
of any type of object.</p><p>The <code class="function">calloc</code>(<em class="parameter"><code></code></em>) function allocates of any type of object.</p><p>The <code class="function">calloc</code>(<em class="parameter"><code></code></em>) function allocates
@@ -13,13 +12,13 @@
exception that the allocated memory is explicitly initialized to zero
bytes.</p><p>The <code class="function">posix_memalign</code>(<em class="parameter"><code></code></em>) function
allocates <em class="parameter"><code>size</code></em> bytes of memory such that the
allocation's base address is an even multiple of allocation's base address is a multiple of
<em class="parameter"><code>alignment</code></em>, and returns the allocation in the value <em class="parameter"><code>alignment</code></em>, and returns the allocation in the value
pointed to by <em class="parameter"><code>ptr</code></em>. The requested pointed to by <em class="parameter"><code>ptr</code></em>. The requested
<em class="parameter"><code>alignment</code></em> must be a power of 2 at least as large <em class="parameter"><code>alignment</code></em> must be a power of 2 at least as large as
as <code class="code">sizeof(<span class="type">void *</span>)</code>.</p><p>The <code class="function">aligned_alloc</code>(<em class="parameter"><code></code></em>) function <code class="code">sizeof(<span class="type">void *</span>)</code>.</p><p>The <code class="function">aligned_alloc</code>(<em class="parameter"><code></code></em>) function
allocates <em class="parameter"><code>size</code></em> bytes of memory such that the
allocation's base address is an even multiple of allocation's base address is a multiple of
<em class="parameter"><code>alignment</code></em>. The requested <em class="parameter"><code>alignment</code></em>. The requested
<em class="parameter"><code>alignment</code></em> must be a power of 2. Behavior is <em class="parameter"><code>alignment</code></em> must be a power of 2. Behavior is
undefined if <em class="parameter"><code>size</code></em> is not an integral multiple of undefined if <em class="parameter"><code>size</code></em> is not an integral multiple of
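To make the alignment constraints concrete, a short sketch of both standard aligned-allocation entry points; nothing here is specific to this diff, and the function name is hypothetical:

    #include <stdlib.h>

    static void
    aligned_example(void)
    {
            void *p = NULL;
            void *q;

            /* alignment: a power of 2 no smaller than sizeof(void *). */
            if (posix_memalign(&p, 64, 1024) == 0)
                    free(p);
            /* size is an integral multiple of alignment, as required. */
            q = aligned_alloc(4096, 8192);
            free(q);
    }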
@@ -38,37 +37,51 @@
<code class="function">malloc</code>(<em class="parameter"><code></code></em>) for the specified size.</p><p>The <code class="function">free</code>(<em class="parameter"><code></code></em>) function causes the <code class="function">malloc</code>(<em class="parameter"><code></code></em>) for the specified size.</p><p>The <code class="function">free</code>(<em class="parameter"><code></code></em>) function causes the
allocated memory referenced by <em class="parameter"><code>ptr</code></em> to be made allocated memory referenced by <em class="parameter"><code>ptr</code></em> to be made
available for future allocations. If <em class="parameter"><code>ptr</code></em> is available for future allocations. If <em class="parameter"><code>ptr</code></em> is
<code class="constant">NULL</code>, no action occurs.</p></div><div class="refsect2"><a name="idm316388639904"></a><h3>Non-standard API</h3><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>), <code class="constant">NULL</code>, no action occurs.</p></div><div class="refsect2"><a name="idp46144704"></a><h3>Non-standard API</h3><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>),
<code class="function">rallocx</code>(<em class="parameter"><code></code></em>), <code class="function">rallocx</code>(<em class="parameter"><code></code></em>),
<code class="function">xallocx</code>(<em class="parameter"><code></code></em>), <code class="function">xallocx</code>(<em class="parameter"><code></code></em>),
<code class="function">sallocx</code>(<em class="parameter"><code></code></em>), <code class="function">sallocx</code>(<em class="parameter"><code></code></em>),
<code class="function">dallocx</code>(<em class="parameter"><code></code></em>), and <code class="function">dallocx</code>(<em class="parameter"><code></code></em>),
<code class="function">sdallocx</code>(<em class="parameter"><code></code></em>), and
<code class="function">nallocx</code>(<em class="parameter"><code></code></em>) functions all have a <code class="function">nallocx</code>(<em class="parameter"><code></code></em>) functions all have a
<em class="parameter"><code>flags</code></em> argument that can be used to specify <em class="parameter"><code>flags</code></em> argument that can be used to specify
options. The functions only check the options that are contextually options. The functions only check the options that are contextually
relevant. Use bitwise or (<code class="code">|</code>) operations to relevant. Use bitwise or (<code class="code">|</code>) operations to
specify one or more of the following: specify one or more of the following:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><code class="constant">MALLOCX_LG_ALIGN(<em class="parameter"><code>la</code></em>) </p><div class="variablelist"><dl class="variablelist"><dt><a name="MALLOCX_LG_ALIGN"></a><span class="term"><code class="constant">MALLOCX_LG_ALIGN(<em class="parameter"><code>la</code></em>)
</code></span></dt><dd><p>Align the memory allocation to start at an address
that is a multiple of <code class="code">(1 &lt;&lt;
<em class="parameter"><code>la</code></em>)</code>. This macro does not validate
that <em class="parameter"><code>la</code></em> is within the valid
range.</p></dd><dt><span class="term"><code class="constant">MALLOCX_ALIGN(<em class="parameter"><code>a</code></em>) range.</p></dd><dt><a name="MALLOCX_ALIGN"></a><span class="term"><code class="constant">MALLOCX_ALIGN(<em class="parameter"><code>a</code></em>)
</code></span></dt><dd><p>Align the memory allocation to start at an address
that is a multiple of <em class="parameter"><code>a</code></em>, where
<em class="parameter"><code>a</code></em> is a power of two. This macro does not
validate that <em class="parameter"><code>a</code></em> is a power of 2.
</p></dd><dt><span class="term"><code class="constant">MALLOCX_ZERO</code></span></dt><dd><p>Initialize newly allocated memory to contain zero </p></dd><dt><a name="MALLOCX_ZERO"></a><span class="term"><code class="constant">MALLOCX_ZERO</code></span></dt><dd><p>Initialize newly allocated memory to contain zero
bytes. In the growing reallocation case, the real size prior to
reallocation defines the boundary between untouched bytes and those
that are initialized to contain zero bytes. If this macro is
absent, newly allocated memory is uninitialized.</p></dd><dt><span class="term"><code class="constant">MALLOCX_ARENA(<em class="parameter"><code>a</code></em>) absent, newly allocated memory is uninitialized.</p></dd><dt><a name="MALLOCX_TCACHE"></a><span class="term"><code class="constant">MALLOCX_TCACHE(<em class="parameter"><code>tc</code></em>)
</code></span></dt><dd><p>Use the thread-specific cache (tcache) specified by
the identifier <em class="parameter"><code>tc</code></em>, which must have been
acquired via the <a class="link" href="#tcache.create">
"<code class="mallctl">tcache.create</code>"
</a>
mallctl. This macro does not validate that
<em class="parameter"><code>tc</code></em> specifies a valid
identifier.</p></dd><dt><a name="MALLOC_TCACHE_NONE"></a><span class="term"><code class="constant">MALLOCX_TCACHE_NONE</code></span></dt><dd><p>Do not use a thread-specific cache (tcache). Unless
<code class="constant">MALLOCX_TCACHE(<em class="parameter"><code>tc</code></em>)</code> or
<code class="constant">MALLOCX_TCACHE_NONE</code> is specified, an
automatically managed tcache will be used under many circumstances.
This macro cannot be used in the same <em class="parameter"><code>flags</code></em>
argument as
<code class="constant">MALLOCX_TCACHE(<em class="parameter"><code>tc</code></em>)</code>.</p></dd><dt><a name="MALLOCX_ARENA"></a><span class="term"><code class="constant">MALLOCX_ARENA(<em class="parameter"><code>a</code></em>)
</code></span></dt><dd><p>Use the arena specified by the index
<em class="parameter"><code>a</code></em> (and by necessity bypass the thread <em class="parameter"><code>a</code></em>. This macro has no effect for regions that
cache). This macro has no effect for huge regions, nor for regions were allocated via an arena other than the one specified. This
that were allocated via an arena other than the one specified. macro does not validate that <em class="parameter"><code>a</code></em> specifies an
This macro does not validate that <em class="parameter"><code>a</code></em> arena index in the valid range.</p></dd></dl></div><p>
specifies an arena index in the valid range.</p></dd></dl></div><p>
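Putting the flag macros together, one plausible combination; the flag values chosen here are illustrative, not taken from this diff:

    #include <jemalloc/jemalloc.h>

    static void *
    flags_example(void)
    {
            /* 64-byte-aligned, zero-filled, bypassing the tcache. */
            return (mallocx(1024,
                MALLOCX_ALIGN(64) | MALLOCX_ZERO | MALLOCX_TCACHE_NONE));
    }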
</p><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>) function allocates at </p><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>) function allocates at
least <em class="parameter"><code>size</code></em> bytes of memory, and returns a pointer least <em class="parameter"><code>size</code></em> bytes of memory, and returns a pointer
to the base address of the allocation. Behavior is undefined if to the base address of the allocation. Behavior is undefined if
@@ -91,7 +104,14 @@
&gt; <code class="constant">SIZE_T_MAX</code>)</code>.</p><p>The <code class="function">sallocx</code>(<em class="parameter"><code></code></em>) function returns the &gt; <code class="constant">SIZE_T_MAX</code>)</code>.</p><p>The <code class="function">sallocx</code>(<em class="parameter"><code></code></em>) function returns the
real size of the allocation at <em class="parameter"><code>ptr</code></em>.</p><p>The <code class="function">dallocx</code>(<em class="parameter"><code></code></em>) function causes the real size of the allocation at <em class="parameter"><code>ptr</code></em>.</p><p>The <code class="function">dallocx</code>(<em class="parameter"><code></code></em>) function causes the
memory referenced by <em class="parameter"><code>ptr</code></em> to be made available for memory referenced by <em class="parameter"><code>ptr</code></em> to be made available for
future allocations.</p><p>The <code class="function">nallocx</code>(<em class="parameter"><code></code></em>) function allocates no future allocations.</p><p>The <code class="function">sdallocx</code>(<em class="parameter"><code></code></em>) function is an
extension of <code class="function">dallocx</code>(<em class="parameter"><code></code></em>) with a
<em class="parameter"><code>size</code></em> parameter to allow the caller to pass in the
allocation size as an optimization. The minimum valid input size is the
original requested size of the allocation, and the maximum valid input
size is the corresponding value returned by
<code class="function">nallocx</code>(<em class="parameter"><code></code></em>) or
<code class="function">sallocx</code>(<em class="parameter"><code></code></em>).</p><p>The <code class="function">nallocx</code>(<em class="parameter"><code></code></em>) function allocates no
memory, but it performs the same size computation as the
<code class="function">mallocx</code>(<em class="parameter"><code></code></em>) function, and returns the real
size of the allocation that would result from the equivalent
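A sketch of the sized deallocation added here, pairing sdallocx() with nallocx(); the request size of 100 bytes is arbitrary:

    #include <jemalloc/jemalloc.h>

    static void
    sized_dalloc_example(void)
    {
            size_t usize = nallocx(100, 0);  /* what mallocx(100, 0) would yield */
            void *p = mallocx(100, 0);

            if (p != NULL) {
                    /* Any size in [100, usize] is a valid argument here. */
                    sdallocx(p, usize, 0);
            }
    }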
@@ -162,11 +182,12 @@ for (i = 0; i &lt; nbins; i++) {
functions simultaneously. If <code class="option">--enable-stats</code> is
specified during configuration, &#8220;m&#8221; and &#8220;a&#8221; can
be specified to omit merged arena and per arena statistics, respectively;
&#8220;b&#8221; and &#8220;l&#8221; can be specified to omit per size &#8220;b&#8221;, &#8220;l&#8221;, and &#8220;h&#8221; can be specified to
class statistics for bins and large objects, respectively. Unrecognized omit per size class statistics for bins, large objects, and huge objects,
characters are silently ignored. Note that thread caching may prevent respectively. Unrecognized characters are silently ignored. Note that
some statistics from being completely up to date, since extra locking thread caching may prevent some statistics from being completely up to
would be required to merge counters that track thread cache operations. date, since extra locking would be required to merge counters that track
thread cache operations.
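For example, assuming an --enable-stats build; the opts string shown is one arbitrary combination of the characters described above:

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    static void
    stats_example(void)
    {
            /* Everything, written to stderr via malloc_message. */
            malloc_stats_print(NULL, NULL, NULL);
            /* General information only: omit merged ("m") and per-arena ("a")
             * statistics, plus bin ("b"), large ("l"), and huge ("h") detail. */
            malloc_stats_print(NULL, NULL, "mablh");
    }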
</p><p>The <code class="function">malloc_usable_size</code>(<em class="parameter"><code></code></em>) function </p><p>The <code class="function">malloc_usable_size</code>(<em class="parameter"><code></code></em>) function
returns the usable size of the allocation pointed to by returns the usable size of the allocation pointed to by
<em class="parameter"><code>ptr</code></em>. The return value may be larger than the size <em class="parameter"><code>ptr</code></em>. The return value may be larger than the size
@@ -177,74 +198,7 @@ for (i = 0; i &lt; nbins; i++) {
discrepancy between the requested allocation size and the size reported
by <code class="function">malloc_usable_size</code>(<em class="parameter"><code></code></em>) should not be
depended on, since such behavior is entirely implementation-dependent.
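A short sketch of the intended use; the helper name and the 100-byte request are illustrative:

    #include <stdlib.h>
    #include <string.h>
    #include <jemalloc/jemalloc.h>

    static void
    usable_size_example(void)
    {
            void *p = malloc(100);

            if (p != NULL) {
                    size_t usable = malloc_usable_size(p);  /* >= 100 */
                    /* The whole usable size may be written, but its exact
                     * value must not be relied upon. */
                    memset(p, 0, usable);
                    free(p);
            }
    }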
</p></div><div class="refsect2"><a name="idm316388574208"></a><h3>Experimental API</h3><p>The experimental API is subject to change or removal without regard </p></div></div><div class="refsect1"><a name="tuning"></a><h2>TUNING</h2><p>Once, when the first call is made to one of the memory allocation
for backward compatibility. If <code class="option">--disable-experimental</code>
is specified during configuration, the experimental API is
omitted.</p><p>The <code class="function">allocm</code>(<em class="parameter"><code></code></em>),
<code class="function">rallocm</code>(<em class="parameter"><code></code></em>),
<code class="function">sallocm</code>(<em class="parameter"><code></code></em>),
<code class="function">dallocm</code>(<em class="parameter"><code></code></em>), and
<code class="function">nallocm</code>(<em class="parameter"><code></code></em>) functions all have a
<em class="parameter"><code>flags</code></em> argument that can be used to specify
options. The functions only check the options that are contextually
relevant. Use bitwise or (<code class="code">|</code>) operations to
specify one or more of the following:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><code class="constant">ALLOCM_LG_ALIGN(<em class="parameter"><code>la</code></em>)
</code></span></dt><dd><p>Align the memory allocation to start at an address
that is a multiple of <code class="code">(1 &lt;&lt;
<em class="parameter"><code>la</code></em>)</code>. This macro does not validate
that <em class="parameter"><code>la</code></em> is within the valid
range.</p></dd><dt><span class="term"><code class="constant">ALLOCM_ALIGN(<em class="parameter"><code>a</code></em>)
</code></span></dt><dd><p>Align the memory allocation to start at an address
that is a multiple of <em class="parameter"><code>a</code></em>, where
<em class="parameter"><code>a</code></em> is a power of two. This macro does not
validate that <em class="parameter"><code>a</code></em> is a power of 2.
</p></dd><dt><span class="term"><code class="constant">ALLOCM_ZERO</code></span></dt><dd><p>Initialize newly allocated memory to contain zero
bytes. In the growing reallocation case, the real size prior to
reallocation defines the boundary between untouched bytes and those
that are initialized to contain zero bytes. If this macro is
absent, newly allocated memory is uninitialized.</p></dd><dt><span class="term"><code class="constant">ALLOCM_NO_MOVE</code></span></dt><dd><p>For reallocation, fail rather than moving the
object. This constraint can apply to both growth and
shrinkage.</p></dd><dt><span class="term"><code class="constant">ALLOCM_ARENA(<em class="parameter"><code>a</code></em>)
</code></span></dt><dd><p>Use the arena specified by the index
<em class="parameter"><code>a</code></em> (and by necessity bypass the thread
cache). This macro has no effect for huge regions, nor for regions
that were allocated via an arena other than the one specified.
This macro does not validate that <em class="parameter"><code>a</code></em>
specifies an arena index in the valid range.</p></dd></dl></div><p>
</p><p>The <code class="function">allocm</code>(<em class="parameter"><code></code></em>) function allocates at
least <em class="parameter"><code>size</code></em> bytes of memory, sets
<em class="parameter"><code>*ptr</code></em> to the base address of the allocation, and
sets <em class="parameter"><code>*rsize</code></em> to the real size of the allocation if
<em class="parameter"><code>rsize</code></em> is not <code class="constant">NULL</code>. Behavior
is undefined if <em class="parameter"><code>size</code></em> is <code class="constant">0</code>, or
if request size overflows due to size class and/or alignment
constraints.</p><p>The <code class="function">rallocm</code>(<em class="parameter"><code></code></em>) function resizes the
allocation at <em class="parameter"><code>*ptr</code></em> to be at least
<em class="parameter"><code>size</code></em> bytes, sets <em class="parameter"><code>*ptr</code></em> to
the base address of the allocation if it moved, and sets
<em class="parameter"><code>*rsize</code></em> to the real size of the allocation if
<em class="parameter"><code>rsize</code></em> is not <code class="constant">NULL</code>. If
<em class="parameter"><code>extra</code></em> is non-zero, an attempt is made to resize
the allocation to be at least <code class="code">(<em class="parameter"><code>size</code></em> +
<em class="parameter"><code>extra</code></em>)</code> bytes, though inability to allocate
the extra byte(s) will not by itself result in failure. Behavior is
undefined if <em class="parameter"><code>size</code></em> is <code class="constant">0</code>, if
request size overflows due to size class and/or alignment constraints, or
if <code class="code">(<em class="parameter"><code>size</code></em> +
<em class="parameter"><code>extra</code></em> &gt;
<code class="constant">SIZE_T_MAX</code>)</code>.</p><p>The <code class="function">sallocm</code>(<em class="parameter"><code></code></em>) function sets
<em class="parameter"><code>*rsize</code></em> to the real size of the allocation.</p><p>The <code class="function">dallocm</code>(<em class="parameter"><code></code></em>) function causes the
memory referenced by <em class="parameter"><code>ptr</code></em> to be made available for
future allocations.</p><p>The <code class="function">nallocm</code>(<em class="parameter"><code></code></em>) function allocates no
memory, but it performs the same size computation as the
<code class="function">allocm</code>(<em class="parameter"><code></code></em>) function, and if
<em class="parameter"><code>rsize</code></em> is not <code class="constant">NULL</code> it sets
<em class="parameter"><code>*rsize</code></em> to the real size of the allocation that
would result from the equivalent <code class="function">allocm</code>(<em class="parameter"><code></code></em>)
function call. Behavior is undefined if <em class="parameter"><code>size</code></em> is
<code class="constant">0</code>, or if request size overflows due to size class
and/or alignment constraints.</p></div></div><div class="refsect1"><a name="tuning"></a><h2>TUNING</h2><p>Once, when the first call is made to one of the memory allocation
routines, the allocator initializes its internals based in part on various
options that can be specified at compile- or run-time.</p><p>The string pointed to by the global variable
<code class="varname">malloc_conf</code>, the &#8220;name&#8221; of the file
@@ -272,8 +226,9 @@ for (i = 0; i &lt; nbins; i++) {
<span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span> to obtain memory, which is <span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span> to obtain memory, which is
suboptimal for several reasons, including race conditions, increased suboptimal for several reasons, including race conditions, increased
fragmentation, and artificial limitations on maximum usable memory. If fragmentation, and artificial limitations on maximum usable memory. If
<code class="option">--enable-dss</code> is specified during configuration, this <span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span> is supported by the operating
allocator uses both <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> and system, this allocator uses both
<span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> and
<span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span>, in that order of preference; <span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span>, in that order of preference;
otherwise only <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> is used.</p><p>This allocator uses multiple arenas in order to reduce lock otherwise only <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> is used.</p><p>This allocator uses multiple arenas in order to reduce lock
contention for threaded programs on multi-processor systems. This works contention for threaded programs on multi-processor systems. This works
@@ -295,34 +250,52 @@ for (i = 0; i &lt; nbins; i++) {
chunk size is a power of two that is greater than the page size. Chunks
are always aligned to multiples of the chunk size. This alignment makes it
possible to find metadata for user objects very quickly.</p><p>User objects are broken into three categories according to size:
small, large, and huge. Small objects are smaller than one page. Large small, large, and huge. Small and large objects are managed entirely by
objects are smaller than the chunk size. Huge objects are a multiple of arenas; huge objects are additionally aggregated in a single data structure
the chunk size. Small and large objects are managed by arenas; huge that is shared by all threads. Huge objects are typically used by
objects are managed separately in a single data structure that is shared by applications infrequently enough that this single data structure is not a
all threads. Huge objects are used by applications infrequently enough scalability issue.</p><p>Each chunk that is managed by an arena tracks its contents as runs of
that this single data structure is not a scalability issue.</p><p>Each chunk that is managed by an arena tracks its contents as runs of
contiguous pages (unused, backing a set of small objects, or backing one
large object). The combination of chunk alignment and chunk page maps
makes it possible to determine all metadata regarding small and large
allocations in constant time.</p><p>Small objects are managed in groups by page runs. Each run maintains
a frontier and free list to track which regions are in use. Allocation a bitmap to track which regions are in use. Allocation requests that are no
requests that are no more than half the quantum (8 or 16, depending on more than half the quantum (8 or 16, depending on architecture) are rounded
architecture) are rounded up to the nearest power of two that is at least up to the nearest power of two that is at least <code class="code">sizeof(<span class="type">double</span>)</code>. All other object size
<code class="code">sizeof(<span class="type">double</span>)</code>. All other small classes are multiples of the quantum, spaced such that there are four size
object size classes are multiples of the quantum, spaced such that internal classes for each doubling in size, which limits internal fragmentation to
fragmentation is limited to approximately 25% for all but the smallest size approximately 20% for all but the smallest size classes. Small size classes
classes. Allocation requests that are larger than the maximum small size are smaller than four times the page size, large size classes are smaller
class, but small enough to fit in an arena-managed chunk (see the <a class="link" href="#opt.lg_chunk"> than the chunk size (see the <a class="link" href="#opt.lg_chunk">
"<code class="mallctl">opt.lg_chunk</code>" "<code class="mallctl">opt.lg_chunk</code>"
</a> option), are </a> option), and
rounded up to the nearest run size. Allocation requests that are too large huge size classes extend from the chunk size up to one size class less than
to fit in an arena-managed chunk are rounded up to the nearest multiple of the full address space size.</p><p>Allocations are packed tightly together, which can be an issue for
the chunk size.</p><p>Allocations are packed tightly together, which can be an issue for
multi-threaded applications. If you need to assure that allocations do not
suffer from cacheline sharing, round your allocation requests up to the
nearest multiple of the cacheline size, or specify cacheline alignment when
allocating.</p><p>Assuming 4 MiB chunks, 4 KiB pages, and a 16-byte quantum on a 64-bit allocating.</p><p>The <code class="function">realloc</code>(<em class="parameter"><code></code></em>),
system, the size classes in each category are as shown in <a class="xref" href="#size_classes" title="Table1.Size classes">Table 1</a>.</p><div class="table"><a name="size_classes"></a><p class="title"><b>Table1.Size classes</b></p><div class="table-contents"><table summary="Size classes" border="1"><colgroup><col align="left" class="c1"><col align="right" class="c2"><col align="left" class="c3"></colgroup><thead><tr><th align="left">Category</th><th align="right">Spacing</th><th align="left">Size</th></tr></thead><tbody><tr><td rowspan="7" align="left">Small</td><td align="right">lg</td><td align="left">[8]</td></tr><tr><td align="right">16</td><td align="left">[16, 32, 48, ..., 128]</td></tr><tr><td align="right">32</td><td align="left">[160, 192, 224, 256]</td></tr><tr><td align="right">64</td><td align="left">[320, 384, 448, 512]</td></tr><tr><td align="right">128</td><td align="left">[640, 768, 896, 1024]</td></tr><tr><td align="right">256</td><td align="left">[1280, 1536, 1792, 2048]</td></tr><tr><td align="right">512</td><td align="left">[2560, 3072, 3584]</td></tr><tr><td align="left">Large</td><td align="right">4 KiB</td><td align="left">[4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB]</td></tr><tr><td align="left">Huge</td><td align="right">4 MiB</td><td align="left">[4 MiB, 8 MiB, 12 MiB, ...]</td></tr></tbody></table></div></div><br class="table-break"></div><div class="refsect1"><a name="mallctl_namespace"></a><h2>MALLCTL NAMESPACE</h2><p>The following names are defined in the namespace accessible via the <code class="function">rallocx</code>(<em class="parameter"><code></code></em>), and
<code class="function">xallocx</code>(<em class="parameter"><code></code></em>) functions may resize allocations
without moving them under limited circumstances. Unlike the
<code class="function">*allocx</code>(<em class="parameter"><code></code></em>) API, the standard API does not
officially round up the usable size of an allocation to the nearest size
class, so technically it is necessary to call
<code class="function">realloc</code>(<em class="parameter"><code></code></em>) to grow e.g. a 9-byte allocation to
16 bytes, or shrink a 16-byte allocation to 9 bytes. Growth and shrinkage
trivially succeeds in place as long as the pre-size and post-size both round
up to the same size class. No other API guarantees are made regarding
in-place resizing, but the current implementation also tries to resize large
and huge allocations in place, as long as the pre-size and post-size are
both large or both huge. In such cases shrinkage always succeeds for large
size classes, but for huge size classes the chunk allocator must support
splitting (see <a class="link" href="#arena.i.chunk_hooks">
"<code class="mallctl">arena.&lt;i&gt;.chunk_hooks</code>"
</a>).
Growth only succeeds if the trailing memory is currently available, and
additionally for huge size classes the chunk allocator must support
merging.</p><p>Assuming 2 MiB chunks, 4 KiB pages, and a 16-byte quantum on a
64-bit system, the size classes in each category are as shown in <a class="xref" href="#size_classes" title="Table1.Size classes">Table 1</a>.</p><div class="table"><a name="size_classes"></a><p class="title"><b>Table1.Size classes</b></p><div class="table-contents"><table summary="Size classes" border="1"><colgroup><col align="left" class="c1"><col align="right" class="c2"><col align="left" class="c3"></colgroup><thead><tr><th align="left">Category</th><th align="right">Spacing</th><th align="left">Size</th></tr></thead><tbody><tr><td rowspan="9" align="left">Small</td><td align="right">lg</td><td align="left">[8]</td></tr><tr><td align="right">16</td><td align="left">[16, 32, 48, 64, 80, 96, 112, 128]</td></tr><tr><td align="right">32</td><td align="left">[160, 192, 224, 256]</td></tr><tr><td align="right">64</td><td align="left">[320, 384, 448, 512]</td></tr><tr><td align="right">128</td><td align="left">[640, 768, 896, 1024]</td></tr><tr><td align="right">256</td><td align="left">[1280, 1536, 1792, 2048]</td></tr><tr><td align="right">512</td><td align="left">[2560, 3072, 3584, 4096]</td></tr><tr><td align="right">1 KiB</td><td align="left">[5 KiB, 6 KiB, 7 KiB, 8 KiB]</td></tr><tr><td align="right">2 KiB</td><td align="left">[10 KiB, 12 KiB, 14 KiB]</td></tr><tr><td rowspan="8" align="left">Large</td><td align="right">2 KiB</td><td align="left">[16 KiB]</td></tr><tr><td align="right">4 KiB</td><td align="left">[20 KiB, 24 KiB, 28 KiB, 32 KiB]</td></tr><tr><td align="right">8 KiB</td><td align="left">[40 KiB, 48 KiB, 54 KiB, 64 KiB]</td></tr><tr><td align="right">16 KiB</td><td align="left">[80 KiB, 96 KiB, 112 KiB, 128 KiB]</td></tr><tr><td align="right">32 KiB</td><td align="left">[160 KiB, 192 KiB, 224 KiB, 256 KiB]</td></tr><tr><td align="right">64 KiB</td><td align="left">[320 KiB, 384 KiB, 448 KiB, 512 KiB]</td></tr><tr><td align="right">128 KiB</td><td align="left">[640 KiB, 768 KiB, 896 KiB, 1 MiB]</td></tr><tr><td align="right">256 KiB</td><td align="left">[1280 KiB, 1536 KiB, 1792 KiB]</td></tr><tr><td rowspan="7" align="left">Huge</td><td align="right">256 KiB</td><td align="left">[2 MiB]</td></tr><tr><td align="right">512 KiB</td><td align="left">[2560 KiB, 3 MiB, 3584 KiB, 4 MiB]</td></tr><tr><td align="right">1 MiB</td><td align="left">[5 MiB, 6 MiB, 7 MiB, 8 MiB]</td></tr><tr><td align="right">2 MiB</td><td align="left">[10 MiB, 12 MiB, 14 MiB, 16 MiB]</td></tr><tr><td align="right">4 MiB</td><td align="left">[20 MiB, 24 MiB, 28 MiB, 32 MiB]</td></tr><tr><td align="right">8 MiB</td><td align="left">[40 MiB, 48 MiB, 56 MiB, 64 MiB]</td></tr><tr><td align="right">...</td><td align="left">...</td></tr></tbody></table></div></div><br class="table-break"></div><div class="refsect1"><a name="mallctl_namespace"></a><h2>MALLCTL NAMESPACE</h2><p>The following names are defined in the namespace accessible via the
<code class="function">mallctl*</code>(<em class="parameter"><code></code></em>) functions. Value types are <code class="function">mallctl*</code>(<em class="parameter"><code></code></em>) functions. Value types are
specified in parentheses, their readable/writable statuses are encoded as specified in parentheses, their readable/writable statuses are encoded as
<code class="literal">rw</code>, <code class="literal">r-</code>, <code class="literal">-w</code>, or <code class="literal">rw</code>, <code class="literal">r-</code>, <code class="literal">-w</code>, or
@@ -355,20 +328,20 @@ for (i = 0; i &lt; nbins; i++) {
</span></dt><dd><p>If a value is passed in, refresh the data from which
the <code class="function">mallctl*</code>(<em class="parameter"><code></code></em>) functions report values,
and increment the epoch. Return the current epoch. This is useful for
detecting whether another thread caused a refresh.</p></dd><dt><a name="config.debug"></a><span class="term"> detecting whether another thread caused a refresh.</p></dd><dt><a name="config.cache_oblivious"></a><span class="term">
"<code class="mallctl">config.debug</code>" "<code class="mallctl">config.cache_oblivious</code>"
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p><code class="option">--enable-debug</code> was specified during </span></dt><dd><p><code class="option">--enable-cache-oblivious</code> was specified
build configuration.</p></dd><dt><a name="config.dss"></a><span class="term"> during build configuration.</p></dd><dt><a name="config.debug"></a><span class="term">
"<code class="mallctl">config.dss</code>" "<code class="mallctl">config.debug</code>"
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p><code class="option">--enable-dss</code> was specified during </span></dt><dd><p><code class="option">--enable-debug</code> was specified during
build configuration.</p></dd><dt><a name="config.fill"></a><span class="term"> build configuration.</p></dd><dt><a name="config.fill"></a><span class="term">
"<code class="mallctl">config.fill</code>" "<code class="mallctl">config.fill</code>"
@@ -383,14 +356,7 @@ for (i = 0; i &lt; nbins; i++) {
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p><code class="option">--enable-lazy-lock</code> was specified </span></dt><dd><p><code class="option">--enable-lazy-lock</code> was specified
during build configuration.</p></dd><dt><a name="config.mremap"></a><span class="term"> during build configuration.</p></dd><dt><a name="config.munmap"></a><span class="term">
"<code class="mallctl">config.mremap</code>"
(<span class="type">bool</span>)
<code class="literal">r-</code>
</span></dt><dd><p><code class="option">--enable-mremap</code> was specified during
build configuration.</p></dd><dt><a name="config.munmap"></a><span class="term">
"<code class="mallctl">config.munmap</code>" "<code class="mallctl">config.munmap</code>"
...@@ -479,12 +445,13 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -479,12 +445,13 @@ for (i = 0; i &lt; nbins; i++) {
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p>dss (<span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span>) allocation precedence as </span></dt><dd><p>dss (<span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span>) allocation precedence as
related to <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> allocation. The following related to <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> allocation. The following
settings are supported: &#8220;disabled&#8221;, &#8220;primary&#8221;, settings are supported if
and &#8220;secondary&#8221;. The default is &#8220;secondary&#8221; if <span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span> is supported by the operating
<a class="link" href="#config.dss"> system: &#8220;disabled&#8221;, &#8220;primary&#8221;, and
"<code class="mallctl">config.dss</code>" &#8220;secondary&#8221;; otherwise only &#8220;disabled&#8221; is
</a> is supported. The default is &#8220;secondary&#8221; if
true, &#8220;disabled&#8221; otherwise. <span class="citerefentry"><span class="refentrytitle">sbrk</span>(2)</span> is supported by the operating
system; &#8220;disabled&#8221; otherwise.
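A minimal sketch (assuming <code class="filename">&lt;jemalloc/jemalloc.h&gt;</code> has been
included and error handling is omitted) of reading this option via the
<code class="function">mallctl</code>(<em class="parameter"><code></code></em>) interface:</p><pre class="programlisting">
const char *dss;
size_t sz = sizeof(dss);
/* Read the dss precedence that was selected at startup. */
mallctl("opt.dss", &amp;dss, &amp;sz, NULL, 0);</pre><p>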
</p></dd><dt><a name="opt.lg_chunk"></a><span class="term"> </p></dd><dt><a name="opt.lg_chunk"></a><span class="term">
"<code class="mallctl">opt.lg_chunk</code>" "<code class="mallctl">opt.lg_chunk</code>"
...@@ -494,7 +461,7 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -494,7 +461,7 @@ for (i = 0; i &lt; nbins; i++) {
</span></dt><dd><p>Virtual memory chunk size (log base 2). If a chunk </span></dt><dd><p>Virtual memory chunk size (log base 2). If a chunk
size outside the supported size range is specified, the size is size outside the supported size range is specified, the size is
silently clipped to the minimum/maximum supported size. The default silently clipped to the minimum/maximum supported size. The default
chunk size is 4 MiB (2^22). chunk size is 2 MiB (2^21).
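For example, a hypothetical tuning that raises the chunk size to 16 MiB
(2^24) can be requested by embedding the following in the application
source:</p><pre class="programlisting">
malloc_conf = "lg_chunk:24";</pre><p>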
</p></dd><dt><a name="opt.narenas"></a><span class="term"> </p></dd><dt><a name="opt.narenas"></a><span class="term">
"<code class="mallctl">opt.narenas</code>" "<code class="mallctl">opt.narenas</code>"
...@@ -517,7 +484,13 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -517,7 +484,13 @@ for (i = 0; i &lt; nbins; i++) {
provides the kernel with sufficient information to recycle dirty pages provides the kernel with sufficient information to recycle dirty pages
if physical memory becomes scarce and the pages remain unused. The if physical memory becomes scarce and the pages remain unused. The
default minimum ratio is 8:1 (2^3:1); an option value of -1 will default minimum ratio is 8:1 (2^3:1); an option value of -1 will
disable dirty page purging.</p></dd><dt><a name="opt.stats_print"></a><span class="term"> disable dirty page purging. See <a class="link" href="#arenas.lg_dirty_mult">
"<code class="mallctl">arenas.lg_dirty_mult</code>"
</a>
and <a class="link" href="#arena.i.lg_dirty_mult">
"<code class="mallctl">arena.&lt;i&gt;.lg_dirty_mult</code>"
</a>
for related dynamic control options.</p></dd><dt><a name="opt.stats_print"></a><span class="term">
"<code class="mallctl">opt.stats_print</code>" "<code class="mallctl">opt.stats_print</code>"
...@@ -530,23 +503,31 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -530,23 +503,31 @@ for (i = 0; i &lt; nbins; i++) {
<code class="option">--enable-stats</code> is specified during configuration, this <code class="option">--enable-stats</code> is specified during configuration, this
has the potential to cause deadlock for a multi-threaded process that has the potential to cause deadlock for a multi-threaded process that
exits while one or more threads are executing in the memory allocation exits while one or more threads are executing in the memory allocation
functions. Therefore, this option should only be used with care; it is functions. Furthermore, <code class="function">atexit</code>(<em class="parameter"><code></code></em>) may
primarily intended as a performance tuning aid during application allocate memory during application initialization and then deadlock
internally when jemalloc in turn calls
<code class="function">atexit</code>(<em class="parameter"><code></code></em>), so this option is not
universally usable (though the application can register its own
<code class="function">atexit</code>(<em class="parameter"><code></code></em>) function with equivalent
functionality). Therefore, this option should only be used with care;
it is primarily intended as a performance tuning aid during application
development. This option is disabled by default.</p></dd><dt><a name="opt.junk"></a><span class="term"> development. This option is disabled by default.</p></dd><dt><a name="opt.junk"></a><span class="term">
"<code class="mallctl">opt.junk</code>" "<code class="mallctl">opt.junk</code>"
(<span class="type">bool</span>) (<span class="type">const char *</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-fill</code>] [<code class="option">--enable-fill</code>]
</span></dt><dd><p>Junk filling enabled/disabled. If enabled, each byte </span></dt><dd><p>Junk filling. If set to "alloc", each byte of
of uninitialized allocated memory will be initialized to uninitialized allocated memory will be initialized to
<code class="literal">0xa5</code>. All deallocated memory will be initialized to <code class="literal">0xa5</code>. If set to "free", all deallocated memory will
<code class="literal">0x5a</code>. This is intended for debugging and will be initialized to <code class="literal">0x5a</code>. If set to "true", both
impact performance negatively. This option is disabled by default allocated and deallocated memory will be initialized, and if set to
unless <code class="option">--enable-debug</code> is specified during "false", junk filling will be disabled entirely. This is intended for
configuration, in which case it is enabled by default unless running debugging and will impact performance negatively. This option is
inside <a class="ulink" href="http://valgrind.org/" target="_top">Valgrind</a>.</p></dd><dt><a name="opt.quarantine"></a><span class="term"> "false" by default unless <code class="option">--enable-debug</code> is specified
during configuration, in which case it is "true" by default unless
running inside <a class="ulink" href="http://valgrind.org/" target="_top">Valgrind</a>.</p></dd><dt><a name="opt.quarantine"></a><span class="term">
"<code class="mallctl">opt.quarantine</code>" "<code class="mallctl">opt.quarantine</code>"
...@@ -592,9 +573,8 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -592,9 +573,8 @@ for (i = 0; i &lt; nbins; i++) {
</span></dt><dd><p>Zero filling enabled/disabled. If enabled, each byte </span></dt><dd><p>Zero filling enabled/disabled. If enabled, each byte
of uninitialized allocated memory will be initialized to 0. Note that of uninitialized allocated memory will be initialized to 0. Note that
this initialization only happens once for each byte, so this initialization only happens once for each byte, so
<code class="function">realloc</code>(<em class="parameter"><code></code></em>), <code class="function">realloc</code>(<em class="parameter"><code></code></em>) and
<code class="function">rallocx</code>(<em class="parameter"><code></code></em>) and <code class="function">rallocx</code>(<em class="parameter"><code></code></em>) calls do not zero memory that
<code class="function">rallocm</code>(<em class="parameter"><code></code></em>) calls do not zero memory that
was previously allocated. This is intended for debugging and will was previously allocated. This is intended for debugging and will
impact performance negatively. This option is disabled by default. impact performance negatively. This option is disabled by default.
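As a sketch, zero filling can be combined with the junk-on-free setting
described above by listing both options in
<code class="varname">malloc_conf</code>:</p><pre class="programlisting">
malloc_conf = "junk:free,zero:true";</pre><p>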
</p></dd><dt><a name="opt.utrace"></a><span class="term"> </p></dd><dt><a name="opt.utrace"></a><span class="term">
...@@ -606,17 +586,7 @@ for (i = 0; i &lt; nbins; i++) { ...@@ -606,17 +586,7 @@ for (i = 0; i &lt; nbins; i++) {
[<code class="option">--enable-utrace</code>] [<code class="option">--enable-utrace</code>]
</span></dt><dd><p>Allocation tracing based on </span></dt><dd><p>Allocation tracing based on
<span class="citerefentry"><span class="refentrytitle">utrace</span>(2)</span> enabled/disabled. This option <span class="citerefentry"><span class="refentrytitle">utrace</span>(2)</span> enabled/disabled. This option
is disabled by default.</p></dd><dt><a name="opt.valgrind"></a><span class="term"> is disabled by default.</p></dd><dt><a name="opt.xmalloc"></a><span class="term">
"<code class="mallctl">opt.valgrind</code>"
(<span class="type">bool</span>)
<code class="literal">r-</code>
[<code class="option">--enable-valgrind</code>]
</span></dt><dd><p><a class="ulink" href="http://valgrind.org/" target="_top">Valgrind</a>
support enabled/disabled. This option is vestigal because jemalloc
auto-detects whether it is running inside Valgrind. This option is
disabled by default, unless running inside Valgrind.</p></dd><dt><a name="opt.xmalloc"></a><span class="term">
"<code class="mallctl">opt.xmalloc</code>" "<code class="mallctl">opt.xmalloc</code>"
...@@ -639,16 +609,16 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -639,16 +609,16 @@ malloc_conf = "xmalloc:true";</pre><p>
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-tcache</code>] [<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Thread-specific caching enabled/disabled. When there </span></dt><dd><p>Thread-specific caching (tcache) enabled/disabled. When
are multiple threads, each thread uses a thread-specific cache for there are multiple threads, each thread uses a tcache for objects up to
objects up to a certain size. Thread-specific caching allows many a certain size. Thread-specific caching allows many allocations to be
allocations to be satisfied without performing any thread satisfied without performing any thread synchronization, at the cost of
synchronization, at the cost of increased memory use. See the increased memory use. See the <a class="link" href="#opt.lg_tcache_max">
<a class="link" href="#opt.lg_tcache_max">
"<code class="mallctl">opt.lg_tcache_max</code>" "<code class="mallctl">opt.lg_tcache_max</code>"
</a> </a>
option for related tuning information. This option is enabled by option for related tuning information. This option is enabled by
default unless running inside <a class="ulink" href="http://valgrind.org/" target="_top">Valgrind</a>.</p></dd><dt><a name="opt.lg_tcache_max"></a><span class="term"> default unless running inside <a class="ulink" href="http://valgrind.org/" target="_top">Valgrind</a>, in which case it is
forcefully disabled.</p></dd><dt><a name="opt.lg_tcache_max"></a><span class="term">
"<code class="mallctl">opt.lg_tcache_max</code>" "<code class="mallctl">opt.lg_tcache_max</code>"
...@@ -656,8 +626,8 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -656,8 +626,8 @@ malloc_conf = "xmalloc:true";</pre><p>
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-tcache</code>] [<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Maximum size class (log base 2) to cache in the </span></dt><dd><p>Maximum size class (log base 2) to cache in the
thread-specific cache. At a minimum, all small size classes are thread-specific cache (tcache). At a minimum, all small size classes
cached, and at a maximum all large size classes are cached. The are cached, and at a maximum all large size classes are cached. The
default maximum is 32 KiB (2^15).</p></dd><dt><a name="opt.prof"></a><span class="term"> default maximum is 32 KiB (2^15).</p></dd><dt><a name="opt.prof"></a><span class="term">
"<code class="mallctl">opt.prof</code>" "<code class="mallctl">opt.prof</code>"
...@@ -686,8 +656,8 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -686,8 +656,8 @@ malloc_conf = "xmalloc:true";</pre><p>
"<code class="mallctl">opt.prof_final</code>" "<code class="mallctl">opt.prof_final</code>"
</a> </a>
option for final profile dumping. Profile output is compatible with option for final profile dumping. Profile output is compatible with
the included <span class="command"><strong>pprof</strong></span> Perl script, which originates the <span class="command"><strong>jeprof</strong></span> command, which is based on the
from the <a class="ulink" href="http://code.google.com/p/gperftools/" target="_top">gperftools <span class="command"><strong>pprof</strong></span> that is developed as part of the <a class="ulink" href="http://code.google.com/p/gperftools/" target="_top">gperftools
package</a>.</p></dd><dt><a name="opt.prof_prefix"></a><span class="term"> package</a>.</p></dd><dt><a name="opt.prof_prefix"></a><span class="term">
"<code class="mallctl">opt.prof_prefix</code>" "<code class="mallctl">opt.prof_prefix</code>"
...@@ -704,7 +674,7 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -704,7 +674,7 @@ malloc_conf = "xmalloc:true";</pre><p>
"<code class="mallctl">opt.prof_active</code>" "<code class="mallctl">opt.prof_active</code>"
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">rw</code> <code class="literal">r-</code>
[<code class="option">--enable-prof</code>] [<code class="option">--enable-prof</code>]
</span></dt><dd><p>Profiling activated/deactivated. This is a secondary </span></dt><dd><p>Profiling activated/deactivated. This is a secondary
control mechanism that makes it possible to start the application with control mechanism that makes it possible to start the application with
...@@ -715,11 +685,25 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -715,11 +685,25 @@ malloc_conf = "xmalloc:true";</pre><p>
with the <a class="link" href="#prof.active"> with the <a class="link" href="#prof.active">
"<code class="mallctl">prof.active</code>" "<code class="mallctl">prof.active</code>"
</a> mallctl. </a> mallctl.
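A sketch of such runtime toggling via
<code class="function">mallctl</code>(<em class="parameter"><code></code></em>):</p><pre class="programlisting">
bool active = false;
/* Deactivate profiling; write a bool to "prof.active". */
mallctl("prof.active", NULL, NULL, &amp;active, sizeof(active));</pre><p>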
This option is enabled by default.</p></dd><dt><a name="opt.lg_prof_sample"></a><span class="term"> This option is enabled by default.</p></dd><dt><a name="opt.prof_thread_active_init"></a><span class="term">
"<code class="mallctl">opt.prof_thread_active_init</code>"
(<span class="type">bool</span>)
<code class="literal">r-</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Initial setting for <a class="link" href="#thread.prof.active">
"<code class="mallctl">thread.prof.active</code>"
</a>
in newly created threads. The initial setting for newly created threads
can also be changed during execution via the <a class="link" href="#prof.thread_active_init">
"<code class="mallctl">prof.thread_active_init</code>"
</a>
mallctl. This option is enabled by default.</p></dd><dt><a name="opt.lg_prof_sample"></a><span class="term">
"<code class="mallctl">opt.lg_prof_sample</code>" "<code class="mallctl">opt.lg_prof_sample</code>"
(<span class="type">ssize_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-prof</code>] [<code class="option">--enable-prof</code>]
</span></dt><dd><p>Average interval (log base 2) between allocation </span></dt><dd><p>Average interval (log base 2) between allocation
...@@ -764,14 +748,12 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -764,14 +748,12 @@ malloc_conf = "xmalloc:true";</pre><p>
(<span class="type">bool</span>) (<span class="type">bool</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-prof</code>] [<code class="option">--enable-prof</code>]
</span></dt><dd><p>Trigger a memory profile dump every time the total </span></dt><dd><p>Set the initial state of <a class="link" href="#prof.gdump">
virtual memory exceeds the previous maximum. Profiles are dumped to "<code class="mallctl">prof.gdump</code>"
files named according to the pattern </a>, which when
<code class="filename">&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.u&lt;useq&gt;.heap</code>, enabled triggers a memory profile dump every time the total virtual
where <code class="literal">&lt;prefix&gt;</code> is controlled by the <a class="link" href="#opt.prof_prefix"> memory exceeds the previous maximum. This option is disabled by
"<code class="mallctl">opt.prof_prefix</code>" default.</p></dd><dt><a name="opt.prof_final"></a><span class="term">
</a>
option. This option is disabled by default.</p></dd><dt><a name="opt.prof_final"></a><span class="term">
"<code class="mallctl">opt.prof_final</code>" "<code class="mallctl">opt.prof_final</code>"
...@@ -785,7 +767,13 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -785,7 +767,13 @@ malloc_conf = "xmalloc:true";</pre><p>
where <code class="literal">&lt;prefix&gt;</code> is controlled by the <a class="link" href="#opt.prof_prefix"> where <code class="literal">&lt;prefix&gt;</code> is controlled by the <a class="link" href="#opt.prof_prefix">
"<code class="mallctl">opt.prof_prefix</code>" "<code class="mallctl">opt.prof_prefix</code>"
</a> </a>
option. This option is enabled by default.</p></dd><dt><a name="opt.prof_leak"></a><span class="term"> option. Note that <code class="function">atexit</code>(<em class="parameter"><code></code></em>) may allocate
memory during application initialization and then deadlock internally
when jemalloc in turn calls <code class="function">atexit</code>(<em class="parameter"><code></code></em>), so
this option is not universally usable (though the application can
register its own <code class="function">atexit</code>(<em class="parameter"><code></code></em>) function with
equivalent functionality). This option is disabled by
default.</p></dd><dt><a name="opt.prof_leak"></a><span class="term">
"<code class="mallctl">opt.prof_leak</code>" "<code class="mallctl">opt.prof_leak</code>"
...@@ -864,9 +852,9 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -864,9 +852,9 @@ malloc_conf = "xmalloc:true";</pre><p>
[<code class="option">--enable-tcache</code>] [<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Enable/disable calling thread's tcache. The tcache is </span></dt><dd><p>Enable/disable calling thread's tcache. The tcache is
implicitly flushed as a side effect of becoming implicitly flushed as a side effect of becoming
disabled (see disabled (see <a class="link" href="#thread.tcache.flush">
"<code class="mallctl">thread.tcache.flush</code>" "<code class="mallctl">thread.tcache.flush</code>"
). </a>).
</p></dd><dt><a name="thread.tcache.flush"></a><span class="term"> </p></dd><dt><a name="thread.tcache.flush"></a><span class="term">
"<code class="mallctl">thread.tcache.flush</code>" "<code class="mallctl">thread.tcache.flush</code>"
...@@ -874,19 +862,84 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -874,19 +862,84 @@ malloc_conf = "xmalloc:true";</pre><p>
(<span class="type">void</span>) (<span class="type">void</span>)
<code class="literal">--</code> <code class="literal">--</code>
[<code class="option">--enable-tcache</code>] [<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Flush calling thread's tcache. This interface releases </span></dt><dd><p>Flush calling thread's thread-specific cache (tcache).
all cached objects and internal data structures associated with the This interface releases all cached objects and internal data structures
calling thread's thread-specific cache. Ordinarily, this interface associated with the calling thread's tcache. Ordinarily, this interface
need not be called, since automatic periodic incremental garbage need not be called, since automatic periodic incremental garbage
collection occurs, and the thread cache is automatically discarded when collection occurs, and the thread cache is automatically discarded when
a thread exits. However, garbage collection is triggered by allocation a thread exits. However, garbage collection is triggered by allocation
activity, so it is possible for a thread that stops activity, so it is possible for a thread that stops
allocating/deallocating to retain its cache indefinitely, in which case allocating/deallocating to retain its cache indefinitely, in which case
the developer may find manual flushing useful.</p></dd><dt><a name="arena.i.purge"></a><span class="term"> the developer may find manual flushing useful.</p></dd><dt><a name="thread.prof.name"></a><span class="term">
"<code class="mallctl">arena.&lt;i&gt;.purge</code>" "<code class="mallctl">thread.prof.name</code>"
(<span class="type">const char *</span>)
<code class="literal">r-</code> or
<code class="literal">-w</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Get/set the descriptive name associated with the calling
thread in memory profile dumps. An internal copy of the name string is
created, so the input string need not be maintained after this interface
completes execution. The output string of this interface should be
copied for non-ephemeral uses, because multiple implementation details
can cause asynchronous string deallocation. Furthermore, each
invocation of this interface can only read or write; simultaneous
read/write is not supported due to string lifetime limitations. The
name string must be nil-terminated and comprised only of characters in the
sets recognized
by <span class="citerefentry"><span class="refentrytitle">isgraph</span>(3)</span> and
<span class="citerefentry"><span class="refentrytitle">isblank</span>(3)</span>.</p></dd><dt><a name="thread.prof.active"></a><span class="term">
"<code class="mallctl">thread.prof.active</code>"
(<span class="type">bool</span>)
<code class="literal">rw</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Control whether sampling is currently active for the
calling thread. This is an activation mechanism in addition to <a class="link" href="#prof.active">
"<code class="mallctl">prof.active</code>"
</a>; both must
be active for the calling thread to sample. This flag is enabled by
default.</p></dd><dt><a name="tcache.create"></a><span class="term">
"<code class="mallctl">tcache.create</code>"
(<span class="type">unsigned</span>)
<code class="literal">r-</code>
[<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Create an explicit thread-specific cache (tcache) and
return an identifier that can be passed to the <a class="link" href="#MALLOCX_TCACHE"><code class="constant">MALLOCX_TCACHE(<em class="parameter"><code>tc</code></em>)</code></a>
macro to explicitly use the specified cache rather than the
automatically managed one that is used by default. Each explicit cache
can be used by only one thread at a time; the application must assure
that this constraint holds.
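A minimal sketch (error handling omitted) of creating, using, and
destroying an explicit tcache:</p><pre class="programlisting">
unsigned tc;
size_t sz = sizeof(tc);
/* Create an explicit tcache and obtain its identifier. */
mallctl("tcache.create", &amp;tc, &amp;sz, NULL, 0);
/* Allocate/deallocate through the explicit tcache. */
void *p = mallocx(42, MALLOCX_TCACHE(tc));
dallocx(p, MALLOCX_TCACHE(tc));
/* Make the identifier available for future tcache creation. */
mallctl("tcache.destroy", NULL, NULL, &amp;tc, sizeof(tc));</pre><p>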
</p></dd><dt><a name="tcache.flush"></a><span class="term">
"<code class="mallctl">tcache.flush</code>"
(<span class="type">unsigned</span>)
<code class="literal">-w</code>
[<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Flush the specified thread-specific cache (tcache). The
same considerations apply to this interface as to <a class="link" href="#thread.tcache.flush">
"<code class="mallctl">thread.tcache.flush</code>"
</a>,
except that the tcache will never be automatically discarded.
</p></dd><dt><a name="tcache.destroy"></a><span class="term">
"<code class="mallctl">tcache.destroy</code>"
(<span class="type">unsigned</span>) (<span class="type">unsigned</span>)
<code class="literal">-w</code>
[<code class="option">--enable-tcache</code>]
</span></dt><dd><p>Flush the specified thread-specific cache (tcache) and
make the identifier available for use during a future tcache creation.
</p></dd><dt><a name="arena.i.purge"></a><span class="term">
"<code class="mallctl">arena.&lt;i&gt;.purge</code>"
(<span class="type">void</span>)
<code class="literal">--</code> <code class="literal">--</code>
</span></dt><dd><p>Purge unused dirty pages for arena &lt;i&gt;, or for </span></dt><dd><p>Purge unused dirty pages for arena &lt;i&gt;, or for
all arenas if &lt;i&gt; equals <a class="link" href="#arenas.narenas"> all arenas if &lt;i&gt; equals <a class="link" href="#arenas.narenas">
...@@ -902,15 +955,138 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -902,15 +955,138 @@ malloc_conf = "xmalloc:true";</pre><p>
allocation for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals allocation for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals
<a class="link" href="#arenas.narenas"> <a class="link" href="#arenas.narenas">
"<code class="mallctl">arenas.narenas</code>" "<code class="mallctl">arenas.narenas</code>"
</a>. Note </a>. See
that even during huge allocation this setting is read from the arena <a class="link" href="#opt.dss">
that would be chosen for small or large allocation so that applications
can depend on consistent dss versus mmap allocation regardless of
allocation size. See <a class="link" href="#opt.dss">
"<code class="mallctl">opt.dss</code>" "<code class="mallctl">opt.dss</code>"
</a> for supported </a> for supported
settings. settings.</p></dd><dt><a name="arena.i.lg_dirty_mult"></a><span class="term">
</p></dd><dt><a name="arenas.narenas"></a><span class="term">
"<code class="mallctl">arena.&lt;i&gt;.lg_dirty_mult</code>"
(<span class="type">ssize_t</span>)
<code class="literal">rw</code>
</span></dt><dd><p>Current per-arena minimum ratio (log base 2) of active
to dirty pages for arena &lt;i&gt;. Each time this interface is set and
the ratio is increased, pages are synchronously purged as necessary to
impose the new ratio. See <a class="link" href="#opt.lg_dirty_mult">
"<code class="mallctl">opt.lg_dirty_mult</code>"
</a>
for additional information.</p></dd><dt><a name="arena.i.chunk_hooks"></a><span class="term">
"<code class="mallctl">arena.&lt;i&gt;.chunk_hooks</code>"
(<span class="type">chunk_hooks_t</span>)
<code class="literal">rw</code>
</span></dt><dd><p>Get or set the chunk management hook functions for arena
&lt;i&gt;. The functions must be capable of operating on all extant
chunks associated with arena &lt;i&gt;, usually by passing unknown
chunks to the replaced functions. In practice, it is feasible to
control allocation for arenas created via <a class="link" href="#arenas.extend">
"<code class="mallctl">arenas.extend</code>"
</a> such
that all chunks originate from an application-supplied chunk allocator
(by setting custom chunk hook functions just after arena creation), but
the automatically created arenas may have already created chunks prior
to the application having an opportunity to take over chunk
allocation.</p><pre class="programlisting">
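/*
 * Hypothetical usage sketch (the my_* hooks are user-defined functions
 * with the signatures described below): install custom hooks for an
 * arena by writing a chunk_hooks_t to "arena.&lt;i&gt;.chunk_hooks".
 *
 *   chunk_hooks_t hooks = {my_alloc, my_dalloc, my_commit,
 *       my_decommit, my_purge, my_split, my_merge};
 *   mallctl("arena.1.chunk_hooks", NULL, NULL, &amp;hooks, sizeof(hooks));
 */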
typedef struct {
chunk_alloc_t *alloc;
chunk_dalloc_t *dalloc;
chunk_commit_t *commit;
chunk_decommit_t *decommit;
chunk_purge_t *purge;
chunk_split_t *split;
chunk_merge_t *merge;
} chunk_hooks_t;</pre><p>The <span class="type">chunk_hooks_t</span> structure comprises function
pointers which are described individually below. jemalloc uses these
functions to manage chunk lifetime, which starts off with allocation of
mapped committed memory, in the simplest case followed by deallocation.
However, there are performance and platform reasons to retain chunks for
later reuse. Cleanup attempts cascade from deallocation to decommit to
purging, which gives the chunk management functions opportunities to
reject the most permanent cleanup operations in favor of less permanent
(and often less costly) operations. The chunk splitting and merging
operations can also be opted out of, but this is mainly intended to
support platforms on which virtual memory mappings provided by the
operating system kernel do not automatically coalesce and split, e.g.
Windows.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef void *<b class="fsfunc">(chunk_alloc_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">alignment</var>, </td></tr><tr><td></td><td>bool *<var class="pdparam">zero</var>, </td></tr><tr><td></td><td>bool *<var class="pdparam">commit</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk allocation function conforms to the
<span class="type">chunk_alloc_t</span> type and upon success returns a pointer to
<em class="parameter"><code>size</code></em> bytes of mapped memory on behalf of arena
<em class="parameter"><code>arena_ind</code></em> such that the chunk's base address is a
multiple of <em class="parameter"><code>alignment</code></em>, as well as setting
<em class="parameter"><code>*zero</code></em> to indicate whether the chunk is zeroed and
<em class="parameter"><code>*commit</code></em> to indicate whether the chunk is
committed. Upon error the function returns <code class="constant">NULL</code>
and leaves <em class="parameter"><code>*zero</code></em> and
<em class="parameter"><code>*commit</code></em> unmodified. The
<em class="parameter"><code>size</code></em> parameter is always a multiple of the chunk
size. The <em class="parameter"><code>alignment</code></em> parameter is always a power
of two at least as large as the chunk size. Zeroing is mandatory if
<em class="parameter"><code>*zero</code></em> is true upon function entry. Committing is
mandatory if <em class="parameter"><code>*commit</code></em> is true upon function entry.
If <em class="parameter"><code>chunk</code></em> is not <code class="constant">NULL</code>, the
returned pointer must be <em class="parameter"><code>chunk</code></em> on success or
<code class="constant">NULL</code> on error. Committed memory may be committed
in absolute terms as on a system that does not overcommit, or in
implicit terms as on a system that overcommits and satisfies physical
memory needs on demand via soft page faults. Note that replacing the
default chunk allocation function makes the arena's <a class="link" href="#arena.i.dss">
"<code class="mallctl">arena.&lt;i&gt;.dss</code>"
</a>
setting irrelevant.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_dalloc_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>bool <var class="pdparam">committed</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>
A chunk deallocation function conforms to the
<span class="type">chunk_dalloc_t</span> type and deallocates a
<em class="parameter"><code>chunk</code></em> of given <em class="parameter"><code>size</code></em> with
<em class="parameter"><code>committed</code></em>/decommited memory as indicated, on
behalf of arena <em class="parameter"><code>arena_ind</code></em>, returning false upon
success. If the function returns true, this indicates opt-out from
deallocation; the virtual memory mapping associated with the chunk
remains mapped, in the same commit state, and available for future use,
in which case it will be automatically retained for later reuse.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_commit_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">offset</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">length</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk commit function conforms to the
<span class="type">chunk_commit_t</span> type and commits zeroed physical memory to
back pages within a <em class="parameter"><code>chunk</code></em> of given
<em class="parameter"><code>size</code></em> at <em class="parameter"><code>offset</code></em> bytes,
extending for <em class="parameter"><code>length</code></em> on behalf of arena
<em class="parameter"><code>arena_ind</code></em>, returning false upon success.
Committed memory may be committed in absolute terms as on a system that
does not overcommit, or in implicit terms as on a system that
overcommits and satisfies physical memory needs on demand via soft page
faults. If the function returns true, this indicates insufficient
physical memory to satisfy the request.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_decommit_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">offset</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">length</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk decommit function conforms to the
<span class="type">chunk_decommit_t</span> type and decommits any physical memory
that is backing pages within a <em class="parameter"><code>chunk</code></em> of given
<em class="parameter"><code>size</code></em> at <em class="parameter"><code>offset</code></em> bytes,
extending for <em class="parameter"><code>length</code></em> on behalf of arena
<em class="parameter"><code>arena_ind</code></em>, returning false upon success, in which
case the pages will be committed via the chunk commit function before
being reused. If the function returns true, this indicates opt-out from
decommit; the memory remains committed and available for future use, in
which case it will be automatically retained for later reuse.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_purge_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">offset</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">length</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk purge function conforms to the <span class="type">chunk_purge_t</span>
type and optionally discards physical pages within the virtual memory
mapping associated with <em class="parameter"><code>chunk</code></em> of given
<em class="parameter"><code>size</code></em> at <em class="parameter"><code>offset</code></em> bytes,
extending for <em class="parameter"><code>length</code></em> on behalf of arena
<em class="parameter"><code>arena_ind</code></em>, returning false if pages within the
purged virtual memory range will be zero-filled the next time they are
accessed.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_split_t)</b>(</code></td><td>void *<var class="pdparam">chunk</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size_a</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size_b</var>, </td></tr><tr><td></td><td>bool <var class="pdparam">committed</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk split function conforms to the <span class="type">chunk_split_t</span>
type and optionally splits <em class="parameter"><code>chunk</code></em> of given
<em class="parameter"><code>size</code></em> into two adjacent chunks, the first of
<em class="parameter"><code>size_a</code></em> bytes, and the second of
<em class="parameter"><code>size_b</code></em> bytes, operating on
<em class="parameter"><code>committed</code></em>/decommitted memory as indicated, on
behalf of arena <em class="parameter"><code>arena_ind</code></em>, returning false upon
success. If the function returns true, this indicates that the chunk
remains unsplit and therefore should continue to be operated on as a
whole.</p><div class="funcsynopsis"><table border="0" class="funcprototype-table" summary="Function synopsis" style="cellspacing: 0; cellpadding: 0;"><tr><td><code class="funcdef">typedef bool <b class="fsfunc">(chunk_merge_t)</b>(</code></td><td>void *<var class="pdparam">chunk_a</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size_a</var>, </td></tr><tr><td></td><td>void *<var class="pdparam">chunk_b</var>, </td></tr><tr><td></td><td>size_t <var class="pdparam">size_b</var>, </td></tr><tr><td></td><td>bool <var class="pdparam">committed</var>, </td></tr><tr><td></td><td>unsigned <var class="pdparam">arena_ind</var><code>)</code>;</td></tr></table><div class="funcprototype-spacer"></div></div><div class="literallayout"><p></p></div><p>A chunk merge function conforms to the <span class="type">chunk_merge_t</span>
type and optionally merges adjacent chunks,
<em class="parameter"><code>chunk_a</code></em> of given <em class="parameter"><code>size_a</code></em>
and <em class="parameter"><code>chunk_b</code></em> of given
<em class="parameter"><code>size_b</code></em> into one contiguous chunk, operating on
<em class="parameter"><code>committed</code></em>/decommitted memory as indicated, on
behalf of arena <em class="parameter"><code>arena_ind</code></em>, returning false upon
success. If the function returns true, this indicates that the chunks
remain distinct mappings and therefore should continue to be operated on
independently.</p></dd><dt><a name="arenas.narenas"></a><span class="term">
"<code class="mallctl">arenas.narenas</code>" "<code class="mallctl">arenas.narenas</code>"
...@@ -926,7 +1102,20 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -926,7 +1102,20 @@ malloc_conf = "xmalloc:true";</pre><p>
"<code class="mallctl">arenas.narenas</code>" "<code class="mallctl">arenas.narenas</code>"
</a> </a>
booleans. Each boolean indicates whether the corresponding arena is booleans. Each boolean indicates whether the corresponding arena is
initialized.</p></dd><dt><a name="arenas.quantum"></a><span class="term"> initialized.</p></dd><dt><a name="arenas.lg_dirty_mult"></a><span class="term">
"<code class="mallctl">arenas.lg_dirty_mult</code>"
(<span class="type">ssize_t</span>)
<code class="literal">rw</code>
</span></dt><dd><p>Current default per-arena minimum ratio (log base 2) of
active to dirty pages, used to initialize <a class="link" href="#arena.i.lg_dirty_mult">
"<code class="mallctl">arena.&lt;i&gt;.lg_dirty_mult</code>"
</a>
during arena creation. See <a class="link" href="#opt.lg_dirty_mult">
"<code class="mallctl">opt.lg_dirty_mult</code>"
</a>
for additional information.</p></dd><dt><a name="arenas.quantum"></a><span class="term">
"<code class="mallctl">arenas.quantum</code>" "<code class="mallctl">arenas.quantum</code>"
...@@ -981,7 +1170,7 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -981,7 +1170,7 @@ malloc_conf = "xmalloc:true";</pre><p>
"<code class="mallctl">arenas.nlruns</code>" "<code class="mallctl">arenas.nlruns</code>"
(<span class="type">size_t</span>) (<span class="type">unsigned</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p>Total number of large size classes.</p></dd><dt><a name="arenas.lrun.i.size"></a><span class="term"> </span></dt><dd><p>Total number of large size classes.</p></dd><dt><a name="arenas.lrun.i.size"></a><span class="term">
...@@ -990,21 +1179,40 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -990,21 +1179,40 @@ malloc_conf = "xmalloc:true";</pre><p>
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p>Maximum size supported by this large size </span></dt><dd><p>Maximum size supported by this large size
class.</p></dd><dt><a name="arenas.purge"></a><span class="term"> class.</p></dd><dt><a name="arenas.nhchunks"></a><span class="term">
"<code class="mallctl">arenas.purge</code>" "<code class="mallctl">arenas.nhchunks</code>"
(<span class="type">unsigned</span>) (<span class="type">unsigned</span>)
<code class="literal">-w</code> <code class="literal">r-</code>
</span></dt><dd><p>Purge unused dirty pages for the specified arena, or </span></dt><dd><p>Total number of huge size classes.</p></dd><dt><a name="arenas.hchunk.i.size"></a><span class="term">
for all arenas if none is specified.</p></dd><dt><a name="arenas.extend"></a><span class="term">
"<code class="mallctl">arenas.hchunk.&lt;i&gt;.size</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
</span></dt><dd><p>Maximum size supported by this huge size
class.</p></dd><dt><a name="arenas.extend"></a><span class="term">
"<code class="mallctl">arenas.extend</code>" "<code class="mallctl">arenas.extend</code>"
(<span class="type">unsigned</span>) (<span class="type">unsigned</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
</span></dt><dd><p>Extend the array of arenas by appending a new arena, </span></dt><dd><p>Extend the array of arenas by appending a new arena,
and returning the new arena index.</p></dd><dt><a name="prof.active"></a><span class="term"> and returning the new arena index.</p></dd><dt><a name="prof.thread_active_init"></a><span class="term">
"<code class="mallctl">prof.thread_active_init</code>"
(<span class="type">bool</span>)
<code class="literal">rw</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Control the initial setting for <a class="link" href="#thread.prof.active">
"<code class="mallctl">thread.prof.active</code>"
</a>
in newly created threads. See the <a class="link" href="#opt.prof_thread_active_init">
"<code class="mallctl">opt.prof_thread_active_init</code>"
</a>
option for additional information.</p></dd><dt><a name="prof.active"></a><span class="term">
"<code class="mallctl">prof.active</code>" "<code class="mallctl">prof.active</code>"
...@@ -1015,8 +1223,10 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1015,8 +1223,10 @@ malloc_conf = "xmalloc:true";</pre><p>
<a class="link" href="#opt.prof_active"> <a class="link" href="#opt.prof_active">
"<code class="mallctl">opt.prof_active</code>" "<code class="mallctl">opt.prof_active</code>"
</a> </a>
option for additional information. option for additional information, as well as the interrelated <a class="link" href="#thread.prof.active">
</p></dd><dt><a name="prof.dump"></a><span class="term"> "<code class="mallctl">thread.prof.active</code>"
</a>
mallctl.</p></dd><dt><a name="prof.dump"></a><span class="term">
"<code class="mallctl">prof.dump</code>" "<code class="mallctl">prof.dump</code>"
...@@ -1030,7 +1240,45 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1030,7 +1240,45 @@ malloc_conf = "xmalloc:true";</pre><p>
<a class="link" href="#opt.prof_prefix"> <a class="link" href="#opt.prof_prefix">
"<code class="mallctl">opt.prof_prefix</code>" "<code class="mallctl">opt.prof_prefix</code>"
</a> </a>
option.</p></dd><dt><a name="prof.interval"></a><span class="term"> option.</p></dd><dt><a name="prof.gdump"></a><span class="term">
"<code class="mallctl">prof.gdump</code>"
(<span class="type">bool</span>)
<code class="literal">rw</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>When enabled, trigger a memory profile dump every time
the total virtual memory exceeds the previous maximum. Profiles are
dumped to files named according to the pattern
<code class="filename">&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.u&lt;useq&gt;.heap</code>,
where <code class="literal">&lt;prefix&gt;</code> is controlled by the <a class="link" href="#opt.prof_prefix">
"<code class="mallctl">opt.prof_prefix</code>"
</a>
option.</p></dd><dt><a name="prof.reset"></a><span class="term">
"<code class="mallctl">prof.reset</code>"
(<span class="type">size_t</span>)
<code class="literal">-w</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Reset all memory profile statistics, and optionally
update the sample rate (see <a class="link" href="#opt.lg_prof_sample">
"<code class="mallctl">opt.lg_prof_sample</code>"
</a>
and <a class="link" href="#prof.lg_sample">
"<code class="mallctl">prof.lg_sample</code>"
</a>).
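A sketch of resetting profile statistics while simultaneously updating
the sample rate to an average of one sample per 2^20 bytes:</p><pre class="programlisting">
size_t lg_sample = 20;
/* Writing a size_t both resets statistics and sets the rate. */
mallctl("prof.reset", NULL, NULL, &amp;lg_sample, sizeof(lg_sample));</pre><p>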
</p></dd><dt><a name="prof.lg_sample"></a><span class="term">
"<code class="mallctl">prof.lg_sample</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-prof</code>]
</span></dt><dd><p>Get the current sample rate (see <a class="link" href="#opt.lg_prof_sample">
"<code class="mallctl">opt.lg_prof_sample</code>"
</a>).
</p></dd><dt><a name="prof.interval"></a><span class="term">
"<code class="mallctl">prof.interval</code>" "<code class="mallctl">prof.interval</code>"
...@@ -1051,9 +1299,8 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1051,9 +1299,8 @@ malloc_conf = "xmalloc:true";</pre><p>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Pointer to a counter that contains an approximate count </span></dt><dd><p>Pointer to a counter that contains an approximate count
of the current number of bytes in active pages. The estimate may be of the current number of bytes in active pages. The estimate may be
high, but never low, because each arena rounds up to the nearest high, but never low, because each arena rounds up when computing its
multiple of the chunk size when computing its contribution to the contribution to the counter. Note that the <a class="link" href="#epoch">
counter. Note that the <a class="link" href="#epoch">
"<code class="mallctl">epoch</code>" "<code class="mallctl">epoch</code>"
</a> mallctl has no bearing </a> mallctl has no bearing
on this counter. Furthermore, counter consistency is maintained via on this counter. Furthermore, counter consistency is maintained via
...@@ -1082,68 +1329,53 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1082,68 +1329,53 @@ malloc_conf = "xmalloc:true";</pre><p>
This does not include <a class="link" href="#stats.arenas.i.pdirty"> This does not include <a class="link" href="#stats.arenas.i.pdirty">
"<code class="mallctl">stats.arenas.&lt;i&gt;.pdirty</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.pdirty</code>"
</a> and pages </a>, nor pages
entirely devoted to allocator metadata.</p></dd><dt><a name="stats.mapped"></a><span class="term"> entirely devoted to allocator metadata.</p></dd><dt><a name="stats.metadata"></a><span class="term">
"<code class="mallctl">stats.mapped</code>"
(<span class="type">size_t</span>) "<code class="mallctl">stats.metadata</code>"
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Total number of bytes in chunks mapped on behalf of the
application. This is a multiple of the chunk size, and is at least as
large as <a class="link" href="#stats.active">
"<code class="mallctl">stats.active</code>"
</a>. This
does not include inactive chunks.</p></dd><dt><a name="stats.chunks.current"></a><span class="term">
"<code class="mallctl">stats.chunks.current</code>"
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Total number of chunks actively mapped on behalf of the </span></dt><dd><p>Total number of bytes dedicated to metadata, which
application. This does not include inactive chunks. comprise base allocations used for bootstrap-sensitive internal
</p></dd><dt><a name="stats.chunks.total"></a><span class="term"> allocator data structures, arena chunk headers (see <a class="link" href="#stats.arenas.i.metadata.mapped">
"<code class="mallctl">stats.arenas.&lt;i&gt;.metadata.mapped</code>"
"<code class="mallctl">stats.chunks.total</code>" </a>),
and internal allocations (see <a class="link" href="#stats.arenas.i.metadata.allocated">
(<span class="type">uint64_t</span>) "<code class="mallctl">stats.arenas.&lt;i&gt;.metadata.allocated</code>"
<code class="literal">r-</code> </a>).</p></dd><dt><a name="stats.resident"></a><span class="term">
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of chunks allocated.</p></dd><dt><a name="stats.chunks.high"></a><span class="term">
"<code class="mallctl">stats.chunks.high</code>" "<code class="mallctl">stats.resident</code>"
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Maximum number of active chunks at any time thus far. </span></dt><dd><p>Maximum number of bytes in physically resident data
</p></dd><dt><a name="stats.huge.allocated"></a><span class="term"> pages mapped by the allocator, comprising all pages dedicated to
allocator metadata, pages backing active allocations, and unused dirty
pages. This is a maximum rather than a precise value because pages may not
actually be physically resident if they correspond to demand-zeroed
virtual memory that has not yet been touched. This is a multiple of the
page size, and is larger than <a class="link" href="#stats.active">
"<code class="mallctl">stats.active</code>"
</a>.</p></dd><dt><a name="stats.mapped"></a><span class="term">
"<code class="mallctl">stats.huge.allocated</code>" "<code class="mallctl">stats.mapped</code>"
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Number of bytes currently allocated by huge objects. </span></dt><dd><p>Total number of bytes in active chunks mapped by the
</p></dd><dt><a name="stats.huge.nmalloc"></a><span class="term"> allocator. This is a multiple of the chunk size, and is larger than
<a class="link" href="#stats.active">
"<code class="mallctl">stats.huge.nmalloc</code>" "<code class="mallctl">stats.active</code>"
</a>.
(<span class="type">uint64_t</span>) This does not include inactive chunks, even those that contain unused
<code class="literal">r-</code> dirty pages, which means that there is no strict ordering between this
[<code class="option">--enable-stats</code>] and <a class="link" href="#stats.resident">
</span></dt><dd><p>Cumulative number of huge allocation requests. "<code class="mallctl">stats.resident</code>"
</p></dd><dt><a name="stats.huge.ndalloc"></a><span class="term"> </a>.</p></dd><dt><a name="stats.arenas.i.dss"></a><span class="term">
"<code class="mallctl">stats.huge.ndalloc</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of huge deallocation requests.
</p></dd><dt><a name="stats.arenas.i.dss"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.dss</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.dss</code>"
...@@ -1153,7 +1385,17 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1153,7 +1385,17 @@ malloc_conf = "xmalloc:true";</pre><p>
related to <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> allocation. See <a class="link" href="#opt.dss"> related to <span class="citerefentry"><span class="refentrytitle">mmap</span>(2)</span> allocation. See <a class="link" href="#opt.dss">
"<code class="mallctl">opt.dss</code>" "<code class="mallctl">opt.dss</code>"
</a> for details. </a> for details.
</p></dd><dt><a name="stats.arenas.i.nthreads"></a><span class="term"> </p></dd><dt><a name="stats.arenas.i.lg_dirty_mult"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.lg_dirty_mult</code>"
(<span class="type">ssize_t</span>)
<code class="literal">r-</code>
</span></dt><dd><p>Minimum ratio (log base 2) of active to dirty pages.
See <a class="link" href="#opt.lg_dirty_mult">
"<code class="mallctl">opt.lg_dirty_mult</code>"
</a>
for details.</p></dd><dt><a name="stats.arenas.i.nthreads"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.nthreads</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.nthreads</code>"
...@@ -1182,7 +1424,38 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1182,7 +1424,38 @@ malloc_conf = "xmalloc:true";</pre><p>
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Number of mapped bytes.</p></dd><dt><a name="stats.arenas.i.npurge"></a><span class="term"> </span></dt><dd><p>Number of mapped bytes.</p></dd><dt><a name="stats.arenas.i.metadata.mapped"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.metadata.mapped</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Number of mapped bytes in arena chunk headers, which
track the states of the non-metadata pages.</p></dd><dt><a name="stats.arenas.i.metadata.allocated"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.metadata.allocated</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Number of bytes dedicated to internal allocations.
Internal allocations differ from application-originated allocations in
that they are for internal use, and that they are omitted from heap
profiles. This statistic is reported separately from <a class="link" href="#stats.metadata">
"<code class="mallctl">stats.metadata</code>"
</a> and
<a class="link" href="#stats.arenas.i.metadata.mapped">
"<code class="mallctl">stats.arenas.&lt;i&gt;.metadata.mapped</code>"
</a>
because it overlaps with e.g. the <a class="link" href="#stats.allocated">
"<code class="mallctl">stats.allocated</code>"
</a> and
<a class="link" href="#stats.active">
"<code class="mallctl">stats.active</code>"
</a>
statistics, whereas the other metadata statistics do
not.</p></dd><dt><a name="stats.arenas.i.npurge"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.npurge</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.npurge</code>"
...@@ -1270,15 +1543,39 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1270,15 +1543,39 @@ malloc_conf = "xmalloc:true";</pre><p>
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of large allocation requests. </span></dt><dd><p>Cumulative number of large allocation requests.
</p></dd><dt><a name="stats.arenas.i.bins.j.allocated"></a><span class="term"> </p></dd><dt><a name="stats.arenas.i.huge.allocated"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.allocated</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.huge.allocated</code>"
(<span class="type">size_t</span>) (<span class="type">size_t</span>)
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Current number of bytes allocated by </span></dt><dd><p>Number of bytes currently allocated by huge objects.
bin.</p></dd><dt><a name="stats.arenas.i.bins.j.nmalloc"></a><span class="term"> </p></dd><dt><a name="stats.arenas.i.huge.nmalloc"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.huge.nmalloc</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of huge allocation requests served
directly by the arena.</p></dd><dt><a name="stats.arenas.i.huge.ndalloc"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.huge.ndalloc</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of huge deallocation requests served
directly by the arena.</p></dd><dt><a name="stats.arenas.i.huge.nrequests"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.huge.nrequests</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of huge allocation requests.
</p></dd><dt><a name="stats.arenas.i.bins.j.nmalloc"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nmalloc</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nmalloc</code>"
...@@ -1302,7 +1599,15 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1302,7 +1599,15 @@ malloc_conf = "xmalloc:true";</pre><p>
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of allocation </span></dt><dd><p>Cumulative number of allocation
requests.</p></dd><dt><a name="stats.arenas.i.bins.j.nfills"></a><span class="term"> requests.</p></dd><dt><a name="stats.arenas.i.bins.j.curregs"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.curregs</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Current number of regions for this size
class.</p></dd><dt><a name="stats.arenas.i.bins.j.nfills"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nfills</code>" "<code class="mallctl">stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nfills</code>"
...@@ -1370,6 +1675,38 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1370,6 +1675,38 @@ malloc_conf = "xmalloc:true";</pre><p>
<code class="literal">r-</code> <code class="literal">r-</code>
[<code class="option">--enable-stats</code>] [<code class="option">--enable-stats</code>]
</span></dt><dd><p>Current number of runs for this size class. </span></dt><dd><p>Current number of runs for this size class.
</p></dd><dt><a name="stats.arenas.i.hchunks.j.nmalloc"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.nmalloc</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of allocation requests for this size
class served directly by the arena.</p></dd><dt><a name="stats.arenas.i.hchunks.j.ndalloc"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.ndalloc</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of deallocation requests for this
size class served directly by the arena.</p></dd><dt><a name="stats.arenas.i.hchunks.j.nrequests"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.nrequests</code>"
(<span class="type">uint64_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Cumulative number of allocation requests for this size
class.</p></dd><dt><a name="stats.arenas.i.hchunks.j.curhchunks"></a><span class="term">
"<code class="mallctl">stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.curhchunks</code>"
(<span class="type">size_t</span>)
<code class="literal">r-</code>
[<code class="option">--enable-stats</code>]
</span></dt><dd><p>Current number of huge allocations for this size class.
</p></dd></dl></div></div><div class="refsect1"><a name="debugging_malloc_problems"></a><h2>DEBUGGING MALLOC PROBLEMS</h2><p>When debugging, it is a good idea to configure/build jemalloc with </p></dd></dl></div></div><div class="refsect1"><a name="debugging_malloc_problems"></a><h2>DEBUGGING MALLOC PROBLEMS</h2><p>When debugging, it is a good idea to configure/build jemalloc with
the <code class="option">--enable-debug</code> and <code class="option">--enable-fill</code> the <code class="option">--enable-debug</code> and <code class="option">--enable-fill</code>
options, and recompile the program with suitable options and symbols for options, and recompile the program with suitable options and symbols for
...@@ -1406,7 +1743,7 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1406,7 +1743,7 @@ malloc_conf = "xmalloc:true";</pre><p>
<code class="function">malloc_stats_print</code>(<em class="parameter"><code></code></em>), followed by a string <code class="function">malloc_stats_print</code>(<em class="parameter"><code></code></em>), followed by a string
pointer. Please note that doing anything which tries to allocate memory in pointer. Please note that doing anything which tries to allocate memory in
this function is likely to result in a crash or deadlock.</p><p>All messages are prefixed by this function is likely to result in a crash or deadlock.</p><p>All messages are prefixed by
&#8220;<code class="computeroutput">&lt;jemalloc&gt;: </code>&#8221;.</p></div><div class="refsect1"><a name="return_values"></a><h2>RETURN VALUES</h2><div class="refsect2"><a name="idm316388028784"></a><h3>Standard API</h3><p>The <code class="function">malloc</code>(<em class="parameter"><code></code></em>) and &#8220;<code class="computeroutput">&lt;jemalloc&gt;: </code>&#8221;.</p></div><div class="refsect1"><a name="return_values"></a><h2>RETURN VALUES</h2><div class="refsect2"><a name="idp46949776"></a><h3>Standard API</h3><p>The <code class="function">malloc</code>(<em class="parameter"><code></code></em>) and
<code class="function">calloc</code>(<em class="parameter"><code></code></em>) functions return a pointer to the <code class="function">calloc</code>(<em class="parameter"><code></code></em>) functions return a pointer to the
allocated memory if successful; otherwise a <code class="constant">NULL</code> allocated memory if successful; otherwise a <code class="constant">NULL</code>
pointer is returned and <code class="varname">errno</code> is set to pointer is returned and <code class="varname">errno</code> is set to
...@@ -1434,7 +1771,7 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1434,7 +1771,7 @@ malloc_conf = "xmalloc:true";</pre><p>
allocation failure. The <code class="function">realloc</code>(<em class="parameter"><code></code></em>) allocation failure. The <code class="function">realloc</code>(<em class="parameter"><code></code></em>)
function always leaves the original buffer intact when an error occurs. function always leaves the original buffer intact when an error occurs.
</p><p>The <code class="function">free</code>(<em class="parameter"><code></code></em>) function returns no </p><p>The <code class="function">free</code>(<em class="parameter"><code></code></em>) function returns no
value.</p></div><div class="refsect2"><a name="idm316388003104"></a><h3>Non-standard API</h3><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>) and value.</p></div><div class="refsect2"><a name="idp46974576"></a><h3>Non-standard API</h3><p>The <code class="function">mallocx</code>(<em class="parameter"><code></code></em>) and
<code class="function">rallocx</code>(<em class="parameter"><code></code></em>) functions return a pointer to <code class="function">rallocx</code>(<em class="parameter"><code></code></em>) functions return a pointer to
the allocated memory if successful; otherwise a <code class="constant">NULL</code> the allocated memory if successful; otherwise a <code class="constant">NULL</code>
pointer is returned to indicate insufficient contiguous memory was pointer is returned to indicate insufficient contiguous memory was
...@@ -1465,27 +1802,7 @@ malloc_conf = "xmalloc:true";</pre><p> ...@@ -1465,27 +1802,7 @@ malloc_conf = "xmalloc:true";</pre><p>
read/write processing.</p></dd></dl></div><p> read/write processing.</p></dd></dl></div><p>
</p><p>The <code class="function">malloc_usable_size</code>(<em class="parameter"><code></code></em>) function </p><p>The <code class="function">malloc_usable_size</code>(<em class="parameter"><code></code></em>) function
returns the usable size of the allocation pointed to by returns the usable size of the allocation pointed to by
<em class="parameter"><code>ptr</code></em>. </p></div><div class="refsect2"><a name="idm316387973360"></a><h3>Experimental API</h3><p>The <code class="function">allocm</code>(<em class="parameter"><code></code></em>), <em class="parameter"><code>ptr</code></em>. </p></div></div><div class="refsect1"><a name="environment"></a><h2>ENVIRONMENT</h2><p>The following environment variable affects the execution of the
<code class="function">rallocm</code>(<em class="parameter"><code></code></em>),
<code class="function">sallocm</code>(<em class="parameter"><code></code></em>),
<code class="function">dallocm</code>(<em class="parameter"><code></code></em>), and
<code class="function">nallocm</code>(<em class="parameter"><code></code></em>) functions return
<code class="constant">ALLOCM_SUCCESS</code> on success; otherwise they return an
error value. The <code class="function">allocm</code>(<em class="parameter"><code></code></em>),
<code class="function">rallocm</code>(<em class="parameter"><code></code></em>), and
<code class="function">nallocm</code>(<em class="parameter"><code></code></em>) functions will fail if:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><span class="errorname">ALLOCM_ERR_OOM</span></span></dt><dd><p>Out of memory. Insufficient contiguous memory was
available to service the allocation request. The
<code class="function">allocm</code>(<em class="parameter"><code></code></em>) function additionally sets
<em class="parameter"><code>*ptr</code></em> to <code class="constant">NULL</code>, whereas
the <code class="function">rallocm</code>(<em class="parameter"><code></code></em>) function leaves
<code class="constant">*ptr</code> unmodified.</p></dd></dl></div><p>
The <code class="function">rallocm</code>(<em class="parameter"><code></code></em>) function will also
fail if:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><span class="errorname">ALLOCM_ERR_NOT_MOVED</span></span></dt><dd><p><code class="constant">ALLOCM_NO_MOVE</code> was specified,
but the reallocation request could not be serviced without moving
the object.</p></dd></dl></div><p>
</p></div></div><div class="refsect1"><a name="environment"></a><h2>ENVIRONMENT</h2><p>The following environment variable affects the execution of the
allocation functions: allocation functions:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><code class="envar">MALLOC_CONF</code></span></dt><dd><p>If the environment variable </p><div class="variablelist"><dl class="variablelist"><dt><span class="term"><code class="envar">MALLOC_CONF</code></span></dt><dd><p>If the environment variable
<code class="envar">MALLOC_CONF</code> is set, the characters it contains <code class="envar">MALLOC_CONF</code> is set, the characters it contains
......
...@@ -38,17 +38,13 @@ ...@@ -38,17 +38,13 @@
<refname>xallocx</refname> <refname>xallocx</refname>
<refname>sallocx</refname> <refname>sallocx</refname>
<refname>dallocx</refname> <refname>dallocx</refname>
<refname>sdallocx</refname>
<refname>nallocx</refname> <refname>nallocx</refname>
<refname>mallctl</refname> <refname>mallctl</refname>
<refname>mallctlnametomib</refname> <refname>mallctlnametomib</refname>
<refname>mallctlbymib</refname> <refname>mallctlbymib</refname>
<refname>malloc_stats_print</refname> <refname>malloc_stats_print</refname>
<refname>malloc_usable_size</refname> <refname>malloc_usable_size</refname>
<refname>allocm</refname>
<refname>rallocm</refname>
<refname>sallocm</refname>
<refname>dallocm</refname>
<refname>nallocm</refname>
--> -->
<refpurpose>general purpose memory allocation functions</refpurpose> <refpurpose>general purpose memory allocation functions</refpurpose>
</refnamediv> </refnamediv>
...@@ -61,8 +57,7 @@ ...@@ -61,8 +57,7 @@
<refsynopsisdiv> <refsynopsisdiv>
<title>SYNOPSIS</title> <title>SYNOPSIS</title>
<funcsynopsis> <funcsynopsis>
<funcsynopsisinfo>#include &lt;<filename class="headerfile">stdlib.h</filename>&gt; <funcsynopsisinfo>#include &lt;<filename class="headerfile">jemalloc/jemalloc.h</filename>&gt;</funcsynopsisinfo>
#include &lt;<filename class="headerfile">jemalloc/jemalloc.h</filename>&gt;</funcsynopsisinfo>
<refsect2> <refsect2>
<title>Standard API</title> <title>Standard API</title>
<funcprototype> <funcprototype>
...@@ -125,6 +120,12 @@ ...@@ -125,6 +120,12 @@
<paramdef>void *<parameter>ptr</parameter></paramdef> <paramdef>void *<parameter>ptr</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef> <paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype> </funcprototype>
<funcprototype>
<funcdef>void <function>sdallocx</function></funcdef>
<paramdef>void *<parameter>ptr</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
<funcprototype> <funcprototype>
<funcdef>size_t <function>nallocx</function></funcdef> <funcdef>size_t <function>nallocx</function></funcdef>
<paramdef>size_t <parameter>size</parameter></paramdef> <paramdef>size_t <parameter>size</parameter></paramdef>
...@@ -172,41 +173,6 @@ ...@@ -172,41 +173,6 @@
</funcprototype> </funcprototype>
<para><type>const char *</type><varname>malloc_conf</varname>;</para> <para><type>const char *</type><varname>malloc_conf</varname>;</para>
</refsect2> </refsect2>
<refsect2>
<title>Experimental API</title>
<funcprototype>
<funcdef>int <function>allocm</function></funcdef>
<paramdef>void **<parameter>ptr</parameter></paramdef>
<paramdef>size_t *<parameter>rsize</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
<funcprototype>
<funcdef>int <function>rallocm</function></funcdef>
<paramdef>void **<parameter>ptr</parameter></paramdef>
<paramdef>size_t *<parameter>rsize</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>extra</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
<funcprototype>
<funcdef>int <function>sallocm</function></funcdef>
<paramdef>const void *<parameter>ptr</parameter></paramdef>
<paramdef>size_t *<parameter>rsize</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
<funcprototype>
<funcdef>int <function>dallocm</function></funcdef>
<paramdef>void *<parameter>ptr</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
<funcprototype>
<funcdef>int <function>nallocm</function></funcdef>
<paramdef>size_t *<parameter>rsize</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>int <parameter>flags</parameter></paramdef>
</funcprototype>
</refsect2>
</funcsynopsis> </funcsynopsis>
</refsynopsisdiv> </refsynopsisdiv>
<refsect1 id="description"> <refsect1 id="description">
...@@ -229,15 +195,15 @@ ...@@ -229,15 +195,15 @@
<para>The <function>posix_memalign<parameter/></function> function <para>The <function>posix_memalign<parameter/></function> function
allocates <parameter>size</parameter> bytes of memory such that the allocates <parameter>size</parameter> bytes of memory such that the
allocation's base address is an even multiple of allocation's base address is a multiple of
<parameter>alignment</parameter>, and returns the allocation in the value <parameter>alignment</parameter>, and returns the allocation in the value
pointed to by <parameter>ptr</parameter>. The requested pointed to by <parameter>ptr</parameter>. The requested
<parameter>alignment</parameter> must be a power of 2 at least as large <parameter>alignment</parameter> must be a power of 2 at least as large as
as <code language="C">sizeof(<type>void *</type>)</code>.</para> <code language="C">sizeof(<type>void *</type>)</code>.</para>
<para>The <function>aligned_alloc<parameter/></function> function <para>The <function>aligned_alloc<parameter/></function> function
allocates <parameter>size</parameter> bytes of memory such that the allocates <parameter>size</parameter> bytes of memory such that the
allocation's base address is an even multiple of allocation's base address is a multiple of
<parameter>alignment</parameter>. The requested <parameter>alignment</parameter>. The requested
<parameter>alignment</parameter> must be a power of 2. Behavior is <parameter>alignment</parameter> must be a power of 2. Behavior is
undefined if <parameter>size</parameter> is not an integral multiple of undefined if <parameter>size</parameter> is not an integral multiple of
...@@ -268,14 +234,15 @@ ...@@ -268,14 +234,15 @@
<function>rallocx<parameter/></function>, <function>rallocx<parameter/></function>,
<function>xallocx<parameter/></function>, <function>xallocx<parameter/></function>,
<function>sallocx<parameter/></function>, <function>sallocx<parameter/></function>,
<function>dallocx<parameter/></function>, and <function>dallocx<parameter/></function>,
<function>sdallocx<parameter/></function>, and
<function>nallocx<parameter/></function> functions all have a <function>nallocx<parameter/></function> functions all have a
<parameter>flags</parameter> argument that can be used to specify <parameter>flags</parameter> argument that can be used to specify
options. The functions only check the options that are contextually options. The functions only check the options that are contextually
relevant. Use bitwise or (<code language="C">|</code>) operations to relevant. Use bitwise or (<code language="C">|</code>) operations to
specify one or more of the following: specify one or more of the following:
<variablelist> <variablelist>
<varlistentry> <varlistentry id="MALLOCX_LG_ALIGN">
<term><constant>MALLOCX_LG_ALIGN(<parameter>la</parameter>) <term><constant>MALLOCX_LG_ALIGN(<parameter>la</parameter>)
</constant></term> </constant></term>
...@@ -285,7 +252,7 @@ ...@@ -285,7 +252,7 @@
that <parameter>la</parameter> is within the valid that <parameter>la</parameter> is within the valid
range.</para></listitem> range.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry id="MALLOCX_ALIGN">
<term><constant>MALLOCX_ALIGN(<parameter>a</parameter>) <term><constant>MALLOCX_ALIGN(<parameter>a</parameter>)
</constant></term> </constant></term>
...@@ -295,7 +262,7 @@ ...@@ -295,7 +262,7 @@
validate that <parameter>a</parameter> is a power of 2. validate that <parameter>a</parameter> is a power of 2.
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry id="MALLOCX_ZERO">
<term><constant>MALLOCX_ZERO</constant></term> <term><constant>MALLOCX_ZERO</constant></term>
<listitem><para>Initialize newly allocated memory to contain zero <listitem><para>Initialize newly allocated memory to contain zero
...@@ -304,16 +271,38 @@ ...@@ -304,16 +271,38 @@
that are initialized to contain zero bytes. If this macro is that are initialized to contain zero bytes. If this macro is
absent, newly allocated memory is uninitialized.</para></listitem> absent, newly allocated memory is uninitialized.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry id="MALLOCX_TCACHE">
<term><constant>MALLOCX_TCACHE(<parameter>tc</parameter>)
</constant></term>
<listitem><para>Use the thread-specific cache (tcache) specified by
the identifier <parameter>tc</parameter>, which must have been
acquired via the <link
linkend="tcache.create"><mallctl>tcache.create</mallctl></link>
mallctl. This macro does not validate that
<parameter>tc</parameter> specifies a valid
identifier.</para></listitem>
</varlistentry>
<varlistentry id="MALLOC_TCACHE_NONE">
<term><constant>MALLOCX_TCACHE_NONE</constant></term>
<listitem><para>Do not use a thread-specific cache (tcache). Unless
<constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant> or
<constant>MALLOCX_TCACHE_NONE</constant> is specified, an
automatically managed tcache will be used under many circumstances.
This macro cannot be used in the same <parameter>flags</parameter>
argument as
<constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant>.</para></listitem>
</varlistentry>
<varlistentry id="MALLOCX_ARENA">
<term><constant>MALLOCX_ARENA(<parameter>a</parameter>) <term><constant>MALLOCX_ARENA(<parameter>a</parameter>)
</constant></term> </constant></term>
<listitem><para>Use the arena specified by the index <listitem><para>Use the arena specified by the index
<parameter>a</parameter> (and by necessity bypass the thread <parameter>a</parameter>. This macro has no effect for regions that
cache). This macro has no effect for huge regions, nor for regions were allocated via an arena other than the one specified. This
that were allocated via an arena other than the one specified. macro does not validate that <parameter>a</parameter> specifies an
This macro does not validate that <parameter>a</parameter> arena index in the valid range.</para></listitem>
specifies an arena index in the valid range.</para></listitem>
</varlistentry> </varlistentry>
</variablelist> </variablelist>
</para> </para>
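As a minimal sketch of combining these flags in a single mallocx() call; the size and alignment values are arbitrary assumptions:

    #include <jemalloc/jemalloc.h>

    void example(void) {
        /* 1 KiB allocation, 64-byte aligned, zero-initialized. */
        void *p = mallocx(1024, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
        if (p != NULL)
            dallocx(p, 0);
    }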
...@@ -352,6 +341,15 @@ ...@@ -352,6 +341,15 @@
memory referenced by <parameter>ptr</parameter> to be made available for memory referenced by <parameter>ptr</parameter> to be made available for
future allocations.</para> future allocations.</para>
<para>The <function>sdallocx<parameter/></function> function is an
extension of <function>dallocx<parameter/></function> with a
<parameter>size</parameter> parameter to allow the caller to pass in the
allocation size as an optimization. The minimum valid input size is the
original requested size of the allocation, and the maximum valid input
size is the corresponding value returned by
<function>nallocx<parameter/></function> or
<function>sallocx<parameter/></function>.</para>
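A sketch of the valid size range just described, assuming the allocation succeeds; here nallocx() supplies the upper bound:

    #include <jemalloc/jemalloc.h>

    void release(void) {
        size_t usable = nallocx(1000, 0); /* upper bound for sdallocx() */
        void *p = mallocx(1000, 0);
        if (p != NULL) {
            /* Any size in [1000, usable] is a valid size argument. */
            sdallocx(p, usable, 0);
        }
    }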
<para>The <function>nallocx<parameter/></function> function allocates no <para>The <function>nallocx<parameter/></function> function allocates no
memory, but it performs the same size computation as the memory, but it performs the same size computation as the
<function>mallocx<parameter/></function> function, and returns the real <function>mallocx<parameter/></function> function, and returns the real
...@@ -430,11 +428,12 @@ for (i = 0; i < nbins; i++) { ...@@ -430,11 +428,12 @@ for (i = 0; i < nbins; i++) {
functions simultaneously. If <option>--enable-stats</option> is functions simultaneously. If <option>--enable-stats</option> is
specified during configuration, &ldquo;m&rdquo; and &ldquo;a&rdquo; can specified during configuration, &ldquo;m&rdquo; and &ldquo;a&rdquo; can
be specified to omit merged arena and per arena statistics, respectively; be specified to omit merged arena and per arena statistics, respectively;
&ldquo;b&rdquo; and &ldquo;l&rdquo; can be specified to omit per size &ldquo;b&rdquo;, &ldquo;l&rdquo;, and &ldquo;h&rdquo; can be specified to
class statistics for bins and large objects, respectively. Unrecognized omit per size class statistics for bins, large objects, and huge objects,
characters are silently ignored. Note that thread caching may prevent respectively. Unrecognized characters are silently ignored. Note that
some statistics from being completely up to date, since extra locking thread caching may prevent some statistics from being completely up to
would be required to merge counters that track thread cache operations. date, since extra locking would be required to merge counters that track
thread cache operations.
</para> </para>
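For example, a sketch that emits statistics via the default write callback while omitting per size class bin and large-object detail through the "b" and "l" option characters described above:

    #include <jemalloc/jemalloc.h>

    void dump_stats(void) {
        /* NULL write callback and opaque pointer select the default
         * writer; "bl" omits bin and large size class detail. */
        malloc_stats_print(NULL, NULL, "bl");
    }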
<para>The <function>malloc_usable_size<parameter/></function> function <para>The <function>malloc_usable_size<parameter/></function> function
...@@ -449,116 +448,6 @@ for (i = 0; i < nbins; i++) { ...@@ -449,116 +448,6 @@ for (i = 0; i < nbins; i++) {
depended on, since such behavior is entirely implementation-dependent. depended on, since such behavior is entirely implementation-dependent.
</para> </para>
</refsect2> </refsect2>
<refsect2>
<title>Experimental API</title>
<para>The experimental API is subject to change or removal without regard
for backward compatibility. If <option>--disable-experimental</option>
is specified during configuration, the experimental API is
omitted.</para>
<para>The <function>allocm<parameter/></function>,
<function>rallocm<parameter/></function>,
<function>sallocm<parameter/></function>,
<function>dallocm<parameter/></function>, and
<function>nallocm<parameter/></function> functions all have a
<parameter>flags</parameter> argument that can be used to specify
options. The functions only check the options that are contextually
relevant. Use bitwise or (<code language="C">|</code>) operations to
specify one or more of the following:
<variablelist>
<varlistentry>
<term><constant>ALLOCM_LG_ALIGN(<parameter>la</parameter>)
</constant></term>
<listitem><para>Align the memory allocation to start at an address
that is a multiple of <code language="C">(1 &lt;&lt;
<parameter>la</parameter>)</code>. This macro does not validate
that <parameter>la</parameter> is within the valid
range.</para></listitem>
</varlistentry>
<varlistentry>
<term><constant>ALLOCM_ALIGN(<parameter>a</parameter>)
</constant></term>
<listitem><para>Align the memory allocation to start at an address
that is a multiple of <parameter>a</parameter>, where
<parameter>a</parameter> is a power of two. This macro does not
validate that <parameter>a</parameter> is a power of 2.
</para></listitem>
</varlistentry>
<varlistentry>
<term><constant>ALLOCM_ZERO</constant></term>
<listitem><para>Initialize newly allocated memory to contain zero
bytes. In the growing reallocation case, the real size prior to
reallocation defines the boundary between untouched bytes and those
that are initialized to contain zero bytes. If this macro is
absent, newly allocated memory is uninitialized.</para></listitem>
</varlistentry>
<varlistentry>
<term><constant>ALLOCM_NO_MOVE</constant></term>
<listitem><para>For reallocation, fail rather than moving the
object. This constraint can apply to both growth and
shrinkage.</para></listitem>
</varlistentry>
<varlistentry>
<term><constant>ALLOCM_ARENA(<parameter>a</parameter>)
</constant></term>
<listitem><para>Use the arena specified by the index
<parameter>a</parameter> (and by necessity bypass the thread
cache). This macro has no effect for huge regions, nor for regions
that were allocated via an arena other than the one specified.
This macro does not validate that <parameter>a</parameter>
specifies an arena index in the valid range.</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>The <function>allocm<parameter/></function> function allocates at
least <parameter>size</parameter> bytes of memory, sets
<parameter>*ptr</parameter> to the base address of the allocation, and
sets <parameter>*rsize</parameter> to the real size of the allocation if
<parameter>rsize</parameter> is not <constant>NULL</constant>. Behavior
is undefined if <parameter>size</parameter> is <constant>0</constant>, or
if request size overflows due to size class and/or alignment
constraints.</para>
<para>The <function>rallocm<parameter/></function> function resizes the
allocation at <parameter>*ptr</parameter> to be at least
<parameter>size</parameter> bytes, sets <parameter>*ptr</parameter> to
the base address of the allocation if it moved, and sets
<parameter>*rsize</parameter> to the real size of the allocation if
<parameter>rsize</parameter> is not <constant>NULL</constant>. If
<parameter>extra</parameter> is non-zero, an attempt is made to resize
the allocation to be at least <code
language="C">(<parameter>size</parameter> +
<parameter>extra</parameter>)</code> bytes, though inability to allocate
the extra byte(s) will not by itself result in failure. Behavior is
undefined if <parameter>size</parameter> is <constant>0</constant>, if
request size overflows due to size class and/or alignment constraints, or
if <code language="C">(<parameter>size</parameter> +
<parameter>extra</parameter> &gt;
<constant>SIZE_T_MAX</constant>)</code>.</para>
<para>The <function>sallocm<parameter/></function> function sets
<parameter>*rsize</parameter> to the real size of the allocation.</para>
<para>The <function>dallocm<parameter/></function> function causes the
memory referenced by <parameter>ptr</parameter> to be made available for
future allocations.</para>
<para>The <function>nallocm<parameter/></function> function allocates no
memory, but it performs the same size computation as the
<function>allocm<parameter/></function> function, and if
<parameter>rsize</parameter> is not <constant>NULL</constant> it sets
<parameter>*rsize</parameter> to the real size of the allocation that
would result from the equivalent <function>allocm<parameter/></function>
function call. Behavior is undefined if <parameter>size</parameter> is
<constant>0</constant>, or if request size overflows due to size class
and/or alignment constraints.</para>
</refsect2>
</refsect1> </refsect1>
<refsect1 id="tuning"> <refsect1 id="tuning">
<title>TUNING</title> <title>TUNING</title>
...@@ -598,8 +487,10 @@ for (i = 0; i < nbins; i++) { ...@@ -598,8 +487,10 @@ for (i = 0; i < nbins; i++) {
<manvolnum>2</manvolnum></citerefentry> to obtain memory, which is <manvolnum>2</manvolnum></citerefentry> to obtain memory, which is
suboptimal for several reasons, including race conditions, increased suboptimal for several reasons, including race conditions, increased
fragmentation, and artificial limitations on maximum usable memory. If fragmentation, and artificial limitations on maximum usable memory. If
<option>--enable-dss</option> is specified during configuration, this <citerefentry><refentrytitle>sbrk</refentrytitle>
allocator uses both <citerefentry><refentrytitle>mmap</refentrytitle> <manvolnum>2</manvolnum></citerefentry> is supported by the operating
system, this allocator uses both
<citerefentry><refentrytitle>mmap</refentrytitle>
<manvolnum>2</manvolnum></citerefentry> and <manvolnum>2</manvolnum></citerefentry> and
<citerefentry><refentrytitle>sbrk</refentrytitle> <citerefentry><refentrytitle>sbrk</refentrytitle>
<manvolnum>2</manvolnum></citerefentry>, in that order of preference; <manvolnum>2</manvolnum></citerefentry>, in that order of preference;
...@@ -632,12 +523,11 @@ for (i = 0; i < nbins; i++) { ...@@ -632,12 +523,11 @@ for (i = 0; i < nbins; i++) {
possible to find metadata for user objects very quickly.</para> possible to find metadata for user objects very quickly.</para>
<para>User objects are broken into three categories according to size: <para>User objects are broken into three categories according to size:
small, large, and huge. Small objects are smaller than one page. Large small, large, and huge. Small and large objects are managed entirely by
objects are smaller than the chunk size. Huge objects are a multiple of arenas; huge objects are additionally aggregated in a single data structure
the chunk size. Small and large objects are managed by arenas; huge that is shared by all threads. Huge objects are typically used by
objects are managed separately in a single data structure that is shared by applications infrequently enough that this single data structure is not a
all threads. Huge objects are used by applications infrequently enough scalability issue.</para>
that this single data structure is not a scalability issue.</para>
<para>Each chunk that is managed by an arena tracks its contents as runs of <para>Each chunk that is managed by an arena tracks its contents as runs of
contiguous pages (unused, backing a set of small objects, or backing one contiguous pages (unused, backing a set of small objects, or backing one
...@@ -646,18 +536,18 @@ for (i = 0; i < nbins; i++) { ...@@ -646,18 +536,18 @@ for (i = 0; i < nbins; i++) {
allocations in constant time.</para> allocations in constant time.</para>
<para>Small objects are managed in groups by page runs. Each run maintains <para>Small objects are managed in groups by page runs. Each run maintains
a frontier and free list to track which regions are in use. Allocation a bitmap to track which regions are in use. Allocation requests that are no
requests that are no more than half the quantum (8 or 16, depending on more than half the quantum (8 or 16, depending on architecture) are rounded
architecture) are rounded up to the nearest power of two that is at least up to the nearest power of two that is at least <code
<code language="C">sizeof(<type>double</type>)</code>. All other small language="C">sizeof(<type>double</type>)</code>. All other object size
object size classes are multiples of the quantum, spaced such that internal classes are multiples of the quantum, spaced such that there are four size
fragmentation is limited to approximately 25% for all but the smallest size classes for each doubling in size, which limits internal fragmentation to
classes. Allocation requests that are larger than the maximum small size approximately 20% for all but the smallest size classes. Small size classes
class, but small enough to fit in an arena-managed chunk (see the <link are smaller than four times the page size, large size classes are smaller
linkend="opt.lg_chunk"><mallctl>opt.lg_chunk</mallctl></link> option), are than the chunk size (see the <link
rounded up to the nearest run size. Allocation requests that are too large linkend="opt.lg_chunk"><mallctl>opt.lg_chunk</mallctl></link> option), and
to fit in an arena-managed chunk are rounded up to the nearest multiple of huge size classes extend from the chunk size up to one size class less than
the chunk size.</para> the full address space size.</para>
<para>Allocations are packed tightly together, which can be an issue for <para>Allocations are packed tightly together, which can be an issue for
multi-threaded applications. If you need to assure that allocations do not multi-threaded applications. If you need to assure that allocations do not
...@@ -665,8 +555,29 @@ for (i = 0; i < nbins; i++) { ...@@ -665,8 +555,29 @@ for (i = 0; i < nbins; i++) {
nearest multiple of the cacheline size, or specify cacheline alignment when nearest multiple of the cacheline size, or specify cacheline alignment when
allocating.</para> allocating.</para>
<para>Assuming 4 MiB chunks, 4 KiB pages, and a 16-byte quantum on a 64-bit <para>The <function>realloc<parameter/></function>,
system, the size classes in each category are as shown in <xref <function>rallocx<parameter/></function>, and
<function>xallocx<parameter/></function> functions may resize allocations
without moving them under limited circumstances. Unlike the
<function>*allocx<parameter/></function> API, the standard API does not
officially round up the usable size of an allocation to the nearest size
class, so technically it is necessary to call
<function>realloc<parameter/></function> to grow e.g. a 9-byte allocation to
16 bytes, or shrink a 16-byte allocation to 9 bytes. Growth and shrinkage
trivially succeed in place as long as the pre-size and post-size both round
up to the same size class. No other API guarantees are made regarding
in-place resizing, but the current implementation also tries to resize large
and huge allocations in place, as long as the pre-size and post-size are
both large or both huge. In such cases shrinkage always succeeds for large
size classes, but for huge size classes the chunk allocator must support
splitting (see <link
linkend="arena.i.chunk_hooks"><mallctl>arena.&lt;i&gt;.chunk_hooks</mallctl></link>).
Growth only succeeds if the trailing memory is currently available, and
additionally for huge size classes the chunk allocator must support
merging.</para>
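A sketch of attempting in-place growth with xallocx() before falling back to a moving reallocation; the target size and calling convention are assumptions:

    #include <jemalloc/jemalloc.h>

    void *grow(void *p, size_t want) {
        /* xallocx() never moves p; it returns the resulting real size,
         * which is less than want if the in-place attempt failed. */
        if (xallocx(p, want, 0, 0) >= want)
            return p;
        /* Fall back to rallocx(), which may move the allocation;
         * NULL on failure, in which case the original p remains valid. */
        return rallocx(p, want, 0);
    }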
<para>Assuming 2 MiB chunks, 4 KiB pages, and a 16-byte quantum on a
64-bit system, the size classes in each category are as shown in <xref
linkend="size_classes" xrefstyle="template:Table %n"/>.</para> linkend="size_classes" xrefstyle="template:Table %n"/>.</para>
<table xml:id="size_classes" frame="all"> <table xml:id="size_classes" frame="all">
...@@ -684,13 +595,13 @@ for (i = 0; i < nbins; i++) { ...@@ -684,13 +595,13 @@ for (i = 0; i < nbins; i++) {
</thead> </thead>
<tbody> <tbody>
<row> <row>
<entry morerows="6">Small</entry> <entry morerows="8">Small</entry>
<entry>lg</entry> <entry>lg</entry>
<entry>[8]</entry> <entry>[8]</entry>
</row> </row>
<row> <row>
<entry>16</entry> <entry>16</entry>
<entry>[16, 32, 48, ..., 128]</entry> <entry>[16, 32, 48, 64, 80, 96, 112, 128]</entry>
</row> </row>
<row> <row>
<entry>32</entry> <entry>32</entry>
...@@ -710,17 +621,77 @@ for (i = 0; i < nbins; i++) { ...@@ -710,17 +621,77 @@ for (i = 0; i < nbins; i++) {
</row> </row>
<row> <row>
<entry>512</entry> <entry>512</entry>
<entry>[2560, 3072, 3584]</entry> <entry>[2560, 3072, 3584, 4096]</entry>
</row>
<row>
<entry>1 KiB</entry>
<entry>[5 KiB, 6 KiB, 7 KiB, 8 KiB]</entry>
</row>
<row>
<entry>2 KiB</entry>
<entry>[10 KiB, 12 KiB, 14 KiB]</entry>
</row>
<row>
<entry morerows="7">Large</entry>
<entry>2 KiB</entry>
<entry>[16 KiB]</entry>
</row> </row>
<row> <row>
<entry>Large</entry>
<entry>4 KiB</entry> <entry>4 KiB</entry>
<entry>[4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB]</entry> <entry>[20 KiB, 24 KiB, 28 KiB, 32 KiB]</entry>
</row>
<row>
<entry>8 KiB</entry>
<entry>[40 KiB, 48 KiB, 56 KiB, 64 KiB]</entry>
</row>
<row>
<entry>16 KiB</entry>
<entry>[80 KiB, 96 KiB, 112 KiB, 128 KiB]</entry>
</row>
<row>
<entry>32 KiB</entry>
<entry>[160 KiB, 192 KiB, 224 KiB, 256 KiB]</entry>
</row>
<row>
<entry>64 KiB</entry>
<entry>[320 KiB, 384 KiB, 448 KiB, 512 KiB]</entry>
</row>
<row>
<entry>128 KiB</entry>
<entry>[640 KiB, 768 KiB, 896 KiB, 1 MiB]</entry>
</row>
<row>
<entry>256 KiB</entry>
<entry>[1280 KiB, 1536 KiB, 1792 KiB]</entry>
</row>
<row>
<entry morerows="6">Huge</entry>
<entry>256 KiB</entry>
<entry>[2 MiB]</entry>
</row>
<row>
<entry>512 KiB</entry>
<entry>[2560 KiB, 3 MiB, 3584 KiB, 4 MiB]</entry>
</row>
<row>
<entry>1 MiB</entry>
<entry>[5 MiB, 6 MiB, 7 MiB, 8 MiB]</entry>
</row>
<row>
<entry>2 MiB</entry>
<entry>[10 MiB, 12 MiB, 14 MiB, 16 MiB]</entry>
</row> </row>
<row> <row>
<entry>Huge</entry>
<entry>4 MiB</entry> <entry>4 MiB</entry>
<entry>[4 MiB, 8 MiB, 12 MiB, ...]</entry> <entry>[20 MiB, 24 MiB, 28 MiB, 32 MiB]</entry>
</row>
<row>
<entry>8 MiB</entry>
<entry>[40 MiB, 48 MiB, 56 MiB, 64 MiB]</entry>
</row>
<row>
<entry>...</entry>
<entry>...</entry>
</row> </row>
</tbody> </tbody>
</tgroup> </tgroup>
...@@ -765,23 +736,23 @@ for (i = 0; i < nbins; i++) { ...@@ -765,23 +736,23 @@ for (i = 0; i < nbins; i++) {
detecting whether another thread caused a refresh.</para></listitem> detecting whether another thread caused a refresh.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="config.debug"> <varlistentry id="config.cache_oblivious">
<term> <term>
<mallctl>config.debug</mallctl> <mallctl>config.cache_oblivious</mallctl>
(<type>bool</type>) (<type>bool</type>)
<literal>r-</literal> <literal>r-</literal>
</term> </term>
<listitem><para><option>--enable-debug</option> was specified during <listitem><para><option>--enable-cache-oblivious</option> was specified
build configuration.</para></listitem> during build configuration.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="config.dss"> <varlistentry id="config.debug">
<term> <term>
<mallctl>config.dss</mallctl> <mallctl>config.debug</mallctl>
(<type>bool</type>) (<type>bool</type>)
<literal>r-</literal> <literal>r-</literal>
</term> </term>
<listitem><para><option>--enable-dss</option> was specified during <listitem><para><option>--enable-debug</option> was specified during
build configuration.</para></listitem> build configuration.</para></listitem>
</varlistentry> </varlistentry>
...@@ -805,16 +776,6 @@ for (i = 0; i < nbins; i++) { ...@@ -805,16 +776,6 @@ for (i = 0; i < nbins; i++) {
during build configuration.</para></listitem> during build configuration.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="config.mremap">
<term>
<mallctl>config.mremap</mallctl>
(<type>bool</type>)
<literal>r-</literal>
</term>
<listitem><para><option>--enable-mremap</option> was specified during
build configuration.</para></listitem>
</varlistentry>
<varlistentry id="config.munmap"> <varlistentry id="config.munmap">
<term> <term>
<mallctl>config.munmap</mallctl> <mallctl>config.munmap</mallctl>
...@@ -940,10 +901,15 @@ for (i = 0; i < nbins; i++) { ...@@ -940,10 +901,15 @@ for (i = 0; i < nbins; i++) {
<manvolnum>2</manvolnum></citerefentry>) allocation precedence as <manvolnum>2</manvolnum></citerefentry>) allocation precedence as
related to <citerefentry><refentrytitle>mmap</refentrytitle> related to <citerefentry><refentrytitle>mmap</refentrytitle>
<manvolnum>2</manvolnum></citerefentry> allocation. The following <manvolnum>2</manvolnum></citerefentry> allocation. The following
settings are supported: &ldquo;disabled&rdquo;, &ldquo;primary&rdquo;, settings are supported if
and &ldquo;secondary&rdquo;. The default is &ldquo;secondary&rdquo; if <citerefentry><refentrytitle>sbrk</refentrytitle>
<link linkend="config.dss"><mallctl>config.dss</mallctl></link> is <manvolnum>2</manvolnum></citerefentry> is supported by the operating
true, &ldquo;disabled&rdquo; otherwise. system: &ldquo;disabled&rdquo;, &ldquo;primary&rdquo;, and
&ldquo;secondary&rdquo;; otherwise only &ldquo;disabled&rdquo; is
supported. The default is &ldquo;secondary&rdquo; if
<citerefentry><refentrytitle>sbrk</refentrytitle>
<manvolnum>2</manvolnum></citerefentry> is supported by the operating
system; &ldquo;disabled&rdquo; otherwise.
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
...@@ -956,7 +922,7 @@ for (i = 0; i < nbins; i++) { ...@@ -956,7 +922,7 @@ for (i = 0; i < nbins; i++) {
<listitem><para>Virtual memory chunk size (log base 2). If a chunk <listitem><para>Virtual memory chunk size (log base 2). If a chunk
size outside the supported size range is specified, the size is size outside the supported size range is specified, the size is
silently clipped to the minimum/maximum supported size. The default silently clipped to the minimum/maximum supported size. The default
chunk size is 4 MiB (2^22). chunk size is 2 MiB (2^21).
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
...@@ -986,7 +952,11 @@ for (i = 0; i < nbins; i++) { ...@@ -986,7 +952,11 @@ for (i = 0; i < nbins; i++) {
provides the kernel with sufficient information to recycle dirty pages provides the kernel with sufficient information to recycle dirty pages
if physical memory becomes scarce and the pages remain unused. The if physical memory becomes scarce and the pages remain unused. The
default minimum ratio is 8:1 (2^3:1); an option value of -1 will default minimum ratio is 8:1 (2^3:1); an option value of -1 will
disable dirty page purging.</para></listitem> disable dirty page purging. See <link
linkend="arenas.lg_dirty_mult"><mallctl>arenas.lg_dirty_mult</mallctl></link>
and <link
linkend="arena.i.lg_dirty_mult"><mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl></link>
for related dynamic control options.</para></listitem>
</varlistentry> </varlistentry>
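A sketch of the dynamic control mentioned above, writing arenas.lg_dirty_mult through mallctl(); the value 4 (a 16:1 ratio) is an arbitrary assumption:

    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    void set_dirty_ratio(void) {
        /* Enforce at least a 16:1 (2^4:1) active:dirty ratio. */
        ssize_t lg_dirty_mult = 4;
        mallctl("arenas.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
            sizeof(lg_dirty_mult));
    }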
<varlistentry id="opt.stats_print"> <varlistentry id="opt.stats_print">
...@@ -1003,26 +973,34 @@ for (i = 0; i < nbins; i++) { ...@@ -1003,26 +973,34 @@ for (i = 0; i < nbins; i++) {
<option>--enable-stats</option> is specified during configuration, this <option>--enable-stats</option> is specified during configuration, this
has the potential to cause deadlock for a multi-threaded process that has the potential to cause deadlock for a multi-threaded process that
exits while one or more threads are executing in the memory allocation exits while one or more threads are executing in the memory allocation
functions. Therefore, this option should only be used with care; it is functions. Furthermore, <function>atexit<parameter/></function> may
primarily intended as a performance tuning aid during application allocate memory during application initialization and then deadlock
internally when jemalloc in turn calls
<function>atexit<parameter/></function>, so this option is not
universally usable (though the application can register its own
<function>atexit<parameter/></function> function with equivalent
functionality). Therefore, this option should only be used with care;
it is primarily intended as a performance tuning aid during application
development. This option is disabled by default.</para></listitem> development. This option is disabled by default.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.junk"> <varlistentry id="opt.junk">
<term> <term>
<mallctl>opt.junk</mallctl> <mallctl>opt.junk</mallctl>
(<type>bool</type>) (<type>const char *</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-fill</option>] [<option>--enable-fill</option>]
</term> </term>
<listitem><para>Junk filling enabled/disabled. If enabled, each byte <listitem><para>Junk filling. If set to "alloc", each byte of
of uninitialized allocated memory will be initialized to uninitialized allocated memory will be initialized to
<literal>0xa5</literal>. All deallocated memory will be initialized to <literal>0xa5</literal>. If set to "free", all deallocated memory will
<literal>0x5a</literal>. This is intended for debugging and will be initialized to <literal>0x5a</literal>. If set to "true", both
impact performance negatively. This option is disabled by default allocated and deallocated memory will be initialized, and if set to
unless <option>--enable-debug</option> is specified during "false", junk filling will be disabled entirely. This is intended for
configuration, in which case it is enabled by default unless running debugging and will impact performance negatively. This option is
inside <ulink "false" by default unless <option>--enable-debug</option> is specified
during configuration, in which case it is "true" by default unless
running inside <ulink
url="http://valgrind.org/">Valgrind</ulink>.</para></listitem> url="http://valgrind.org/">Valgrind</ulink>.</para></listitem>
</varlistentry> </varlistentry>
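One way to select a junk-filling mode at startup is the malloc_conf global declared in the synopsis, assuming a build with --enable-fill; the same string could instead be supplied via the MALLOC_CONF environment variable:

    /* Link-time default: junk-fill newly allocated memory only
     * (the "alloc" mode described above). Assumes --enable-fill. */
    const char *malloc_conf = "junk:alloc";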
...@@ -1076,9 +1054,8 @@ for (i = 0; i < nbins; i++) { ...@@ -1076,9 +1054,8 @@ for (i = 0; i < nbins; i++) {
<listitem><para>Zero filling enabled/disabled. If enabled, each byte <listitem><para>Zero filling enabled/disabled. If enabled, each byte
of uninitialized allocated memory will be initialized to 0. Note that of uninitialized allocated memory will be initialized to 0. Note that
this initialization only happens once for each byte, so this initialization only happens once for each byte, so
<function>realloc<parameter/></function>, <function>realloc<parameter/></function> and
<function>rallocx<parameter/></function> and <function>rallocx<parameter/></function> calls do not zero memory that
<function>rallocm<parameter/></function> calls do not zero memory that
was previously allocated. This is intended for debugging and will was previously allocated. This is intended for debugging and will
impact performance negatively. This option is disabled by default. impact performance negatively. This option is disabled by default.
</para></listitem> </para></listitem>
...@@ -1097,19 +1074,6 @@ for (i = 0; i < nbins; i++) { ...@@ -1097,19 +1074,6 @@ for (i = 0; i < nbins; i++) {
is disabled by default.</para></listitem> is disabled by default.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.valgrind">
<term>
<mallctl>opt.valgrind</mallctl>
(<type>bool</type>)
<literal>r-</literal>
[<option>--enable-valgrind</option>]
</term>
<listitem><para><ulink url="http://valgrind.org/">Valgrind</ulink>
support enabled/disabled. This option is vestigial because jemalloc
auto-detects whether it is running inside Valgrind. This option is
disabled by default, unless running inside Valgrind.</para></listitem>
</varlistentry>
<varlistentry id="opt.xmalloc"> <varlistentry id="opt.xmalloc">
<term> <term>
<mallctl>opt.xmalloc</mallctl> <mallctl>opt.xmalloc</mallctl>
...@@ -1137,16 +1101,16 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1137,16 +1101,16 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-tcache</option>] [<option>--enable-tcache</option>]
</term> </term>
<listitem><para>Thread-specific caching enabled/disabled. When there <listitem><para>Thread-specific caching (tcache) enabled/disabled. When
are multiple threads, each thread uses a thread-specific cache for there are multiple threads, each thread uses a tcache for objects up to
objects up to a certain size. Thread-specific caching allows many a certain size. Thread-specific caching allows many allocations to be
allocations to be satisfied without performing any thread satisfied without performing any thread synchronization, at the cost of
synchronization, at the cost of increased memory use. See the increased memory use. See the <link
<link
linkend="opt.lg_tcache_max"><mallctl>opt.lg_tcache_max</mallctl></link> linkend="opt.lg_tcache_max"><mallctl>opt.lg_tcache_max</mallctl></link>
option for related tuning information. This option is enabled by option for related tuning information. This option is enabled by
default unless running inside <ulink default unless running inside <ulink
url="http://valgrind.org/">Valgrind</ulink>.</para></listitem> url="http://valgrind.org/">Valgrind</ulink>, in which case it is
forcefully disabled.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.lg_tcache_max"> <varlistentry id="opt.lg_tcache_max">
...@@ -1157,8 +1121,8 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1157,8 +1121,8 @@ malloc_conf = "xmalloc:true";]]></programlisting>
[<option>--enable-tcache</option>] [<option>--enable-tcache</option>]
</term> </term>
<listitem><para>Maximum size class (log base 2) to cache in the <listitem><para>Maximum size class (log base 2) to cache in the
thread-specific cache. At a minimum, all small size classes are thread-specific cache (tcache). At a minimum, all small size classes
cached, and at a maximum all large size classes are cached. The are cached, and at a maximum all large size classes are cached. The
default maximum is 32 KiB (2^15).</para></listitem> default maximum is 32 KiB (2^15).</para></listitem>
</varlistentry> </varlistentry>
...@@ -1183,8 +1147,9 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1183,8 +1147,9 @@ malloc_conf = "xmalloc:true";]]></programlisting>
option for information on high-water-triggered profile dumping, and the option for information on high-water-triggered profile dumping, and the
<link linkend="opt.prof_final"><mallctl>opt.prof_final</mallctl></link> <link linkend="opt.prof_final"><mallctl>opt.prof_final</mallctl></link>
option for final profile dumping. Profile output is compatible with option for final profile dumping. Profile output is compatible with
the included <command>pprof</command> Perl script, which originates the <command>jeprof</command> command, which is based on the
from the <ulink url="http://code.google.com/p/gperftools/">gperftools <command>pprof</command> that is developed as part of the <ulink
url="http://code.google.com/p/gperftools/">gperftools
package</ulink>.</para></listitem> package</ulink>.</para></listitem>
</varlistentry> </varlistentry>
...@@ -1206,7 +1171,7 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1206,7 +1171,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<term> <term>
<mallctl>opt.prof_active</mallctl> <mallctl>opt.prof_active</mallctl>
(<type>bool</type>) (<type>bool</type>)
<literal>rw</literal> <literal>r-</literal>
[<option>--enable-prof</option>] [<option>--enable-prof</option>]
</term> </term>
<listitem><para>Profiling activated/deactivated. This is a secondary <listitem><para>Profiling activated/deactivated. This is a secondary
...@@ -1219,10 +1184,25 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1219,10 +1184,25 @@ malloc_conf = "xmalloc:true";]]></programlisting>
This option is enabled by default.</para></listitem> This option is enabled by default.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.prof_thread_active_init">
<term>
<mallctl>opt.prof_thread_active_init</mallctl>
(<type>bool</type>)
<literal>r-</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Initial setting for <link
linkend="thread.prof.active"><mallctl>thread.prof.active</mallctl></link>
in newly created threads. The initial setting for newly created threads
can also be changed during execution via the <link
linkend="prof.thread_active_init"><mallctl>prof.thread_active_init</mallctl></link>
mallctl. This option is enabled by default.</para></listitem>
</varlistentry>
<varlistentry id="opt.lg_prof_sample"> <varlistentry id="opt.lg_prof_sample">
<term> <term>
<mallctl>opt.lg_prof_sample</mallctl> <mallctl>opt.lg_prof_sample</mallctl>
(<type>ssize_t</type>) (<type>size_t</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-prof</option>] [<option>--enable-prof</option>]
</term> </term>
...@@ -1276,13 +1256,11 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1276,13 +1256,11 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-prof</option>] [<option>--enable-prof</option>]
</term> </term>
<listitem><para>Trigger a memory profile dump every time the total <listitem><para>Set the initial state of <link
virtual memory exceeds the previous maximum. Profiles are dumped to linkend="prof.gdump"><mallctl>prof.gdump</mallctl></link>, which when
files named according to the pattern enabled triggers a memory profile dump every time the total virtual
<filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.u&lt;useq&gt;.heap</filename>, memory exceeds the previous maximum. This option is disabled by
where <literal>&lt;prefix&gt;</literal> is controlled by the <link default.</para></listitem>
linkend="opt.prof_prefix"><mallctl>opt.prof_prefix</mallctl></link>
option. This option is disabled by default.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.prof_final"> <varlistentry id="opt.prof_final">
...@@ -1299,7 +1277,13 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1299,7 +1277,13 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.f.heap</filename>, <filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.f.heap</filename>,
where <literal>&lt;prefix&gt;</literal> is controlled by the <link where <literal>&lt;prefix&gt;</literal> is controlled by the <link
linkend="opt.prof_prefix"><mallctl>opt.prof_prefix</mallctl></link> linkend="opt.prof_prefix"><mallctl>opt.prof_prefix</mallctl></link>
option. This option is enabled by default.</para></listitem> option. Note that <function>atexit<parameter/></function> may allocate
memory during application initialization and then deadlock internally
when jemalloc in turn calls <function>atexit<parameter/></function>, so
this option is not universally usable (though the application can
register its own <function>atexit<parameter/></function> function with
equivalent functionality). This option is disabled by
default.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="opt.prof_leak"> <varlistentry id="opt.prof_leak">
...@@ -1396,7 +1380,7 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1396,7 +1380,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Enable/disable calling thread's tcache. The tcache is <listitem><para>Enable/disable calling thread's tcache. The tcache is
implicitly flushed as a side effect of becoming implicitly flushed as a side effect of becoming
disabled (see <link disabled (see <link
lenkend="thread.tcache.flush"><mallctl>thread.tcache.flush</mallctl></link>). linkend="thread.tcache.flush"><mallctl>thread.tcache.flush</mallctl></link>).
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
...@@ -1407,9 +1391,9 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1407,9 +1391,9 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<literal>--</literal> <literal>--</literal>
[<option>--enable-tcache</option>] [<option>--enable-tcache</option>]
</term> </term>
<listitem><para>Flush calling thread's tcache. This interface releases <listitem><para>Flush calling thread's thread-specific cache (tcache).
all cached objects and internal data structures associated with the This interface releases all cached objects and internal data structures
calling thread's thread-specific cache. Ordinarily, this interface associated with the calling thread's tcache. Ordinarily, this interface
need not be called, since automatic periodic incremental garbage need not be called, since automatic periodic incremental garbage
collection occurs, and the thread cache is automatically discarded when collection occurs, and the thread cache is automatically discarded when
a thread exits. However, garbage collection is triggered by allocation a thread exits. However, garbage collection is triggered by allocation
...@@ -1418,10 +1402,91 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1418,10 +1402,91 @@ malloc_conf = "xmalloc:true";]]></programlisting>
the developer may find manual flushing useful.</para></listitem> the developer may find manual flushing useful.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="thread.prof.name">
<term>
<mallctl>thread.prof.name</mallctl>
(<type>const char *</type>)
<literal>r-</literal> or
<literal>-w</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Get/set the descriptive name associated with the calling
thread in memory profile dumps. An internal copy of the name string is
created, so the input string need not be maintained after this interface
completes execution. The output string of this interface should be
copied for non-ephemeral uses, because multiple implementation details
can cause asynchronous string deallocation. Furthermore, each
invocation of this interface can only read or write; simultaneous
read/write is not supported due to string lifetime limitations. The
name string must be nil-terminated and comprised only of characters in the
sets recognized
by <citerefentry><refentrytitle>isgraph</refentrytitle>
<manvolnum>3</manvolnum></citerefentry> and
<citerefentry><refentrytitle>isblank</refentrytitle>
<manvolnum>3</manvolnum></citerefentry>.</para></listitem>
</varlistentry>
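      <para>As a minimal sketch (not part of this patch), the write side of
      this mallctl can be driven as follows; the helper name is hypothetical,
      and the new value is passed as a pointer to the string pointer per the
      mallctl conventions:</para>
      <programlisting language="C"><![CDATA[
#include <jemalloc/jemalloc.h>

/* Hypothetical helper: label the calling thread for heap profile dumps. */
static int
set_thread_prof_name(const char *name)
{

	/* mallctl() reads the new value through a (const char *) pointer. */
	return (mallctl("thread.prof.name", NULL, NULL, &name,
	    sizeof(const char *)));
}]]></programlisting>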
<varlistentry id="thread.prof.active">
<term>
<mallctl>thread.prof.active</mallctl>
(<type>bool</type>)
<literal>rw</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Control whether sampling is currently active for the
calling thread. This is an activation mechanism in addition to <link
linkend="prof.active"><mallctl>prof.active</mallctl></link>; both must
be active for the calling thread to sample. This flag is enabled by
default.</para></listitem>
</varlistentry>
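      <para>For illustration, a sketch (assuming profiling is enabled) that
      suspends sampling for the calling thread around a phase that should not
      appear in heap profiles:</para>
      <programlisting language="C"><![CDATA[
#include <stdbool.h>
#include <jemalloc/jemalloc.h>

static void
run_unprofiled(void (*work)(void))
{
	bool active = false;

	/* Stop sampling for the calling thread only; prof.active and
	 * other threads are unaffected. */
	mallctl("thread.prof.active", NULL, NULL, &active, sizeof(active));
	work();
	active = true;
	mallctl("thread.prof.active", NULL, NULL, &active, sizeof(active));
}]]></programlisting>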
<varlistentry id="tcache.create">
<term>
<mallctl>tcache.create</mallctl>
(<type>unsigned</type>)
<literal>r-</literal>
[<option>--enable-tcache</option>]
</term>
<listitem><para>Create an explicit thread-specific cache (tcache) and
return an identifier that can be passed to the <link
linkend="MALLOCX_TCACHE"><constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant></link>
macro to explicitly use the specified cache rather than the
automatically managed one that is used by default. Each explicit cache
can be used by only one thread at a time; the application must ensure
that this constraint holds.
</para></listitem>
</varlistentry>
<varlistentry id="tcache.flush">
<term>
<mallctl>tcache.flush</mallctl>
(<type>unsigned</type>)
<literal>-w</literal>
[<option>--enable-tcache</option>]
</term>
<listitem><para>Flush the specified thread-specific cache (tcache). The
same considerations apply to this interface as to <link
linkend="thread.tcache.flush"><mallctl>thread.tcache.flush</mallctl></link>,
except that the tcache will never be automatically discarded.
</para></listitem>
</varlistentry>
<varlistentry id="tcache.destroy">
<term>
<mallctl>tcache.destroy</mallctl>
(<type>unsigned</type>)
<literal>-w</literal>
[<option>--enable-tcache</option>]
</term>
<listitem><para>Flush the specified thread-specific cache (tcache) and
make the identifier available for use during a future tcache creation.
</para></listitem>
</varlistentry>
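      <para>Taken together, the three tcache mallctls support the following
      usage pattern; this is a sketch rather than code from this patch, and
      relies on the <constant>MALLOCX_TCACHE()</constant> macro described
      above:</para>
      <programlisting language="C"><![CDATA[
#include <jemalloc/jemalloc.h>

static void
explicit_tcache_example(void)
{
	unsigned tc;
	size_t sz = sizeof(tc);
	void *p;

	if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
		return;

	/* Allocate and deallocate through the explicit cache. */
	p = mallocx(128, MALLOCX_TCACHE(tc));
	if (p != NULL)
		dallocx(p, MALLOCX_TCACHE(tc));

	/* Release cached objects but keep the cache... */
	mallctl("tcache.flush", NULL, NULL, &tc, sizeof(tc));
	/* ...then retire the identifier for later reuse. */
	mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
}]]></programlisting>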
<varlistentry id="arena.i.purge"> <varlistentry id="arena.i.purge">
<term> <term>
<mallctl>arena.&lt;i&gt;.purge</mallctl> <mallctl>arena.&lt;i&gt;.purge</mallctl>
(<type>unsigned</type>) (<type>void</type>)
<literal>--</literal> <literal>--</literal>
</term> </term>
<listitem><para>Purge unused dirty pages for arena &lt;i&gt;, or for <listitem><para>Purge unused dirty pages for arena &lt;i&gt;, or for
...@@ -1439,14 +1504,222 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1439,14 +1504,222 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Set the precedence of dss allocation as related to mmap <listitem><para>Set the precedence of dss allocation as related to mmap
allocation for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals allocation for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals
<link <link
linkend="arenas.narenas"><mallctl>arenas.narenas</mallctl></link>. Note linkend="arenas.narenas"><mallctl>arenas.narenas</mallctl></link>. See
that even during huge allocation this setting is read from the arena <link linkend="opt.dss"><mallctl>opt.dss</mallctl></link> for supported
that would be chosen for small or large allocation so that applications settings.</para></listitem>
can depend on consistent dss versus mmap allocation regardless of </varlistentry>
allocation size. See <link
linkend="opt.dss"><mallctl>opt.dss</mallctl></link> for supported <varlistentry id="arena.i.lg_dirty_mult">
settings. <term>
</para></listitem> <mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl>
(<type>ssize_t</type>)
<literal>rw</literal>
</term>
<listitem><para>Current per-arena minimum ratio (log base 2) of active
to dirty pages for arena &lt;i&gt;. Each time this interface is set and
the ratio is increased, pages are synchronously purged as necessary to
impose the new ratio. See <link
linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
for additional information.</para></listitem>
</varlistentry>
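      <para>For example, a sketch that tightens purging for arena 0 (the index
      and value are illustrative): raising the ratio from the default of 3 to
      5 caps unused dirty pages at 1/32 of active pages, synchronously purging
      any excess:</para>
      <programlisting language="C"><![CDATA[
#include <jemalloc/jemalloc.h>

static void
tighten_purging(void)
{
	ssize_t lg_dirty_mult = 5;

	mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
	    sizeof(lg_dirty_mult));
}]]></programlisting>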
<varlistentry id="arena.i.chunk_hooks">
<term>
<mallctl>arena.&lt;i&gt;.chunk_hooks</mallctl>
(<type>chunk_hooks_t</type>)
<literal>rw</literal>
</term>
<listitem><para>Get or set the chunk management hook functions for arena
&lt;i&gt;. The functions must be capable of operating on all extant
chunks associated with arena &lt;i&gt;, usually by passing unknown
chunks to the replaced functions. In practice, it is feasible to
control allocation for arenas created via <link
linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link> such
that all chunks originate from an application-supplied chunk allocator
(by setting custom chunk hook functions just after arena creation), but
the automatically created arenas may have already created chunks prior
to the application having an opportunity to take over chunk
allocation.</para>
<programlisting language="C"><![CDATA[
typedef struct {
chunk_alloc_t *alloc;
chunk_dalloc_t *dalloc;
chunk_commit_t *commit;
chunk_decommit_t *decommit;
chunk_purge_t *purge;
chunk_split_t *split;
chunk_merge_t *merge;
} chunk_hooks_t;]]></programlisting>
<para>The <type>chunk_hooks_t</type> structure comprises function
pointers which are described individually below. jemalloc uses these
functions to manage chunk lifetime, which starts off with allocation of
mapped committed memory, in the simplest case followed by deallocation.
However, there are performance and platform reasons to retain chunks for
later reuse. Cleanup attempts cascade from deallocation to decommit to
purging, which gives the chunk management functions opportunities to
reject the most permanent cleanup operations in favor of less permanent
(and often less costly) operations. The chunk splitting and merging
operations can also be opted out of, but this is mainly intended to
support platforms on which virtual memory mappings provided by the
operating system kernel do not automatically coalesce and split, e.g.
Windows.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef void *<function>(chunk_alloc_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>alignment</parameter></paramdef>
<paramdef>bool *<parameter>zero</parameter></paramdef>
<paramdef>bool *<parameter>commit</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk allocation function conforms to the
<type>chunk_alloc_t</type> type and upon success returns a pointer to
<parameter>size</parameter> bytes of mapped memory on behalf of arena
<parameter>arena_ind</parameter> such that the chunk's base address is a
multiple of <parameter>alignment</parameter>, as well as setting
<parameter>*zero</parameter> to indicate whether the chunk is zeroed and
<parameter>*commit</parameter> to indicate whether the chunk is
committed. Upon error the function returns <constant>NULL</constant>
and leaves <parameter>*zero</parameter> and
<parameter>*commit</parameter> unmodified. The
<parameter>size</parameter> parameter is always a multiple of the chunk
size. The <parameter>alignment</parameter> parameter is always a power
of two at least as large as the chunk size. Zeroing is mandatory if
<parameter>*zero</parameter> is true upon function entry. Committing is
mandatory if <parameter>*commit</parameter> is true upon function entry.
If <parameter>chunk</parameter> is not <constant>NULL</constant>, the
returned pointer must be <parameter>chunk</parameter> on success or
<constant>NULL</constant> on error. Committed memory may be committed
in absolute terms as on a system that does not overcommit, or in
implicit terms as on a system that overcommits and satisfies physical
memory needs on demand via soft page faults. Note that replacing the
default chunk allocation function makes the arena's <link
linkend="arena.i.dss"><mallctl>arena.&lt;i&gt;.dss</mallctl></link>
setting irrelevant.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_dalloc_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>bool <parameter>committed</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>
A chunk deallocation function conforms to the
<type>chunk_dalloc_t</type> type and deallocates a
<parameter>chunk</parameter> of given <parameter>size</parameter> with
<parameter>committed</parameter>/decommitted memory as indicated, on
behalf of arena <parameter>arena_ind</parameter>, returning false upon
success. If the function returns true, this indicates opt-out from
deallocation; the virtual memory mapping associated with the chunk
remains mapped, in the same commit state, and available for future use,
in which case it will be automatically retained for later reuse.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_commit_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>offset</parameter></paramdef>
<paramdef>size_t <parameter>length</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk commit function conforms to the
<type>chunk_commit_t</type> type and commits zeroed physical memory to
back pages within a <parameter>chunk</parameter> of given
<parameter>size</parameter> at <parameter>offset</parameter> bytes,
extending for <parameter>length</parameter> on behalf of arena
<parameter>arena_ind</parameter>, returning false upon success.
Committed memory may be committed in absolute terms as on a system that
does not overcommit, or in implicit terms as on a system that
overcommits and satisfies physical memory needs on demand via soft page
faults. If the function returns true, this indicates insufficient
physical memory to satisfy the request.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_decommit_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>offset</parameter></paramdef>
<paramdef>size_t <parameter>length</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk decommit function conforms to the
<type>chunk_decommit_t</type> type and decommits any physical memory
that is backing pages within a <parameter>chunk</parameter> of given
<parameter>size</parameter> at <parameter>offset</parameter> bytes,
extending for <parameter>length</parameter> on behalf of arena
<parameter>arena_ind</parameter>, returning false upon success, in which
case the pages will be committed via the chunk commit function before
being reused. If the function returns true, this indicates opt-out from
decommit; the memory remains committed and available for future use, in
which case it will be automatically retained for later reuse.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_purge_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>offset</parameter></paramdef>
<paramdef>size_t <parameter>length</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk purge function conforms to the <type>chunk_purge_t</type>
type and optionally discards physical pages within the virtual memory
mapping associated with <parameter>chunk</parameter> of given
<parameter>size</parameter> at <parameter>offset</parameter> bytes,
extending for <parameter>length</parameter> on behalf of arena
<parameter>arena_ind</parameter>, returning false if pages within the
purged virtual memory range will be zero-filled the next time they are
accessed.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_split_t)</function></funcdef>
<paramdef>void *<parameter>chunk</parameter></paramdef>
<paramdef>size_t <parameter>size</parameter></paramdef>
<paramdef>size_t <parameter>size_a</parameter></paramdef>
<paramdef>size_t <parameter>size_b</parameter></paramdef>
<paramdef>bool <parameter>committed</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk split function conforms to the <type>chunk_split_t</type>
type and optionally splits <parameter>chunk</parameter> of given
<parameter>size</parameter> into two adjacent chunks, the first of
<parameter>size_a</parameter> bytes, and the second of
<parameter>size_b</parameter> bytes, operating on
<parameter>committed</parameter>/decommitted memory as indicated, on
behalf of arena <parameter>arena_ind</parameter>, returning false upon
success. If the function returns true, this indicates that the chunk
remains unsplit and therefore should continue to be operated on as a
whole.</para>
<funcsynopsis><funcprototype>
<funcdef>typedef bool <function>(chunk_merge_t)</function></funcdef>
<paramdef>void *<parameter>chunk_a</parameter></paramdef>
<paramdef>size_t <parameter>size_a</parameter></paramdef>
<paramdef>void *<parameter>chunk_b</parameter></paramdef>
<paramdef>size_t <parameter>size_b</parameter></paramdef>
<paramdef>bool <parameter>committed</parameter></paramdef>
<paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
</funcprototype></funcsynopsis>
<literallayout></literallayout>
<para>A chunk merge function conforms to the <type>chunk_merge_t</type>
type and optionally merges adjacent chunks,
<parameter>chunk_a</parameter> of given <parameter>size_a</parameter>
and <parameter>chunk_b</parameter> of given
<parameter>size_b</parameter> into one contiguous chunk, operating on
<parameter>committed</parameter>/decommitted memory as indicated, on
behalf of arena <parameter>arena_ind</parameter>, returning false upon
success. If the function returns true, this indicates that the chunks
remain distinct mappings and therefore should continue to be operated on
independently.</para>
</listitem>
</varlistentry> </varlistentry>
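      <para>As a sketch of the read-modify-write pattern this interface
      implies (not part of this patch): read the current hooks, replace only
      the allocation hook, and write the structure back. The
      <function>mmap<parameter/></function>-based hook below is deliberately
      simplified; a production hook must over-allocate and trim to guarantee
      alignment, and the example assumes arena index 0:</para>
      <programlisting language="C"><![CDATA[
#include <stdbool.h>
#include <stdint.h>
#include <sys/mman.h>
#include <jemalloc/jemalloc.h>

static void *
my_chunk_alloc(void *chunk, size_t size, size_t alignment, bool *zero,
    bool *commit, unsigned arena_ind)
{
	void *ret;

	(void)arena_ind;	/* Unused in this sketch. */
	if (chunk != NULL)
		return (NULL);	/* Decline requests for a specific address. */
	ret = mmap(NULL, size, PROT_READ|PROT_WRITE,
	    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (ret == MAP_FAILED)
		return (NULL);
	if (((uintptr_t)ret & (alignment - 1)) != 0) {
		/* A real hook would over-allocate and trim; give up here. */
		munmap(ret, size);
		return (NULL);
	}
	*zero = true;	/* Fresh anonymous mappings are zero-filled... */
	*commit = true;	/* ...and committed, in the overcommit sense. */
	return (ret);
}

static void
install_chunk_alloc_hook(void)
{
	chunk_hooks_t hooks;
	size_t sz = sizeof(hooks);

	if (mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0) != 0)
		return;
	hooks.alloc = my_chunk_alloc;
	mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
}]]></programlisting>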
<varlistentry id="arenas.narenas"> <varlistentry id="arenas.narenas">
...@@ -1470,6 +1743,20 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1470,6 +1743,20 @@ malloc_conf = "xmalloc:true";]]></programlisting>
initialized.</para></listitem> initialized.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="arenas.lg_dirty_mult">
<term>
<mallctl>arenas.lg_dirty_mult</mallctl>
(<type>ssize_t</type>)
<literal>rw</literal>
</term>
<listitem><para>Current default per-arena minimum ratio (log base 2) of
active to dirty pages, used to initialize <link
linkend="arena.i.lg_dirty_mult"><mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl></link>
during arena creation. See <link
linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
for additional information.</para></listitem>
</varlistentry>
<varlistentry id="arenas.quantum"> <varlistentry id="arenas.quantum">
<term> <term>
<mallctl>arenas.quantum</mallctl> <mallctl>arenas.quantum</mallctl>
...@@ -1548,7 +1835,7 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1548,7 +1835,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<varlistentry id="arenas.nlruns"> <varlistentry id="arenas.nlruns">
<term> <term>
<mallctl>arenas.nlruns</mallctl> <mallctl>arenas.nlruns</mallctl>
(<type>size_t</type>) (<type>unsigned</type>)
<literal>r-</literal> <literal>r-</literal>
</term> </term>
<listitem><para>Total number of large size classes.</para></listitem> <listitem><para>Total number of large size classes.</para></listitem>
...@@ -1564,14 +1851,23 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1564,14 +1851,23 @@ malloc_conf = "xmalloc:true";]]></programlisting>
class.</para></listitem> class.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="arenas.purge"> <varlistentry id="arenas.nhchunks">
<term> <term>
<mallctl>arenas.purge</mallctl> <mallctl>arenas.nhchunks</mallctl>
(<type>unsigned</type>) (<type>unsigned</type>)
<literal>-w</literal> <literal>r-</literal>
</term> </term>
<listitem><para>Purge unused dirty pages for the specified arena, or <listitem><para>Total number of huge size classes.</para></listitem>
for all arenas if none is specified.</para></listitem> </varlistentry>
<varlistentry id="arenas.hchunk.i.size">
<term>
<mallctl>arenas.hchunk.&lt;i&gt;.size</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
</term>
<listitem><para>Maximum size supported by this huge size
class.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="arenas.extend"> <varlistentry id="arenas.extend">
...@@ -1584,6 +1880,20 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1584,6 +1880,20 @@ malloc_conf = "xmalloc:true";]]></programlisting>
and returning the new arena index.</para></listitem> and returning the new arena index.</para></listitem>
</varlistentry> </varlistentry>
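      <para>A sketch combining this with the
      <constant>MALLOCX_ARENA()</constant> macro (function name illustrative);
      <constant>MALLOCX_TCACHE_NONE</constant> bypasses the thread cache so
      the request is served directly by the new arena:</para>
      <programlisting language="C"><![CDATA[
#include <jemalloc/jemalloc.h>

static void *
alloc_from_new_arena(size_t size)
{
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);

	if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
		return (NULL);
	return (mallocx(size, MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE));
}]]></programlisting>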
<varlistentry id="prof.thread_active_init">
<term>
<mallctl>prof.thread_active_init</mallctl>
(<type>bool</type>)
<literal>rw</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Control the initial setting for <link
linkend="thread.prof.active"><mallctl>thread.prof.active</mallctl></link>
in newly created threads. See the <link
linkend="opt.prof_thread_active_init"><mallctl>opt.prof_thread_active_init</mallctl></link>
option for additional information.</para></listitem>
</varlistentry>
<varlistentry id="prof.active"> <varlistentry id="prof.active">
<term> <term>
<mallctl>prof.active</mallctl> <mallctl>prof.active</mallctl>
...@@ -1594,8 +1904,9 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1594,8 +1904,9 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Control whether sampling is currently active. See the <listitem><para>Control whether sampling is currently active. See the
<link <link
linkend="opt.prof_active"><mallctl>opt.prof_active</mallctl></link> linkend="opt.prof_active"><mallctl>opt.prof_active</mallctl></link>
option for additional information. option for additional information, as well as the interrelated <link
</para></listitem> linkend="thread.prof.active"><mallctl>thread.prof.active</mallctl></link>
mallctl.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="prof.dump"> <varlistentry id="prof.dump">
...@@ -1614,6 +1925,49 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1614,6 +1925,49 @@ malloc_conf = "xmalloc:true";]]></programlisting>
option.</para></listitem> option.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="prof.gdump">
<term>
<mallctl>prof.gdump</mallctl>
(<type>bool</type>)
<literal>rw</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>When enabled, trigger a memory profile dump every time
the total virtual memory exceeds the previous maximum. Profiles are
dumped to files named according to the pattern
<filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.u&lt;useq&gt;.heap</filename>,
where <literal>&lt;prefix&gt;</literal> is controlled by the <link
linkend="opt.prof_prefix"><mallctl>opt.prof_prefix</mallctl></link>
option.</para></listitem>
</varlistentry>
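      <para>Because this control is now writable at runtime, high-water-mark
      dumps can be confined to a suspect phase instead of the whole run; a
      minimal sketch:</para>
      <programlisting language="C"><![CDATA[
#include <stdbool.h>
#include <jemalloc/jemalloc.h>

static void
gdump_around(void (*phase)(void))
{
	bool enable = true;

	mallctl("prof.gdump", NULL, NULL, &enable, sizeof(enable));
	phase();
	enable = false;
	mallctl("prof.gdump", NULL, NULL, &enable, sizeof(enable));
}]]></programlisting>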
<varlistentry id="prof.reset">
<term>
<mallctl>prof.reset</mallctl>
(<type>size_t</type>)
<literal>-w</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Reset all memory profile statistics, and optionally
update the sample rate (see <link
linkend="opt.lg_prof_sample"><mallctl>opt.lg_prof_sample</mallctl></link>
and <link
linkend="prof.lg_sample"><mallctl>prof.lg_sample</mallctl></link>).
</para></listitem>
</varlistentry>
<varlistentry id="prof.lg_sample">
<term>
<mallctl>prof.lg_sample</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-prof</option>]
</term>
<listitem><para>Get the current sample rate (see <link
linkend="opt.lg_prof_sample"><mallctl>opt.lg_prof_sample</mallctl></link>).
</para></listitem>
</varlistentry>
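      <para>A sketch tying the two interfaces together (values illustrative):
      reset the profile, move to sampling roughly every
      2<superscript>20</superscript> bytes, and read the effective rate
      back:</para>
      <programlisting language="C"><![CDATA[
#include <assert.h>
#include <jemalloc/jemalloc.h>

static void
reset_profile_sampling(void)
{
	size_t lg_sample = 20;
	size_t cur, sz = sizeof(cur);

	mallctl("prof.reset", NULL, NULL, &lg_sample, sizeof(lg_sample));
	if (mallctl("prof.lg_sample", &cur, &sz, NULL, 0) == 0)
		assert(cur == lg_sample);
}]]></programlisting>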
<varlistentry id="prof.interval"> <varlistentry id="prof.interval">
<term> <term>
<mallctl>prof.interval</mallctl> <mallctl>prof.interval</mallctl>
...@@ -1637,9 +1991,8 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1637,9 +1991,8 @@ malloc_conf = "xmalloc:true";]]></programlisting>
</term> </term>
<listitem><para>Pointer to a counter that contains an approximate count <listitem><para>Pointer to a counter that contains an approximate count
of the current number of bytes in active pages. The estimate may be of the current number of bytes in active pages. The estimate may be
high, but never low, because each arena rounds up to the nearest high, but never low, because each arena rounds up when computing its
multiple of the chunk size when computing its contribution to the contribution to the counter. Note that the <link
counter. Note that the <link
linkend="epoch"><mallctl>epoch</mallctl></link> mallctl has no bearing linkend="epoch"><mallctl>epoch</mallctl></link> mallctl has no bearing
on this counter. Furthermore, counter consistency is maintained via on this counter. Furthermore, counter consistency is maintained via
atomic operations, so it is necessary to use an atomic operation in atomic operations, so it is necessary to use an atomic operation in
...@@ -1670,88 +2023,56 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1670,88 +2023,56 @@ malloc_conf = "xmalloc:true";]]></programlisting>
equal to <link equal to <link
linkend="stats.allocated"><mallctl>stats.allocated</mallctl></link>. linkend="stats.allocated"><mallctl>stats.allocated</mallctl></link>.
This does not include <link linkend="stats.arenas.i.pdirty"> This does not include <link linkend="stats.arenas.i.pdirty">
<mallctl>stats.arenas.&lt;i&gt;.pdirty</mallctl></link> and pages <mallctl>stats.arenas.&lt;i&gt;.pdirty</mallctl></link>, nor pages
entirely devoted to allocator metadata.</para></listitem> entirely devoted to allocator metadata.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.mapped"> <varlistentry id="stats.metadata">
<term> <term>
<mallctl>stats.mapped</mallctl> <mallctl>stats.metadata</mallctl>
(<type>size_t</type>) (<type>size_t</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-stats</option>] [<option>--enable-stats</option>]
</term> </term>
<listitem><para>Total number of bytes in chunks mapped on behalf of the <listitem><para>Total number of bytes dedicated to metadata, which
application. This is a multiple of the chunk size, and is at least as comprise base allocations used for bootstrap-sensitive internal
large as <link allocator data structures, arena chunk headers (see <link
linkend="stats.active"><mallctl>stats.active</mallctl></link>. This linkend="stats.arenas.i.metadata.mapped"><mallctl>stats.arenas.&lt;i&gt;.metadata.mapped</mallctl></link>),
does not include inactive chunks.</para></listitem> and internal allocations (see <link
</varlistentry> linkend="stats.arenas.i.metadata.allocated"><mallctl>stats.arenas.&lt;i&gt;.metadata.allocated</mallctl></link>).</para></listitem>
<varlistentry id="stats.chunks.current">
<term>
<mallctl>stats.chunks.current</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Total number of chunks actively mapped on behalf of the
application. This does not include inactive chunks.
</para></listitem>
</varlistentry>
<varlistentry id="stats.chunks.total">
<term>
<mallctl>stats.chunks.total</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of chunks allocated.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.chunks.high"> <varlistentry id="stats.resident">
<term> <term>
<mallctl>stats.chunks.high</mallctl> <mallctl>stats.resident</mallctl>
(<type>size_t</type>) (<type>size_t</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-stats</option>] [<option>--enable-stats</option>]
</term> </term>
<listitem><para>Maximum number of active chunks at any time thus far. <listitem><para>Maximum number of bytes in physically resident data
</para></listitem> pages mapped by the allocator, comprising all pages dedicated to
allocator metadata, pages backing active allocations, and unused dirty
pages. This is a maximum rather than a precise value because pages may not
actually be physically resident if they correspond to demand-zeroed
virtual memory that has not yet been touched. This is a multiple of the
page size, and is larger than <link
linkend="stats.active"><mallctl>stats.active</mallctl></link>.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.huge.allocated"> <varlistentry id="stats.mapped">
<term> <term>
<mallctl>stats.huge.allocated</mallctl> <mallctl>stats.mapped</mallctl>
(<type>size_t</type>) (<type>size_t</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-stats</option>] [<option>--enable-stats</option>]
</term> </term>
<listitem><para>Number of bytes currently allocated by huge objects. <listitem><para>Total number of bytes in active chunks mapped by the
</para></listitem> allocator. This is a multiple of the chunk size, and is larger than
</varlistentry> <link linkend="stats.active"><mallctl>stats.active</mallctl></link>.
This does not include inactive chunks, even those that contain unused
<varlistentry id="stats.huge.nmalloc"> dirty pages, which means that there is no strict ordering between this
<term> and <link
<mallctl>stats.huge.nmalloc</mallctl> linkend="stats.resident"><mallctl>stats.resident</mallctl></link>.</para></listitem>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of huge allocation requests.
</para></listitem>
</varlistentry>
<varlistentry id="stats.huge.ndalloc">
<term>
<mallctl>stats.huge.ndalloc</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of huge deallocation requests.
</para></listitem>
</varlistentry> </varlistentry>
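      <para>A sketch of reading the mapping-level counters described above
      (function name illustrative); the <mallctl>epoch</mallctl> mallctl is
      written first to refresh the cached statistics snapshot:</para>
      <programlisting language="C"><![CDATA[
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
report_memory(void)
{
	uint64_t epoch = 1;
	size_t sz = sizeof(epoch);
	size_t allocated, active, metadata, resident, mapped;

	/* Refresh the cached statistics snapshot. */
	mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

	sz = sizeof(size_t);
	mallctl("stats.allocated", &allocated, &sz, NULL, 0);
	mallctl("stats.active", &active, &sz, NULL, 0);
	mallctl("stats.metadata", &metadata, &sz, NULL, 0);
	mallctl("stats.resident", &resident, &sz, NULL, 0);
	mallctl("stats.mapped", &mapped, &sz, NULL, 0);
	printf("allocated/active/metadata/resident/mapped: "
	    "%zu/%zu/%zu/%zu/%zu\n", allocated, active, metadata,
	    resident, mapped);
}]]></programlisting>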
<varlistentry id="stats.arenas.i.dss"> <varlistentry id="stats.arenas.i.dss">
...@@ -1768,6 +2089,18 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1768,6 +2089,18 @@ malloc_conf = "xmalloc:true";]]></programlisting>
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.lg_dirty_mult">
<term>
<mallctl>stats.arenas.&lt;i&gt;.lg_dirty_mult</mallctl>
(<type>ssize_t</type>)
<literal>r-</literal>
</term>
<listitem><para>Minimum ratio (log base 2) of active to dirty pages.
See <link
linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
for details.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.nthreads"> <varlistentry id="stats.arenas.i.nthreads">
<term> <term>
<mallctl>stats.arenas.&lt;i&gt;.nthreads</mallctl> <mallctl>stats.arenas.&lt;i&gt;.nthreads</mallctl>
...@@ -1809,6 +2142,38 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1809,6 +2142,38 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Number of mapped bytes.</para></listitem> <listitem><para>Number of mapped bytes.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.metadata.mapped">
<term>
<mallctl>stats.arenas.&lt;i&gt;.metadata.mapped</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Number of mapped bytes in arena chunk headers, which
track the states of the non-metadata pages.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.metadata.allocated">
<term>
<mallctl>stats.arenas.&lt;i&gt;.metadata.allocated</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Number of bytes dedicated to internal allocations.
Internal allocations differ from application-originated allocations in
that they are for internal use, and that they are omitted from heap
profiles. This statistic is reported separately from <link
linkend="stats.metadata"><mallctl>stats.metadata</mallctl></link> and
<link
linkend="stats.arenas.i.metadata.mapped"><mallctl>stats.arenas.&lt;i&gt;.metadata.mapped</mallctl></link>
because it overlaps with e.g. the <link
linkend="stats.allocated"><mallctl>stats.allocated</mallctl></link> and
<link linkend="stats.active"><mallctl>stats.active</mallctl></link>
statistics, whereas the other metadata statistics do
not.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.npurge"> <varlistentry id="stats.arenas.i.npurge">
<term> <term>
<mallctl>stats.arenas.&lt;i&gt;.npurge</mallctl> <mallctl>stats.arenas.&lt;i&gt;.npurge</mallctl>
...@@ -1930,15 +2295,48 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1930,15 +2295,48 @@ malloc_conf = "xmalloc:true";]]></programlisting>
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.bins.j.allocated"> <varlistentry id="stats.arenas.i.huge.allocated">
<term> <term>
<mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.allocated</mallctl> <mallctl>stats.arenas.&lt;i&gt;.huge.allocated</mallctl>
(<type>size_t</type>) (<type>size_t</type>)
<literal>r-</literal> <literal>r-</literal>
[<option>--enable-stats</option>] [<option>--enable-stats</option>]
</term> </term>
<listitem><para>Current number of bytes allocated by <listitem><para>Number of bytes currently allocated by huge objects.
bin.</para></listitem> </para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.huge.nmalloc">
<term>
<mallctl>stats.arenas.&lt;i&gt;.huge.nmalloc</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of huge allocation requests served
directly by the arena.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.huge.ndalloc">
<term>
<mallctl>stats.arenas.&lt;i&gt;.huge.ndalloc</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of huge deallocation requests served
directly by the arena.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.huge.nrequests">
<term>
<mallctl>stats.arenas.&lt;i&gt;.huge.nrequests</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of huge allocation requests.
</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.bins.j.nmalloc"> <varlistentry id="stats.arenas.i.bins.j.nmalloc">
...@@ -1974,6 +2372,17 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -1974,6 +2372,17 @@ malloc_conf = "xmalloc:true";]]></programlisting>
requests.</para></listitem> requests.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.bins.j.curregs">
<term>
<mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.curregs</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Current number of regions for this size
class.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.bins.j.nfills"> <varlistentry id="stats.arenas.i.bins.j.nfills">
<term> <term>
<mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nfills</mallctl> <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nfills</mallctl>
...@@ -2068,6 +2477,50 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -2068,6 +2477,50 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Current number of runs for this size class. <listitem><para>Current number of runs for this size class.
</para></listitem> </para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="stats.arenas.i.hchunks.j.nmalloc">
<term>
<mallctl>stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.nmalloc</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of allocation requests for this size
class served directly by the arena.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.hchunks.j.ndalloc">
<term>
<mallctl>stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.ndalloc</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of deallocation requests for this
size class served directly by the arena.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.hchunks.j.nrequests">
<term>
<mallctl>stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.nrequests</mallctl>
(<type>uint64_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Cumulative number of allocation requests for this size
class.</para></listitem>
</varlistentry>
<varlistentry id="stats.arenas.i.hchunks.j.curhchunks">
<term>
<mallctl>stats.arenas.&lt;i&gt;.hchunks.&lt;j&gt;.curhchunks</mallctl>
(<type>size_t</type>)
<literal>r-</literal>
[<option>--enable-stats</option>]
</term>
<listitem><para>Current number of huge allocations for this size class.
</para></listitem>
</varlistentry>
</variablelist> </variablelist>
</refsect1> </refsect1>
<refsect1 id="debugging_malloc_problems"> <refsect1 id="debugging_malloc_problems">
...@@ -2253,42 +2706,6 @@ malloc_conf = "xmalloc:true";]]></programlisting> ...@@ -2253,42 +2706,6 @@ malloc_conf = "xmalloc:true";]]></programlisting>
returns the usable size of the allocation pointed to by returns the usable size of the allocation pointed to by
<parameter>ptr</parameter>. </para> <parameter>ptr</parameter>. </para>
</refsect2> </refsect2>
<refsect2>
<title>Experimental API</title>
<para>The <function>allocm<parameter/></function>,
<function>rallocm<parameter/></function>,
<function>sallocm<parameter/></function>,
<function>dallocm<parameter/></function>, and
<function>nallocm<parameter/></function> functions return
<constant>ALLOCM_SUCCESS</constant> on success; otherwise they return an
error value. The <function>allocm<parameter/></function>,
<function>rallocm<parameter/></function>, and
<function>nallocm<parameter/></function> functions will fail if:
<variablelist>
<varlistentry>
<term><errorname>ALLOCM_ERR_OOM</errorname></term>
<listitem><para>Out of memory. Insufficient contiguous memory was
available to service the allocation request. The
<function>allocm<parameter/></function> function additionally sets
<parameter>*ptr</parameter> to <constant>NULL</constant>, whereas
the <function>rallocm<parameter/></function> function leaves
<constant>*ptr</constant> unmodified.</para></listitem>
</varlistentry>
</variablelist>
The <function>rallocm<parameter/></function> function will also
fail if:
<variablelist>
<varlistentry>
<term><errorname>ALLOCM_ERR_NOT_MOVED</errorname></term>
<listitem><para><constant>ALLOCM_NO_MOVE</constant> was specified,
but the reallocation request could not be serviced without moving
the object.</para></listitem>
</varlistentry>
</variablelist>
</para>
</refsect2>
</refsect1> </refsect1>
<refsect1 id="environment"> <refsect1 id="environment">
<title>ENVIRONMENT</title> <title>ENVIRONMENT</title>
......
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_TYPES #ifdef JEMALLOC_H_TYPES
/* #define LARGE_MINCLASS (ZU(1) << LG_LARGE_MINCLASS)
* RUN_MAX_OVRHD indicates maximum desired run header overhead. Runs are sized
* as small as possible such that this setting is still honored, without
* violating other constraints. The goal is to make runs as small as possible
* without exceeding a per run external fragmentation threshold.
*
* We use binary fixed point math for overhead computations, where the binary
* point is implicitly RUN_BFP bits to the left.
*
* Note that it is possible to set RUN_MAX_OVRHD low enough that it cannot be
* honored for some/all object sizes, since when heap profiling is enabled
* there is one pointer of header overhead per object (plus a constant). This
* constraint is relaxed (ignored) for runs that are so small that the
* per-region overhead is greater than:
*
* (RUN_MAX_OVRHD / (reg_interval << (3+RUN_BFP))
*/
#define RUN_BFP 12
/* \/ Implicit binary fixed point. */
#define RUN_MAX_OVRHD 0x0000003dU
#define RUN_MAX_OVRHD_RELAX 0x00001800U
/* Maximum number of regions in one run. */ /* Maximum number of regions in one run. */
#define LG_RUN_MAXREGS 11 #define LG_RUN_MAXREGS (LG_PAGE - LG_TINY_MIN)
#define RUN_MAXREGS (1U << LG_RUN_MAXREGS) #define RUN_MAXREGS (1U << LG_RUN_MAXREGS)
/* /*
...@@ -36,16 +16,18 @@ ...@@ -36,16 +16,18 @@
/* /*
* The minimum ratio of active:dirty pages per arena is computed as: * The minimum ratio of active:dirty pages per arena is computed as:
* *
* (nactive >> opt_lg_dirty_mult) >= ndirty * (nactive >> lg_dirty_mult) >= ndirty
* *
* So, supposing that opt_lg_dirty_mult is 3, there can be no less than 8 times * So, supposing that lg_dirty_mult is 3, there can be no less than 8 times as
* as many active pages as dirty pages. * many active pages as dirty pages.
*/ */
#define LG_DIRTY_MULT_DEFAULT 3 #define LG_DIRTY_MULT_DEFAULT 3
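/*
 * Illustrative sketch (not part of this header): the purge trigger implied
 * by the invariant above.  With lg_dirty_mult == 3 and 8192 active pages,
 * at most 1024 unused dirty pages may be retained:
 *
 *	if ((nactive >> lg_dirty_mult) < ndirty)
 *		purge_dirty_pages(arena);  // hypothetical; restores invariant
 */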
typedef struct arena_chunk_map_s arena_chunk_map_t; typedef struct arena_runs_dirty_link_s arena_runs_dirty_link_t;
typedef struct arena_chunk_s arena_chunk_t;
typedef struct arena_run_s arena_run_t; typedef struct arena_run_s arena_run_t;
typedef struct arena_chunk_map_bits_s arena_chunk_map_bits_t;
typedef struct arena_chunk_map_misc_s arena_chunk_map_misc_t;
typedef struct arena_chunk_s arena_chunk_t;
typedef struct arena_bin_info_s arena_bin_info_t; typedef struct arena_bin_info_s arena_bin_info_t;
typedef struct arena_bin_s arena_bin_t; typedef struct arena_bin_s arena_bin_t;
typedef struct arena_s arena_t; typedef struct arena_s arena_t;
...@@ -54,54 +36,34 @@ typedef struct arena_s arena_t; ...@@ -54,54 +36,34 @@ typedef struct arena_s arena_t;
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS #ifdef JEMALLOC_H_STRUCTS
/* Each element of the chunk map corresponds to one page within the chunk. */ #ifdef JEMALLOC_ARENA_STRUCTS_A
struct arena_chunk_map_s { struct arena_run_s {
#ifndef JEMALLOC_PROF /* Index of bin this run is associated with. */
/* szind_t binind;
* Overlay prof_ctx in order to allow it to be referenced by dead code.
* Such antics aren't warranted for per arena data structures, but
* chunk map overhead accounts for a percentage of memory, rather than
* being just a fixed cost.
*/
union {
#endif
union {
/*
* Linkage for run trees. There are two disjoint uses:
*
* 1) arena_t's runs_avail tree.
* 2) arena_run_t conceptually uses this linkage for in-use
* non-full runs, rather than directly embedding linkage.
*/
rb_node(arena_chunk_map_t) rb_link;
/*
* List of runs currently in purgatory. arena_chunk_purge()
* temporarily allocates runs that contain dirty pages while
* purging, so that other threads cannot use the runs while the
* purging thread is operating without the arena lock held.
*/
ql_elm(arena_chunk_map_t) ql_link;
} u;
/* Profile counters, used for large object runs. */ /* Number of free regions in run. */
prof_ctx_t *prof_ctx; unsigned nfree;
#ifndef JEMALLOC_PROF
}; /* union { ... }; */ /* Per region allocated/deallocated bitmap. */
#endif bitmap_t bitmap[BITMAP_GROUPS_MAX];
};
/* Each element of the chunk map corresponds to one page within the chunk. */
struct arena_chunk_map_bits_s {
/* /*
* Run address (or size) and various flags are stored together. The bit * Run address (or size) and various flags are stored together. The bit
* layout looks like (assuming 32-bit system): * layout looks like (assuming 32-bit system):
* *
* ???????? ???????? ????nnnn nnnndula * ???????? ???????? ???nnnnn nnndumla
* *
* ? : Unallocated: Run address for first/last pages, unset for internal * ? : Unallocated: Run address for first/last pages, unset for internal
* pages. * pages.
* Small: Run page offset. * Small: Run page offset.
* Large: Run size for first page, unset for trailing pages. * Large: Run page count for first page, unset for trailing pages.
* n : binind for small size class, BININD_INVALID for large size class. * n : binind for small size class, BININD_INVALID for large size class.
* d : dirty? * d : dirty?
* u : unzeroed? * u : unzeroed?
* m : decommitted?
* l : large? * l : large?
* a : allocated? * a : allocated?
* *
...@@ -110,78 +72,109 @@ struct arena_chunk_map_s { ...@@ -110,78 +72,109 @@ struct arena_chunk_map_s {
* p : run page offset * p : run page offset
* s : run size * s : run size
* n : binind for size class; large objects set these to BININD_INVALID * n : binind for size class; large objects set these to BININD_INVALID
* except for promoted allocations (see prof_promote)
* x : don't care * x : don't care
* - : 0 * - : 0
* + : 1 * + : 1
* [DULA] : bit set * [DUMLA] : bit set
* [dula] : bit unset * [dumla] : bit unset
* *
* Unallocated (clean): * Unallocated (clean):
* ssssssss ssssssss ssss++++ ++++du-a * ssssssss ssssssss sss+++++ +++dum-a
* xxxxxxxx xxxxxxxx xxxxxxxx xxxx-Uxx * xxxxxxxx xxxxxxxx xxxxxxxx xxx-Uxxx
* ssssssss ssssssss ssss++++ ++++dU-a * ssssssss ssssssss sss+++++ +++dUm-a
* *
* Unallocated (dirty): * Unallocated (dirty):
* ssssssss ssssssss ssss++++ ++++D--a * ssssssss ssssssss sss+++++ +++D-m-a
* xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
* ssssssss ssssssss ssss++++ ++++D--a * ssssssss ssssssss sss+++++ +++D-m-a
* *
* Small: * Small:
* pppppppp pppppppp ppppnnnn nnnnd--A * pppppppp pppppppp pppnnnnn nnnd---A
* pppppppp pppppppp ppppnnnn nnnn---A * pppppppp pppppppp pppnnnnn nnn----A
* pppppppp pppppppp ppppnnnn nnnnd--A * pppppppp pppppppp pppnnnnn nnnd---A
* *
* Large: * Large:
* ssssssss ssssssss ssss++++ ++++D-LA * ssssssss ssssssss sss+++++ +++D--LA
* xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
* -------- -------- ----++++ ++++D-LA * -------- -------- ---+++++ +++D--LA
* *
* Large (sampled, size <= PAGE): * Large (sampled, size <= LARGE_MINCLASS):
* ssssssss ssssssss ssssnnnn nnnnD-LA * ssssssss ssssssss sssnnnnn nnnD--LA
* xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
* -------- -------- ---+++++ +++D--LA
* *
* Large (not sampled, size == PAGE): * Large (not sampled, size == LARGE_MINCLASS):
* ssssssss ssssssss ssss++++ ++++D-LA * ssssssss ssssssss sss+++++ +++D--LA
* xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
* -------- -------- ---+++++ +++D--LA
*/ */
size_t bits; size_t bits;
#define CHUNK_MAP_BININD_SHIFT 4 #define CHUNK_MAP_ALLOCATED ((size_t)0x01U)
#define CHUNK_MAP_LARGE ((size_t)0x02U)
#define CHUNK_MAP_STATE_MASK ((size_t)0x3U)
#define CHUNK_MAP_DECOMMITTED ((size_t)0x04U)
#define CHUNK_MAP_UNZEROED ((size_t)0x08U)
#define CHUNK_MAP_DIRTY ((size_t)0x10U)
#define CHUNK_MAP_FLAGS_MASK ((size_t)0x1cU)
#define CHUNK_MAP_BININD_SHIFT 5
#define BININD_INVALID ((size_t)0xffU) #define BININD_INVALID ((size_t)0xffU)
/* CHUNK_MAP_BININD_MASK == (BININD_INVALID << CHUNK_MAP_BININD_SHIFT) */ #define CHUNK_MAP_BININD_MASK (BININD_INVALID << CHUNK_MAP_BININD_SHIFT)
#define CHUNK_MAP_BININD_MASK ((size_t)0xff0U)
#define CHUNK_MAP_BININD_INVALID CHUNK_MAP_BININD_MASK #define CHUNK_MAP_BININD_INVALID CHUNK_MAP_BININD_MASK
#define CHUNK_MAP_FLAGS_MASK ((size_t)0xcU)
#define CHUNK_MAP_DIRTY ((size_t)0x8U) #define CHUNK_MAP_RUNIND_SHIFT (CHUNK_MAP_BININD_SHIFT + 8)
#define CHUNK_MAP_UNZEROED ((size_t)0x4U) #define CHUNK_MAP_SIZE_SHIFT (CHUNK_MAP_RUNIND_SHIFT - LG_PAGE)
#define CHUNK_MAP_LARGE ((size_t)0x2U) #define CHUNK_MAP_SIZE_MASK \
#define CHUNK_MAP_ALLOCATED ((size_t)0x1U) (~(CHUNK_MAP_BININD_MASK | CHUNK_MAP_FLAGS_MASK | CHUNK_MAP_STATE_MASK))
#define CHUNK_MAP_KEY CHUNK_MAP_ALLOCATED
}; };
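/*
 * Illustrative sketch (not part of this header): decoding the size-class
 * index from a map bits word using the new-layout masks above.
 */
static inline size_t
example_mapbits_binind(size_t bits)
{

	/* BININD_INVALID indicates a large rather than small run page. */
	return ((bits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT);
}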
typedef rb_tree(arena_chunk_map_t) arena_avail_tree_t;
typedef rb_tree(arena_chunk_map_t) arena_run_tree_t;
typedef ql_head(arena_chunk_map_t) arena_chunk_mapelms_t;
/* Arena chunk header. */ struct arena_runs_dirty_link_s {
struct arena_chunk_s { qr(arena_runs_dirty_link_t) rd_link;
/* Arena that owns the chunk. */ };
arena_t *arena;
/*
* Each arena_chunk_map_misc_t corresponds to one page within the chunk, just
* like arena_chunk_map_bits_t. Two separate arrays are stored within each
* chunk header in order to improve cache locality.
*/
struct arena_chunk_map_misc_s {
/*
* Linkage for run trees. There are two disjoint uses:
*
* 1) arena_t's runs_avail tree.
* 2) arena_run_t conceptually uses this linkage for in-use non-full
* runs, rather than directly embedding linkage.
*/
rb_node(arena_chunk_map_misc_t) rb_link;
/* Linkage for tree of arena chunks that contain dirty runs. */ union {
rb_node(arena_chunk_t) dirty_link; /* Linkage for list of dirty runs. */
arena_runs_dirty_link_t rd;
/* Number of dirty pages. */ /* Profile counters, used for large object runs. */
size_t ndirty; union {
void *prof_tctx_pun;
prof_tctx_t *prof_tctx;
};
/* Number of available runs. */ /* Small region run metadata. */
size_t nruns_avail; arena_run_t run;
};
};
typedef rb_tree(arena_chunk_map_misc_t) arena_avail_tree_t;
typedef rb_tree(arena_chunk_map_misc_t) arena_run_tree_t;
#endif /* JEMALLOC_ARENA_STRUCTS_A */
#ifdef JEMALLOC_ARENA_STRUCTS_B
/* Arena chunk header. */
struct arena_chunk_s {
/* /*
* Number of available run adjacencies that purging could coalesce. * A pointer to the arena that owns the chunk is stored within the node.
* Clean and dirty available runs are not coalesced, which causes * This field as a whole is used by chunks_rtree to support both
* virtual memory fragmentation. The ratio of * ivsalloc() and core-based debugging.
* (nruns_avail-nruns_adjac):nruns_adjac is used for tracking this
* fragmentation.
*/ */
size_t nruns_adjac; extent_node_t node;
/* /*
* Map of pages within chunk that keeps track of free/large/small. The * Map of pages within chunk that keeps track of free/large/small. The
...@@ -189,19 +182,7 @@ struct arena_chunk_s { ...@@ -189,19 +182,7 @@ struct arena_chunk_s {
* need to be tracked in the map. This omission saves a header page * need to be tracked in the map. This omission saves a header page
* for common chunk sizes (e.g. 4 MiB). * for common chunk sizes (e.g. 4 MiB).
*/ */
arena_chunk_map_t map[1]; /* Dynamically sized. */ arena_chunk_map_bits_t map_bits[1]; /* Dynamically sized. */
};
typedef rb_tree(arena_chunk_t) arena_chunk_tree_t;
struct arena_run_s {
/* Bin this run is associated with. */
arena_bin_t *bin;
/* Index of next region that has never been allocated, or nregs. */
uint32_t nextind;
/* Number of free regions in run. */
unsigned nfree;
}; };
/* /*
...@@ -212,12 +193,7 @@ struct arena_run_s { ...@@ -212,12 +193,7 @@ struct arena_run_s {
* Each run has the following layout: * Each run has the following layout:
* *
* /--------------------\ * /--------------------\
* | arena_run_t header | * | pad? |
* | ... |
* bitmap_offset | bitmap |
* | ... |
* ctx0_offset | ctx map |
* | ... |
* |--------------------| * |--------------------|
* | redzone | * | redzone |
* reg0_offset | region 0 | * reg0_offset | region 0 |
...@@ -258,24 +234,12 @@ struct arena_bin_info_s { ...@@ -258,24 +234,12 @@ struct arena_bin_info_s {
/* Total number of regions in a run for this bin's size class. */ /* Total number of regions in a run for this bin's size class. */
uint32_t nregs; uint32_t nregs;
/*
* Offset of first bitmap_t element in a run header for this bin's size
* class.
*/
uint32_t bitmap_offset;
/* /*
* Metadata used to manipulate bitmaps for runs associated with this * Metadata used to manipulate bitmaps for runs associated with this
* bin. * bin.
*/ */
bitmap_info_t bitmap_info; bitmap_info_t bitmap_info;
/*
* Offset of first (prof_ctx_t *) in a run header for this bin's size
* class, or 0 if (config_prof == false || opt_prof == false).
*/
uint32_t ctx0_offset;
/* Offset of first region in a run for this bin's size class. */ /* Offset of first region in a run for this bin's size class. */
uint32_t reg0_offset; uint32_t reg0_offset;
}; };
...@@ -321,8 +285,7 @@ struct arena_s { ...@@ -321,8 +285,7 @@ struct arena_s {
/* /*
* There are three classes of arena operations from a locking * There are three classes of arena operations from a locking
* perspective: * perspective:
* 1) Thread asssignment (modifies nthreads) is protected by * 1) Thread assignment (modifies nthreads) is protected by arenas_lock.
* arenas_lock.
* 2) Bin-related operations are protected by bin locks. * 2) Bin-related operations are protected by bin locks.
* 3) Chunk- and run-related operations are protected by this mutex. * 3) Chunk- and run-related operations are protected by this mutex.
*/ */
...@@ -331,16 +294,20 @@ struct arena_s { ...@@ -331,16 +294,20 @@ struct arena_s {
arena_stats_t stats; arena_stats_t stats;
/* /*
* List of tcaches for extant threads associated with this arena. * List of tcaches for extant threads associated with this arena.
* Stats from these are merged incrementally, and at exit. * Stats from these are merged incrementally, and at exit if
* opt_stats_print is enabled.
*/ */
ql_head(tcache_t) tcache_ql; ql_head(tcache_t) tcache_ql;
uint64_t prof_accumbytes; uint64_t prof_accumbytes;
dss_prec_t dss_prec; /*
* PRNG state for cache index randomization of large allocation base
* pointers.
*/
uint64_t offset_state;
/* Tree of dirty-page-containing chunks this arena manages. */ dss_prec_t dss_prec;
arena_chunk_tree_t chunks_dirty;
/* /*
* In order to avoid rapid chunk allocation/deallocation when an arena * In order to avoid rapid chunk allocation/deallocation when an arena
...@@ -354,7 +321,13 @@ struct arena_s { ...@@ -354,7 +321,13 @@ struct arena_s {
*/ */
arena_chunk_t *spare; arena_chunk_t *spare;
/* Number of pages in active runs. */ /* Minimum ratio (log base 2) of nactive:ndirty. */
ssize_t lg_dirty_mult;
/* True if a thread is currently executing arena_purge(). */
bool purging;
/* Number of pages in active runs and huge regions. */
size_t nactive; size_t nactive;
/* /*
...@@ -366,44 +339,116 @@ struct arena_s { ...@@ -366,44 +339,116 @@ struct arena_s {
size_t ndirty; size_t ndirty;
/* /*
* Approximate number of pages being purged. It is possible for * Size/address-ordered tree of this arena's available runs. The tree
* multiple threads to purge dirty pages concurrently, and they use * is used for first-best-fit run allocation.
* npurgatory to indicate the total number of pages all threads are */
* attempting to purge. arena_avail_tree_t runs_avail;
/*
* Unused dirty memory this arena manages. Dirty memory is conceptually
* tracked as an arbitrarily interleaved LRU of dirty runs and cached
* chunks, but the list linkage is actually semi-duplicated in order to
* avoid extra arena_chunk_map_misc_t space overhead.
*
* LRU-----------------------------------------------------------MRU
*
* /-- arena ---\
* | |
* | |
* |------------| /- chunk -\
* ...->|chunks_cache|<--------------------------->| /----\ |<--...
* |------------| | |node| |
* | | | | | |
* | | /- run -\ /- run -\ | | | |
* | | | | | | | | | |
* | | | | | | | | | |
* |------------| |-------| |-------| | |----| |
* ...->|runs_dirty |<-->|rd |<-->|rd |<---->|rd |<----...
* |------------| |-------| |-------| | |----| |
* | | | | | | | | | |
* | | | | | | | \----/ |
* | | \-------/ \-------/ | |
* | | | |
* | | | |
* \------------/ \---------/
*/ */
size_t npurgatory; arena_runs_dirty_link_t runs_dirty;
extent_node_t chunks_cache;
/* Extant huge allocations. */
ql_head(extent_node_t) huge;
/* Synchronizes all huge allocation/update/deallocation. */
malloc_mutex_t huge_mtx;
/* /*
* Size/address-ordered trees of this arena's available runs. The trees * Trees of chunks that were previously allocated (trees differ only in
* are used for first-best-fit run allocation. * node ordering). These are used when allocating chunks, in an attempt
* to re-use address space. Depending on function, different tree
* orderings are needed, which is why there are two trees with the same
* contents.
*/ */
arena_avail_tree_t runs_avail; extent_tree_t chunks_szad_cached;
extent_tree_t chunks_ad_cached;
extent_tree_t chunks_szad_retained;
extent_tree_t chunks_ad_retained;
malloc_mutex_t chunks_mtx;
/* Cache of nodes that were allocated via base_alloc(). */
ql_head(extent_node_t) node_cache;
malloc_mutex_t node_cache_mtx;
/* User-configurable chunk hook functions. */
chunk_hooks_t chunk_hooks;
/* bins is used to store trees of free regions. */ /* bins is used to store trees of free regions. */
arena_bin_t bins[NBINS]; arena_bin_t bins[NBINS];
}; };
#endif /* JEMALLOC_ARENA_STRUCTS_B */
#endif /* JEMALLOC_H_STRUCTS */ #endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
static const size_t large_pad =
#ifdef JEMALLOC_CACHE_OBLIVIOUS
PAGE
#else
0
#endif
;
extern ssize_t opt_lg_dirty_mult; extern ssize_t opt_lg_dirty_mult;
/*
* small_size2bin is a compact lookup table that rounds request sizes up to
* size classes. In order to reduce cache footprint, the table is compressed,
* and all accesses are via the SMALL_SIZE2BIN macro.
*/
extern uint8_t const small_size2bin[];
#define SMALL_SIZE2BIN(s) (small_size2bin[(s-1) >> LG_TINY_MIN])
extern arena_bin_info_t arena_bin_info[NBINS]; extern arena_bin_info_t arena_bin_info[NBINS];
/* Number of large size classes. */ extern size_t map_bias; /* Number of arena chunk header pages. */
#define nlclasses (chunk_npages - map_bias) extern size_t map_misc_offset;
extern size_t arena_maxrun; /* Max run size for arenas. */
extern size_t large_maxclass; /* Max large size class. */
extern unsigned nlclasses; /* Number of large size classes. */
extern unsigned nhclasses; /* Number of huge size classes. */
void arena_chunk_cache_maybe_insert(arena_t *arena, extent_node_t *node,
bool cache);
void arena_chunk_cache_maybe_remove(arena_t *arena, extent_node_t *node,
bool cache);
extent_node_t *arena_node_alloc(arena_t *arena);
void arena_node_dalloc(arena_t *arena, extent_node_t *node);
void *arena_chunk_alloc_huge(arena_t *arena, size_t usize, size_t alignment,
bool *zero);
void arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t usize);
void arena_chunk_ralloc_huge_similar(arena_t *arena, void *chunk,
size_t oldsize, size_t usize);
void arena_chunk_ralloc_huge_shrink(arena_t *arena, void *chunk,
size_t oldsize, size_t usize);
bool arena_chunk_ralloc_huge_expand(arena_t *arena, void *chunk,
size_t oldsize, size_t usize, bool *zero);
ssize_t arena_lg_dirty_mult_get(arena_t *arena);
bool arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult);
void arena_maybe_purge(arena_t *arena);
void arena_purge_all(arena_t *arena); void arena_purge_all(arena_t *arena);
void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin,
size_t binind, uint64_t prof_accumbytes); szind_t binind, uint64_t prof_accumbytes);
void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info, void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info,
bool zero); bool zero);
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
...@@ -418,19 +463,22 @@ void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info);
void arena_quarantine_junk_small(void *ptr, size_t usize); void arena_quarantine_junk_small(void *ptr, size_t usize);
void *arena_malloc_small(arena_t *arena, size_t size, bool zero); void *arena_malloc_small(arena_t *arena, size_t size, bool zero);
void *arena_malloc_large(arena_t *arena, size_t size, bool zero); void *arena_malloc_large(arena_t *arena, size_t size, bool zero);
void *arena_palloc(arena_t *arena, size_t size, size_t alignment, bool zero); void *arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize,
size_t alignment, bool zero, tcache_t *tcache);
void arena_prof_promoted(const void *ptr, size_t size); void arena_prof_promoted(const void *ptr, size_t size);
void arena_dalloc_bin_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr, void arena_dalloc_bin_junked_locked(arena_t *arena, arena_chunk_t *chunk,
arena_chunk_map_t *mapelm); void *ptr, arena_chunk_map_bits_t *bitselm);
void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr, void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr,
size_t pageind, arena_chunk_map_t *mapelm); size_t pageind, arena_chunk_map_bits_t *bitselm);
void arena_dalloc_small(arena_t *arena, arena_chunk_t *chunk, void *ptr, void arena_dalloc_small(arena_t *arena, arena_chunk_t *chunk, void *ptr,
size_t pageind); size_t pageind);
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
typedef void (arena_dalloc_junk_large_t)(void *, size_t); typedef void (arena_dalloc_junk_large_t)(void *, size_t);
extern arena_dalloc_junk_large_t *arena_dalloc_junk_large; extern arena_dalloc_junk_large_t *arena_dalloc_junk_large;
#else
void arena_dalloc_junk_large(void *ptr, size_t usize);
#endif #endif
void arena_dalloc_large_locked(arena_t *arena, arena_chunk_t *chunk, void arena_dalloc_large_junked_locked(arena_t *arena, arena_chunk_t *chunk,
void *ptr); void *ptr);
void arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk, void *ptr); void arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk, void *ptr);
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
...@@ -439,16 +487,18 @@ extern arena_ralloc_junk_large_t *arena_ralloc_junk_large;
#endif #endif
bool arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size, bool arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
size_t extra, bool zero); size_t extra, bool zero);
void *arena_ralloc(arena_t *arena, void *ptr, size_t oldsize, size_t size, void *arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, size_t size, size_t alignment, bool zero, tcache_t *tcache);
bool try_tcache_dalloc);
dss_prec_t arena_dss_prec_get(arena_t *arena); dss_prec_t arena_dss_prec_get(arena_t *arena);
void arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec); bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);
void arena_stats_merge(arena_t *arena, const char **dss, size_t *nactive, ssize_t arena_lg_dirty_mult_default_get(void);
size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats, bool arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult);
malloc_large_stats_t *lstats); void arena_stats_merge(arena_t *arena, const char **dss,
bool arena_new(arena_t *arena, unsigned ind); ssize_t *lg_dirty_mult, size_t *nactive, size_t *ndirty,
void arena_boot(void); arena_stats_t *astats, malloc_bin_stats_t *bstats,
malloc_large_stats_t *lstats, malloc_huge_stats_t *hstats);
arena_t *arena_new(unsigned ind);
bool arena_boot(void);
void arena_prefork(arena_t *arena); void arena_prefork(arena_t *arena);
void arena_postfork_parent(arena_t *arena); void arena_postfork_parent(arena_t *arena);
void arena_postfork_child(arena_t *arena); void arena_postfork_child(arena_t *arena);
...@@ -458,64 +508,138 @@ void arena_postfork_child(arena_t *arena);
#ifdef JEMALLOC_H_INLINES #ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE #ifndef JEMALLOC_ENABLE_INLINE
arena_chunk_map_t *arena_mapp_get(arena_chunk_t *chunk, size_t pageind); arena_chunk_map_bits_t *arena_bitselm_get(arena_chunk_t *chunk,
size_t pageind);
arena_chunk_map_misc_t *arena_miscelm_get(arena_chunk_t *chunk,
size_t pageind);
size_t arena_miscelm_to_pageind(arena_chunk_map_misc_t *miscelm);
void *arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm);
arena_chunk_map_misc_t *arena_rd_to_miscelm(arena_runs_dirty_link_t *rd);
arena_chunk_map_misc_t *arena_run_to_miscelm(arena_run_t *run);
size_t *arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind); size_t *arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbitsp_read(size_t *mapbitsp); size_t arena_mapbitsp_read(size_t *mapbitsp);
size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_size_decode(size_t mapbits);
size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk,
size_t pageind); size_t pageind);
size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind); szind_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind);
size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind);
void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits); void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits);
size_t arena_mapbits_size_encode(size_t size);
void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind,
size_t size, size_t flags); size_t size, size_t flags);
void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind, void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind,
size_t size); size_t size);
void arena_mapbits_internal_set(arena_chunk_t *chunk, size_t pageind,
size_t flags);
void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind,
size_t size, size_t flags); size_t size, size_t flags);
void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind,
size_t binind); szind_t binind);
void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind,
size_t runind, size_t binind, size_t flags); size_t runind, szind_t binind, size_t flags);
void arena_mapbits_unzeroed_set(arena_chunk_t *chunk, size_t pageind, void arena_metadata_allocated_add(arena_t *arena, size_t size);
size_t unzeroed); void arena_metadata_allocated_sub(arena_t *arena, size_t size);
size_t arena_metadata_allocated_get(arena_t *arena);
bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes);
bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes);
bool arena_prof_accum(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum(arena_t *arena, uint64_t accumbytes);
size_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits); szind_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits);
size_t arena_bin_index(arena_t *arena, arena_bin_t *bin); szind_t arena_bin_index(arena_t *arena, arena_bin_t *bin);
unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info,
const void *ptr); const void *ptr);
prof_ctx_t *arena_prof_ctx_get(const void *ptr); prof_tctx_t *arena_prof_tctx_get(const void *ptr);
void arena_prof_ctx_set(const void *ptr, size_t usize, prof_ctx_t *ctx); void arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
void *arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache); void arena_prof_tctx_reset(const void *ptr, size_t usize,
const void *old_ptr, prof_tctx_t *old_tctx);
void *arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
tcache_t *tcache);
arena_t *arena_aalloc(const void *ptr);
size_t arena_salloc(const void *ptr, bool demote); size_t arena_salloc(const void *ptr, bool demote);
void arena_dalloc(arena_t *arena, arena_chunk_t *chunk, void *ptr, void arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
bool try_tcache); void arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_))
# ifdef JEMALLOC_ARENA_INLINE_A # ifdef JEMALLOC_ARENA_INLINE_A
JEMALLOC_ALWAYS_INLINE arena_chunk_map_t * JEMALLOC_ALWAYS_INLINE arena_chunk_map_bits_t *
arena_mapp_get(arena_chunk_t *chunk, size_t pageind) arena_bitselm_get(arena_chunk_t *chunk, size_t pageind)
{
assert(pageind >= map_bias);
assert(pageind < chunk_npages);
return (&chunk->map_bits[pageind-map_bias]);
}
JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t *
arena_miscelm_get(arena_chunk_t *chunk, size_t pageind)
{
assert(pageind >= map_bias);
assert(pageind < chunk_npages);
return ((arena_chunk_map_misc_t *)((uintptr_t)chunk +
(uintptr_t)map_misc_offset) + pageind-map_bias);
}
JEMALLOC_ALWAYS_INLINE size_t
arena_miscelm_to_pageind(arena_chunk_map_misc_t *miscelm)
{ {
arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm);
size_t pageind = ((uintptr_t)miscelm - ((uintptr_t)chunk +
map_misc_offset)) / sizeof(arena_chunk_map_misc_t) + map_bias;
assert(pageind >= map_bias); assert(pageind >= map_bias);
assert(pageind < chunk_npages); assert(pageind < chunk_npages);
return (&chunk->map[pageind-map_bias]); return (pageind);
}
JEMALLOC_ALWAYS_INLINE void *
arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm)
{
arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm);
size_t pageind = arena_miscelm_to_pageind(miscelm);
return ((void *)((uintptr_t)chunk + (pageind << LG_PAGE)));
}
JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t *
arena_rd_to_miscelm(arena_runs_dirty_link_t *rd)
{
arena_chunk_map_misc_t *miscelm = (arena_chunk_map_misc_t
*)((uintptr_t)rd - offsetof(arena_chunk_map_misc_t, rd));
assert(arena_miscelm_to_pageind(miscelm) >= map_bias);
assert(arena_miscelm_to_pageind(miscelm) < chunk_npages);
return (miscelm);
}
JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t *
arena_run_to_miscelm(arena_run_t *run)
{
arena_chunk_map_misc_t *miscelm = (arena_chunk_map_misc_t
*)((uintptr_t)run - offsetof(arena_chunk_map_misc_t, run));
assert(arena_miscelm_to_pageind(miscelm) >= map_bias);
assert(arena_miscelm_to_pageind(miscelm) < chunk_npages);
return (miscelm);
} }
JEMALLOC_ALWAYS_INLINE size_t * JEMALLOC_ALWAYS_INLINE size_t *
arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind) arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind)
{ {
return (&arena_mapp_get(chunk, pageind)->bits); return (&arena_bitselm_get(chunk, pageind)->bits);
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
...@@ -532,6 +656,22 @@ arena_mapbits_get(arena_chunk_t *chunk, size_t pageind)
return (arena_mapbitsp_read(arena_mapbitsp_get(chunk, pageind))); return (arena_mapbitsp_read(arena_mapbitsp_get(chunk, pageind)));
} }
JEMALLOC_ALWAYS_INLINE size_t
arena_mapbits_size_decode(size_t mapbits)
{
size_t size;
#if CHUNK_MAP_SIZE_SHIFT > 0
size = (mapbits & CHUNK_MAP_SIZE_MASK) >> CHUNK_MAP_SIZE_SHIFT;
#elif CHUNK_MAP_SIZE_SHIFT == 0
size = mapbits & CHUNK_MAP_SIZE_MASK;
#else
size = (mapbits & CHUNK_MAP_SIZE_MASK) << -CHUNK_MAP_SIZE_SHIFT;
#endif
return (size);
}
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind) arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind)
{ {
...@@ -539,7 +679,7 @@ arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind)
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0);
return (mapbits & ~PAGE_MASK); return (arena_mapbits_size_decode(mapbits));
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
...@@ -550,7 +690,7 @@ arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind)
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) ==
(CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)); (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED));
return (mapbits & ~PAGE_MASK); return (arena_mapbits_size_decode(mapbits));
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
...@@ -561,14 +701,14 @@ arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind)
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) ==
CHUNK_MAP_ALLOCATED); CHUNK_MAP_ALLOCATED);
return (mapbits >> LG_PAGE); return (mapbits >> CHUNK_MAP_RUNIND_SHIFT);
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE szind_t
arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind) arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind)
{ {
size_t mapbits; size_t mapbits;
size_t binind; szind_t binind;
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT;
...@@ -582,6 +722,8 @@ arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind)
size_t mapbits; size_t mapbits;
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits &
(CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
return (mapbits & CHUNK_MAP_DIRTY); return (mapbits & CHUNK_MAP_DIRTY);
} }
...@@ -591,9 +733,22 @@ arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind)
size_t mapbits; size_t mapbits;
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits &
(CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
return (mapbits & CHUNK_MAP_UNZEROED); return (mapbits & CHUNK_MAP_UNZEROED);
} }
JEMALLOC_ALWAYS_INLINE size_t
arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind)
{
size_t mapbits;
mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits &
(CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
return (mapbits & CHUNK_MAP_DECOMMITTED);
}
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind) arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind)
{ {
...@@ -619,6 +774,23 @@ arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits)
*mapbitsp = mapbits; *mapbitsp = mapbits;
} }
JEMALLOC_ALWAYS_INLINE size_t
arena_mapbits_size_encode(size_t size)
{
size_t mapbits;
#if CHUNK_MAP_SIZE_SHIFT > 0
mapbits = size << CHUNK_MAP_SIZE_SHIFT;
#elif CHUNK_MAP_SIZE_SHIFT == 0
mapbits = size;
#else
mapbits = size >> -CHUNK_MAP_SIZE_SHIFT;
#endif
assert((mapbits & ~CHUNK_MAP_SIZE_MASK) == 0);
return (mapbits);
}
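arena_mapbits_size_encode() and arena_mapbits_size_decode() work because run sizes are always page multiples, so a positive CHUNK_MAP_SIZE_SHIFT can move the size bits clear of the flag bits without losing information. A standalone round-trip sketch with made-up constants (the real ones come from the size-class machinery):

#include <assert.h>
#include <stddef.h>

#define	DEMO_LG_PAGE	12	/* 4 KiB pages, for illustration. */
#define	DEMO_SIZE_SHIFT	1	/* Pretend CHUNK_MAP_SIZE_SHIFT > 0. */
#define	DEMO_SIZE_MASK	(~(size_t)0 << (DEMO_LG_PAGE + DEMO_SIZE_SHIFT))

static size_t
demo_size_encode(size_t size)
{
	size_t mapbits = size << DEMO_SIZE_SHIFT;

	assert((mapbits & ~DEMO_SIZE_MASK) == 0);	/* Flags untouched. */
	return (mapbits);
}

static size_t
demo_size_decode(size_t mapbits)
{

	return ((mapbits & DEMO_SIZE_MASK) >> DEMO_SIZE_SHIFT);
}

int
main(void)
{
	size_t size = (size_t)7 << DEMO_LG_PAGE;	/* A 7-page run. */
	size_t mapbits = demo_size_encode(size) | 0x3;	/* Fake flag bits. */

	assert(demo_size_decode(mapbits) == size);
	return (0);
}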
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size, arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size,
size_t flags) size_t flags)
...@@ -626,9 +798,11 @@ arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size,
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
assert((size & PAGE_MASK) == 0); assert((size & PAGE_MASK) == 0);
assert((flags & ~CHUNK_MAP_FLAGS_MASK) == 0); assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
assert((flags & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == flags); assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags &
arena_mapbitsp_write(mapbitsp, size | CHUNK_MAP_BININD_INVALID | flags); (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
CHUNK_MAP_BININD_INVALID | flags);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
...@@ -640,7 +814,17 @@ arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind,
assert((size & PAGE_MASK) == 0); assert((size & PAGE_MASK) == 0);
assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0);
arena_mapbitsp_write(mapbitsp, size | (mapbits & PAGE_MASK)); arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
(mapbits & ~CHUNK_MAP_SIZE_MASK));
}
JEMALLOC_ALWAYS_INLINE void
arena_mapbits_internal_set(arena_chunk_t *chunk, size_t pageind, size_t flags)
{
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
assert((flags & CHUNK_MAP_UNZEROED) == flags);
arena_mapbitsp_write(mapbitsp, flags);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
...@@ -648,54 +832,62 @@ arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size,
size_t flags) size_t flags)
{ {
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
size_t mapbits = arena_mapbitsp_read(mapbitsp);
size_t unzeroed;
assert((size & PAGE_MASK) == 0); assert((size & PAGE_MASK) == 0);
assert((flags & CHUNK_MAP_DIRTY) == flags); assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
unzeroed = mapbits & CHUNK_MAP_UNZEROED; /* Preserve unzeroed. */ assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags &
arena_mapbitsp_write(mapbitsp, size | CHUNK_MAP_BININD_INVALID | flags (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
| unzeroed | CHUNK_MAP_LARGE | CHUNK_MAP_ALLOCATED); arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
CHUNK_MAP_BININD_INVALID | flags | CHUNK_MAP_LARGE |
CHUNK_MAP_ALLOCATED);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind,
size_t binind) szind_t binind)
{ {
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
size_t mapbits = arena_mapbitsp_read(mapbitsp); size_t mapbits = arena_mapbitsp_read(mapbitsp);
assert(binind <= BININD_INVALID); assert(binind <= BININD_INVALID);
assert(arena_mapbits_large_size_get(chunk, pageind) == PAGE); assert(arena_mapbits_large_size_get(chunk, pageind) == LARGE_MINCLASS +
large_pad);
arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_BININD_MASK) | arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_BININD_MASK) |
(binind << CHUNK_MAP_BININD_SHIFT)); (binind << CHUNK_MAP_BININD_SHIFT));
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind, arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind,
size_t binind, size_t flags) szind_t binind, size_t flags)
{ {
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
size_t mapbits = arena_mapbitsp_read(mapbitsp);
size_t unzeroed;
assert(binind < BININD_INVALID); assert(binind < BININD_INVALID);
assert(pageind - runind >= map_bias); assert(pageind - runind >= map_bias);
assert((flags & CHUNK_MAP_DIRTY) == flags); assert((flags & CHUNK_MAP_UNZEROED) == flags);
unzeroed = mapbits & CHUNK_MAP_UNZEROED; /* Preserve unzeroed. */ arena_mapbitsp_write(mapbitsp, (runind << CHUNK_MAP_RUNIND_SHIFT) |
arena_mapbitsp_write(mapbitsp, (runind << LG_PAGE) | (binind << (binind << CHUNK_MAP_BININD_SHIFT) | flags | CHUNK_MAP_ALLOCATED);
CHUNK_MAP_BININD_SHIFT) | flags | unzeroed | CHUNK_MAP_ALLOCATED);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_INLINE void
arena_mapbits_unzeroed_set(arena_chunk_t *chunk, size_t pageind, arena_metadata_allocated_add(arena_t *arena, size_t size)
size_t unzeroed)
{ {
size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
size_t mapbits = arena_mapbitsp_read(mapbitsp);
arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_UNZEROED) | atomic_add_z(&arena->stats.metadata_allocated, size);
unzeroed); }
JEMALLOC_INLINE void
arena_metadata_allocated_sub(arena_t *arena, size_t size)
{
atomic_sub_z(&arena->stats.metadata_allocated, size);
}
JEMALLOC_INLINE size_t
arena_metadata_allocated_get(arena_t *arena)
{
return (atomic_read_z(&arena->stats.metadata_allocated));
} }
JEMALLOC_INLINE bool JEMALLOC_INLINE bool
...@@ -719,7 +911,7 @@ arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes)
cassert(config_prof); cassert(config_prof);
if (prof_interval == 0) if (likely(prof_interval == 0))
return (false); return (false);
return (arena_prof_accum_impl(arena, accumbytes)); return (arena_prof_accum_impl(arena, accumbytes));
} }
...@@ -730,7 +922,7 @@ arena_prof_accum(arena_t *arena, uint64_t accumbytes)
cassert(config_prof); cassert(config_prof);
if (prof_interval == 0) if (likely(prof_interval == 0))
return (false); return (false);
{ {
...@@ -743,10 +935,10 @@ arena_prof_accum(arena_t *arena, uint64_t accumbytes)
} }
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE szind_t
arena_ptr_small_binind_get(const void *ptr, size_t mapbits) arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
{ {
size_t binind; szind_t binind;
binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT;
...@@ -755,27 +947,34 @@ arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
arena_t *arena; arena_t *arena;
size_t pageind; size_t pageind;
size_t actual_mapbits; size_t actual_mapbits;
size_t rpages_ind;
arena_run_t *run; arena_run_t *run;
arena_bin_t *bin; arena_bin_t *bin;
size_t actual_binind; szind_t run_binind, actual_binind;
arena_bin_info_t *bin_info; arena_bin_info_t *bin_info;
arena_chunk_map_misc_t *miscelm;
void *rpages;
assert(binind != BININD_INVALID); assert(binind != BININD_INVALID);
assert(binind < NBINS); assert(binind < NBINS);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
arena = chunk->arena; arena = extent_node_arena_get(&chunk->node);
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
actual_mapbits = arena_mapbits_get(chunk, pageind); actual_mapbits = arena_mapbits_get(chunk, pageind);
assert(mapbits == actual_mapbits); assert(mapbits == actual_mapbits);
assert(arena_mapbits_large_get(chunk, pageind) == 0); assert(arena_mapbits_large_get(chunk, pageind) == 0);
assert(arena_mapbits_allocated_get(chunk, pageind) != 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind - rpages_ind = pageind - arena_mapbits_small_runind_get(chunk,
(actual_mapbits >> LG_PAGE)) << LG_PAGE)); pageind);
bin = run->bin; miscelm = arena_miscelm_get(chunk, rpages_ind);
run = &miscelm->run;
run_binind = run->binind;
bin = &arena->bins[run_binind];
actual_binind = bin - arena->bins; actual_binind = bin - arena->bins;
assert(binind == actual_binind); assert(run_binind == actual_binind);
bin_info = &arena_bin_info[actual_binind]; bin_info = &arena_bin_info[actual_binind];
assert(((uintptr_t)ptr - ((uintptr_t)run + rpages = arena_miscelm_to_rpages(miscelm);
assert(((uintptr_t)ptr - ((uintptr_t)rpages +
(uintptr_t)bin_info->reg0_offset)) % bin_info->reg_interval (uintptr_t)bin_info->reg0_offset)) % bin_info->reg_interval
== 0); == 0);
} }
...@@ -785,10 +984,10 @@ arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
# endif /* JEMALLOC_ARENA_INLINE_A */ # endif /* JEMALLOC_ARENA_INLINE_A */
# ifdef JEMALLOC_ARENA_INLINE_B # ifdef JEMALLOC_ARENA_INLINE_B
JEMALLOC_INLINE size_t JEMALLOC_INLINE szind_t
arena_bin_index(arena_t *arena, arena_bin_t *bin) arena_bin_index(arena_t *arena, arena_bin_t *bin)
{ {
size_t binind = bin - arena->bins; szind_t binind = bin - arena->bins;
assert(binind < NBINS); assert(binind < NBINS);
return (binind); return (binind);
} }
...@@ -798,24 +997,26 @@ arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr)
{ {
unsigned shift, diff, regind; unsigned shift, diff, regind;
size_t interval; size_t interval;
arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run);
void *rpages = arena_miscelm_to_rpages(miscelm);
/* /*
* Freeing a pointer lower than region zero can cause assertion * Freeing a pointer lower than region zero can cause assertion
* failure. * failure.
*/ */
assert((uintptr_t)ptr >= (uintptr_t)run + assert((uintptr_t)ptr >= (uintptr_t)rpages +
(uintptr_t)bin_info->reg0_offset); (uintptr_t)bin_info->reg0_offset);
/* /*
* Avoid doing division with a variable divisor if possible. Using * Avoid doing division with a variable divisor if possible. Using
* actual division here can reduce allocator throughput by over 20%! * actual division here can reduce allocator throughput by over 20%!
*/ */
diff = (unsigned)((uintptr_t)ptr - (uintptr_t)run - diff = (unsigned)((uintptr_t)ptr - (uintptr_t)rpages -
bin_info->reg0_offset); bin_info->reg0_offset);
/* Rescale (factor powers of 2 out of the numerator and denominator). */ /* Rescale (factor powers of 2 out of the numerator and denominator). */
interval = bin_info->reg_interval; interval = bin_info->reg_interval;
shift = ffs(interval) - 1; shift = jemalloc_ffs(interval) - 1;
diff >>= shift; diff >>= shift;
interval >>= shift; interval >>= shift;
...@@ -850,8 +1051,8 @@ arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr)
SIZE_INV(28), SIZE_INV(29), SIZE_INV(30), SIZE_INV(31) SIZE_INV(28), SIZE_INV(29), SIZE_INV(30), SIZE_INV(31)
}; };
if (interval <= ((sizeof(interval_invs) / sizeof(unsigned)) + if (likely(interval <= ((sizeof(interval_invs) /
2)) { sizeof(unsigned)) + 2))) {
regind = (diff * interval_invs[interval - 3]) >> regind = (diff * interval_invs[interval - 3]) >>
SIZE_INV_SHIFT; SIZE_INV_SHIFT;
} else } else
...@@ -865,113 +1066,138 @@ arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr)
return (regind); return (regind);
} }
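The SIZE_INV table in the context above turns division by a small constant interval into a multiply and a shift, which is where the quoted >20% throughput difference comes from. A self-contained demonstration of the same construction (values illustrative):

#include <assert.h>

#define	SIZE_INV_SHIFT	24
#define	SIZE_INV(d)	(((1U << SIZE_INV_SHIFT) / (d)) + 1)

int
main(void)
{
	/* Region 5 of a run whose rescaled region interval is 3: diff/3
	 * computed as a multiply by ceil(2^24 / 3) and a right shift. */
	unsigned interval = 3;
	unsigned diff = 5 * interval;
	unsigned regind = (diff * SIZE_INV(interval)) >> SIZE_INV_SHIFT;

	assert(regind == 5);
	return (0);
}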
JEMALLOC_INLINE prof_ctx_t * JEMALLOC_INLINE prof_tctx_t *
arena_prof_ctx_get(const void *ptr) arena_prof_tctx_get(const void *ptr)
{ {
prof_ctx_t *ret; prof_tctx_t *ret;
arena_chunk_t *chunk; arena_chunk_t *chunk;
size_t pageind, mapbits;
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
assert(CHUNK_ADDR2BASE(ptr) != ptr);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; if (likely(chunk != ptr)) {
mapbits = arena_mapbits_get(chunk, pageind); size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
size_t mapbits = arena_mapbits_get(chunk, pageind);
assert((mapbits & CHUNK_MAP_ALLOCATED) != 0); assert((mapbits & CHUNK_MAP_ALLOCATED) != 0);
if ((mapbits & CHUNK_MAP_LARGE) == 0) { if (likely((mapbits & CHUNK_MAP_LARGE) == 0))
if (prof_promote) ret = (prof_tctx_t *)(uintptr_t)1U;
ret = (prof_ctx_t *)(uintptr_t)1U;
else { else {
arena_run_t *run = (arena_run_t *)((uintptr_t)chunk + arena_chunk_map_misc_t *elm = arena_miscelm_get(chunk,
(uintptr_t)((pageind - (mapbits >> LG_PAGE)) << pageind);
LG_PAGE)); ret = atomic_read_p(&elm->prof_tctx_pun);
size_t binind = arena_ptr_small_binind_get(ptr,
mapbits);
arena_bin_info_t *bin_info = &arena_bin_info[binind];
unsigned regind;
regind = arena_run_regind(run, bin_info, ptr);
ret = *(prof_ctx_t **)((uintptr_t)run +
bin_info->ctx0_offset + (regind *
sizeof(prof_ctx_t *)));
} }
} else } else
ret = arena_mapp_get(chunk, pageind)->prof_ctx; ret = huge_prof_tctx_get(ptr);
return (ret); return (ret);
} }
JEMALLOC_INLINE void JEMALLOC_INLINE void
arena_prof_ctx_set(const void *ptr, size_t usize, prof_ctx_t *ctx) arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
{ {
arena_chunk_t *chunk; arena_chunk_t *chunk;
size_t pageind;
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
assert(CHUNK_ADDR2BASE(ptr) != ptr);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; if (likely(chunk != ptr)) {
size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
assert(arena_mapbits_allocated_get(chunk, pageind) != 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
if (usize > SMALL_MAXCLASS || (prof_promote && if (unlikely(usize > SMALL_MAXCLASS || (uintptr_t)tctx >
((uintptr_t)ctx != (uintptr_t)1U || arena_mapbits_large_get(chunk, (uintptr_t)1U)) {
pageind) != 0))) { arena_chunk_map_misc_t *elm;
assert(arena_mapbits_large_get(chunk, pageind) != 0); assert(arena_mapbits_large_get(chunk, pageind) != 0);
arena_mapp_get(chunk, pageind)->prof_ctx = ctx;
elm = arena_miscelm_get(chunk, pageind);
atomic_write_p(&elm->prof_tctx_pun, tctx);
} else { } else {
/*
* tctx must always be initialized for large runs.
* Assert that the surrounding conditional logic is
* equivalent to checking whether ptr refers to a large
* run.
*/
assert(arena_mapbits_large_get(chunk, pageind) == 0); assert(arena_mapbits_large_get(chunk, pageind) == 0);
if (prof_promote == false) { }
size_t mapbits = arena_mapbits_get(chunk, pageind); } else
arena_run_t *run = (arena_run_t *)((uintptr_t)chunk + huge_prof_tctx_set(ptr, tctx);
(uintptr_t)((pageind - (mapbits >> LG_PAGE)) << }
LG_PAGE));
size_t binind;
arena_bin_info_t *bin_info;
unsigned regind;
binind = arena_ptr_small_binind_get(ptr, mapbits); JEMALLOC_INLINE void
bin_info = &arena_bin_info[binind]; arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
regind = arena_run_regind(run, bin_info, ptr); prof_tctx_t *old_tctx)
{
*((prof_ctx_t **)((uintptr_t)run + cassert(config_prof);
bin_info->ctx0_offset + (regind * sizeof(prof_ctx_t assert(ptr != NULL);
*)))) = ctx;
} if (unlikely(usize > SMALL_MAXCLASS || (ptr == old_ptr &&
(uintptr_t)old_tctx > (uintptr_t)1U))) {
arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr)) {
size_t pageind;
arena_chunk_map_misc_t *elm;
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >>
LG_PAGE;
assert(arena_mapbits_allocated_get(chunk, pageind) !=
0);
assert(arena_mapbits_large_get(chunk, pageind) != 0);
elm = arena_miscelm_get(chunk, pageind);
atomic_write_p(&elm->prof_tctx_pun,
(prof_tctx_t *)(uintptr_t)1U);
} else
huge_prof_tctx_reset(ptr);
} }
} }
JEMALLOC_ALWAYS_INLINE void * JEMALLOC_ALWAYS_INLINE void *
arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache) arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
tcache_t *tcache)
{ {
tcache_t *tcache;
assert(size != 0); assert(size != 0);
assert(size <= arena_maxclass);
if (size <= SMALL_MAXCLASS) { arena = arena_choose(tsd, arena);
if (try_tcache && (tcache = tcache_get(true)) != NULL) if (unlikely(arena == NULL))
return (tcache_alloc_small(tcache, size, zero)); return (NULL);
else {
return (arena_malloc_small(choose_arena(arena), size, if (likely(size <= SMALL_MAXCLASS)) {
if (likely(tcache != NULL)) {
return (tcache_alloc_small(tsd, arena, tcache, size,
zero)); zero));
} } else
} else { return (arena_malloc_small(arena, size, zero));
} else if (likely(size <= large_maxclass)) {
/* /*
* Initialize tcache after checking size in order to avoid * Initialize tcache after checking size in order to avoid
* infinite recursion during tcache initialization. * infinite recursion during tcache initialization.
*/ */
if (try_tcache && size <= tcache_maxclass && (tcache = if (likely(tcache != NULL) && size <= tcache_maxclass) {
tcache_get(true)) != NULL) return (tcache_alloc_large(tsd, arena, tcache, size,
return (tcache_alloc_large(tcache, size, zero));
else {
return (arena_malloc_large(choose_arena(arena), size,
zero)); zero));
} } else
} return (arena_malloc_large(arena, size, zero));
} else
return (huge_malloc(tsd, arena, size, zero, tcache));
}
JEMALLOC_ALWAYS_INLINE arena_t *
arena_aalloc(const void *ptr)
{
arena_chunk_t *chunk;
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr))
return (extent_node_arena_get(&chunk->node));
else
return (huge_aalloc(ptr));
} }
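arena_aalloc() above is the template for the chunk != ptr dispatch used throughout this change: huge allocations are chunk-aligned, so rounding a pointer down to its chunk base returns the pointer itself exactly when the allocation is huge. A hedged standalone sketch (chunk size assumed to be a power of two):

#include <stdbool.h>
#include <stdint.h>

#define	DEMO_CHUNKSIZE	((uintptr_t)1 << 21)	/* Assumed 2 MiB chunks. */
#define	DEMO_ADDR2BASE(p)						\
	((void *)((uintptr_t)(p) & ~(DEMO_CHUNKSIZE - 1)))

/* Small/large pointers land past the chunk header, so only huge
 * allocations sit exactly at a chunk base. */
static bool
demo_is_huge(const void *ptr)
{

	return (DEMO_ADDR2BASE(ptr) == ptr);
}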
/* Return the size of the allocation pointed to by ptr. */ /* Return the size of the allocation pointed to by ptr. */
...@@ -980,81 +1206,139 @@ arena_salloc(const void *ptr, bool demote)
{ {
size_t ret; size_t ret;
arena_chunk_t *chunk; arena_chunk_t *chunk;
size_t pageind, binind; size_t pageind;
szind_t binind;
assert(ptr != NULL); assert(ptr != NULL);
assert(CHUNK_ADDR2BASE(ptr) != ptr);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr)) {
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
assert(arena_mapbits_allocated_get(chunk, pageind) != 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
binind = arena_mapbits_binind_get(chunk, pageind); binind = arena_mapbits_binind_get(chunk, pageind);
if (binind == BININD_INVALID || (config_prof && demote == false && if (unlikely(binind == BININD_INVALID || (config_prof && !demote
prof_promote && arena_mapbits_large_get(chunk, pageind) != 0)) { && arena_mapbits_large_get(chunk, pageind) != 0))) {
/* /*
* Large allocation. In the common case (demote == true), and * Large allocation. In the common case (demote), and
* as this is an inline function, most callers will only end up * as this is an inline function, most callers will only
* looking at binind to determine that ptr is a small * end up looking at binind to determine that ptr is a
* allocation. * small allocation.
*/ */
assert(((uintptr_t)ptr & PAGE_MASK) == 0); assert(config_cache_oblivious || ((uintptr_t)ptr &
ret = arena_mapbits_large_size_get(chunk, pageind); PAGE_MASK) == 0);
ret = arena_mapbits_large_size_get(chunk, pageind) -
large_pad;
assert(ret != 0); assert(ret != 0);
assert(pageind + (ret>>LG_PAGE) <= chunk_npages); assert(pageind + ((ret+large_pad)>>LG_PAGE) <=
assert(ret == PAGE || arena_mapbits_large_size_get(chunk, chunk_npages);
pageind+(ret>>LG_PAGE)-1) == 0);
assert(binind == arena_mapbits_binind_get(chunk,
pageind+(ret>>LG_PAGE)-1));
assert(arena_mapbits_dirty_get(chunk, pageind) == assert(arena_mapbits_dirty_get(chunk, pageind) ==
arena_mapbits_dirty_get(chunk, pageind+(ret>>LG_PAGE)-1)); arena_mapbits_dirty_get(chunk,
pageind+((ret+large_pad)>>LG_PAGE)-1));
} else { } else {
/* /*
* Small allocation (possibly promoted to a large object due to * Small allocation (possibly promoted to a large
* prof_promote). * object).
*/ */
assert(arena_mapbits_large_get(chunk, pageind) != 0 || assert(arena_mapbits_large_get(chunk, pageind) != 0 ||
arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, arena_ptr_small_binind_get(ptr,
pageind)) == binind); arena_mapbits_get(chunk, pageind)) == binind);
ret = arena_bin_info[binind].reg_size; ret = index2size(binind);
} }
} else
ret = huge_salloc(ptr);
return (ret); return (ret);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
arena_dalloc(arena_t *arena, arena_chunk_t *chunk, void *ptr, bool try_tcache) arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
{ {
arena_chunk_t *chunk;
size_t pageind, mapbits; size_t pageind, mapbits;
tcache_t *tcache;
assert(arena != NULL);
assert(chunk->arena == arena);
assert(ptr != NULL); assert(ptr != NULL);
assert(CHUNK_ADDR2BASE(ptr) != ptr);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr)) {
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
mapbits = arena_mapbits_get(chunk, pageind); mapbits = arena_mapbits_get(chunk, pageind);
assert(arena_mapbits_allocated_get(chunk, pageind) != 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
if ((mapbits & CHUNK_MAP_LARGE) == 0) { if (likely((mapbits & CHUNK_MAP_LARGE) == 0)) {
/* Small allocation. */ /* Small allocation. */
if (try_tcache && (tcache = tcache_get(false)) != NULL) { if (likely(tcache != NULL)) {
size_t binind; szind_t binind = arena_ptr_small_binind_get(ptr,
mapbits);
binind = arena_ptr_small_binind_get(ptr, mapbits); tcache_dalloc_small(tsd, tcache, ptr, binind);
tcache_dalloc_small(tcache, ptr, binind); } else {
} else arena_dalloc_small(extent_node_arena_get(
arena_dalloc_small(arena, chunk, ptr, pageind); &chunk->node), chunk, ptr, pageind);
}
} else { } else {
size_t size = arena_mapbits_large_size_get(chunk, pageind); size_t size = arena_mapbits_large_size_get(chunk,
pageind);
assert(((uintptr_t)ptr & PAGE_MASK) == 0); assert(config_cache_oblivious || ((uintptr_t)ptr &
PAGE_MASK) == 0);
if (try_tcache && size <= tcache_maxclass && (tcache = if (likely(tcache != NULL) && size - large_pad <=
tcache_get(false)) != NULL) { tcache_maxclass) {
tcache_dalloc_large(tcache, ptr, size); tcache_dalloc_large(tsd, tcache, ptr, size -
large_pad);
} else {
arena_dalloc_large(extent_node_arena_get(
&chunk->node), chunk, ptr);
}
}
} else } else
arena_dalloc_large(arena, chunk, ptr); huge_dalloc(tsd, ptr, tcache);
}
JEMALLOC_ALWAYS_INLINE void
arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
{
arena_chunk_t *chunk;
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr)) {
if (config_prof && opt_prof) {
size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >>
LG_PAGE;
assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
if (arena_mapbits_large_get(chunk, pageind) != 0) {
/*
* Make sure to use promoted size, not request
* size.
*/
size = arena_mapbits_large_size_get(chunk,
pageind) - large_pad;
} }
}
assert(s2u(size) == s2u(arena_salloc(ptr, false)));
if (likely(size <= SMALL_MAXCLASS)) {
/* Small allocation. */
if (likely(tcache != NULL)) {
szind_t binind = size2index(size);
tcache_dalloc_small(tsd, tcache, ptr, binind);
} else {
size_t pageind = ((uintptr_t)ptr -
(uintptr_t)chunk) >> LG_PAGE;
arena_dalloc_small(extent_node_arena_get(
&chunk->node), chunk, ptr, pageind);
}
} else {
assert(config_cache_oblivious || ((uintptr_t)ptr &
PAGE_MASK) == 0);
if (likely(tcache != NULL) && size <= tcache_maxclass)
tcache_dalloc_large(tsd, tcache, ptr, size);
else {
arena_dalloc_large(extent_node_arena_get(
&chunk->node), chunk, ptr);
}
}
} else
huge_dalloc(tsd, ptr, tcache);
} }
# endif /* JEMALLOC_ARENA_INLINE_B */ # endif /* JEMALLOC_ARENA_INLINE_B */
#endif #endif
...
...@@ -11,6 +11,7 @@
#define atomic_read_uint64(p) atomic_add_uint64(p, 0) #define atomic_read_uint64(p) atomic_add_uint64(p, 0)
#define atomic_read_uint32(p) atomic_add_uint32(p, 0) #define atomic_read_uint32(p) atomic_add_uint32(p, 0)
#define atomic_read_p(p) atomic_add_p(p, NULL)
#define atomic_read_z(p) atomic_add_z(p, 0) #define atomic_read_z(p) atomic_add_z(p, 0)
#define atomic_read_u(p) atomic_add_u(p, 0) #define atomic_read_u(p) atomic_add_u(p, 0)
...@@ -18,113 +19,244 @@
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_INLINES #ifdef JEMALLOC_H_INLINES
/*
* All arithmetic functions return the arithmetic result of the atomic
* operation. Some atomic operation APIs return the value prior to mutation, in
* which case the following functions must redundantly compute the result so
* that it can be returned. These functions are normally inlined, so the extra
* operations can be optimized away if the return values aren't used by the
* callers.
*
* <t> atomic_read_<t>(<t> *p) { return (*p); }
* <t> atomic_add_<t>(<t> *p, <t> x) { return (*p + x); }
* <t> atomic_sub_<t>(<t> *p, <t> x) { return (*p - x); }
* bool atomic_cas_<t>(<t> *p, <t> c, <t> s)
* {
* if (*p != c)
* return (true);
* *p = s;
* return (false);
* }
* void atomic_write_<t>(<t> *p, <t> x) { *p = x; }
*/
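A short usage sketch of these semantics (call sites hypothetical): the CAS variants return false on success, so a successful one-time publish is the negated result.

#include <stdbool.h>
#include <stddef.h>

static size_t demo_nallocs;	/* Lock-free statistics counter. */

static void
demo_count_alloc(void)
{

	atomic_add_z(&demo_nallocs, 1);
}

/* The first caller installs obj into *slot and gets true; later callers
 * see a non-NULL value, the CAS fails (returns true), and this returns
 * false. */
static bool
demo_publish_once(void **slot, void *obj)
{

	return (!atomic_cas_p(slot, NULL, obj));
}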
#ifndef JEMALLOC_ENABLE_INLINE #ifndef JEMALLOC_ENABLE_INLINE
uint64_t atomic_add_uint64(uint64_t *p, uint64_t x); uint64_t atomic_add_uint64(uint64_t *p, uint64_t x);
uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x); uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x);
bool atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s);
void atomic_write_uint64(uint64_t *p, uint64_t x);
uint32_t atomic_add_uint32(uint32_t *p, uint32_t x); uint32_t atomic_add_uint32(uint32_t *p, uint32_t x);
uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x); uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x);
bool atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s);
void atomic_write_uint32(uint32_t *p, uint32_t x);
void *atomic_add_p(void **p, void *x);
void *atomic_sub_p(void **p, void *x);
bool atomic_cas_p(void **p, void *c, void *s);
void atomic_write_p(void **p, const void *x);
size_t atomic_add_z(size_t *p, size_t x); size_t atomic_add_z(size_t *p, size_t x);
size_t atomic_sub_z(size_t *p, size_t x); size_t atomic_sub_z(size_t *p, size_t x);
bool atomic_cas_z(size_t *p, size_t c, size_t s);
void atomic_write_z(size_t *p, size_t x);
unsigned atomic_add_u(unsigned *p, unsigned x); unsigned atomic_add_u(unsigned *p, unsigned x);
unsigned atomic_sub_u(unsigned *p, unsigned x); unsigned atomic_sub_u(unsigned *p, unsigned x);
bool atomic_cas_u(unsigned *p, unsigned c, unsigned s);
void atomic_write_u(unsigned *p, unsigned x);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_))
/******************************************************************************/ /******************************************************************************/
/* 64-bit operations. */ /* 64-bit operations. */
#if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3) #if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3)
# ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 # if (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
uint64_t t = x;
return (__sync_add_and_fetch(p, x)); asm volatile (
"lock; xaddq %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
} }
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x) atomic_sub_uint64(uint64_t *p, uint64_t x)
{ {
uint64_t t;
return (__sync_sub_and_fetch(p, x)); x = (uint64_t)(-(int64_t)x);
t = x;
asm volatile (
"lock; xaddq %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
} }
#elif (defined(_MSC_VER))
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgq %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory" /* Clobbers. */
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
asm volatile (
"xchgq %1, %0;" /* Lock is implied by xchgq. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (InterlockedExchangeAdd64(p, x)); return (atomic_fetch_add(a, x) + x);
} }
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x) atomic_sub_uint64(uint64_t *p, uint64_t x)
{ {
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
}
return (InterlockedExchangeAdd64(p, -((int64_t)x))); JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
atomic_store(a, x);
} }
#elif (defined(JEMALLOC_OSATOMIC)) # elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
return (OSAtomicAdd64((int64_t)x, (int64_t *)p)); /*
* atomic_fetchadd_64() doesn't exist, but we only ever use this
* function on LP64 systems, so atomic_fetchadd_long() will do.
*/
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (atomic_fetchadd_long(p, (unsigned long)x) + x);
} }
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x) atomic_sub_uint64(uint64_t *p, uint64_t x)
{ {
return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p)); assert(sizeof(uint64_t) == sizeof(unsigned long));
return (atomic_fetchadd_long(p, (unsigned long)(-(long)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (!atomic_cmpset_long(p, (unsigned long)c, (unsigned long)s));
} }
# elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
atomic_store_rel_long(p, x);
}
# elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
asm volatile ( return (OSAtomicAdd64((int64_t)x, (int64_t *)p));
"lock; xaddq %0, %1;"
: "+r" (x), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (x);
} }
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x) atomic_sub_uint64(uint64_t *p, uint64_t x)
{ {
x = (uint64_t)(-(int64_t)x); return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p));
asm volatile ( }
"lock; xaddq %0, %1;"
: "+r" (x), "=m" (*p) /* Outputs. */ JEMALLOC_INLINE bool
: "m" (*p) /* Inputs. */ atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
); {
return (x); return (!OSAtomicCompareAndSwap64(c, s, (int64_t *)p));
} }
# elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
uint64_t o;
	/* The documented OSAtomic*() API does not expose an atomic exchange. */
do {
o = atomic_read_uint64(p);
} while (atomic_cas_uint64(p, o, x));
}
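This emulation is lock-free rather than wait-free: each iteration re-reads *p and retries the CAS until no concurrent write intervenes, so system-wide progress is guaranteed even though an individual writer can in principle retry indefinitely.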
# elif (defined(_MSC_VER))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
/* return (InterlockedExchangeAdd64(p, x) + x);
* atomic_fetchadd_64() doesn't exist, but we only ever use this
* function on LP64 systems, so atomic_fetchadd_long() will do.
*/
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (atomic_fetchadd_long(p, (unsigned long)x) + x);
} }
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x) atomic_sub_uint64(uint64_t *p, uint64_t x)
{ {
assert(sizeof(uint64_t) == sizeof(unsigned long)); return (InterlockedExchangeAdd64(p, -((int64_t)x)) - x);
}
return (atomic_fetchadd_long(p, (unsigned long)(-(long)x)) - x); JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint64_t o;
o = InterlockedCompareExchange64(p, s, c);
return (o != c);
} }
# elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
InterlockedExchange64(p, x);
}
# elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8) || \
defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE uint64_t JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x) atomic_add_uint64(uint64_t *p, uint64_t x)
{ {
...@@ -138,6 +270,20 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
return (__sync_sub_and_fetch(p, x)); return (__sync_sub_and_fetch(p, x));
} }
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
return (!__sync_bool_compare_and_swap(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
__sync_lock_test_and_set(p, x);
}
# else # else
# error "Missing implementation for 64-bit atomic operations" # error "Missing implementation for 64-bit atomic operations"
# endif # endif
...@@ -145,90 +291,184 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
/******************************************************************************/ /******************************************************************************/
/* 32-bit operations. */ /* 32-bit operations. */
#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 #if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x) atomic_add_uint32(uint32_t *p, uint32_t x)
{ {
uint32_t t = x;
return (__sync_add_and_fetch(p, x)); asm volatile (
"lock; xaddl %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
} }
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x) atomic_sub_uint32(uint32_t *p, uint32_t x)
{ {
uint32_t t;
return (__sync_sub_and_fetch(p, x)); x = (uint32_t)(-(int32_t)x);
t = x;
asm volatile (
"lock; xaddl %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
} }
#elif (defined(_MSC_VER))
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgl %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory"
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
asm volatile (
"xchgl %1, %0;" /* Lock is implied by xchgl. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x) atomic_add_uint32(uint32_t *p, uint32_t x)
{ {
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (InterlockedExchangeAdd(p, x)); return (atomic_fetch_add(a, x) + x);
} }
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x) atomic_sub_uint32(uint32_t *p, uint32_t x)
{ {
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
return (InterlockedExchangeAdd(p, -((int32_t)x))); JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
} }
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
atomic_store(a, x);
}
#elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x) atomic_add_uint32(uint32_t *p, uint32_t x)
{ {
return (OSAtomicAdd32((int32_t)x, (int32_t *)p)); return (atomic_fetchadd_32(p, x) + x);
} }
JEMALLOC_INLINE uint32_t JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x) atomic_sub_uint32(uint32_t *p, uint32_t x)
{ {
return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p)); return (atomic_fetchadd_32(p, (uint32_t)(-(int32_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!atomic_cmpset_32(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
atomic_store_rel_32(p, x);
} }
#elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__)) #elif (defined(JEMALLOC_OSATOMIC))
 JEMALLOC_INLINE uint32_t
 atomic_add_uint32(uint32_t *p, uint32_t x)
 {
 
-    asm volatile (
-        "lock; xaddl %0, %1;"
-        : "+r" (x), "=m" (*p) /* Outputs. */
-        : "m" (*p) /* Inputs. */
-        );
-
-    return (x);
+    return (OSAtomicAdd32((int32_t)x, (int32_t *)p));
 }
 
 JEMALLOC_INLINE uint32_t
 atomic_sub_uint32(uint32_t *p, uint32_t x)
 {
 
-    x = (uint32_t)(-(int32_t)x);
-    asm volatile (
-        "lock; xaddl %0, %1;"
-        : "+r" (x), "=m" (*p) /* Outputs. */
-        : "m" (*p) /* Inputs. */
-        );
-
-    return (x);
+    return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p));
+}
+
+JEMALLOC_INLINE bool
+atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
+{
+
+    return (!OSAtomicCompareAndSwap32(c, s, (int32_t *)p));
+}
+
+JEMALLOC_INLINE void
+atomic_write_uint32(uint32_t *p, uint32_t x)
+{
+    uint32_t o;
+
+    /* The documented OSAtomic*() API does not expose an atomic exchange. */
+    do {
+        o = atomic_read_uint32(p);
+    } while (atomic_cas_uint32(p, o, x));
 }
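Since the OSAtomic*() API offers no atomic exchange, atomic_write_uint32() above emulates a store with a read/CAS retry loop. The pattern generalizes to any platform that only provides CAS; a portable sketch using the __sync builtins (an illustrative stand-in, not the code shipped on OS X):

#include <stdint.h>

/* Emulate an atomic store with CAS: reread the current value and retry
 * until we successfully swap it for x. */
static void
store_via_cas_u32(uint32_t *p, uint32_t x)
{
    uint32_t o;

    do {
        o = *(volatile uint32_t *)p;
    } while (!__sync_bool_compare_and_swap(p, o, x));
}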
-#elif (defined(JEMALLOC_ATOMIC9))
+#elif (defined(_MSC_VER))
 JEMALLOC_INLINE uint32_t
 atomic_add_uint32(uint32_t *p, uint32_t x)
 {
 
-    return (atomic_fetchadd_32(p, x) + x);
+    return (InterlockedExchangeAdd(p, x) + x);
 }
 
 JEMALLOC_INLINE uint32_t
 atomic_sub_uint32(uint32_t *p, uint32_t x)
 {
 
-    return (atomic_fetchadd_32(p, (uint32_t)(-(int32_t)x)) - x);
+    return (InterlockedExchangeAdd(p, -((int32_t)x)) - x);
+}
+
+JEMALLOC_INLINE bool
+atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
+{
+    uint32_t o;
+
+    o = InterlockedCompareExchange(p, s, c);
+    return (o != c);
+}
+
+JEMALLOC_INLINE void
+atomic_write_uint32(uint32_t *p, uint32_t x)
+{
+
+    InterlockedExchange(p, x);
 }
-#elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
+#elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4) || \
+    defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
 JEMALLOC_INLINE uint32_t
 atomic_add_uint32(uint32_t *p, uint32_t x)
 {
@@ -242,10 +482,72 @@ atomic_sub_uint32(uint32_t *p, uint32_t x)
     return (__sync_sub_and_fetch(p, x));
 }
+
+JEMALLOC_INLINE bool
+atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
+{
+
+    return (!__sync_bool_compare_and_swap(p, c, s));
+}
+
+JEMALLOC_INLINE void
+atomic_write_uint32(uint32_t *p, uint32_t x)
+{
+
+    __sync_lock_test_and_set(p, x);
+}
 #else
 # error "Missing implementation for 32-bit atomic operations"
 #endif
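The final fallback relies on the __sync builtins: __sync_add_and_fetch() and __sync_sub_and_fetch() already return the post-operation value, so no correction term is needed, and __sync_lock_test_and_set() doubles as the store (the GCC documentation describes it as an acquire barrier rather than a full barrier). A small runnable check of those return conventions:

#include <assert.h>
#include <stdint.h>

int
main(void)
{
    uint32_t v = 1;

    /* add_and_fetch returns the updated value... */
    assert(__sync_add_and_fetch(&v, 2) == 3);
    /* ...while lock_test_and_set returns the value it replaced. */
    assert(__sync_lock_test_and_set(&v, 9) == 3);
    assert(v == 9);
    return (0);
}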
+
+/******************************************************************************/
+/* Pointer operations. */
+JEMALLOC_INLINE void *
+atomic_add_p(void **p, void *x)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    return ((void *)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
+#elif (LG_SIZEOF_PTR == 2)
+    return ((void *)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
+#endif
+}
+
+JEMALLOC_INLINE void *
+atomic_sub_p(void **p, void *x)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    return ((void *)atomic_add_uint64((uint64_t *)p,
+        (uint64_t)-((int64_t)x)));
+#elif (LG_SIZEOF_PTR == 2)
+    return ((void *)atomic_add_uint32((uint32_t *)p,
+        (uint32_t)-((int32_t)x)));
+#endif
+}
+
+JEMALLOC_INLINE bool
+atomic_cas_p(void **p, void *c, void *s)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
+#elif (LG_SIZEOF_PTR == 2)
+    return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
+#endif
+}
+
+JEMALLOC_INLINE void
+atomic_write_p(void **p, const void *x)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    atomic_write_uint64((uint64_t *)p, (uint64_t)x);
+#elif (LG_SIZEOF_PTR == 2)
+    atomic_write_uint32((uint32_t *)p, (uint32_t)x);
+#endif
+}
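The new pointer wrappers dispatch on LG_SIZEOF_PTR, the configure-time base-2 log of sizeof(void *): 3 selects the 64-bit primitives on LP64 targets and 2 selects the 32-bit ones on ILP32. A self-contained sketch of the same dispatch, using SIZE_MAX as a stand-in for the configure-time constant (illustration only):

#include <stdbool.h>
#include <stdint.h>

/* False = success, matching atomic_cas_p() above. */
static bool
cas_ptr(void **p, void *c, void *s)
{
#if (SIZE_MAX > UINT32_MAX)     /* Stand-in for LG_SIZEOF_PTR == 3. */
    return (!__sync_bool_compare_and_swap((uint64_t *)p, (uint64_t)c,
        (uint64_t)s));
#else                           /* Stand-in for LG_SIZEOF_PTR == 2. */
    return (!__sync_bool_compare_and_swap((uint32_t *)p, (uint32_t)c,
        (uint32_t)s));
#endif
}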
 /******************************************************************************/
 /* size_t operations. */
 JEMALLOC_INLINE size_t
@@ -272,6 +574,28 @@ atomic_sub_z(size_t *p, size_t x)
 #endif
 }
+
+JEMALLOC_INLINE bool
+atomic_cas_z(size_t *p, size_t c, size_t s)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
+#elif (LG_SIZEOF_PTR == 2)
+    return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
+#endif
+}
+
+JEMALLOC_INLINE void
+atomic_write_z(size_t *p, size_t x)
+{
+
+#if (LG_SIZEOF_PTR == 3)
+    atomic_write_uint64((uint64_t *)p, (uint64_t)x);
+#elif (LG_SIZEOF_PTR == 2)
+    atomic_write_uint32((uint32_t *)p, (uint32_t)x);
+#endif
+}
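atomic_cas_z() enables lock-free read-modify-write loops on size_t values; a typical use is raising a high-water mark. A hypothetical helper (not from the patch), with a __sync stand-in CAS so the sketch compiles on its own:

#include <stdbool.h>
#include <stddef.h>

static bool
cas_z(size_t *p, size_t c, size_t s)
{
    return (!__sync_bool_compare_and_swap(p, c, s));
}

/* Monotonically raise *max to cand; back off gracefully if another
 * thread already published a larger value. */
static void
update_max_z(size_t *max, size_t cand)
{
    size_t cur;

    do {
        cur = *(volatile size_t *)max;
        if (cand <= cur)
            return;
    } while (cas_z(max, cur, cand));
}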
 /******************************************************************************/
 /* unsigned operations. */
 JEMALLOC_INLINE unsigned
@@ -297,6 +621,29 @@ atomic_sub_u(unsigned *p, unsigned x)
         (uint32_t)-((int32_t)x)));
 #endif
 }
+
+JEMALLOC_INLINE bool
+atomic_cas_u(unsigned *p, unsigned c, unsigned s)
+{
+
+#if (LG_SIZEOF_INT == 3)
+    return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
+#elif (LG_SIZEOF_INT == 2)
+    return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
+#endif
+}
+
+JEMALLOC_INLINE void
+atomic_write_u(unsigned *p, unsigned x)
+{
+
+#if (LG_SIZEOF_INT == 3)
+    atomic_write_uint64((uint64_t *)p, (uint64_t)x);
+#elif (LG_SIZEOF_INT == 2)
+    atomic_write_uint32((uint32_t *)p, (uint32_t)x);
+#endif
+}
 /******************************************************************************/
 #endif

@@ -10,9 +10,7 @@
 #ifdef JEMALLOC_H_EXTERNS
 
 void *base_alloc(size_t size);
-void *base_calloc(size_t number, size_t size);
-extent_node_t *base_node_alloc(void);
-void base_node_dealloc(extent_node_t *node);
+void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);
 bool base_boot(void);
 void base_prefork(void);
 void base_postfork_parent(void);
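In the base allocator's interface, base_calloc() and the extent-node alloc/dealloc pair give way to a single stats query. A hypothetical caller, assuming only the declaration above:

#include <stddef.h>
#include <stdio.h>

void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);

static void
print_base_stats(void)
{
    size_t allocated, resident, mapped;

    base_stats_get(&allocated, &resident, &mapped);
    printf("base: allocated=%zu resident=%zu mapped=%zu\n",
        allocated, resident, mapped);
}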