well as avoiding a switch statement. This change has no significant impact
on performance when branch prediction is successful at predicting the sizes
of objects passed to free(), but in the case that the object sizes are
semi-random, this change has the potential to prevent many branch prediction
misses, thus improving performance substantially.
Take advantage of alignment guarantees in ipalloc(), and pad object sizes to
something less than a power of two when possible. This has the potential
to substantially reduce internal fragmentation for objects allocated via
posix_memalign().
Avoid an unnecessary pow2_ceil() call in arena_ralloc().
Submitted by: djam8193ah@hotmail.com
and instead creating a small allocation for each malloc(0) call. The
optional SysV compatibility behavior remains unchanged.
Add a couple of assertions.
Fix a couple of typos in error message strings.
4kB pages), in order to avoid dangerous rounding error when calculating
fullness limits during run promotion/demotion.
Convert a structure bitfield to a normal field in arena_run_t. This should
have been changed along with the other fields in revision 1.120.
bounds. [1]
Modify logic for utilizing the data segment, such that it is possible to
create huge allocations there.
Shrink the data segment when deallocating a chunk, if it is at the end of
the data segment.
Rename chunk_size to csize in huge_malloc(), in order to avoid masking a
static variable of the same name. [1]
Reported by: Paul Allen <nospam@ugcs.caltech.edu>
races. This isn't currently necessary for libpthread or libthr, but
without it external threads libraries like the linuxthreads port are
not safe to use.
Reported by: ganbold@micom.mng.net
have to be calculated once per allocator operation.
Make nil const.
Update various comments.
Remove/avoid division where possible.
For the one division operation that remains in the critical path, add a
switch statement that has a case for each small size class, and do division
with a constant divisor in each case. This allows the compiler to generate
optimized code that does not use hardware division [1].
Obtained from: peter [1]
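A minimal sketch of the constant-divisor switch described above; the size
classes and the function name are illustrative, not the actual malloc.c code:

    /*
     * Each case divides by a compile-time constant, so the compiler can
     * strength-reduce the division into multiply/shift sequences instead
     * of emitting a hardware divide instruction.
     */
    static inline unsigned
    size2regind(unsigned diff, unsigned size)
    {
        switch (size) {
        case 8:   return (diff / 8);
        case 16:  return (diff / 16);
        case 32:  return (diff / 32);
        case 48:  return (diff / 48);
        case 64:  return (diff / 64);
        default:  return (diff / size);  /* falls back to hardware division */
        }
    }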
* Avoid choosing an arena until it's certain that an arena is needed
for allocation.
* Convert division/multiplication to bitshifting where possible.
* Avoid accessing TLS variables in single-threaded code.
* Reduce the amount of pointer dereferencing.
* Move lock acquisition in critical paths to only protect the code
that requires synchronization, and completely remove locking where
possible.
determine its value at run time according to other relevant values. This
avoids the creation of runs that are incompletely utilized, as long as
pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
setting).
Increase the size of several structure bitfields in arena_run_t in order
to avoid integer overflow in the case that a run's header does not overlap
with the space that is usable as application allocation regions. Given
the tiny_min_2pow change, this fix has no additional impact unless
pagesize is >32kB.
Reported by: kris
internally used chunk to start at the beginning of the heap, rather
than at a chunk-aligned address. This reduces mapped memory somewhat
for 32-bit architectures.
Add the arena_run_link_t type and use it wherever a run object is only
used as a ring 'header'. This saves approximately 40 kB of memory per
arena.
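A hedged sketch of the idea; the field names are assumptions, not the actual
malloc.c definitions:

    /*
     * Where a run object is used only as a ring 'header', only the ring
     * linkage is needed, not the (much larger) full arena_run_t header.
     */
    typedef struct arena_run_link_s arena_run_link_t;
    struct arena_run_link_s {
        arena_run_link_t *link_next;  /* next run in the ring */
        arena_run_link_t *link_prev;  /* previous run in the ring */
    };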
Remove an obsolete (no longer used) code path from base_alloc(), which
supported the internal allocation of objects larger than the chunk
size.
Enhance chunk_dealloc() to cache chunk addresses for all deallocated
chunks. This has no impact for most programs, but has the potential
to reduce VM map fragmentation for programs that use huge
allocations.
that no linear searching is necessary if we resort to allocating from a
run that is known to be mostly full. There are pathological edge cases
that could have caused severely degraded performance, and this change
fixes that.
close enough to each other that reallocation would allocate a new region
of the same size. This improves the performance of repeated incremental
reallocations by up to three orders of magnitude. [1]
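A rough sketch of the short-circuit described above; size2class() and
ralloc_move() are hypothetical helpers, not the actual malloc.c code:

    #include <stddef.h>

    size_t size2class(size_t size);              /* hypothetical */
    void  *ralloc_move(void *, size_t, size_t);  /* hypothetical */

    void *
    arena_ralloc_sketch(void *ptr, size_t old_size, size_t new_size)
    {
        /* If the old and new sizes map to the same region size class, the
         * existing region already fits; reuse it without copying. */
        if (size2class(new_size) == size2class(old_size))
            return (ptr);
        return (ralloc_move(ptr, old_size, new_size));
    }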
Fix arena_new() to properly constrain run size if a small chunk size was
specified during runtime configuration.
Suggested by: se [1]
allocation patterns that involve a relatively even mixture of many
different size classes.
Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up
using an address-ordered first best fit policy, VM map fragmentation is
much less likely, which makes smaller chunks not as much of a risk. This
reduces the virtual memory size of most applications.
Remove redzones, since program buffer overruns are no longer as likely to
corrupt malloc data structures.
Remove the C MALLOC_OPTIONS flag, and add H and S.
Remove the block of code that tries to use delayed regions in LIFO order,
since from a policy perspective, it conflicts with LRU caching of newly
coalesced regions in arena_undelay(). There are numerous policy
alternatives, and it isn't readily obvious which (if any) is superior;
this change at least has the virtue of being consistent with policy.
fit regions are available, use the delayed regions in LIFO order, in order
to increase locality of reference. We might expect this to cause delayed
regions to be removed from the delay ring buffer more often (since we're
now re-using more recently buffered regions), but numerous tests indicate
that the overall impact on memory usage tends to be good (reduced
fragmentation).
Re-work arena_frag_reg_alloc() so that when large free regions are
exhausted, it uses small regions in a way that favors contiguous allocation
of sequentially allocated small regions. Use arena_frag_reg_alloc() in
this capacity, rather than directly attempting over-fitting of small
requests when no large regions are available.
Remove the bin overfit statistic, since it is no longer relevant due to
the arena_frag_reg_alloc() changes.
Do not specify arena_frag_reg_alloc() as an inline function. It is too
large to benefit much from being inlined, and it is also called in two
places, only one of which is in the critical path (the other call bloated
arena_reg_alloc()).
Call arena_coalesce() for a region before caching it with
arena_mru_cache().
Add assertions that detect the attempted caching of adjacent free regions,
so that we notice this problem when it is first created, rather than in
arena_coalesce(), when it's too late to know how the problem arose.
Reported by: Hans Blancke
problems in cases where regions are faked up for the purposes of red-black
tree searches, since those faked region headers reside on the stack, rather
than in a malloc chunk.
allowing the error to be fatal.
Move a label in order to make sure to properly handle errors in malloc(0).
Reported by: Alastair D'Silva, Saneto Takanori
there is never any need to recursively call the main allocation functions.
Remove recursive spinlock support, since it is no longer needed.
Allow chunks to be as small as the page size.
Correctly propagate OOM errors from arena_new().
broken for non-threaded shared processes in that __tls_get_addr()
assumes the thread pointer is always initialized. This is not the
case. When arenas_map is referenced in choose_arena() and it is
defined as a thread-local variable, it will result in a SIGSEGV.
PR: ia64/91846 (describes the TLS/ia64 bug).
* Add posix_memalign().
* Move calloc() from calloc.c to malloc.c. Add a calloc() implementation in
rtld-elf in order to make the loader happy (even though calloc() isn't
used in rtld-elf).
* Add _malloc_prefork() and _malloc_postfork(), and use them instead of
directly manipulating __malloc_lock (see the sketch after this entry).
Approved by: phk, markm (mentor)
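A minimal sketch of how a threads library might use these hooks around
fork(); the wrapper function itself is illustrative, not actual library code:

    #include <unistd.h>

    void  _malloc_prefork(void);   /* internal libc interfaces named above */
    void  _malloc_postfork(void);
    pid_t __sys_fork(void);

    pid_t
    fork_wrapper(void)
    {
        pid_t pid;

        _malloc_prefork();   /* acquire the malloc lock(s) before forking */
        pid = __sys_fork();  /* the underlying fork system call */
        _malloc_postfork();  /* release the lock(s) in parent and child */
        return (pid);
    }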
surrounding the undef'ing it. It does not seem necessary to
undef a symbol that does not exist, and gcc does not complain
about whether a symbol exists before #undef'ing it.
Spotted by: mingyanguo via ChinaUnix.net forum
Reviewed by: phk
through a realloc-like function.
Make the malloc_active variable a local static to this new function.
Don't warn about recursion more than once per base call.
constify malloc_func.
has been hit, this makes it cover more cases.
Call the message function directly rather than fiddle with flag-saving
when we find an unknown character in our options.
The 'A' flag should not trigger on legal out of memory conditions.
initialization overhead, there's a problem in that we never call
imalloc() and thus malloc_init() for zero-sized allocations. As a
result, malloc(0) returns NULL when it's the first or only malloc in
the program. Any non-zero allocation will initialize the malloc code
with the side-effect that subsequent zero-sized allocations return a
non-NULL pointer. This is because the pointer we return for zero-
sized allocations is calculated from malloc_pageshift, which needs
to be initialized at runtime on ia64.
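A hedged sketch of that dependency; the exact macro in malloc.c may differ:

    #include <stdint.h>

    extern unsigned malloc_pageshift;   /* on ia64, known only at run time */

    /* The zero-size pointer lands in the middle of (protected) page zero,
     * so it cannot be computed before the page shift is initialized. */
    #define ZEROSIZEPTR ((void *)(uintptr_t)(1UL << (malloc_pageshift - 1)))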
The result of the inconsistent behaviour described above is that
configure scripts failed the test for a GNU compatible malloc. This
resulted in a lot of broken ports.
Other, even simpler, solutions were possible as well:
1. Initialize malloc_pageshift with some non-zero value (say 13 for
8KB pages) and keep the runtime adjustment.
2. Stop using malloc_pageshift to calculate ZEROSIZEPTR.
Removal of the runtime adjustment was chosen because then ia64 is the
same as any other platform. That is not to say that using a page size
obtained at runtime is bad per se; it's that its existence is currently
rather gratuitous, and the moment it causes problems is the moment you
need to get rid of it. Hence, it's not unthinkable
that this commit is (partially) reverted some time in the future when
we do have a good reason for it and a good way to achieve it.
Approved by: re@ (rwatson)
Reported by: kris (portmgr@) -- may the ports be with you
it around an application's fork() call. Our new thread libraries
(libthr, libpthread) can now have threads running while another
thread calls fork(). In this case, it is possible for malloc
to be left in an inconsistent state in the child. Our thread
libraries, libpthread in particular, need to use malloc internally
after a fork (in the child).
Reviewed by: davidxu
to be called on first sight of trouble.
"sensitive" is somewhat arbitrarily defined as "setuid, setgid, uid == root
or gid == wheel".
The 'A' option carries no performance penalty.
It is not possible to override this setting: fix the program instead.
Absentmindedly nodded OK to by: various
Also, make an internal _getprogname() that is used only inside
libc. For libc, getprogname(3) is a weak symbol in case a
function of the same name is defined in userland.
If zero bytes are allocated, return a pointer to the middle of page-zero
(which is protected) so that the program will crash if it dereferences
this ill-gotten pointer.
Inspired & Urged by: Theo de Raadt <deraadt@cvs.openbsd.org>
adding (weak definitions to) stubs for some of the pthread
functions. If the threads library is linked in, the real
pthread functions will be pulled in.
Use the following convention for system calls wrapped by the
threads library:
__sys_foo - actual system call
_foo - weak definition to __sys_foo
foo - weak definition to __sys_foo
Change all libc uses of system calls wrapped by the threads
library from foo to _foo. In order to define the prototypes
for _foo(), we introduce namespace.h and un-namespace.h
(suggested by bde). All files that need to reference these
system calls should include namespace.h before any standard
includes, then include un-namespace.h after the standard
includes and before any local includes. <db.h> is an exception
and shouldn't be included in between namespace.h and
un-namespace.h. namespace.h will define foo to _foo, and
un-namespace.h will undefine foo.
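A minimal sketch of that include ordering; the excerpts of namespace.h and
un-namespace.h below are assumed, not their full contents:

    /* namespace.h (excerpt): map public names to the _foo names, so the
     * standard headers end up declaring the prototypes for _foo(). */
    #define read  _read
    #define write _write

    /* un-namespace.h (excerpt): restore the public names. */
    #undef read
    #undef write

    /* A libc source file then uses the following ordering: */
    #include "namespace.h"
    #include <unistd.h>        /* standard includes */
    #include "un-namespace.h"
    /* local includes follow here */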
Try to eliminate some of the recursive calls to MT-safe
functions in libc/stdio in preparation for adding a mutex
to FILE. We have recursive mutexes, but would like to avoid
using them if possible.
Remove unneeded includes of <errno.h> from a few files.
Add $FreeBSD$ to a few files in order to pass commitprep.
Approved by: -arch
stderr in case of warnings and errors.
Rename malloc_options to have a leading underscore; I believe I have been
told that is more correct namespace-wise.
The recent problems with sshd were due to sshd reassigning
`environ' when setenv() thinks it owns it. setenv() subsequently
realloc()s the new version of environ and *boom*
just use _foo() <-- foo(). In the case of a libpthread that doesn't do
call conversion (such as linuxthreads and our upcoming libpthread), this
is adequate. In the case of libc_r, we still need three names, which are
now _thread_sys_foo() <-- _foo() <-- foo().
Convert all internal libc usage of: aio_suspend(), close(), fsync(), msync(),
nanosleep(), open(), fcntl(), read(), and write() to _foo() instead of foo().
Remove all internal libc usage of: creat(), pause(), sleep(), system(),
tcdrain(), wait(), and waitpid().
Make thread cancellation fully POSIX-compliant.
Suggested by: deischen
points. For library functions, the pattern is __sleep() <--
_libc_sleep() <-- sleep(). The arrows represent weak aliases. For
system calls, the pattern is _read() <-- _libc_read() <-- read().
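A hedged sketch of the library-function chain, using the __weak_reference()
macro from FreeBSD's <sys/cdefs.h>; the stub body is illustrative only:

    #include <sys/cdefs.h>

    unsigned int
    __sleep(unsigned int seconds)
    {
        return (seconds);   /* stub; the real __sleep() uses nanosleep() */
    }

    /* The chain __sleep() <-- _libc_sleep() <-- sleep(): each arrow is a
     * weak alias. */
    __weak_reference(__sleep, _libc_sleep);
    __weak_reference(_libc_sleep, sleep);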
changes have made this too expensive. This gains about 1.25% on
worldstone on my SMP machine.
Swap-less machines, for instance PicoBSDs, and machines which experience
page-out traffic (check with top(1)) will probably want to reenable this
with:
ln -s H /etc/malloc.conf
Suggested by: alc (&dyson ?)
realloc functions check for recursion within the malloc code itself. In
a thread-safe library, the single spinlock ensures that no two threads
go inside the protected code at the same time. The thread implementation
is responsible for ensuring that the spinlock does in fact protect malloc.
There was a window of opportunity in which this was not the case. I'll fix
that with a commit RSN.
Our spinlock implementation allows a particular thread to obtain a lock
multiple times, but release the lock with a single unlock call. Since
we're detecting recursion, we know the lock is already owned by the
current thread in a previous call and must not be released in the
current call. This is really far too dependent on this particular
spinlock implementation, so I've added commented out calls to
THREAD_UNLOCK in the appropriate places. We can activate this code when
the spinlock is taught to count each lock operation.
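A heavily hedged sketch of the recursion guard and the unlock issue described
above; malloc_active and THREAD_UNLOCK are named in the log, everything else
is illustrative:

    #include <stdlib.h>

    /* Stand-ins for libc internals; the real spinlock and imalloc() differ. */
    #define THREAD_LOCK()    ((void)0)
    #define THREAD_UNLOCK()  ((void)0)
    static void *imalloc(size_t size) { (void)size; return (NULL); }

    static int malloc_active;   /* non-zero while inside the allocator */

    void *
    malloc(size_t size)
    {
        void *p;

        THREAD_LOCK();
        if (malloc_active++) {
            /* Recursive entry: the outer call on this thread still owns
             * the spinlock, so it must not be released here. */
            malloc_active--;
            /* THREAD_UNLOCK();  -- enable once the spinlock counts each
             * lock operation. */
            return (NULL);
        }
        p = imalloc(size);
        malloc_active--;
        THREAD_UNLOCK();
        return (p);
    }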