2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 This library is all in one file to simplify the most common usage:
16 ftp it, compile it (-O3), and link it into another program. All of
17 the compile-time options default to reasonable values for use on
18 most platforms. You might later want to step through various
19 compile-time and dynamic tuning options.
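  For example, with a typical unix compiler driver (a sketch; the program
  and file names here are just placeholders):

    cc -O3 -c malloc.c
    cc -O3 myprogram.c malloc.o -o myprogram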
21 For convenience, an include file for code using this malloc is at:
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23 You don't really need this .h file unless you call functions not
24 defined in your system include files. The .h file contains only the
25 excerpts from this file needed for using this malloc on ANSI C/C++
26 systems, so long as you haven't changed compile-time options about
27 naming and tuning parameters. If you do, then you can create your
28 own malloc.h that does include all settings by cutting at the point
29 indicated below. Note that you may already by default be using a C
30 library containing a malloc that is based on some version of this
31 malloc (for example in linux). You might still want to use the one
32 in this file to customize settings or to avoid overheads associated
33 with library versions.
37 Supported pointer/size_t representation: 4 or 8 bytes
38 size_t MUST be an unsigned type of the same width as
39 pointers. (If you are using an ancient system that declares
40 size_t as a signed type, or need it to be a different width
41 than pointers, you can use a previous release of this malloc
42 (e.g. 2.7.2) supporting these.)
44 Alignment: 8 bytes (default)
45 This suffices for nearly all current machines and C compilers.
46 However, you can define MALLOC_ALIGNMENT to be wider than this
47 if necessary (up to 128bytes), at the expense of using more space.
49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
50 8 or 16 bytes (if 8byte sizes)
51 Each malloced chunk has a hidden word of overhead holding size
52 and status information, and an additional cross-check word
53 if FOOTERS is defined.
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56 8-byte ptrs: 32 bytes (including overhead)
58 Even a request for zero bytes (i.e., malloc(0)) returns a
59 pointer to something of the minimum allocatable size.
60 The maximum overhead wastage (i.e., number of extra bytes
61 allocated beyond what was requested in malloc) is less than or equal
62 to the minimum size, except for requests >= mmap_threshold that
63 are serviced via mmap(), where the worst case wastage is about
64 32 bytes plus the remainder from a system page (the minimal
65 mmap unit); typically 4096 or 8192 bytes.
67 Security: static-safe; optionally more or less
68 The "security" of malloc refers to the ability of malicious
69 code to accentuate the effects of errors (for example, freeing
70 space that is not currently malloc'ed or overwriting past the
71 ends of chunks) in code that calls malloc. This malloc
72 guarantees not to modify any memory locations below the base of
73 heap, i.e., static variables, even in the presence of usage
74 errors. The routines additionally detect most improper frees
75 and reallocs. All this holds as long as the static bookkeeping
76 for malloc itself is not corrupted by some other means. This
77 is only one aspect of security -- these checks do not, and
78 cannot, detect all possible programming errors.
80 If FOOTERS is defined nonzero, then each allocated chunk
81 carries an additional check word to verify that it was malloced
82 from its space. These check words are the same within each
83 execution of a program using malloc, but differ across
84 executions, so externally crafted fake chunks cannot be
85 freed. This improves security by rejecting frees/reallocs that
86 could corrupt heap memory, in addition to the checks preventing
87 writes to statics that are always on. This may further improve
88 security at the expense of time and space overhead. (Note that
89 FOOTERS may also be worth using with MSPACES.)
91 By default detected errors cause the program to abort (calling
92 "abort()"). You can override this to instead proceed past
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
94 has no effect, and a malloc that encounters a bad address
95 caused by user overwrites will ignore the bad address by
96 dropping pointers and indices to all known memory. This may
97 be appropriate for programs that should continue if at all
98 possible in the face of programming errors, although they may
99 run out of memory because dropped memory is never reclaimed.
101 If you don't like either of these options, you can define
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103 else. And if you are sure that your program using malloc has
104 no errors or vulnerabilities, you can define INSECURE to 1,
105 which might (or might not) provide a small performance improvement.
107 Thread-safety: NOT thread-safe unless USE_LOCKS defined
108 When USE_LOCKS is defined, each public call to malloc, free,
109 etc is surrounded with either a pthread mutex or a win32
110 spinlock (depending on WIN32). This is not especially fast, and
111 can be a major bottleneck. It is designed only to provide
112 minimal protection in concurrent environments, and to provide a
113 basis for extensions. If you are using malloc in a concurrent
114 program, consider instead using ptmalloc, which is derived from
115 a version of this malloc. (See http://www.malloc.de).
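  If you do want the built-in locking, one way to enable it (a sketch;
  the macro can equally be set in a header included before this file) is
  to define USE_LOCKS on the compile line, typically also linking with
  the pthread library on unix:

    cc -O3 -DUSE_LOCKS=1 -c malloc.c
    cc myprogram.o malloc.o -lpthread -o myprogram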
117 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118 This malloc can use unix sbrk or any emulation (invoked using
119 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121 memory. On most unix systems, it tends to work best if both
122 MORECORE and MMAP are enabled. On Win32, it uses emulations
123 based on VirtualAlloc. It also uses common C library functions
126 Compliance: I believe it is compliant with the Single Unix Specification
127 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
130 * Overview of algorithms
132 This is not the fastest, most space-conserving, most portable, or
133 most tunable malloc ever written. However it is among the fastest
134 while also being among the most space-conserving, portable and
135 tunable. Consistent balance across these factors results in a good
136 general-purpose allocator for malloc-intensive programs.
138 In most ways, this malloc is a best-fit allocator. Generally, it
139 chooses the best-fitting existing chunk for a request, with ties
140 broken in approximately least-recently-used order. (This strategy
141 normally maintains low fragmentation.) However, for requests less
142 than 256bytes, it deviates from best-fit when there is not an
143 exactly fitting available chunk by preferring to use space adjacent
144 to that used for the previous small request, as well as by breaking
145 ties in approximately most-recently-used order. (These enhance
146 locality of series of small allocations.) And for very large requests
147 (>= 256Kb by default), it relies on system memory mapping
148 facilities, if supported. (This helps avoid carrying around and
149 possibly fragmenting memory used only for large chunks.)
151 All operations (except malloc_stats and mallinfo) have execution
152 times that are bounded by a constant factor of the number of bits in
153 a size_t, not counting any clearing in calloc or copying in realloc,
154 or actions surrounding MORECORE and MMAP that have times
155 proportional to the number of non-contiguous regions returned by
156 system allocation routines, which is often just 1.
158 The implementation is not very modular and seriously overuses
159 macros. Perhaps someday all C compilers will do as good a job
160 inlining modular code as can now be done by brute-force expansion,
161 but now, enough of them seem not to.
163 Some compilers issue a lot of warnings about code that is
164 dead/unreachable only on some platforms, and also about intentional
165 uses of negation on unsigned types. All known cases of each can be ignored.
168 For a longer but out of date high-level description, see
169 http://gee.cs.oswego.edu/dl/html/malloc.html
172 If MSPACES is defined, then in addition to malloc, free, etc.,
173 this file also defines mspace_malloc, mspace_free, etc. These
174 are versions of malloc routines that take an "mspace" argument
175 obtained using create_mspace, to control all internal bookkeeping.
176 If ONLY_MSPACES is defined, only these versions are compiled.
177 So if you would like to use this allocator for only some allocations,
178 and your system malloc for others, you can compile with
179 ONLY_MSPACES and then do something like...
180 static mspace mymspace = create_mspace(0,0); // for example
181 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
183 (Note: If you only need one instance of an mspace, you can instead
184 use "USE_DL_PREFIX" to relabel the global malloc.)
186 You can similarly create thread-local allocators by storing
187 mspaces as thread-locals. For example:
188 static __thread mspace tlms = 0;
189 void* tlmalloc(size_t bytes) {
190 if (tlms == 0) tlms = create_mspace(0, 0);
191 return mspace_malloc(tlms, bytes);
}
193 void tlfree(void* mem) { mspace_free(tlms, mem); }
195 Unless FOOTERS is defined, each mspace is completely independent.
196 You cannot allocate from one and free to another (although
197 conformance is only weakly checked, so usage errors are not always
198 caught). If FOOTERS is defined, then each chunk carries around a tag
199 indicating its originating mspace, and frees are directed to their originating spaces.
202 ------------------------- Compile-time options ---------------------------
204 Be careful in setting #define values for numerical constants of type
205 size_t. On some systems, literal values are not automatically extended
206 to size_t precision unless they are explicitly cast.
208 WIN32 default: defined if _WIN32 defined
209 Defining WIN32 sets up defaults for MS environment and compilers.
210 Otherwise defaults are for unix.
212 MALLOC_ALIGNMENT default: (size_t)8
213 Controls the minimum alignment for malloc'ed chunks. It must be a
214 power of two and at least 8, even on machines for which smaller
215 alignments would suffice. It may be defined as larger than this
216 though. Note however that code and data structures are optimized for
217 the case of 8-byte alignment.
219 MSPACES default: 0 (false)
220 If true, compile in support for independent allocation spaces.
221 This is only supported if HAVE_MMAP is true.
223 ONLY_MSPACES default: 0 (false)
224 If true, only compile in mspace versions, not regular versions.
226 USE_LOCKS default: 0 (false)
227 Causes each call to each public routine to be surrounded with
228 pthread or WIN32 mutex lock/unlock. (If set true, this can be
229 overridden on a per-mspace basis for mspace versions.)
FOOTERS default: 0
232 If true, provide extra checking and dispatching by placing
233 information in the footers of allocated chunks. This adds
234 space and time overhead.
INSECURE default: 0
237 If true, omit checks for usage errors and heap space overwrites.
239 USE_DL_PREFIX default: NOT defined
240 Causes compiler to prefix all public routines with the string 'dl'.
241 This can be useful when you only want to use this malloc in one part
242 of a program, using your regular system malloc elsewhere.
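  For example (a sketch), after compiling this file with -DUSE_DL_PREFIX
  you call the prefixed entry points, leaving plain malloc/free bound to
  your system library:

    void* p = dlmalloc(1000);   // served by this allocator
    void* q = malloc(1000);     // served by the system malloc
    dlfree(p);
    free(q);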
244 ABORT default: defined as abort()
245 Defines how to abort on failed checks. On most systems, a failed
246 check cannot die with an "assert" or even print an informative
247 message, because the underlying print routines in turn call malloc,
248 which will fail again. Generally, the best policy is to simply call
249 abort(). It's not very useful to do more than this because many
250 errors due to overwriting will show up as address faults (null, odd
251 addresses etc) rather than malloc-triggered checks, so will also
252 abort. Also, most compilers know that abort() does not return, so
253 can better optimize code conditionally calling it.
255 PROCEED_ON_ERROR default: defined as 0 (false)
256 Controls whether detected bad addresses cause them to be bypassed
257 rather than aborting. If set, detected bad arguments to free and
258 realloc are ignored. And all bookkeeping information is zeroed out
259 upon a detected overwrite of freed heap space, thus losing the
260 ability to ever return it from malloc again, but enabling the
261 application to proceed. If PROCEED_ON_ERROR is defined, the
262 static variable malloc_corruption_error_count is compiled in
263 and can be examined to see if errors have occurred. This option
264 generates slower code than the default abort policy.
266 DEBUG default: NOT defined
267 The DEBUG setting is mainly intended for people trying to modify
268 this code or diagnose problems when porting to new platforms.
269 However, it may also be able to better isolate user errors than just
270 using runtime checks. The assertions in the check routines spell
271 out in more detail the assumptions and invariants underlying the
272 algorithms. The checking is fairly extensive, and will slow down
273 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274 set will attempt to check every non-mmapped allocated and free chunk
275 in the course of computing the summaries.
277 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
278 Debugging assertion failures can be nearly impossible if your
279 version of the assert macro causes malloc to be called, which will
280 lead to a cascade of further failures, blowing the runtime stack.
281 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282 which will usually make debugging easier.
284 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
285 The action to take before "return 0" when malloc is unable to
286 return memory because there is none available.
288 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
289 True if this system supports sbrk or an emulation of it.
291 MORECORE default: sbrk
292 The name of the sbrk-style system routine to call to obtain more
293 memory. See below for guidance on writing custom MORECORE
294 functions. The type of the argument to sbrk/MORECORE varies across
295 systems. It cannot be size_t, because it supports negative
296 arguments, so it is normally the signed type of the same width as
297 size_t (sometimes declared as "intptr_t"). It doesn't much matter
298 though. Internally, we only call it with arguments less than half
299 the max value of a size_t, which should work across all reasonable
300 possibilities, although sometimes generating compiler warnings. See
301 near the end of this file for guidelines for creating a custom version of MORECORE.
304 MORECORE_CONTIGUOUS default: 1 (true)
305 If true, take advantage of fact that consecutive calls to MORECORE
306 with positive arguments always return contiguous increasing
307 addresses. This is true of unix sbrk. It does not hurt too much to
308 set it true anyway, since malloc copes with non-contiguities.
309 Setting it false when definitely non-contiguous saves time
310 and possibly wasted space it would take to discover this though.
312 MORECORE_CANNOT_TRIM default: NOT defined
313 True if MORECORE cannot release space back to the system when given
314 negative arguments. This is generally necessary only if you are
315 using a hand-crafted MORECORE function that cannot handle negative arguments.
318 HAVE_MMAP default: 1 (true)
319 True if this system supports mmap or an emulation of it. If so, and
320 HAVE_MORECORE is not true, MMAP is used for all system
321 allocation. If set and HAVE_MORECORE is true as well, MMAP is
322 primarily used to directly allocate very large blocks. It is also
323 used as a backup strategy in cases where MORECORE fails to provide
324 space from system. Note: A single call to MUNMAP is assumed to be
325 able to unmap memory that may have been allocated using multiple calls
326 to MMAP, so long as they are adjacent.
328 HAVE_MREMAP default: 1 on linux, else 0
329 If true realloc() uses mremap() to re-allocate large blocks and
330 extend or shrink allocation spaces.
332 MMAP_CLEARS default: 1 on unix
333 True if mmap clears memory so calloc doesn't need to. This is true
334 for standard unix mmap using /dev/zero.
336 USE_BUILTIN_FFS default: 0 (i.e., not used)
337 Causes malloc to use the builtin ffs() function to compute indices.
338 Some compilers may recognize and intrinsify ffs to be faster than the
339 supplied C version. Also, the case of x86 using gcc is special-cased
340 to an asm instruction, so is already as fast as it can be, and so
341 this setting has no effect. (On most x86s, the asm version is only
342 slightly faster than the C version.)
344 malloc_getpagesize default: derive from system includes, or 4096.
345 The system page size. To the extent possible, this malloc manages
346 memory from the system in page-size units. This may be (and
347 usually is) a function rather than a constant. This is ignored
348 if WIN32, where page size is determined using getSystemInfo during initialization.
351 USE_DEV_RANDOM default: 0 (i.e., not used)
352 Causes malloc to use /dev/random to initialize secure magic seed for
353 stamping footers. Otherwise, the current time is used.
355 NO_MALLINFO default: 0
356 If defined, don't compile "mallinfo". This can be a simple way
357 of dealing with mismatches between system declarations and your own.
360 MALLINFO_FIELD_TYPE default: size_t
361 The type of the fields in the mallinfo struct. This was originally
362 defined as "int" in SVID etc, but is more usefully defined as
363 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
365 REALLOC_ZERO_BYTES_FREES default: not defined
366 This should be set if a call to realloc with zero bytes should
367 be the same as a call to free. Some people think it should. Otherwise,
368 since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
373 LACKS_STDLIB_H default: NOT defined unless on WIN32
374 Define these if your system does not have these header files.
375 You might need to manually insert some of the declarations they provide.
377 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
378 system_info.dwAllocationGranularity in WIN32, otherwise 64K.
380 Also settable using mallopt(M_GRANULARITY, x)
381 The unit for allocating and deallocating memory from the system. On
382 most systems with contiguous MORECORE, there is no reason to
383 make this more than a page. However, systems with MMAP tend to
384 either require or encourage larger granularities. You can increase
385 this value to prevent system allocation functions from being called so
386 often, especially if they are slow. The value must be at least one
387 page and must be a power of two. Setting to 0 causes initialization
388 to either page size or win32 region size. (Note: In previous
389 versions of malloc, the equivalent of this option was called
392 DEFAULT_TRIM_THRESHOLD default: 2MB
393 Also settable using mallopt(M_TRIM_THRESHOLD, x)
394 The maximum amount of unused top-most memory to keep before
395 releasing via malloc_trim in free(). Automatic trimming is mainly
396 useful in long-lived programs using contiguous MORECORE. Because
397 trimming via sbrk can be slow on some systems, and can sometimes be
398 wasteful (in cases where programs immediately afterward allocate
399 more large chunks) the value should be high enough so that your
400 overall system performance would improve by releasing this much
401 memory. As a rough guide, you might set to a value close to the
402 average size of a process (program) running on your system.
403 Releasing this much memory would allow such a process to run in
404 memory. Generally, it is worth tuning trim thresholds when a
405 program undergoes phases where several large chunks are allocated
406 and released in ways that can reuse each other's storage, perhaps
407 mixed with phases where there are no such chunks at all. The trim
408 value must be greater than page size to have any useful effect. To
409 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410 some people use of mallocing a huge space and then freeing it at
411 program startup, in an attempt to reserve system memory, doesn't
412 have the intended effect under automatic trimming, since that memory
413 will immediately be returned to the system.
415 DEFAULT_MMAP_THRESHOLD default: 256K
416 Also settable using mallopt(M_MMAP_THRESHOLD, x)
417 The request size threshold for using MMAP to directly service a
418 request. Requests of at least this size that cannot be allocated
419 using already-existing space will be serviced via mmap. (If enough
420 normal freed space already exists it is used instead.) Using mmap
421 segregates relatively large chunks of memory so that they can be
422 individually obtained and released from the host system. A request
423 serviced through mmap is never reused by any other request (at least
424 not directly; the system may just so happen to remap successive
425 requests to the same locations). Segregating space in this way has
426 the benefits that: Mmapped space can always be individually released
427 back to the system, which helps keep the system level memory demands
428 of a long-lived program low. Also, mapped memory doesn't become
429 `locked' between other chunks, as can happen with normally allocated
430 chunks, which means that even trimming via malloc_trim would not
431 release them. However, it has the disadvantage that the space
432 cannot be reclaimed, consolidated, and then used to service later
433 requests, as happens with normal chunks. The advantages of mmap
434 nearly always outweigh disadvantages for "large" chunks, but the
435 value of "large" may vary across systems. The default is an
436 empirically derived value that works well in most systems. You can
437 disable mmap by setting to MAX_SIZE_T.
447 #define WIN32_LEAN_AND_MEAN
450 #define HAVE_MORECORE 0
451 #define LACKS_UNISTD_H
452 #define LACKS_SYS_PARAM_H
453 #define LACKS_SYS_MMAN_H
454 #define LACKS_STRING_H
455 #define LACKS_STRINGS_H
456 #define LACKS_SYS_TYPES_H
457 #define LACKS_ERRNO_H
458 #define MALLOC_FAILURE_ACTION
459 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
462 #if defined(DARWIN) || defined(_DARWIN)
463 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
464 #ifndef HAVE_MORECORE
465 #define HAVE_MORECORE 0
467 #endif /* HAVE_MORECORE */
470 #ifndef LACKS_SYS_TYPES_H
471 #include <sys/types.h> /* For size_t */
472 #endif /* LACKS_SYS_TYPES_H */
474 #include "cygmalloc.h"
475 #endif /* __CYGWIN__ */
477 /* The maximum possible size_t value has all bits set */
478 #define MAX_SIZE_T (~(size_t)0)
481 #define ONLY_MSPACES 0
482 #endif /* ONLY_MSPACES */
486 #else /* ONLY_MSPACES */
488 #endif /* ONLY_MSPACES */
490 #ifndef MALLOC_ALIGNMENT
491 #define MALLOC_ALIGNMENT ((size_t)8U)
492 #endif /* MALLOC_ALIGNMENT */
497 #define ABORT abort()
499 #ifndef ABORT_ON_ASSERT_FAILURE
500 #define ABORT_ON_ASSERT_FAILURE 1
501 #endif /* ABORT_ON_ASSERT_FAILURE */
502 #ifndef PROCEED_ON_ERROR
503 #define PROCEED_ON_ERROR 0
504 #endif /* PROCEED_ON_ERROR */
507 #endif /* USE_LOCKS */
510 #endif /* INSECURE */
513 #endif /* HAVE_MMAP */
515 #define MMAP_CLEARS 1
516 #endif /* MMAP_CLEARS */
519 #define HAVE_MREMAP 1
521 #define HAVE_MREMAP 0
523 #endif /* HAVE_MREMAP */
524 #ifndef MALLOC_FAILURE_ACTION
525 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
526 #endif /* MALLOC_FAILURE_ACTION */
527 #ifndef HAVE_MORECORE
529 #define HAVE_MORECORE 0
530 #else /* ONLY_MSPACES */
531 #define HAVE_MORECORE 1
532 #endif /* ONLY_MSPACES */
533 #endif /* HAVE_MORECORE */
535 #define MORECORE_CONTIGUOUS 0
536 #else /* !HAVE_MORECORE */
538 #define MORECORE sbrk
539 #endif /* MORECORE */
540 #ifndef MORECORE_CONTIGUOUS
541 #define MORECORE_CONTIGUOUS 1
542 #endif /* MORECORE_CONTIGUOUS */
543 #endif /* HAVE_MORECORE */
544 #ifndef DEFAULT_GRANULARITY
545 #if MORECORE_CONTIGUOUS
546 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
547 #else /* MORECORE_CONTIGUOUS */
548 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
549 #endif /* MORECORE_CONTIGUOUS */
550 #endif /* DEFAULT_GRANULARITY */
551 #ifndef DEFAULT_TRIM_THRESHOLD
552 #ifndef MORECORE_CANNOT_TRIM
553 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
554 #else /* MORECORE_CANNOT_TRIM */
555 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
556 #endif /* MORECORE_CANNOT_TRIM */
557 #endif /* DEFAULT_TRIM_THRESHOLD */
558 #ifndef DEFAULT_MMAP_THRESHOLD
560 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
561 #else /* HAVE_MMAP */
562 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
563 #endif /* HAVE_MMAP */
564 #endif /* DEFAULT_MMAP_THRESHOLD */
565 #ifndef USE_BUILTIN_FFS
566 #define USE_BUILTIN_FFS 0
567 #endif /* USE_BUILTIN_FFS */
568 #ifndef USE_DEV_RANDOM
569 #define USE_DEV_RANDOM 0
570 #endif /* USE_DEV_RANDOM */
572 #define NO_MALLINFO 0
573 #endif /* NO_MALLINFO */
574 #ifndef MALLINFO_FIELD_TYPE
575 #define MALLINFO_FIELD_TYPE size_t
576 #endif /* MALLINFO_FIELD_TYPE */
579 mallopt tuning options. SVID/XPG defines four standard parameter
580 numbers for mallopt, normally defined in malloc.h. None of these
581 are used in this malloc, so setting them has no effect. But this
582 malloc does support the following options.
585 #define M_TRIM_THRESHOLD (-1)
586 #define M_GRANULARITY (-2)
587 #define M_MMAP_THRESHOLD (-3)
589 /* ------------------------ Mallinfo declarations ------------------------ */
593 This version of malloc supports the standard SVID/XPG mallinfo
594 routine that returns a struct containing usage properties and
595 statistics. It should work on any system that has a
596 /usr/include/malloc.h defining struct mallinfo. The main
597 declaration needed is the mallinfo struct that is returned (by-copy)
598 by mallinfo(). The mallinfo struct contains a bunch of fields that
599 are not even meaningful in this version of malloc. These fields
600 are instead filled by mallinfo() with other numbers that might be of
603 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
604 /usr/include/malloc.h file that includes a declaration of struct
605 mallinfo. If so, it is included; else a compliant version is
606 declared below. These must be precisely the same for mallinfo() to
607 work. The original SVID version of this struct, defined on most
608 systems with mallinfo, declares all fields as ints. But some others
609 define them as unsigned long. If your system defines the fields using a
610 type of different width than listed here, you MUST #include your
611 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
614 /* #define HAVE_USR_INCLUDE_MALLOC_H */
616 #ifdef HAVE_USR_INCLUDE_MALLOC_H
617 #include "/usr/include/malloc.h"
618 #else /* HAVE_USR_INCLUDE_MALLOC_H */
struct mallinfo {
621 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
622 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
623 MALLINFO_FIELD_TYPE smblks; /* always 0 */
624 MALLINFO_FIELD_TYPE hblks; /* always 0 */
625 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
626 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
627 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
628 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
629 MALLINFO_FIELD_TYPE fordblks; /* total free space */
630 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
633 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
634 #endif /* NO_MALLINFO */
638 #endif /* __cplusplus */
642 /* ------------------- Declarations of public routines ------------------- */
644 #ifndef USE_DL_PREFIX
645 #define dlcalloc calloc
#define dlfree free
647 #define dlmalloc malloc
648 #define dlmemalign memalign
649 #define dlrealloc realloc
650 #define dlvalloc valloc
651 #define dlpvalloc pvalloc
652 #define dlmallinfo mallinfo
653 #define dlmallopt mallopt
654 #define dlmalloc_trim malloc_trim
655 #define dlmalloc_stats malloc_stats
656 #define dlmalloc_usable_size malloc_usable_size
657 #define dlmalloc_footprint malloc_footprint
658 #define dlmalloc_max_footprint malloc_max_footprint
659 #define dlindependent_calloc independent_calloc
660 #define dlindependent_comalloc independent_comalloc
661 #endif /* USE_DL_PREFIX */
malloc(size_t n)
666 Returns a pointer to a newly allocated chunk of at least n bytes, or
667 null if no space is available, in which case errno is set to ENOMEM
on ANSI C systems.
670 If n is zero, malloc returns a minimum-sized chunk. (The minimum
671 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
672 systems.) Note that size_t is an unsigned type, so calls with
673 arguments that would be negative if signed are interpreted as
674 requests for huge amounts of space, which will often fail. The
675 maximum supported value of n differs across systems, but is in all
676 cases less than the maximum representable value of a size_t.
678 void* dlmalloc(size_t);
free(void* p)
682 Releases the chunk of memory pointed to by p, that had been previously
683 allocated using malloc or a related routine such as realloc.
684 It has no effect if p is null. If p was not malloced or already
685 freed, free(p) will by default cause the current program to abort.
void dlfree(void*);
690 calloc(size_t n_elements, size_t element_size);
691 Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
694 void* dlcalloc(size_t, size_t);
697 realloc(void* p, size_t n)
698 Returns a pointer to a chunk of size n that contains the same data
699 as does chunk p up to the minimum of (n, p's size) bytes, or null
700 if no space is available.
702 The returned pointer may or may not be the same as p. The algorithm
703 prefers extending p in most cases when possible, otherwise it
704 employs the equivalent of a malloc-copy-free sequence.
706 If p is null, realloc is equivalent to malloc.
708 If space is not available, realloc returns null, errno is set (if on
709 ANSI) and p is NOT freed.
711 If n is for fewer bytes than already held by p, the newly unused
712 space is lopped off and freed if possible. realloc with a size
713 argument of zero (re)allocates a minimum-sized chunk.
715 The old unix realloc convention of allowing the last-free'd chunk
716 to be used as an argument to realloc is not supported.
719 void* dlrealloc(void*, size_t);
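  A common calling idiom (a sketch; the error handler name is just a
  placeholder): because a failed realloc returns null without freeing p,
  store the result in a temporary before overwriting the original pointer.

    void* np = realloc(p, n);
    if (np != 0)
      p = np;                    // success; p now refers to the new chunk
    else
      handle_out_of_memory();    // hypothetical handler; p is still valid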
722 memalign(size_t alignment, size_t n);
723 Returns a pointer to a newly allocated chunk of n bytes, aligned
724 in accord with the alignment argument.
726 The alignment argument should be a power of two. If the argument is
727 not a power of two, the nearest greater power is used.
728 8-byte alignment is guaranteed by normal malloc calls, so don't
729 bother calling memalign with an argument of 8 or less.
731 Overreliance on memalign is a sure way to fragment space.
733 void* dlmemalign(size_t, size_t);
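  For example (a sketch), to obtain a buffer whose address is a multiple
  of 64:

    void* buf = memalign(64, 4096);
    if (buf != 0) {
      // ... use buf; its address is a multiple of 64 ...
      free(buf);                 // memaligned chunks are freed normally
    }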
737 Equivalent to memalign(pagesize, n), where pagesize is the page
738 size of the system. If the pagesize is unknown, 4096 is used.
740 void* dlvalloc(size_t);
743 mallopt(int parameter_number, int parameter_value)
744 Sets tunable parameters. The format is to provide a
745 (parameter-number, parameter-value) pair. mallopt then sets the
746 corresponding parameter to the argument value if it can (i.e., so
747 long as the value is meaningful), and returns 1 if successful else
748 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
749 normally defined in malloc.h. None of these are used in this malloc,
750 so setting them has no effect. But this malloc also supports other
751 options in mallopt. See below for details. Briefly, supported
752 parameters are as follows (listed defaults are for "typical" configurations).
755 Symbol param # default allowed param values
756 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
757 M_GRANULARITY -2 page size any power of 2 >= page size
758 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
760 int dlmallopt(int, int);
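  For example (a sketch using the parameters tabulated above):

    mallopt(M_TRIM_THRESHOLD, 1024*1024);   // trim when over 1MB sits unused at the top
    mallopt(M_MMAP_THRESHOLD, 512*1024);    // mmap requests of 512KB and larger
    mallopt(M_GRANULARITY,    64*1024);     // obtain system memory in 64KB units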
764 Returns the number of bytes obtained from the system. The total
765 number of bytes allocated by malloc, realloc etc., is less than this
766 value. Unlike mallinfo, this function returns only a precomputed
767 result, so can be called frequently to monitor memory consumption.
768 Even if locks are otherwise defined, this function does not use them,
769 so results might not be up to date.
771 size_t dlmalloc_footprint(void);
774 malloc_max_footprint();
775 Returns the maximum number of bytes obtained from the system. This
776 value will be greater than current footprint if deallocated space
777 has been reclaimed by the system. The peak number of bytes allocated
778 by malloc, realloc etc., is less than this value. Unlike mallinfo,
779 this function returns only a precomputed result, so can be called
780 frequently to monitor memory consumption. Even if locks are
781 otherwise defined, this function does not use them, so results might not be up to date.
784 size_t dlmalloc_max_footprint(void);
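  A monitoring sketch using both footprint routines:

    size_t cur  = malloc_footprint();
    size_t peak = malloc_max_footprint();
    fprintf(stderr, "heap: %lu bytes from system, %lu at peak\n",
            (unsigned long)cur, (unsigned long)peak);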
789 Returns (by copy) a struct containing various summary statistics:
791 arena: current total non-mmapped bytes allocated from system
792 ordblks: the number of free chunks
794 hblks: current number of mmapped regions
795 hblkhd: total bytes held in mmapped regions
796 usmblks: the maximum total allocated space. This will be greater
797 than current total if trimming has occurred.
799 uordblks: current total allocated space (normal or mmapped)
800 fordblks: total free space
801 keepcost: the maximum number of bytes that could ideally be released
802 back to system via malloc_trim. ("ideally" means that
803 it ignores page restrictions etc.)
805 Because these fields are ints, but internal bookkeeping may
806 be kept as longs, the reported values may wrap around zero and thus be inaccurate.
809 struct mallinfo dlmallinfo(void);
810 #endif /* NO_MALLINFO */
813 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
815 independent_calloc is similar to calloc, but instead of returning a
816 single cleared space, it returns an array of pointers to n_elements
817 independent elements that can hold contents of size elem_size, each
818 of which starts out cleared, and can be independently freed,
819 realloc'ed etc. The elements are guaranteed to be adjacently
820 allocated (this is not guaranteed to occur with multiple callocs or
821 mallocs), which may also improve cache locality in some applications.
824 The "chunks" argument is optional (i.e., may be null, which is
825 probably the most typical usage). If it is null, the returned array
826 is itself dynamically allocated and should also be freed when it is
827 no longer needed. Otherwise, the chunks array must be of at least
828 n_elements in length. It is filled in with the pointers to the chunks.
831 In either case, independent_calloc returns this pointer array, or
832 null if the allocation failed. If n_elements is zero and "chunks"
833 is null, it returns a chunk representing an array with zero elements
834 (which should be freed if not wanted).
836 Each element must be individually freed when it is no longer
837 needed. If you'd like to instead be able to free all at once, you
838 should instead use regular calloc and assign pointers into this
839 space to represent elements. (In this case though, you cannot
840 independently free elements.)
842 independent_calloc simplifies and speeds up implementations of many
843 kinds of pools. It may also be useful when constructing large data
844 structures that initially have a fixed number of fixed-sized nodes,
845 but the number is not known at compile time, and some of the nodes
846 may later need to be freed. For example:
848 struct Node { int item; struct Node* next; };
850 struct Node* build_list() {
struct Node** pool;
852 int n = read_number_of_nodes_needed();
853 if (n <= 0) return 0;
854 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
855 if (pool == 0) die();
856 // organize into a linked list...
857 struct Node* first = pool[0];
858 for (int i = 0; i < n-1; ++i)
859 pool[i]->next = pool[i+1];
860 free(pool); // Can now free the array (or not, if it is needed later)
return first;
}
864 void** dlindependent_calloc(size_t, size_t, void**);
867 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
869 independent_comalloc allocates, all at once, a set of n_elements
870 chunks with sizes indicated in the "sizes" array. It returns
871 an array of pointers to these elements, each of which can be
872 independently freed, realloc'ed etc. The elements are guaranteed to
873 be adjacently allocated (this is not guaranteed to occur with
874 multiple callocs or mallocs), which may also improve cache locality
875 in some applications.
877 The "chunks" argument is optional (i.e., may be null). If it is null
878 the returned array is itself dynamically allocated and should also
879 be freed when it is no longer needed. Otherwise, the chunks array
880 must be of at least n_elements in length. It is filled in with the
881 pointers to the chunks.
883 In either case, independent_comalloc returns this pointer array, or
884 null if the allocation failed. If n_elements is zero and chunks is
885 null, it returns a chunk representing an array with zero elements
886 (which should be freed if not wanted).
888 Each element must be individually freed when it is no longer
889 needed. If you'd like to instead be able to free all at once, you
890 should instead use a single regular malloc, and assign pointers at
891 particular offsets in the aggregate space. (In this case though, you
892 cannot independently free elements.)
894 independent_comalloc differs from independent_calloc in that each
895 element may have a different size, and also that it does not
896 automatically clear elements.
898 independent_comalloc can be used to speed up allocation in cases
899 where several structs or objects must always be allocated at the
900 same time. For example:
struct Head { ... };
struct Foot { ... };
905 void send_message(char* msg) {
906 int msglen = strlen(msg);
907 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
void* chunks[3];
909 if (independent_comalloc(3, sizes, chunks) == 0)
die();
911 struct Head* head = (struct Head*)(chunks[0]);
912 char* body = (char*)(chunks[1]);
913 struct Foot* foot = (struct Foot*)(chunks[2]);
// ... fill in and transmit head, body, and foot ...
}
917 In general though, independent_comalloc is worth using only for
918 larger values of n_elements. For small values, you probably won't
919 detect enough difference from series of malloc calls to bother.
921 Overuse of independent_comalloc can increase overall memory usage,
922 since it cannot reuse existing noncontiguous small chunks that
923 might be available for some of the elements.
925 void** dlindependent_comalloc(size_t, size_t*, void**);
930 Equivalent to valloc(minimum-page-that-holds(n)), that is,
931 round up n to nearest pagesize.
933 void* dlpvalloc(size_t);
936 malloc_trim(size_t pad);
938 If possible, gives memory back to the system (via negative arguments
939 to sbrk) if there is unused memory at the `high' end of the malloc
940 pool or in unused MMAP segments. You can call this after freeing
941 large blocks of memory to potentially reduce the system-level memory
942 requirements of a program. However, it cannot guarantee to reduce
943 memory. Under some allocation patterns, some large free blocks of
944 memory will be locked between two used chunks, so they cannot be
945 given back to the system.
947 The `pad' argument to malloc_trim represents the amount of free
948 trailing space to leave untrimmed. If this argument is zero, only
949 the minimum amount of memory to maintain internal data structures
950 will be left. Non-zero arguments can be supplied to maintain enough
951 trailing space to service future expected allocations without having
952 to re-obtain memory from the system.
954 Malloc_trim returns 1 if it actually released any memory, else 0.
956 int dlmalloc_trim(size_t);
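  For example (a sketch; big_buffer is a placeholder for some large,
  no-longer-needed allocation):

    free(big_buffer);
    if (malloc_trim((size_t)64 * 1024))     // keep about 64KB of slack
      fprintf(stderr, "returned unused memory to the system\n");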
959 malloc_usable_size(void* p);
961 Returns the number of bytes you can actually use in
962 an allocated chunk, which may be more than you requested (although
963 often not) due to alignment and minimum size constraints.
964 You can use this many bytes without worrying about
965 overwriting other allocated objects. This is not a particularly great
966 programming practice. malloc_usable_size can be more useful in
967 debugging and assertions, for example:
p = malloc(n);
970 assert(malloc_usable_size(p) >= 256);
972 size_t dlmalloc_usable_size(void*);
976 Prints on stderr the amount of space obtained from the system (both
977 via sbrk and mmap), the maximum amount (which may be more than
978 current if malloc_trim and/or munmap got called), and the current
979 number of bytes allocated via malloc (or realloc, etc) but not yet
980 freed. Note that this is the number of bytes allocated, not the
981 number requested. It will be larger than the number requested
982 because of alignment and bookkeeping overhead. Because it includes
983 alignment wastage as being in use, this figure may be greater than
984 zero even when no user-level chunks are allocated.
986 The reported current and maximum system memory can be inaccurate if
987 a program makes other calls to system memory allocation functions
988 (normally sbrk) outside of malloc.
990 malloc_stats prints only the most commonly interesting statistics.
991 More information can be obtained by calling mallinfo.
993 void dlmalloc_stats(void);
995 #endif /* ONLY_MSPACES */
1000 mspace is an opaque type representing an independent
1001 region of space that supports mspace_malloc, etc.
1003 typedef void* mspace;
1006 create_mspace creates and returns a new independent space with the
1007 given initial capacity, or, if 0, the default granularity size. It
1008 returns null if there is no system memory available to create the
1009 space. If argument locked is non-zero, the space uses a separate
1010 lock to control access. The capacity of the space will grow
1011 dynamically as needed to service mspace_malloc requests. You can
1012 control the sizes of incremental increases of this space by
1013 compiling with a different DEFAULT_GRANULARITY or dynamically
1014 setting with mallopt(M_GRANULARITY, value).
1016 mspace create_mspace(size_t capacity, int locked);
1019 destroy_mspace destroys the given space, and attempts to return all
1020 of its memory back to the system, returning the total number of
1021 bytes freed. After destruction, the results of access to all memory
1022 used by the space become undefined.
1024 size_t destroy_mspace(mspace msp);
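  A typical lifecycle sketch (requires compiling with MSPACES enabled):

    mspace ms = create_mspace(0, 0);        // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      // ... use p ...
      mspace_free(ms, p);                   // optional if the space is destroyed next
      destroy_mspace(ms);                   // returns all of the space's memory
    }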
1027 create_mspace_with_base uses the memory supplied as the initial base
1028 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1029 space is used for bookkeeping, so the capacity must be at least this
1030 large. (Otherwise 0 is returned.) When this initial space is
1031 exhausted, additional memory will be obtained from the system.
1032 Destroying this space will deallocate all additionally allocated
1033 space (if possible) but not the initial base.
1035 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
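  For example (a sketch), carving a small allocator out of a statically
  allocated buffer:

    static char arena[1 << 20];             // 1MB; must exceed the bookkeeping overhead
    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    void* p = (ms != 0) ? mspace_malloc(ms, 1000) : 0;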
1038 mspace_malloc behaves as malloc, but operates within the given space.
1041 void* mspace_malloc(mspace msp, size_t bytes);
1044 mspace_free behaves as free, but operates within the given space.
1047 If compiled with FOOTERS==1, mspace_free is not actually needed.
1048 free may be called instead of mspace_free because freed chunks from
1049 any space are handled by their originating spaces.
1051 void mspace_free(mspace msp, void* mem);
1054 mspace_realloc behaves as realloc, but operates within the given space.
1057 If compiled with FOOTERS==1, mspace_realloc is not actually
1058 needed. realloc may be called instead of mspace_realloc because
1059 realloced chunks from any space are handled by their originating spaces.
1062 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1065 mspace_calloc behaves as calloc, but operates within the given space.
1068 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1071 mspace_memalign behaves as memalign, but operates within the given space.
1074 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1077 mspace_independent_calloc behaves as independent_calloc, but
1078 operates within the given space.
1080 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1081 size_t elem_size, void* chunks[]);
1084 mspace_independent_comalloc behaves as independent_comalloc, but
1085 operates within the given space.
1087 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1088 size_t sizes[], void* chunks[]);
1091 mspace_footprint() returns the number of bytes obtained from the
1092 system for this space.
1094 size_t mspace_footprint(mspace msp);
1097 mspace_max_footprint() returns the peak number of bytes obtained from the
1098 system for this space.
1100 size_t mspace_max_footprint(mspace msp);
1105 mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
1108 struct mallinfo mspace_mallinfo(mspace msp);
1109 #endif /* NO_MALLINFO */
1112 mspace_malloc_stats behaves as malloc_stats, but reports
1113 properties of the given space.
1115 void mspace_malloc_stats(mspace msp);
1118 mspace_trim behaves as malloc_trim, but
1119 operates within the given space.
1121 int mspace_trim(mspace msp, size_t pad);
1124 An alias for mallopt.
1126 int mspace_mallopt(int, int);
1128 #endif /* MSPACES */
1131 }; /* end of extern "C" */
1132 #endif /* __cplusplus */
1135 ========================================================================
1136 To make a fully customizable malloc.h header file, cut everything
1137 above this line, put into file malloc.h, edit to suit, and #include it
1138 on the next line, as well as in programs that use this malloc.
1139 ========================================================================
1142 /* #include "malloc.h" */
1144 /*------------------------------ internal #includes ---------------------- */
1147 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1150 #include <stdio.h> /* for printing in malloc_stats */
1152 #ifndef LACKS_ERRNO_H
1153 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1154 #endif /* LACKS_ERRNO_H */
#if FOOTERS
1156 #include <time.h> /* for magic initialization */
1157 #endif /* FOOTERS */
1158 #ifndef LACKS_STDLIB_H
1159 #include <stdlib.h> /* for abort() */
1160 #endif /* LACKS_STDLIB_H */
1162 #if ABORT_ON_ASSERT_FAILURE
1163 #define assert(x) if(!(x)) ABORT
1164 #else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
1166 #endif /* ABORT_ON_ASSERT_FAILURE */
1170 #ifndef LACKS_STRING_H
1171 #include <string.h> /* for memset etc */
1172 #endif /* LACKS_STRING_H */
1174 #ifndef LACKS_STRINGS_H
1175 #include <strings.h> /* for ffs */
1176 #endif /* LACKS_STRINGS_H */
1177 #endif /* USE_BUILTIN_FFS */
1179 #ifndef LACKS_SYS_MMAN_H
1180 #include <sys/mman.h> /* for mmap */
1181 #endif /* LACKS_SYS_MMAN_H */
1182 #ifndef LACKS_FCNTL_H
#include <fcntl.h> /* for open */
1184 #endif /* LACKS_FCNTL_H */
1185 #endif /* HAVE_MMAP */
1187 #ifndef LACKS_UNISTD_H
1188 #include <unistd.h> /* for sbrk */
1189 #else /* LACKS_UNISTD_H */
1190 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1191 extern void* sbrk(ptrdiff_t);
1192 #endif /* FreeBSD etc */
1193 #endif /* LACKS_UNISTD_H */
1194 #endif /* HAVE_MMAP */
1197 #ifndef malloc_getpagesize
1198 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1199 # ifndef _SC_PAGE_SIZE
1200 # define _SC_PAGE_SIZE _SC_PAGESIZE
1203 # ifdef _SC_PAGE_SIZE
1204 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1206 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1207 extern size_t getpagesize();
1208 # define malloc_getpagesize getpagesize()
1210 # ifdef WIN32 /* use supplied emulation of getpagesize */
1211 # define malloc_getpagesize getpagesize()
1213 # ifndef LACKS_SYS_PARAM_H
1214 # include <sys/param.h>
1216 # ifdef EXEC_PAGESIZE
1217 # define malloc_getpagesize EXEC_PAGESIZE
1221 # define malloc_getpagesize NBPG
1223 # define malloc_getpagesize (NBPG * CLSIZE)
1227 # define malloc_getpagesize NBPC
1230 # define malloc_getpagesize PAGESIZE
1231 # else /* just guess */
1232 # define malloc_getpagesize ((size_t)4096U)
1243 /* ------------------- size_t and alignment properties -------------------- */
1245 /* The byte and bit size of a size_t */
1246 #define SIZE_T_SIZE (sizeof(size_t))
1247 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1249 /* Some constants coerced to size_t */
1250 /* Annoying but necessary to avoid errors on some platforms */
1251 #define SIZE_T_ZERO ((size_t)0)
1252 #define SIZE_T_ONE ((size_t)1)
1253 #define SIZE_T_TWO ((size_t)2)
1254 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1255 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1256 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1257 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1259 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1260 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1262 /* True if address a has acceptable alignment */
1263 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1265 /* the number of bytes to offset an address to align it */
1266 #define align_offset(A)\
1267 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1268 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
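/*
  A worked example of the arithmetic above (assuming the default
  MALLOC_ALIGNMENT of 8, so CHUNK_ALIGN_MASK is 7): if A ends in 0xc,
  then ((size_t)A & CHUNK_ALIGN_MASK) is 4, and align_offset(A) is
  (8 - 4) & 7 == 4, so adding 4 bytes to A produces an 8-byte-aligned
  address. If A is already aligned, the offset is 0.
*/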
1270 /* -------------------------- MMAP preliminaries ------------------------- */
1273 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1274 checks to fail so compiler optimizer can delete code rather than
1275 using so many "#if"s.
1279 /* MORECORE and MMAP must return MFAIL on failure */
1280 #define MFAIL ((void*)(MAX_SIZE_T))
1281 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1284 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1285 #define USE_MMAP_BIT (SIZE_T_ZERO)
1286 #define CALL_MMAP(s) MFAIL
1287 #define CALL_MUNMAP(a, s) (-1)
1288 #define DIRECT_MMAP(s) MFAIL
1290 #else /* HAVE_MMAP */
1291 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1292 #define USE_MMAP_BIT (SIZE_T_ONE)
1295 #define CALL_MUNMAP(a, s) munmap((a), (s))
1296 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1297 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1298 #define MAP_ANONYMOUS MAP_ANON
1299 #endif /* MAP_ANON */
1300 #ifdef MAP_ANONYMOUS
1301 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1302 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1303 #else /* MAP_ANONYMOUS */
1305 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1306 is unlikely to be needed, but is supplied just in case.
1308 #define MMAP_FLAGS (MAP_PRIVATE)
1309 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1310 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1311 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1312 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1313 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1314 #endif /* MAP_ANONYMOUS */
1316 #define DIRECT_MMAP(s) CALL_MMAP(s)
1319 /* Win32 MMAP via VirtualAlloc */
1320 static void* win32mmap(size_t size) {
1321 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1322 return (ptr != 0)? ptr: MFAIL;
}
1325 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1326 static void* win32direct_mmap(size_t size) {
1327 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
PAGE_READWRITE);
1329 return (ptr != 0)? ptr: MFAIL;
}
1332 /* This function supports releasing coalesced segments */
1333 static int win32munmap(void* ptr, size_t size) {
1334 MEMORY_BASIC_INFORMATION minfo;
char* cptr = (char*)ptr;
while (size) {
1337 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
return -1;
1339 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1340 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
return -1;
1342 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
return -1;
1344 cptr += minfo.RegionSize;
1345 size -= minfo.RegionSize;
}
return 0;
}
1350 #define CALL_MMAP(s) win32mmap(s)
1351 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1352 #define DIRECT_MMAP(s) win32direct_mmap(s)
1354 #endif /* HAVE_MMAP */
1356 #if HAVE_MMAP && HAVE_MREMAP
1357 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1358 #else /* HAVE_MMAP && HAVE_MREMAP */
1359 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1360 #endif /* HAVE_MMAP && HAVE_MREMAP */
1363 #define CALL_MORECORE(S) MORECORE(S)
1364 #else /* HAVE_MORECORE */
1365 #define CALL_MORECORE(S) MFAIL
1366 #endif /* HAVE_MORECORE */
1368 /* mstate bit set if contiguous morecore disabled or failed */
1369 #define USE_NONCONTIGUOUS_BIT (4U)
1371 /* segment bit set in create_mspace_with_base */
1372 #define EXTERN_BIT (8U)
1375 /* --------------------------- Lock preliminaries ------------------------ */
1380 When locks are defined, there are up to two global locks:
1382 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1383 MORECORE. In many cases sys_alloc requires two calls that should
1384 not be interleaved with calls by other threads. This does not
1385 protect against direct calls to MORECORE by other threads not
1386 using this lock, so there is still code to cope as best we can on
1389 * magic_init_mutex ensures that mparams.magic and other
1390 unique mparams values are initialized only once.
1394 /* By default use posix locks */
1395 #include <pthread.h>
1396 #define MLOCK_T pthread_mutex_t
1397 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1398 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1399 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1402 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1403 #endif /* HAVE_MORECORE */
1405 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1409 Because lock-protected regions have bounded times, and there
1410 are no recursive lock calls, we can use simple spinlocks.
1413 #define MLOCK_T long
1414 static int win32_acquire_lock (MLOCK_T *sl) {
for (;;) {
1416 #ifdef InterlockedCompareExchangePointer
1417 if (!InterlockedCompareExchange(sl, 1, 0))
return 0;
1419 #else /* Use older void* version */
1420 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
return 0;
1422 #endif /* InterlockedCompareExchangePointer */
Sleep (0);
}
}
1427 static void win32_release_lock (MLOCK_T *sl) {
1428 InterlockedExchange (sl, 0);
}
1431 #define INITIAL_LOCK(l) *(l)=0
1432 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1433 #define RELEASE_LOCK(l) win32_release_lock(l)
1435 static MLOCK_T morecore_mutex;
1436 #endif /* HAVE_MORECORE */
1437 static MLOCK_T magic_init_mutex;
1440 #define USE_LOCK_BIT (2U)
1441 #else /* USE_LOCKS */
1442 #define USE_LOCK_BIT (0U)
1443 #define INITIAL_LOCK(l)
1444 #endif /* USE_LOCKS */
1446 #if USE_LOCKS && HAVE_MORECORE
1447 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1448 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1449 #else /* USE_LOCKS && HAVE_MORECORE */
1450 #define ACQUIRE_MORECORE_LOCK()
1451 #define RELEASE_MORECORE_LOCK()
1452 #endif /* USE_LOCKS && HAVE_MORECORE */
1455 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1456 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1457 #else /* USE_LOCKS */
1458 #define ACQUIRE_MAGIC_INIT_LOCK()
1459 #define RELEASE_MAGIC_INIT_LOCK()
1460 #endif /* USE_LOCKS */
1463 /* ----------------------- Chunk representations ------------------------ */
1466 (The following includes lightly edited explanations by Colin Plumb.)
1468 The malloc_chunk declaration below is misleading (but accurate and
1469 necessary). It declares a "view" into memory allowing access to
1470 necessary fields at known offsets from a given base.
1472 Chunks of memory are maintained using a `boundary tag' method as
1473 originally described by Knuth. (See the paper by Paul Wilson
1474 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1475 techniques.) Sizes of free chunks are stored both in the front of
1476 each chunk and at the end. This makes consolidating fragmented
1477 chunks into bigger chunks fast. The head fields also hold bits
1478 representing whether chunks are free or in use.
1480 Here are some pictures to make it clearer. They are "exploded" to
1481 show that the state of a chunk can be thought of as extending from
1482 the high 31 bits of the head field of its header through the
1483 prev_foot and PINUSE_BIT bit of the following chunk header.
1485 A chunk that's in use looks like:
1487 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1488 | Size of previous chunk (if P = 1) |
1489 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1490 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1491 | Size of this chunk 1| +-+
1492 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1498 +- size - sizeof(size_t) available payload bytes -+
1502 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1503 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1504 | Size of next chunk (may or may not be in use) | +-+
1505 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1507 And if it's free, it looks like this:
1510 | User payload (must be in use, or we would have merged!) |
1511 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1512 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1513 | Size of this chunk 0| +-+
1514 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1516 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1518 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1520 +- size - sizeof(struct chunk) unused bytes -+
1522 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1523 | Size of this chunk |
1524 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1525 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1526 | Size of next chunk (must be in use, or we would have merged)| +-+
1527 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1531 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1534 Note that since we always merge adjacent free chunks, the chunks
1535 adjacent to a free chunk must be in use.
1537 Given a pointer to a chunk (which can be derived trivially from the
1538 payload pointer) we can, in O(1) time, find out whether the adjacent
1539 chunks are free, and if so, unlink them from the lists that they
1540 are on and merge them with the current chunk.
1542 Chunks always begin on even word boundaries, so the mem portion
1543 (which is returned to the user) is also on an even word boundary, and
1544 thus at least double-word aligned.
1546 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1547 chunk size (which is always a multiple of two words), is an in-use
1548 bit for the *previous* chunk. If that bit is *clear*, then the
1549 word before the current chunk size contains the previous chunk
1550 size, and can be used to find the front of the previous chunk.
1551 The very first chunk allocated always has this bit set, preventing
1552 access to non-existent (or non-owned) memory. If pinuse is set for
1553 any given chunk, then you CANNOT determine the size of the
1554 previous chunk, and might even get a memory addressing fault when
1557 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1558 the chunk size, redundantly records whether the current chunk is
1559 inuse. This redundancy enables usage checks within free and realloc,
1560 and reduces indirection when freeing and consolidating chunks.
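  As a simplified illustration (this is not the actual consolidation
  code, which appears further below), backward merging in free() needs
  only these two bits plus the prev_foot word:

      if (!pinuse(p)) {
        size_t prevsize = p->prev_foot;
        mchunkptr prev = chunk_minus_offset(p, prevsize);
        ... unlink prev from its bin and treat the combined
            prevsize + chunksize(p) bytes as one free chunk ...
      }

  Reading prev_foot here is safe precisely because it is attempted only
  when the pinuse bit of p is clear.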
1562 Each freshly allocated chunk must have both cinuse and pinuse set.
1563 That is, each allocated chunk borders either a previously allocated
1564 and still in-use chunk, or the base of its memory arena. This is
1565 ensured by making all allocations from the `lowest' part of any
1566 found chunk. Further, no free chunk physically borders another one,
1567 so each free chunk is known to be preceded and followed by either
1568 inuse chunks or the ends of memory.
1570 Note that the `foot' of the current chunk is actually represented
1571 as the prev_foot of the NEXT chunk. This makes it easier to
1572 deal with alignments etc but can be very confusing when trying
1573 to extend or adapt this code.
1575 The exceptions to all this are
1577 1. The special chunk `top' is the top-most available chunk (i.e.,
1578 the one bordering the end of available memory). It is treated
1579 specially. Top is never included in any bin, is used only if
1580 no other chunk is available, and is released back to the
1581 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1582 the top chunk is treated as larger (and thus less well
1583 fitting) than any other available chunk. The top chunk
1584 doesn't update its trailing size field since there is no next
1585 contiguous chunk that would have to index off it. However,
1586 space is still allocated for it (TOP_FOOT_SIZE) to enable
1587 separation or merging when space is extended.
1589 2. Chunks allocated via mmap, which have the lowest-order bit
1590 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1591 PINUSE_BIT in their head fields. Because they are allocated
1592 one-by-one, each must carry its own prev_foot field, which is
1593 also used to hold the offset this chunk has within its mmapped
1594 region, which is needed to preserve alignment. Each mmapped
1595 chunk is trailed by the first two fields of a fake next-chunk
1596 for sake of usage checks.
1600 struct malloc_chunk {
1601 size_t prev_foot; /* Size of previous chunk (if free). */
1602 size_t head; /* Size and inuse bits. */
1603 struct malloc_chunk* fd; /* double links -- used only if free. */
1604 struct malloc_chunk* bk;
1607 typedef struct malloc_chunk mchunk;
1608 typedef struct malloc_chunk* mchunkptr;
1609 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1610 typedef unsigned int bindex_t; /* Described below */
1611 typedef unsigned int binmap_t; /* Described below */
1612 typedef unsigned int flag_t; /* The type of various bit flag sets */
1614 /* ------------------- Chunks sizes and alignments ----------------------- */
1616 #define MCHUNK_SIZE (sizeof(mchunk))
1619 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1621 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1622 #endif /* FOOTERS */
1624 /* MMapped chunks need a second word of overhead ... */
1625 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1626 /* ... and additional padding for fake next-chunk at foot */
1627 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1629 /* The smallest size we can malloc is an aligned minimal chunk */
1630 #define MIN_CHUNK_SIZE\
1631 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1633 /* conversion from malloc headers to user pointers, and back */
1634 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1635 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1636 /* chunk associated with aligned address A */
1637 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1639 /* Bounds on request (not chunk) sizes. */
1640 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1641 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1643 /* pad request bytes into a usable size */
1644 #define pad_request(req) \
1645 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1647 /* pad request, checking for minimum (but not maximum) */
1648 #define request2size(req) \
1649 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
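/*
  Example, assuming a typical 32-bit build without FOOTERS (so that
  CHUNK_OVERHEAD is one size_t and MIN_CHUNK_SIZE is 16):
      request2size(5)  == 16    (bumped up to MIN_CHUNK_SIZE)
      request2size(20) == 24    ((20 + 4 + 7) & ~7)
*/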
1652 /* ------------------ Operations on head and foot fields ----------------- */
1655 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1656 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1657 use. If the chunk was obtained with mmap, the prev_foot field has
1658 IS_MMAPPED_BIT set, and its remaining bits hold the offset of the
1659 chunk within its mmapped region.
1662 #define PINUSE_BIT (SIZE_T_ONE)
1663 #define CINUSE_BIT (SIZE_T_TWO)
1664 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1666 /* Head value for fenceposts */
1667 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1669 /* extraction of fields from head words */
1670 #define cinuse(p) ((p)->head & CINUSE_BIT)
1671 #define pinuse(p) ((p)->head & PINUSE_BIT)
1672 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
1674 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1675 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1677 /* Treat space at ptr +/- offset as a chunk */
1678 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1679 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1681 /* Ptr to next or previous physical malloc_chunk. */
1682 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1683 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1685 /* extract next chunk's pinuse bit */
1686 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1688 /* Get/set size at footer */
1689 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1690 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1692 /* Set size, pinuse bit, and foot */
1693 #define set_size_and_pinuse_of_free_chunk(p, s)\
1694 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1696 /* Set size, pinuse bit, foot, and clear next pinuse */
1697 #define set_free_with_pinuse(p, s, n)\
1698 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1700 #define is_mmapped(p)\
1701 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1703 /* Get the internal overhead associated with chunk p */
1704 #define overhead_for(p)\
1705 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1707 /* Return true if malloced space is not necessarily cleared */
1709 #define calloc_must_clear(p) (!is_mmapped(p))
1710 #else /* MMAP_CLEARS */
1711 #define calloc_must_clear(p) (1)
1712 #endif /* MMAP_CLEARS */
1714 /* ---------------------- Overlaid data structures ----------------------- */
1717 When chunks are not in use, they are treated as nodes of either
1720 "Small" chunks are stored in circular doubly-linked lists, and look
1723 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1724 | Size of previous chunk |
1725 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1726 `head:' | Size of chunk, in bytes |P|
1727 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1728 | Forward pointer to next chunk in list |
1729 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1730 | Back pointer to previous chunk in list |
1731 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1732 | Unused space (may be 0 bytes long) .
1735 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1736 `foot:' | Size of chunk, in bytes |
1737 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1739 Larger chunks are kept in a form of bitwise digital trees (aka
1740 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1741 free chunks greater than 256 bytes, their size doesn't impose any
1742 constraints on user chunk sizes. Each node looks like:
1744 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1745 | Size of previous chunk |
1746 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1747 `head:' | Size of chunk, in bytes |P|
1748 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1749 | Forward pointer to next chunk of same size |
1750 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1751 | Back pointer to previous chunk of same size |
1752 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1753 | Pointer to left child (child[0]) |
1754 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1755 | Pointer to right child (child[1]) |
1756 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1757 | Pointer to parent |
1758 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1759 | bin index of this chunk |
1760 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1763 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1764 `foot:' | Size of chunk, in bytes |
1765 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1767 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1768 of the same size are arranged in a circularly-linked list, with only
1769 the oldest chunk (the next to be used, in our FIFO ordering)
1770 actually in the tree. (Tree members are distinguished by a non-null
1771 parent pointer.) If a chunk with the same size as an existing node
1772 is inserted, it is linked off the existing node using pointers that
1773 work in the same way as fd/bk pointers of small chunks.
1775 Each tree contains a power of 2 sized range of chunk sizes (the
1776 smallest is 0x100 <= x < 0x180), which is divided in half at each
1777 tree level, with the chunks in the smaller half of the range (0x100
1778 <= x < 0x140 for the top node) in the left subtree and the larger
1779 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1780 done by inspecting individual bits.
1782 Using these rules, each node's left subtree contains all smaller
1783 sizes than its right subtree. However, the node at the root of each
1784 subtree has no particular ordering relationship to either. (The
1785 dividing line between the subtree sizes is based on trie relation.)
1786 If we remove the last chunk of a given size from the interior of the
1787 tree, we need to replace it with a leaf node. The tree ordering
1788 rules permit a node to be replaced by any leaf below it.
1790 The smallest chunk in a tree (a common operation in a best-fit
1791 allocator) can be found by walking a path to the leftmost leaf in
1792 the tree. Unlike a usual binary tree, where we follow left child
1793 pointers until we reach a null, here we follow the right child
1794 pointer any time the left one is null, until we reach a leaf with
1795 both child pointers null. The smallest chunk in the tree will be
1796 somewhere along that path.
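  A minimal sketch of that walk, for a treebin index i of an mstate m
  (it uses the malloc_tree_chunk fields declared below; the real lookup
  code is more involved because it also tracks remainder sizes):

      tchunkptr t = *treebin_at(m, i);
      tchunkptr best = t;
      while (t != 0) {
        if (chunksize(t) < chunksize(best))
          best = t;
        t = (t->child[0] != 0)? t->child[0] : t->child[1];
      }

  On exit, best is the smallest chunk held in that tree (or 0 if the
  bin was empty).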
1798 The worst case number of steps to add, find, or remove a node is
1799 bounded by the number of bits differentiating chunks within
1800 bins. Under current bin calculations, this ranges from 6 up to 21
1801 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1802 is of course much better.
1805 struct malloc_tree_chunk {
1806 /* The first four fields must be compatible with malloc_chunk */
1809 struct malloc_tree_chunk* fd;
1810 struct malloc_tree_chunk* bk;
1812 struct malloc_tree_chunk* child[2];
1813 struct malloc_tree_chunk* parent;
1817 typedef struct malloc_tree_chunk tchunk;
1818 typedef struct malloc_tree_chunk* tchunkptr;
1819 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1821 /* A little helper macro for trees */
1822 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1824 /* ----------------------------- Segments -------------------------------- */
1827 Each malloc space may include non-contiguous segments, held in a
1828 list headed by an embedded malloc_segment record representing the
1829 top-most space. Segments also include flags holding properties of
1830 the space. Large chunks that are directly allocated by mmap are not
1831 included in this list. They are instead independently created and
1832 destroyed without otherwise keeping track of them.
1834 Segment management mainly comes into play for spaces allocated by
1835 MMAP. Any call to MMAP might or might not return memory that is
1836 adjacent to an existing segment. MORECORE normally contiguously
1837 extends the current space, so this space is almost always adjacent,
1838 which is simpler and faster to deal with. (This is why MORECORE is
1839 used preferentially to MMAP when both are available -- see
1840 sys_alloc.) When allocating using MMAP, we don't use any of the
1841 hinting mechanisms (inconsistently) supported in various
1842 implementations of unix mmap, or distinguish reserving from
1843 committing memory. Instead, we just ask for space, and exploit
1844 contiguity when we get it. It is probably possible to do
1845 better than this on some systems, but no general scheme seems
1846 to be significantly better.
1848 Management entails a simpler variant of the consolidation scheme
1849 used for chunks to reduce fragmentation -- new adjacent memory is
1850 normally prepended or appended to an existing segment. However,
1851 there are limitations compared to chunk consolidation that mostly
1852 reflect the fact that segment processing is relatively infrequent
1853 (occurring only when getting memory from system) and that we
1854 don't expect to have huge numbers of segments:
1856 * Segments are not indexed, so traversal requires linear scans. (It
1857 would be possible to index these, but is not worth the extra
1858 overhead and complexity for most programs on most platforms.)
1859 * New segments are only appended to old ones when holding top-most
1860 memory; if they cannot be prepended to others, they are held in
1863 Except for the top-most segment of an mstate, each segment record
1864 is kept at the tail of its segment. Segments are added by pushing
1865 segment records onto the list headed by &mstate.seg for the
1868 Segment flags control allocation/merge/deallocation policies:
1869 * If EXTERN_BIT set, then we did not allocate this segment,
1870 and so should not try to deallocate or merge with others.
1871 (This currently holds only for the initial segment passed
1872 into create_mspace_with_base.)
1873 * If IS_MMAPPED_BIT set, the segment may be merged with
1874 other surrounding mmapped segments and trimmed/de-allocated
1876 * If neither bit is set, then the segment was obtained using
1877 MORECORE so can be merged with surrounding MORECORE'd segments
1878 and deallocated/trimmed using MORECORE with negative arguments.
1881 struct malloc_segment {
1882 char* base; /* base address */
1883 size_t size; /* allocated size */
1884 struct malloc_segment* next; /* ptr to next segment */
1885 flag_t sflags; /* mmap and extern flag */
1888 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1889 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1891 typedef struct malloc_segment msegment;
1892 typedef struct malloc_segment* msegmentptr;
1894 /* ---------------------------- malloc_state ----------------------------- */
1897 A malloc_state holds all of the bookkeeping for a space.
1898 The main fields are:
1901 The topmost chunk of the currently active segment. Its size is
1902 cached in topsize. The actual size of topmost space is
1903 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1904 fenceposts and segment records if necessary when getting more
1905 space from the system. The size at which to autotrim top is
1906 cached from mparams in trim_check, except that it is disabled if
1909 Designated victim (dv)
1910 This is the preferred chunk for servicing small requests that
1911 don't have exact fits. It is normally the chunk split off most
1912 recently to service another small request. Its size is cached in
1913 dvsize. The link fields of this chunk are not maintained since it
1914 is not kept in a bin.
1917 An array of bin headers for free chunks. These bins hold chunks
1918 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1919 chunks of all the same size, spaced 8 bytes apart. To simplify
1920 use in double-linked lists, each bin header acts as a malloc_chunk
1921 pointing to the real first node, if it exists (else pointing to
1922 itself). This avoids special-casing for headers. But to avoid
1923 waste, we allocate only the fd/bk pointers of bins, and then use
1924 repositioning tricks to treat these as the fields of a chunk.
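  (Concretely: smallbin_at(M,i), defined below, just returns the address
  of smallbins[i*2] cast to a chunk pointer. The fd/bk of that fake
  header then land on smallbins[i*2+2] and smallbins[i*2+3], while its
  apparent prev_foot/head words are never touched; this overlap is also
  why the array is declared with (NSMALLBINS+1)*2 entries.)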
1927 Treebins are pointers to the roots of trees holding a range of
1928 sizes. There are 2 equally spaced treebins for each power of two
1929 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1933 There is one bit map for small bins ("smallmap") and one for
1934 treebins ("treemap). Each bin sets its bit when non-empty, and
1935 clears the bit when empty. Bit operations are then used to avoid
1936 bin-by-bin searching -- nearly all "search" is done without ever
1937 looking at bins that won't be selected. The bit maps
1938 conservatively use 32 bits per map word, even on a 64-bit system.
1939 For a good description of some of the bit-based techniques used
1940 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1941 supplement at http://hackersdelight.org/). Many of these are
1942 intended to reduce the branchiness of paths through malloc etc, as
1943 well as to reduce the number of memory locations read or written.
1946 A list of segments headed by an embedded malloc_segment record
1947 representing the initial space.
1949 Address check support
1950 The least_addr field is the least address ever obtained from
1951 MORECORE or MMAP. Attempted frees and reallocs of any address less
1952 than this are trapped (unless INSECURE is defined).
1955 A cross-check field that should always hold the same value as mparams.magic.
1958 Bits recording whether to use MMAP, locks, or contiguous MORECORE
1961 Each space keeps track of current and maximum system memory
1962 obtained via MORECORE or MMAP.
1965 If USE_LOCKS is defined, the "mutex" lock is acquired and released
1966 around every public call using this mspace.
1969 /* Bin types, widths and sizes */
1970 #define NSMALLBINS (32U)
1971 #define NTREEBINS (32U)
1972 #define SMALLBIN_SHIFT (3U)
1973 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
1974 #define TREEBIN_SHIFT (8U)
1975 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
1976 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
1977 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
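/*
  Under the 32-bit, non-FOOTERS defaults, MIN_LARGE_SIZE is 256: free
  chunks smaller than that live in the 32 smallbins (spaced 8 bytes
  apart), larger ones in the treebins, and MAX_SMALL_REQUEST works out
  to 255 - 7 - 4 = 244 user bytes.
*/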
1979 struct malloc_state {
1989 mchunkptr smallbins[(NSMALLBINS+1)*2];
1990 tbinptr treebins[NTREEBINS];
1992 size_t max_footprint;
1995 MLOCK_T mutex; /* locate lock among fields that rarely change */
1996 #endif /* USE_LOCKS */
2000 typedef struct malloc_state* mstate;
2002 /* ------------- Global malloc_state and malloc_params ------------------- */
2005 malloc_params holds global properties, including those that can be
2006 dynamically set using mallopt. There is a single instance, mparams,
2007 initialized in init_mparams.
2010 struct malloc_params {
2014 size_t mmap_threshold;
2015 size_t trim_threshold;
2016 flag_t default_mflags;
2019 static struct malloc_params mparams;
2021 /* The global malloc_state used for all non-"mspace" calls */
2022 static struct malloc_state _gm_;
2024 #define is_global(M) ((M) == &_gm_)
2025 #define is_initialized(M) ((M)->top != 0)
2027 /* -------------------------- system alloc setup ------------------------- */
2029 /* Operations on mflags */
2031 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2032 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2033 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2035 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2036 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2037 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2039 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2040 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2042 #define set_lock(M,L)\
2043 ((M)->mflags = (L)?\
2044 ((M)->mflags | USE_LOCK_BIT) :\
2045 ((M)->mflags & ~USE_LOCK_BIT))
2047 /* page-align a size */
2048 #define page_align(S)\
2049 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2051 /* granularity-align a size */
2052 #define granularity_align(S)\
2053 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2055 #define is_page_aligned(S)\
2056 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2057 #define is_granularity_aligned(S)\
2058 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2060 /* True if segment S holds address A */
2061 #define segment_holds(S, A)\
2062 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2064 /* Return segment holding given address */
2065 static msegmentptr segment_holding(mstate m, char* addr) {
2066 msegmentptr sp = &m->seg;
2068 if (addr >= sp->base && addr < sp->base + sp->size)
2070 if ((sp = sp->next) == 0)
2075 /* Return true if segment contains a segment link */
2076 static int has_segment_link(mstate m, msegmentptr ss) {
2077 msegmentptr sp = &m->seg;
2079 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2081 if ((sp = sp->next) == 0)
2086 #ifndef MORECORE_CANNOT_TRIM
2087 #define should_trim(M,s) ((s) > (M)->trim_check)
2088 #else /* MORECORE_CANNOT_TRIM */
2089 #define should_trim(M,s) (0)
2090 #endif /* MORECORE_CANNOT_TRIM */
2093 TOP_FOOT_SIZE is padding at the end of a segment, including space
2094 that may be needed to place segment records and fenceposts when new
2095 noncontiguous segments are added.
2097 #define TOP_FOOT_SIZE\
2098 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2101 /* ------------------------------- Hooks -------------------------------- */
2104 PREACTION should be defined to return 0 on success, and nonzero on
2105 failure. If you are not using locking, you can redefine these to do
2111 /* Ensure locks are initialized */
2112 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2114 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2115 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2116 #else /* USE_LOCKS */
2119 #define PREACTION(M) (0)
2120 #endif /* PREACTION */
2123 #define POSTACTION(M)
2124 #endif /* POSTACTION */
2126 #endif /* USE_LOCKS */
2129 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2130 USAGE_ERROR_ACTION is triggered on detected bad frees and
2131 reallocs. The argument p is an address that might have triggered the
2132 fault. It is ignored by the two predefined actions, but might be
2133 useful in custom actions that try to help diagnose errors.
2136 #if PROCEED_ON_ERROR
2138 /* A count of the number of corruption errors causing resets */
2139 int malloc_corruption_error_count;
2141 /* default corruption action */
2142 static void reset_on_error(mstate m);
2144 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2145 #define USAGE_ERROR_ACTION(m, p)
2147 #else /* PROCEED_ON_ERROR */
2149 #ifndef CORRUPTION_ERROR_ACTION
2150 #define CORRUPTION_ERROR_ACTION(m) ABORT
2151 #endif /* CORRUPTION_ERROR_ACTION */
2153 #ifndef USAGE_ERROR_ACTION
2154 #define USAGE_ERROR_ACTION(m,p) ABORT
2155 #endif /* USAGE_ERROR_ACTION */
2157 #endif /* PROCEED_ON_ERROR */
2159 /* -------------------------- Debugging setup ---------------------------- */
2163 #define check_free_chunk(M,P)
2164 #define check_inuse_chunk(M,P)
2165 #define check_malloced_chunk(M,P,N)
2166 #define check_mmapped_chunk(M,P)
2167 #define check_malloc_state(M)
2168 #define check_top_chunk(M,P)
2171 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2172 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2173 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2174 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2175 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2176 #define check_malloc_state(M) do_check_malloc_state(M)
2178 static void do_check_any_chunk(mstate m, mchunkptr p);
2179 static void do_check_top_chunk(mstate m, mchunkptr p);
2180 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2181 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2182 static void do_check_free_chunk(mstate m, mchunkptr p);
2183 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2184 static void do_check_tree(mstate m, tchunkptr t);
2185 static void do_check_treebin(mstate m, bindex_t i);
2186 static void do_check_smallbin(mstate m, bindex_t i);
2187 static void do_check_malloc_state(mstate m);
2188 static int bin_find(mstate m, mchunkptr x);
2189 static size_t traverse_and_check(mstate m);
2192 /* ---------------------------- Indexing Bins ---------------------------- */
2194 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2195 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2196 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2197 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
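/*
  For instance, with the 32-bit defaults a 100-byte request pads to a
  104-byte chunk (see request2size above), giving small_index(104) == 13
  and small_index2size(13) == 104.
*/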
2199 /* addressing by index. See above about smallbin repositioning */
2200 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2201 #define treebin_at(M,i) (&((M)->treebins[i]))
2203 /* assign tree index for size S to variable I */
2204 #if defined(__GNUC__) && defined(i386)
2205 #define compute_tree_index(S, I)\
2207 size_t X = S >> TREEBIN_SHIFT;\
2210 else if (X > 0xFFFF)\
2214 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2215 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2219 #define compute_tree_index(S, I)\
2221 size_t X = S >> TREEBIN_SHIFT;\
2224 else if (X > 0xFFFF)\
2227 unsigned int Y = (unsigned int)X;\
2228 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2229 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2231 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2232 K = 14 - N + ((Y <<= K) >> 15);\
2233 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2238 /* Bit representing maximum resolved size in a treebin at i */
2239 #define bit_for_tree_index(i) \
2240 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2242 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2243 #define leftshift_for_tree_index(i) \
2244 ((i == NTREEBINS-1)? 0 : \
2245 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2247 /* The size of the smallest chunk held in bin with index i */
2248 #define minsize_for_tree_index(i) \
2249 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2250 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2253 /* ------------------------ Operations on bin maps ----------------------- */
2255 /* bit corresponding to given index */
2256 #define idx2bit(i) ((binmap_t)(1) << (i))
2258 /* Mark/Clear bits with given index */
2259 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2260 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2261 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2263 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2264 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2265 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2267 /* index corresponding to given bit */
2269 #if defined(__GNUC__) && defined(i386)
2270 #define compute_bit2idx(X, I)\
2273 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2279 #define compute_bit2idx(X, I) I = ffs(X)-1
2281 #else /* USE_BUILTIN_FFS */
2282 #define compute_bit2idx(X, I)\
2284 unsigned int Y = X - 1;\
2285 unsigned int K = Y >> (16-4) & 16;\
2286 unsigned int N = K; Y >>= K;\
2287 N += K = Y >> (8-3) & 8; Y >>= K;\
2288 N += K = Y >> (4-2) & 4; Y >>= K;\
2289 N += K = Y >> (2-1) & 2; Y >>= K;\
2290 N += K = Y >> (1-0) & 1; Y >>= K;\
2291 I = (bindex_t)(N + Y);\
2293 #endif /* USE_BUILTIN_FFS */
2296 /* isolate the least set bit of a bitmap */
2297 #define least_bit(x) ((x) & -(x))
2299 /* mask with all bits to left of least bit of x on */
2300 #define left_bits(x) ((x<<1) | -(x<<1))
2302 /* mask with all bits to left of or equal to least bit of x on */
2303 #define same_or_left_bits(x) ((x) | -(x))
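/*
  A sketch of how these combine (simplified relative to the actual
  allocation path): to locate the first non-empty small bin of an
  mstate m at index i or above,

      binmap_t candidates = same_or_left_bits(idx2bit(i)) & m->smallmap;
      if (candidates != 0) {
        bindex_t j;
        compute_bit2idx(least_bit(candidates), j);
        ... smallbin_at(m, j) is the closest usable small bin ...
      }
*/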
2306 /* ----------------------- Runtime Check Support ------------------------- */
2309 For security, the main invariant is that malloc/free/etc never
2310 writes to a static address other than malloc_state, unless static
2311 malloc_state itself has been corrupted, which cannot occur via
2312 malloc (because of these checks). In essence this means that we
2313 believe all pointers, sizes, maps etc held in malloc_state, but
2314 check all of those linked or offsetted from other embedded data
2315 structures. These checks are interspersed with main code in a way
2316 that tends to minimize their run-time cost.
2318 When FOOTERS is defined, in addition to range checking, we also
2319 verify footer fields of inuse chunks, which can be used to guarantee
2320 that the mstate controlling malloc/free is intact. This is a
2321 streamlined version of the approach described by William Robertson
2322 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2323 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2324 of an inuse chunk holds the xor of its mstate and a random seed,
2325 that is checked upon calls to free() and realloc(). This is
2326 (probabilistically) unguessable from outside the program, but can be
2327 computed by any code successfully malloc'ing any chunk, so does not
2328 itself provide protection against code that has already broken
2329 security through some other means. Unlike Robertson et al, we
2330 always dynamically check addresses of all offset chunks (previous,
2331 next, etc). This turns out to be cheaper than relying on hashes.
2335 /* Check if address a is at least as high as any from MORECORE or MMAP */
2336 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2337 /* Check if address of next chunk n is higher than base chunk p */
2338 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2339 /* Check if p has its cinuse bit on */
2340 #define ok_cinuse(p) cinuse(p)
2341 /* Check if p has its pinuse bit on */
2342 #define ok_pinuse(p) pinuse(p)
2344 #else /* !INSECURE */
2345 #define ok_address(M, a) (1)
2346 #define ok_next(b, n) (1)
2347 #define ok_cinuse(p) (1)
2348 #define ok_pinuse(p) (1)
2349 #endif /* !INSECURE */
2351 #if (FOOTERS && !INSECURE)
2352 /* Check if (alleged) mstate m has expected magic field */
2353 #define ok_magic(M) ((M)->magic == mparams.magic)
2354 #else /* (FOOTERS && !INSECURE) */
2355 #define ok_magic(M) (1)
2356 #endif /* (FOOTERS && !INSECURE) */
2359 /* In gcc, use __builtin_expect to minimize impact of checks */
2361 #if defined(__GNUC__) && __GNUC__ >= 3
2362 #define RTCHECK(e) __builtin_expect(e, 1)
2364 #define RTCHECK(e) (e)
2366 #else /* !INSECURE */
2367 #define RTCHECK(e) (1)
2368 #endif /* !INSECURE */
2370 /* macros to set up inuse chunks with or without footers */
2374 #define mark_inuse_foot(M,p,s)
2376 /* Set cinuse bit and pinuse bit of next chunk */
2377 #define set_inuse(M,p,s)\
2378 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2379 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2381 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2382 #define set_inuse_and_pinuse(M,p,s)\
2383 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2384 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2386 /* Set size, cinuse and pinuse bit of this chunk */
2387 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2388 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2392 /* Set foot of inuse chunk to be xor of mstate and seed */
2393 #define mark_inuse_foot(M,p,s)\
2394 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2396 #define get_mstate_for(p)\
2397 ((mstate)(((mchunkptr)((char*)(p) +\
2398 (chunksize(p))))->prev_foot ^ mparams.magic))
2400 #define set_inuse(M,p,s)\
2401 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2402 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2403 mark_inuse_foot(M,p,s))
2405 #define set_inuse_and_pinuse(M,p,s)\
2406 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2407 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2408 mark_inuse_foot(M,p,s))
2410 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2411 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2412 mark_inuse_foot(M, p, s))
2414 #endif /* !FOOTERS */
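/*
  When FOOTERS is in effect, a routine handed a user pointer mem can
  recover and validate the owning mstate before trusting anything else,
  roughly:

      mchunkptr p = mem2chunk(mem);
      mstate fm = get_mstate_for(p);
      if (!ok_magic(fm)) {
        USAGE_ERROR_ACTION(fm, p);
        return;
      }
*/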
2416 /* ---------------------------- setting mparams -------------------------- */
2418 /* Initialize mparams */
2419 static int init_mparams(void) {
2420 if (mparams.page_size == 0) {
2423 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2424 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2425 #if MORECORE_CONTIGUOUS
2426 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2427 #else /* MORECORE_CONTIGUOUS */
2428 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2429 #endif /* MORECORE_CONTIGUOUS */
2431 #if (FOOTERS && !INSECURE)
2435 unsigned char buf[sizeof(size_t)];
2436 /* Try to use /dev/urandom, else fall back on using time */
2437 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2438 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2439 s = *((size_t *) buf);
2443 #endif /* USE_DEV_RANDOM */
2444 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2446 s |= (size_t)8U; /* ensure nonzero */
2447 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2450 #else /* (FOOTERS && !INSECURE) */
2451 s = (size_t)0x58585858U;
2452 #endif /* (FOOTERS && !INSECURE) */
2453 ACQUIRE_MAGIC_INIT_LOCK();
2454 if (mparams.magic == 0) {
2456 /* Set up lock for main malloc area */
2457 INITIAL_LOCK(&gm->mutex);
2458 gm->mflags = mparams.default_mflags;
2460 RELEASE_MAGIC_INIT_LOCK();
2463 mparams.page_size = malloc_getpagesize;
2464 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2465 DEFAULT_GRANULARITY : mparams.page_size);
2468 SYSTEM_INFO system_info;
2469 GetSystemInfo(&system_info);
2470 mparams.page_size = system_info.dwPageSize;
2471 mparams.granularity = system_info.dwAllocationGranularity;
2475 /* Sanity-check configuration:
2476 size_t must be unsigned and as wide as pointer type.
2477 ints must be at least 4 bytes.
2478 alignment must be at least 8.
2479 Alignment, min chunk size, and page size must all be powers of 2.
2481 if ((sizeof(size_t) != sizeof(char*)) ||
2482 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2483 (sizeof(int) < 4) ||
2484 (MALLOC_ALIGNMENT < (size_t)8U) ||
2485 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2486 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2487 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2488 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2494 /* support for mallopt */
2495 static int change_mparam(int param_number, int value) {
2496 size_t val = (size_t)value;
2498 switch(param_number) {
2499 case M_TRIM_THRESHOLD:
2500 mparams.trim_threshold = val;
2503 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2504 mparams.granularity = val;
2509 case M_MMAP_THRESHOLD:
2510 mparams.mmap_threshold = val;
2518 /* ------------------------- Debugging Support --------------------------- */
2520 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2521 static void do_check_any_chunk(mstate m, mchunkptr p) {
2522 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2523 assert(ok_address(m, p));
2526 /* Check properties of top chunk */
2527 static void do_check_top_chunk(mstate m, mchunkptr p) {
2528 msegmentptr sp = segment_holding(m, (char*)p);
2529 size_t sz = chunksize(p);
2531 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2532 assert(ok_address(m, p));
2533 assert(sz == m->topsize);
2535 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2537 assert(!next_pinuse(p));
2540 /* Check properties of (inuse) mmapped chunks */
2541 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2542 size_t sz = chunksize(p);
2543 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2544 assert(is_mmapped(p));
2545 assert(use_mmap(m));
2546 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2547 assert(ok_address(m, p));
2548 assert(!is_small(sz));
2549 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2550 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2551 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2554 /* Check properties of inuse chunks */
2555 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2556 do_check_any_chunk(m, p);
2558 assert(next_pinuse(p));
2559 /* If not pinuse and not mmapped, previous chunk has OK offset */
2560 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2562 do_check_mmapped_chunk(m, p);
2565 /* Check properties of free chunks */
2566 static void do_check_free_chunk(mstate m, mchunkptr p) {
2567 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2568 mchunkptr next = chunk_plus_offset(p, sz);
2569 do_check_any_chunk(m, p);
2571 assert(!next_pinuse(p));
2572 assert (!is_mmapped(p));
2573 if (p != m->dv && p != m->top) {
2574 if (sz >= MIN_CHUNK_SIZE) {
2575 assert((sz & CHUNK_ALIGN_MASK) == 0);
2576 assert(is_aligned(chunk2mem(p)));
2577 assert(next->prev_foot == sz);
2579 assert (next == m->top || cinuse(next));
2580 assert(p->fd->bk == p);
2581 assert(p->bk->fd == p);
2583 else /* markers are always of size SIZE_T_SIZE */
2584 assert(sz == SIZE_T_SIZE);
2588 /* Check properties of malloced chunks at the point they are malloced */
2589 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2591 mchunkptr p = mem2chunk(mem);
2592 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2593 do_check_inuse_chunk(m, p);
2594 assert((sz & CHUNK_ALIGN_MASK) == 0);
2595 assert(sz >= MIN_CHUNK_SIZE);
2597 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2598 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2602 /* Check a tree and its subtrees. */
2603 static void do_check_tree(mstate m, tchunkptr t) {
2606 bindex_t tindex = t->index;
2607 size_t tsize = chunksize(t);
2609 compute_tree_index(tsize, idx);
2610 assert(tindex == idx);
2611 assert(tsize >= MIN_LARGE_SIZE);
2612 assert(tsize >= minsize_for_tree_index(idx));
2613 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2615 do { /* traverse through chain of same-sized nodes */
2616 do_check_any_chunk(m, ((mchunkptr)u));
2617 assert(u->index == tindex);
2618 assert(chunksize(u) == tsize);
2620 assert(!next_pinuse(u));
2621 assert(u->fd->bk == u);
2622 assert(u->bk->fd == u);
2623 if (u->parent == 0) {
2624 assert(u->child[0] == 0);
2625 assert(u->child[1] == 0);
2628 assert(head == 0); /* only one node on chain has parent */
2630 assert(u->parent != u);
2631 assert (u->parent->child[0] == u ||
2632 u->parent->child[1] == u ||
2633 *((tbinptr*)(u->parent)) == u);
2634 if (u->child[0] != 0) {
2635 assert(u->child[0]->parent == u);
2636 assert(u->child[0] != u);
2637 do_check_tree(m, u->child[0]);
2639 if (u->child[1] != 0) {
2640 assert(u->child[1]->parent == u);
2641 assert(u->child[1] != u);
2642 do_check_tree(m, u->child[1]);
2644 if (u->child[0] != 0 && u->child[1] != 0) {
2645 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2653 /* Check all the chunks in a treebin. */
2654 static void do_check_treebin(mstate m, bindex_t i) {
2655 tbinptr* tb = treebin_at(m, i);
2657 int empty = (m->treemap & (1U << i)) == 0;
2661 do_check_tree(m, t);
2664 /* Check all the chunks in a smallbin. */
2665 static void do_check_smallbin(mstate m, bindex_t i) {
2666 sbinptr b = smallbin_at(m, i);
2667 mchunkptr p = b->bk;
2668 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2672 for (; p != b; p = p->bk) {
2673 size_t size = chunksize(p);
2675 /* each chunk claims to be free */
2676 do_check_free_chunk(m, p);
2677 /* chunk belongs in bin */
2678 assert(small_index(size) == i);
2679 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2680 /* chunk is followed by an inuse chunk */
2682 if (q->head != FENCEPOST_HEAD)
2683 do_check_inuse_chunk(m, q);
2688 /* Find x in a bin. Used in other check functions. */
2689 static int bin_find(mstate m, mchunkptr x) {
2690 size_t size = chunksize(x);
2691 if (is_small(size)) {
2692 bindex_t sidx = small_index(size);
2693 sbinptr b = smallbin_at(m, sidx);
2694 if (smallmap_is_marked(m, sidx)) {
2699 } while ((p = p->fd) != b);
2704 compute_tree_index(size, tidx);
2705 if (treemap_is_marked(m, tidx)) {
2706 tchunkptr t = *treebin_at(m, tidx);
2707 size_t sizebits = size << leftshift_for_tree_index(tidx);
2708 while (t != 0 && chunksize(t) != size) {
2709 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2715 if (u == (tchunkptr)x)
2717 } while ((u = u->fd) != t);
2724 /* Traverse each chunk and check it; return total */
2725 static size_t traverse_and_check(mstate m) {
2727 if (is_initialized(m)) {
2728 msegmentptr s = &m->seg;
2729 sum += m->topsize + TOP_FOOT_SIZE;
2731 mchunkptr q = align_as_chunk(s->base);
2732 mchunkptr lastq = 0;
2734 while (segment_holds(s, q) &&
2735 q != m->top && q->head != FENCEPOST_HEAD) {
2736 sum += chunksize(q);
2738 assert(!bin_find(m, q));
2739 do_check_inuse_chunk(m, q);
2742 assert(q == m->dv || bin_find(m, q));
2743 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2744 do_check_free_chunk(m, q);
2755 /* Check all properties of malloc_state. */
2756 static void do_check_malloc_state(mstate m) {
2760 for (i = 0; i < NSMALLBINS; ++i)
2761 do_check_smallbin(m, i);
2762 for (i = 0; i < NTREEBINS; ++i)
2763 do_check_treebin(m, i);
2765 if (m->dvsize != 0) { /* check dv chunk */
2766 do_check_any_chunk(m, m->dv);
2767 assert(m->dvsize == chunksize(m->dv));
2768 assert(m->dvsize >= MIN_CHUNK_SIZE);
2769 assert(bin_find(m, m->dv) == 0);
2772 if (m->top != 0) { /* check top chunk */
2773 do_check_top_chunk(m, m->top);
2774 assert(m->topsize == chunksize(m->top));
2775 assert(m->topsize > 0);
2776 assert(bin_find(m, m->top) == 0);
2779 total = traverse_and_check(m);
2780 assert(total <= m->footprint);
2781 assert(m->footprint <= m->max_footprint);
2785 /* ----------------------------- statistics ------------------------------ */
2788 static struct mallinfo internal_mallinfo(mstate m) {
2789 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2790 if (!PREACTION(m)) {
2791 check_malloc_state(m);
2792 if (is_initialized(m)) {
2793 size_t nfree = SIZE_T_ONE; /* top always free */
2794 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2796 msegmentptr s = &m->seg;
2798 mchunkptr q = align_as_chunk(s->base);
2799 while (segment_holds(s, q) &&
2800 q != m->top && q->head != FENCEPOST_HEAD) {
2801 size_t sz = chunksize(q);
2814 nm.hblkhd = m->footprint - sum;
2815 nm.usmblks = m->max_footprint;
2816 nm.uordblks = m->footprint - mfree;
2817 nm.fordblks = mfree;
2818 nm.keepcost = m->topsize;
2825 #endif /* !NO_MALLINFO */
2827 static void internal_malloc_stats(mstate m) {
2828 if (!PREACTION(m)) {
2832 check_malloc_state(m);
2833 if (is_initialized(m)) {
2834 msegmentptr s = &m->seg;
2835 maxfp = m->max_footprint;
2837 used = fp - (m->topsize + TOP_FOOT_SIZE);
2840 mchunkptr q = align_as_chunk(s->base);
2841 while (segment_holds(s, q) &&
2842 q != m->top && q->head != FENCEPOST_HEAD) {
2844 used -= chunksize(q);
2851 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2852 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2853 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2859 /* ----------------------- Operations on smallbins ----------------------- */
2862 Various forms of linking and unlinking are defined as macros. Even
2863 the ones for trees, which are very long but have very short typical
2864 paths. This is ugly but reduces reliance on inlining support of
2868 /* Link a free chunk into a smallbin */
2869 #define insert_small_chunk(M, P, S) {\
2870 bindex_t I = small_index(S);\
2871 mchunkptr B = smallbin_at(M, I);\
2873 assert(S >= MIN_CHUNK_SIZE);\
2874 if (!smallmap_is_marked(M, I))\
2875 mark_smallmap(M, I);\
2876 else if (RTCHECK(ok_address(M, B->fd)))\
2879 CORRUPTION_ERROR_ACTION(M);\
2887 /* Unlink a chunk from a smallbin */
2888 #define unlink_small_chunk(M, P, S) {\
2889 mchunkptr F = P->fd;\
2890 mchunkptr B = P->bk;\
2891 bindex_t I = small_index(S);\
2894 assert(chunksize(P) == small_index2size(I));\
2896 clear_smallmap(M, I);\
2897 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2898 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2903 CORRUPTION_ERROR_ACTION(M);\
2907 /* Unlink the first chunk from a smallbin */
2908 #define unlink_first_small_chunk(M, B, P, I) {\
2909 mchunkptr F = P->fd;\
2912 assert(chunksize(P) == small_index2size(I));\
2914 clear_smallmap(M, I);\
2915 else if (RTCHECK(ok_address(M, F))) {\
2920 CORRUPTION_ERROR_ACTION(M);\
2924 /* Replace dv node, binning the old one */
2925 /* Used only when dvsize known to be small */
2926 #define replace_dv(M, P, S) {\
2927 size_t DVS = M->dvsize;\
2929 mchunkptr DV = M->dv;\
2930 assert(is_small(DVS));\
2931 insert_small_chunk(M, DV, DVS);\
2937 /* ------------------------- Operations on trees ------------------------- */
2939 /* Insert chunk into tree */
2940 #define insert_large_chunk(M, X, S) {\
2943 compute_tree_index(S, I);\
2944 H = treebin_at(M, I);\
2946 X->child[0] = X->child[1] = 0;\
2947 if (!treemap_is_marked(M, I)) {\
2948 mark_treemap(M, I);\
2950 X->parent = (tchunkptr)H;\
2955 size_t K = S << leftshift_for_tree_index(I);\
2957 if (chunksize(T) != S) {\
2958 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2962 else if (RTCHECK(ok_address(M, C))) {\
2969 CORRUPTION_ERROR_ACTION(M);\
2974 tchunkptr F = T->fd;\
2975 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2983 CORRUPTION_ERROR_ACTION(M);\
2994 1. If x is a chained node, unlink it from its same-sized fd/bk links
2995 and choose its bk node as its replacement.
2996 2. If x was the last node of its size, but not a leaf node, it must
2997 be replaced with a leaf node (not merely one with an open left or
2998 right), to make sure that lefts and rights of descendants
2999 correspond properly to bit masks. We use the rightmost descendant
3000 of x. We could use any other leaf, but this is easy to locate and
3001 tends to counteract removal of leftmosts elsewhere, and so keeps
3002 paths shorter than minimally guaranteed. This doesn't loop much
3003 because on average a node in a tree is near the bottom.
3004 3. If x is the base of a chain (i.e., has parent links) relink
3005 x's parent and children to x's replacement (or null if none).
3008 #define unlink_large_chunk(M, X) {\
3009 tchunkptr XP = X->parent;\
3012 tchunkptr F = X->fd;\
3014 if (RTCHECK(ok_address(M, F))) {\
3019 CORRUPTION_ERROR_ACTION(M);\
3024 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3025 ((R = *(RP = &(X->child[0]))) != 0)) {\
3027 while ((*(CP = &(R->child[1])) != 0) ||\
3028 (*(CP = &(R->child[0])) != 0)) {\
3031 if (RTCHECK(ok_address(M, RP)))\
3034 CORRUPTION_ERROR_ACTION(M);\
3039 tbinptr* H = treebin_at(M, X->index);\
3041 if ((*H = R) == 0) \
3042 clear_treemap(M, X->index);\
3044 else if (RTCHECK(ok_address(M, XP))) {\
3045 if (XP->child[0] == X) \
3051 CORRUPTION_ERROR_ACTION(M);\
3053 if (RTCHECK(ok_address(M, R))) {\
3056 if ((C0 = X->child[0]) != 0) {\
3057 if (RTCHECK(ok_address(M, C0))) {\
3062 CORRUPTION_ERROR_ACTION(M);\
3064 if ((C1 = X->child[1]) != 0) {\
3065 if (RTCHECK(ok_address(M, C1))) {\
3070 CORRUPTION_ERROR_ACTION(M);\
3074 CORRUPTION_ERROR_ACTION(M);\
3079 /* Relays to large vs small bin operations */
3081 #define insert_chunk(M, P, S)\
3082 if (is_small(S)) insert_small_chunk(M, P, S)\
3083 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3085 #define unlink_chunk(M, P, S)\
3086 if (is_small(S)) unlink_small_chunk(M, P, S)\
3087 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3090 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3093 #define internal_malloc(m, b) mspace_malloc(m, b)
3094 #define internal_free(m, mem) mspace_free(m,mem);
3095 #else /* ONLY_MSPACES */
3097 #define internal_malloc(m, b)\
3098 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3099 #define internal_free(m, mem)\
3100 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3102 #define internal_malloc(m, b) dlmalloc(b)
3103 #define internal_free(m, mem) dlfree(mem)
3104 #endif /* MSPACES */
3105 #endif /* ONLY_MSPACES */
3107 /* ----------------------- Direct-mmapping chunks ----------------------- */
3110 Directly mmapped chunks are set up with an offset to the start of
3111 the mmapped region stored in the prev_foot field of the chunk. This
3112 allows reconstruction of the required argument to MUNMAP when freed,
3113 and also allows adjustment of the returned chunk to meet alignment
3114 requirements (especially in memalign). There is also enough space
3115 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3116 the PINUSE bit so frees can be checked.
3119 /* Malloc using mmap */
3120 static void* mmap_alloc(mstate m, size_t nb) {
3121 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3122 if (mmsize > nb) { /* Check for wrap around 0 */
3123 char* mm = (char*)(DIRECT_MMAP(mmsize));
3125 size_t offset = align_offset(chunk2mem(mm));
3126 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3127 mchunkptr p = (mchunkptr)(mm + offset);
3128 p->prev_foot = offset | IS_MMAPPED_BIT;
3129 (p)->head = (psize|CINUSE_BIT);
3130 mark_inuse_foot(m, p, psize);
3131 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3132 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3134 if (mm < m->least_addr)
3136 if ((m->footprint += mmsize) > m->max_footprint)
3137 m->max_footprint = m->footprint;
3138 assert(is_aligned(chunk2mem(p)));
3139 check_mmapped_chunk(m, p);
3140 return chunk2mem(p);
3146 /* Realloc using mmap */
3147 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3148 size_t oldsize = chunksize(oldp);
3149 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3151 /* Keep old chunk if big enough but not too big */
3152 if (oldsize >= nb + SIZE_T_SIZE &&
3153 (oldsize - nb) <= (mparams.granularity << 1))
3156 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3157 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3158 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3160 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3161 oldmmsize, newmmsize, 1);
3163 mchunkptr newp = (mchunkptr)(cp + offset);
3164 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3165 newp->head = (psize|CINUSE_BIT);
3166 mark_inuse_foot(m, newp, psize);
3167 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3168 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3170 if (cp < m->least_addr)
3172 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3173 m->max_footprint = m->footprint;
3174 check_mmapped_chunk(m, newp);
3181 /* -------------------------- mspace management -------------------------- */
3183 /* Initialize top chunk and its size */
3184 static void init_top(mstate m, mchunkptr p, size_t psize) {
3185 /* Ensure alignment */
3186 size_t offset = align_offset(chunk2mem(p));
3187 p = (mchunkptr)((char*)p + offset);
3192 p->head = psize | PINUSE_BIT;
3193 /* set size of fake trailing chunk holding overhead space only once */
3194 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3195 m->trim_check = mparams.trim_threshold; /* reset on each update */
3198 /* Initialize bins for a new mstate that is otherwise zeroed out */
3199 static void init_bins(mstate m) {
3200 /* Establish circular links for smallbins */
3202 for (i = 0; i < NSMALLBINS; ++i) {
3203 sbinptr bin = smallbin_at(m,i);
3204 bin->fd = bin->bk = bin;
3208 #if PROCEED_ON_ERROR
3210 /* default corruption action */
3211 static void reset_on_error(mstate m) {
3213 ++malloc_corruption_error_count;
3214 /* Reinitialize fields to forget about all memory */
3215 m->smallbins = m->treebins = 0;
3216 m->dvsize = m->topsize = 0;
3221 for (i = 0; i < NTREEBINS; ++i)
3222 *treebin_at(m, i) = 0;
3225 #endif /* PROCEED_ON_ERROR */
3227 /* Allocate chunk and prepend remainder with chunk in successor base. */
3228 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3230 mchunkptr p = align_as_chunk(newbase);
3231 mchunkptr oldfirst = align_as_chunk(oldbase);
3232 size_t psize = (char*)oldfirst - (char*)p;
3233 mchunkptr q = chunk_plus_offset(p, nb);
3234 size_t qsize = psize - nb;
3235 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3237 assert((char*)oldfirst > (char*)q);
3238 assert(pinuse(oldfirst));
3239 assert(qsize >= MIN_CHUNK_SIZE);
3241 /* consolidate remainder with first chunk of old base */
3242 if (oldfirst == m->top) {
3243 size_t tsize = m->topsize += qsize;
3245 q->head = tsize | PINUSE_BIT;
3246 check_top_chunk(m, q);
3248 else if (oldfirst == m->dv) {
3249 size_t dsize = m->dvsize += qsize;
3251 set_size_and_pinuse_of_free_chunk(q, dsize);
3254 if (!cinuse(oldfirst)) {
3255 size_t nsize = chunksize(oldfirst);
3256 unlink_chunk(m, oldfirst, nsize);
3257 oldfirst = chunk_plus_offset(oldfirst, nsize);
3260 set_free_with_pinuse(q, qsize, oldfirst);
3261 insert_chunk(m, q, qsize);
3262 check_free_chunk(m, q);
3265 check_malloced_chunk(m, chunk2mem(p), nb);
3266 return chunk2mem(p);
3270 /* Add a segment to hold a new noncontiguous region */
3271 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3272 /* Determine locations and sizes of segment, fenceposts, old top */
3273 char* old_top = (char*)m->top;
3274 msegmentptr oldsp = segment_holding(m, old_top);
3275 char* old_end = oldsp->base + oldsp->size;
3276 size_t ssize = pad_request(sizeof(struct malloc_segment));
3277 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3278 size_t offset = align_offset(chunk2mem(rawsp));
3279 char* asp = rawsp + offset;
3280 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3281 mchunkptr sp = (mchunkptr)csp;
3282 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3283 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3284 mchunkptr p = tnext;
3287 /* reset top to new space */
3288 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3290 /* Set up segment record */
3291 assert(is_aligned(ss));
3292 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3293 *ss = m->seg; /* Push current record */
3294 m->seg.base = tbase;
3295 m->seg.size = tsize;
3296 m->seg.sflags = mmapped;
3299 /* Insert trailing fenceposts */
3301 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3302 p->head = FENCEPOST_HEAD;
3304 if ((char*)(&(nextp->head)) < old_end)
3309 assert(nfences >= 2);
3311 /* Insert the rest of old top into a bin as an ordinary free chunk */
3312 if (csp != old_top) {
3313 mchunkptr q = (mchunkptr)old_top;
3314 size_t psize = csp - old_top;
3315 mchunkptr tn = chunk_plus_offset(q, psize);
3316 set_free_with_pinuse(q, psize, tn);
3317 insert_chunk(m, q, psize);
3320 check_top_chunk(m, m->top);
3323 /* -------------------------- System allocation -------------------------- */
3325 /* Get memory from system using MORECORE or MMAP */
3326 static void* sys_alloc(mstate m, size_t nb) {
3327 char* tbase = CMFAIL;
3329 flag_t mmap_flag = 0;
3333 /* Directly map large chunks */
3334 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3335 void* mem = mmap_alloc(m, nb);
3341 Try getting memory in any of three ways (in most-preferred to
3342 least-preferred order):
3343 1. A call to MORECORE that can normally contiguously extend memory.
3344 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3345 main space is mmapped or a previous contiguous call failed)
3346 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3347 Note that under the default settings, if MORECORE is unable to
3348 fulfill a request, and HAVE_MMAP is true, then mmap is
3349 used as a noncontiguous system allocator. This is a useful backup
3350 strategy for systems with holes in address spaces -- in this case
3351 sbrk cannot contiguously expand the heap, but mmap may be able to find space.
3353 3. A call to MORECORE that cannot usually contiguously extend memory.
3354 (disabled if not HAVE_MORECORE)
3357 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3359 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3361 ACQUIRE_MORECORE_LOCK();
3363 if (ss == 0) { /* First time through or recovery */
3364 char* base = (char*)CALL_MORECORE(0);
3365 if (base != CMFAIL) {
3366 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3367 /* Adjust to end on a page boundary */
3368 if (!is_page_aligned(base))
3369 asize += (page_align((size_t)base) - (size_t)base);
3370 /* Can't call MORECORE if size is negative when treated as signed */
3371 if (asize < HALF_MAX_SIZE_T &&
3372 (br = (char*)(CALL_MORECORE(asize))) == base) {
3379 /* Subtract out existing available top space from MORECORE request. */
3380 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3381 /* Use mem here only if it did contiguously extend old space */
3382 if (asize < HALF_MAX_SIZE_T &&
3383 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3389 if (tbase == CMFAIL) { /* Cope with partial failure */
3390 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3391 if (asize < HALF_MAX_SIZE_T &&
3392 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3393 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3394 if (esize < HALF_MAX_SIZE_T) {
3395 char* end = (char*)CALL_MORECORE(esize);
3398 else { /* Can't use; try to release */
3399 CALL_MORECORE(-asize);
3405 if (br != CMFAIL) { /* Use the space we did get */
3410 disable_contiguous(m); /* Don't try contiguous path in the future */
3413 RELEASE_MORECORE_LOCK();
3416 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3417 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3418 size_t rsize = granularity_align(req);
3419 if (rsize > nb) { /* Fail if wraps around zero */
3420 char* mp = (char*)(CALL_MMAP(rsize));
3424 mmap_flag = IS_MMAPPED_BIT;
3429 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3430 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3431 if (asize < HALF_MAX_SIZE_T) {
3434 ACQUIRE_MORECORE_LOCK();
3435 br = (char*)(CALL_MORECORE(asize));
3436 end = (char*)(CALL_MORECORE(0));
3437 RELEASE_MORECORE_LOCK();
3438 if (br != CMFAIL && end != CMFAIL && br < end) {
3439 size_t ssize = end - br;
3440 if (ssize > nb + TOP_FOOT_SIZE) {
3448 if (tbase != CMFAIL) {
3450 if ((m->footprint += tsize) > m->max_footprint)
3451 m->max_footprint = m->footprint;
3453 if (!is_initialized(m)) { /* first-time initialization */
3454 m->seg.base = m->least_addr = tbase;
3455 m->seg.size = tsize;
3456 m->seg.sflags = mmap_flag;
3457 m->magic = mparams.magic;
3460 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3462 /* Offset top by embedded malloc_state */
3463 mchunkptr mn = next_chunk(mem2chunk(m));
3464 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3469 /* Try to merge with an existing segment */
3470 msegmentptr sp = &m->seg;
3471 while (sp != 0 && tbase != sp->base + sp->size)
3474 !is_extern_segment(sp) &&
3475 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3476 segment_holds(sp, m->top)) { /* append */
3478 init_top(m, m->top, m->topsize + tsize);
3481 if (tbase < m->least_addr)
3482 m->least_addr = tbase;
3484 while (sp != 0 && sp->base != tbase + tsize)
3487 !is_extern_segment(sp) &&
3488 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3489 char* oldbase = sp->base;
3492 return prepend_alloc(m, tbase, oldbase, nb);
3495 add_segment(m, tbase, tsize, mmap_flag);
3499 if (nb < m->topsize) { /* Allocate from new or extended top space */
3500 size_t rsize = m->topsize -= nb;
3501 mchunkptr p = m->top;
3502 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3503 r->head = rsize | PINUSE_BIT;
3504 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3505 check_top_chunk(m, m->top);
3506 check_malloced_chunk(m, chunk2mem(p), nb);
3507 return chunk2mem(p);
3511 MALLOC_FAILURE_ACTION;
3515 /* ----------------------- system deallocation -------------------------- */
3517 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3518 static size_t release_unused_segments(mstate m) {
3519 size_t released = 0;
3520 msegmentptr pred = &m->seg;
3521 msegmentptr sp = pred->next;
3523 char* base = sp->base;
3524 size_t size = sp->size;
3525 msegmentptr next = sp->next;
3526 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3527 mchunkptr p = align_as_chunk(base);
3528 size_t psize = chunksize(p);
3529 /* Can unmap if first chunk holds entire segment and not pinned */
3530 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3531 tchunkptr tp = (tchunkptr)p;
3532 assert(segment_holds(sp, (char*)sp));
3538 unlink_large_chunk(m, tp);
3540 if (CALL_MUNMAP(base, size) == 0) {
3542 m->footprint -= size;
3543 /* unlink obsoleted record */
3547 else { /* back out if cannot unmap */
3548 insert_large_chunk(m, tp, psize);
3558 static int sys_trim(mstate m, size_t pad) {
3559 size_t released = 0;
3560 if (pad < MAX_REQUEST && is_initialized(m)) {
3561 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3563 if (m->topsize > pad) {
3564 /* Shrink top space in granularity-size units, keeping at least one */
3565 size_t unit = mparams.granularity;
3566 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3568 msegmentptr sp = segment_holding(m, (char*)m->top);
3570 if (!is_extern_segment(sp)) {
3571 if (is_mmapped_segment(sp)) {
3573 sp->size >= extra &&
3574 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3575 size_t newsize = sp->size - extra;
3576 /* Prefer mremap, fall back to munmap */
3577 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3578 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3583 else if (HAVE_MORECORE) {
3584 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3585 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3586 ACQUIRE_MORECORE_LOCK();
3588 /* Make sure end of memory is where we last set it. */
3589 char* old_br = (char*)(CALL_MORECORE(0));
3590 if (old_br == sp->base + sp->size) {
3591 char* rel_br = (char*)(CALL_MORECORE(-extra));
3592 char* new_br = (char*)(CALL_MORECORE(0));
3593 if (rel_br != CMFAIL && new_br < old_br)
3594 released = old_br - new_br;
3597 RELEASE_MORECORE_LOCK();
3601 if (released != 0) {
3602 sp->size -= released;
3603 m->footprint -= released;
3604 init_top(m, m->top, m->topsize - released);
3605 check_top_chunk(m, m->top);
3609 /* Unmap any unused mmapped segments */
3611 released += release_unused_segments(m);
3613 /* On failure, disable autotrim to avoid repeated failed future calls */
3615 m->trim_check = MAX_SIZE_T;
3618 return (released != 0)? 1 : 0;
3621 /* ---------------------------- malloc support --------------------------- */
3623 /* allocate a large request from the best fitting chunk in a treebin */
3624 static void* tmalloc_large(mstate m, size_t nb) {
3626 size_t rsize = -nb; /* Unsigned negation */
3629 compute_tree_index(nb, idx);
3631 if ((t = *treebin_at(m, idx)) != 0) {
3632 /* Traverse tree for this bin looking for node with size == nb */
3633 size_t sizebits = nb << leftshift_for_tree_index(idx);
3634 tchunkptr rst = 0; /* The deepest untaken right subtree */
3637 size_t trem = chunksize(t) - nb;
3640 if ((rsize = trem) == 0)
3644 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3645 if (rt != 0 && rt != t)
3648 t = rst; /* set t to least subtree holding sizes > nb */
3655 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3656 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3657 if (leftbits != 0) {
3659 binmap_t leastbit = least_bit(leftbits);
3660 compute_bit2idx(leastbit, i);
3661 t = *treebin_at(m, i);
3665 while (t != 0) { /* find smallest of tree or subtree */
3666 size_t trem = chunksize(t) - nb;
3671 t = leftmost_child(t);
3674 /* If dv is a better fit, return 0 so malloc will use it */
3675 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3676 if (RTCHECK(ok_address(m, v))) { /* split */
3677 mchunkptr r = chunk_plus_offset(v, nb);
3678 assert(chunksize(v) == rsize + nb);
3679 if (RTCHECK(ok_next(v, r))) {
3680 unlink_large_chunk(m, v);
3681 if (rsize < MIN_CHUNK_SIZE)
3682 set_inuse_and_pinuse(m, v, (rsize + nb));
3684 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3685 set_size_and_pinuse_of_free_chunk(r, rsize);
3686 insert_chunk(m, r, rsize);
3688 return chunk2mem(v);
3691 CORRUPTION_ERROR_ACTION(m);
3696 /* allocate a small request from the best fitting chunk in a treebin */
3697 static void* tmalloc_small(mstate m, size_t nb) {
3701 binmap_t leastbit = least_bit(m->treemap);
3702 compute_bit2idx(leastbit, i);
3704 v = t = *treebin_at(m, i);
3705 rsize = chunksize(t) - nb;
3707 while ((t = leftmost_child(t)) != 0) {
3708 size_t trem = chunksize(t) - nb;
3715 if (RTCHECK(ok_address(m, v))) {
3716 mchunkptr r = chunk_plus_offset(v, nb);
3717 assert(chunksize(v) == rsize + nb);
3718 if (RTCHECK(ok_next(v, r))) {
3719 unlink_large_chunk(m, v);
3720 if (rsize < MIN_CHUNK_SIZE)
3721 set_inuse_and_pinuse(m, v, (rsize + nb));
3723 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3724 set_size_and_pinuse_of_free_chunk(r, rsize);
3725 replace_dv(m, r, rsize);
3727 return chunk2mem(v);
3731 CORRUPTION_ERROR_ACTION(m);
3735 /* --------------------------- realloc support --------------------------- */
3737 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3738 if (bytes >= MAX_REQUEST) {
3739 MALLOC_FAILURE_ACTION;
3742 if (!PREACTION(m)) {
3743 mchunkptr oldp = mem2chunk(oldmem);
3744 size_t oldsize = chunksize(oldp);
3745 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3749 /* Try to either shrink or extend into top. Else malloc-copy-free */
3751 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3752 ok_next(oldp, next) && ok_pinuse(next))) {
3753 size_t nb = request2size(bytes);
3754 if (is_mmapped(oldp))
3755 newp = mmap_resize(m, oldp, nb);
3756 else if (oldsize >= nb) { /* already big enough */
3757 size_t rsize = oldsize - nb;
3759 if (rsize >= MIN_CHUNK_SIZE) {
3760 mchunkptr remainder = chunk_plus_offset(newp, nb);
3761 set_inuse(m, newp, nb);
3762 set_inuse(m, remainder, rsize);
3763 extra = chunk2mem(remainder);
3766 else if (next == m->top && oldsize + m->topsize > nb) {
3767 /* Expand into top */
3768 size_t newsize = oldsize + m->topsize;
3769 size_t newtopsize = newsize - nb;
3770 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3771 set_inuse(m, oldp, nb);
3772 newtop->head = newtopsize |PINUSE_BIT;
3774 m->topsize = newtopsize;
3779 USAGE_ERROR_ACTION(m, oldmem);
3788 internal_free(m, extra);
3790 check_inuse_chunk(m, newp);
3791 return chunk2mem(newp);
3794 void* newmem = internal_malloc(m, bytes);
3796 size_t oc = oldsize - overhead_for(oldp);
3797 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3798 internal_free(m, oldmem);
3806 /* --------------------------- memalign support -------------------------- */
3808 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3809 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3810 return internal_malloc(m, bytes);
3811 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3812 alignment = MIN_CHUNK_SIZE;
3813 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3814 size_t a = MALLOC_ALIGNMENT << 1;
3815 while (a < alignment) a <<= 1;
3819 if (bytes >= MAX_REQUEST - alignment) {
3820 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3821 MALLOC_FAILURE_ACTION;
3825 size_t nb = request2size(bytes);
3826 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3827 char* mem = (char*)internal_malloc(m, req);
3831 mchunkptr p = mem2chunk(mem);
3833 if (PREACTION(m)) return 0;
3834 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3836 Find an aligned spot inside chunk. Since we need to give
3837 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3838 the first calculation places us at a spot with less than
3839 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3840 We've allocated enough total room so that this is always possible.
3843 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3847 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3849 mchunkptr newp = (mchunkptr)pos;
3850 size_t leadsize = pos - (char*)(p);
3851 size_t newsize = chunksize(p) - leadsize;
3853 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3854 newp->prev_foot = p->prev_foot + leadsize;
3855 newp->head = (newsize|CINUSE_BIT);
3857 else { /* Otherwise, give back leader, use the rest */
3858 set_inuse(m, newp, newsize);
3859 set_inuse(m, p, leadsize);
3860 leader = chunk2mem(p);
3865 /* Give back spare room at the end */
3866 if (!is_mmapped(p)) {
3867 size_t size = chunksize(p);
3868 if (size > nb + MIN_CHUNK_SIZE) {
3869 size_t remainder_size = size - nb;
3870 mchunkptr remainder = chunk_plus_offset(p, nb);
3871 set_inuse(m, p, nb);
3872 set_inuse(m, remainder, remainder_size);
3873 trailer = chunk2mem(remainder);
3877 assert (chunksize(p) >= nb);
3878 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3879 check_inuse_chunk(m, p);
3882 internal_free(m, leader);
3885 internal_free(m, trailer);
3887 return chunk2mem(p);
3893 /* ------------------------ comalloc/coalloc support --------------------- */
3895 static void** ialloc(mstate m,
3901 This provides common support for independent_X routines, handling
3902 all of the combinations that can result.
3905 bit 0 set if all elements are same size (using sizes[0])
3906 bit 1 set if elements should be zeroed
3909 size_t element_size; /* chunksize of each element, if all same */
3910 size_t contents_size; /* total size of elements */
3911 size_t array_size; /* request size of pointer array */
3912 void* mem; /* malloced aggregate space */
3913 mchunkptr p; /* corresponding chunk */
3914 size_t remainder_size; /* remaining bytes while splitting */
3915 void** marray; /* either "chunks" or malloced ptr array */
3916 mchunkptr array_chunk; /* chunk for malloced ptr array */
3917 flag_t was_enabled; /* to disable mmap */
3921 /* compute array length, if needed */
3923 if (n_elements == 0)
3924 return chunks; /* nothing to do */
3929 /* if empty req, must still return chunk representing empty array */
3930 if (n_elements == 0)
3931 return (void**)internal_malloc(m, 0);
3933 array_size = request2size(n_elements * (sizeof(void*)));
3936 /* compute total element size */
3937 if (opts & 0x1) { /* all-same-size */
3938 element_size = request2size(*sizes);
3939 contents_size = n_elements * element_size;
3941 else { /* add up all the sizes */
3944 for (i = 0; i != n_elements; ++i)
3945 contents_size += request2size(sizes[i]);
3948 size = contents_size + array_size;
3951 Allocate the aggregate chunk. First disable direct-mmapping so
3952 malloc won't use it, since we would not be able to later
3953 free/realloc space internal to a segregated mmap region.
3955 was_enabled = use_mmap(m);
3957 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3963 if (PREACTION(m)) return 0;
3965 remainder_size = chunksize(p);
3967 assert(!is_mmapped(p));
3969 if (opts & 0x2) { /* optionally clear the elements */
3970 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3973 /* If not provided, allocate the pointer array as final part of chunk */
3975 size_t array_chunk_size;
3976 array_chunk = chunk_plus_offset(p, contents_size);
3977 array_chunk_size = remainder_size - contents_size;
3978 marray = (void**) (chunk2mem(array_chunk));
3979 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3980 remainder_size = contents_size;
3983 /* split out elements */
3984 for (i = 0; ; ++i) {
3985 marray[i] = chunk2mem(p);
3986 if (i != n_elements-1) {
3987 if (element_size != 0)
3988 size = element_size;
3990 size = request2size(sizes[i]);
3991 remainder_size -= size;
3992 set_size_and_pinuse_of_inuse_chunk(m, p, size);
3993 p = chunk_plus_offset(p, size);
3995 else { /* the final element absorbs any overallocation slop */
3996 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4002 if (marray != chunks) {
4003 /* final element must have exactly exhausted chunk */
4004 if (element_size != 0) {
4005 assert(remainder_size == element_size);
4008 assert(remainder_size == request2size(sizes[i]));
4010 check_inuse_chunk(m, mem2chunk(marray));
4012 for (i = 0; i != n_elements; ++i)
4013 check_inuse_chunk(m, mem2chunk(marray[i]));
4022 /* -------------------------- public routines ---------------------------- */
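/*
  Illustrative usage sketch (not part of the allocator).  A minimal
  call sequence through the public entry points defined below, assuming
  the dl-prefixed names are in effect (e.g. with USE_DL_PREFIX;
  otherwise these are the ordinary malloc/realloc/free names):

    #include <string.h>

    void example_basic_usage(void) {
      char* p = (char*)dlmalloc(100);
      if (p != 0) {
        char* q;
        memset(p, 0, dlmalloc_usable_size(p)); // usable size may exceed 100
        q = (char*)dlrealloc(p, 200);          // may move the block
        if (q != 0)
          p = q;
        dlfree(p);                             // release in either case
      }
    }
*/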
4026 void* dlmalloc(size_t bytes) {
4029 If a small request (< 256 bytes minus per-chunk overhead):
4030 1. If one exists, use a remainderless chunk in associated smallbin.
4031 (Remainderless means that there are too few excess bytes to
4032 represent as a chunk.)
4033 2. If it is big enough, use the dv chunk, which is normally the
4034 chunk adjacent to the one used for the most recent small request.
4035 3. If one exists, split the smallest available chunk in a bin,
4036 saving remainder in dv.
4037 4. If it is big enough, use the top chunk.
4038 5. If available, get memory from system and use it
4039 Otherwise, for a large request:
4040 1. Find the smallest available binned chunk that fits, and use it
4041 if it is better fitting than dv chunk, splitting if necessary.
4042 2. If better fitting than any binned chunk, use the dv chunk.
4043 3. If it is big enough, use the top chunk.
4044 4. If request size >= mmap threshold, try to directly mmap this chunk.
4045 5. If available, get memory from system and use it
4047 The ugly goto's here ensure that postaction occurs along all paths.
4050 if (!PREACTION(gm)) {
4053 if (bytes <= MAX_SMALL_REQUEST) {
4056 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4057 idx = small_index(nb);
4058 smallbits = gm->smallmap >> idx;
4060 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4062 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4063 b = smallbin_at(gm, idx);
4065 assert(chunksize(p) == small_index2size(idx));
4066 unlink_first_small_chunk(gm, b, p, idx);
4067 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4069 check_malloced_chunk(gm, mem, nb);
4073 else if (nb > gm->dvsize) {
4074 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4078 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4079 binmap_t leastbit = least_bit(leftbits);
4080 compute_bit2idx(leastbit, i);
4081 b = smallbin_at(gm, i);
4083 assert(chunksize(p) == small_index2size(i));
4084 unlink_first_small_chunk(gm, b, p, i);
4085 rsize = small_index2size(i) - nb;
4086 /* Fit here cannot be remainderless if 4byte sizes */
4087 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4088 set_inuse_and_pinuse(gm, p, small_index2size(i));
4090 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4091 r = chunk_plus_offset(p, nb);
4092 set_size_and_pinuse_of_free_chunk(r, rsize);
4093 replace_dv(gm, r, rsize);
4096 check_malloced_chunk(gm, mem, nb);
4100 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4101 check_malloced_chunk(gm, mem, nb);
4106 else if (bytes >= MAX_REQUEST)
4107 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4109 nb = pad_request(bytes);
4110 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4111 check_malloced_chunk(gm, mem, nb);
4116 if (nb <= gm->dvsize) {
4117 size_t rsize = gm->dvsize - nb;
4118 mchunkptr p = gm->dv;
4119 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4120 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4122 set_size_and_pinuse_of_free_chunk(r, rsize);
4123 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4125 else { /* exhaust dv */
4126 size_t dvs = gm->dvsize;
4129 set_inuse_and_pinuse(gm, p, dvs);
4132 check_malloced_chunk(gm, mem, nb);
4136 else if (nb < gm->topsize) { /* Split top */
4137 size_t rsize = gm->topsize -= nb;
4138 mchunkptr p = gm->top;
4139 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4140 r->head = rsize | PINUSE_BIT;
4141 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4143 check_top_chunk(gm, gm->top);
4144 check_malloced_chunk(gm, mem, nb);
4148 mem = sys_alloc(gm, nb);
4158 void dlfree(void* mem) {
4160 Consolidate freed chunks with preceding or succeeding bordering
4161 free chunks, if they exist, and then place in a bin. Intermixed
4162 with special cases for top, dv, mmapped chunks, and usage errors.
4166 mchunkptr p = mem2chunk(mem);
4168 mstate fm = get_mstate_for(p);
4169 if (!ok_magic(fm)) {
4170 USAGE_ERROR_ACTION(fm, p);
4175 #endif /* FOOTERS */
4176 if (!PREACTION(fm)) {
4177 check_inuse_chunk(fm, p);
4178 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4179 size_t psize = chunksize(p);
4180 mchunkptr next = chunk_plus_offset(p, psize);
4182 size_t prevsize = p->prev_foot;
4183 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4184 prevsize &= ~IS_MMAPPED_BIT;
4185 psize += prevsize + MMAP_FOOT_PAD;
4186 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4187 fm->footprint -= psize;
4191 mchunkptr prev = chunk_minus_offset(p, prevsize);
4194 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4196 unlink_chunk(fm, p, prevsize);
4198 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4200 set_free_with_pinuse(p, psize, next);
4209 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4210 if (!cinuse(next)) { /* consolidate forward */
4211 if (next == fm->top) {
4212 size_t tsize = fm->topsize += psize;
4214 p->head = tsize | PINUSE_BIT;
4219 if (should_trim(fm, tsize))
4223 else if (next == fm->dv) {
4224 size_t dsize = fm->dvsize += psize;
4226 set_size_and_pinuse_of_free_chunk(p, dsize);
4230 size_t nsize = chunksize(next);
4232 unlink_chunk(fm, next, nsize);
4233 set_size_and_pinuse_of_free_chunk(p, psize);
4241 set_free_with_pinuse(p, psize, next);
4242 insert_chunk(fm, p, psize);
4243 check_free_chunk(fm, p);
4248 USAGE_ERROR_ACTION(fm, p);
4255 #endif /* FOOTERS */
4258 void* dlcalloc(size_t n_elements, size_t elem_size) {
4261 if (n_elements != 0) {
4262 req = n_elements * elem_size;
4263 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4264 (req / n_elements != elem_size))
4265 req = MAX_SIZE_T; /* force downstream failure on overflow */
4267 mem = dlmalloc(req);
4268 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4269 memset(mem, 0, req);
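/*
  A note on the overflow test above (sketch, for illustration only):
  when both operands fit in 16 bits their product cannot overflow a
  32-bit or wider size_t, so the relatively expensive division check is
  performed only when at least one operand has high bits set.  A
  stand-alone version of the same idiom:

    static int mul_would_overflow(size_t n, size_t s) {
      if (((n | s) & ~(size_t)0xffff) == 0)
        return 0;                          // both < 2^16: product fits
      return n != 0 && (n * s) / n != s;   // otherwise verify by division
    }
*/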
4273 void* dlrealloc(void* oldmem, size_t bytes) {
4275 return dlmalloc(bytes);
4276 #ifdef REALLOC_ZERO_BYTES_FREES
4281 #endif /* REALLOC_ZERO_BYTES_FREES */
4286 mstate m = get_mstate_for(mem2chunk(oldmem));
4288 USAGE_ERROR_ACTION(m, oldmem);
4291 #endif /* FOOTERS */
4292 return internal_realloc(m, oldmem, bytes);
4296 void* dlmemalign(size_t alignment, size_t bytes) {
4297 return internal_memalign(gm, alignment, bytes);
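/*
  Illustrative sketch only: requesting stricter alignment.  Alignments
  that are not powers of two are rounded up by internal_memalign, and
  anything <= MALLOC_ALIGNMENT simply falls through to dlmalloc.

    void* example_aligned_block(void) {
      void* p = dlmemalign((size_t)64, 1000); // 64-byte aligned block
      return p;                               // released later with dlfree
    }
*/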
4300 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4302 size_t sz = elem_size; /* serves as 1-element array */
4303 return ialloc(gm, n_elements, &sz, 3, chunks);
4306 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4308 return ialloc(gm, n_elements, sizes, 0, chunks);
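/*
  Usage sketch (illustration only; the struct and sizes are
  hypothetical): dlindependent_comalloc carves one aggregate allocation
  into separately freeable chunks and fills the caller-supplied pointer
  array.

    struct hdr { int key; double weight; };

    void example_comalloc(void) {
      void*  mem[3];
      size_t sizes[3];
      sizes[0] = sizeof(struct hdr);
      sizes[1] = 16 * sizeof(int);
      sizes[2] = 64;
      if (dlindependent_comalloc(3, sizes, mem) != 0) {
        struct hdr* h   = (struct hdr*)mem[0];
        int*        ids = (int*)mem[1];
        char*       buf = (char*)mem[2];
        // ... use h, ids, buf; each may be dlfree'd independently ...
        dlfree(h); dlfree(ids); dlfree(buf);
      }
    }
*/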
4311 void* dlvalloc(size_t bytes) {
4314 pagesz = mparams.page_size;
4315 return dlmemalign(pagesz, bytes);
4318 void* dlpvalloc(size_t bytes) {
4321 pagesz = mparams.page_size;
4322 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4325 int dlmalloc_trim(size_t pad) {
4327 if (!PREACTION(gm)) {
4328 result = sys_trim(gm, pad);
4334 size_t dlmalloc_footprint(void) {
4335 return gm->footprint;
4338 size_t dlmalloc_max_footprint(void) {
4339 return gm->max_footprint;
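/*
  Illustrative sketch: observing and trimming the footprint.  A pad of
  0 asks sys_trim to keep no extra slack; dlmalloc_trim returns 1 only
  if memory was actually returned to the system.

    void example_trim(void) {
      size_t before = dlmalloc_footprint();
      size_t after;
      (void)dlmalloc_trim(0);
      after = dlmalloc_footprint();        // after <= before
      // dlmalloc_max_footprint() still reports the historical peak
      (void)before; (void)after;
    }
*/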
4343 struct mallinfo dlmallinfo(void) {
4344 return internal_mallinfo(gm);
4346 #endif /* NO_MALLINFO */
4348 void dlmalloc_stats() {
4349 internal_malloc_stats(gm);
4352 size_t dlmalloc_usable_size(void* mem) {
4354 mchunkptr p = mem2chunk(mem);
4356 return chunksize(p) - overhead_for(p);
4361 int dlmallopt(int param_number, int value) {
4362 return change_mparam(param_number, value);
4365 #endif /* !ONLY_MSPACES */
4367 /* ----------------------------- user mspaces ---------------------------- */
4371 static mstate init_user_mstate(char* tbase, size_t tsize) {
4372 size_t msize = pad_request(sizeof(struct malloc_state));
4374 mchunkptr msp = align_as_chunk(tbase);
4375 mstate m = (mstate)(chunk2mem(msp));
4376 memset(m, 0, msize);
4377 INITIAL_LOCK(&m->mutex);
4378 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4379 m->seg.base = m->least_addr = tbase;
4380 m->seg.size = m->footprint = m->max_footprint = tsize;
4381 m->magic = mparams.magic;
4382 m->mflags = mparams.default_mflags;
4383 disable_contiguous(m);
4385 mn = next_chunk(mem2chunk(m));
4386 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4387 check_top_chunk(m, m->top);
4391 mspace create_mspace(size_t capacity, int locked) {
4393 size_t msize = pad_request(sizeof(struct malloc_state));
4394 init_mparams(); /* Ensure pagesize etc initialized */
4396 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4397 size_t rs = ((capacity == 0)? mparams.granularity :
4398 (capacity + TOP_FOOT_SIZE + msize));
4399 size_t tsize = granularity_align(rs);
4400 char* tbase = (char*)(CALL_MMAP(tsize));
4401 if (tbase != CMFAIL) {
4402 m = init_user_mstate(tbase, tsize);
4403 m->seg.sflags = IS_MMAPPED_BIT;
4404 set_lock(m, locked);
4410 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4412 size_t msize = pad_request(sizeof(struct malloc_state));
4413 init_mparams(); /* Ensure pagesize etc initialized */
4415 if (capacity > msize + TOP_FOOT_SIZE &&
4416 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4417 m = init_user_mstate((char*)base, capacity);
4418 m->seg.sflags = EXTERN_BIT;
4419 set_lock(m, locked);
4424 size_t destroy_mspace(mspace msp) {
4426 mstate ms = (mstate)msp;
4428 msegmentptr sp = &ms->seg;
4430 char* base = sp->base;
4431 size_t size = sp->size;
4432 flag_t flag = sp->sflags;
4434 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4435 CALL_MUNMAP(base, size) == 0)
4440 USAGE_ERROR_ACTION(ms,ms);
4446 mspace versions of routines are near-clones of the global
4447 versions. This is not so nice but better than the alternatives.
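/*
  Usage sketch (illustration only): a private mspace used as a simple
  arena.  create_mspace(0, 0) selects a default initial capacity with
  no locking; destroy_mspace releases all of the mspace's memory at
  once, so the individual mspace_free calls below are optional.

    void example_mspace_arena(void) {
      mspace arena = create_mspace(0, 0);
      if (arena != 0) {
        void* a = mspace_malloc(arena, 128);
        void* b = mspace_calloc(arena, 10, sizeof(double));
        mspace_free(arena, a);
        mspace_free(arena, b);
        (void)destroy_mspace(arena);   // returns total bytes freed
      }
    }
*/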
4451 void* mspace_malloc(mspace msp, size_t bytes) {
4452 mstate ms = (mstate)msp;
4453 if (!ok_magic(ms)) {
4454 USAGE_ERROR_ACTION(ms,ms);
4457 if (!PREACTION(ms)) {
4460 if (bytes <= MAX_SMALL_REQUEST) {
4463 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4464 idx = small_index(nb);
4465 smallbits = ms->smallmap >> idx;
4467 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4469 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4470 b = smallbin_at(ms, idx);
4472 assert(chunksize(p) == small_index2size(idx));
4473 unlink_first_small_chunk(ms, b, p, idx);
4474 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4476 check_malloced_chunk(ms, mem, nb);
4480 else if (nb > ms->dvsize) {
4481 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4485 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4486 binmap_t leastbit = least_bit(leftbits);
4487 compute_bit2idx(leastbit, i);
4488 b = smallbin_at(ms, i);
4490 assert(chunksize(p) == small_index2size(i));
4491 unlink_first_small_chunk(ms, b, p, i);
4492 rsize = small_index2size(i) - nb;
4493 /* Fit here cannot be remainderless if 4byte sizes */
4494 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4495 set_inuse_and_pinuse(ms, p, small_index2size(i));
4497 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4498 r = chunk_plus_offset(p, nb);
4499 set_size_and_pinuse_of_free_chunk(r, rsize);
4500 replace_dv(ms, r, rsize);
4503 check_malloced_chunk(ms, mem, nb);
4507 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4508 check_malloced_chunk(ms, mem, nb);
4513 else if (bytes >= MAX_REQUEST)
4514 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4516 nb = pad_request(bytes);
4517 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4518 check_malloced_chunk(ms, mem, nb);
4523 if (nb <= ms->dvsize) {
4524 size_t rsize = ms->dvsize - nb;
4525 mchunkptr p = ms->dv;
4526 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4527 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4529 set_size_and_pinuse_of_free_chunk(r, rsize);
4530 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4532 else { /* exhaust dv */
4533 size_t dvs = ms->dvsize;
4536 set_inuse_and_pinuse(ms, p, dvs);
4539 check_malloced_chunk(ms, mem, nb);
4543 else if (nb < ms->topsize) { /* Split top */
4544 size_t rsize = ms->topsize -= nb;
4545 mchunkptr p = ms->top;
4546 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4547 r->head = rsize | PINUSE_BIT;
4548 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4550 check_top_chunk(ms, ms->top);
4551 check_malloced_chunk(ms, mem, nb);
4555 mem = sys_alloc(ms, nb);
4565 void mspace_free(mspace msp, void* mem) {
4567 mchunkptr p = mem2chunk(mem);
4569 mstate fm = get_mstate_for(p);
4571 mstate fm = (mstate)msp;
4572 #endif /* FOOTERS */
4573 if (!ok_magic(fm)) {
4574 USAGE_ERROR_ACTION(fm, p);
4577 if (!PREACTION(fm)) {
4578 check_inuse_chunk(fm, p);
4579 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4580 size_t psize = chunksize(p);
4581 mchunkptr next = chunk_plus_offset(p, psize);
4583 size_t prevsize = p->prev_foot;
4584 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4585 prevsize &= ~IS_MMAPPED_BIT;
4586 psize += prevsize + MMAP_FOOT_PAD;
4587 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4588 fm->footprint -= psize;
4592 mchunkptr prev = chunk_minus_offset(p, prevsize);
4595 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4597 unlink_chunk(fm, p, prevsize);
4599 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4601 set_free_with_pinuse(p, psize, next);
4610 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4611 if (!cinuse(next)) { /* consolidate forward */
4612 if (next == fm->top) {
4613 size_t tsize = fm->topsize += psize;
4615 p->head = tsize | PINUSE_BIT;
4620 if (should_trim(fm, tsize))
4624 else if (next == fm->dv) {
4625 size_t dsize = fm->dvsize += psize;
4627 set_size_and_pinuse_of_free_chunk(p, dsize);
4631 size_t nsize = chunksize(next);
4633 unlink_chunk(fm, next, nsize);
4634 set_size_and_pinuse_of_free_chunk(p, psize);
4642 set_free_with_pinuse(p, psize, next);
4643 insert_chunk(fm, p, psize);
4644 check_free_chunk(fm, p);
4649 USAGE_ERROR_ACTION(fm, p);
4656 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4659 mstate ms = (mstate)msp;
4660 if (!ok_magic(ms)) {
4661 USAGE_ERROR_ACTION(ms,ms);
4664 if (n_elements != 0) {
4665 req = n_elements * elem_size;
4666 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4667 (req / n_elements != elem_size))
4668 req = MAX_SIZE_T; /* force downstream failure on overflow */
4670 mem = internal_malloc(ms, req);
4671 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4672 memset(mem, 0, req);
4676 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4678 return mspace_malloc(msp, bytes);
4679 #ifdef REALLOC_ZERO_BYTES_FREES
4681 mspace_free(msp, oldmem);
4684 #endif /* REALLOC_ZERO_BYTES_FREES */
4687 mchunkptr p = mem2chunk(oldmem);
4688 mstate ms = get_mstate_for(p);
4690 mstate ms = (mstate)msp;
4691 #endif /* FOOTERS */
4692 if (!ok_magic(ms)) {
4693 USAGE_ERROR_ACTION(ms,ms);
4696 return internal_realloc(ms, oldmem, bytes);
4700 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4701 mstate ms = (mstate)msp;
4702 if (!ok_magic(ms)) {
4703 USAGE_ERROR_ACTION(ms,ms);
4706 return internal_memalign(ms, alignment, bytes);
4709 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4710 size_t elem_size, void* chunks[]) {
4711 size_t sz = elem_size; /* serves as 1-element array */
4712 mstate ms = (mstate)msp;
4713 if (!ok_magic(ms)) {
4714 USAGE_ERROR_ACTION(ms,ms);
4717 return ialloc(ms, n_elements, &sz, 3, chunks);
4720 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4721 size_t sizes[], void* chunks[]) {
4722 mstate ms = (mstate)msp;
4723 if (!ok_magic(ms)) {
4724 USAGE_ERROR_ACTION(ms,ms);
4727 return ialloc(ms, n_elements, sizes, 0, chunks);
4730 int mspace_trim(mspace msp, size_t pad) {
4732 mstate ms = (mstate)msp;
4734 if (!PREACTION(ms)) {
4735 result = sys_trim(ms, pad);
4740 USAGE_ERROR_ACTION(ms,ms);
4745 void mspace_malloc_stats(mspace msp) {
4746 mstate ms = (mstate)msp;
4748 internal_malloc_stats(ms);
4751 USAGE_ERROR_ACTION(ms,ms);
4755 size_t mspace_footprint(mspace msp) {
4757 mstate ms = (mstate)msp;
4759 result = ms->footprint;
4761 USAGE_ERROR_ACTION(ms,ms);
4766 size_t mspace_max_footprint(mspace msp) {
4768 mstate ms = (mstate)msp;
4770 result = ms->max_footprint;
4772 USAGE_ERROR_ACTION(ms,ms);
4778 struct mallinfo mspace_mallinfo(mspace msp) {
4779 mstate ms = (mstate)msp;
4780 if (!ok_magic(ms)) {
4781 USAGE_ERROR_ACTION(ms,ms);
4783 return internal_mallinfo(ms);
4785 #endif /* NO_MALLINFO */
4787 int mspace_mallopt(int param_number, int value) {
4788 return change_mparam(param_number, value);
4791 #endif /* MSPACES */
4793 /* -------------------- Alternative MORECORE functions ------------------- */
4796 Guidelines for creating a custom version of MORECORE:
4798 * For best performance, MORECORE should allocate in multiples of pagesize.
4799 * MORECORE may allocate more memory than requested. (Or even less,
4800 but this will usually result in a malloc failure.)
4801 * MORECORE must not allocate memory when given argument zero, but
4802 instead return one past the end address of memory from previous nonzero call.
4804 * For best performance, consecutive calls to MORECORE with positive
4805 arguments should return increasing addresses, indicating that
4806 space has been contiguously extended.
4807 * Even though consecutive calls to MORECORE need not return contiguous
4808 addresses, it must be OK for malloc'ed chunks to span multiple
4809 regions in those cases where they do happen to be contiguous.
4810 * MORECORE need not handle negative arguments -- it may instead
4811 just return MFAIL when given negative arguments.
4812 Negative arguments are always multiples of pagesize. MORECORE
4813 must not misinterpret negative args as large positive unsigned
4814 args. You can suppress all such calls from even occurring by defining
4815 MORECORE_CANNOT_TRIM.
4817 As an example alternative MORECORE, here is a custom allocator
4818 kindly contributed for pre-OSX macOS. It uses virtually but not
4819 necessarily physically contiguous non-paged memory (locked in,
4820 present and won't get swapped out). You can use it by uncommenting
4821 this section, adding some #includes, and setting up the appropriate defines.
4824 #define MORECORE osMoreCore
4826 There is also a shutdown routine that should somehow be called for
4827 cleanup upon program exit.
4829 #define MAX_POOL_ENTRIES 100
4830 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4831 static int next_os_pool;
4832 void *our_os_pools[MAX_POOL_ENTRIES];
4834 void *osMoreCore(int size)
4837 static void *sbrk_top = 0;
4841 if (size < MINIMUM_MORECORE_SIZE)
4842 size = MINIMUM_MORECORE_SIZE;
4843 if (CurrentExecutionLevel() == kTaskLevel)
4844 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4847 return (void *) MFAIL;
4849 // save ptrs so they can be freed during cleanup
4850 our_os_pools[next_os_pool] = ptr;
4852 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4853 sbrk_top = (char *) ptr + size;
4858 // we don't currently support shrink behavior
4859 return (void *) MFAIL;
4867 // cleanup any allocated memory pools
4868 // called as last thing before shutting down driver
4870 void osCleanupMem(void)
4874 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4877 PoolDeallocate(*ptr);
4885 /* -----------------------------------------------------------------------
4887 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4888 * Add max_footprint functions
4889 * Ensure all appropriate literals are size_t
4890 * Fix conditional compilation problem for some #define settings
4891 * Avoid concatenating segments with the one provided
4892 in create_mspace_with_base
4893 * Rename some variables to avoid compiler shadowing warnings
4894 * Use explicit lock initialization.
4895 * Better handling of sbrk interference.
4896 * Simplify and fix segment insertion, trimming and mspace_destroy
4897 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4898 * Thanks especially to Dennis Flanagan for help on these.
4900 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4901 * Fix memalign brace error.
4903 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
4904 * Fix improper #endif nesting in C++
4905 * Add explicit casts needed for C++
4907 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
4908 * Use trees for large bins
4910 * Use segments to unify sbrk-based and mmap-based system allocation,
4911 removing need for emulation on most platforms without sbrk.
4912 * Default safety checks
4913 * Optional footer checks. Thanks to William Robertson for the idea.
4914 * Internal code refactoring
4915 * Incorporate suggestions and platform-specific changes.
4916 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4917 Aaron Bachmann, Emery Berger, and others.
4918 * Speed up non-fastbin processing enough to remove fastbins.
4919 * Remove useless cfree() to avoid conflicts with other apps.
4920 * Remove internal memcpy, memset. Compilers handle builtins better.
4921 * Remove some options that no one ever used and rename others.
4923 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
4924 * Fix malloc_state bitmap array misdeclaration
4926 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
4927 * Allow tuning of FIRST_SORTED_BIN_SIZE
4928 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4929 * Better detection and support for non-contiguousness of MORECORE.
4930 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4931 * Bypass most of malloc if no frees. Thanks To Emery Berger.
4932 * Fix freeing of old top non-contiguous chunk in sysmalloc.
4933 * Raised default trim and map thresholds to 256K.
4934 * Fix mmap-related #defines. Thanks to Lubos Lunak.
4935 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4936 * Branch-free bin calculation
4937 * Default trim and mmap thresholds now 256K.
4939 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
4940 * Introduce independent_comalloc and independent_calloc.
4941 Thanks to Michael Pachos for motivation and help.
4942 * Make optional .h file available
4943 * Allow > 2GB requests on 32bit systems.
4944 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4945 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4947 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4949 * memalign: check alignment arg
4950 * realloc: don't try to shift chunks backwards, since this
4951 leads to more fragmentation in some programs and doesn't
4952 seem to help in any others.
4953 * Collect all cases in malloc requiring system memory into sysmalloc
4954 * Use mmap as backup to sbrk
4955 * Place all internal state in malloc_state
4956 * Introduce fastbins (although similar to 2.5.1)
4957 * Many minor tunings and cosmetic improvements
4958 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4959 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4960 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4961 * Include errno.h to support default failure action.
4963 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
4964 * return null for negative arguments
4965 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4966 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4967 (e.g. WIN32 platforms)
4968 * Cleanup header file inclusion for WIN32 platforms
4969 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4970 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4971 memory allocation routines
4972 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4973 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4974 usage of 'assert' in non-WIN32 code
4975 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4977 * Always call 'fREe()' rather than 'free()'
4979 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
4980 * Fixed ordering problem with boundary-stamping
4982 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4983 * Added pvalloc, as recommended by H.J. Liu
4984 * Added 64bit pointer support mainly from Wolfram Gloger
4985 * Added anonymously donated WIN32 sbrk emulation
4986 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4987 * malloc_extend_top: fix mask error that caused wastage after
4989 * Add linux mremap support code from HJ Liu
4991 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4992 * Integrated most documentation with the code.
4993 * Add support for mmap, with help from
4994 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4995 * Use last_remainder in more cases.
4996 * Pack bins using idea from colin@nyx10.cs.du.edu
4997 * Use ordered bins instead of best-fit threshold
4998 * Eliminate block-local decls to simplify tracing and debugging.
4999 * Support another case of realloc via move into top
5000 * Fix error occurring when initial sbrk_base not word-aligned.
5001 * Rely on page size for units instead of SBRK_UNIT to
5002 avoid surprises about sbrk alignment conventions.
5003 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5004 (raymond@es.ele.tue.nl) for the suggestion.
5005 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5006 * More precautions for cases where other routines call sbrk,
5007 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5008 * Added macros etc., allowing use in linux libc from
5009 H.J. Lu (hjl@gnu.ai.mit.edu)
5010 * Inverted this history list
5012 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5013 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5014 * Removed all preallocation code since under current scheme
5015 the work required to undo bad preallocations exceeds
5016 the work saved in good cases for most test programs.
5017 * No longer use return list or unconsolidated bins since
5018 no scheme using them consistently outperforms those that don't
5019 given above changes.
5020 * Use best fit for very large chunks to prevent some worst-cases.
5021 * Added some support for debugging
5023 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5024 * Removed footers when chunks are in use. Thanks to
5025 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5027 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5028 * Added malloc_trim, with help from Wolfram Gloger
5029 (wmglo@Dent.MED.Uni-Muenchen.DE).
5031 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5033 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5034 * realloc: try to expand in both directions
5035 * malloc: swap order of clean-bin strategy;
5036 * realloc: only conditionally expand backwards
5037 * Try not to scavenge used bins
5038 * Use bin counts as a guide to preallocation
5039 * Occasionally bin return list chunks in first scan
5040 * Add a few optimizations from colin@nyx10.cs.du.edu
5042 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5043 * faster bin computation & slightly different binning
5044 * merged all consolidations to one part of malloc proper
5045 (eliminating old malloc_find_space & malloc_clean_bin)
5046 * Scan 2 returns chunks (not just 1)
5047 * Propagate failure in realloc if malloc returns 0
5048 * Add stuff to allow compilation on non-ANSI compilers
5049 from kpv@research.att.com
5051 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5052 * removed potential for odd address access in prev_chunk
5053 * removed dependency on getpagesize.h
5054 * misc cosmetics and a bit more internal documentation
5055 * anticosmetics: mangled names in macros to evade debugger strangeness
5056 * tested on sparc, hp-700, dec-mips, rs6000
5057 with gcc & native cc (hp, dec only) allowing
5058 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5060 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5061 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5062 structure of old version, but most details differ.)