Hi,
I am looking for info about how much memory is being used by a process in Linux. When we use malloc() and free(), how exactly is the memory fetched from the kernel?
Say I call malloc() a couple of times and then free all of the malloc'ed memory: is this memory returned to the kernel immediately upon every free()?
Any pointers on the glibc malloc implementation?
Best Regards,
Vivek Purushotham
glibc's malloc implementation uses brk() and mmap() to allocate memory.
mmap() is only used for allocating big memory chunks. glibc's malloc is
quite reluctant to release memory back to the kernel, and internal heap
fragmentation makes it quite hard as well. All in all, expect glibc to give some
memory back only when you free a lot of memory. But you can see how it
goes by keeping an eye on a process's memory usage via top. Or if you want
more detailed info, look in /proc/$PID/smaps and look for the "[heap]" mapping.
Memory allocated via mmap() is easy to release and won't fragment, so that
should be fine.
What about it? It's just the amount of the process's data that is currently resident in RAM
(as opposed to swapped out, or not yet loaded from an mmap()'d file, or something)... It's not
a Linux-specific thing, but common Unix terminology...
Thanks for your valuable suggestions, and I appreciate the quick reply. It was really useful.
I am looking for this kind of behaviour w.r.t. a process:
a) The process sets a threshold value X.
b) When the amount of freed memory that was allocated through mmap()/sbrk() exceeds X, it goes back to the system. Until then, it remains with the process. This guarantees that at any moment the process does not hold more than X bytes of free memory.
I am looking for this kind of behaviour because the mallopt() option only seems to give control per allocation/free. That will turn out to be expensive for a process that does a lot of dynamic memory allocation and freeing. Please correct me if I'm wrong here.
Best Regards,
Vivek Purushotham
I'm not sure what you mean about mallopt()... I'm pretty sure the behavior you want
can be obtained with mallopt(M_TRIM_THRESHOLD)... That only affects normal heap
memory; as has been stated, any mmap()'d memory will immediately go back to the
system at free() time... So you could also tweak M_MMAP_{THRESHOLD,MAX} to
control when/if mmap() gets used...
Oh, but another complication (which i3839 already mentioned, too) is fragmentation...
If the freed heap memory isn't all contiguous, glibc can't really give it back to the system,
even if the total amount exceeds your given threshold... That threshold is the amount
of contiguous freed memory at the top of the heap that triggers the trimming... So there's
no real way to get exactly what you want (a hard limit on total freed memory, regardless
of how fragmented it happens to be)...
If you have one gigabyte of heap, but only one tiny allocated chunk of
memory at the end of it, no memory can be freed. (Though strictly speaking, I
think if you remap /dev/null over the unused chunks they're basically freed,
but I don't know whether glibc does that.)
All in all, doing what Rob said and lowering the mmap threshold is your
best bet.
Thanks for your replies, Rob and i3839.
We tried this out by setting the threshold to a lower value with mallopt().
One behavior I saw was that when we set the threshold to 16 and started allocating chunks of around 1200 bytes, the total memory consumed was about 4 times the memory consumed without modifying the threshold.
I don't think I've quite understood how the threshold value should be chosen based on the sizes of the mallocs we will be doing...
Best Regards,
Vivek Purushotham
One page is in general 4096 bytes big, and the minimum size you can allocate
with mmap() is one page. If you set the threshold too low, you will indeed
allocate much more memory than needed. It's a trade-off between speed of
allocation (using the heap should be faster than mmap()), the amount of memory
wasted to over-allocation, and the amount of memory wasted to fragmentation.
Sir,
I don't know about Linux.
How can I learn it easily?