Total memory used by Python process?
Is there a way for a Python program to determine how much memory it's currently using? I've seen discussions about memory usage for a single object, but what I need is total memory usage for the process, so that I can determine when it's necessary to start discarding cached data.
python memory-management
asked Jun 2 '09 at 9:50 by rwallace; edited Aug 28 '12 at 15:16 by jmlane
12 Answers
Here is a useful solution that works for various operating systems, including Linux, Windows 7, etc.:
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # in bytes
On older psutil releases (such as the one current for my Python 2.7 install), the last line should instead be
print(process.get_memory_info()[0])
since the method was only later renamed to memory_info().
Note: run pip install psutil if it is not installed yet.
psutil is cross platform and can return the same values as the ps command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info
– amos, Jul 3 '14 at 21:38
People from the future: apparently psutil changed its API; on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle, Jul 30 '15 at 11:40
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous, Sep 1 '15 at 5:34
Note that psutil is not in the standard library.
– grisaitis, Aug 18 '16 at 19:11
This is in bytes, by the way.
– wordsforthewise, Aug 25 '17 at 7:10
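To tie this back to the question's cache-discarding use case, here is a minimal sketch built on psutil. The threshold, the cache dict, and the eviction policy (drop the oldest-inserted entry) are all hypothetical choices for illustration:

```python
import os

import psutil

# Hypothetical limit for illustration: 500 MB.
MEMORY_LIMIT_BYTES = 500 * 1024 * 1024


def rss_bytes():
    """Resident set size of the current process, in bytes."""
    return psutil.Process(os.getpid()).memory_info().rss


def evict_if_needed(cache, limit=MEMORY_LIMIT_BYTES):
    """Discard cached entries until the process RSS drops below the limit."""
    while cache and rss_bytes() > limit:
        # Dicts are insertion-ordered in Python 3.7+, so this drops
        # the oldest-inserted entry first.
        cache.pop(next(iter(cache)))
```

Note that RSS only shrinks once the interpreter actually returns memory to the OS, so eviction may need to free many entries before the reading moves.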
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage() function from the standard library module resource. The resulting object has the attribute ru_maxrss, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2) describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status, which is in kilobytes.
The getrusage() function can also be given resource.RUSAGE_CHILDREN to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH for total (self and child) process usage.
resource is a standard library module.
If you only care about Linux, you can just check the /proc/self/status file as described in a similar question.
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike, Oct 6 '11 at 3:19
resource is not cross-platform. The docs explicitly specify it as Platforms: Unix.
– Henrik Heimbuerger, Apr 4 '13 at 14:45
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil, Dec 6 '13 at 23:33
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) are actually very clear about it. It is 4096 on my box.
– Ben Lin, Apr 15 '14 at 16:53
@BenLin Those Python docs are clearly wrong, or there is a bug in the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss, Jul 8 '15 at 17:56
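Since the comments above disagree about units, a small wrapper can normalise ru_maxrss before use. This is a sketch assuming the commonly reported behaviour (bytes on macOS, kilobytes on Linux and most other Unixes); verify against your platform's getrusage(2) man page:

```python
import resource  # standard library, Unix only
import sys


def peak_rss_bytes():
    """Peak resident set size of this process, normalised to bytes.

    ru_maxrss is reported in bytes on macOS but in kilobytes on Linux
    (assumption based on the respective man pages).
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        return rss        # already bytes on macOS
    return rss * 1024     # kilobytes elsewhere
```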
On Windows, you can use WMI (home page, cheeseshop):
def memory():
    import os
    from wmi import WMI
    w = WMI('.')
    result = w.query("SELECT WorkingSet FROM Win32_PerfRawData_PerfProc_Process WHERE IDProcess=%d" % os.getpid())
    return int(result[0].WorkingSet)
On Linux (from the Python Cookbook, http://code.activestate.com/recipes/286222/):
import os

_proc_status = '/proc/%d/status' % os.getpid()

_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def _VmB(VmKey):
    '''Private.
    '''
    global _proc_status, _scale
    # get pseudo file /proc/<pid>/status
    try:
        t = open(_proc_status)
        v = t.read()
        t.close()
    except IOError:
        return 0.0  # non-Linux?
    # get VmKey line, e.g. 'VmRSS:  9999 kB\n ...'
    i = v.index(VmKey)
    v = v[i:].split(None, 3)  # whitespace
    if len(v) < 3:
        return 0.0  # invalid format?
    # convert Vm value to bytes
    return float(v[1]) * _scale[v[2]]

def memory(since=0.0):
    '''Return memory usage in bytes.
    '''
    return _VmB('VmSize:') - since

def resident(since=0.0):
    '''Return resident memory usage in bytes.
    '''
    return _VmB('VmRSS:') - since

def stacksize(since=0.0):
    '''Return stack size in bytes.
    '''
    return _VmB('VmStk:') - since
The Windows code doesn't work for me. This change does: return int(result[0].WorkingSet)
– John Fouhy, Aug 31 '10 at 0:46
This Windows code doesn't work for me on Windows 7 x64, even after John Fouhy's comment modification.
– Basj, Feb 7 '14 at 15:59
What is the error?
– codeape, Feb 7 '14 at 19:58
John Fouhy's change works for me on Windows 7 x64.
– simonzack, Jul 4 '14 at 10:58
I have this error: return [ wmi_object(obj, instance_of, fields) for obj in self._raw_query(wql) ] File "C:\Python27\lib\site-packages\win32com\client\util.py", line 84, in next return _get_good_object_(self._iter.next(), resultCLSID = self.resultCLSID) pywintypes.com_error: (-2147217385, 'OLE error 0x80041017', None, None) — can anyone help me? I have Win 8 x64 but Python on x32.
– Radu Vlad, Sep 9 '14 at 6:06
On Unix, you can use the ps tool to monitor it:
$ ps u -p 1347 | awk '{sum=sum+$6}; END {print sum/1024}'
where 1347 is some process id. The result is in MB, since the RSS column of ps is in kilobytes.
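If you want the same ps-based reading from inside Python without third-party dependencies, a stdlib-only sketch might look like this (it assumes a ps that supports the POSIX-style -o rss= output):

```python
import os
import subprocess


def rss_kilobytes(pid=None):
    """Resident set size of a process in kilobytes, read via `ps`.

    Works on Unixes whose `ps` supports `-o rss=` (the trailing `=`
    suppresses the header line).
    """
    if pid is None:
        pid = os.getpid()
    out = subprocess.check_output(['ps', '-o', 'rss=', '-p', str(pid)])
    return int(out.decode().strip())
```

This shells out on every call, so it is fine for occasional checks but too slow for tight loops.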
Heapy (and friends) may be what you're looking for.
Also, caches typically have a fixed upper limit on their size to solve the sort of problem you're talking about. For instance, check out this LRU cache decorator.
Flagged as a link-only answer.
– ArtOfWarfare, Nov 25 '14 at 15:39
I like it, thanks @bayer. Building on that, here is a tool to total memory across a set of processes:
# Megabytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum/1024 " MB"}'
87.9492 MB
# Kilobytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum " KB"}'
90064 KB
Here is my process list:
$ ps aux | grep python
root 943 0.0 0.1 53252 9524 ? Ss Aug19 52:01 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root 950 0.6 0.4 299680 34220 ? Sl Aug19 568:52 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root 3803 0.2 0.4 315692 36576 ? S 12:43 0:54 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
jonny 23325 0.0 0.1 47460 9076 pts/0 S+ 17:40 0:00 python
jonny 24651 0.0 0.0 13076 924 pts/4 S+ 18:06 0:00 grep python
Reference
- memory - Linux: find out what process is using all the RAM? - Super User
- Total memory used by Python process? - Stack Overflow
- linux - ps aux output meaning - Super User
Just an optimisation of the code to avoid multiple pipes: ps aux | awk '/python/{sum+=$6}; END {print sum/1024 " MB"}'
– NeronLeVelu, Oct 4 '17 at 5:06
Below is my function decorator, which tracks how much memory the process consumed before the function call, how much it uses after the call, and how long the function ran.
import time
import os
import psutil

def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))

def get_process_memory():
    process = psutil.Process(os.getpid())
    return process.memory_info().rss  # use get_memory_info().rss on psutil < 3.0

def track(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper
So, when you have some function decorated with it:
from utils import track

@track
def list_create(n):
    print("inside list create")
    x = [1] * n
    return x
You will be able to see this output:
inside list create
list_create: memory before: 45,928,448, after: 46,211,072, consumed: 282,624; exec time: 00:00:00
Using the pywin32 extensions directly on Windows (this queries the working set size, in bytes, for the current process):
import os, win32api, win32con, win32process
han = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION|win32con.PROCESS_VM_READ, 0, os.getpid())
process_memory = int(win32process.GetProcessMemoryInfo(han)['WorkingSetSize'])
This could be improved with some explanation of what it does and how it works.
– ArtOfWarfare, Nov 25 '14 at 15:39
Based on the large number returned (8 digits) and how I'm not doing much of anything, I'm guessing this has to be bytes? So it's around 28.5 MB for a rather idle interactive instance. (Wow... I didn't even realize the above comment was mine from 4 years ago... that's weird.)
– ArtOfWarfare, Jun 8 at 18:50
Current memory usage of the current process on Linux, for Python 2, Python 3, and pypy, without any imports:
def getCurrentMemoryUsage():
    ''' Memory usage in kB '''
    with open('/proc/self/status') as f:
        # Take the number between 'VmRSS:' and the trailing ' kB'
        memusage = f.read().split('VmRSS:')[1].split('\n')[0][:-3]
    return int(memusage.strip())
Tested on Linux 4.4 and 4.9, but even an early Linux version should work.
Looking in man proc and searching for the info on the /proc/$PID/status file, it mentions minimum versions for some fields (like Linux 2.6.10 for "VmPTE"), but the "VmRSS" field (which I use here) has no such mention. Therefore I assume it has been in there since an early version.
Using sh and os to wrap bayer's answer in Python:
float(sh.awk(sh.ps('u','-p',os.getpid()),'{sum=sum+$6}; END {print sum/1024}'))
The answer is in megabytes.
Should be noted that sh isn't a stdlib module. It's installable with pip, though.
– Jürgen A. Erhard, Sep 4 '13 at 0:00
For Python 3.6 and psutil 5.4.5 it is easier to use the memory_percent() function listed here.
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_percent())
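If you need an absolute figure as well, memory_percent() can be combined with psutil.virtual_memory(); a rough sketch (the bytes figure is an approximation derived from the percentage, not an exact RSS reading):

```python
import os

import psutil


def memory_percent_and_bytes():
    """Return (percent_of_total_RAM, approximate_bytes) for this process."""
    proc = psutil.Process(os.getpid())
    pct = proc.memory_percent()            # RSS as a % of total physical RAM
    total = psutil.virtual_memory().total  # total physical memory, in bytes
    return pct, int(total * pct / 100.0)
```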
Even easier to use than /proc/self/status: /proc/self/statm. It's just a space-delimited list of several statistics. I haven't been able to tell if both files are always present.
/proc/[pid]/statm provides information about memory usage, measured in pages. The columns are:
- size (1) total program size
(same as VmSize in /proc/[pid]/status)
- resident (2) resident set size
(same as VmRSS in /proc/[pid]/status)
- shared (3) number of resident shared pages (i.e., backed by a file)
(same as RssFile+RssShmem in /proc/[pid]/status)
- text (4) text (code)
- lib (5) library (unused since Linux 2.6; always 0)
- data (6) data + stack
- dt (7) dirty pages (unused since Linux 2.6; always 0)
Here's a simple example:
from pathlib import Path
from resource import getpagesize

def get_resident_set_size():
    # Columns are: size resident shared text lib data dt
    statm = Path('/proc/self/statm').read_text()
    fields = statm.split()
    return int(fields[1]) * getpagesize()

data = []
start_memory = get_resident_set_size()
for _ in range(10):
    data.append('X' * 100000)
    print(get_resident_set_size() - start_memory)
That produces output that looks something like this:
0
0
368640
368640
368640
638976
638976
909312
909312
909312
You can see that it jumps by about 300,000 bytes after roughly 3 allocations of 100,000 bytes.
12 Answers
12
active
oldest
votes
12 Answers
12
active
oldest
votes
active
oldest
votes
active
oldest
votes
Here is a useful solution that works for various operating systems, including Linux, Windows 7, etc.:
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # in bytes
On my current Python 2.7 install, the last line should be
print(process.get_memory_info()[0])
instead (there was a change in the API).
Note: do pip install psutil
if it is not installed yet.
3
psutil
is cross platform and can return the same values as theps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info
– amos
Jul 3 '14 at 21:38
6
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
3
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
21
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
|
show 7 more comments
Here is a useful solution that works for various operating systems, including Linux, Windows 7, etc.:
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # in bytes
On my current Python 2.7 install, the last line should be
print(process.get_memory_info()[0])
instead (there was a change in the API).
Note: do pip install psutil
if it is not installed yet.
3
psutil
is cross platform and can return the same values as theps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info
– amos
Jul 3 '14 at 21:38
6
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
3
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
21
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
|
show 7 more comments
Here is a useful solution that works for various operating systems, including Linux, Windows 7, etc.:
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # in bytes
On my current Python 2.7 install, the last line should be
print(process.get_memory_info()[0])
instead (there was a change in the API).
Note: do pip install psutil
if it is not installed yet.
Here is a useful solution that works for various operating systems, including Linux, Windows 7, etc.:
import os
import psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # in bytes
On my current Python 2.7 install, the last line should be
print(process.get_memory_info()[0])
instead (there was a change in the API).
Note: do pip install psutil
if it is not installed yet.
edited Dec 10 at 16:31
answered Feb 7 '14 at 16:11
Basj
5,42429103223
5,42429103223
3
psutil
is cross platform and can return the same values as theps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info
– amos
Jul 3 '14 at 21:38
6
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
3
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
21
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
|
show 7 more comments
3
psutil
is cross platform and can return the same values as theps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info
– amos
Jul 3 '14 at 21:38
6
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
3
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
21
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
3
3
psutil
is cross platform and can return the same values as the ps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info– amos
Jul 3 '14 at 21:38
psutil
is cross platform and can return the same values as the ps
command line tool: pythonhosted.org/psutil/#psutil.Process.memory_info– amos
Jul 3 '14 at 21:38
6
6
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
People from the future, apparently psutil changed its API or something, but on my machine (psutil.__version__ = 3.1.1) the get_memory_info function was renamed to memory_info.
– Mikle
Jul 30 '15 at 11:40
3
3
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
Much easier than the other solutions and isn't UNIX-specific. Thanks.
– fantabolous
Sep 1 '15 at 5:34
21
21
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
Note that psutil is not in the standard library
– grisaitis
Aug 18 '16 at 19:11
10
10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
This is in bytes, by the way.
– wordsforthewise
Aug 25 '17 at 7:10
|
show 7 more comments
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage()
function from the standard library module resource
. The resulting object has the attribute ru_maxrss
, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2)
describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status
, which is in kilobytes.
The getrusage()
function can also be given resource.RUSAGE_CHILDREN
to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH
for total (self and child) process usage.
resource
is a standard library module.
If you only care about Linux, you can just check the /proc/self/status
file as described in a similar question.
1
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
13
resource
is not cross-platform. The docs explicitly specify it asPlatforms: Unix
.
– Henrik Heimbuerger
Apr 4 '13 at 14:45
3
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
9
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
3
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
|
show 7 more comments
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage()
function from the standard library module resource
. The resulting object has the attribute ru_maxrss
, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2)
describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status
, which is in kilobytes.
The getrusage()
function can also be given resource.RUSAGE_CHILDREN
to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH
for total (self and child) process usage.
resource
is a standard library module.
If you only care about Linux, you can just check the /proc/self/status
file as described in a similar question.
1
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
13
resource
is not cross-platform. The docs explicitly specify it asPlatforms: Unix
.
– Henrik Heimbuerger
Apr 4 '13 at 14:45
3
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
9
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
3
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
|
show 7 more comments
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage()
function from the standard library module resource
. The resulting object has the attribute ru_maxrss
, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2)
describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status
, which is in kilobytes.
The getrusage()
function can also be given resource.RUSAGE_CHILDREN
to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH
for total (self and child) process usage.
resource
is a standard library module.
If you only care about Linux, you can just check the /proc/self/status
file as described in a similar question.
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage()
function from the standard library module resource
. The resulting object has the attribute ru_maxrss
, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2)
describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status
, which is in kilobytes.
The getrusage()
function can also be given resource.RUSAGE_CHILDREN
to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH
for total (self and child) process usage.
resource
is a standard library module.
If you only care about Linux, you can just check the /proc/self/status
file as described in a similar question.
edited May 23 '17 at 12:34
Community♦
11
11
answered Oct 6 '11 at 1:23
Nathan Craike
3,01621719
3,01621719
1
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
13
resource
is not cross-platform. The docs explicitly specify it asPlatforms: Unix
.
– Henrik Heimbuerger
Apr 4 '13 at 14:45
3
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
9
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
3
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
|
show 7 more comments
1
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
13
resource
is not cross-platform. The docs explicitly specify it asPlatforms: Unix
.
– Henrik Heimbuerger
Apr 4 '13 at 14:45
3
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
9
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
3
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
1
1
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
Okay, will do. I wasn't sure if SO had a process for merging questions or what. The duplicate post was partly to show people there was a standard library solution on both questions... and partly for the rep. ;) Should I delete this answer?
– Nathan Craike
Oct 6 '11 at 3:19
13
13
resource
is not cross-platform. The docs explicitly specify it as Platforms: Unix
.– Henrik Heimbuerger
Apr 4 '13 at 14:45
resource
is not cross-platform. The docs explicitly specify it as Platforms: Unix
.– Henrik Heimbuerger
Apr 4 '13 at 14:45
3
3
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
Mac OS definitely returns the RSS in bytes, Linux returns it in kilobytes.
– Neil
Dec 6 '13 at 23:33
9
9
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
The units are NOT in kilobytes. It is platform dependent, so you have to use resource.getpagesize() to find out. The given Python docs (docs.python.org/2/library/resource.html#resource-usage) is actually very clear about it. It is 4096 in my box.
– Ben Lin
Apr 15 '14 at 16:53
3
3
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
@BenLin Those Python docs are clearly wrong, or there is a bug on the Mac version. The unit used by getrusage and the value returned by getpagesize are definitely different.
– Amoss
Jul 8 '15 at 17:56
|
show 7 more comments
On Windows, you can use WMI (home page, cheeseshop):
def memory():
import os
from wmi import WMI
w = WMI('.')
result = w.query("SELECT WorkingSet FROM Win32_PerfRawData_PerfProc_Process WHERE IDProcess=%d" % os.getpid())
return int(result[0].WorkingSet)
On Linux (from python cookbook http://code.activestate.com/recipes/286222/:
import os
_proc_status = '/proc/%d/status' % os.getpid()
_scale = 'kB': 1024.0, 'mB': 1024.0*1024.0,
'KB': 1024.0, 'MB': 1024.0*1024.0
def _VmB(VmKey):
'''Private.
'''
global _proc_status, _scale
# get pseudo file /proc/<pid>/status
try:
t = open(_proc_status)
v = t.read()
t.close()
except:
return 0.0 # non-Linux?
# get VmKey line e.g. 'VmRSS: 9999 kBn ...'
i = v.index(VmKey)
v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
return 0.0 # invalid format?
# convert Vm value to bytes
return float(v[1]) * _scale[v[2]]
def memory(since=0.0):
'''Return memory usage in bytes.
'''
return _VmB('VmSize:') - since
def resident(since=0.0):
'''Return resident memory usage in bytes.
'''
return _VmB('VmRSS:') - since
def stacksize(since=0.0):
'''Return stack size in bytes.
'''
return _VmB('VmStk:') - since
14
The Windows code doesn't work for me. This change does:return int(result[0].WorkingSet)
– John Fouhy
Aug 31 '10 at 0:46
1
This Windows code doesn't work for me on Windows 7 x64, even after John Fouhy's comment modification.
– Basj
Feb 7 '14 at 15:59
1
What is the error?
– codeape
Feb 7 '14 at 19:58
1
John Fouhy's change works for me on Windows 7 x64.
– simonzack
Jul 4 '14 at 10:58
I have this error: return [ wmi_object (obj, instance_of, fields) for obj in self._raw_query(wql) ] File "C:Python27libsite-packageswin32comclientutil.py", line 84, in next return _get_good_object_(self._iter.next(), resultCLSID = self.resultCLSID) pywintypes.com_error: (-2147217385, 'OLE error 0x80041017', None, None) if anyone can help me? I have win 8 x64 but python on x32
– Radu Vlad
Sep 9 '14 at 6:06
|
show 1 more comment
edited Mar 17 '16 at 16:19 by jedwards
answered Jun 2 '09 at 10:13 by codeape
On unix, you can use the ps tool to monitor it:

$ ps u -p 1347 | awk '{sum=sum+$6}; END {print sum/1024}'

where 1347 is some process id. Also, the result is in MB.
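The same ps call can be wrapped from inside Python with the standard library; this is a sketch assuming a POSIX ps that accepts -o rss= (which reports RSS in kB):

```python
import os
import subprocess

def rss_mb(pid=None):
    """Resident set size of a process in MB, read via the ps command (POSIX only)."""
    pid = pid if pid is not None else os.getpid()
    # 'rss=' selects the RSS column with no header; ps prints the value in kB.
    out = subprocess.check_output(['ps', '-o', 'rss=', '-p', str(pid)])
    return int(out.strip()) / 1024.0

print('current RSS: %.1f MB' % rss_mb())
```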
answered Jun 2 '09 at 9:59 by bayer
Heapy (and friends) may be what you're looking for.
Also, caches typically have a fixed upper limit on their size to solve the sort of problem you're talking about. For instance, check out this LRU cache decorator.
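To illustrate the "fixed upper limit" idea without depending on the linked decorator, here is a minimal LRU cache sketch built on OrderedDict (the LRUCache name and API are my own, not from the recipe):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache with a fixed entry limit."""
    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)      # mark as most recently used
            return self._data[key]
        return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)   # evict the least recently used

cache = LRUCache(maxsize=2)
cache.put('a', 1)
cache.put('b', 2)
cache.get('a')          # touch 'a' so 'b' becomes the eviction candidate
cache.put('c', 3)       # evicts 'b'
print(cache.get('b'))   # None
```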
Flagged as a link only answer.
– ArtOfWarfare
Nov 25 '14 at 15:39
answered Jun 2 '09 at 9:55 by Hank Gay
I like it, thank you @bayer. Building on that, here is a tool that totals memory for a specific set of processes:

# Megabytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum/1024 " MB"}'
87.9492 MB

# Kilobytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum " KB"}'
90064 KB

Attach my process list.

$ ps aux | grep python
root 943 0.0 0.1 53252 9524 ? Ss Aug19 52:01 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root 950 0.6 0.4 299680 34220 ? Sl Aug19 568:52 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root 3803 0.2 0.4 315692 36576 ? S 12:43 0:54 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
jonny 23325 0.0 0.1 47460 9076 pts/0 S+ 17:40 0:00 python
jonny 24651 0.0 0.0 13076 924 pts/4 S+ 18:06 0:00 grep python

Reference
- memory - Linux: find out what process is using all the RAM? - Super User
- Total memory used by Python process? - Stack Overflow
- linux - ps aux output meaning - Super User
just an optimisation of code to avoid multi pipe: ps aux | awk '/python/{sum+=$6}; END {print sum/1024 " MB"}'
– NeronLeVelu
Oct 4 '17 at 5:06
edited May 23 '17 at 12:34 by Community♦
answered Oct 21 '16 at 10:07 by Chu-Siang Lai
Below is my function decorator, which tracks how much memory the process consumed before the function call, how much it uses after the call, and how long the function executed.

import time
import os
import psutil


def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))


def get_process_memory():
    process = psutil.Process(os.getpid())
    return process.memory_info().rss  # use get_memory_info() on psutil < 2.0


def track(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper

So, when you have some function decorated with it:

from utils import track

@track
def list_create(n):
    print("inside list create")
    x = [1] * n
    return x

You will be able to see this output:

inside list create
list_create: memory before: 45,928,448, after: 46,211,072, consumed: 282,624; exec time: 00:00:00
answered Feb 22 at 8:50 by Ihor B.
import os, win32api, win32con, win32process

# Open a handle to the current process with query rights, then read its
# working-set size (resident memory, in bytes) from GetProcessMemoryInfo.
han = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ, 0, os.getpid())
process_memory = int(win32process.GetProcessMemoryInfo(han)['WorkingSetSize'])
This could be improved with some explanation of what it does and how it works.
– ArtOfWarfare
Nov 25 '14 at 15:39

Based on the large number returned (8 digits) and how I'm not doing much of anything, I'm guessing this has to be bytes? So it's around 28.5 MB for a rather idle interactive instance. (Wow... I didn't even realize the above comment was mine from 4 years ago... that's weird.)
– ArtOfWarfare
Jun 8 at 18:50
edited Jun 8 at 18:48 by ArtOfWarfare
answered Nov 25 '14 at 14:46 by Pedro Reis
Current memory usage of the current process on Linux, for Python 2, Python 3, and PyPy, without any imports:

def getCurrentMemoryUsage():
    ''' Memory usage in kB '''
    with open('/proc/self/status') as f:
        memusage = f.read().split('VmRSS:')[1].split('\n')[0][:-3]
    return int(memusage.strip())

Tested on Linux 4.4 and 4.9, but even an early Linux version should work.
Looking in man proc and searching for the info on the /proc/$PID/status file, it mentions minimum versions for some fields (like Linux 2.6.10 for "VmPTE"), but the "VmRSS" field (which I use here) has no such mention. Therefore I assume it has been in there since an early version.
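As a sketch of the original goal (discarding cached data once the process grows too large), a function like the one above can drive a simple eviction loop; MEMORY_LIMIT_KB and the cache structure here are hypothetical, not from the answer:

```python
def get_current_memory_usage():
    """Resident set size of this process in kB (Linux only; 0 elsewhere)."""
    try:
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])
    except IOError:
        pass
    return 0

MEMORY_LIMIT_KB = 512 * 1024  # hypothetical 512 MB budget
cache = {}

def maybe_evict():
    """Drop cached entries while the process is over its memory budget."""
    while cache and get_current_memory_usage() > MEMORY_LIMIT_KB:
        cache.pop(next(iter(cache)))  # discard the oldest insertion

maybe_evict()
print('RSS now: %d kB' % get_current_memory_usage())
```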
edited Nov 1 at 17:35
answered Jan 23 at 8:51 by Luc
Using sh and os to get bayer's answer into Python:

float(sh.awk(sh.ps('u','-p',os.getpid()),'{sum=sum+$6}; END {print sum/1024}'))

Answer is in megabytes.

Should be noted that `sh` isn't a stdlib module. It's installable with pip, though.
– Jürgen A. Erhard
Sep 4 '13 at 0:00
edited May 16 '13 at 23:35
answered May 15 '13 at 22:25 by Newmu
For Python 3.6 and psutil 5.4.5 it is easier to use the memory_percent() function listed here.

import os
import psutil

process = psutil.Process(os.getpid())
print(process.memory_percent())
answered Nov 11 at 10:15 by A.Ametov
Even easier to use than /proc/self/status: /proc/self/statm. It's just a space-delimited list of several statistics. I haven't been able to tell if both files are always present.

/proc/[pid]/statm
Provides information about memory usage, measured in pages.
The columns are:
- size (1) total program size
(same as VmSize in /proc/[pid]/status)
- resident (2) resident set size
(same as VmRSS in /proc/[pid]/status)
- shared (3) number of resident shared pages (i.e., backed by a file)
(same as RssFile+RssShmem in /proc/[pid]/status)
- text (4) text (code)
- lib (5) library (unused since Linux 2.6; always 0)
- data (6) data + stack
- dt (7) dirty pages (unused since Linux 2.6; always 0)

Here's a simple example:

from pathlib import Path
from resource import getpagesize


def get_resident_set_size():
    # Columns are: size resident shared text lib data dt
    statm = Path('/proc/self/statm').read_text()
    fields = statm.split()
    return int(fields[1]) * getpagesize()


data = []
start_memory = get_resident_set_size()
for _ in range(10):
    data.append('X' * 100000)
    print(get_resident_set_size() - start_memory)
That produces a list that looks something like this:
0
0
368640
368640
368640
638976
638976
909312
909312
909312
You can see that it jumps by about 300,000 bytes after roughly 3 allocations of 100,000 bytes.
answered Nov 26 at 6:25
Don Kirkby
27.2k10127203
add a comment |