OOM killer not working?
From what I understand, when the system is close to having no free memory, the kernel should start killing processes to regain some memory. But on my system this does not happen at all.
Suppose a simple script that allocates much more memory than is available on the system (an array with millions of strings, for example). If I run a script like this (as a normal user), it just grabs all the memory until the system completely freezes (only SysRq REISUB works).
The weird part is that when the computer freezes, the hard drive LED turns on and stays that way until the computer is rebooted, whether I have a swap partition mounted or not!
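A minimal sketch of the kind of script described above (the cap is my addition so the demo stops long before exhausting RAM; removing it reproduces the freeze, which is dangerous):

```python
# Toy version of a memory-hogging script: keeps appending ~1 MiB strings.
# CAP_MB is an assumption added here for safety -- without a cap, this is
# exactly the kind of script that freezes the machine described above.
CAP_MB = 64

def hog(cap_mb=CAP_MB):
    chunks = []
    allocated = 0
    while allocated < cap_mb * 1024 * 1024:
        chunks.append("x" * (1024 * 1024))  # allocate roughly 1 MiB
        allocated += 1024 * 1024
    return allocated

print(hog() // (1024 * 1024), "MiB allocated")
```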
So my questions are:
- Is this behavior normal? It's odd that an application run as a normal user can crash the system this way...
- Is there any way I can make Ubuntu automatically kill such applications when they take too much (or most of the) memory?
Additional information
- Ubuntu 12.04.3
- Kernel 3.5.0-44
- RAM: ~3.7 GB usable of 4 GB (the rest is shared with the graphics card)
$ tail -n+1 /proc/sys/vm/overcommit_*
==> /proc/sys/vm/overcommit_memory <==
0
==> /proc/sys/vm/overcommit_ratio <==
50
$ cat /proc/swaps
Filename Type Size Used Priority
/dev/dm-1 partition 4194300 344696 -1
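For reference, the overcommit_memory value above selects one of the kernel's overcommit accounting modes. A small sketch of what each value means (the mapping is from the kernel's overcommit-accounting documentation; the helper function is mine, purely illustrative):

```python
# Meaning of /proc/sys/vm/overcommit_memory, per the kernel's
# overcommit-accounting documentation. describe_overcommit() is just
# an illustrative helper, not a real API.
def describe_overcommit(mode: int) -> str:
    modes = {
        0: "heuristic overcommit (default): obvious overcommits are refused",
        1: "always overcommit: never refuse an allocation",
        2: "strict: commit limit = swap + overcommit_ratio% of RAM",
    }
    return modes.get(mode, "unknown mode")

# The system in the question uses mode 0, the default heuristic.
print(describe_overcommit(0))
```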
Tagged: kernel
I'm not sure why it's not working. Try tail -n+1 /proc/sys/vm/overcommit_* and add the output. See here also: How do I configure oom-killer?
– kiri
Jan 2 '14 at 22:48
So what is happening with your swap space? Can you post some vmstat output, like vmstat 1 100 or something like that? Also show us cat /etc/fstab. What should happen is that at a certain amount of memory usage you start writing to swap. Killing processes shouldn't happen until memory and swap space are "full".
– j0h
Jan 4 '14 at 21:44
Also try swapon -a.
– j0h
Jan 4 '14 at 21:58
@j0h With swap it seems to work well (after some time the process crashed with something like Allocation failed). But without swap it just freezes the computer. Is it supposed to work this way (only killing when swap is in use)?
– Salem
Jan 4 '14 at 22:24
With SysRq you can also invoke OOM (SysRq + F iirc)
– Lekensteyn
Jan 9 '14 at 14:15
asked Dec 31 '13 at 18:21 by Salem; edited Jan 9 '14 at 15:59 by Braiam
3 Answers
From the official /proc/sys/vm/* documentation:
oom_kill_allocating_task
This enables or disables killing the OOM-triggering task in
out-of-memory situations.
If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.
If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.
If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.
The default value is 0.
To summarize: when oom_kill_allocating_task is set to 1, instead of scanning the system for processes to kill, which is an expensive and slow task, the kernel just kills the process that caused the system to run out of memory.
From my own experience, by the time an OOM is triggered the kernel no longer has enough "strength" left to do such a scan, making the system totally unusable.
Also, it would seem more obvious to just kill the task that caused the problem, so I fail to understand why it is set to 0 by default.
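The difference between the two policies can be sketched as follows (a toy model only: "badness" here is just memory usage, while the kernel's real heuristic also weighs oom_score_adj and other factors):

```python
# Toy model of the two OOM policies described above. "Badness" is
# simplified to raw memory usage; the real kernel heuristic is richer.
def pick_victim(tasks, allocating, oom_kill_allocating_task=0):
    """tasks: dict of name -> memory in MB; allocating: the task that triggered OOM."""
    if oom_kill_allocating_task:
        return allocating              # kill the triggering task, no scan needed
    return max(tasks, key=tasks.get)   # scan the task list, kill the biggest hog

tasks = {"hog": 3500, "sshd": 10, "bash": 5}
print(pick_victim(tasks, "bash", 0))  # prints "hog"  -- scan finds the real culprit
print(pick_victim(tasks, "bash", 1))  # prints "bash" -- the unlucky allocator dies
```

The second call also illustrates the trade-off raised in the comments below the answer: with the setting enabled, a small, innocent process can die just because it happened to allocate at the wrong moment.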
For testing, you can just write to the proper pseudo-file in /proc/sys/vm/ (the change is undone on the next reboot):
echo 1 | sudo tee /proc/sys/vm/oom_kill_allocating_task
For a permanent fix, write the following to /etc/sysctl.conf, or to a new file under /etc/sysctl.d/ with a .conf extension (/etc/sysctl.d/local.conf, for example):
vm.oom_kill_allocating_task = 1
– Teresa e Junior, answered Jan 9 '14 at 15:42
Was it always set to 0 in Ubuntu? Because I remember it used to kill automatically, but since a few versions it stopped doing so.
– skerit
Jun 12 '14 at 20:18
@skerit This I don't really know, but it was set to 0 in the kernels I used back in 2010 (Debian, Liquorix and GRML).
– Teresa e Junior
Jun 13 '14 at 3:37
"Also, it would be more obvious just killing the task that caused the problem, so I fail to understand why it is set to 0 by default." - because the process that requested memory isn't necessarily the one that "caused the problem". If process A hogs 99% of the system's memory, but process B, which is using 0.9%, happens to be the one that triggers the OOM killer by bad luck, B didn't "cause the problem" and it makes no sense to kill B. Having that as the policy risks totally unproblematic low-memory processes being killed by chance because of a different process's runaway memory usage.
– Mark Amery
Sep 11 '18 at 9:53
@MarkAmery The real problem is that Linux, instead of just killing the needed process, starts thrashing heavily, even if vm.admin_reserve_kbytes is increased to, say, 128 MB. Setting vm.oom_kill_allocating_task = 1 seems to alleviate the problem but doesn't really solve it (and Ubuntu already deals with fork bombs by default).
– Teresa e Junior
Sep 11 '18 at 19:39
Maybe more elegant: sudo sysctl -w vm.oom_kill_allocating_task=1
– Pablo Bianchi
Feb 27 at 16:03
Update: The bug is fixed.
Teresa's answer is enough to work around the problem, and it is good.
Additionally, I've filed a bug report, because this is definitely broken behavior.
– int_ua, answered Aug 21 '14 at 14:08
I don't know why you got downvoted, but that also sounds like a kernel bug to me. I've crashed a big university server today with it and killed some processes that were running for weeks... Thanks for filing that bug report though!
– shapecatcher
Dec 16 '14 at 14:26
Might have been fixed in 2014; in 2018 (and 18.04) the OOM killer is yet again doing nothing.
– skerit
May 22 '18 at 16:00
First of all, I recommend updating to 13.10 (clean install; save your data first).
If you don't want to update, change vm.swappiness to 10, and if you still have problems with your RAM, install zRAM.
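The swappiness change can be applied the same way as the sysctl in the accepted answer: temporarily with sysctl -w, or persistently via a config entry (the file path here is the conventional location, not something this answer specifies):

```conf
# Temporary, until reboot:
#   sudo sysctl -w vm.swappiness=10
# Persistent, e.g. in /etc/sysctl.conf or /etc/sysctl.d/local.conf:
vm.swappiness = 10
```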
I wasn't the one who downvoted you, but generally, lowering vm.swappiness does more harm than good, even more so on systems suffering from low-memory issues.
– Teresa e Junior
Jan 9 '14 at 15:46
Not when you compress the RAM first: you then avoid disk use, which is much slower and can be what makes your computer freeze.
– Brask
Jan 9 '14 at 16:02
In theory, zRAM is a nice thing, but it is CPU-hungry and generally not worth the cost. Memory is generally far cheaper than electricity. And on a laptop, where upgrading the RAM is more expensive, the extra CPU usage is mostly undesirable.
– Teresa e Junior
Jan 9 '14 at 16:18
What he is asking for is a more stable system. zRAM and changing swappiness will make his system use more CPU, yes, but what he is limited by at the moment, and having errors with, is memory. He wants to fix the problem, not a theory lesson on what happens when you install zRAM.
– Brask
Jan 9 '14 at 16:22
It's clear from his question that he may have written an improper script that eats more memory than it should (I have done this myself). In a situation like this, you can watch the script grab gigabytes of RAM in a few seconds, and zRAM won't come to the rescue, since the script will never be satisfied.
– Teresa e Junior
Jan 9 '14 at 16:28
answered Jan 9 '14 at 15:42
Teresa e JuniorTeresa e Junior
89969
89969
1
Was it always set to 0 in Ubuntu? Because I remember it used to kill automatically, but since a few versions it stopped doing so.
– skerit
Jun 12 '14 at 20:18
1
@skerit This I don't really know, but it was set to 0 in the kernels I used back in 2010 (Debian, Liquorix and GRML).
– Teresa e Junior
Jun 13 '14 at 3:37
"Also, it would be more obvious just killing the task that caused the problem, so I fail to understand why it is set to0
by default." - because the process that requested memory isn't necessarily the one that "caused the problem". If process A hogs 99% of the system's memory, but process B, which is using 0.9%, happens to be the one that triggers the OOM killer by bad luck, B didn't "cause the problem" and it makes no sense to kill B. Having that as the policy risks totally unproblematic low-memory processes being killed by chance because of a different process's runaway memory usage.
– Mark Amery
Sep 11 '18 at 9:53
@MarkAmery The real problem is that Linux, instead of just killing the needed process, starts thrashing like a retard, even ifvm.admin_reserve_kbytes
is increased to, say, 128 MB. Settingvm.oom_kill_allocating_task = 1
seems to alleviate the problem, doesn't really solve it (and Ubuntu already deals with fork bombs by default).
– Teresa e Junior
Sep 11 '18 at 19:39
Maybe more elegantsudo sysctl -w vm.oom_kill_allocating_task=1
– Pablo Bianchi
Feb 27 at 16:03
add a comment |
1
Was it always set to 0 in Ubuntu? Because I remember it used to kill automatically, but since a few versions it stopped doing so.
– skerit
Jun 12 '14 at 20:18
1
@skerit This I don't really know, but it was set to 0 in the kernels I used back in 2010 (Debian, Liquorix and GRML).
– Teresa e Junior
Jun 13 '14 at 3:37
"Also, it would be more obvious just killing the task that caused the problem, so I fail to understand why it is set to0
by default." - because the process that requested memory isn't necessarily the one that "caused the problem". If process A hogs 99% of the system's memory, but process B, which is using 0.9%, happens to be the one that triggers the OOM killer by bad luck, B didn't "cause the problem" and it makes no sense to kill B. Having that as the policy risks totally unproblematic low-memory processes being killed by chance because of a different process's runaway memory usage.
– Mark Amery
Sep 11 '18 at 9:53
@MarkAmery The real problem is that Linux, instead of just killing the needed process, starts thrashing like a retard, even ifvm.admin_reserve_kbytes
is increased to, say, 128 MB. Settingvm.oom_kill_allocating_task = 1
seems to alleviate the problem, doesn't really solve it (and Ubuntu already deals with fork bombs by default).
– Teresa e Junior
Sep 11 '18 at 19:39
Maybe more elegantsudo sysctl -w vm.oom_kill_allocating_task=1
– Pablo Bianchi
Feb 27 at 16:03
1
1
Was it always set to 0 in Ubuntu? Because I remember it used to kill automatically, but since a few versions it stopped doing so.
– skerit
Jun 12 '14 at 20:18
Was it always set to 0 in Ubuntu? Because I remember it used to kill automatically, but since a few versions it stopped doing so.
– skerit
Jun 12 '14 at 20:18
1
1
@skerit This I don't really know, but it was set to 0 in the kernels I used back in 2010 (Debian, Liquorix and GRML).
– Teresa e Junior
Jun 13 '14 at 3:37
@skerit This I don't really know, but it was set to 0 in the kernels I used back in 2010 (Debian, Liquorix and GRML).
– Teresa e Junior
Jun 13 '14 at 3:37
"Also, it would be more obvious just killing the task that caused the problem, so I fail to understand why it is set to
0
by default." - because the process that requested memory isn't necessarily the one that "caused the problem". If process A hogs 99% of the system's memory, but process B, which is using 0.9%, happens to be the one that triggers the OOM killer by bad luck, B didn't "cause the problem" and it makes no sense to kill B. Having that as the policy risks totally unproblematic low-memory processes being killed by chance because of a different process's runaway memory usage.– Mark Amery
Sep 11 '18 at 9:53
"Also, it would be more obvious just killing the task that caused the problem, so I fail to understand why it is set to
0
by default." - because the process that requested memory isn't necessarily the one that "caused the problem". If process A hogs 99% of the system's memory, but process B, which is using 0.9%, happens to be the one that triggers the OOM killer by bad luck, B didn't "cause the problem" and it makes no sense to kill B. Having that as the policy risks totally unproblematic low-memory processes being killed by chance because of a different process's runaway memory usage.– Mark Amery
Sep 11 '18 at 9:53
@MarkAmery The real problem is that Linux, instead of just killing the needed process, starts thrashing like a retard, even if
vm.admin_reserve_kbytes
is increased to, say, 128 MB. Setting vm.oom_kill_allocating_task = 1
seems to alleviate the problem, doesn't really solve it (and Ubuntu already deals with fork bombs by default).– Teresa e Junior
Sep 11 '18 at 19:39
@MarkAmery The real problem is that Linux, instead of just killing the needed process, starts thrashing like a retard, even if
vm.admin_reserve_kbytes
is increased to, say, 128 MB. Setting vm.oom_kill_allocating_task = 1
seems to alleviate the problem, doesn't really solve it (and Ubuntu already deals with fork bombs by default).– Teresa e Junior
Sep 11 '18 at 19:39
Maybe more elegant
sudo sysctl -w vm.oom_kill_allocating_task=1
– Pablo Bianchi
Feb 27 at 16:03
Maybe more elegant
sudo sysctl -w vm.oom_kill_allocating_task=1
– Pablo Bianchi
Feb 27 at 16:03
add a comment |
Update: The bug is fixed.
Teresa's answer is enough to workaround the problem and is good.
Additionally, I've filed a bug report because that is definitely a broken behavior.
I don't know why you got downvoted, but that also sounds like a kernel bug to me. I've crashed a big university server today with it and killed some processes that were running for weeks... Thanks for filing that bug report though!
– shapecatcher
Dec 16 '14 at 14:26
6
Might have been fixed in 2014, in 2018 (and 18.04) the OOM killer is yet again doing nothing.
– skerit
May 22 '18 at 16:00
answered Aug 21 '14 at 14:08 by int_ua, edited Feb 27 at 16:01 by Pablo Bianchi
First of all, I recommend updating to 13.10 (clean install; save your data first).
If you don't want to update, change vm.swappiness to 10, and if you still have problems with your RAM, install zRAM.
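For reference, a minimal sketch of both suggestions; the zram-config package name is the one used in later Ubuntu releases, and on 12.04 it may have to come from a PPA instead:

```shell
# Lower swappiness for the running session (requires root)
sudo sysctl -w vm.swappiness=10

# Make the change persistent across reboots
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf

# Enable compressed swap in RAM via the zram-config package
sudo apt-get install zram-config
```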
I wasn't the one who downvoted you, but generally, lowering vm.swappiness
does more harm than good, even more so on systems suffering from low-memory issues.
– Teresa e Junior
Jan 9 '14 at 15:46
Not when you compress the RAM first: you then avoid hitting the disk, which is much slower and may be what is making your computer freeze.
– Brask
Jan 9 '14 at 16:02
In theory, zRAM is a nice thing, but it is CPU-hungry and generally not worth the cost: memory is usually way cheaper than electricity. And on a laptop, where upgrading the RAM is more expensive, extra CPU usage is mostly undesirable.
– Teresa e Junior
Jan 9 '14 at 16:18
What he is asking for is a more stable system. Yes, zRAM and changing swappiness will make his system use more CPU, but what he is limited by at the moment, and having errors with, is memory. He wants to fix the problem, not a theory lesson on what happens when you install zRAM.
– Brask
Jan 9 '14 at 16:22
It's clear from his question that he may write an improper script that eats more memory than it should (and I have already done this myself). In a situation like this, you can watch the script grab gigabytes of RAM in a few seconds, and zRAM won't come to the rescue, since the script will never be satisfied.
– Teresa e Junior
Jan 9 '14 at 16:28
answered Jan 9 '14 at 14:02 by Brask
Thanks for contributing an answer to Ask Ubuntu!
I'm not sure why it's not working. Try
tail -n+1 /proc/sys/vm/overcommit_*
and add the output. See here also: How Do I configure oom-killer – kiri
Jan 2 '14 at 22:48
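Those two files control how the kernel hands out virtual memory. A sketch of inspecting the policy and, if desired, disabling overcommit entirely (the ratio value below is illustrative):

```shell
# Show the current policy (0 = heuristic, 1 = always allow, 2 = never overcommit)
tail -n+1 /proc/sys/vm/overcommit_*

# Mode 2 makes oversized allocations fail immediately with ENOMEM,
# so a runaway script dies instead of dragging the machine into the OOM path
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80   # commit limit = swap + 80% of RAM
```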
So what is happening with your swap space? Can you post some vmstat output, like vmstat 1 100 or something like that? And also show us cat /etc/fstab. What should happen is that, at a certain amount of memory usage, you start writing to swap. Killing processes shouldn't happen until memory and swap space are "full".
– j0h
Jan 4 '14 at 21:44
Also try swapon -a (as root).
– j0h
Jan 4 '14 at 21:58
@j0h With swap it seems to work well (after some time the process crashed with something like
Allocation failed
). But without swap it just freezes the computer. Is it supposed to work this way (only kill when using swap)?– Salem
Jan 4 '14 at 22:24
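A safe way to reproduce this kind of runaway allocation without freezing the machine is to cap the process's address space first, so the allocation fails inside the process long before the whole system is starved (the 1 GiB limit is an arbitrary choice):

```shell
# Cap virtual memory for this shell to ~1 GiB, then run a deliberate memory hog;
# the interpreter should die with MemoryError instead of taking the system down
ulimit -v 1048576
python3 -c '
a = []
while True:
    a.append(" " * 10**6)   # keep allocating ~1 MB strings until allocation fails
'
```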
With SysRq you can also invoke the OOM killer (SysRq + F, IIRC).
– Lekensteyn
Jan 9 '14 at 14:15
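The same OOM invocation is also reachable without the keyboard, through /proc. Note this requires root and really does kill a process, so only try it on a test machine:

```shell
# Check whether magic SysRq is enabled (1, or a bitmask that includes 64)
cat /proc/sys/kernel/sysrq

# Enable all SysRq functions for this boot
echo 1 | sudo tee /proc/sys/kernel/sysrq

# Invoke the OOM killer once, the software equivalent of Alt+SysRq+F
echo f | sudo tee /proc/sysrq-trigger
```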