Discussion:
Cron jobs run out of memory?
Jacob S
2005-02-12 14:50:10 UTC
I have a simple download script that runs from a cron job every morning
to pull an audio stream from a URL and then convert it to mp3 (the
script is attached to this e-mail). It runs fine the first couple of
times after a reboot, but then fails at various stages, leaving these
messages in my mailbox:

/home/jacob/bin/download.sh: fork: Cannot allocate memory

/home/jacob/bin/download.sh: xrealloc: ../bash/subst.c:468: cannot
reallocate 99 178368 bytes (0 bytes allocated)

When it gives the fork error, it has usually made it through the mplayer
section but not lame. When it gives the xrealloc error it doesn't even
make it that far. However, when I run this script manually from a
command line it works perfectly.
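
(In case the list strips the attachment: the script is roughly this
shape - mplayer dumps the stream to a WAV file and lame encodes it. The
URL and file names below are placeholders, not the real ones.)

#!/bin/bash
# Where to put today's recording (illustrative path).
OUT=/home/jacob/audio/$(date +%Y%m%d)

# Dump the decoded audio to a WAV file; -vc/-vo null skip any video.
mplayer -really-quiet -vc null -vo null \
    -ao pcm:file="$OUT.wav" http://example.com/stream

# Encode the dump to mp3 and remove the intermediate WAV.
lame "$OUT.wav" "$OUT.mp3" && rm -f "$OUT.wav"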

The computer is an Athlon XP 2200+ on an nForce2 chipset with 512MB of
RAM and a 160GB SATA drive. Here is an example output from 'free' after
the cron job has failed, but running the program manually worked fine:

$ free
             total       used       free     shared    buffers     cached
Mem:        516308     514288       2020          0       1712     101836
-/+ buffers/cache:     410740     105568
Swap:       497968     467868      30100

Any suggestions on what might be causing this out-of-memory problem, or
ways to work around it?

TIA,
Jacob
Uwe Dippel
2005-02-12 18:10:10 UTC
Post by Jacob S
$ free
             total       used       free     shared    buffers     cached
Mem:        516308     514288       2020          0       1712     101836
-/+ buffers/cache:     410740     105568
Swap:       497968     467868      30100
This is simply too little free memory to expect reliable behaviour.
Try to get more memory (RAM or a larger swap) and come back if it
doesn't work. I almost bet it will.
30 k plus 2 isn't enough for crond plus your script.
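
If you go the swap route, a swap file saves repartitioning; roughly
(as root - the size and path are just an example):

dd if=/dev/zero of=/swapfile bs=1M count=512      # create a 512 MB file
chmod 600 /swapfile                               # keep it private
mkswap /swapfile                                  # format it as swap space
swapon /swapfile                                  # enable it now
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # and at every boot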

HTH,

Uwe
Jacob S
2005-02-12 19:40:13 UTC
On Sun, 13 Feb 2005 01:35:12 +0800
Post by Uwe Dippel
Post by Jacob S
$ free
             total       used       free     shared    buffers     cached
Mem:        516308     514288       2020          0       1712     101836
-/+ buffers/cache:     410740     105568
Swap:       497968     467868      30100
This is simply too little free memory to expect reliable behaviour.
Try to get more memory (RAM or a larger swap) and come back if it
doesn't work. I almost bet it will.
30 k plus 2 isn't enough for crond plus your script.
I should have mentioned - I have gotten this error before with 768MB
of RAM installed, too. How much RAM do I need before I have "enough"?
Since it runs in the middle of the night and is the only cron job
running at that time, I would have expected the system to find some
memory that it could free up somewhere.

Also, to help my learning... how is cron treated differently enough
that it runs out of memory, when I can leave my computer running for a
month and all that time the script runs manually from the CLI without a
hitch? (Galeon, Firefox, Gaim, Xmms and Xine all run perfectly as
well.)
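
One thing that might help is dumping the same state from both contexts
and diffing it - something like this (the paths are just scratch files):

# From an interactive shell:
{ env; ulimit -a; } > /tmp/limits-shell.txt

# Temporary crontab entry (via crontab -e), runs once a minute:
* * * * * { env; ulimit -a; } > /tmp/limits-cron.txt 2>&1

# Then compare the two:
diff /tmp/limits-shell.txt /tmp/limits-cron.txt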

Thanks,
Jacob
Uwe Dippel
2005-02-13 02:30:13 UTC
Post by Jacob S
Post by Uwe Dippel
This is simply too little free memory to expect reliable behaviour.
Try to get more memory (RAM or a larger swap) and come back if it
doesn't work. I almost bet it will.
30 k plus 2 isn't enough for crond plus your script.
My fault: of course it is *M*. But still, that's not enough for normal usage.
Post by Jacob S
I should have mentioned - I have gotten this error before with 768MB
of RAM installed, too. How much RAM do I need before I have "enough"?
I bet you'll find something when you google it. I don't want to argue
further. Once you have - let's say - 200 MB free and it still fails, I'd
be more tempted to consider the case questionable. It doesn't matter how
much RAM (or swap) you have, as long as there is enough.
Post by Jacob S
Since it runs in the middle of the night and is the only cron job
running at that time, I would have expected the system to find some
memory that it could free up somewhere.
What is 'somewhere', unless it kills one process or another? 'Freeing
up' means moving pages from RAM into swap. But your swap is full as well.

Uwe
Jacob S
2005-02-13 03:20:10 UTC
On Sun, 13 Feb 2005 10:06:37 +0800
<snip>
Post by Uwe Dippel
Post by Jacob S
I should have mentioned - I have gotten this error before with 768MB
of RAM installed, too. How much RAM do I need before I have "enough"?
I bet you'll find something when you google it. I don't want to argue
further. Once you have - let's say - 200 MB free and it still fails,
I'd be more tempted to consider the case questionable. It doesn't
matter how much RAM (or swap) you have, as long as there is enough.
Ok. I'll do some more research. (And sorry, this is not meant to be an
argument on my part. I just figured this would be a good time for me to
learn more about the internals of my favorite OS.)
Post by Uwe Dippel
Post by Jacob S
Since it runs in the middle of the night and is the only cron job
running at that time, I would have expected the system to find some
memory that it could free up somewhere.
What is 'somewhere', unless it kills one process or another? 'Freeing
up' means moving pages from RAM into swap. But your swap is full as well.
I think my confusion comes from the fact that I don't know enough
about how Linux utilizes RAM, swaps some of it out, cleans out unneeded
stuff, etc. Are normal jobs from the CLI treated differently from cron
jobs, even when both are run as the same user?

After all (to the best of my knowledge), the same programs are running
in the middle of the night as are running in the daytime when I execute
the script manually. I am not closing any programs to run it manually,
nor do I see any get killed or find OOM messages in syslog. How does it
find RAM that it can use when I execute the script manually, yet fail
to find enough when the script is run from cron? I'm guessing it
prioritizes stuff, but what and how?

On the other hand, Woody rarely touched more than 50MB of swap when I
had 768MB of RAM, so I guess it's probably time to upgrade my RAM now
that I'm running Sarge. (Yes, I run a couple of extra Firefox windows
and use xine a lot more in Sarge, so it's not "just" the upgrade.)

Thanks again,
Jacob
Roel Schroeven
2005-02-13 06:10:08 UTC
Post by Jacob S
$ free
             total       used       free     shared    buffers     cached
Mem:        516308     514288       2020          0       1712     101836
-/+ buffers/cache:     410740     105568
Swap:       497968     467868      30100
Any suggestions on what might be causing this out-of-memory problem, or
ways to work around it?
Not counting buffers and cache, you are using 410740 kilobytes of
physical memory and 467868 kilobytes of swap, for a total of 878608
kilobytes. That is a very large amount; it seems you are running one
or more very memory-intensive processes.

Using more swap and/or installing more RAM would probably solve your
problem, but if I were you I would investigate why your system is using
that much memory in the first place. As a first step, I'd start top and
sort by memory usage.
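
For example, with the procps tools:

$ top                               # then press M to sort by %MEM
$ ps aux --sort=-rss | head -n 15   # same view, non-interactively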
--
"Codito ergo sum"
Roel Schroeven
Jacob S
2005-02-25 13:10:09 UTC
On Sat, 12 Feb 2005 08:43:51 -0600
Post by Jacob S
I have a simple download script that runs from a cron job every
morning to pull an audio stream from a URL and then convert it to mp3
(the script is attached to this e-mail). It runs fine the first couple
of times after a reboot, but then fails at various stages, leaving
these messages in my mailbox:
/home/jacob/bin/download.sh: fork: Cannot allocate memory
/home/jacob/bin/download.sh: xrealloc: ../bash/subst.c:468: cannot
reallocate 99 178368 bytes (0 bytes allocated)
When it gives the fork error, it has usually made it through the
mplayer section but not lame. When it gives the xrealloc error it
doesn't even make it that far. However, when I run this script
manually from a command line it works perfectly.
The computer is an Athlon XP 2200+ on an nForce2 chipset with 512MB of
RAM and a 160GB SATA drive. Here is an example output from 'free' after
the cron job has failed, but running the program manually worked fine:
$ free
             total       used       free     shared    buffers     cached
Mem:        516308     514288       2020          0       1712     101836
-/+ buffers/cache:     410740     105568
Swap:       497968     467868      30100
At the suggestion of Uwe Dippel, I've been doing some experimenting to
see how much RAM needs to be free before the cron job will complete
successfully.

I closed some of the programs last night that would normally have
stayed open all night. But sure enough, I still got the "fork: Cannot
allocate memory" error in my mailbox from cron. Here's what free
reported only 5 seconds after the failure:

$ free
             total       used       free     shared    buffers     cached
Mem:        516308     200140     316168          0       1288      29320
-/+ buffers/cache:     169532     346776
Swap:       497968     274256     223712

That seems to me like it should have had plenty. After running lame
manually to convert the file, free reported this:

$ free
             total       used       free     shared    buffers     cached
Mem:        516308     385868     130440          0       3856     196952
-/+ buffers/cache:     185060     331248
Swap:       497968     270548     227420

I'm obviously no expert, but it sure seems like something is limiting
the memory available to cron jobs run as a user. Does anyone have some
tips on how I might track this down?

TIA,
Jacob
Michael Marsh
2005-02-25 13:20:10 UTC
Post by Jacob S
I'm obviously no expert, but it sure seems like something is limiting
the memory available to cron jobs run as a user. Does anyone have some
tips on how I might track this down?
Run the following as a cron job:

#! /bin/bash
ulimit -a
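
Cron mails a job's output to the crontab's owner, so a throwaway entry
like this (the path is just an example) will land the limits in your
mailbox:

* * * * * /home/jacob/bin/showlimits.sh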
--
Michael A. Marsh
http://www.umiacs.umd.edu/~mmarsh
http://mamarsh.blogspot.com
Jacob S
2005-03-05 20:40:12 UTC
On Fri, 25 Feb 2005 08:15:15 -0500
Post by Michael Marsh
Post by Jacob S
I'm obviously no expert, but it sure seems like there is something
limiting the memory on cron jobs run as a user. Does anyone have
some tips on how I might track this down?
#! /bin/bash
ulimit -a
Thanks, I hadn't thought about ulimit.

When run manually, as a non-root user, ulimit -a outputs the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited

When run from a cron job, ulimit -a produces the following output:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4095
virtual memory (kbytes, -v) unlimited

So it looks like I need to increase the 'max locked memory' for cron
jobs. Except, aren't cron jobs run as the same user that owns the
crontab? How would I change the ulimit settings for cron jobs
separately from the ulimits I get when doing stuff from a CLI? (I'm
guessing that only root can change a user's ulimit settings, is that
right?)
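
If the limit is being applied through PAM (pam_limits is one common way
per-session limits get set; whether Debian's cron applies it depends on
what's in /etc/pam.d/cron), root could presumably raise the hard limit
in /etc/security/limits.conf, something like this (values illustrative,
memlock is in kB):

# /etc/security/limits.conf  -  format: <domain> <type> <item> <value>
jacob  soft  memlock  unlimited
jacob  hard  memlock  unlimited

A non-root process can then raise its own soft limit with ulimit, but
only up to the hard limit; raising the hard limit itself takes root.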

Thanks,
Jacob