
out of memory

>>ps: linux behaved the same way, then it grew up...

although it was meant as a throwaway remark, it is nevertheless misleading.
in the particular case of memory allocation, adding a paging
subsystem doesn't get you out of trouble: it just defers it,
replacing running out of main memory by running out of disc space.

even when you use a good paging algorithm (and linux does not),
you can still hit the buffers when you run out of paging space,
especially when you don't match each page in the total virtual
space with a disc page (including empty pages).
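for what it's worth, some systems do let you insist on that matching.
a minimal sketch, assuming a linux box (the sysctl names below are
linux-specific); this is configuration, not a claim about what any
particular distribution ships by default:

```shell
# strict accounting: every committed page must be matched by paging
# space (swap plus a fraction of ram) at allocation time, not at
# fault time.
sysctl vm.overcommit_memory=2   # 2 = never overcommit
sysctl vm.overcommit_ratio=50   # commit limit = swap + 50% of ram
# with this set, an over-large malloc/mmap fails up front with
# ENOMEM instead of a signal arriving later at page-fault time.
```

the cost, of course, is exactly the one described above: large sparse
address spaces become expensive or impossible to create.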

if you don't preallocate space in the page file (and many
unix and non-unix systems don't, in order to support large sparse
address spaces), you will get stuck when a page fault occurs.
you've run out of memory, and you haven't the paging space (disc space)
to page something else out.  now you can wait in the Micawberish hope
that something will turn up, and risk deadlock, or you can do what linux
and some other unix systems do, and send a signal (SIGBUS in linux's case)
to whichever process happened to take that page fault.
if that's the window system, for instance, your working
environment vanishes. if it's a critical task, the system crashes.
it's almost never `the last process you ought not to have run'!
the depths of a trap handler turn out to be a bad place to
make global resource allocation decisions: by then it's too late.