Why do we have stack overflows?

Adrian Hey ahey at iee.org
Thu May 3 11:24:26 EDT 2007


Hello Folks,

Just wondering about this. Please understand I'm not asking why
programs sometimes use a lot of stack, but specifically why
using a lot of stack (vs. using a lot of heap) is generally
regarded as "bad". At least it seems that way, given that the
ghc runtime makes a distinction between the two and sets
separate limits for them (the default max stack size being
relatively small, whereas the default max heap size is
unlimited). So programs can fail with a stack overflow despite
having bucket loads of heap available.
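(For concreteness, the separate limits I mean are the GHC RTS
ones, adjustable per run; `myprog` here is a made-up binary name,
and the particular sizes are just illustrative:

```shell
# -K raises the maximum stack size, -M caps total heap residency.
# Without a bigger -K, a deeply recursive program can die with
# "stack overflow" long before it approaches the heap limit.
./myprog +RTS -K64m -M512m -RTS
```
)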

Frankly I don't care if my program fails because it's used
a lot of stack or a lot of heap. I would rather set some
common memory budget and have them fail if that budget was
exceeded.

This policy seems to have unfortunate consequences. Sometimes
you end up re-writing stuff in a manner that just trades stack
use for heap use (i.e. it does nothing to reduce overall
memory consumption). Given that the cost of reclaiming heap
is rather high (compared to stack), this seems like a bad idea
(the version that used a lot of stack would be better IMO,
if only it didn't risk stack overflow).

Example:

-- Strict version of take
stackGobbler :: Int -> [x] -> [x]
stackGobbler 0 _      = []
stackGobbler _ []     = []
stackGobbler n (x:xs) = let xs' = stackGobbler (n-1) xs
                        in  xs' `seq` (x:xs')

-- Another strict version of take
heapGobbler :: Int -> [x] -> [x]
heapGobbler = heapGobbler' []
  where heapGobbler' rxs 0 _      = reverse rxs
        heapGobbler' rxs _ []     = reverse rxs
        heapGobbler' rxs n (x:xs) = heapGobbler' (x:rxs) (n-1) xs
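To make the trade-off concrete, here's a small self-contained
sketch (definitions repeated so it compiles standalone; the large
input size is illustrative, not measured). Both agree with a
strict `take` on inputs that fit, but only heapGobbler runs in
constant stack:

```haskell
-- Both strict versions of take, repeated so this file
-- compiles on its own.
stackGobbler :: Int -> [x] -> [x]
stackGobbler 0 _      = []
stackGobbler _ []     = []
stackGobbler n (x:xs) = let xs' = stackGobbler (n-1) xs
                        in  xs' `seq` (x:xs')

heapGobbler :: Int -> [x] -> [x]
heapGobbler = heapGobbler' []
  where heapGobbler' rxs 0 _      = reverse rxs
        heapGobbler' rxs _ []     = reverse rxs
        heapGobbler' rxs n (x:xs) = heapGobbler' (x:rxs) (n-1) xs

main :: IO ()
main = do
  -- On small inputs both behave like Prelude's take:
  print (stackGobbler 3 [1..10 :: Int])   -- [1,2,3]
  print (heapGobbler  3 [1..10 :: Int])   -- [1,2,3]
  -- heapGobbler is tail recursive, so a large n is fine even
  -- with a small stack limit; stackGobbler instead pushes one
  -- frame per element before returning, which is what risks
  -- the overflow on long lists.
  print (length (heapGobbler 1000000 [1 :: Int ..]))  -- 1000000
```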

But I guess everyone here is already aware of this, hence the
question (the current ghc memory system design seems a bit odd,
but maybe there's a good reason why the rts can't work the way
I would like).

Thanks
--
Adrian Hey

More information about the Glasgow-haskell-users mailing list