Lock table is out of available locker entries
Hi,
I am fairly new to BerkeleyDB, so maybe my question is silly.
I have set up a multiprocess system using the BerkeleyDB Perl module; normally a few hundred processes are involved, and the system uses transactions. I have configured 5000 locks, lockers, and lock objects via DB_CONFIG:
set_lk_max_locks 5000
set_lk_max_lockers 5000
set_lk_max_objects 5000
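
For context, each process opens the shared environment and wraps its writes in transactions roughly as in the sketch below (simplified; the home directory, database name, and exact flags are placeholders rather than my actual code):

    use strict;
    use warnings;
    use BerkeleyDB;

    # Open the shared transactional environment (home path is a placeholder).
    my $env = BerkeleyDB::Env->new(
        -Home  => '/path/to/env',
        -Flags => DB_CREATE | DB_INIT_TXN | DB_INIT_MPOOL
                | DB_INIT_LOCK | DB_INIT_LOG,
    ) or die "cannot open environment: $BerkeleyDB::Error\n";

    # Open a database inside the environment.
    my $db = BerkeleyDB::Hash->new(
        -Filename => 'data.db',
        -Env      => $env,
        -Flags    => DB_CREATE | DB_AUTO_COMMIT,
    ) or die "cannot open database: $BerkeleyDB::Error\n";

    # Each write happens inside its own transaction.
    my $txn = $env->txn_begin();
    $db->Txn($txn);
    $db->db_put('some_key', 'some_value') == 0
        or die "db_put failed: $BerkeleyDB::Error\n";
    $txn->txn_commit();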
Despite these settings, I just got:
"Lock table is out of available locker entries"
Afterwards I noticed that some processes had segfaulted, but the segfaults occurred outside any transaction or any other BerkeleyDB operation.
Can it be that a segfaulting process eats up a lock table entry?
Or do I simply have to raise these limits? Is there a way to estimate how many lock table entries are needed?
Thanks,
Torsten