In trying to come up with a quick and efficient way to count the number of files within a directory and all of its subdirectories, testing has shown that some directories produce much higher peaks of CPU usage than others. Is anyone aware of a characteristic, such as deeply nested subdirectories, subdirectories containing very large numbers of files, or something else, that would cause these larger CPU spikes?
I have yet to investigate whether there are any patterns, since some of this information is being reported to me by others who are doing the testing.
And while I'm at it, is anyone aware of an extremely fast way to count files in a directory that has a large number of files spread across its subdirectories? I have tried various flavors of ls, find, Python scripts, Perl scripts, etc., and have yet to find any one approach that makes much of a difference; a sketch of the kind of thing I've been running is below.
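For context, here is a simplified sketch of the sort of Python script I've been testing (representative, not the exact script): it walks the tree iteratively with os.scandir() and counts regular files, skipping symlinked directories and anything it can't read. The shell attempts have been along the lines of find <dir> -type f | wc -l.

    #!/usr/bin/env python3
    # Simplified sketch of a file-counting script (representative, not the exact one I run).
    import os
    import sys

    def count_files(root):
        """Count regular files under root without following symlinks."""
        total = 0
        stack = [root]
        while stack:
            path = stack.pop()
            try:
                with os.scandir(path) as entries:
                    for entry in entries:
                        if entry.is_dir(follow_symlinks=False):
                            stack.append(entry.path)   # descend into real subdirectories
                        elif entry.is_file(follow_symlinks=False):
                            total += 1
            except (PermissionError, FileNotFoundError):
                continue  # skip directories we can't read or that vanished mid-scan
        return total

    if __name__ == "__main__":
        print(count_files(sys.argv[1] if len(sys.argv) > 1 else "."))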
Googling both of these topics hasn't turned up a clear answer to either question.
Thanks!