The Original Mindcraft Report and the revision
http://www.mindcraft.com/whitepapers/nts4rhlinux.html
Problems
File Handles
Thanks. I just checked that out. It does appear that they
asked a single question about Apache performance. I remember seeing that
posting myself and blowing it off because there wasn't enough info to tell
him anything and I didn't feel like going into the give-and-take to get enough
info to do something. (I do enough of that supporting my own customers!).
Now, in hindsight, knowing what he did not do to Linux, the answer is obvious:
he was running out of file handles. Do the math: an idle Apache server
holds 8 file handles open, and 127 servers max * 8 = 1016. The default file_max
on Linux is 1024, of which 150 or so are already in use while the system is
at rest. Apache could not bind a socket to a file handle for incoming connections
because there were no file handles left. So Apache was essentially deadlocked:
it was waiting for file handles to come free so it could accept() the socket,
but it was itself holding all the file handles!
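The arithmetic above can be checked in a few lines; the per-process handle counts below are the figures quoted in the text, not measurements:

```python
# Rough arithmetic behind the deadlock described above.
MAX_SERVERS = 127        # Apache process cap in the benchmark setup
HANDLES_PER_SERVER = 8   # file handles held by one idle Apache process
BASELINE = 150           # handles a quiet system already has open
FILE_MAX = 1024          # default Linux file_max at the time

in_use = MAX_SERVERS * HANDLES_PER_SERVER + BASELINE
print(f"{in_use} of {FILE_MAX} handles used -> "
      f"{FILE_MAX - in_use} left for accept()")
```

On 2.2-era kernels the limit could have been raised at runtime, e.g. `echo 4096 > /proc/sys/fs/file-max`, which would have avoided the stall entirely.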
RAM
"The Linux kernel limited itself to use only 960 MB of RAM"
If they had availed themselves of a Linux expert (or gotten
Linux pre-installed by a good VAR), they would have tuned the kernel (see
http://www.heise.de/ct/english//99/13/186-1/) to use at least 2 GB of RAM.
All you have to do is change __PAGE_OFFSET to 0x80000000.
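On 2.2-era x86 kernels, __PAGE_OFFSET sets where kernel address space begins; lowering it from the default 3 GB mark to 2 GB lets the kernel directly map roughly 2 GB of RAM. A sketch of the change (the exact header location is recalled from 2.2-era sources, and a kernel rebuild is required):

```c
/* include/asm-i386/page.h (2.2-era kernel; exact file is an assumption) */

/* default: user space gets 0..3 GB, kernel maps RAM above 0xC0000000 */
/* #define __PAGE_OFFSET  (0xC0000000) */

/* 2 GB / 2 GB split: kernel can now directly map about 2 GB of RAM */
#define __PAGE_OFFSET  (0x80000000)
```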
Samba Config
wide links = no
That creates a bottleneck in Samba performance (see here). In case you
haven't guessed, that setting lowers performance enormously: it adds 3 chdir()
calls and 3 getwd() calls to every filename lookup. That will hurt especially
on an SMP system.
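The faster setting is a one-line smb.conf change, provided the security trade-off (symlinks being able to point outside the share) is acceptable; the fragment below is illustrative:

```ini
[global]
    ; with "wide links = no", Samba re-resolves every path component to
    ; make sure a symlink does not escape the share -- the extra
    ; chdir()/getwd() calls described above
    wide links = yes
```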
NT's TCP Config
Tcpip\Parameters\Tcpwindowsize = 65535
That gives a huge boost to network performance, but only on a local network
where packets don't get lost.
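The sockets-API counterpart of that registry value is the per-socket receive buffer. A minimal sketch using the standard SO_RCVBUF option (the kernel is free to round the requested size):

```python
import socket

# Ask for a 64 KB receive buffer, the BSD-sockets analogue of
# NT's TcpWindowSize registry value.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65535)

# The kernel may adjust the request (Linux, for instance, doubles it
# to leave room for bookkeeping), so read back what was actually granted.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```

A large window helps only on a clean LAN because a single lost segment stalls the whole window until it is retransmitted, which is exactly why the benefit disappears on lossy links.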
The c't Benchmark
http://www.heise.de/ct/english//99/13/186-1/
The ZDNET benchmark
http://www.zdnet.com/products/stories/reviews/0,4161,2776396,00.html
The TPC Benchmarks
http://www.tpc.org/