In this test, I'm using a single dual-core machine (E6850) with 4 GB of memory on a 1333 MHz bus with CL9. The forking server is very crude: it simply forks after accepting a connection. This could certainly be done more effectively by pooling threads and so on, but that is what the socket queue server already simulates. The idea is to show the effect of creating new processes to handle new traffic and how this affects overall machine performance.
The forking server shows varying performance. At some point, the process's socket queue within the kernel fills up and the client waits 3, 9, or 12 seconds to obtain a connection, probably because the kernel stalls connections (SYN flood protection?) or needs to clean up resources before it continues to accept new ones. I didn't actually wait for this process to finish. Adding up the time, it ran for about a minute and handled 5,800 connections, so its performance is very poor. Programming-wise, though, things are very easy, especially using standard C available on every decent UNIX machine, since it was just an accept() and a fork(). (The gotcha was that you need to close the parent's copy of the file descriptor and waitpid() on the child processes, otherwise the server holds on to resources.)
All processes had their file descriptor limit increased to 65536 (ulimit -n 65536) and were run as root to avoid having to build in a setrlimit() call and other complexities.
The socket queue with 8 threads is basically a round-robin, fire-and-forget mechanism. It is implemented with glib and, like the forking server, isn't difficult to implement. One thread continuously accepts new connections as fast as it can, then hands each socket descriptor to one of the queues in round-robin fashion. The threads at the other end consume from their queue as descriptors become available and process each socket until it is empty. Since this is all running locally, there is virtually zero latency, and we know that each client connects and writes as fast as possible. Since the server doesn't create new threads or processes per connection, this is actually quite efficient. Problems may arise when sockets actually start blocking; then you may need more threads to handle them.
The epoll() method with 8 threads is the fanciest one and actually skips a couple of other possibilities in between. It strictly uses non-blocking, event-driven I/O, which is also the most difficult to implement. The advantage is that none of the threads ever block, so they are always looking for something useful to do. In this case, I configured eight worker threads as well. The performance is very similar to the queue version, but in different circumstances the epoll() method will certainly outperform the others, because the queue version blocks waiting for something to read. An alternative to epoll() is poll() with a self-maintained array of socket descriptors. Eventually those arrays need maintenance, so each thread has to walk them linearly at some point. That is O(n) time, so it doesn't scale as well as epoll(), which has O(1) performance per event, and the array maintenance needs to be coded by hand on top of that.
The simple client used in these tests is an absolute psychopath. It attempts to connect up to 125 sockets at the same time and maintain that number (that means a barrage of 125 SYN packets in very quick succession from one thread at the start, and then as many as it can, up to 125, whenever connections get processed). This client also uses non-blocking sockets. The technique is similar to non-blocking connect() calls with select() or poll(), but in this case you feed the descriptors to epoll() and let the kernel figure everything out. The gotcha here is that the data field of the epoll_event structure is not a 4-attribute struct but a union. That means you can set only one of the fields, not two. For example, writing the fd field to store the socket descriptor overwrites the ptr field and vice versa.
Thus, the implementation for the client looks like this:
- open a socket.
- set the socket to non-blocking.
- call connect(); this doesn't wait around until it completes, and almost always fails immediately with EINPROGRESS.
- store the socket plus its connection state in a specialized structure, registered with epoll through the data.ptr attribute.
- open another socket.
- see if any sockets have completed connecting; write a little bit of data and close them.
- this may free up a couple of slots, after which new SYNs are sent.
The idea of the above is to develop software where the system becomes network-constrained. A lot of software, due to its architecture, doesn't use all of the network's bandwidth, but probably does use all of the CPU or memory. Threads and processes require stack space to operate, because they call functions and leave other data on the stack for reference. The 90's way of handling traffic was to add threads and processes, which increases the requirements for memory and CPU: context switching and process management become a bigger bottleneck (since that's what the kernel spends its time on). The Linux kernel now has a very nice O(1) scheduler, which is the default. So you could have 10,000 processes hanging around quite easily, but at some point you do noticeably see performance degradation, especially once the kernel needs to traverse the process lists every x (here 1) ms. That is why having one process or thread per socket is a bad idea.
Increasing throughput on LANs involves some more considerations and options for tweaking. Most Linux distributions are pre-configured for internet use, which means their window sizes are tuned differently than they would be for a LAN. This is because if the round-trip time is high and you send a large packet which somehow gets lost, the cost of resending that packet is high, so a smaller packet would do better. But if you only send 1 byte per packet, you're not getting anywhere either. Linux actually seems to auto-tune this in the kernel now. High-performance servers (and clients opening many connections) sometimes need a larger local port range to deal with the traffic. This can be modified non-permanently in /proc/sys/net/ipv4/ip_local_port_range.
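Concretely, that tweak looks like this (the write needs root and, being under /proc, does not survive a reboot; 1024–65535 is a common wide setting, not a value from the tests above):

```shell
# Current ephemeral port range (two numbers: low high)
cat /proc/sys/net/ipv4/ip_local_port_range

# Widen it until the next reboot (run as root)
echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
```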