This test measures how much data can be gathered in 60 seconds with select and with epoll.

For that reason, I'm removing the random delay between provider data generations and sending the data as soon as it's ready.

I'll spawn 50 clients, let them connect and send data as fast as they can, and measure the total data received on the server side.

I'm ignoring all bottlenecks for now. All tests in this section are carried out on my local machine.

EDIT: by default, every client allocates a massive 100 megabytes of space to send big chunks of data.

To test with larger numbers of descriptors (100, 500, 1000, 5000, 10000 clients), I'm lowering the chunk size and changing how the data to send is generated.

So I'm discarding the first result with 50 clients and re-running with all the different client counts.

Since there is no big performance difference (as expected: with all clients active, epoll also has to iterate through every descriptor, because it gets notified by all of them), I'm retesting with a timeout on the majority of clients. Files named *_TIMEOUT.csv have only 50 clients without a timeout; files named *TIMEOUT_HARD.csv have only 2 clients. The timeout is pretty hard, we're talking 1 second, to highlight the difference in performance.

With epoll we see a big rise in throughput with the 2 active clients, since we cut the CPU time spent iterating and make better use of the 30-second analysis window; with 50 clients the effect is less evident, but at 50 we already start to see big improvements with epoll versus select.

Now I'll try multithreading epoll to see if the amount of data acquired increases. I'll measure the full bandwidth received; I expect a semi-linear increase in performance.

For the multithreading tests, check the threads folder.