SimpleEventBuilder/testing/connection_method
MasterRoby3 db138d1b60 Final Commit 2023-09-27 17:47:46 +02:00
ev_builder_epoll.out Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
ev_builder_select.out Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
event_builder_epoll.cxx Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
event_builder_select.cxx Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
fragment_dataformat.h Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
notes_on_connection_method.md Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
prov.out Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
provider.cxx Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00
readme.txt Final Commit 2023-09-27 17:47:46 +02:00
spawn_clients.sh Various changes and extended preliminary testing 2023-08-21 23:39:16 +02:00

readme.txt

This test measures how much data can be gathered in 60 seconds with select and with epoll.
For that reason, I'm removing the random delay between provider data generations and sending the data as soon as it's ready.
I'll spawn 50 clients and let them connect and send data as fast as they can while I measure the total data received on the server side.
I'm ignoring all bottlenecks for now. All tests in this section are carried out on my local machine.
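
For reference, the server-side measurement loop looks roughly like the sketch below. This is a simplified, hypothetical version (the port number, buffer sizes and timeouts are placeholders, not the values actually used); the real code is in event_builder_epoll.cxx.

// Sketch: accept connections and count every byte received until 60 s elapse.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <chrono>
#include <cstdint>
#include <cstdio>

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9000);              // placeholder port
    bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
    listen(listen_fd, 128);

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    uint64_t total_bytes = 0;
    char buf[65536];
    epoll_event events[64];
    auto start = std::chrono::steady_clock::now();

    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(60)) {
        int n = epoll_wait(ep, events, 64, 100);   // short timeout so the clock check runs
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                 // new client: register it
                int client = accept(listen_fd, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {                               // data ready: drain and count
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) { epoll_ctl(ep, EPOLL_CTL_DEL, fd, nullptr); close(fd); }
                else total_bytes += r;
            }
        }
    }
    std::printf("received %llu bytes in 60 s\n", (unsigned long long)total_bytes);
    return 0;
}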

EDIT: by default every client allocates a massive 100 megabytes of space to send big chunks of data.
To test with a bigger number of descriptors (I'm using 100, 500 and 1000 clients) I'm lowering the chunk size as well as changing
the generation method of the data to send.
So I'm discarding the first result obtained with 50 clients and re-executing with all the different client counts.
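
The adjusted client side could look roughly like this (a hedged sketch only; the chunk size, fill pattern, address and port are placeholders, not what provider.cxx actually does):

// Sketch: fill a small reusable chunk with a cheap pattern and push it
// back-to-back with no delay, instead of one 100 MB allocation.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <vector>
#include <cstdint>

int main() {
    constexpr size_t kChunkSize = 64 * 1024;       // assumed smaller chunk
    std::vector<uint8_t> chunk(kChunkSize);
    for (size_t i = 0; i < kChunkSize; ++i)
        chunk[i] = static_cast<uint8_t>(i);        // cheap deterministic fill

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                    // must match the server
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    connect(fd, (sockaddr*)&srv, sizeof(srv));

    // No random delay: send as fast as possible until the server goes away.
    while (send(fd, chunk.data(), chunk.size(), 0) > 0) {}
    close(fd);
    return 0;
}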


Since there was no big performance difference (as expected: with all clients active, epoll also has to iterate through everything, because it gets notified by all of them),
I'm retesting with a timeout on the majority of clients. Files named *_TIMEOUT.csv have only 50 clients without timeout, files named *TIMEOUT_HARD.csv have only 2.
The timeout is quite aggressive, 1 second, to highlight the difference in performance.
With epoll we see a big rise in throughput with the 2 active clients, since we stop burning CPU time iterating over idle descriptors during the 30-second analysis;
with 50 active clients it's not as evident, but at 50 we already start to see big improvements of epoll over select.
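
To make the iteration cost concrete, a minimal select-style receive pass looks roughly like the sketch below (simplified, not the exact code in event_builder_select.cxx; drain_ready is a made-up helper name). It shows the FD_ISSET scan over every tracked descriptor that select forces on each wake-up, which is exactly what epoll avoids when most clients are idle.

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>
#include <cstdint>

// One pass of the select loop: wait briefly, then scan ALL clients.
uint64_t drain_ready(const std::vector<int>& clients, fd_set& master) {
    fd_set readable = master;
    int maxfd = 0;
    for (int fd : clients) if (fd > maxfd) maxfd = fd;

    timeval tv{0, 100000};                         // 100 ms timeout
    uint64_t bytes = 0;
    char buf[65536];

    if (select(maxfd + 1, &readable, nullptr, nullptr, &tv) > 0) {
        // O(number of clients) scan on every wake-up, ready or not.
        for (int fd : clients) {
            if (FD_ISSET(fd, &readable)) {
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r > 0) bytes += r;
            }
        }
    }
    return bytes;
}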

Now I'll try to multithread epoll, to see if the amount of data acquired increases. I'll measure the full received bandwidth; I expect a semi-linear increase in performance.
For the multithreading test, check the threads folder.
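
A rough idea of that direction is sketched below (an assumption-heavy sketch, not the code in the threads folder): N worker threads, each with its own epoll instance; accepted connections are handed out round-robin and every worker counts the bytes it receives, so the aggregate bandwidth is the sum over workers.

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <thread>
#include <vector>
#include <cstdint>

struct Worker {
    int ep = epoll_create1(0);                 // one epoll instance per thread
    std::atomic<uint64_t> bytes{0};            // per-worker byte counter

    void add_client(int fd) {
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
    }

    void run(std::atomic<bool>& stop) {
        epoll_event events[64];
        char buf[65536];
        while (!stop.load()) {
            int n = epoll_wait(ep, events, 64, 100);
            for (int i = 0; i < n; ++i) {
                ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
                if (r > 0) bytes += r;
                else { epoll_ctl(ep, EPOLL_CTL_DEL, events[i].data.fd, nullptr);
                       close(events[i].data.fd); }
            }
        }
    }
};

// In the accept loop: workers[next++ % workers.size()].add_client(client_fd);
// At the end of the run: total bandwidth = sum of worker.bytes over all workers.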