You’re definitely going about this the wrong way.
Say you have two HTTP endpoints that do the following:
- Return the server’s current time
- Perform a write to the database
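To make the contrast concrete, here is a minimal sketch of two such endpoints (the paths `/time` and `/write` and the in-memory SQLite table are made up for illustration). The first one does almost no work per request; the second has to go through the database, so they will behave very differently under load:

```python
import http.server
import json
import sqlite3
import threading
import time
import urllib.request

# In-memory stand-in for the real database; one lock since sqlite3
# connections are not thread-safe by default.
DB = sqlite3.connect(":memory:", check_same_thread=False)
DB.execute("CREATE TABLE hits (ts REAL)")
LOCK = threading.Lock()

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/time":
            # Cheap endpoint: no I/O beyond the socket.
            body = json.dumps({"now": time.time()}).encode()
        elif self.path == "/write":
            # Expensive endpoint: every request hits the database.
            with LOCK:
                DB.execute("INSERT INTO hits VALUES (?)", (time.time(),))
                DB.commit()
            body = b'{"ok": true}'
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

Benchmarking only `/time` tells you nothing about how `/write` holds up, which is exactly the point.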
If you test the first one and successfully handle 100k connections, that doesn’t mean your site will reliably handle every kind of request at 100k connections.
I think you’re on the development side, not system/network. If you really want to test something, set up a server with enough test data in it to match what production could look like, then for each request you can make, define a reasonable response-time budget (usually < 500 ms or 1 s).
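A load test against such a budget can be sketched in a few lines. Everything here is an assumption for illustration: the local stand-in endpoint, the 500 ms budget, the 50 concurrent workers, and the 200-request sample size; replace the URL with your real server and tune the numbers:

```python
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Stand-in endpoint so this sketch runs on its own;
# point `url` at your real server instead.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

BUDGET_MS = 500  # per-request response-time budget from the text

def timed_get(_):
    """Issue one request and return its latency in milliseconds."""
    t0 = time.perf_counter()
    urllib.request.urlopen(url).read()
    return (time.perf_counter() - t0) * 1000

# Fire 200 requests with 50 concurrent workers.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(timed_get, range(200)))

# Check the tail, not the average: p95 is the 95th percentile.
p95 = statistics.quantiles(latencies, n=100)[94]
print(f"p95 latency: {p95:.1f} ms (budget: {BUDGET_MS} ms)")
```

Checking a percentile rather than the mean matters: an average can look fine while a slice of your users waits far past the budget.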
If you’re really aiming for more than 10k connections handled in parallel, I would advise getting advice from specialists on the hardware, network, and database side. Unless you’re just serving a fully static HTML site.
Also note that while some languages are better suited to near-real-time work, pretty much all of them have some unpredictability (malloc in C, for instance), and your computer may also switch to other tasks from time to time. If you really want to run the stress test and can’t generate enough requests with one computer, just set up another one; the only thing that matters in the end is the number of requests your server receives.