Hello,
I have an application that receives events and reacts to them by sending a message to a server over a TCP socket.
I'm trying to reduce the time between the reception of an event and the confirmation that the message has been sent.
Right now I'm interested in the part that writes to the socket:
public void sendRawMessage(String msg) {
    try {
        _connection.send(msg);
    }
    catch (java.io.IOException ioe) {
        ioe.printStackTrace(System.out);
    }
}
and my 'connection' object:
// connection setup
_socket = new Socket(_host, _port);
_socket.setSoTimeout(3000);
_out = new DataOutputStream(new BufferedOutputStream(_socket.getOutputStream()));

// send method
public void send(String msg) throws IOException {
    if (!_exit) {
        _out.writeBytes(msg);
        _out.flush();
    }
}
I'm measuring the elapsed time just before and just after the sendRawMessage call (with System.nanoTime()).
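For reference, this is roughly how I'm timing it — a minimal sketch, not my real code: the TimedSend class and the stubbed-out send() are placeholders standing in for the actual socket write.

```java
// Minimal sketch of the measurement, assuming a send(msg) that does the
// actual socket write (stubbed out here so the example is self-contained).
public class TimedSend {

    static void send(String msg) {
        // placeholder for the real socket write
    }

    public static void main(String[] args) {
        String msg = "some ~180-byte payload...";
        long t0 = System.nanoTime();
        send(msg);
        long t1 = System.nanoTime();
        // nanoTime() is monotonic, so t1 - t0 is a valid elapsed interval
        System.out.println("elapsed us: " + (t1 - t0) / 1000);
    }
}
```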
The machine is an HP x86 server (two quad-core 3 GHz CPUs) running Solaris x86 with Gigabit Ethernet, and the server I'm sending my messages to runs on the same machine.
I get an average of 60µs for messages around 180 bytes long, and I'm trying to improve that, as I hear of people getting much better results for similar tasks.
Average CPU usage stays close to 0% over the life of the application, so resources are available and the machine isn't clogged up. I imagine the improvement should come from better coding or from configuration of the server / TCP stack, so I'm asking the gurus here for help :-)
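For what it's worth, one socket-level setting I keep seeing mentioned for latency-sensitive small writes is disabling Nagle's algorithm with setTcpNoDelay(true) — I'm not sure how much it matters over loopback, but here is a self-contained sketch of what I mean (the local ServerSocket is only there so the example runs standalone; NoDelayExample is not my real class):

```java
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: open a loopback connection with TCP_NODELAY set, so small writes
// are not held back by Nagle's algorithm waiting for an ACK.
public class NoDelayExample {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0); // throwaway local server
             Socket socket = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            socket.setTcpNoDelay(true);   // disable Nagle's algorithm
            socket.setSoTimeout(3000);    // same read timeout as in my code
            DataOutputStream out =
                new DataOutputStream(socket.getOutputStream());
            out.writeBytes("test message");
            out.flush();
            System.out.println("tcpNoDelay = " + socket.getTcpNoDelay());
        }
    }
}
```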
Do you have any ideas, or any resources to point me towards?
Thanks