Hello,
I am testing the NAS Parallel Benchmarks (NPB) on LAM in a fully simulated
environment. Both the computing nodes and the network are simulated: Simics
for the nodes and a custom interconnection network simulator for the network.
We stress the network to study congestion issues on it, but we are getting
some faulty results, possibly due to timeouts in LAM. We are using the
-ssi rpi lamd option so that every communication goes over UDP, because TCP
congestion control skews our tests.
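For reference, this is roughly how the runs are launched; a minimal sketch, where the host file, process count, and benchmark binary are placeholders, not values from this post:

```shell
# Boot the LAM runtime on the hosts in the boot schema (placeholder file name)
lamboot hostfile

# Select the lamd RPI so MPI traffic is relayed through the LAM daemons
# over UDP instead of direct TCP connections.
# "-np 4" and "./bt.A.4" are placeholder values for illustration only.
mpirun -ssi rpi lamd -np 4 ./bt.A.4
```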
Our network simulator never loses a packet, but I think LAM timeouts are
occurring. Could LAM itself be dropping packets because of full buffers? How
can I tune the LAM timeouts so that they do not fire?
I have changed LAM_TO_DLO_ACK from 500000 to 50000000, but I think the
application now takes much longer. I have also changed TO_DLO_ESTIMATE from
200 to 2000 and DOMAXPENDING from 3 to 30, without any success. What is the
precise meaning of these variables?
Given that the application now takes longer, could it be losing packets and
slowing down precisely because of the increased LAM_TO_DLO_ACK? To stress the
network we have made it very slow, but injected packets never get lost; they
are buffered in queues.
Any help would be appreciated.
----------------------------------------
Francisco Javier Ridruejo Pérez
Red Académica i2BASK (UPV/EHU)
Parque Tecnológico de San Sebastián
Pº Mikeletegi, 69 - Torre Arbide Norte
20009 Donostia - San Sebastián
Tel.: +34 943 018 705
Fax.: +34 943 015 590
E-mail: franciscojavier.ridruejo_at_[hidden]
----------------------------------------