* [dpdk-dev] 82599 TX flush on carrier loss
@ 2019-12-13 22:25 Dave Burton
From: Dave Burton @ 2019-12-13 22:25 UTC
To: dev
Howdy!
I would like to know how to get the TX queue drained when the link is down (rte_eth_link_get_nowait() reports rte_eth_link.link_status == 0).
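(For reference, this is roughly how we detect the link-down condition; port_id stands in for our egress port and error handling is omitted:)

#include <string.h>
#include <rte_ethdev.h>

static int
egress_link_is_down(uint16_t port_id)
{
        struct rte_eth_link link;

        memset(&link, 0, sizeof(link));
        rte_eth_link_get_nowait(port_id, &link);
        /* link_status is 0 (ETH_LINK_DOWN) while the cable is pulled */
        return link.link_status == ETH_LINK_DOWN;
}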
Background: our appliance has multi-port 82599 multimode-fiber NICs and acts as a bump-in-the-wire, taking packets off port N, perhaps processing them, and passing them to port N+1. In this particular case we are a simple packet forwarder (a sketch of the forwarding loop follows the test description below). For the test, with a client connected to port N and the server connected to port N+1:
client: ping -i0.2 -f $server_ip
server: tcpdump -i $iface
Let the ping flood run for a while, then pull the cable from port N+1 (link goes down). The client ping starts printing dots, and the tcpdump on the server stops seeing ICMP echoes from the client. After a few tens of seconds, kill the client ping. After another few tens of seconds, reconnect the cable.
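For context, the forwarding path is essentially the textbook burst loop below (a simplified sketch: queue 0 only, no processing step, and BURST_SIZE is just an illustrative constant):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
forward_burst(uint16_t rx_port, uint16_t tx_port)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
        nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

        /* free whatever the TX ring did not accept */
        for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
}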
Immediately upon link-up, the server sees ~31 ICMP-echo packets delivered in sequence, resuming from the last packet received before the cable was pulled.
The desire here is that all these packets be discarded, or that we can flush them somehow. I have tried rte_eth_dev_tx_queue_stop() followed by rte_eth_dev_tx_queue_start(), but this makes no difference. I can get these packets "flushed" by stopping the whole device (rte_eth_dev_stop() then rte_eth_dev_start()), but that is undesirable for other reasons.
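(What I tried on link-down looks roughly like this; queue 0 only, and tx_port is our egress port:)

#include <stdio.h>
#include <rte_ethdev.h>

static void
try_flush_tx_queue(uint16_t tx_port)
{
        /* attempted flush: stop and restart the TX queue while the link is down */
        if (rte_eth_dev_tx_queue_stop(tx_port, 0) != 0)
                printf("tx_queue_stop failed on port %u\n", tx_port);
        if (rte_eth_dev_tx_queue_start(tx_port, 0) != 0)
                printf("tx_queue_start failed on port %u\n", tx_port);
}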
Is there a way to configure an 82599 fiber to discard packets if the device cannot TX them? If not, is there a way to flush all the TX queues so they will not be delivered once the link is restored?
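The fallback we would rather avoid is doing the drop in software, i.e. checking the link before every burst and freeing the mbufs ourselves (a sketch, reusing egress_link_is_down() and the variables from the forwarding loop above). It also does nothing for packets already sitting in the TX descriptor ring, which is exactly the problem:

if (egress_link_is_down(tx_port)) {
        /* drop in software rather than queue onto a dead link */
        for (i = 0; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
} else {
        nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);
        for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
}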
— Dave