DPDK patches and discussions
* [PATCH] net/af_packet: fix socket close on device stop
@ 2025-02-04 16:45 Tudor Cornea
  2025-02-07  1:26 ` Stephen Hemminger
  2025-02-07 17:57 ` Stephen Hemminger
  0 siblings, 2 replies; 3+ messages in thread
From: Tudor Cornea @ 2025-02-04 16:45 UTC (permalink / raw)
  To: stephen; +Cc: dev, Tudor Cornea

Currently, rte_eth_dev_stop() closes the PMD's packet sockets.
If we attempt to start the port again, socket-related operations
will not work correctly.

This can be fixed by closing the sockets in the same place in
which we currently free the memory, in eth_dev_close().

If an application calls rte_eth_dev_stop() on a port managed
by the af_packet PMD, the port becomes unusable. This is in contrast
with ports managed by other drivers (e.g. virtio).

I was also able to reproduce the issue using testpmd.

sudo ip link add test-veth0 type veth peer name test-veth1

sudo ip link set test-veth0 up
sudo ip link set test-veth1 up

AF_PACKET_ARGS=\
"blocksz=4096,framesz=2048,framecnt=512,qpairs=1,qdisc_bypass=0"

sudo ./dpdk-testpmd \
        -l 0-3 \
        -m 1024 \
        --no-huge \
        --no-shconf \
        --no-pci \
        --vdev=net_af_packet0,iface=test-veth0,${AF_PACKET_ARGS} \
        --vdev=net_af_packet1,iface=test-veth1,${AF_PACKET_ARGS} \
        -- \
        -i

testpmd> start tx_first

Forwarding will start, and we will see traffic on the interfaces.

testpmd> stop
testpmd> port stop 0
Stopping ports...
Checking link statuses...
Done
testpmd> port stop 1
Stopping ports...
Checking link statuses...
Done

testpmd> port start 0
AFPACKET: eth_dev_macaddr_set(): receive socket not found
Port 0: CA:65:81:63:81:B2
Checking link statuses...
Done
testpmd> port start 1
AFPACKET: eth_dev_macaddr_set(): receive socket not found
Port 1: CA:12:D0:BE:93:3F
Checking link statuses...
Done

testpmd> start tx_first

When we start forwarding again, there is no traffic on the
interfaces. This does not happen when testing with other PMDs
(e.g. virtio).

With the patch applied, the port re-initializes correctly:

testpmd> port start 0
Port 0: CA:65:81:63:81:B2
Checking link statuses...
Done

Fixes: 364e08f2bbc0 ("af_packet: add PMD for AF_PACKET-based virtual devices")

Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
---
 drivers/net/af_packet/rte_eth_af_packet.c | 30 +++++++++++------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index ceb8d9356a..efae15a493 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -357,27 +357,12 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	unsigned i;
-	int sockfd;
 	struct pmd_internals *internals = dev->data->dev_private;
 
 	for (i = 0; i < internals->nb_queues; i++) {
-		sockfd = internals->rx_queue[i].sockfd;
-		if (sockfd != -1)
-			close(sockfd);
-
-		/* Prevent use after free in case tx fd == rx fd */
-		if (sockfd != internals->tx_queue[i].sockfd) {
-			sockfd = internals->tx_queue[i].sockfd;
-			if (sockfd != -1)
-				close(sockfd);
-		}
-
-		internals->rx_queue[i].sockfd = -1;
-		internals->tx_queue[i].sockfd = -1;
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-
 	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
@@ -474,6 +459,7 @@ eth_dev_close(struct rte_eth_dev *dev)
 	struct pmd_internals *internals;
 	struct tpacket_req *req;
 	unsigned int q;
+	int sockfd;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -484,6 +470,20 @@ eth_dev_close(struct rte_eth_dev *dev)
 	internals = dev->data->dev_private;
 	req = &internals->req;
 	for (q = 0; q < internals->nb_queues; q++) {
+		sockfd = internals->rx_queue[q].sockfd;
+		if (sockfd != -1)
+			close(sockfd);
+
+		/* Prevent use after free in case tx fd == rx fd */
+		if (sockfd != internals->tx_queue[q].sockfd) {
+			sockfd = internals->tx_queue[q].sockfd;
+			if (sockfd != -1)
+				close(sockfd);
+		}
+
+		internals->rx_queue[q].sockfd = -1;
+		internals->tx_queue[q].sockfd = -1;
+
 		munmap(internals->rx_queue[q].map,
 			2 * req->tp_block_size * req->tp_block_nr);
 		rte_free(internals->rx_queue[q].rd);
-- 
2.34.1



* Re: [PATCH] net/af_packet: fix socket close on device stop
  2025-02-04 16:45 [PATCH] net/af_packet: fix socket close on device stop Tudor Cornea
@ 2025-02-07  1:26 ` Stephen Hemminger
  2025-02-07 17:57 ` Stephen Hemminger
  1 sibling, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2025-02-07  1:26 UTC (permalink / raw)
  To: Tudor Cornea; +Cc: dev

On Tue,  4 Feb 2025 18:45:08 +0200
Tudor Cornea <tudor.cornea@gmail.com> wrote:

> Currently, if we call rte_eth_dev_stop(), the sockets are closed.
> If we attempt to start the port again, socket related operations
> will not work correctly.
>
> [...]
>
> Fixes: 364e08f2bbc0 ("af_packet: add PMD for AF_PACKET-based virtual devices")
>
> Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>

Makes sense, applied to next-net


* Re: [PATCH] net/af_packet: fix socket close on device stop
  2025-02-04 16:45 [PATCH] net/af_packet: fix socket close on device stop Tudor Cornea
  2025-02-07  1:26 ` Stephen Hemminger
@ 2025-02-07 17:57 ` Stephen Hemminger
  1 sibling, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2025-02-07 17:57 UTC (permalink / raw)
  To: Tudor Cornea; +Cc: dev

On Tue,  4 Feb 2025 18:45:08 +0200
Tudor Cornea <tudor.cornea@gmail.com> wrote:

> Currently, if we call rte_eth_dev_stop(), the sockets are closed.
> If we attempt to start the port again, socket related operations
> will not work correctly.
>
> [...]
>
> Fixes: 364e08f2bbc0 ("af_packet: add PMD for AF_PACKET-based virtual devices")
>
> Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>


Applied to next-net
Should this go to stable as well?

