DPDK patches and discussions
 help / color / mirror / Atom feed
* [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
@ 2022-03-01  7:28 Yuying Zhang
  2022-03-01  8:38 ` Ling, WeiX
  2022-03-01  8:43 ` David Marchand
  0 siblings, 2 replies; 8+ messages in thread
From: Yuying Zhang @ 2022-03-01  7:28 UTC (permalink / raw)
  To: dev, maxime.coquelin, chenbo.xia; +Cc: Yuying Zhang, stable

The PMD frees a packet mbuf back into its original mempool
after sending a packet. However, old data is not cleaned up
which causes error in payload of new packets. This patch clear
data of packet mbuf before freeing mbuf.

Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
Cc: stable@dpdk.org

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..92ed07a334 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -417,10 +417,11 @@ static uint16_t
 eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
 	struct vhost_queue *r = q;
-	uint16_t i, nb_tx = 0;
+	uint16_t i, j, nb_tx = 0;
 	uint16_t nb_send = 0;
 	uint64_t nb_bytes = 0;
 	uint64_t nb_missed = 0;
+	void *data = NULL;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -483,8 +484,13 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	for (i = nb_tx; i < nb_bufs; i++)
 		vhost_count_xcast_packets(r, bufs[i]);
 
-	for (i = 0; likely(i < nb_tx); i++)
+	for (i = 0; likely(i < nb_tx); i++) {
+		for (j = 0; j < bufs[i]->nb_segs; j++) {
+			data = rte_pktmbuf_mtod(bufs[i], void *);
+			memset(data, 0, bufs[i]->data_len);
+		}
 		rte_pktmbuf_free(bufs[i]);
+	}
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  7:28 [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts Yuying Zhang
@ 2022-03-01  8:38 ` Ling, WeiX
  2022-03-01  8:43 ` David Marchand
  1 sibling, 0 replies; 8+ messages in thread
From: Ling, WeiX @ 2022-03-01  8:38 UTC (permalink / raw)
  To: Zhang, Yuying, dev, maxime.coquelin, Xia, Chenbo; +Cc: Zhang, Yuying, stable

> -----Original Message-----
> From: Yuying Zhang <yuying.zhang@intel.com>
> Sent: Tuesday, March 1, 2022 3:28 PM
> To: dev@dpdk.org; maxime.coquelin@redhat.com; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: Zhang, Yuying <yuying.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> The PMD frees a packet mbuf back into its original mempool after sending a
> packet. However, old data is not cleaned up which causes error in payload of
> new packets. This patch clear data of packet mbuf before freeing mbuf.
> 
> Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  7:28 [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts Yuying Zhang
  2022-03-01  8:38 ` Ling, WeiX
@ 2022-03-01  8:43 ` David Marchand
  2022-03-01  9:02   ` Zhang, Yuying
  1 sibling, 1 reply; 8+ messages in thread
From: David Marchand @ 2022-03-01  8:43 UTC (permalink / raw)
  To: Yuying Zhang; +Cc: dev, Maxime Coquelin, Xia, Chenbo, dpdk stable

On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
>
> The PMD frees a packet mbuf back into its original mempool
> after sending a packet. However, old data is not cleaned up
> which causes error in payload of new packets. This patch clear
> data of packet mbuf before freeing mbuf.

This patch looks wrong to me.
What is the actual issue you want to fix?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  8:43 ` David Marchand
@ 2022-03-01  9:02   ` Zhang, Yuying
  2022-03-01  9:47     ` David Marchand
  0 siblings, 1 reply; 8+ messages in thread
From: Zhang, Yuying @ 2022-03-01  9:02 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Maxime Coquelin, Xia, Chenbo, dpdk stable

Hi Marchand,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, March 1, 2022 4:44 PM
> To: Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
> >
> > The PMD frees a packet mbuf back into its original mempool after
> > sending a packet. However, old data is not cleaned up which causes
> > error in payload of new packets. This patch clear data of packet mbuf
> > before freeing mbuf.
> 
> This patch looks wrong to me.
> What is the actual issue you want to fix?

eth_vhost_tx() frees the packet mbuf back into its original mempool every time after a packet sent without clearing the data field.
Then packet transmit  function will get bulk directly without reset. New generated packet contains old data of previous packet. This is wrong.

> 
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  9:02   ` Zhang, Yuying
@ 2022-03-01  9:47     ` David Marchand
  2022-03-01 17:05       ` Stephen Hemminger
  2022-03-02  8:58       ` Zhang, Yuying
  0 siblings, 2 replies; 8+ messages in thread
From: David Marchand @ 2022-03-01  9:47 UTC (permalink / raw)
  To: Zhang, Yuying; +Cc: dev, Maxime Coquelin, Xia, Chenbo, dpdk stable

On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com> wrote:
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Tuesday, March 1, 2022 4:44 PM
> > To: Zhang, Yuying <yuying.zhang@intel.com>
> > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> >
> > On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
> > >
> > > The PMD frees a packet mbuf back into its original mempool after
> > > sending a packet. However, old data is not cleaned up which causes
> > > error in payload of new packets. This patch clear data of packet mbuf
> > > before freeing mbuf.
> >
> > This patch looks wrong to me.
> > What is the actual issue you want to fix?
>
> eth_vhost_tx() frees the packet mbuf back into its original mempool every time after a packet sent without clearing the data field.
> Then packet transmit  function will get bulk directly without reset. New generated packet contains old data of previous packet. This is wrong.

With the proposed patch, if the mbuf refcnt is != 0, you are shooting
the data while some other part of the application might be needing it.

Plus, there should be no expectation about a mbuf data content when
retrieving one from a mempool.
The only bytes that are guaranteed to be initialised by the mbuf API
are its metadata.


If there is an issue somewhere in dpdk where the mbuf data content is
expected to be 0 on allocation, please point at it.
Or share the full test that failed.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  9:47     ` David Marchand
@ 2022-03-01 17:05       ` Stephen Hemminger
  2022-03-02  8:58       ` Zhang, Yuying
  1 sibling, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2022-03-01 17:05 UTC (permalink / raw)
  To: David Marchand
  Cc: Zhang, Yuying, dev, Maxime Coquelin, Xia, Chenbo, dpdk stable

On Tue, 1 Mar 2022 10:47:32 +0100
David Marchand <david.marchand@redhat.com> wrote:

> On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com> wrote:
> > > -----Original Message-----
> > > From: David Marchand <david.marchand@redhat.com>
> > > Sent: Tuesday, March 1, 2022 4:44 PM
> > > To: Zhang, Yuying <yuying.zhang@intel.com>
> > > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> > >
> > > On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:  
> > > >
> > > > The PMD frees a packet mbuf back into its original mempool after
> > > > sending a packet. However, old data is not cleaned up which causes
> > > > error in payload of new packets. This patch clear data of packet mbuf
> > > > before freeing mbuf.  
> > >
> > > This patch looks wrong to me.
> > > What is the actual issue you want to fix?  
> >
> > eth_vhost_tx() frees the packet mbuf back into its original mempool every time after a packet sent without clearing the data field.
> > Then packet transmit  function will get bulk directly without reset. New generated packet contains old data of previous packet. This is wrong.  
> 
> With the proposed patch, if the mbuf refcnt is != 0, you are shooting
> the data while some other part of the application might be needing it.
> 
> Plus, there should be no expectation about a mbuf data content when
> retrieving one from a mempool.
> The only bytes that are guaranteed to be initialised by the mbuf API
> are its metadata.
> 
> 
> If there is an issue somewhere in dpdk where the mbuf data content is
> expected to be 0 on allocation, please point at it.
> Or share the full test that failed.
> 
> 

Agree. There is no guarantee that mbuf you get was not just used by
some other driver or library. Only the fields in rte_pktmbuf_reset are
guaranteed to be set.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-01  9:47     ` David Marchand
  2022-03-01 17:05       ` Stephen Hemminger
@ 2022-03-02  8:58       ` Zhang, Yuying
  2022-03-03  6:49         ` Xia, Chenbo
  1 sibling, 1 reply; 8+ messages in thread
From: Zhang, Yuying @ 2022-03-02  8:58 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Maxime Coquelin, Xia, Chenbo, dpdk stable, stephen

Hi Marchand,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, March 1, 2022 5:48 PM
> To: Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com>
> wrote:

...

> >
> > eth_vhost_tx() frees the packet mbuf back into its original mempool every
> time after a packet sent without clearing the data field.
> > Then packet transmit  function will get bulk directly without reset. New
> generated packet contains old data of previous packet. This is wrong.
> 
> With the proposed patch, if the mbuf refcnt is != 0, you are shooting the data
> while some other part of the application might be needing it.
> 
> Plus, there should be no expectation about a mbuf data content when retrieving
> one from a mempool.
> The only bytes that are guaranteed to be initialised by the mbuf API are its
> metadata.
> 
> 
> If there is an issue somewhere in dpdk where the mbuf data content is expected
> to be 0 on allocation, please point at it.
> Or share the full test that failed.

According to the test_plan guide of dpdk (https://doc.dpdk.org/dts/test_plans/loopback_virtio_user_server_mode_test_plan.html),
Test Case 13 (loopback packed ring all path payload check test using server mode and multi-queues), the payload of each packet must be the same.
The packet of first stream is initialized value 0. Then this packet is put back into mempool(actually, the local cache of the core). 
The packet of rest stream is got from local_cache directly and contains the first packet's header data in the payload. Therefore, the payload of the packets
are different. 

> 
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
  2022-03-02  8:58       ` Zhang, Yuying
@ 2022-03-03  6:49         ` Xia, Chenbo
  0 siblings, 0 replies; 8+ messages in thread
From: Xia, Chenbo @ 2022-03-03  6:49 UTC (permalink / raw)
  To: Zhang, Yuying, David Marchand; +Cc: dev, Maxime Coquelin, dpdk stable, stephen

> -----Original Message-----
> From: Zhang, Yuying <yuying.zhang@intel.com>
> Sent: Wednesday, March 2, 2022 4:59 PM
> To: David Marchand <david.marchand@redhat.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>; Xia,
> Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>;
> stephen@networkplumber.org
> Subject: RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending
> pkts
> 
> Hi Marchand,
> 
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Tuesday, March 1, 2022 5:48 PM
> > To: Zhang, Yuying <yuying.zhang@intel.com>
> > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending
> pkts
> >
> > On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com>
> > wrote:
> 
> ...
> 
> > >
> > > eth_vhost_tx() frees the packet mbuf back into its original mempool every
> > time after a packet sent without clearing the data field.
> > > Then packet transmit  function will get bulk directly without reset. New
> > generated packet contains old data of previous packet. This is wrong.
> >
> > With the proposed patch, if the mbuf refcnt is != 0, you are shooting the
> data
> > while some other part of the application might be needing it.
> >
> > Plus, there should be no expectation about a mbuf data content when
> retrieving
> > one from a mempool.
> > The only bytes that are guaranteed to be initialised by the mbuf API are its
> > metadata.
> >
> >
> > If there is an issue somewhere in dpdk where the mbuf data content is
> expected
> > to be 0 on allocation, please point at it.
> > Or share the full test that failed.
> 
> According to the test_plan guide of dpdk
> (https://doc.dpdk.org/dts/test_plans/loopback_virtio_user_server_mode_test_pla
> n.html),
> Test Case 13 (loopback packed ring all path payload check test using server
> mode and multi-queues), the payload of each packet must be the same.
> The packet of first stream is initialized value 0. Then this packet is put
> back into mempool(actually, the local cache of the core).
> The packet of rest stream is got from local_cache directly and contains the
> first packet's header data in the payload. Therefore, the payload of the
> packets
> are different.

Could you explain more about the problem?

But anyway I think this fix is wrong. After we're clear about the problem,
there should be another solution.

Thanks,
Chenbo

> 
> >
> >
> > --
> > David Marchand


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-03-03  6:49 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-01  7:28 [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts Yuying Zhang
2022-03-01  8:38 ` Ling, WeiX
2022-03-01  8:43 ` David Marchand
2022-03-01  9:02   ` Zhang, Yuying
2022-03-01  9:47     ` David Marchand
2022-03-01 17:05       ` Stephen Hemminger
2022-03-02  8:58       ` Zhang, Yuying
2022-03-03  6:49         ` Xia, Chenbo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).