From: Bruce Richardson
To: Masaru OKI
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] Fix librte_pmd_ring: connect primary and secondary ring with correct port.
Date: Tue, 14 Oct 2014 11:17:53 +0100
Message-ID: <20141014101753.GA548@BRICHA3-MOBL>
In-Reply-To: <1412691105-16127-1-git-send-email-m-oki@stratosphere.co.jp>

On Tue, Oct 07, 2014 at 11:11:45PM +0900, Masaru OKI wrote:
> librte_pmd_ring provides created ports and attached ports.
> A packet sent to the created port is received from the attached port,
> and a packet sent to the attached port is received from the created port.
> This needs two rings, one for "create to attach" and one for
> "attach to create", but the current implementation uses only one ring
> for both rx and tx, which gives incorrect results.
> Fixed:
> - Make the ring carry both rx and tx
> - Connect the created (primary) ring and the attached (secondary) ring
> - Set m->port correctly, as librte_pmd_pcap does
>
> Signed-off-by: Masaru OKI

From my reading, this is really two patches:

1. add in a port id to packets received from an eth_ring port
2. change the rte_eth_ring_create function to create two sets of rings.

so perhaps this could be split up into two proposed patches.

Unfortunately, while I have no issue with the first part, I disagree with
the second. A ring, or a ring ethdev, is a unidirectional channel, and this
is explicitly by design. When you create or attach to a ring pmd instance
(or ring ethdev), you get exactly that: an ethdev instance that has a ring,
or a set of rings, internally. It's up to the application how to use those
rings. If you need a bidirectional channel, you need to create two ring
ethdevs, in exactly the same way that if you want bidirectional messaging
between two cores using rte_rings, you need to create two rings, one for
each direction.
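To make that concrete, here is a rough sketch of the two-ethdev pattern,
using the same rte_ring_create()/rte_eth_from_rings() calls that appear in
the patch below. The device and ring names, the 1024-slot ring size and the
single-queue setup are just illustrative, and error handling is abbreviated:

#include <rte_ring.h>
#include <rte_eth_ring.h>

/* Create two unidirectional ring ethdevs, "A" and "B", that together
 * form one bidirectional channel: what A transmits, B receives, and
 * vice versa. */
static int
create_bidir_channel(unsigned numa_node)
{
	/* one ring per direction */
	struct rte_ring *a_to_b[1], *b_to_a[1];

	a_to_b[0] = rte_ring_create("RG_A2B", 1024, numa_node,
			RING_F_SP_ENQ|RING_F_SC_DEQ);
	b_to_a[0] = rte_ring_create("RG_B2A", 1024, numa_node,
			RING_F_SP_ENQ|RING_F_SC_DEQ);
	if (a_to_b[0] == NULL || b_to_a[0] == NULL)
		return -1;

	/* ethdev A receives from b_to_a and transmits on a_to_b ... */
	if (rte_eth_from_rings("ring_A", b_to_a, 1, a_to_b, 1, numa_node))
		return -1;
	/* ... and ethdev B is wired up the opposite way around */
	if (rte_eth_from_rings("ring_B", a_to_b, 1, b_to_a, 1, numa_node))
		return -1;

	return 0;
}

The application then just uses ports A and B as the two ends of the
channel, with each ring ethdev staying unidirectional per queue pair.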
regards,
/Bruce

> ---
>  lib/librte_pmd_ring/rte_eth_ring.c | 36 +++++++++++++++++++++++++++---------
>  1 file changed, 27 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/librte_pmd_ring/rte_eth_ring.c b/lib/librte_pmd_ring/rte_eth_ring.c
> index 4f1b6ed..d926d00 100644
> --- a/lib/librte_pmd_ring/rte_eth_ring.c
> +++ b/lib/librte_pmd_ring/rte_eth_ring.c
> @@ -54,6 +54,7 @@ struct ring_queue {
>  	rte_atomic64_t rx_pkts;
>  	rte_atomic64_t tx_pkts;
>  	rte_atomic64_t err_pkts;
> +	uint8_t in_port;
>  };
>  
>  struct pmd_internals {
> @@ -80,10 +81,14 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  	struct ring_queue *r = q;
>  	const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng,
>  			ptrs, nb_bufs);
> +	uint16_t cnt;
>  	if (r->rng->flags & RING_F_SC_DEQ)
>  		r->rx_pkts.cnt += nb_rx;
>  	else
>  		rte_atomic64_add(&(r->rx_pkts), nb_rx);
> +	for (cnt = 0; cnt < nb_rx; cnt++) {
> +		bufs[cnt]->port = r->in_port;
> +	}
>  	return nb_rx;
>  }
>  
> @@ -129,6 +134,8 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	dev->data->rx_queues[rx_queue_id] = &internals->rx_ring_queues[rx_queue_id];
> +	internals->rx_ring_queues[rx_queue_id].in_port = dev->data->port_id;
> +
>  	return 0;
>  }
>  
> @@ -319,23 +326,34 @@ eth_dev_ring_create(const char *name, const unsigned numa_node,
>  	/* rx and tx are so-called from point of view of first port.
>  	 * They are inverted from the point of view of second port
>  	 */
> -	struct rte_ring *rxtx[RTE_PMD_RING_MAX_RX_RINGS];
> +	struct rte_ring *rx[RTE_PMD_RING_MAX_RX_RINGS];
> +	struct rte_ring *tx[RTE_PMD_RING_MAX_TX_RINGS];
>  	unsigned i;
> -	char rng_name[RTE_RING_NAMESIZE];
> +	char rng_rxname[RTE_RING_NAMESIZE];
> +	char rng_txname[RTE_RING_NAMESIZE];
>  	unsigned num_rings = RTE_MIN(RTE_PMD_RING_MAX_RX_RINGS,
>  			RTE_PMD_RING_MAX_TX_RINGS);
>  
>  	for (i = 0; i < num_rings; i++) {
> -		snprintf(rng_name, sizeof(rng_name), "ETH_RXTX%u_%s", i, name);
> -		rxtx[i] = (action == DEV_CREATE) ?
> -			rte_ring_create(rng_name, 1024, numa_node,
> -					RING_F_SP_ENQ|RING_F_SC_DEQ) :
> -			rte_ring_lookup(rng_name);
> -		if (rxtx[i] == NULL)
> +		snprintf(rng_rxname, sizeof(rng_rxname),
> +			"ETH_RX%u_%s", i, name);
> +		snprintf(rng_txname, sizeof(rng_txname),
> +			"ETH_TX%u_%s", i, name);
> +		rx[i] = (action == DEV_CREATE) ?
> +			rte_ring_create(rng_rxname, 1024, numa_node,
> +					RING_F_SP_ENQ|RING_F_SC_DEQ) :
> +			rte_ring_lookup(rng_txname);
> +		if (rx[i] == NULL)
> +			return -1;
> +		tx[i] = (action == DEV_CREATE) ?
> +			rte_ring_create(rng_txname, 1024, numa_node,
> +					RING_F_SP_ENQ|RING_F_SC_DEQ) :
> +			rte_ring_lookup(rng_rxname);
> +		if (tx[i] == NULL)
>  			return -1;
>  	}
>  
> -	if (rte_eth_from_rings(name, rxtx, num_rings, rxtx, num_rings, numa_node))
> +	if (rte_eth_from_rings(name, rx, num_rings, tx, num_rings, numa_node))
>  		return -1;
>  
>  	return 0;
> -- 
> 1.9.1
> 