To: Cian Ferriter <cian.ferriter@intel.com>,
 Bruce Richardson <bruce.richardson@intel.com>,
 John McNamara <john.mcnamara@intel.com>,
 Marko Kovacevic <marko.kovacevic@intel.com>
Cc: dev@dpdk.org
References: <20190411110048.6626-1-cian.ferriter@intel.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <b0d748be-bbca-de64-d95b-3b47d9b0f491@intel.com>
Date: Mon, 27 May 2019 16:15:01 +0100
In-Reply-To: <20190411110048.6626-1-cian.ferriter@intel.com>
Subject: Re: [dpdk-dev] [PATCH 19.08] net/pcap: enable infinitely rxing a
	pcap file

On 4/11/2019 12:00 PM, Cian Ferriter wrote:
> It can be useful to use pcap files for some rudimentary performance
> testing. This patch enables this functionality in the pcap driver.
> 
> At a high level, this works by creating a ring of sufficient size to
> store the packets in the pcap file passed to the application. When the
> rx function for this mode is called, packets are dequeued from the ring
> for use by the application and also enqueued back on to the ring to be
> "received" again.
> 
> A tx_drop mode is also added since transmitting to a tx_pcap file isn't
> desirable at a high traffic rate.
> 
> Jumbo frames are not supported in this mode. When filling the ring at rx
> queue setup time, the presence of multi-segment mbufs is checked for.
> The PMD will exit on detection of these multi-segment mbufs.
> 
> Signed-off-by: Cian Ferriter <cian.ferriter@intel.com>
> ---
>  doc/guides/nics/pcap_ring.rst   |  15 +++
>  drivers/net/pcap/rte_eth_pcap.c | 209 ++++++++++++++++++++++++++++++--
>  2 files changed, 215 insertions(+), 9 deletions(-)
> 
> diff --git a/doc/guides/nics/pcap_ring.rst b/doc/guides/nics/pcap_ring.rst
> index c1ef9196b..45f55a345 100644
> --- a/doc/guides/nics/pcap_ring.rst
> +++ b/doc/guides/nics/pcap_ring.rst
> @@ -106,6 +106,21 @@ Runtime Config Options
>  
>     --vdev 'net_pcap0,iface=eth0,phy_mac=1'
>  
> +- Use the RX PCAP file to infinitely receive packets
> +
> + In case the ``rx_pcap=`` configuration is set, the user may want to use the selected PCAP file for rudimentary
> + performance testing. This can be done with a ``devarg`` ``infinite_rx``, for example::
> +
> +   --vdev 'net_pcap0,rx_pcap=file_rx.pcap,infinite_rx=1,tx_drop=1'
> +
> + When this mode is used, it is recommended to use the ``tx_drop`` ``devarg``.
> +
> +- Drop all packets on transmit
> +
> + The user may want to drop all packets on tx for a device. This can be done with the ``tx_drop`` ``devarg``, for example::
> +
> +   --vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_drop=1'

Is there a performance drop when tx packets are directed to the 'loopback'
interface, like: --vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_iface=lo' ?
If there is no performance drop, I am for using the 'lo' interface instead of
adding a new devarg.

> +
>  Examples of Usage
>  ^^^^^^^^^^^^^^^^^
>  
> diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
> index 353538f16..b72db973a 100644
> --- a/drivers/net/pcap/rte_eth_pcap.c
> +++ b/drivers/net/pcap/rte_eth_pcap.c
<...>

> @@ -178,6 +186,40 @@ eth_pcap_gather_data(unsigned char *data, struct rte_mbuf *mbuf)
>  	}
>  }
>  
> +static uint16_t
> +eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> +{
> +	int i;
> +	struct pcap_rx_queue *pcap_q = queue;
> +
> +	if (unlikely(nb_pkts == 0))
> +		return 0;
> +
> +	if (rte_pktmbuf_alloc_bulk(pcap_q->mb_pool, bufs, nb_pkts) != 0)
> +		return 0;
> +
> +	for (i = 0; i < nb_pkts; i++) {
> +		struct rte_mbuf *pcap_buf;
> +		int err = rte_ring_dequeue(pcap_q->pkts, (void **)&pcap_buf);
> +		if (err)
> +			rte_panic("failed to dequeue from pcap pkts ring\n");

Please don't have any rte_panic() in the driver code; there are a few more in the patch.
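
An untested sketch of how the dequeue failure could be handled without
panicking, reusing the names from the patch:

	for (i = 0; i < nb_pkts; i++) {
		struct rte_mbuf *pcap_buf;
		int j;

		if (unlikely(rte_ring_dequeue(pcap_q->pkts,
				(void **)&pcap_buf) != 0)) {
			/* should not happen for a pre-filled ring; free the
			 * mbufs we will not fill and return what we have
			 */
			for (j = i; j < nb_pkts; j++)
				rte_pktmbuf_free(bufs[j]);
			break;
		}

		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *),
				rte_pktmbuf_mtod(pcap_buf, void *),
				pcap_buf->data_len);
		bufs[i]->data_len = pcap_buf->data_len;
		bufs[i]->pkt_len = pcap_buf->pkt_len;
		bufs[i]->port = pcap_q->port_id;

		/* enqueue packet back on ring to allow infinite rx */
		rte_ring_enqueue(pcap_q->pkts, pcap_buf);
	}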

> +
> +		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *),
> +				rte_pktmbuf_mtod(pcap_buf, void *),
> +				pcap_buf->data_len);
> +		bufs[i]->data_len = pcap_buf->data_len;
> +		bufs[i]->pkt_len = pcap_buf->pkt_len;
> +		bufs[i]->port = pcap_q->port_id;
> +
> +		/* enqueue packet back on ring to allow infinite rx */
> +		rte_ring_enqueue(pcap_q->pkts, pcap_buf);
> +	}
> +
> +	pcap_q->rx_stat.pkts += i;

For consistency, can you please update "pcap_q->rx_stat.bytes" too?
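
Something like this untested sketch, with a new "rx_bytes" local declared at
the top of the function:

	uint32_t rx_bytes = 0;	/* at the top of eth_pcap_rx_infinite() */

	/* inside the copy loop */
	rx_bytes += pcap_buf->data_len;

	/* after the loop, next to the pkts update */
	pcap_q->rx_stat.pkts += i;
	pcap_q->rx_stat.bytes += rx_bytes;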

> +
> +	return i;
> +}
> +
>  static uint16_t
>  eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  {

<...>

> @@ -447,6 +509,24 @@ open_single_rx_pcap(const char *pcap_filename, pcap_t **pcap)
>  	return 0;
>  }
>  
> +static uint64_t
> +count_packets_in_pcaps(pcap_t **pcap, struct pcap_rx_queue *pcap_q)
> +{
> +	const u_char *packet;
> +	struct pcap_pkthdr header;
> +	uint64_t pcap_pkt_count;

The compiler is complaining about 'pcap_pkt_count' being used uninitialized; it should be initialized to 0.

> +
> +	while ((packet = pcap_next(*pcap, &header)))
> +		pcap_pkt_count++;

It seems there is no quicker way to get the number of packets from a pcap
file; if anyone knows a way, comments are welcome.

> +
> +	/* the pcap is reopened so it can be used as normal later */
> +	pcap_close(*pcap);
> +	*pcap = NULL;
> +	open_single_rx_pcap(pcap_q->name, pcap);
> +
> +	return pcap_pkt_count;
> +}
> +
>  static int
>  eth_dev_start(struct rte_eth_dev *dev)
>  {

<...>

> @@ -671,6 +752,49 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  	pcap_q->queue_id = rx_queue_id;
>  	dev->data->rx_queues[rx_queue_id] = pcap_q;
>  
> +	if (internals->infinite_rx) {
> +		struct pmd_process_private *pp;
> +		char ring_name[NAME_MAX];
> +		uint16_t num_rx = 0;
> +		static uint32_t ring_number;
> +		uint64_t pcap_pkt_count = 0;
> +		pcap_t **pcap;
> +
> +		pp = rte_eth_devices[pcap_q->port_id].process_private;
> +		pcap = &pp->rx_pcap[pcap_q->queue_id];
> +
> +		if (unlikely(*pcap == NULL))
> +			rte_panic("no packets in pcap to fill ring with\n");
> +
> +		pcap_pkt_count = count_packets_in_pcaps(pcap, pcap_q);
> +
> +		snprintf(ring_name, sizeof(ring_name), "PCAP_RING%" PRIu16,
> +				ring_number);
> +
> +		pcap_q->pkts = rte_ring_create(ring_name,
> +				rte_align64pow2(pcap_pkt_count + 1), 0,
> +				RING_F_SP_ENQ | RING_F_SC_DEQ);
> +		ring_number++;
> +		if (!pcap_q->pkts)
> +			rte_panic("failed to alloc ring\n");
> +
> +
> +		struct rte_mbuf *bufs[pcap_pkt_count];

Can you please move the declaration to the beginning of the function?

> +		if (rte_pktmbuf_alloc_bulk(mb_pool, bufs, pcap_pkt_count) != 0)
> +			return 0;

Should this return an error?
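
An untested sketch; the ring created above would also need to be released:

	if (rte_pktmbuf_alloc_bulk(mb_pool, bufs, pcap_pkt_count) != 0) {
		rte_ring_free(pcap_q->pkts);
		pcap_q->pkts = NULL;
		return -ENOMEM;
	}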

> +
> +		num_rx = eth_pcap_rx(pcap_q, bufs, pcap_pkt_count);
> +
> +		/* check for multiseg mbufs */
> +		for (i = 0; i < num_rx; i++) {
> +			if (bufs[i]->nb_segs != 1)
> +				rte_panic("jumbo frames are not supported in infinite rx mode\n");

This needs to be replaced with returning an error, and please remember to
clean up in that case.
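
An untested sketch of the error path, assuming the PMD_LOG macro already used
elsewhere in the driver and an extra loop counter 'j':

	/* check for multiseg mbufs */
	for (i = 0; i < num_rx; i++) {
		if (bufs[i]->nb_segs != 1) {
			PMD_LOG(ERR,
				"jumbo frames not supported in infinite rx mode");
			/* free all bulk-allocated mbufs and the ring */
			for (j = 0; j < num_rx; j++)
				rte_pktmbuf_free(bufs[j]);
			rte_ring_free(pcap_q->pkts);
			pcap_q->pkts = NULL;
			return -EINVAL;
		}
	}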

> +		}
> +
> +		rte_ring_enqueue_bulk(pcap_q->pkts, (void * const *)bufs,
> +				num_rx, NULL);
> +	}
> +
>  	return 0;
>  }
>  
<...>

> @@ -1132,9 +1285,18 @@ eth_from_pcaps(struct rte_vdev_device *vdev,
>  		}
>  	}
>  
> -	eth_dev->rx_pkt_burst = eth_pcap_rx;
> +	/* assign rx ops */
> +	if (infinite_rx) {
> +		eth_dev->rx_pkt_burst = eth_pcap_rx_infinite;
> +		internals->infinite_rx = infinite_rx;

Since "infinite_rx" can be 0 or 1, isn't it better to set
"internals->infinite_rx" outside of this check? Functionally it will be the
same (since "internals" is all zeros by default), but logically it is better I
think.
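
i.e. a sketch:

	internals->infinite_rx = infinite_rx;

	/* assign rx ops */
	if (infinite_rx)
		eth_dev->rx_pkt_burst = eth_pcap_rx_infinite;
	else
		eth_dev->rx_pkt_burst = eth_pcap_rx;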

> +	} else {
> +		eth_dev->rx_pkt_burst = eth_pcap_rx;
> +	}
>  
> -	if (using_dumpers)
> +	/* assign tx ops */
> +	if (infinite_rx)
> +		eth_dev->tx_pkt_burst = eth_tx_drop;

'tx_pkt_burst' shouldn't be set based on the 'infinite_rx' check; I guess the
intention was to check 'tx_drop'.
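
i.e. something like this sketch, assuming 'tx_drop' is made available in this
function:

	/* assign tx ops */
	if (tx_drop)
		eth_dev->tx_pkt_burst = eth_tx_drop;
	else if (using_dumpers)
		eth_dev->tx_pkt_burst = eth_pcap_tx_dumper;
	else
		eth_dev->tx_pkt_burst = eth_pcap_tx;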

> +	else if (using_dumpers)
>  		eth_dev->tx_pkt_burst = eth_pcap_tx_dumper;
>  	else
>  		eth_dev->tx_pkt_burst = eth_pcap_tx;

<...>

> @@ -1217,6 +1380,17 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
>  	pcaps.num_of_queue = 0;
>  
>  	if (is_rx_pcap) {
> +		/*
> +		 * We check whether we want to infinitely rx the pcap file
> +		 */
> +		if (rte_kvargs_count(kvlist, ETH_PCAP_INFINITE_RX_ARG) == 1) {

What do you think about printing a warning if the user provides the value more than once?
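
An untested sketch, assuming the driver's PMD_LOG macro:

	if (rte_kvargs_count(kvlist, ETH_PCAP_INFINITE_RX_ARG) > 1)
		PMD_LOG(WARNING,
			"infinite_rx devarg provided more than once, only the first one will be used");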

> +			ret = rte_kvargs_process(kvlist,
> +					ETH_PCAP_INFINITE_RX_ARG,
> +					&get_infinite_rx_arg, &infinite_rx);
> +			if (ret < 0)
> +				goto free_kvlist;
> +		}
> +
>  		ret = rte_kvargs_process(kvlist, ETH_PCAP_RX_PCAP_ARG,
>  				&open_rx_pcap, &pcaps);
>  	} else {
> @@ -1228,18 +1402,30 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
>  		goto free_kvlist;
>  
>  	/*
> -	 * We check whether we want to open a TX stream to a real NIC or a
> -	 * pcap file
> +	 * We check whether we want to open a TX stream to a real NIC,
> +	 * a pcap file, or drop packets on tx
>  	 */
>  	is_tx_pcap = rte_kvargs_count(kvlist, ETH_PCAP_TX_PCAP_ARG) ? 1 : 0;
>  	dumpers.num_of_queue = 0;
>  
> -	if (is_tx_pcap)
> +	tx_drop = rte_kvargs_count(kvlist, ETH_PCAP_TX_DROP_ARG) ? 1 : 0;

This line can be dropped; the block below already gets the 'tx_drop' value, overwriting this one.

> +	if (rte_kvargs_count(kvlist, ETH_PCAP_TX_DROP_ARG) == 1) {
> +		ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_DROP_ARG,
> +				&get_tx_drop_arg, &tx_drop);
> +		if (ret < 0)
> +			goto free_kvlist;
> +	}
> +
> +	if (tx_drop) {
> +		/* add dummy queue which counts dropped packets */
> +		ret = add_queue(&dumpers, "dummy", "tx_drop", NULL, NULL);

This will break the multiple-queue use case. Instead of adding a single queue,
we need to find a way to add as many queues as the user wants, all of which
will just ignore the packets.
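
A hypothetical sketch; 'nb_tx_queues' is made up here and would need to come
from somewhere, e.g. a new devarg or the rx side:

	if (tx_drop) {
		/* one dummy queue per requested tx queue, each only
		 * counting the dropped packets
		 * ('nb_tx_queues' is hypothetical)
		 */
		for (i = 0; i < nb_tx_queues; i++) {
			ret = add_queue(&dumpers, "dummy", "tx_drop",
					NULL, NULL);
			if (ret < 0)
				goto free_kvlist;
		}
	}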

> +	} else if (is_tx_pcap) {
>  		ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_PCAP_ARG,
>  				&open_tx_pcap, &dumpers);
> -	else
> +	} else {
>  		ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_IFACE_ARG,
>  				&open_tx_iface, &dumpers);
> +	}
>  
>  	if (ret < 0)
>  		goto free_kvlist;

<...>