To: "Zhang, Qi Z" <qi.z.zhang@intel.com>, Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "Lin, Xueqin" <xueqin.lin@intel.com>
References: <20181105210823.38757-1-qi.z.zhang@intel.com>
 <20181112165109.33296-1-qi.z.zhang@intel.com>
 <861c43b2-a082-fdf8-61b9-2df1f4ed919c@intel.com> <4093831.1gQSVtVNRZ@xps>
 <039ED4275CED7440929022BC67E70611532E343C@SHSMSX103.ccr.corp.intel.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <fe2066f4-00f1-15d1-f261-9d56129785af@intel.com>
Date: Tue, 13 Nov 2018 18:43:43 +0000
In-Reply-To: <039ED4275CED7440929022BC67E70611532E343C@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] net/pcap: enable data path on secondary

On 11/13/2018 6:27 PM, Zhang, Qi Z wrote:
> First, apologies for doing this in a rush; I was under some pressure to make pdump work in 18.11.
> I agree there are a lot of things to improve, but the strategy here is to make it work quietly and not break anything else :)
> Adding some comments inline.
> 
>> -----Original Message-----
>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>> Sent: Tuesday, November 13, 2018 9:15 AM
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: dev@dpdk.org; Lin, Xueqin <xueqin.lin@intel.com>
>> Subject: Re: [PATCH v2] net/pcap: enable data path on secondary
>>
>> Just a quick comment:
>> There are probably some ideas to take from what was done for tap.
> 
>>
>>
>> 13/11/2018 17:56, Ferruh Yigit:
>>> On 11/12/2018 4:51 PM, Qi Zhang wrote:
>>>> Private vdev on secondary is never supported by the new shared
>>>> device mode, but pdump still relies on a private pcap PMD on secondary.
>>>> The patch enables the pcap PMD's data path on secondary so that pdump
>>>> can work as usual.
>>>
>>> It would be great if you described the problem a little more.
>>>
>>> Private vdev was the previous approach, from when pdump was developed;
>>> now, with the shared device mode on virtual devices, the pcap data path
>>> in the secondary is not working.
>>>
>>> What exactly is not working (please correct me if I am wrong):
>>> When the secondary adds a virtual device, the related data is
>>> transferred to the primary, and the primary creates the device and
>>> shares it back with the secondary.
>>> When the pcap device is created in the primary, the pcap handlers
>>> (pointers) are process-local and they are not valid for the secondary
>>> process. This breaks the secondary.
>>>
>>> So we can't directly share the pcap handlers, but need to create a new
>>> set of handlers for the secondary; that is what you are doing in this
>>> patch, although I have some comments, please check below.
>>>
>>> Since there is a single storage for the pcap handlers, shared by
>>> primary and secondary, and the handlers themselves can't be shared,
>>> you can't make both the primary and secondary data paths work. Also,
>>> freeing the handlers is another concern. What is needed is
>>> `rte_eth_dev->process_private`, which has been added in this release.
> 
> You are right, we should prevent the handlers opened in the primary from
> being corrupted during probe in the secondary.
> Now I see this problem in pcap. As an example, internals->tx_queue[i].dumper/pcap
> is shared but will be overwritten by the secondary; we should fix these by
> using process_private.
> 
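
Exactly. As a rough, untested sketch of what I have in mind (the struct and
field names below are invented for illustration; only `process_private`
itself is the real ethdev field):

	/* per-process pcap handles; allocated with plain calloc() so each
	 * process gets its own copy instead of a pointer living in shared
	 * hugepage memory */
	struct pmd_process_private {
		pcap_t *rx_pcap[RTE_PMD_PCAP_MAX_QUEUES];
		pcap_t *tx_pcap[RTE_PMD_PCAP_MAX_QUEUES];
		pcap_dumper_t *tx_dumper[RTE_PMD_PCAP_MAX_QUEUES];
	};

	/* in probe, executed by primary and secondary alike: */
	struct pmd_process_private *pp = calloc(1, sizeof(*pp));

	if (pp == NULL)
		return -ENOMEM;
	/* ... open this process's own pcap/dumper handles into pp ... */
	eth_dev->process_private = pp;

	/* the data path then takes the handle from process_private, e.g.: */
	pcap_t *pcap = ((struct pmd_process_private *)
			dev->process_private)->rx_pcap[queue_id];
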
>>>
>>>>
>>>> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>>>> Tested-by: Yufeng Mo <yufengx.mo@intel.com>
>>>
>>> <...>
>>>
>>>> @@ -934,6 +935,10 @@ pmd_init_internals(struct rte_vdev_device *vdev,
>>>>  	 */
>>>>  	(*eth_dev)->dev_ops = &ops;
>>>>
>>>> +	/* store a copy of devargs for secondary process */
>>>> +	strlcpy(internals->devargs, rte_vdev_device_args(vdev),
>>>> +			ETH_PCAP_ARG_MAXLEN);
>>>
>>> Why do we need to cover this at the PMD level?
>>>
>>> Why isn't the secondary probe getting devargs? Can't we fix this at the
>>> EAL level?
>>> It can be OK to work around it in the PMD given the timing of the
>>> release, but for the long term I think this should be fixed in EAL.
> 
> Yes, this is a workaround for a quick fix.
> Ideally the secondary process should not need to care about devargs at
> all; it should just attach.
> And it's better to parse devargs in only one process (the primary); the
> parsed result could be stored as an intermediate result in shared memory
> (for example, internal->nb_rx_queue_required) so the secondary process
> doesn't need to parse it again.
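
Agreed, something along these lines could work (a rough sketch; the field
names are invented, the given part is only that the pcap PMD allocates
dev_private from shared memory):

	/* dev_private is allocated from shared memory, so values parsed
	 * once by the primary are visible to the secondary on attach */
	struct pmd_internals {
		/* ... existing fields ... */
		uint16_t nb_rx_queue_required; /* filled by primary from devargs */
		uint16_t nb_tx_queue_required;
	};

	/* primary, at the end of probe: */
	internals->nb_rx_queue_required = pcaps.num_of_queue;
	internals->nb_tx_queue_required = dumpers.num_of_queue;

	/* secondary, in probe: skip rte_kvargs_parse() entirely and read
	 * the queue counts back from the attached device instead */
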
>>>
>>> <...>
>>>
>>>> @@ -1122,23 +1126,37 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
>>>>  	start_cycles = rte_get_timer_cycles();
>>>>  	hz = rte_get_timer_hz();
>>>>
>>>> -	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
>>>> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>>>> +		kvlist = rte_kvargs_parse(rte_vdev_device_args(dev),
>>>> +				valid_arguments);
>>>> +		if (kvlist == NULL)
>>>> +			return -EINVAL;
>>>> +		if (rte_kvargs_count(kvlist, ETH_PCAP_IFACE_ARG) == 1)
>>>> +			nb_rx_queue = 1;
>>>> +		else
>>>> +			nb_rx_queue =
>>>> +				rte_kvargs_count(kvlist,
>>>> +					ETH_PCAP_RX_PCAP_ARG) ? 1 : 0;
>>>> +		nb_tx_queue = 1;
>>>
>>> This part is wrong. The pcap PMD supports multiple queues; you can't
>>> hardcode the number of queues. Also, why does it ignore the `rx_iface`
>>> argument? This is just hacking the driver for a specific use case,
>>> breaking others.
> 
> Previously, nb_tx_queue and nb_rx_queue were decided by pcaps.num_of_queue
> and dumpers.num_of_queue.
> I just can't figure out a way we can have more than one queue during
> probe; look at the code below.
> 
> if ETH_PCAP_IFACE_ARG:
> 
>         pcaps.num_of_queue = 1;
>         dumpers.num_of_queue = 1;
> 
> else:
>         is_rx_pcap = rte_kvargs_count(kvlist, ETH_PCAP_RX_PCAP_ARG) ? 1 : 0;
>         pcaps.num_of_queue = 0;
> 
>         if (is_rx_pcap) {
>                 ret = rte_kvargs_process(kvlist, ETH_PCAP_RX_PCAP_ARG,
>                                 &open_rx_pcap, &pcaps);
>                 // pcaps.num_of_queue = 1;
>         } else {
>                 ret = rte_kvargs_process(kvlist, NULL,
>                                 &rx_iface_args_process, &pcaps);
>                 // pcaps.num_of_queue = 0;
>         }
> 
>         is_tx_pcap = rte_kvargs_count(kvlist, ETH_PCAP_TX_PCAP_ARG) ? 1 : 0;
>         dumpers.num_of_queue = 0;
> 
>         if (is_tx_pcap)
>                 ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_PCAP_ARG,
>                                 &open_tx_pcap, &dumpers);
>                 // dumpers.num_of_queue = 1;
>         else
>                 ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_IFACE_ARG,
>                                 &open_tx_iface, &dumpers);
>                 // dumpers.num_of_queue = 1;
> 
> That's the same logic I applied. Did I miss something? Could you explain
> this a bit more?

ETH_PCAP_IFACE_ARG is the "iface=xxx" usage: both Rx and Tx use the same
interface, and because of an implementation limitation it supports only one
queue.

rx_pcap, rx_iface and rx_iface_in support multiple queues, by providing them
multiple times. For example, "rx_pcap=q1.pcap,rx_pcap=q2.pcap,rx_pcap=q3.pcap"
will create 3 Rx queues, each having its own .pcap file. The same is valid
for Tx.
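
For example, with testpmd (the file names and core list here are made up for
illustration):

	testpmd -l 0-3 \
	    --vdev 'net_pcap0,rx_pcap=q1.pcap,rx_pcap=q2.pcap,rx_pcap=q3.pcap,tx_pcap=out.pcap' \
	    -- -i --rxq=3 --txq=1

The three "rx_pcap=" arguments give the vdev 3 Rx queues; the single
"tx_pcap=" gives it 1 Tx queue.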

rte_kvargs_process() calls the callback function once per provided argument,
so if an argument is provided multiple times, it will call the same callback
multiple times; that is why 'num_of_queue' is increased in the callback
functions.
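
As a simplified sketch of that pattern (abbreviated, not the actual driver
code; the struct layout is illustrative and error handling is trimmed):

	struct pmd_devargs {
		unsigned int num_of_queue;
		struct {
			pcap_t *pcap;
			pcap_dumper_t *dumper;
			const char *name;
		} queue[RTE_PMD_PCAP_MAX_QUEUES];
	};

	/* invoked by rte_kvargs_process() once per "rx_pcap=..." entry,
	 * so the queue count grows with each repetition of the argument */
	static int
	open_rx_pcap(const char *key __rte_unused, const char *value,
			void *extra_args)
	{
		struct pmd_devargs *pcaps = extra_args;
		pcap_t *pcap = NULL;

		if (open_single_rx_pcap(value, &pcap) < 0)
			return -1;

		pcaps->queue[pcaps->num_of_queue].pcap = pcap;
		pcaps->queue[pcaps->num_of_queue].name = value;
		pcaps->num_of_queue++;

		return 0;
	}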

At a high level, pmd_pcap_probe() first parses the arguments and creates the
pcap handlers based on them, and then, as a last step, creates the ethdev
using this information. I am for keeping this logic; doing something
different for the secondary can cause issues in edge cases that are not
obvious at first look.

> 
> Thanks
> Qi
> 
>>>
>>>> +		ret = pmd_init_internals(dev, nb_rx_queue,
>>>> +				nb_tx_queue, &eth_dev);
>>>
>>> I think it is not required to move pmd_init_internals() here.
>>> This can be done more simply; I will send a draft patch as a reply to
>>> this mail as a possible solution.
>>> But again, that can't be the final solution; we need to use
>>> `process_private`.
>>>
>>> <...>
>>>