From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Ido Goshen <Ido@cgstowernetworks.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] net/pcap: rx_iface_in stream type support
Date: Wed, 20 Jun 2018 19:04:43 +0100 [thread overview]
Message-ID: <b5234c8b-7b31-d383-24b8-60a7afc61f07@intel.com> (raw)
In-Reply-To: <AM5PR0901MB14275F33D43464861188F427D6710@AM5PR0901MB1427.eurprd09.prod.outlook.com>
On 6/18/2018 10:49 AM, Ido Goshen wrote:
> I'm really not sure that just setting pcaps/dumpers.num_of_queue without actually creating multiple queues is good enough.
> If one uses N queues there is a good chance he is using N cores.
> To be consistent with DPDK behavior it should be safe to concurrently transmit to different queues.
> If pcap_sendpacket() to the same pcap_t handle is not thread-safe, then it will require a pcap_open_live() for each queue.
> If using multiple pcap_open_live() calls, then it will cause the Rx out-direction problem again.
Please do not top-post; this thread has become very hard to follow.
I am not suggesting just increasing num_of_queue; what I was suggesting is
simply to add the capability to create multiple queues via the "iface" devarg, that is all.
This should solve your problem because with the "iface" devarg I don't get Tx
packets back in my Rx handler. The downside is that queues are created as
pairs, so you can't get 4 Rx / 1 Tx queues, etc.
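
As a rough illustration of what that could look like (hypothetical, since
multi-queue "iface" is not implemented yet), the devarg would simply be
repeated once per queue pair, mirroring how rx_iface/tx_iface are repeated
today:

    myapp -c 7 -n1 --no-huge \
        --vdev=eth_pcap0,iface=eth0,iface=eth0,iface=eth0 \
        -- -p 1

which would give 3 Rx and 3 Tx queues on eth0.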
So I agree with your suggestion, but there were some comments on your initial
patch [1]; can you please address them and send a new version?
[1]
https://mails.dpdk.org/archives/dev/2018-June/103476.html
Thanks.
>
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, June 18, 2018 11:13 AM
> To: Ido Goshen <Ido@cgstowernetworks.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>
> On 6/16/2018 10:27 AM, Ido Goshen wrote:
>> Is pcap_sendpacket() to the same pcap_t handle thread-safe? I couldn't find a clear answer, so I'd rather assume not.
>> If it's not thread-safe then supporting multiple "iface"s will require multiple pcap_open_live() calls, and we are back in the same place.
>
> I am not suggesting extra multi thread safety.
>
> Currently in "iface" path, following is hardcoded:
> pcaps.num_of_queue = 1;
> dumpers.num_of_queue = 1;
>
> It should be possible to update that path to support multiple queues while using the "iface" devarg.
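
To make that concrete, a minimal sketch of the kind of change meant here
(illustrative only, not a tested patch; it assumes the driver's existing
kvargs handling and uses the literal "iface" key):

    #include <stdint.h>
    #include <rte_kvargs.h>

    /* Derive the queue count from how many times the "iface" devarg was
     * given, instead of hardcoding a single queue pair. */
    static uint16_t
    iface_queue_count(struct rte_kvargs *kvlist)
    {
            return (uint16_t)rte_kvargs_count(kvlist, "iface");
    }

The open path would then fill one pcaps/dumpers slot per occurrence and set
num_of_queue from that count rather than assigning index 0 only.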
>
>>
>>>> I am not sure the existing behavior is intentional, which is capturing sent packets on the Rx pcap handle for the same interface.
>>>> Are you aware of any use case for the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
>> Even if unintentional, I find it very useful for testing, as this way it's very easy to send traffic to the app with tcpreplay on the same host the app is running on.
>> Traffic sent with tcpreplay is in the out direction, which will not be captured if PCAP_D_IN is set.
>> If PCAP_D_IN is the only option then it will require an external device (or some networking trick) to send packets to the app.
>> So, I'd say it is good for testing but less good for real functionality.
>
> OK to keep it if there is a use case.
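
For reference, keeping both modes is cheap: the restriction under discussion
boils down to a single libpcap call made right after the capture handle is
opened. A minimal sketch (error handling trimmed, illustrative helper, not
the actual PMD code):

    #include <pcap/pcap.h>

    static pcap_t *
    open_rx_in_only(const char *iface)
    {
            char errbuf[PCAP_ERRBUF_SIZE];
            pcap_t *pcap;

            /* Open a live capture handle, as the PMD does for rx_iface. */
            pcap = pcap_open_live(iface, 65535, 1 /* promisc */,
                                  1 /* to_ms */, errbuf);
            if (pcap == NULL)
                    return NULL;

            /* Capture only packets received by the interface, so packets
             * the application itself transmits on the same iface do not
             * show up again on Rx. */
            if (pcap_setdirection(pcap, PCAP_D_IN) < 0) {
                    pcap_close(pcap);
                    return NULL;
            }
            return pcap;
    }

With something like this, rx_iface could keep today's sniffer-like behavior
for tcpreplay-style testing, while rx_iface_in (or "iface" by default) would
set PCAP_D_IN.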
>
>>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Friday, June 15, 2018 3:53 PM
>> To: Ido Goshen <Ido@cgstowernetworks.com>
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>
>> On 6/14/2018 9:44 PM, Ido Goshen wrote:
>>> I think we are starting to mix two things. One is how to configure a
>>> pcap eth dev with multiple queues, and I totally agree it would have been nicer to just say something like "max_tx_queues=N" instead of needing to write "tx_iface" N times, but as it was already supported that way (for whatever reason) I wasn't trying to enhance or change it.
>>> The other issue is the pcap direction API, which I was trying to expose to users of the dpdk pcap device.
>>
>> Hi Ido,
>>
>> Assuming "iface" argument solves the direction issue, I am suggestion adding multiqueue support to "iface" argument as a solution to your problem.
>>
>> I am not suggesting new arguments like "max_tx_queues=N"; "iface" can be used the same way rx_iface/tx_iface are used now: provide it multiple times.
>>
>>> Refer to https://www.tcpdump.org/manpages/pcap_setdirection.3pcap.txt
>>> or man tcpdump for the -P/--direction in|out|inout option. Actually, I
>>> think a more realistic emulation of a physical (non-virtual) device
>>> would be to capture only the incoming direction (set PCAP_D_IN);
>>> again, the existing behavior is very useful too, and I didn't try to
>>> change or eliminate it but just to add an additional stream type option.
>>
>> I am not sure the existing behavior is intentional, which is capturing sent packets on the Rx pcap handle for the same interface.
>> Are you aware of any use case for the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
>>
>>>
>>> -----Original Message-----
>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Sent: Thursday, June 14, 2018 9:09 PM
>>> To: Ido Goshen <Ido@cgstowernetworks.com>
>>> Cc: dev@dpdk.org
>>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>>
>>> On 6/14/2018 6:14 PM, Ido Goshen wrote:
>>>> I use "rx_iface","tx_iface" (and not just "iface") in order to have
>>>> multiple TX queues I just gave a simplified setting with 1 queue My
>>>> app does a full mesh between the ports (not fixed pairs like l2fwd)
>>>> so all the forwarding lcores can tx to the same port simultaneously and as DPDK docs say:
>>>> "Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance."
>>>> For example if I have 3 ports handled by 3 cores it'll be
>>>> myapp -c 7 -n1 --no-huge \
>>>> --vdev=eth_pcap0,rx_iface=eth0,tx_iface=eth0,tx_iface=eth0,tx_iface=eth0 \
>>>> --vdev=eth_pcap1,rx_iface=eth1,tx_iface=eth1,tx_iface=eth1,tx_iface=eth1 \
>>>> --vdev=eth_pcap2,rx_iface=eth2,tx_iface=eth2,tx_iface=eth2,tx_iface=eth2 \
>>>> -- -p 7
>>>> Is there another way to achieve multiple queues in pcap vdev?
>>>
>>> If you want to use multiple cores you need multiple queues, as you said, and the above is the way to create multiple queues for pcap.
>>>
>>> Currently "iface" argument only supports single interface in a hardcoded way, but technically it should be possible to update it to support multiple queue.
>>>
>>> So if "iface" arguments works for you, it can be better to add multi queue support to "iface" instead of introducing a new device argument.
>>>
>>>>
>>>> I do see that using "iface" behaves differently - I'll try to
>>>> investigate why
>>>
>>> pcap_open_live() is called for both arguments; for the "rx_iface"/"tx_iface" pair it is called twice, once for each. Not sure if the pcap library returns the same handle or two different handles in this case, since the iface name is the same.
>>> For the "iface" argument pcap_open_live() is called once, so we have a single handle for both Rx & Tx. This may be the difference.
>>>
>>>> And still, even when using "iface", I also see packets that are
>>>> transmitted out of eth1 (e.g. tcpreplay -i eth1 packets.pcap) and
>>>> not only packets that are received (e.g. ping from the far end to
>>>> the eth0 ip).
>>>
>>> This is interesting. I tried with an external packet generator, and "iface" was working as expected for me.
>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Sent: Wednesday, June 13, 2018 1:57 PM
>>>> To: Ido Goshen <Ido@cgstowernetworks.com>
>>>> Cc: dev@dpdk.org
>>>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>>>
>>>> On 6/5/2018 6:10 PM, Ido Goshen wrote:
>>>>> The problem is that if a dpdk app uses the same iface(s) both as rx_iface and tx_iface, then it will receive back the packets it sends.
>>>>> If my app sends a packet to portid=X with rte_eth_tx_burst() then I
>>>>> wouldn't expect to receive it back via rte_eth_rx_burst() for that same portid=X (assuming of course there's no external loopback). This comes from the default nature of pcap, which like a sniffer captures both the incoming and outgoing directions.
>>>>> The patch provides an option to limit the pcap rx_iface to only incoming traffic, which is more like a real (non-pcap) dpdk device.
>>>>>
>>>>> for example:
>>>>> when using existing *rx_iface*
>>>>> l2fwd -c 3 -n1 --no-huge
>>>>> --vdev=eth_pcap0,rx_iface=eth1,tx_iface=eth1
>>>>> --vdev=eth_pcap1,rx_iface=dummy0,tx_iface=dummy0 -- -p 3 -T 1
>>>>> sending only a single packet into eth1 will end in an infinite loop.
>>>>
>>>> If you are using the same interface for both Rx & Tx, why not use the "iface=xxx"
>>>> argument? Can you please test with the following:
>>>>
>>>> l2fwd -c 3 -n1 --no-huge --vdev=eth_pcap0,iface=eth1
>>>> --vdev=eth_pcap1,iface=dummy0 -- -p 3 -T 1
>>>>
>>>>
>>>> I can't reproduce the issue with the above command.
>>>>
>>>> Thanks,
>>>> ferruh
>>>>
>>>
>>
>
Thread overview: 12+ messages
2018-06-05 9:39 ido goshen
2018-06-05 13:26 ` Ferruh Yigit
2018-06-05 17:10 ` Ido Goshen
2018-06-13 10:57 ` Ferruh Yigit
2018-06-14 17:14 ` Ido Goshen
2018-06-14 18:09 ` Ferruh Yigit
2018-06-14 20:44 ` Ido Goshen
2018-06-15 12:53 ` Ferruh Yigit
2018-06-16 9:27 ` Ido Goshen
2018-06-18 8:13 ` Ferruh Yigit
2018-06-18 9:49 ` Ido Goshen
2018-06-20 18:04 ` Ferruh Yigit [this message]