From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ferruh Yigit
To: Ido Goshen
Cc: "dev@dpdk.org"
References: <1528191584-46149-1-git-send-email-ido@cgstowernetworks.com> <7f089b9b-c5da-5358-08c7-38079f5e38b3@intel.com> <47bb9ab0-eee9-00cc-5e57-3cc79efcd417@intel.com> <4c6b45cf-f8e2-436c-c06c-58d9ff2cd0db@intel.com>
Date: Wed, 20 Jun 2018 19:04:43 +0100
Subject: Re: [dpdk-dev] [PATCH v2] net/pcap: rx_iface_in stream type support
List-Id: DPDK patches and discussions

On 6/18/2018 10:49 AM, Ido Goshen wrote:
> I'm really not sure that just setting pcaps/dumpers.num_of_queue without actually creating multiple queues is good enough.
> If one uses N queues, there is a good chance he is using N cores.
> To be consistent with DPDK behavior, it should be safe to concurrently tx to different queues.
> If pcap_sendpacket() to the same pcap_t handle is not thread-safe, then it will require a pcap_open_live() for each queue.
> If using multiple pcap_open_live()s, then it will cause the rx out-direction problem again.

Please do not reply top-post, this thread has become very hard to follow.

I am not suggesting just increasing num_of_queue; what I was suggesting is to add the capability to create multiple queues by using the "iface" devarg, that is all. This should solve your problem because with the "iface" devarg I don't get Tx packets back in my Rx handler.

This has the downside that queues are created in pairs; you can't get 4 Rx / 1 Tx queues etc.

So I agree with your suggestion, but there were some comments on your initial patch [1]; can you please address them and send a new version?

[1] https://mails.dpdk.org/archives/dev/2018-June/103476.html

Thanks.

>
> -----Original Message-----
> From: Ferruh Yigit
> Sent: Monday, June 18, 2018 11:13 AM
> To: Ido Goshen
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>
> On 6/16/2018 10:27 AM, Ido Goshen wrote:
>> Is pcap_sendpacket() to the same pcap_t handle thread-safe? I couldn't find a clear answer, so I'd rather assume not.
>> If it's not thread-safe, then supporting multiple "iface"s will require multiple pcap_open_live()s and we are back in the same place.
>
> I am not suggesting extra multi-thread safety.
>
> Currently in the "iface" path, the following is hardcoded:
> pcaps.num_of_queue = 1;
> dumpers.num_of_queue = 1;
>
> It can be possible to update that path to support multiple queues while using the "iface" devarg.
>
>>
>>>> I am not sure the existing behavior is intentional, which is capturing sent packets in the Rx pcap handler of the same interface.
>>>> Are you aware of any use case of the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
>> Even if unintentional, I find it very useful for testing, as this way it's very easy to send traffic to the app with tcpreplay on the same host the app is running on.
>> tcpreplay sends in the out direction, which will not be captured if PCAP_D_IN is set.
>> If PCAP_D_IN is the only option, then it will require an external device (or some networking trick) to send packets to the app.
>> So, I'd say it is good for testing but less good for real functionality.
>
> OK to keep it if there is a use case.
>
>>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Friday, June 15, 2018 3:53 PM
>> To: Ido Goshen
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>
>> On 6/14/2018 9:44 PM, Ido Goshen wrote:
>>> I think we are starting to mix two things. One is how to configure a pcap eth dev with multiple queues, and I totally agree it would have been nicer to just say something like "max_tx_queues=N" instead of needing to write "tx_iface" N times, but as it was already supported in that way (for any reason?) I wasn't trying to enhance or change it.
>>> The other issue is the pcap direction API, which I was trying to expose to users of the dpdk pcap device.
>>
>> Hi Ido,
>>
>> Assuming the "iface" argument solves the direction issue, I am suggesting adding multiqueue support to the "iface" argument as a solution to your problem.
>>
>> I am not suggesting new arguments like "max_tx_queues=N"; "iface" can be used the same way rx_iface/tx_iface are used now, provided multiple times.
>>
>>> Refer to https://www.tcpdump.org/manpages/pcap_setdirection.3pcap.txt
>>> or man tcpdump for the -P/--direction in|out|inout option. Actually I think a more realistic emulation of a physical (non-virtual) device would be to capture only the incoming direction (set PCAP_D_IN); again, the existing behavior is very useful too, and I didn't try to change or eliminate it, just to add an additional stream type option.
>>
>> I am not sure the existing behavior is intentional, which is capturing sent packets in the Rx pcap handler of the same interface.
>> Are you aware of any use case of the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
>>
>>>
>>> -----Original Message-----
>>> From: Ferruh Yigit
>>> Sent: Thursday, June 14, 2018 9:09 PM
>>> To: Ido Goshen
>>> Cc: dev@dpdk.org
>>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>>
>>> On 6/14/2018 6:14 PM, Ido Goshen wrote:
>>>> I use "rx_iface","tx_iface" (and not just "iface") in order to have multiple TX queues; I just gave a simplified setting with 1 queue. My app does a full mesh between the ports (not fixed pairs like l2fwd), so all the forwarding lcores can tx to the same port simultaneously, and as the DPDK docs say:
>>>> "Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance."
>>>> For example, if I have 3 ports handled by 3 cores it'll be
>>>> myapp -c 7 -n1 --no-huge \
>>>> --vdev=eth_pcap0,rx_iface=eth0,tx_iface=eth0,tx_iface=eth0,tx_iface=eth0 \
>>>> --vdev=eth_pcap1,rx_iface=eth1,tx_iface=eth1,tx_iface=eth1,tx_iface=eth1 \
>>>> --vdev=eth_pcap2,rx_iface=eth2,tx_iface=eth2,tx_iface=eth2,tx_iface=eth2 \
>>>> -- -p 7
>>>> Is there another way to achieve multiple queues in pcap vdev?
>>>
>>> If you want to use multiple cores you need multiple queues, as you said, and above is the way to create multiple queues for pcap.
>>>
>>> Currently the "iface" argument only supports a single interface in a hardcoded way, but technically it should be possible to update it to support multiple queues.
>>>
>>> So if the "iface" argument works for you, it can be better to add multi-queue support to "iface" instead of introducing a new device argument.
>>>
>>>>
>>>> I do see that using "iface" behaves differently - I'll try to investigate why
>>>
>>> pcap_open_live() is called for both arguments; for the "rx_iface/tx_iface" pair it has been called twice, once for each. Not sure if the pcap library returns the same handle or two different handles in this case, since the iface name is the same.
>>> For the "iface" argument pcap_open_live() is called once, so we have a single handle for both Rx & Tx. This may be the difference.
>>>
>>>> And still, even when using "iface", I also see packets that are transmitted out of eth1 (e.g. tcpreplay -i eth1 packets.pcap) and not only packets that are received (e.g. ping from the far end to eth0's ip)
>>>
>>> This is interesting; I have tried with an external packet generator, and "iface" was working as expected for me.
>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit
>>>> Sent: Wednesday, June 13, 2018 1:57 PM
>>>> To: Ido Goshen
>>>> Cc: dev@dpdk.org
>>>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>>>
>>>> On 6/5/2018 6:10 PM, Ido Goshen wrote:
>>>>> The problem is that if a dpdk app uses the same iface(s) both as rx_iface and tx_iface, then it will receive back the packets it sends.
>>>>> If my app sends a packet to portid=X with rte_eth_tx_burst(), then I wouldn't expect to receive it back by rte_eth_rx_burst() for that same portid=X (assuming of course there's no external loopback). This comes from the default nature of pcap, which, like a sniffer, captures both the incoming and outgoing directions.
>>>>> The patch provides an option to limit pcap rx_iface to get only incoming traffic, which is more like a real (non-pcap) dpdk device.
>>>>>
>>>>> for example:
>>>>> when using the existing *rx_iface*
>>>>> l2fwd -c 3 -n1 --no-huge
>>>>> --vdev=eth_pcap0,rx_iface=eth1,tx_iface=eth1
>>>>> --vdev=eth_pcap1,rx_iface=dummy0,tx_iface=dummy0 -- -p 3 -T 1
>>>>> sending only 1 single packet into eth1 will end in an infinite loop -
>>>>
>>>> If you are using the same interface for both Rx & Tx, why not use the "iface=xxx" argument? Can you please test with the following:
>>>>
>>>> l2fwd -c 3 -n1 --no-huge --vdev=eth_pcap0,iface=eth1
>>>> --vdev=eth_pcap1,iface=dummy0 -- -p 3 -T 1
>>>>
>>>> I can't reproduce the issue with the above command.
>>>>
>>>> Thanks,
>>>> ferruh
>>>>
>>>
>>
>