From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Dey, Souvik" <sodey@rbbn.com>
To: Ruinan <huruinan@gmail.com>
Cc: dev@dpdk.org, users@dpdk.org
Date: Mon, 15 Oct 2018 18:52:20 +0000
Subject: Re: [dpdk-users] i40e VF PMD not getting multicast packets
No, trust mode is currently off on the host. I can try with trust mode on too. But I have a doubt: is trust mode mandatory for the VF to receive v6 multicast packets? If yes, then how will this work in OpenStack (i40e VF dpdk) with v6?

--
Regards,
Souvik

From: Ruinan <huruinan@gmail.com>
Sent: Monday, October 15, 2018 2:33 PM
To: Dey, Souvik <sodey@rbbn.com>
Cc: dev@dpdk.org; users@dpdk.org
Subject: Re: [dpdk-users] i40e VF PMD not getting multicast packets

Hi,

Did you enable trust mode on the VF? If trust is off on the host, promiscuous mode can't be enabled from the VM.

Ruinan Hu
ruinan.hu@casa-systems.com
(857) 209-1955

> On Oct 15, 2018, at 14:26, Dey, Souvik <sodey@rbbn.com> wrote:
>
> Hi All,
> I am currently facing issues with receiving multicast IPv6 packets when using the i40evf PMD.
>
> I do see there is a limitation mentioned in the release notes of DPDK:
>
> 16.27. I40e VF may not receive packets in promiscuous mode
> Description: Promiscuous mode is not supported by the DPDK i40e VF driver when using the i40e Linux kernel driver as the host driver.
> Implication: The i40e VF does not receive packets when the destination MAC address is unknown.
> Resolution/Workaround: Use an explicit destination MAC address that matches the VF.
> Affected Environment/Platform: All.
> Driver/Module: Poll Mode Driver (PMD).
>
> Does this mean that multicast promiscuous mode is also not supported? I am currently facing issues with receiving multicast packets for IPv6, which makes IPv6 fail with the i40evf PMD.
> I am currently using DPDK 17.11.2 with CentOS 7.5 and i40e 2.4.6 on the host. I also have rte_eth_allmulticast_enable() set for the port, but I still see that not all multicast packets reach the PMD.
> Also, if promiscuous mode has to be turned off, then how should we add a multicast address to the PF from the VF?
> --
> Regards,
> Souvik

From emmericp@net.in.tum.de Mon Oct 15 21:08:54 2018
From: Paul Emmerich <emmericp@net.in.tum.de>
Date: Mon, 15 Oct 2018 21:08:51 +0200
To: Cliff Burdick
Cc: Andrew Bainbridge, philippb.ontour@gmail.com, users
Subject: Re: [dpdk-users] rte_eth_tx_burst: Can I insert timing gaps
> On 15.10.2018 at 16:29, Cliff Burdick wrote:
>
> My best guess would be that you can send the filler packets to your own MAC address so that the switch will drop it.

A variant of that (with broken CRC checksums; requires patched drivers) yielded the best results in our evaluation linked above. Our implementation for this is here:
https://github.com/emmericp/MoonGen/blob/master/src/crc-rate-limiter.c

An implementation of a busy-wait based rate limiter that is often good enough is here:
https://github.com/emmericp/MoonGen/blob/master/src/software-rate-limiter.cpp#L48

It runs in a separate thread that is fed through an rte_ring, so there is only one tight loop on that core that sleeps and sends out packets.

Paul

> On Mon, Oct 15, 2018, 04:21 Andrew Bainbridge wrote:
>
>> Is the feature you are describing called packet "pacing"? Here's a Mellanox document describing it:
>> https://community.mellanox.com/docs/DOC-2551
>>
>> I grep'ed the DPDK source for "rate_limit" and found rte_eth_set_queue_rate_limit(). Is that the function you need?
>>
>> From my quick grep'ing, it looks to me like this feature isn't supported on Mellanox drivers but is in the ixgbe driver. However, this is all guesswork. I'm not an expert. I'd like to know the real answer!
>>
>> - Andrew
>>
>> -----Original Message-----
>> From: users On Behalf Of Philipp B
>> Sent: 15 October 2018 11:45
>> To: shaklee3@gmail.com
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] rte_eth_tx_burst: Can I insert timing gaps
>>
>> Maybe I should explain with some ASCII art. It's not about sleeping just the remaining period of 20ms. This is exactly what I am doing already, and this works fine.
>>
>> Think of 20ms intervals like this:
>>
>> |XXXXXXXXXXXXXXXXXX_____|XXXXXXXXXXXXXXXXXX_____
>>
>> where '|' is the start of the 20ms interval, X is a packet, and _ is an idle gap of one packet. (Let's just pretend there are 1000s of Xs per interval.)
>>
>> As I said, I let the CPU sleep up to the beginning of the interval ('|'). This beginning of the interval is the only moment where CPU timing controls adapter timing. Then I send out a few 1000 packets. In this phase, I have a few 100 packets buffered by DPDK, so it will not help to sleep on the CPU.
>>
>> The pattern above is what I can easily produce just with an OS sleep, a single buffer pool and rte_eth_tx_burst. What I am looking for is a way to e.g. remove every second packet from that pattern, while keeping the other packets' timing unchanged:
>>
>> |X_X_X_X_X_X_X_X_X______|X_X_X_X_X_X_X_X_X______
>>
>> Basically, I do not need to transmit anything in the gaps. I just need the delay. However, as my CPU timing isn't coupled tightly to the adapter, sleeping on the CPU will not help. This is intended by design: I want to blow out a massive number of packets with exact timing and virtually no CPU requirement.
>>
>> What I look for is a sleep instruction executed by the adapter, which is buffered in order with the packets issued by rte_eth_tx_burst. (Plus some basic math rules for how to convert packet sizes to durations, based on line speeds.)
>>
>> On Sat., 13 Oct 2018 at 23:05, Cliff Burdick <shaklee3@gmail.com> wrote:
>>>
>>> Maybe I'm misunderstanding the problem, but do you need to transmit anything? Can you just use the rte_cycles functions to sleep for the remaining period in 20ms?
>>>
>>> On Thu, Oct 11, 2018 at 2:04 AM Philipp B wrote:
>>>>
>>>> Hi all!
>>>>
>>>> I am working on an RTP test traffic generator.
>>>> The basic idea is clock_nanosleep providing a 20ms clock cycle to start a (big) number of rte_eth_tx_burst calls, sending equally sized buffers. As long as the timing within a 20ms cycle is dictated primarily by the line speed, I can be sure that not just the first buffer of each cycle has a period of 20ms, but also the n-th buffer. (I have sent n-1 buffers before with the same size.)
>>>>
>>>> Basically, I see one 20ms interval as a series of time slots, each capable of holding an active RTP stream. My question now is: what to do with inactive time slots? As long as all active streams are located at consecutive time slots from the start of the 20ms interval, everything is fine. But I cannot guarantee this.
>>>>
>>>> What I need is some kind of dummy buffer which is not transmitted but generates a tx timing gap as long as a buffer of X bytes would take to be transferred.
>>>>
>>>> Is such functionality provided? As a workaround, I already thought about sending invalid packets (bad IP header checksum?). However, this won't be optimal when multiple lines are aggregated.
>>>>
>>>> Thanks!
>>>> Philipp Beyer

--
Chair of Network Architectures and Services
Department of Informatics
Technical University of Munich
Boltzmannstr. 3
85748 Garching bei München, Germany