From: "Dey, Souvik" <sodey@rbbn.com>
To: Ruinan <huruinan@gmail.com>, ruinan.hu@casa-systems.com
Cc: dev@dpdk.org, users@dpdk.org
Date: Mon, 15 Oct 2018 19:27:10 +0000
Subject: Re: [dpdk-users] i40e VF PMD not getting multicast packets

Moreover, I wanted to mention one more thing: the issue of multicast packets not working occurs only with the DPDK i40evf PMD. If I use the kernel i40evf driver instead, it works fine and I don't see any issues with v6. So why is trust mode a requirement for the DPDK PMD?

From: Dey, Souvik
Sent: Monday, October 15, 2018 2:52 PM
To: Ruinan <huruinan@gmail.com>
Cc: dev@dpdk.org; users@dpdk.org
Subject: RE: [dpdk-users] i40e VF PMD not getting multicast packets

No, trust mode is currently off on the host. I can try with trust mode on too. But I have a doubt: is it mandatory to turn trust mode on to make the VF receive v6 multicast packets? If yes, then how will this work in OpenStack (i40e VF DPDK) with v6?

--
Regards,
Souvik

From: Ruinan <huruinan@gmail.com>
Sent: Monday, October 15, 2018 2:33 PM
To: Dey, Souvik <sodey@rbbn.com>
Cc: dev@dpdk.org; users@dpdk.org
Subject: Re: [dpdk-users] i40e VF PMD not getting multicast packets

Hi,

Did you enable trust mode on the VF? If trust is off on the host, promiscuous mode can't be enabled from the VM.

Ruinan Hu
ruinan.hu@casa-systems.com
(857) 209-1955

> On Oct 15, 2018, at 14:26, Dey, Souvik <sodey@rbbn.com> wrote:
>
> Hi All,
> I am currently facing issues with receiving multicast IPv6 packets when using the i40evf PMD.
>
> I do see there is a limitation mentioned in the DPDK release notes:
>
> 16.27. I40e VF may not receive packets in promiscuous mode
> Description:
> Promiscuous mode is not supported by the DPDK i40e VF driver when using the i40e Linux kernel driver as the host driver.
> Implication:
> The i40e VF does not receive packets when the destination MAC address is unknown.
> Resolution/Workaround:
> Use an explicit destination MAC address that matches the VF.
> Affected Environment/Platform:
> All.
> Driver/Module:
> Poll Mode Driver (PMD).
>
> Does this mean that multicast promiscuous mode is also not supported? I am currently facing issues with receiving multicast packets for IPv6, which makes IPv6 fail with the i40evf PMD.
> I am currently using DPDK 17.11.2 with CentOS 7.5 and i40e 2.4.6 on the host. I also have rte_eth_allmulticast_enable() set for the port, but I still see that not all multicast packets reach the PMD.
> Also, if promiscuous mode has to be turned off, then how should we add a multicast address to the PF from the VF?
> --
> Regards,
> Souvik
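[For reference, the trust flag discussed above is set from the host (PF) side with iproute2, not from the VF. A minimal config sketch; the PF interface name `enp5s0f0` and VF index 0 are placeholders, not values from this thread:

    # On the host: mark VF 0 as trusted, so a guest driver (e.g. the DPDK
    # i40evf PMD) is allowed to enable promiscuous/multicast-promiscuous mode.
    # "enp5s0f0" is a hypothetical PF interface name.
    ip link set dev enp5s0f0 vf 0 trust on

    # Verify the per-VF trust flag:
    ip link show dev enp5s0f0

With trust off, the i40e PF driver rejects the VF's request to enter (multicast) promiscuous mode, which matches the behaviour reported in this thread.]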
From: Wajeeha Javed <wajeeha.javed123@gmail.com>
To: waqasahmed1471@gmail.com, users@dpdk.org
Date: Tue, 16 Oct 2018 09:42:08 +0500
Subject: Re: [dpdk-users] users Digest, Vol 155, Issue 7
Hi,

Thanks, everyone, for your reply. Please find my comments below.

> I've failed to find explicit limitations at first glance.
> The NB_MBUF define is typically internal to examples/apps.
> The question I'd like to double-check is whether the host has enough
> RAM and hugepages allocated? 5 million mbufs already require about 10G.

Total RAM = 128 GB
Available memory = 23 GB free
Total huge pages = 80
Free huge pages = 38
Huge page size = 1 GB

> The mempool uses uint32_t for most sizes, and the number of mempool items
> is a uint32_t, so the number of entries can be ~4G as stated, but make
> sure you have enough memory, as the overhead for mbufs is not just the
> header + the packet size.

Right. Currently there are 80 huge pages in total, 40 for each NUMA node (node 0 and node 1). I observed that I was using only 16 huge pages, while another 16 huge pages were used by a second DPDK application. By running only my DPDK application on NUMA node 0, I was able to increase the mempool size to 14M, which uses all the huge pages of NUMA node 0.

> My question is why are you copying the mbuf and not just linking the
> mbufs into a linked list? Maybe I do not understand the reason. I would
> try to make sure you do not copy the data, and just link the mbufs
> together using the next pointer in the mbuf header, unless you have
> chained mbufs already.

The reason for copying the mbuf is a NIC limitation: I cannot have more than 16384 Rx descriptors, whereas I want to hold back all packets arriving at a line rate of 10 Gbit/s on each port. I created a circular queue operating on a FIFO basis. Initially, I thought of holding the rte_mbuf* packet burst for a delay of 2 seconds.
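[The circular FIFO described above can be sketched in plain C. This is a generic illustration of the copy-into-ring design, not the DPDK mbuf/mempool version; the `pkt_ring` name, the slot size, and the tiny capacity are assumptions for the sketch:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_SLOTS 8      /* capacity; the real delay buffer holds millions */
    #define SLOT_BYTES 2048   /* max copied packet size (assumed) */

    /* Fixed-size circular FIFO that stores *copies* of packets, so the
     * original receive buffers (and their Rx descriptors) can be freed
     * immediately after enqueue. */
    typedef struct {
        uint8_t  data[RING_SLOTS][SLOT_BYTES];
        uint16_t len[RING_SLOTS];
        size_t   head, tail, count;
    } pkt_ring;

    static void ring_init(pkt_ring *r) { r->head = r->tail = r->count = 0; }

    static bool ring_enqueue(pkt_ring *r, const void *pkt, uint16_t len)
    {
        if (r->count == RING_SLOTS || len > SLOT_BYTES)
            return false;                   /* full: caller must drop or stall */
        memcpy(r->data[r->tail], pkt, len); /* copy, then free the source */
        r->len[r->tail] = len;
        r->tail = (r->tail + 1) % RING_SLOTS;
        r->count++;
        return true;
    }

    static bool ring_dequeue(pkt_ring *r, void *out, uint16_t *len)
    {
        if (r->count == 0)
            return false;
        *len = r->len[r->head];
        memcpy(out, r->data[r->head], *len);
        r->head = (r->head + 1) % RING_SLOTS;
        r->count--;
        return true;
    }

In the DPDK version, enqueue would copy the received mbuf's data into a buffer taken from a mempool and then rte_pktmbuf_free() the original, exactly as in the snippet quoted below.]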
Now at line rate we receive 14 million packets/s, so the descriptors fill up, and I have no option left other than copying the mbuf into the circular queue rather than storing an rte_mbuf* pointer. I know I have to compromise on performance to achieve the delay. So for copying mbufs, I allocate memory from a mempool, copy the received mbuf into it, and then free the original. Please find the code snippet below.

How can we chain different mbufs together? According to my understanding, chained mbufs in the API are used for storing segments of fragmented packets larger than the MTU. Even if we chain the mbufs together using the next pointer, we still need to free the received mbufs; otherwise we will not get free Rx descriptors back at a line rate of 10 Gbit/s, and eventually all the Rx descriptors will be filled and the NIC will not receive any more packets.

    for (j = 0; j < nb_rx; j++) {
        m = pkts_burst[j];
        struct rte_mbuf *copy_mbuf = pktmbuf_copy(m, pktmbuf_pool[sockid]);
        ....
        rte_pktmbuf_free(m);
    }

> The other question is, can you drop any packets? If not, then you only
> have the linking option IMO. If you can drop packets, then you can just
> start dropping them when the ring is getting full. Holding onto 28M
> packets for two seconds can cause other protocol-related problems: TCP
> could be sending retransmitted packets, and now you have caused a bunch
> of work on the RX side at the end point.

I would like my DPDK application to have zero packet loss; it only delays all received packets for 2 seconds and then transmits them as-is, without any change or processing. Moreover, the DPDK application receives tap traffic (monitoring traffic) rather than real-time traffic, so there will not be any TCP or other protocol-related problems.

Looking forward to your reply.

Best Regards,
Wajeeha Javed
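[For comparison, the "link instead of copy" suggestion from the thread amounts to retaining the received buffers and threading them through a next pointer, the way rte_mbuf chaining works. A generic plain-C sketch, not the DPDK API; the `buf_node`/`buf_list` names are assumptions:

    #include <assert.h>
    #include <stdlib.h>

    /* A buffer held by pointer rather than copied; the payload here would
     * be the rte_mbuf itself, linked via its next field. */
    typedef struct buf_node {
        void            *payload;
        struct buf_node *next;
    } buf_node;

    typedef struct {
        buf_node *head, *tail;   /* FIFO: append at tail, pop at head */
    } buf_list;

    static void list_init(buf_list *l) { l->head = l->tail = NULL; }

    /* O(1) append: the buffer is retained, not copied. */
    static int list_push(buf_list *l, void *payload)
    {
        buf_node *n = malloc(sizeof *n);
        if (!n) return 0;
        n->payload = payload;
        n->next = NULL;
        if (l->tail) l->tail->next = n; else l->head = n;
        l->tail = n;
        return 1;
    }

    /* Pop the oldest buffer; the caller transmits and frees it, which is
     * what finally releases the underlying Rx resources. */
    static void *list_pop(buf_list *l)
    {
        buf_node *n = l->head;
        if (!n) return NULL;
        l->head = n->next;
        if (!l->head) l->tail = NULL;
        void *p = n->payload;
        free(n);
        return p;
    }

The trade-off Wajeeha raises is visible here: with linking, the received buffers stay alive until pop, so with only 16384 Rx descriptors per queue this cannot buffer 28M in-flight packets — hence the copy-into-mempool design.]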