From mboxrd@z Thu Jan  1 00:00:00 1970
From: Noa Ezra <noae@mellanox.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
CC: dev@dpdk.org, Asaf Penso, Matan Azrad, Shahaf Shuler
Date: Thu, 20 Dec 2018 09:45:40 +0000
References: <1544703399-32621-1-git-send-email-noae@mellanox.com> <2601191342CEEE43887BDE71AB977258010D8BCD9A@IRSMSX106.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB977258010D8BCD9A@IRSMSX106.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH] examples/ip_fragmentation: support bigger packets
List-Id: DPDK patches and discussions
Hi,

For some vendors' NICs (such as Mellanox, which reflects the Linux behavior in this case) the Rx and Tx configurations must be the same, so it is not enough to configure max_rx_pkt_len to JUMBO_FRAME_MAX_SIZE; we also need to configure the MTU in order to receive large packets.
To avoid adding another option to the command line, we can set the MTU equal to max_rx_pkt_len; this won't change the functionality of the test.
In addition, some PMDs need scatter-gather enabled so that RX works for frames bigger than the mbuf size. We can add that configuration and avoid changing the mbuf size.
What do you think about this solution?

Thanks,
Noa.

-----Original Message-----
From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
Sent: Thursday, December 20, 2018 2:18 AM
To: Noa Ezra
Cc: dev@dpdk.org
Subject: RE: [PATCH] examples/ip_fragmentation: support bigger packets

Hi,

>
> Adding MTU and mbuf size configuration to the application's command
> line, in order to be able to receive all packet sizes by the NIC and
> DPDK application.
> The maximum transmission unit (MTU) is the largest size packet in
> bytes that can be sent on the network, therefore before adding the MTU
> parameter, the NIC could not receive packets larger than 1500 bytes,
> which is the default MTU size.

I wonder why that is?
Currently ip_fragmentation sets max_rx_pkt_len up to 9.5KB:

static struct rte_eth_conf port_conf = {
	.rxmode = {
		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
...
	local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
		dev_info.max_rx_pktlen,
		local_port_conf.rxmode.max_rx_pkt_len);

That in theory should be enough to enable jumbo frame RX.
Did you find it is not working as expected, and if so, on which NIC?

> The mbuf is the memory buffer that contains the packet. Before adding
> the mbuf parameter, the DPDK application could not receive packets
> larger than 2KB, which is the default mbuf size.

Again, why is that?
All NICs that support scatter-gather RX should be able to receive frames bigger than the mbuf size (if properly configured, of course).
Are you trying to make it work on NICs with no multi-segment support for RX/TX? But then how do you plan to do TX (usually it is symmetric for RX/TX)?

>
> Signed-off-by: Noa Ezra
> ---
>  doc/guides/sample_app_ug/ip_frag.rst | 18 ++++++++-
>  examples/ip_fragmentation/main.c     | 77 ++++++++++++++++++++++++++++++++++---
>  2 files changed, 88 insertions(+), 7 deletions(-)
>
> diff --git a/doc/guides/sample_app_ug/ip_frag.rst b/doc/guides/sample_app_ug/ip_frag.rst
> index 7914a97..13933c7 100644
> --- a/doc/guides/sample_app_ug/ip_frag.rst
> +++ b/doc/guides/sample_app_ug/ip_frag.rst
> @@ -53,7 +53,7 @@ Application usage:
>
>  .. code-block:: console
>
> -    ./build/ip_fragmentation [EAL options] -- -p PORTMASK [-q NQ]
> +    ./build/ip_fragmentation [EAL options] -- -p PORTMASK [-q NQ] [-b MBUFSIZE] [-m MTUSIZE]
>
>  where:
>
> @@ -61,6 +61,15 @@ where:
>
>  * -q NQ is the number of queue (=ports) per lcore (the default is 1)
>
> +* -b MBUFSIZE is the mbuf size in bytes (the default is 2048)
> +
> +* -m MTUSIZE is the mtu size in bytes (the default is 1500)
> +
> +The MTU is the maximum size of a single data unit that can be
> +transmitted over the network, therefore it must be greater than the requested max packet size, otherwise the NIC won't be able to get the packet.
> +The mbuf is a buffer that is used by the DPDK application to store
> +message buffers. If not using scatter then the mbuf size must be
> +greater than the requested max packet size, otherwise the DPDK will not be able to receive the packet.
>  To run the example in linuxapp environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX queue per lcore:
>
>  .. code-block:: console
> @@ -96,6 +105,13 @@ To run the example in linuxapp environment with 1 lcore (4) over 2 ports(0,2) wi
>
>      ./build/ip_fragmentation -l 4 -n 3 -- -p 5 -q 2
>
> +To run the example with defined MTU size 4000 bytes and mbuf size 9000 bytes:
> +
> +.. code-block:: console
> +
> +    ./build/ip_fragmentation -l 4 -n 3 -- -p 5 -m 4000 -b 9000
> +
> +
>  To test the application, flows should be set up in the flow generator
>  that match the values in the l3fwd_ipv4_route_array and/or l3fwd_ipv6_route_array table.
>
> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> index 17a877d..0cf23b4 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -111,6 +111,10 @@
>
>  static int rx_queue_per_lcore = 1;
>
> +static int mbuf_size = RTE_MBUF_DEFAULT_BUF_SIZE;
> +
> +static int mtu_size = ETHER_MTU;
> +
>  #define MBUF_TABLE_SIZE (2 * MAX(MAX_PKT_BURST, MAX_PACKET_FRAG))
>
>  struct mbuf_table {
> @@ -425,7 +429,6 @@ struct rte_lpm6_config lpm6_config = {
>  	 * Read packet from RX queues
>  	 */
>  	for (i = 0; i < qconf->n_rx_queue; i++) {
> -
>  		portid = qconf->rx_queue_list[i].portid;
>  		nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst,
>  					 MAX_PKT_BURST);
> @@ -455,9 +458,11 @@ struct rte_lpm6_config lpm6_config = {
>  static void
>  print_usage(const char *prgname)
>  {
> -	printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
> +	printf("%s [EAL options] -- -p PORTMASK [-q NQ] [-b MBUFSIZE] [-m MTUSIZE]\n"
>  	       "  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> -	       "  -q NQ: number of queue (=ports) per lcore (default is 1)\n",
> +	       "  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
> +	       "  -b MBUFSIZE: set the data size of mbuf\n"
> +	       "  -m MTUSIZE: set the MTU size\n",
>  	       prgname);
>  }
>
> @@ -496,6 +501,38 @@ struct rte_lpm6_config lpm6_config = {
>  	return n;
>  }
>
> +static int
> +parse_mbufsize(const char *q_arg)
> +{
> +	char *end = NULL;
> +	unsigned long mbuf;
> +
> +	/* parse hexadecimal string */

You expect a decimal string below.

> +	mbuf = strtoul(q_arg, &end, 10);
> +	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
> +		return -1;

You probably need to set and test errno too.

> +	if (mbuf == 0)
> +		return -1;
> +
> +	return mbuf;
> +}

These 2 parse functions look identical. Why do you need two of them?

> +
> +static int
> +parse_mtusize(const char *q_arg)
> +{
> +	char *end = NULL;
> +	unsigned long mtu;
> +
> +	/* parse hexadecimal string */
> +	mtu = strtoul(q_arg, &end, 10);
> +	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
> +		return -1;
> +	if (mtu == 0)
> +		return -1;
> +
> +	return mtu;
> +}
> +
>  /* Parse the argument given in the command line of the application */
>  static int
>  parse_args(int argc, char **argv)
> @@ -510,7 +547,7 @@ struct rte_lpm6_config lpm6_config = {
>
>  	argvopt = argv;
>
> -	while ((opt = getopt_long(argc, argvopt, "p:q:",
> +	while ((opt = getopt_long(argc, argvopt, "p:q:b:m:",
>  				lgopts, &option_index)) != EOF) {
>
>  		switch (opt) {
> @@ -534,6 +571,26 @@ struct rte_lpm6_config lpm6_config = {
>  			}
>  			break;
>
> +		/* mbufsize */
> +		case 'b':
> +			mbuf_size = parse_mbufsize(optarg);
> +			if (mbuf_size < 0) {
> +				printf("invalid mbuf size\n");
> +				print_usage(prgname);
> +				return -1;
> +			}
> +			break;
> +
> +		/* mtusize */
> +		case 'm':
> +			mtu_size = parse_mtusize(optarg);
> +			if (mtu_size < 0) {
> +				printf("invalid mtu size\n");
> +				print_usage(prgname);
> +				return -1;
> +			}
> +			break;
> +
>  		/* long options */
>  		case 0:
>  			print_usage(prgname);
> @@ -777,9 +834,8 @@ struct rte_lpm6_config lpm6_config = {
>  	RTE_LOG(INFO, IP_FRAG, "Creating direct mempool on socket %i\n",
>  			socket);
>  	snprintf(buf, sizeof(buf), "pool_direct_%i", socket);
> -
>  	mp = rte_pktmbuf_pool_create(buf, NB_MBUF, 32,
> -		0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
> +		0, mbuf_size, socket);
>  	if (mp == NULL) {
>  		RTE_LOG(ERR, IP_FRAG, "Cannot create direct mempool\n");
>  		return -1;
> @@ -892,6 +948,15 @@ struct rte_lpm6_config lpm6_config = {
>  			dev_info.max_rx_pktlen,
>  			local_port_conf.rxmode.max_rx_pkt_len);
>
> +	/* set the mtu to the maximum received packet size */
> +	ret = rte_eth_dev_set_mtu(portid, mtu_size);
> +	if (ret < 0) {
> +		printf("\n");
> +		rte_exit(EXIT_FAILURE, "Set MTU failed: "
> +			"err=%d, port=%d\n",
> +			ret, portid);
> +	}
> +
>  	/* get the lcore_id for this port */
>  	while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
>  	       qconf->n_rx_queue == (unsigned)rx_queue_per_lcore) {
> --
> 1.8.3.1
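[Editor's note] The review above makes two concrete points about the parse helpers: errno should be set and tested around strtoul(), and the two identical functions could be one. Below is a minimal sketch of such a combined helper; it is not part of the posted patch, and the name parse_uint() is hypothetical. Clearing errno before strtoul() and checking it afterwards rejects out-of-range input (strtoul() returns ULONG_MAX and sets errno to ERANGE) instead of silently truncating it.

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse a positive decimal argument; return its value, or -1 on error. */
long
parse_uint(const char *q_arg)
{
	char *end = NULL;
	unsigned long v;

	errno = 0;
	v = strtoul(q_arg, &end, 10);	/* base 10: decimal, as the comment should say */
	if (errno != 0 || q_arg[0] == '\0' || end == NULL || *end != '\0')
		return -1;
	if (v == 0 || v > INT_MAX)	/* zero and values that overflow int are invalid */
		return -1;

	return (long)v;
}
```

Both the 'b' and 'm' getopt cases could then call this single helper (e.g. mbuf_size = parse_uint(optarg);), removing the duplication the review points out; returning long keeps -1 distinct from every valid value up to INT_MAX.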