From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wiles, Keith" <keith.wiles@intel.com>
To: Ophir Munk
CC: DPDK <dev@dpdk.org>, Pascal Mazon, Thomas Monjalon, Olga Shern
Date: Thu, 14 Jun 2018 12:58:05 +0000
Message-ID: <2B233CF0-F33E-439D-BAF7-CA0CD8540AAC@intel.com>
In-Reply-To: <37D23262-6A5D-4931-A874-1733643C7F95@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 2/2] net/tap: support TSO (TCP Segment Offload)
> On Jun 14, 2018, at 2:59 AM, Ophir Munk wrote:
>
>> -----Original Message-----
>> From: Wiles, Keith [mailto:keith.wiles@intel.com]
>> Sent: Wednesday, June 13, 2018 7:04 PM
>> To: Ophir Munk
>> Cc: DPDK <dev@dpdk.org>; Pascal Mazon; Thomas Monjalon; Olga Shern
>> Subject: Re: [PATCH v4 2/2] net/tap: support TSO (TCP Segment Offload)
>>
>>> On Jun 12, 2018, at 11:31 AM, Ophir Munk wrote:
>>>
>>> This commit implements TCP segmentation offload (TSO) in TAP.
>>> The librte_gso library is used to segment large TCP payloads (e.g.
>>> packets of 64K bytes) into smaller MTU-sized buffers.
>>> By supporting the TSO capability in software, a TAP device can be
>>> used as a fail-safe sub-device and paired with another PCI device
>>> which supports the TSO capability in hardware.
>>>
>>> For more details on the librte_gso implementation please refer to
>>> the DPDK documentation.
>>> The number of newly generated TCP TSO segments is limited to 64.
>>>
>>> Reviewed-by: Raslan Darawsheh
>>> Signed-off-by: Ophir Munk
>>> ---
>>> drivers/net/tap/Makefile      |   2 +-
>>> drivers/net/tap/rte_eth_tap.c | 159 +++++++++++++++++++++++++++++++++++-------
>>> drivers/net/tap/rte_eth_tap.h |   3 +
>>> mk/rte.app.mk                 |   4 +-
>>> 4 files changed, 138 insertions(+), 30 deletions(-)
>>
>> You have set up the mempool with no cache size, which means you have
>> to take a lock for each allocation. This could be changed to have a
>> small cache per lcore, say 8, but the total number of mbufs needs to
>> be large enough to not allow starvation of any lcore:
>> total_mbufs = (max_num_ports * cache_size) + some_extra_mbufs;
>>
>
> I will set cache_size to 4.
> The total_mbufs should be mbufs_per_core(128) * cache_size(4), where
> max_num_ports is already taken into consideration in mbufs_per_core.
> For example, for a TCP packet of 1024 bytes and a TSO max segment size
> of 256 bytes, GSO will allocate 5 mbufs (one direct and four indirect)
> regardless of the number of ports.

Sounds good, thanks.

Regards,
Keith