From: "Ananyev, Konstantin"
To: "Zhang, Qi Z", "Dai, Wei", "Wang, Xiao W"
CC: "'dev@dpdk.org'"
Date: Sun, 1 Apr 2018 12:08:33 +0000
Subject: Re: [dpdk-dev] [PATCH v2 1/2] net/fm10k: convert to new Rx offloads API
Message-ID: <2601191342CEEE43887BDE71AB977258A0AB71B4@irsmsx105.ger.corp.intel.com>
In-Reply-To: <039ED4275CED7440929022BC67E706115317602B@SHSMSX103.ccr.corp.intel.com>

Hi Qi,

> > > > > >
> > > > > > Hi Daiwei:
> > > > > >
> > > > > > > +static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev) {
> > > > > > > +	RTE_SET_USED(dev);
> > > > > > > +
> > > > > > > +	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
> > > > > > > +}
> > > > > >
> > > > > > Why a per-queue Rx scattered feature here?
> > > > > > My understanding is that we either use a scattered Rx function, which
> > > > > > enables this feature for all queues, or a non-scattered Rx function,
> > > > > > which disables it for all queues, right?
> > > > >
> > > > > Checked with Dai Wei offline; fm10k has a per-queue register that
> > > > > can be configured to support Rx scatter, so it is a per-queue offload.
> > > >
> > > > Ok, but these days we have one Rx function per device.
> > > > Looking at fm10k - it clearly has different Rx functions for the
> > > > scattered and non-scattered cases.
> > > > Yes, the HW does support scatter/non-scatter selection per queue, but
> > > > our SW doesn't (same for ixgbe and i40e). So how could it be a
> > > > per-queue offload?
> > >
> > > We saw that the implementation of fm10k is a little bit different from i40e.
> > > It sets the per-queue register bit "FM10K_SRRCTL_BUFFER_CHAINING_EN" to
> > > turn on the multi-seg feature when the offload is required.
> > >
> > > That means two queues can behave differently when processing a packet
> > > that exceeds the buffer size, based on the register setting, even though
> > > we use the same scattered Rx function. So we think this is a per-queue
> > > feature - does that make sense?
> >
> > Ok, suppose we have 2 queues configured.
> > One with DEV_RX_OFFLOAD_SCATTER on, the second with
> > DEV_RX_OFFLOAD_SCATTER off.
> > So the scattered Rx function will be selected, but for the second queue
> > HW support will not be enabled, so packets bigger than the Rx buffer will
> > be silently dropped by HW, right?
>
> Yes, according to the datasheet.
>
> Bit FM10K_SRRCTL_BUFFER_CHAINING_EN:
>
> 0b = Any packet longer than the data buffer size is terminated with a
> TOO_BIG error status in the Rx descriptor write-back. The remainder of
> the frame is not posted to the host; it is silently dropped.
> 1b = A packet can be spread over more than one single receive data buffer.

Ok, that's a bit of an unusual approach, but understandable.
Thanks
Konstantin
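
For readers following the thread, here is a minimal application-side sketch of the two-queue scenario discussed above: DEV_RX_OFFLOAD_SCATTER requested on queue 0 only, based on the per-queue capability reported in dev_info.rx_queue_offload_capa. It is not part of the patch; the port id, ring size and mempool are placeholder assumptions, it assumes rte_eth_dev_configure() has already been called, and the exact port/queue offload validation rules depend on the ethdev version in use.

/*
 * Sketch only: two Rx queues on one port, scatter enabled on queue 0 only.
 */
#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

static int
setup_rx_queues(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxq_conf = dev_info.default_rxconf;

	/* Queue 0: request scatter only if it is advertised as a per-queue offload. */
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_SCATTER)
		rxq_conf.offloads = DEV_RX_OFFLOAD_SCATTER;
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     &rxq_conf, mp);
	if (ret < 0)
		return ret;

	/* Queue 1: no scatter requested; per the datasheet excerpt above,
	 * oversized packets on this queue get TOO_BIG status and are dropped.
	 */
	rxq_conf.offloads = 0;
	return rte_eth_rx_queue_setup(port_id, 1, 512, rte_socket_id(),
				      &rxq_conf, mp);
}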
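
On the PMD side, the per-queue knob described in the datasheet excerpt comes down to one bit in each queue's SRRCTL register. The following is a simplified sketch, not the actual fm10k driver code: the helper name and parameters are illustrative, and it assumes the driver's existing base-code macros (FM10K_SRRCTL, FM10K_SRRCTL_BUFFER_CHAINING_EN, FM10K_READ_REG, FM10K_WRITE_REG) and headers, so it would live next to the driver's Rx init code rather than stand alone.

/*
 * Sketch only: toggle buffer chaining for one queue from that queue's
 * requested offloads.
 */
static void
fm10k_rxq_set_buffer_chaining(struct fm10k_hw *hw, uint16_t queue_id,
			      uint64_t queue_offloads)
{
	uint32_t srrctl = FM10K_READ_REG(hw, FM10K_SRRCTL(queue_id));

	if (queue_offloads & DEV_RX_OFFLOAD_SCATTER)
		/* 1b: a packet may be spread over multiple Rx buffers */
		srrctl |= FM10K_SRRCTL_BUFFER_CHAINING_EN;
	else
		/* 0b: an oversized packet gets TOO_BIG status and is dropped */
		srrctl &= ~FM10K_SRRCTL_BUFFER_CHAINING_EN;

	FM10K_WRITE_REG(hw, FM10K_SRRCTL(queue_id), srrctl);
}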