From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 23 Apr 2015 15:00:43 +0100
From: Bruce Richardson
To: Ravi Kerur
Message-ID: <20150423140042.GA7248@bricha3-MOBL3>
References: <1429716828-19012-1-git-send-email-rkerur@gmail.com>
 <1429716828-19012-2-git-send-email-rkerur@gmail.com>
 <55389E44.8030603@intel.com>
 <20150423081138.GA8592@bricha3-MOBL3>
 <2601191342CEEE43887BDE71AB97725821420FC7@irsmsx105.ger.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Organization: Intel Shannon Ltd.
User-Agent: Mutt/1.5.23 (2014-03-12)
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] Implement memcmp using AVX/SSE instructio
List-Id: patches and discussions about DPDK

On Thu, Apr 23, 2015 at 06:53:44AM -0700, Ravi Kerur wrote:
> On Thu, Apr 23, 2015 at 2:23 AM, Ananyev, Konstantin <
> konstantin.ananyev@intel.com> wrote:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> > > Sent: Thursday, April 23, 2015 9:12 AM
> > > To: Wodkowski, PawelX
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH] Implement memcmp using AVX/SSE instructio
> > >
> > > On Thu, Apr 23, 2015 at 09:24:52AM +0200, Pawel Wodkowski wrote:
> > > > On 2015-04-22 17:33, Ravi Kerur wrote:
> > > > >+/**
> > > > >+ * Compare bytes between two locations. The locations must not overlap.
> > > > >+ *
> > > > >+ * @note This is implemented as a macro, so its address should not be taken
> > > > >+ * and care is needed as parameter expressions may be evaluated multiple times.
> > > > >+ *
> > > > >+ * @param src_1
> > > > >+ *   Pointer to the first source of the data.
> > > > >+ * @param src_2
> > > > >+ *   Pointer to the second source of the data.
> > > > >+ * @param n
> > > > >+ *   Number of bytes to compare.
> > > > >+ * @return
> > > > >+ *   true if equal, otherwise false.
> > > > >+ */
> > > > >+static inline bool
> > > > >+rte_memcmp(const void *src_1, const void *src_2,
> > > > >+	size_t n) __attribute__((always_inline));
> > > >
> > > > You are exposing this as public API, so I think you should follow the
> > > > description below or not call this _memcmp_:
> > > >
> > > >   int memcmp(const void *s1, const void *s2, size_t n);
> > > >
> > > >   The memcmp() function returns an integer less than, equal to, or
> > > >   greater than zero if the first n bytes of s1 is found, respectively,
> > > >   to be less than, to match, or be greater than the first n bytes of s2.
> > > >
> > >
> > > +1 to this point.
> > >
> > > Also, if I read the quoted performance numbers in your earlier mail
> > > correctly, we are only looking at a 1-4% performance increase. Is the
> > > additional code to maintain worth the benefit?
> >
> > Yep, same thought here, is it really worth it?
> > Konstantin
> >
> > > /Bruce
> > >
> > > > --
> > > > Pawel
> >
> I think I haven't exploited everything x86 has to offer to improve
> performance, and I am looking for inputs. Until we have exhausted all
> avenues I don't want to drop it. One thing I have noticed is that bigger
> key sizes get better performance numbers. I plan to re-run the perf tests
> with 64- and 128-byte key sizes and will report back. If there are any
> other avenues to try out, please let me know and I will give them a shot.
>
> Thanks,
> Ravi

Hi Ravi,

are 128-byte comparisons realistic? An IPv6 5-tuple with double vlan tags
is still only 41 bytes, or 48 with some padding added. While for a memcpy
function you can see cases where you are going to copy a whole packet,
meaning that sizes of 128B+ (up to multiple kB) are realistic, it's harder
to see that for a compare function.

In any case, we await the results of your further optimization work to see
how that goes.

Regards,
/Bruce