From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Dumitrescu, Cristian"
To: "Yeddula, Avinash", "dev@dpdk.org"
Cc: "Bly, Mike"
Date: Tue, 18 Aug 2015 10:32:07 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D89126478A458A@IRSMSX108.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [2nd try] Lookup mechanism in DPDK HASH table.
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yeddula, Avinash
> Sent: Thursday, August 13, 2015 10:37 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [2nd try] Lookup mechanism in DPDK HASH table.
>
> Any comments on this question?
>
> Thanks
> -Avinash

Sorry for my delay, Avinash, I was out of office for a few days.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yeddula, Avinash
> Sent: Wednesday, August 12, 2015 3:04 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Lookup mechanism in DPDK HASH table.
>
> Hello All,
>
> I'm using DPDK extendable hash tables. This question is with respect to the
> lookup aspect of the hash table.
> I see that there is just one "t->key_offset" that is pre-defined for the
> hash table.

Just to avoid any confusion implied by the "pre-defined" statement: the key offset is definitely not hardcoded at build time. It is configurable per hash table instance, set when the hash table instance is initialized. Once set up for a particular hash table instance, it cannot be changed for that instance, but different hash table instances can have different values for this parameter.

> I also understand that the frame needs to carry the "lookup_key / keys" in
> the meta-data.
>
> Here is my question: How to support more than one lookup with different
> keys on the same frame on the same table.

I agree with Venkat that one way of doing this is to do the additional work of extracting the lookup key from the packet (which can have a different format, depending on the point in the processing pipeline) and placing it at the fixed offset within the packet meta-data.
This work can be done by:
- the action handler defined for the input ports of the same pipeline that the current table is part of
- the action handler of a table in the same pipeline (located before the current table)
- an action handler (of input port / output port / table) from a different pipeline instance (located before the current pipeline in the processing chain)

I also agree with Mike Bly that this is not the best way of doing things, and I don't think you actually need it, based on your use-case. Please see below for my suggestion.

> Use case: Src mac lookup and dst mac lookup on the same mac table.

Your use-case looks like classical L2 bridging. My understanding is:
- You need to look up the MAC destination in the MAC forwarding table: on lookup hit, unicast forward the frame on the port read from the hit table entry; on lookup miss, broadcast/flood the frame on all ports.
- You also need to learn the MAC source, i.e. make sure you add the MAC source to the same MAC forwarding table, to record the association between the MAC address and the port where it is located for future lookups. As Jasvinder is pointing out, you do not really need to do a lookup of the MAC source in the table; what you need is to add the MAC source to the table.

So one suggestion is to:
- have a single lookup operation in the MAC forwarding table (based on the MAC destination)
- have the table action handler (or the input port action handler, or the output port action handler) perform the add operation to the MAC forwarding table (add the MAC source as a new key to the table). The add operation is really an add/update, meaning that when the key is already present in the table, only the data associated with the key (i.e. the port where to forward the frame) is modified. This can be handy to pick up automatically the corner case of a station being moved from port A to port B (a MAC address that previously showed up as being sourced on port A is now sourced on port B).

You can also optimize things a bit to reduce the rate of add operations to the table, so you don't need to perform an add operation per frame:
- have a single lookup operation on table 1 (the MAC forwarding table), using the MAC destination as the lookup key
- have a single lookup operation on table 2 (a MAC learning cache table, which can be a small LRU-based table used to record the MAC addresses most frequently encountered lately), using the MAC source as the lookup key: only add the current MAC source to table 1 (the MAC forwarding table) on a lookup miss in table 2

I am sure that other people have even better ideas for optimizations.

>
> Thanks
> -Avinash