Date: Mon, 18 Jul 2016 17:00:29 +0200
From: Adrien Mazarguil
To: "Chandran, Sugesh"
Cc: dev@dpdk.org, Thomas Monjalon, "Zhang, Helin", "Wu, Jingjing",
 Rasesh Mody, Ajit Khaparde, Rahul Lakkireddy, "Lu, Wenzhuo",
 Jan Medala, John Daley, "Chen, Jing D", "Ananyev, Konstantin",
 Matej Vido, Alejandro Lucero, Sony Chacko, Jerin Jacob,
 "De Lara Guarch, Pablo", Olga Shern, "Chilikin, Andrey"
Subject: Re: [dpdk-dev] [RFC] Generic flow director/filtering/classification API
Message-ID: <20160718150029.GJ7621@6wind.com>
In-Reply-To: <2EF2F5C0CC56984AA024D0B180335FCB13E02938@IRSMSX102.ger.corp.intel.com>

On Mon, Jul 18, 2016 at 01:26:09PM +0000, Chandran, Sugesh wrote:
> Hi Adrien,
> Thank you for getting back on this.
> Please find my comments below.

Hi Sugesh,

Same for me, removed again the parts we agree on.

[...]

> > > > > > > [Sugesh] Another concern is the cost and time of installing
> > > > > > > these rules in the hardware. Can we make these APIs
> > > > > > > time-bound (or at least provide an option to set a time
> > > > > > > limit for executing them), so that the application does not
> > > > > > > have to wait so long when installing and deleting flows on
> > > > > > > slow hardware/NICs? What do you think? Most datapath flow
> > > > > > > installations are dynamic and triggered only when there is
> > > > > > > ingress traffic. Delays in flow insertion/deletion have
> > > > > > > unpredictable consequences.
> > > > > >
> > > > > > This API is (currently) aimed at the control path only, and
> > > > > > must indeed be assumed to be slow. Creating millions of rules
> > > > > > may take quite a while, as it may involve syscalls and other
> > > > > > time-consuming synchronization on the PMD side.
> > > > > >
> > > > > > So currently there is no plan to have rules added from the
> > > > > > data path with time constraints. I think it would be
> > > > > > implemented through a different set of functions anyway.
> > > > > >
> > > > > > I do not think adding time limits is practical; even
> > > > > > specifying in the API that creating a single flow rule must
> > > > > > take less than some maximum number of seconds to be effective
> > > > > > would be too much of a constraint (applications that create
> > > > > > all their flows during init may not care, after all).
> > > > > >
> > > > > > You should consider in any case that modifying flow rules
> > > > > > will always be slower than receiving packets, there is no way
> > > > > > around that. Applications have to live with it and provide a
> > > > > > software fallback for incoming packets while managing flow
> > > > > > rules.
> > > > > >
> > > > > > Moreover, think about what happens when you hit the maximum
> > > > > > number of flow rules and cannot create any more. Applications
> > > > > > need to implement some kind of fallback in their data path.
> > > > > >
> > > > > > Offloading flows in HW is also only useful if they live much
> > > > > > longer than the time taken to create and delete them. Perhaps
> > > > > > applications may choose to do so after detecting long-lived
> > > > > > flows such as TCP sessions.
> > > > > >
> > > > > > You may have one separate control thread dedicated to
> > > > > > managing flows and keep your normal control thread unaffected
> > > > > > by delays. Several threads can even be dedicated, one per
> > > > > > device.
> > > > > [Sugesh] I agree that flow insertion cannot be as fast as the
> > > > > packet receiving rate. From an application point of view, the
> > > > > problem arises when hardware flow insertion takes longer than
> > > > > software flow insertion. At the very least, the application has
> > > > > to know the cost of inserting/deleting a rule in hardware
> > > > > beforehand; otherwise, how can it choose the right flow
> > > > > candidates for hardware? My point here is that applications
> > > > > expect deterministic behavior from a classifier when inserting
> > > > > and deleting rules.
> > > >
> > > > Understood, however it will be difficult to estimate,
> > > > particularly if a PMD must rearrange flow rules to make room for
> > > > a new one due to a priority level collision or some other
> > > > HW-related reason. I mean, the time spent cannot be assumed to be
> > > > constant; even PMDs cannot know it in advance because it also
> > > > depends on the performance of the host CPU.
> > > >
> > > > Such applications may find it easier to measure elapsed time for
> > > > the rules they create, make statistics and extrapolate from this
> > > > information for future rules. I do not think the PMD can help
> > > > much here.
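To make the above concrete: an application could time each creation
and build its statistics from that. Rough sketch only; it assumes a
creation function along the lines discussed in this thread (the exact
prototype and the <rte_flow.h> header are not settled yet):

#include <rte_cycles.h>
#include <rte_flow.h> /* hypothetical, not merged yet */

/* Create a rule and report the time it took, in microseconds. */
static struct rte_flow *
flow_create_timed(uint8_t port_id,
                  const struct rte_flow_attr *attr,
                  const struct rte_flow_item *pattern,
                  const struct rte_flow_action *actions,
                  struct rte_flow_error *error,
                  double *usec)
{
        uint64_t start = rte_get_timer_cycles();
        struct rte_flow *flow;

        flow = rte_flow_create(port_id, attr, pattern, actions, error);
        *usec = (double)(rte_get_timer_cycles() - start) *
                1e6 / rte_get_timer_hz();
        return flow;
}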
> > > [Sugesh] From an application point of view this can be an issue.
> > > There is even a security concern when we program a short-lived
> > > flow. Let's consider this case:
> > >
> > > 1) The control plane programs the hardware with a queue
> > >    termination flow.
> > > 2) The software dataplane is programmed to treat the packets from
> > >    that specific queue accordingly.
> > > 3) The flow is removed from the hardware (let's assume this is a
> > >    long wait process), or the hardware may even take more time to
> > >    report the status than to remove the flow physically. Now the
> > >    packets in the queue are no longer considered matched/flow
> > >    hits, because the software dataplane update has yet to happen.
> > >
> > > We need a way to sync the software datapath with the classifier
> > > APIs even though they are programmed from different control
> > > threads.
> > >
> > > Are we saying these APIs are only meant for user-defined static
> > > flows?
> >
> > No, that is definitely not the intent. These are good points.
> >
> > With the specified API, applications may have to adapt their logic
> > and take extra precautions in order to remain on the safe side at
> > all times.
> >
> > For your above example, the application cannot assume a rule is
> > added/deleted as long as the PMD has not completed the related
> > operation, which means keeping the SW rule/fallback in place in the
> > meantime. This should handle security concerns as long as, after
> > removing a rule, packets end up in a default queue entirely
> > processed by SW. Obviously this may worsen response time.
> >
> > The ID action can help with this. By knowing which rule a received
> > packet is associated with, processing can be temporarily offloaded
> > to another thread without much complexity.
> [Sugesh] Setting an ID for every flow may not be viable, especially
> when the ID is small (only 8 bits wide). I am not sure this is a
> valid case though.

Agreed, I'm not saying this solution works for all devices,
particularly those that do not support IDs at all.

> How about a hardware flow flag in the packet descriptor that is set
> when a packet hits any hardware rule? This way software is not
> blocked by a hardware rule. Even though there is an additional
> overhead in validating this flag, the software datapath can identify
> hardware-processed packets easily. Packets then traverse the
> software fallback path until the rule configuration is complete.
> This flag avoids setting an ID action for every hardware flow being
> configured.

That makes sense. I see it as a sort of single-bit ID, but it could
be implemented through a different action for less capable devices.
PMDs that support 32-bit IDs could reuse the same code for both
features.

I understand you'd prefer having this feature always present, however
we already know that not all PMDs/devices support it, and like
everything else this is a kind of offload that needs to be explicitly
requested by the application as it may not be needed.

If we go with the separate action, then perhaps it would make sense
to rename "ID" to "MARK" to make things clearer:

 RTE_FLOW_ACTION_TYPE_FLAG /* Flag packets processed by flow rule. */

 RTE_FLOW_ACTION_TYPE_MARK /* Attach a 32 bit value to a packet. */

I guess the result of the FLAG action would be something in ol_flags.
Thoughts?
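For instance, assuming the existing FDIR mbuf flags were reused to
report these actions (dedicated flags could be defined instead), the
data path could tell HW-matched packets apart as follows:

#include <rte_mbuf.h>

/* Hypothetical application handlers, not part of the API. */
void handle_mark(struct rte_mbuf *m, uint32_t mark);
void handle_flag(struct rte_mbuf *m);
void handle_sw_fallback(struct rte_mbuf *m);

static void
dispatch_packet(struct rte_mbuf *m)
{
        if (m->ol_flags & PKT_RX_FDIR_ID) {
                /* MARK action: 32 bit value attached by the rule. */
                handle_mark(m, m->hash.fdir.hi);
        } else if (m->ol_flags & PKT_RX_FDIR) {
                /* FLAG action: packet matched some flow rule. */
                handle_flag(m);
        } else {
                /* No HW match reported, take the SW fallback path. */
                handle_sw_fallback(m);
        }
}

The FLAG case would then cost no more than a branch in the data path,
while MARK-capable devices provide the full 32 bit value.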
> > I think applications have to implement SW fallbacks all the time,
> > as even some sort of guarantee on the flow rule processing time may
> > not be enough to avoid misdirected packets and related security
> > issues.
> [Sugesh] The software fallback will always be there. However I am a
> little bit confused about how software is going to identify packets
> that have already been processed in hardware. I feel we need some
> notification in the packet itself when a hardware rule hits. An ID,
> a flag, or any other option?

Yeah I think so too, as long as it is optional because we cannot
assume all PMDs will support it.

> > Let's wait for applications to start using this API and then
> > consider an extra set of asynchronous / real-time functions when
> > the need arises. It should not impact the way rules are specified.
> [Sugesh] Sure. I think the rule definition may not be impacted by
> this.

Thanks for your comments.

-- 
Adrien Mazarguil
6WIND