From: Ferruh Yigit
To: Igor Ryzhov, Dan Gora
Cc: dev, Aaron Conole, Stephen Hemminger
Subject: Re: [dpdk-dev] [PATCH v3] kni: rework rte_kni_update_link using ioctl
Date: Mon, 28 Jun 2021 13:55:41 +0100
References: <20190924193312.17381-1-iryzhov@nfware.com>
 <20190925093623.18419-1-iryzhov@nfware.com>
 <05b55e82-f992-c9d3-593d-3c608b7bb543@intel.com>
 <347b8a5c-f7c8-45d4-063f-2a4b5bdf90cc@intel.com>

On 10/27/2019 8:16 PM, Igor Ryzhov wrote:
> Hi Ferruh, Dan,
>
> Sure, I remember last year's discussion, but now I see the problem in
> the current implementation.
>
> Ferruh, here is an example:
>
> We have a thread in the application that processes KNI commands from
> the kernel. It receives a config_network_if command to set the
> interface up, calls rte_eth_dev_start, and here is the problem: we
> cannot call the current rte_kni_update_link from here, because the
> interface is not yet up in the kernel, since we haven't sent the
> response for config_network_if yet. So we need to send the response
> first, and only after that can we use rte_kni_update_link. We don't
> even know the exact time between sending the response and the moment
> the kernel receives it and the interface comes up. We always have a
> dependency on the interface state in the kernel. With the ioctl
> approach there is no such dependency: we can call rte_kni_update_link
> whenever we want, even while the interface is down in the kernel. And
> as I explained, it is common to set the interface up while processing
> config_network_if.
>

Hi Igor,

I agree with the problem you describe. Not being able to update the
link carrier status while the KNI interface is down is inconvenient.

For a physical interface this behavior may make sense: while the
interface is down it won't be used by the OS, so there is no need to
power on the PHY and track the carrier status. But the original link
set feature was intended to allow updating the carrier status
independently of the interface up/down state.
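To make the scenario concrete, below is a rough sketch of the flow you
describe (not code from the patch; the single g_kni pointer is an
assumption, a real application would keep a per-port mapping, and error
handling is trimmed):

    #include <rte_ethdev.h>
    #include <rte_kni.h>

    /* Assumption: set by the application when it creates the KNI
     * device; shown as a single global for brevity. */
    static struct rte_kni *g_kni;

    /* rte_kni_ops.config_network_if callback, run from the
     * rte_kni_handle_request() context when the kernel asks to bring
     * the interface up or down. The response to the kernel is sent
     * only after this callback returns, so the kni netdev is still
     * down in the kernel at this point. */
    static int
    kni_config_network_if(uint16_t port_id, uint8_t if_up)
    {
        int ret;

        if (!if_up) {
            rte_eth_dev_stop(port_id);
            return 0;
        }

        ret = rte_eth_dev_start(port_id);
        if (ret < 0)
            return ret;

        /* The sysfs-based rte_kni_update_link() fails here, since
         * the kernel still sees the interface as down; the
         * ioctl-based rework is what makes this call valid. */
        return rte_kni_update_link(g_kni, 1);
    }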
Overall, I also agree that it is better not to introduce a new ioctl
when an existing interface can do the job, but here the existing
interface doesn't quite fit the intended use case, so I am OK with
having the ioctl.

Can you please send a new version rebased on the latest head, so we can
continue on that one?

Thanks,
ferruh

> Igor
>
> On Mon, Oct 14, 2019 at 11:56 PM Dan Gora wrote:
>
>> Here's another link to the thread where this was discussed last
>> year... Igor was actually on this thread as well...
>>
>> https://mails.dpdk.org/archives/dev/2018-August/110383.html
>>
>> On Mon, Oct 14, 2019 at 4:01 PM Dan Gora wrote:
>>>
>>> My original patch to add this feature was basically the same thing
>>> as this: setting the link status via a KNI ioctl. That method was
>>> rejected after _much_ discussion and we eventually settled on the
>>> current implementation.
>>>
>>> My original patch was here: Message-Id:
>>> <20180628225548.21885-1-dg@adax.com>
>>>
>>> If you search for KNI and dg@adax.com in the DPDK devel list you
>>> should be able to suss out the whole discussion that led to the
>>> current implementation.
>>>
>>> thanks
>>> dan
>>>
>>> On Mon, Oct 14, 2019 at 1:17 PM Ferruh Yigit wrote:
>>>>
>>>> On 10/14/2019 5:10 PM, Ferruh Yigit wrote:
>>>>> On 9/25/2019 10:36 AM, Igor Ryzhov wrote:
>>>>>> The current implementation doesn't allow us to update the KNI
>>>>>> carrier if the interface is not yet UP in the kernel. It means
>>>>>> that we can't use it in the same thread that processes
>>>>>> rte_kni_ops.config_network_if, which would be very convenient,
>>>>>> because it would give us the correct carrier status of the
>>>>>> interface right after we enable it, without any additional
>>>>>> thread to track the link status.
>>>>>
>>>>> Hi Igor,
>>>>>
>>>>> The existing thread tracks the link status of the physical device
>>>>> and reflects the changes to the kni netdev, but "struct
>>>>> rte_kni_ops" (rte_kni_ops.config_network_if) works the other way
>>>>> around: it captures (some) requests to the kni netdev and
>>>>> reflects them to the underlying physical device. Even with
>>>>> 'rte_kni_update_link()' updated to use an ioctl, the thread still
>>>>> looks required, and this patch doesn't really change that part.
>>>>>
>>>>> Also, I am reluctant to extend the KNI ioctl interface when there
>>>>> is a generic way to do that work.
>>>>>
>>>>> What is the use case for updating the kni netdev carrier status
>>>>> when the interface is down?
>>>>
>>>> btw, if the problem is the status of the interface being
>>>> 'no-carrier' by default, this can be changed with the "carrier=on"
>>>> parameter of the kni kernel module:
>>>> "insmod ./build/kmod/rte_kni.ko carrier=on"
>>
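For reference, the "additional thread to track link status" discussed
above is roughly the following (a minimal sketch; the function name and
polling interval are mine, not from any patch):

    #include <string.h>
    #include <unistd.h>

    #include <rte_ethdev.h>
    #include <rte_kni.h>

    /* Mirror the physical port's link state onto the kni netdev by
     * polling the ethdev link status. */
    static void
    kni_monitor_link(uint16_t port_id, struct rte_kni *kni)
    {
        struct rte_eth_link link;
        int prev = -1;

        for (;;) {
            memset(&link, 0, sizeof(link));
            rte_eth_link_get_nowait(port_id, &link);
            if (link.link_status != prev) {
                rte_kni_update_link(kni, link.link_status);
                prev = link.link_status;
            }
            usleep(500 * 1000); /* poll every 500 ms */
        }
    }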