From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matan Azrad
To: "Ananyev, Konstantin", Thomas Monjalon, Gaetan Rivet, "Wu, Jingjing"
Cc: "dev@dpdk.org", Neil Horman, "Richardson, Bruce"
Date: Wed, 17 Jan 2018 18:02:27 +0000
In-Reply-To: <2601191342CEEE43887BDE71AB9772588627F076@irsmsx105.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 2/6] ethdev: add port ownership
List-Id: DPDK patches and discussions

Hi Konstantin

From: Ananyev, Konstantin, Wednesday, January 17, 2018 6:53 PM
> Hi Matan,
>
> > Hi Konstantin
> > From: Ananyev, Konstantin, Wednesday, January 17, 2018 2:55 PM
> > > > Hi Konstantin
> > > > From: Ananyev, Konstantin, Sent: Wednesday, January 17, 2018 1:24 PM
> > > > > Hi Matan,
> > > > > > Hi Konstantin
> > > > > > From: Ananyev, Konstantin, Tuesday, January 16, 2018 9:11 PM
> > > > > > > Hi Matan,
> > > > > > > > Hi Konstantin
> > > > > > > > From: Ananyev, Konstantin, Monday, January 15, 2018 8:44 PM
> > > > > > > > > Hi Matan,
> > > > > > > > > > Hi Konstantin
> > > > > > > > > > From: Ananyev, Konstantin, Monday, January 15, 2018 1:45 PM
> > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > From: Ananyev, Konstantin, Friday, January 12, 2018 2:02 AM
> > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > From: Ananyev, Konstantin, Thursday, January 11, 2018 2:40 PM
> > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > > > From: Ananyev, Konstantin, Wednesday, January 10, 2018 3:36 PM
> > > > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > > > It is good to see that now scanning/updating rte_eth_dev_data[] is lock protected, but it might be not very plausible to protect both data[] and next_owner_id using the same lock.
> > > > > > > > > > > > > > > > I guess you mean the owner structure in rte_eth_dev_data[port_id].
> > > > > > > > > > > > > > > > The next_owner_id is read by the ownership APIs (for owner validation), so it makes sense to use the same lock.
> > > > > > > > > > > > > > > > Actually, why not?
> > > > > > > > > > > > > > > Well, to me next_owner_id and rte_eth_dev_data[] are not directly related.
> > > > > > > > > > > > > > > You may create a new owner_id but it doesn't mean you would update rte_eth_dev_data[] immediately.
> > > > > > > > > > > > > > > And vice-versa - you might just want to update rte_eth_dev_data[].name or .owner_id.
> > > > > > > > > > > > > > > It is not very good coding practice to use the same lock for non-related data structures.
> > > > > > > > > > > > > > I see the relation as follows:
> > > > > > > > > > > > > > Since the ownership mechanism synchronization is in ethdev responsibility, we must protect against user mistakes as much as we can by using the same lock.
> > > > > > > > > > > > > > So, if a user tries to set by an invalid owner (exactly the ID which is currently allocated) we can protect against it.
> > > > > > > > > > > > > Hmm, not sure why you can't do the same checking with a different lock or an atomic variable?
> > > > > > > > > > > > The set ownership API is protected by the ownership lock and checks the owner ID validity by reading the next owner ID.
> > > > > > > > > > > > So, the owner ID allocation and the set API should use the same atomic mechanism.
> > > > > > > > > > > Sure, but all you are doing for checking validity is check that owner_id > 0 && owner_id < next_owner_id, right?
> > > > > > > > > > > As you don't allow owner_id overlap (16/3248 bits) you can safely do the same check with just atomic_get(&next_owner_id).
> > > > > > > > > > It will not protect it, scenario:
> > > > > > > > > > - current next_id is X.
> > > > > > > > > > - call set ownership of port A with owner id X by thread 0 (by user mistake).
> > > > > > > > > > - context switch
> > > > > > > > > > - allocate a new id by thread 1, get X, and change next_id to X+1 atomically.
> > > > > > > > > > - context switch
> > > > > > > > > > - Thread 0 validates X by atomic_read and succeeds to take ownership.
> > > > > > > > > > - The system lost the port (or it will be managed by two entities) - crash.
> > > > > > > > > Ok, and how would using a lock protect you in such a scenario?
> > > > > > > > The owner set API validation by thread 0 should fail because the owner validation is included in the protected section.
> > > > > > > Then your validation function would fail even if you'll use atomic ops instead of a lock.
> > > > > > No.
> > > > > > With atomics this specific scenario will cause the validation to pass.
> > > > > Can you explain to me how?
> > > > >
> > > > > rte_eth_is_valid_owner_id(uint16_t owner_id) {
> > > > >     int32_t cur_owner_id = RTE_MIN(rte_atomic32_get(next_owner_id), UINT16_MAX);
> > > > >
> > > > >     if (owner_id == RTE_ETH_DEV_NO_OWNER || owner_id > cur_owner_id) {
> > > > >         RTE_LOG(ERR, EAL, "Invalid owner_id=%d.\n", owner_id);
> > > > >         return 0;
> > > > >     }
> > > > >     return 1;
> > > > > }
> > > > >
> > > > > Let's say your next_owner_id == X, and you invoke rte_eth_is_valid_owner_id(owner_id=X+1) - it would fail.
> > > > Explanation:
> > > > The scenario with locks:
> > > > next_owner_id = X.
> > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > Ok, I see what you mean.
> > > But, as I said before, if thread 0 grabs the lock first - you'll experience the same failure.
> > > I understand now that for some reason you treat these two scenarios as something different, but for me it is pretty much the same case.
> > > And to me it means that neither lock nor atomic can fully protect you here.
> > I agree that we are not fully protected even when using locks, but one lock is more protective than either atomics or 2 different locks.
> > So, I think keeping it as is (with one lock) makes sense.
>
> Ok, if that is your preference - let's keep your current approach here.
>
> > > > Context switch.
> > > > Thread 1 calls owner_new and gets stuck on the lock.
> > > > Context switch.
> > > > Thread 0 does owner id validation and fails (Y>=X) - unlocks the lock and returns failure to the user.
> > > > Context switch.
> > > > Thread 1 takes the lock and updates X to X+1, then unlocks the lock.
> > > > Everything is OK!
> > > >
> > > > The same scenario with atomics:
> > > > next_owner_id = X.
> > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > > Context switch.
> > > > Thread 1 calls owner_new and changes X to X+1 (atomically).
> > > > Context switch.
> > > > Thread 0 does owner id validation and succeeds (Y < (atomic)X+1) - unlocks the lock and returns success to the user.
> > > > Problem!
> > > > > > With the lock, no next_id changes can be done while the thread is in the set API.
> > > > > > > But in fact your code is not protected for that scenario - it doesn't matter whether you use a lock or atomic ops.
> > > > > > > Let's consider your current code with the following scenario:
> > > > > > >
> > > > > > > next_owner_id == 1
> > > > > > > 1) Process 0:
> > > > > > >    rte_eth_dev_owner_new(&owner_id);
> > > > > > >    now owner_id == 1 and next_owner_id == 2
> > > > > > > 2) Process 1 (by mistake):
> > > > > > >    rte_eth_dev_owner_set(port_id=1, owner->id=1);
> > > > > > >    It will complete successfully, as owner_id == 1 is considered valid.
> > > > > > > 3) Process 0:
> > > > > > >    rte_eth_dev_owner_set(port_id=1, owner->id=1);
> > > > > > >    It will also complete with success, as owner->id is valid and is equal to the current port owner_id.
> > > > > > > So you finished with 2 processes assuming that they each own the same port exclusively.
> > > > > > >
> > > > > > > Honestly, in that situation locking around next_owner_id wouldn't give you any advantages over atomic ops.
> > > > > > This is a different scenario that we can't protect against with either atomics or locks.
> > > > > > But for the first scenario I described I think we can.
> > > > > > Please read it again, I described it step by step.
> > > > > > > > > I don't think you can protect yourself against such a scenario with or without locking.
> > > > > > > > > Unless you'll make it harder for the misbehaving thread to guess a valid owner_id, or add some extra logic here.
> > > > > > > > > > > > The set (and other) ownership APIs already use the ownership lock, so I think it makes sense to use the same lock also in ID allocation.
> > > > > > > > > > > > > > > > > In fact, for next_owner_id, you don't need a lock - just rte_atomic_t should be enough.
> > > > > > > > > > > > > > > > I don't think so, it is problematic in next_owner_id wraparound and may complicate the code in other places which read it.
> > > > > > > > > > > > > > > IMO it is not that complicated, something like that should work I think.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > /* init to 0 at startup */
> > > > > > > > > > > > > > > rte_atomic32_t owner_id;
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > int new_owner_id(void) {
> > > > > > > > > > > > > > >     int32_t x;
> > > > > > > > > > > > > > >     x = rte_atomic32_add_return(&owner_id, 1);
> > > > > > > > > > > > > > >     if (x > UINT16_MAX) {
> > > > > > > > > > > > > > >         rte_atomic32_dec(&owner_id);
> > > > > > > > > > > > > > >         return -EOVERFLOW;
> > > > > > > > > > > > > > >     } else
> > > > > > > > > > > > > > >         return x;
> > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > Why not just keep it simple and use the same lock?
> > > > > > > > > > > > > > > Lock is also fine, I just think it had better be a separate one - that would protect just next_owner_id.
> > > > > > > > > > > > > > > Though if you are going to use uuid here - all that is probably not relevant any more.
> > > > > > > > > > > > > > I agree about the uuid but still think the same lock should be used for both.
> > > > > > > > > > > > > But with uuid you don't need next_owner_id at all, right?
> > > > > > > > > > > > > So the lock will only be used for rte_eth_dev_data[] fields anyway.
> > > > > > > > > > > > Sorry, I meant uint64_t, not uuid.
> > > > > > > > > > > Ah ok, my thought was that uuid_t is better as with it you don't need to support your own code to allocate a new owner_id, but rely on system libs instead.
> > > > > > > > > > > But I wouldn't insist here.
> > > > > > > > > > > > > > > > > Another alternative would be to use 2 locks - one for next_owner_id, a second for actual data[] protection.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Another thing - you'll probably need to grab/release a lock inside rte_eth_dev_allocated() too.
> > > > > > > > > > > > > > > > > It is a public function used by drivers, so it needs to be protected too.
> > > > > > > > > > > > > > > > Yes, I thought about it, but decided not to use a lock in the next:
> > > > > > > > > > > > > > > > rte_eth_dev_allocated, rte_eth_dev_count, rte_eth_dev_get_name_by_port, rte_eth_dev_get_port_by_name, maybe more...
> > > > > > > > > > > > > > > As I can see in patch #3 you protect by lock the access to rte_eth_dev_data[].name (which seems like a good thing).
> > > > > > > > > > > > > > > So I think any other public function that accesses rte_eth_dev_data[].name should be protected by the same lock.
> > > > > > > > > > > > > > I don't think so. I can understand using the ownership lock here (as in port creation) but I don't think it is necessary.
> > > > > > > > > > > > > > What are we exactly protecting here?
> > > > > > > > > > > > > > Don't you think it is just timing? (ask in the next moment and you may get another answer) I don't see a possible crash.
> > > > > > > > > > > > > Not sure what you mean here by timing...
> > > > > > > > > > > > > As I understand, rte_eth_dev_data[].name uniquely identifies a device and is used by the port allocation/release/find functions.
> > > > > > > > > > > > > As you stated above:
> > > > > > > > > > > > > "1. The port allocation and port release synchronization will be managed by ethdev."
> > > > > > > > > > > > > To me it means that the ethdev layer has to make sure that all accesses to rte_eth_dev_data[].name are atomic.
> > > > > > > > > > > > > Otherwise, what would prevent the situation when one process does rte_eth_dev_allocate()->snprintf(rte_eth_dev_data[x].name, ...) while a second one does rte_eth_dev_allocated(rte_eth_dev_data[x].name, ...)?
> > > > > > > > > > > > The second will get True or False and that is it.
> > > > > > > > > > > Under a race condition - in the worst case it might crash, though for that you'll have to be really unlucky.
> > > > > > > > > > > Though in most cases, as you said, it would just not operate correctly.
> > > > > > > > > > > I think if we start to protect dev->name by a lock we need to do it for all instances (both read and write).
> > > > > > > > > > Since under the ownership rules the user must take ownership of a port before using it, I still don't see a problem here.
> > > > > > > > > I am not talking about owner id or name here.
> > > > > > > > > I am talking about dev->name.
> > > > > > > > So? The user still should take ownership of a device before using it (by name or by port id).
> > > > > > > > It can just read it without owning it, but not manage it.
> > > > > > > > > > Please, can you describe a specific crash scenario and explain how the locking could fix it?
> > > > > > > > > Let's say thread 0 is doing rte_eth_dev_allocate()->snprintf(rte_eth_dev_data[x].name, ...), while thread 1 is doing rte_pmd_ring_remove()->rte_eth_dev_allocated()->strcmp().
> > > > > > > > > And because of the race condition - rte_eth_dev_allocated() will return rte_eth_dev * for the wrong device.
> > > > > > > > Which wrong device do you mean? I guess it is the device which is currently being created by thread 0.
> > > > > > > > > Then rte_pmd_ring_remove() will call rte_free() for related resources, while it can still be in use by someone else.
> > > > > > > > The rte_pmd_ring_remove caller (some DPDK entity) must take ownership (or validate that he is the owner) of a port before doing it (free, release), so no issue here.
> > > > > > > Forget about ownership for a second.
> > > > > > > Suppose we have a process that created a ring port for itself (without setting any ownership) and used it for some time.
> > > > > > > Then it decided to remove it, so it calls rte_pmd_ring_remove() for it.
> > > > > > > At the same time a second process decides to call rte_eth_dev_allocate() (let's say for another ring port).
> > > > > > > They could collide trying to read (process 0) and modify (process 1) the same string rte_eth_dev_data[].name.
> > > > > > Do you mean that process 0 will successfully compare the process 1 new port name?
> > > > > Yes.
> > > > > > The state is in local process memory - so process 0 will not compare the process 1 port; from its point of view this port is in UNUSED state.
> > > > > Ok, and why can't it be in attached state in process 0 too?
> > > > Someone in process 0 should attach it using the protected attach_secondary somewhere in your scenario.
> > > Yes, process 0 can have this port attached too, why not?
> > See the function with inline comments:
> >
> > struct rte_eth_dev *
> > rte_eth_dev_allocated(const char *name) {
> >     unsigned i;
> >
> >     for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
> >
> > The below state is in local process memory.
> > So, if here process 1 will allocate a new port (the current i), update its local state to ATTACHED and write the name,
> > the state is not visible by process 0 until someone in process 0 will attach it by rte_eth_dev_attach_secondary.
> > So, to use rte_eth_dev_attach_secondary process 0 must take the lock, and it can't, because it is currently locked by process 1.
>
> Ok, I see.
> Thanks for your patience.
> BTW, that means that if, let's say, process 0 will call rte_eth_dev_allocate("xxx") and process 1 will call rte_eth_dev_allocate("yyy"), we can end up with the same port_id being used for different devices and 2 processes overwriting the same rte_eth_dev_data[port_id]?

No, contrary to the state, the lock itself is in shared memory, so 2 processes cannot allocate a port at the same time (you can see it in the next patch of this series).

> Konstantin
>
> >         if ((rte_eth_devices[i].state == RTE_ETH_DEV_ATTACHED) &&
> >             strcmp(rte_eth_devices[i].data->name, name) == 0)
> >             return &rte_eth_devices[i];
> >     }
> >     return NULL;
> > }
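
To illustrate the point about the lock living in shared memory, here is a minimal, hypothetical sketch - the struct, function and memzone names below are made up for illustration and are not the code of this series - showing how a single spinlock placed in a shared memzone serializes owner-id allocation (and any other ethdev bookkeeping that takes the same lock) across primary and secondary processes:

/*
 * Illustrative sketch only (not the code of this series).
 * The point: the spinlock lives in a memzone that both primary and
 * secondary processes map, so any path that takes it (port allocation,
 * owner-id allocation, owner set/unset) is serialized across processes,
 * even though dev->state stays per-process.
 */
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_memzone.h>
#include <rte_spinlock.h>

#define ETH_SHARED_MZ "eth_shared_data_sketch"   /* hypothetical name */

struct eth_shared_data {
    rte_spinlock_t ownership_lock;   /* the single lock discussed above */
    uint64_t next_owner_id;          /* protected by the same lock */
};

static struct eth_shared_data *eth_shared;

static int
eth_shared_data_prepare(void)
{
    const struct rte_memzone *mz;

    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
        mz = rte_memzone_reserve(ETH_SHARED_MZ, sizeof(*eth_shared),
                                 rte_socket_id(), 0);
    else
        mz = rte_memzone_lookup(ETH_SHARED_MZ);
    if (mz == NULL)
        return -1;

    eth_shared = mz->addr;
    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
        rte_spinlock_init(&eth_shared->ownership_lock);
        eth_shared->next_owner_id = 1;   /* 0 means "no owner" */
    }
    return 0;
}

/*
 * Owner-id allocation and validation take the same shared lock, so the
 * "thread 1 bumps next_owner_id while thread 0 validates" interleaving
 * discussed above cannot happen, within one process or across processes.
 */
static int
eth_owner_new(uint64_t *owner_id)
{
    rte_spinlock_lock(&eth_shared->ownership_lock);
    *owner_id = eth_shared->next_owner_id++;
    rte_spinlock_unlock(&eth_shared->ownership_lock);
    return 0;
}

The actual layout and names in the series differ; the sketch is only meant to show why the state check in rte_eth_dev_allocated() above can stay per-process while allocation itself is still serialized by the shared lock.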