From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lukasz Wojciechowski
To: David Hunt , Bruce Richardson
Cc: dev@dpdk.org, l.wojciechow@partner.samsung.com, stable@dpdk.org
Date: Sat, 10 Oct 2020 00:01:51 +0200
Message-Id: <20201009220202.20834-5-l.wojciechow@partner.samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201009220202.20834-1-l.wojciechow@partner.samsung.com>
1j5hgbA9JKYcOQwN0auMEov6VjNOYJSfhbBsASPjKkbx1NLi3PTUYqO81HK94sTc4tK8dL3k /NxNjMDYP/3v+JcdjLv+JB1iFOBgVOLhbUhuiBdiTSwrrsw9xCjBwawkwut09nScEG9KYmVV alF+fFFpTmrxIUZpDhYlcV7jRS9jhQTSE0tSs1NTC1KLYLJMHJxSDYwFn+9tUlKuFDD4ZWTI H55yKitFlqlvzbfsQt3GJhMxlf3qpjvbmHIDeBYaBgf3OO4+v+fs89NSj9e/sAl4bFKZuG21 gtLN9mUmnTelQyL499dNWR/y6vxcjsV8ZlXe79dP1j25bNf+DUePOESwC//9Iupi9Mhcp/Xh 25uhsoqHDrb9MegKmqjEUpyRaKjFXFScCABAeU1s+QIAAA== X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrKLMWRmVeSWpSXmKPExsVy+t/xu7o/7zbEGyz6y2NxY5W9Rd+kj0wW 7z5tZ7J41rOO0eJfxx92B1aPXwuWsnos3vOSyePguz1MAcxRejZF+aUlqQoZ+cUltkrRhhZG eoaWFnpGJpZ6hsbmsVZGpkr6djYpqTmZZalF+nYJehnX16cXnMivuPr9KFsD482ILkYODgkB E4nfkwO6GLk4hASWMkpsufqQFSIuI/HhkkAXIyeQKSzx51oXG4gtJPCBUeLRTSMQm03AVuLI zK9g5SICYRInVvqDhJkF3CW2LJ7KDGILC3hLvFvWyApiswioSkzbsBUszivgKrFj1zF2iPHy Eqs3HACLcwq4SbTuWsYCcU4j0DkHXjJPYORbwMiwilEktbQ4Nz232EivODG3uDQvXS85P3cT IzAItx37uWUHY9e74EOMAhyMSjy8GokN8UKsiWXFlbmHGCU4mJVEeJ3Ono4T4k1JrKxKLcqP LyrNSS0+xGgKdNVEZinR5HxghOSVxBuaGppbWBqaG5sbm1koifN2CByMERJITyxJzU5NLUgt gulj4uCUamDUS/nY9b6+v/v25Nmt++NEPPdpXH79k3/vp299eWYV+f2Hpf9IaKzM/WlhOUvK rnIO44xbrTtMzfca3GyNWynx5N2VKyfYfjlsLk3YvsAycNN+hQ3L+3P+73WweutzoHGP350n sWdC0++Htmv5cxTP/rrw15V5V3v6+3NPv0qeW1s7Q6acqWq6EktxRqKhFnNRcSIAPkzQnFgC AAA= X-CMS-MailID: 20201009220233eucas1p285b4d01402c0c8bcfd018673afeb05eb X-Msg-Generator: CA Content-Type: text/plain; charset="utf-8" X-RootMTR: 20201009220233eucas1p285b4d01402c0c8bcfd018673afeb05eb X-EPHeader: CA CMS-TYPE: 201P X-CMS-RootMailID: 20201009220233eucas1p285b4d01402c0c8bcfd018673afeb05eb References: <20201008052323.11547-1-l.wojciechow@partner.samsung.com> <20201009220202.20834-1-l.wojciechow@partner.samsung.com> Subject: [dpdk-stable] [PATCH v6 04/15] distributor: handle worker shutdown in burst mode X-BeenThere: stable@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: patches for DPDK stable branches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: stable-bounces@dpdk.org Sender: "stable" The burst version of distributor implementation was missing proper handling of worker shutdown. A worker processing packets received from distributor can call rte_distributor_return_pkt() function informing distributor that it want no more packets. Further calls to rte_distributor_request_pkt() or rte_distributor_get_pkt() however should inform distributor that new packets are requested again. Lack of the proper implementation has caused that even after worker informed about returning last packets, new packets were still sent from distributor causing deadlocks as no one could get them on worker side. This patch adds handling shutdown of the worker in following way: 1) It fixes usage of RTE_DISTRIB_VALID_BUF handshake flag. This flag was formerly unused in burst implementation and now it is used for marking valid packets in retptr64 replacing invalid use of RTE_DISTRIB_RETURN_BUF flag. 2) Uses RTE_DISTRIB_RETURN_BUF as a worker to distributor handshake in retptr64 to indicate that worker has shutdown. 3) Worker that shuts down blocks also bufptr for itself with RTE_DISTRIB_RETURN_BUF flag allowing distributor to retrieve any in flight packets. 4) When distributor receives information about shutdown of a worker, it: marks worker as not active; retrieves any in flight and backlog packets and process them to different workers; unlocks bufptr64 by clearing RTE_DISTRIB_RETURN_BUF flag and allowing use in the future if worker requests any new packages. 5) Do not allow to: send or add to backlog any packets for not active workers. 
Fixes: 775003ad2f96 ("distributor: add new burst-capable library")
Cc: david.hunt@intel.com
Cc: stable@dpdk.org

Signed-off-by: Lukasz Wojciechowski
Acked-by: David Hunt
---
 lib/librte_distributor/distributor_private.h |   3 +
 lib/librte_distributor/rte_distributor.c     | 175 +++++++++++++++----
 2 files changed, 146 insertions(+), 32 deletions(-)

diff --git a/lib/librte_distributor/distributor_private.h b/lib/librte_distributor/distributor_private.h
index 489aef2ac..689fe3e18 100644
--- a/lib/librte_distributor/distributor_private.h
+++ b/lib/librte_distributor/distributor_private.h
@@ -155,6 +155,9 @@ struct rte_distributor {
 	enum rte_distributor_match_function dist_match_fn;
 
 	struct rte_distributor_single *d_single;
+
+	uint8_t active[RTE_DISTRIB_MAX_WORKERS];
+	uint8_t activesum;
 };
 
 void
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index b720abe03..115443fc0 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -51,7 +51,7 @@ rte_distributor_request_pkt(struct rte_distributor *d,
 	 * Sync with worker on GET_BUF flag.
 	 */
 	while (unlikely(__atomic_load_n(retptr64, __ATOMIC_ACQUIRE)
-			& RTE_DISTRIB_GET_BUF)) {
+			& (RTE_DISTRIB_GET_BUF | RTE_DISTRIB_RETURN_BUF))) {
 		rte_pause();
 		uint64_t t = rte_rdtsc()+100;
 
@@ -67,11 +67,11 @@ rte_distributor_request_pkt(struct rte_distributor *d,
 	for (i = count; i < RTE_DIST_BURST_SIZE; i++)
 		buf->retptr64[i] = 0;
 
-	/* Set Return bit for each packet returned */
+	/* Set VALID_BUF bit for each packet returned */
 	for (i = count; i-- > 0; )
 		buf->retptr64[i] =
 			(((int64_t)(uintptr_t)(oldpkt[i])) <<
-			RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_RETURN_BUF;
+			RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_VALID_BUF;
 
 	/*
 	 * Finally, set the GET_BUF to signal to distributor that cache
@@ -97,11 +97,13 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
 		return (pkts[0]) ? 1 : 0;
 	}
 
-	/* If bit is set, return
+	/* If any of below bits is set, return.
+	 * GET_BUF is set when distributor hasn't sent any packets yet
+	 * RETURN_BUF is set when distributor must retrieve in-flight packets
 	 * Sync with distributor to acquire bufptrs
 	 */
 	if (__atomic_load_n(&(buf->bufptr64[0]), __ATOMIC_ACQUIRE)
-		& RTE_DISTRIB_GET_BUF)
+		& (RTE_DISTRIB_GET_BUF | RTE_DISTRIB_RETURN_BUF))
 		return -1;
 
 	/* since bufptr64 is signed, this should be an arithmetic shift */
@@ -113,7 +115,7 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
 	}
 
 	/*
-	 * so now we've got the contents of the cacheline into an  array of
+	 * so now we've got the contents of the cacheline into an array of
 	 * mbuf pointers, so toggle the bit so scheduler can start working
 	 * on the next cacheline while we're working.
 	 * Sync with distributor on GET_BUF flag. Release bufptrs.
@@ -175,7 +177,7 @@ rte_distributor_return_pkt(struct rte_distributor *d,
 	 * Sync with worker on GET_BUF flag.
 	 */
 	while (unlikely(__atomic_load_n(retptr64, __ATOMIC_ACQUIRE)
-			& RTE_DISTRIB_GET_BUF)) {
+			& (RTE_DISTRIB_GET_BUF | RTE_DISTRIB_RETURN_BUF))) {
 		rte_pause();
 		uint64_t t = rte_rdtsc()+100;
 
@@ -187,17 +189,25 @@ rte_distributor_return_pkt(struct rte_distributor *d,
 	__atomic_thread_fence(__ATOMIC_ACQUIRE);
 	for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
 		/* Switch off the return bit first */
-		buf->retptr64[i] &= ~RTE_DISTRIB_RETURN_BUF;
+		buf->retptr64[i] = 0;
 
 	for (i = num; i-- > 0; )
 		buf->retptr64[i] = (((int64_t)(uintptr_t)oldpkt[i]) <<
-			RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_RETURN_BUF;
+			RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_VALID_BUF;
+
+	/* Use RETURN_BUF on bufptr64 to notify distributor that
+	 * we won't read any mbufs from there even if GET_BUF is set.
+	 * This allows distributor to retrieve in-flight already sent packets.
+	 */
+	__atomic_or_fetch(&(buf->bufptr64[0]), RTE_DISTRIB_RETURN_BUF,
+		__ATOMIC_ACQ_REL);
 
-	/* set the GET_BUF but even if we got no returns.
-	 * Sync with distributor on GET_BUF flag. Release retptrs.
+	/* set the RETURN_BUF on retptr64 even if we got no returns.
+	 * Sync with distributor on RETURN_BUF flag. Release retptrs.
+	 * Notify distributor that we don't request more packets any more.
 	 */
 	__atomic_store_n(&(buf->retptr64[0]),
-		buf->retptr64[0] | RTE_DISTRIB_GET_BUF, __ATOMIC_RELEASE);
+		buf->retptr64[0] | RTE_DISTRIB_RETURN_BUF, __ATOMIC_RELEASE);
 
 	return 0;
 }
@@ -267,6 +277,59 @@ find_match_scalar(struct rte_distributor *d,
 	 */
 }
 
+/*
+ * When worker called rte_distributor_return_pkt()
+ * and passed RTE_DISTRIB_RETURN_BUF handshake through retptr64,
+ * distributor must retrieve both inflight and backlog packets assigned
+ * to the worker and reprocess them to another worker.
+ */
+static void
+handle_worker_shutdown(struct rte_distributor *d, unsigned int wkr)
+{
+	struct rte_distributor_buffer *buf = &(d->bufs[wkr]);
+	/* double BURST size for storing both inflights and backlog */
+	struct rte_mbuf *pkts[RTE_DIST_BURST_SIZE * 2];
+	unsigned int pkts_count = 0;
+	unsigned int i;
+
+	/* If GET_BUF is cleared there are in-flight packets sent
+	 * to worker which does not require new packets.
+	 * They must be retrieved and assigned to another worker.
+	 */
+	if (!(__atomic_load_n(&(buf->bufptr64[0]), __ATOMIC_ACQUIRE)
+		& RTE_DISTRIB_GET_BUF))
+		for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
+			if (buf->bufptr64[i] & RTE_DISTRIB_VALID_BUF)
+				pkts[pkts_count++] = (void *)((uintptr_t)
+					(buf->bufptr64[i]
+					>> RTE_DISTRIB_FLAG_BITS));
+
+	/* Make following operations on handshake flags on bufptr64:
+	 * - set GET_BUF to indicate that distributor can overwrite buffer
+	 *     with new packets if worker will make a new request.
+	 * - clear RETURN_BUF to unlock reads on worker side.
+	 */
+	__atomic_store_n(&(buf->bufptr64[0]), RTE_DISTRIB_GET_BUF,
+		__ATOMIC_RELEASE);
+
+	/* Collect backlog packets from worker */
+	for (i = 0; i < d->backlog[wkr].count; i++)
+		pkts[pkts_count++] = (void *)((uintptr_t)
+			(d->backlog[wkr].pkts[i] >> RTE_DISTRIB_FLAG_BITS));
+
+	d->backlog[wkr].count = 0;
+
+	/* Clear both inflight and backlog tags */
+	for (i = 0; i < RTE_DIST_BURST_SIZE; i++) {
+		d->in_flight_tags[wkr][i] = 0;
+		d->backlog[wkr].tags[i] = 0;
+	}
+
+	/* Recursive call */
+	if (pkts_count > 0)
+		rte_distributor_process(d, pkts, pkts_count);
+}
+
 /*
  * When the handshake bits indicate that there are packets coming
@@ -285,19 +348,33 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
 	/* Sync on GET_BUF flag. Acquire retptrs.
 	 */
 	if (__atomic_load_n(&(buf->retptr64[0]), __ATOMIC_ACQUIRE)
-			& RTE_DISTRIB_GET_BUF) {
+			& (RTE_DISTRIB_GET_BUF | RTE_DISTRIB_RETURN_BUF)) {
 		for (i = 0; i < RTE_DIST_BURST_SIZE; i++) {
-			if (buf->retptr64[i] & RTE_DISTRIB_RETURN_BUF) {
+			if (buf->retptr64[i] & RTE_DISTRIB_VALID_BUF) {
 				oldbuf = ((uintptr_t)(buf->retptr64[i] >>
 					RTE_DISTRIB_FLAG_BITS));
 				/* store returns in a circular buffer */
 				store_return(oldbuf, d, &ret_start, &ret_count);
 				count++;
-				buf->retptr64[i] &= ~RTE_DISTRIB_RETURN_BUF;
+				buf->retptr64[i] &= ~RTE_DISTRIB_VALID_BUF;
 			}
 		}
 		d->returns.start = ret_start;
 		d->returns.count = ret_count;
+
+		/* If worker requested packets with GET_BUF, set it to active
+		 * otherwise (RETURN_BUF), set it to not active.
+		 */
+		d->activesum -= d->active[wkr];
+		d->active[wkr] = !!(buf->retptr64[0] & RTE_DISTRIB_GET_BUF);
+		d->activesum += d->active[wkr];
+
+		/* If worker returned packets without requesting new ones,
+		 * handle all in-flights and backlog packets assigned to it.
+		 */
+		if (unlikely(buf->retptr64[0] & RTE_DISTRIB_RETURN_BUF))
+			handle_worker_shutdown(d, wkr);
+
 		/* Clear for the worker to populate with more returns.
 		 * Sync with distributor on GET_BUF flag. Release retptrs.
 		 */
@@ -322,11 +399,15 @@ release(struct rte_distributor *d, unsigned int wkr)
 	unsigned int i;
 
 	handle_returns(d, wkr);
+	if (unlikely(!d->active[wkr]))
+		return 0;
 
 	/* Sync with worker on GET_BUF flag */
 	while (!(__atomic_load_n(&(d->bufs[wkr].bufptr64[0]), __ATOMIC_ACQUIRE)
 		& RTE_DISTRIB_GET_BUF)) {
 		handle_returns(d, wkr);
+		if (unlikely(!d->active[wkr]))
+			return 0;
 		rte_pause();
 	}
 
@@ -366,7 +447,7 @@ rte_distributor_process(struct rte_distributor *d,
 	int64_t next_value = 0;
 	uint16_t new_tag = 0;
 	uint16_t flows[RTE_DIST_BURST_SIZE] __rte_cache_aligned;
-	unsigned int i, j, w, wid;
+	unsigned int i, j, w, wid, matching_required;
 
 	if (d->alg_type == RTE_DIST_ALG_SINGLE) {
 		/* Call the old API */
@@ -374,11 +455,13 @@ rte_distributor_process(struct rte_distributor *d,
 				mbufs, num_mbufs);
 	}
 
+	for (wid = 0 ; wid < d->num_workers; wid++)
+		handle_returns(d, wid);
+
 	if (unlikely(num_mbufs == 0)) {
 		/* Flush out all non-full cache-lines to workers. */
 		for (wid = 0 ; wid < d->num_workers; wid++) {
 			/* Sync with worker on GET_BUF flag. */
-			handle_returns(d, wid);
 			if (__atomic_load_n(&(d->bufs[wid].bufptr64[0]),
 				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF) {
 				release(d, wid);
@@ -388,6 +471,9 @@ rte_distributor_process(struct rte_distributor *d,
 		return 0;
 	}
 
+	if (unlikely(!d->activesum))
+		return 0;
+
 	while (next_idx < num_mbufs) {
 		uint16_t matches[RTE_DIST_BURST_SIZE];
 		unsigned int pkts;
@@ -412,22 +498,30 @@ rte_distributor_process(struct rte_distributor *d,
 		for (; i < RTE_DIST_BURST_SIZE; i++)
 			flows[i] = 0;
 
-		switch (d->dist_match_fn) {
-		case RTE_DIST_MATCH_VECTOR:
-			find_match_vec(d, &flows[0], &matches[0]);
-			break;
-		default:
-			find_match_scalar(d, &flows[0], &matches[0]);
-		}
+		matching_required = 1;
+		for (j = 0; j < pkts; j++) {
+			if (unlikely(!d->activesum))
+				return next_idx;
+
+			if (unlikely(matching_required)) {
+				switch (d->dist_match_fn) {
+				case RTE_DIST_MATCH_VECTOR:
+					find_match_vec(d, &flows[0],
+						&matches[0]);
+					break;
+				default:
+					find_match_scalar(d, &flows[0],
+						&matches[0]);
+				}
+				matching_required = 0;
+			}
 
 		/*
 		 * Matches array now contain the intended worker ID (+1) of
 		 * the incoming packets. Any zeroes need to be assigned
 		 * workers.
 		 */
-		for (j = 0; j < pkts; j++) {
-
 			next_mb = mbufs[next_idx++];
 			next_value = (((int64_t)(uintptr_t)next_mb) <<
 					RTE_DISTRIB_FLAG_BITS);
 
@@ -447,12 +541,18 @@ rte_distributor_process(struct rte_distributor *d,
 			 */
 			/* matches[j] = 0; */
 
-			if (matches[j]) {
+			if (matches[j] && d->active[matches[j]-1]) {
 				struct rte_distributor_backlog *bl =
 						&d->backlog[matches[j]-1];
 				if (unlikely(bl->count ==
 						RTE_DIST_BURST_SIZE)) {
 					release(d, matches[j]-1);
+					if (!d->active[matches[j]-1]) {
+						j--;
+						next_idx--;
+						matching_required = 1;
+						continue;
+					}
 				}
 
 				/* Add to worker that already has flow */
@@ -462,11 +562,21 @@ rte_distributor_process(struct rte_distributor *d,
 				bl->pkts[idx] = next_value;
 
 			} else {
-				struct rte_distributor_backlog *bl =
-						&d->backlog[wkr];
+				struct rte_distributor_backlog *bl;
+
+				while (unlikely(!d->active[wkr]))
+					wkr = (wkr + 1) % d->num_workers;
+				bl = &d->backlog[wkr];
+
 				if (unlikely(bl->count ==
 						RTE_DIST_BURST_SIZE)) {
 					release(d, wkr);
+					if (!d->active[wkr]) {
+						j--;
+						next_idx--;
+						matching_required = 1;
+						continue;
+					}
 				}
 
 				/* Add to current worker worker */
@@ -485,9 +595,7 @@ rte_distributor_process(struct rte_distributor *d,
 					matches[w] = wkr+1;
 			}
 		}
-		wkr++;
-		if (wkr >= d->num_workers)
-			wkr = 0;
+		wkr = (wkr + 1) % d->num_workers;
 	}
 
 	/* Flush out all non-full cache-lines to workers. */
@@ -663,6 +771,9 @@ rte_distributor_create(const char *name,
 	for (i = 0 ; i < num_workers ; i++)
 		d->backlog[i].tags = &d->in_flight_tags[i][RTE_DIST_BURST_SIZE];
 
+	memset(d->active, 0, sizeof(d->active));
+	d->activesum = 0;
+
 	dist_burst_list = RTE_TAILQ_CAST(rte_dist_burst_tailq.head,
 					  rte_dist_burst_list);
-- 
2.17.1