Subject: Re: [dpdk-dev] [PATCH v1] net/memif: multi-process support
From: "Burakov, Anatoly"
To: Jakub Grajciar, dev@dpdk.org
Date: Thu, 13 Jun 2019 10:40:32 +0100
In-Reply-To: <20190613064248.4930-1-jgrajcia@cisco.com>

On 13-Jun-19 7:42 AM, Jakub Grajciar wrote:
> Multi-process support for memif PMD.
> Primary process handles connection establishment.
> Secondary process queries for memory regions.
>
> Signed-off-by: Jakub Grajciar
> ---
> +/*
> + * Request regions
> + * Called by secondary process, when ports link status goes up.
> + */
> +static int
> +memif_mp_request_regions(struct rte_eth_dev *dev)
> +{
> +	int ret, i;
> +	struct timespec timeout = {.tv_sec = 5, .tv_nsec = 0};
> +	struct rte_mp_msg msg, *reply;
> +	struct rte_mp_reply replies;
> +	struct mp_region_msg *msg_param = (struct mp_region_msg *)msg.param;
> +	struct mp_region_msg *reply_param;
> +	struct memif_region *r;
> +	struct pmd_process_private *proc_private = dev->process_private;
> +
> +	MIF_LOG(DEBUG, "Requesting memory regions");
> +
> +	for (i = 0; i < ETH_MEMIF_MAX_REGION_NUM; i++) {
> +		/* Prepare the message */
> +		memset(&msg, 0, sizeof(msg));
> +		strlcpy(msg.name, MEMIF_MP_SEND_REGION, sizeof(msg.name));
> +		strlcpy(msg_param->port_name, dev->data->name,
> +			sizeof(msg_param->port_name));
> +		msg_param->idx = i;
> +		msg.len_param = sizeof(*msg_param);
> +
> +		/* Send message */
> +		ret = rte_mp_request_sync(&msg, &replies, &timeout);
> +		if (ret < 0 || replies.nb_received != 1) {
> +			MIF_LOG(ERR, "Failed to send mp msg: %d",
> +				rte_errno);
> +			return -1;
> +		}
> +
> +		reply = &replies.msgs[0];
> +		reply_param = (struct mp_region_msg *)reply->param;

Replies need to be freed after use; otherwise you're leaking memory. See 
the rte_mp_request_sync() API documentation [1] and the programmer's 
guide [2].

[1] https://doc.dpdk.org/api/rte__eal_8h.html#abc35eb3d9139a0e7b9277a844dd2a61f
[2] https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html#communication-between-multiple-processes
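Not something your patch has to copy verbatim, but for illustration, a 
minimal sketch of the cleanup pattern the API doc describes could look 
like this (untested; the helper name and the single-reply error handling 
are made up for the example):

#include <stdlib.h>
#include <string.h>
#include <time.h>

#include <rte_eal.h>

/*
 * Illustrative helper (name made up for this example): replies.msgs is
 * malloc'd by rte_mp_request_sync() and is owned by the caller afterwards.
 */
static int
mp_request_one_reply(struct rte_mp_msg *msg)
{
	struct timespec timeout = {.tv_sec = 5, .tv_nsec = 0};
	struct rte_mp_reply replies;
	int ret;

	/* zero the reply struct so the free() below is safe on every path */
	memset(&replies, 0, sizeof(replies));

	ret = rte_mp_request_sync(msg, &replies, &timeout);
	if (ret < 0 || replies.nb_received != 1) {
		free(replies.msgs); /* free(NULL) is a harmless no-op */
		return -1;
	}

	/* ... consume replies.msgs[0].param and replies.msgs[0].fds here ... */

	free(replies.msgs); /* the reply buffer belongs to the caller */
	return 0;
}

The same applies to the error paths further down in this function - the 
-ENOMEM and missing-fd returns below would leak the reply as well.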
> +
> +		if (reply_param->size > 0) {
> +			r = rte_zmalloc("region", sizeof(struct memif_region), 0);
> +			if (r == NULL) {
> +				MIF_LOG(ERR, "Failed to alloc memif region.");
> +				return -ENOMEM;
> +			}
> +			r->region_size = reply_param->size;
> +			if (reply->num_fds < 1) {
> +				MIF_LOG(ERR, "Missing file descriptor.");
> +				return -1;
> +			}
> +			r->fd = reply->fds[0];
> +			r->addr = NULL;
> +
> +			proc_private->regions[reply_param->idx] = r;
> +			proc_private->regions_num++;
> +		}
> +	}
> +

-- 
Thanks,
Anatoly