Subject: Re: [dpdk-dev] [PATCH v3] net/memif: multi-process support
To: Jakub Grajciar, dev@dpdk.org
References: <20190618084851.5322-1-jgrajcia@cisco.com> <20190625100502.20624-1-jgrajcia@cisco.com>
From: "Burakov, Anatoly"
Message-ID: <88888329-70b7-0f8d-4943-253254041edf@intel.com>
Date: Tue, 25 Jun 2019 11:26:23 +0100
In-Reply-To: <20190625100502.20624-1-jgrajcia@cisco.com>

On 25-Jun-19 11:05 AM, Jakub Grajciar wrote:
> Multi-process support for memif PMD.
> Primary process handles connection establishment.
> Secondary process queries for memory regions.
>
> Signed-off-by: Jakub Grajciar
> ---
> +/* Message header to synchronize regions */
> +struct mp_region_msg {
> +	char port_name[RTE_DEV_NAME_MAX_LEN];
> +	memif_region_index_t idx;
> +	memif_region_size_t size;
> +};
> +
> +static int
> +memif_mp_send_region(const struct rte_mp_msg *msg, const void *peer)
> +{
> +	struct rte_eth_dev *dev;
> +	struct pmd_process_private *proc_private;
> +	const struct mp_region_msg *msg_param = (const struct mp_region_msg *)msg->param;
> +	struct rte_mp_msg reply;
> +	struct mp_region_msg *reply_param = (struct mp_region_msg *)reply.param;
> +	uint16_t port_id;
> +	int ret;
> +
> +	/* Get requested port */
> +	ret = rte_eth_dev_get_port_by_name(msg_param->port_name, &port_id);
> +	if (ret) {
> +		MIF_LOG(ERR, "Failed to get port id for %s",
> +			msg_param->port_name);
> +		return -1;
> +	}
> +	dev = &rte_eth_devices[port_id];
> +	proc_private = dev->process_private;
> +
> +	memset(&reply, 0, sizeof(reply));
> +	strlcpy(reply.name, msg->name, sizeof(reply.name));
> +	reply_param->idx = msg_param->idx;
> +	if (proc_private->regions[msg_param->idx] != NULL) {
> +		reply_param->size = proc_private->regions[msg_param->idx]->region_size;
> +		reply.fds[0] = proc_private->regions[msg_param->idx]->fd;
> +		reply.num_fds = 1;
> +	}
> +	reply.len_param = sizeof(*reply_param);
> +	if (rte_mp_reply(&reply, peer) < 0) {
> +		MIF_LOG(ERR, "Failed to reply to an add region request");
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Request regions
> + * Called by secondary process, when ports link status goes up.
> + */
> +static int
> +memif_mp_request_regions(struct rte_eth_dev *dev)
> +{
> +	int ret, i;
> +	struct timespec timeout = {.tv_sec = 5, .tv_nsec = 0};
> +	struct rte_mp_msg msg, *reply;
> +	struct rte_mp_reply replies;
> +	struct mp_region_msg *msg_param = (struct mp_region_msg *)msg.param;
> +	struct mp_region_msg *reply_param;
> +	struct memif_region *r;
> +	struct pmd_process_private *proc_private = dev->process_private;
> +
> +	MIF_LOG(DEBUG, "Requesting memory regions");
> +
> +	for (i = 0; i < ETH_MEMIF_MAX_REGION_NUM; i++) {
> +		/* Prepare the message */
> +		memset(&msg, 0, sizeof(msg));
> +		strlcpy(msg.name, MEMIF_MP_SEND_REGION, sizeof(msg.name));
> +		strlcpy(msg_param->port_name, dev->data->name,
> +			sizeof(msg_param->port_name));
> +		msg_param->idx = i;
> +		msg.len_param = sizeof(*msg_param);
> +
> +		/* Send message */
> +		ret = rte_mp_request_sync(&msg, &replies, &timeout);
> +		if (ret < 0 || replies.nb_received != 1) {
> +			MIF_LOG(ERR, "Failed to send mp msg: %d",
> +				rte_errno);
> +			return -1;
> +		}
> +
> +		reply = &replies.msgs[0];
> +		reply_param = (struct mp_region_msg *)reply->param;
> +
> +		if (reply_param->size > 0) {
> +			r = rte_zmalloc("region", sizeof(struct memif_region), 0);
> +			if (r == NULL) {
> +				MIF_LOG(ERR, "Failed to alloc memif region.");
> +				free(reply);
> +				return -ENOMEM;
> +			}
> +			r->region_size = reply_param->size;
> +			if (reply->num_fds < 1) {
> +				MIF_LOG(ERR, "Missing file descriptor.");
> +				free(reply);
> +				return -1;
> +			}
> +			r->fd = reply->fds[0];
> +			r->addr = NULL;
> +
> +			proc_private->regions[reply_param->idx] = r;
> +			proc_private->regions_num++;
> +		}
> +		free(reply);
> +	}
> +
> +	return memif_connect(dev);
> +}
> +

On the multiprocess/IPC part,

Acked-by: Anatoly Burakov

Please bear in mind that I did not look at other sections of the code.

--
Thanks,
Anatoly
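
A note on the IPC flow, since the registration side is not part of the quoted hunk: the two functions above are tied together by the MEMIF_MP_SEND_REGION action name. The secondary's memif_mp_request_regions() sends one synchronous request per region index, and the primary's memif_mp_send_region() answers each request with the region size and its file descriptor. For the primary to receive those requests, the handler has to be registered with EAL IPC somewhere on the primary's side. The sketch below shows that wiring under stated assumptions: the helper name memif_mp_register_sketch and its call site are illustrative and not taken from the patch, while MEMIF_MP_SEND_REGION and memif_mp_send_region() are the driver symbols quoted above.

#include <rte_eal.h>	/* rte_eal_process_type(), rte_mp_action_register() */

/*
 * Illustrative wiring only; not part of the quoted patch. It assumes the
 * driver's MEMIF_MP_SEND_REGION name and the memif_mp_send_region() handler
 * shown above are visible in this translation unit.
 */
static int
memif_mp_register_sketch(void)
{
	/* Only the primary owns the region table, so only it answers. */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
		return 0;

	/*
	 * The action name is the link between the processes: the secondary
	 * copies MEMIF_MP_SEND_REGION into msg.name in
	 * memif_mp_request_regions(), and EAL IPC dispatches each such
	 * request to the handler registered here.
	 */
	return rte_mp_action_register(MEMIF_MP_SEND_REGION,
				      memif_mp_send_region);
}

With a registration like this in place on the primary, the quoted loop in the secondary simply walks the region indices; the 5-second timeout passed to rte_mp_request_sync() bounds how long each request waits if the primary does not reply.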