From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vamsi Attunuru
To: dev@dpdk.org
CC: Vamsi Attunuru, Nithin Dabilpuram
Date: Thu, 1 Aug 2019 23:59:28 +0530
Message-ID: <20190801182928.26216-1-vattunuru@marvell.com>
X-Mailer: git-send-email 2.8.4
Subject: [dpdk-dev] [PATCH v1 1/1] common/octeontx2: fix unaligned mbox memory accesses
List-Id: DPDK patches and discussions

From: Vamsi Attunuru

The octeontx2 PMD's mailbox client uses HW memory to send messages to
the mailbox server in the admin function (AF) Linux kernel driver. The
device memory used for this mailbox communication needs to be qualified
as volatile to keep the compiler from coalescing adjacent accesses into
wider, unaligned device memory accesses.

This patch marks the mailbox requests and responses as volatile; they
were previously non-volatile and were accessed from unaligned memory
addresses.

Fixes: 2b71657c8660 ("common/octeontx2: add mbox request and response ")

Signed-off-by: Vamsi Attunuru
Signed-off-by: Nithin Dabilpuram
---
 drivers/common/octeontx2/otx2_mbox.h           | 12 ++++++------
 drivers/mempool/octeontx2/otx2_mempool_debug.c |  4 ++--
 drivers/mempool/octeontx2/otx2_mempool_ops.c   |  6 +++---
 drivers/net/octeontx2/otx2_ethdev_debug.c      |  6 +++---
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
index b2c59c8..ceec406 100644
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ b/drivers/common/octeontx2/otx2_mbox.h
@@ -547,7 +547,7 @@ struct npa_aq_enq_req {
 	uint32_t __otx2_io aura_id;
 	uint8_t __otx2_io ctype;
 	uint8_t __otx2_io op;
-	union {
+	__otx2_io union {
 		/* Valid when op == WRITE/INIT and ctype == AURA.
		 * LF fills the pool_id in aura.pool_addr. AF will translate
		 * the pool_id to pool context pointer.
@@ -557,7 +557,7 @@ struct npa_aq_enq_req {
 		struct npa_pool_s pool;
 	};
 	/* Mask data when op == WRITE (1=write, 0=don't write) */
-	union {
+	__otx2_io union {
 		/* Valid when op == WRITE and ctype == AURA */
 		struct npa_aura_s aura_mask;
 		/* Valid when op == WRITE and ctype == POOL */
@@ -567,7 +567,7 @@ struct npa_aq_enq_req {
 
 struct npa_aq_enq_rsp {
 	struct mbox_msghdr hdr;
-	union {
+	__otx2_io union {
 		/* Valid when op == READ and ctype == AURA */
 		struct npa_aura_s aura;
 		/* Valid when op == READ and ctype == POOL */
@@ -653,7 +653,7 @@ struct nix_aq_enq_req {
 	uint32_t __otx2_io qidx;
 	uint8_t __otx2_io ctype;
 	uint8_t __otx2_io op;
-	union {
+	__otx2_io union {
 		/* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
 		struct nix_rq_ctx_s rq;
 		/* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
@@ -666,7 +666,7 @@ struct nix_aq_enq_req {
 		struct nix_rx_mce_s mce;
 	};
 	/* Mask data when op == WRITE (1=write, 0=don't write) */
-	union {
+	__otx2_io union {
 		/* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
 		struct nix_rq_ctx_s rq_mask;
 		/* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
@@ -682,7 +682,7 @@ struct nix_aq_enq_req {
 
 struct nix_aq_enq_rsp {
 	struct mbox_msghdr hdr;
-	union {
+	__otx2_io union {
 		struct nix_rq_ctx_s rq;
 		struct nix_sq_ctx_s sq;
 		struct nix_cq_ctx_s cq;
diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c
index eef61ef..4d40fde 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_debug.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_debug.c
@@ -7,7 +7,7 @@
 #define npa_dump(fmt, ...) \
	fprintf(stderr, fmt "\n", ##__VA_ARGS__)
 
 static inline void
-npa_lf_pool_dump(struct npa_pool_s *pool)
+npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool)
 {
 	npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base);
 	npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
@@ -45,7 +45,7 @@ npa_lf_pool_dump(struct npa_pool_s *pool)
 }
 
 static inline void
-npa_lf_aura_dump(struct npa_aura_s *aura)
+npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura)
 {
 	npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr);
 
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index ff63be5..f5a4fe3 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -355,14 +355,14 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
 	aura_init_req->aura_id = aura_id;
 	aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
 	aura_init_req->op = NPA_AQ_INSTOP_INIT;
-	memcpy(&aura_init_req->aura, aura, sizeof(*aura));
+	otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
 
 	pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
 	pool_init_req->aura_id = aura_id;
 	pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
 	pool_init_req->op = NPA_AQ_INSTOP_INIT;
-	memcpy(&pool_init_req->pool, pool, sizeof(*pool));
+	otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
 
 	otx2_mbox_msg_send(mbox, 0);
 	rc = otx2_mbox_wait_for_rsp(mbox, 0);
@@ -605,9 +605,9 @@ npa_lf_aura_range_update_check(uint64_t aura_handle)
 	uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
 	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
 	struct npa_aura_lim *lim = lf->aura_lim;
+	__otx2_io struct npa_pool_s *pool;
 	struct npa_aq_enq_req *req;
 	struct npa_aq_enq_rsp *rsp;
-	struct npa_pool_s *pool;
 	int rc;
 
 	req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 9f06e55..c8b4cd5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -235,7 +235,7 @@ otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
 }
 
 static inline void
-nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
+nix_lf_sq_dump(__otx2_io struct nix_sq_ctx_s *ctx)
 {
 	nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
 		 ctx->sqe_way_mask, ctx->cq);
@@ -295,7 +295,7 @@ nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
 }
 
 static inline void
-nix_lf_rq_dump(struct nix_rq_ctx_s *ctx)
+nix_lf_rq_dump(__otx2_io struct nix_rq_ctx_s *ctx)
 {
 	nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
 		 ctx->wqe_aura, ctx->substream);
@@ -355,7 +355,7 @@ nix_lf_rq_dump(struct nix_rq_ctx_s *ctx)
 }
 
 static inline void
-nix_lf_cq_dump(struct nix_cq_ctx_s *ctx)
+nix_lf_cq_dump(__otx2_io struct nix_cq_ctx_s *ctx)
 {
 	nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
-- 
2.8.4