* Re: [dpdk-dev] [PATCH v2 1/1] net/hinic: use mutex replace spin lock
@ 2019-07-09 12:20 Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions)
0 siblings, 0 replies; 4+ messages in thread
From: Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions) @ 2019-07-09 12:20 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Wangxiaoyun (Cloud, Network Chip Application Development Dept),
Luoxianjun, Tanya Brokhman
> On 7/5/2019 7:47 AM, Ziyang Xuan wrote:
> > A spin lock is used to protect the critical resources of sending mgmt
> > messages. This causes high CPU usage in rte_delay_ms when mgmt
> > messages are sent frequently. We can use a mutex to protect the
> > critical resources and usleep to reduce CPU usage while keeping the
> > driver functioning properly.
> >
> > Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
>
> <...>
>
> > +static inline int hinic_mutex_init(pthread_mutex_t *pthreadmutex,
> > + const pthread_mutexattr_t *mattr)
> > +{
> > + int err;
> > +
> > + err = pthread_mutex_init(pthreadmutex, mattr);
> > + if (unlikely(err))
> > + PMD_DRV_LOG(ERR, "Fail to initialize mutex, error: %d", err);
> > +
> > + return err;
> > +}
> > +
> > +static inline int hinic_mutex_destroy(pthread_mutex_t *pthreadmutex)
> > +{
> > + int err;
> > +
> > + err = pthread_mutex_destroy(pthreadmutex);
> > + if (unlikely(err))
> > + PMD_DRV_LOG(ERR, "Fail to destroy mutex, error: %d", err);
> > +
> > + return err;
> > +}
> > +
>
> There was a comment from Stephen to use the pthread APIs directly; can
> you please comment on that?
I have already replied to him.
>
>
> > @@ -713,7 +718,7 @@ int hinic_aeq_poll_msg(struct hinic_eq *eq, u32
> timeout, void *param)
> > }
> >
> > if (timeout != 0)
> > - rte_delay_ms(1);
> > + usleep(1000);
>
> Why is this change required? Aren't these the same?
The function rte_delay_ms busy-waits, while usleep yields the CPU to the
scheduler. We get high CPU usage with rte_delay_ms but not with usleep.
That is the purpose of this patch.
Thanks!
^ permalink raw reply [flat|nested] 4+ messages in thread
* [dpdk-dev] [PATCH v2 1/1] net/hinic: use mutex replace spin lock
@ 2019-07-05 6:47 Ziyang Xuan
2019-07-08 18:15 ` Ferruh Yigit
2019-07-10 8:06 ` Ferruh Yigit
0 siblings, 2 replies; 4+ messages in thread
From: Ziyang Xuan @ 2019-07-05 6:47 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, cloud.wangxiaoyun, luoxianjun, tanya.brokhman, Ziyang Xuan
A spin lock is used to protect the critical
resources of sending mgmt messages. This causes
high CPU usage in rte_delay_ms when mgmt
messages are sent frequently. We can use a mutex
to protect the critical resources and usleep to
reduce CPU usage while keeping the driver
functioning properly.
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
---
drivers/net/hinic/Makefile | 1 +
drivers/net/hinic/base/hinic_compat.h | 25 +++++++++++++++++++++++++
drivers/net/hinic/base/hinic_pmd_mgmt.c | 19 ++++++++++++-------
drivers/net/hinic/base/hinic_pmd_mgmt.h | 6 ++----
4 files changed, 40 insertions(+), 11 deletions(-)
diff --git a/drivers/net/hinic/Makefile b/drivers/net/hinic/Makefile
index 123a626..42b4a78 100644
--- a/drivers/net/hinic/Makefile
+++ b/drivers/net/hinic/Makefile
@@ -20,6 +20,7 @@ endif
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev -lrte_net -lrte_hash
LDLIBS += -lrte_bus_pci
+LDLIBS += -lpthread
EXPORT_MAP := rte_pmd_hinic_version.map
diff --git a/drivers/net/hinic/base/hinic_compat.h b/drivers/net/hinic/base/hinic_compat.h
index 48643c8..f599947 100644
--- a/drivers/net/hinic/base/hinic_compat.h
+++ b/drivers/net/hinic/base/hinic_compat.h
@@ -7,6 +7,8 @@
#include <stdint.h>
#include <sys/time.h>
+#include <unistd.h>
+#include <pthread.h>
#include <rte_common.h>
#include <rte_byteorder.h>
#include <rte_memzone.h>
@@ -253,4 +255,27 @@ static inline void hinic_be32_to_cpu(void *data, u32 len)
}
}
+static inline int hinic_mutex_init(pthread_mutex_t *pthreadmutex,
+ const pthread_mutexattr_t *mattr)
+{
+ int err;
+
+ err = pthread_mutex_init(pthreadmutex, mattr);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Fail to initialize mutex, error: %d", err);
+
+ return err;
+}
+
+static inline int hinic_mutex_destroy(pthread_mutex_t *pthreadmutex)
+{
+ int err;
+
+ err = pthread_mutex_destroy(pthreadmutex);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Fail to destroy mutex, error: %d", err);
+
+ return err;
+}
+
#endif /* _HINIC_COMPAT_H_ */
diff --git a/drivers/net/hinic/base/hinic_pmd_mgmt.c b/drivers/net/hinic/base/hinic_pmd_mgmt.c
index bc18765..a18e567 100644
--- a/drivers/net/hinic/base/hinic_pmd_mgmt.c
+++ b/drivers/net/hinic/base/hinic_pmd_mgmt.c
@@ -342,8 +342,9 @@ static int hinic_pf_to_mgmt_init(struct hinic_hwdev *hwdev)
hwdev->pf_to_mgmt = pf_to_mgmt;
pf_to_mgmt->hwdev = hwdev;
- spin_lock_init(&pf_to_mgmt->async_msg_lock);
- spin_lock_init(&pf_to_mgmt->sync_msg_lock);
+ err = hinic_mutex_init(&pf_to_mgmt->sync_msg_lock, NULL);
+ if (err)
+ goto mutex_init_err;
err = alloc_msg_buf(pf_to_mgmt);
if (err) {
@@ -363,6 +364,9 @@ static int hinic_pf_to_mgmt_init(struct hinic_hwdev *hwdev)
free_msg_buf(pf_to_mgmt);
alloc_msg_buf_err:
+ hinic_mutex_destroy(&pf_to_mgmt->sync_msg_lock);
+
+mutex_init_err:
kfree(pf_to_mgmt);
return err;
@@ -378,6 +382,7 @@ static void hinic_pf_to_mgmt_free(struct hinic_hwdev *hwdev)
hinic_api_cmd_free(pf_to_mgmt->cmd_chain);
free_msg_buf(pf_to_mgmt);
+ hinic_mutex_destroy(&pf_to_mgmt->sync_msg_lock);
kfree(pf_to_mgmt);
}
@@ -391,7 +396,7 @@ static void hinic_pf_to_mgmt_free(struct hinic_hwdev *hwdev)
u32 timeo;
int err, i;
- spin_lock(&pf_to_mgmt->sync_msg_lock);
+ pthread_mutex_lock(&pf_to_mgmt->sync_msg_lock);
SYNC_MSG_ID_INC(pf_to_mgmt);
recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
@@ -450,7 +455,7 @@ static void hinic_pf_to_mgmt_free(struct hinic_hwdev *hwdev)
unlock_sync_msg:
if (err && out_size)
*out_size = 0;
- spin_unlock(&pf_to_mgmt->sync_msg_lock);
+ pthread_mutex_unlock(&pf_to_mgmt->sync_msg_lock);
return err;
}
@@ -497,13 +502,13 @@ int hinic_msg_to_mgmt_no_ack(void *hwdev, enum hinic_mod_type mod, u8 cmd,
return err;
}
- spin_lock(&pf_to_mgmt->sync_msg_lock);
+ pthread_mutex_lock(&pf_to_mgmt->sync_msg_lock);
err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
HINIC_MSG_NO_ACK, HINIC_MSG_DIRECT_SEND,
MSG_NO_RESP);
- spin_unlock(&pf_to_mgmt->sync_msg_lock);
+ pthread_mutex_unlock(&pf_to_mgmt->sync_msg_lock);
return err;
}
@@ -713,7 +718,7 @@ int hinic_aeq_poll_msg(struct hinic_eq *eq, u32 timeout, void *param)
}
if (timeout != 0)
- rte_delay_ms(1);
+ usleep(1000);
} while (time_before(jiffies, end));
if (err != HINIC_OK) /*poll time out*/
diff --git a/drivers/net/hinic/base/hinic_pmd_mgmt.h b/drivers/net/hinic/base/hinic_pmd_mgmt.h
index 23951cb..7804708 100644
--- a/drivers/net/hinic/base/hinic_pmd_mgmt.h
+++ b/drivers/net/hinic/base/hinic_pmd_mgmt.h
@@ -81,10 +81,8 @@ enum comm_pf_to_mgmt_event_state {
struct hinic_msg_pf_to_mgmt {
struct hinic_hwdev *hwdev;
- /* Async cmd can not be scheduling */
- spinlock_t async_msg_lock;
- /* spinlock for sync message */
- spinlock_t sync_msg_lock;
+ /* mutex for sync message */
+ pthread_mutex_t sync_msg_lock;
void *async_msg_buf;
void *sync_msg_buf;
--
1.8.3.1
* Re: [dpdk-dev] [PATCH v2 1/1] net/hinic: use mutex replace spin lock
2019-07-05 6:47 Ziyang Xuan
@ 2019-07-08 18:15 ` Ferruh Yigit
2019-07-10 8:06 ` Ferruh Yigit
1 sibling, 0 replies; 4+ messages in thread
From: Ferruh Yigit @ 2019-07-08 18:15 UTC (permalink / raw)
To: Ziyang Xuan, dev; +Cc: cloud.wangxiaoyun, luoxianjun, tanya.brokhman
On 7/5/2019 7:47 AM, Ziyang Xuan wrote:
> A spin lock is used to protect the critical
> resources of sending mgmt messages. This causes
> high CPU usage in rte_delay_ms when mgmt
> messages are sent frequently. We can use a mutex
> to protect the critical resources and usleep to
> reduce CPU usage while keeping the driver
> functioning properly.
>
> Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
<...>
> +static inline int hinic_mutex_init(pthread_mutex_t *pthreadmutex,
> + const pthread_mutexattr_t *mattr)
> +{
> + int err;
> +
> + err = pthread_mutex_init(pthreadmutex, mattr);
> + if (unlikely(err))
> + PMD_DRV_LOG(ERR, "Fail to initialize mutex, error: %d", err);
> +
> + return err;
> +}
> +
> +static inline int hinic_mutex_destroy(pthread_mutex_t *pthreadmutex)
> +{
> + int err;
> +
> + err = pthread_mutex_destroy(pthreadmutex);
> + if (unlikely(err))
> + PMD_DRV_LOG(ERR, "Fail to destroy mutex, error: %d", err);
> +
> + return err;
> +}
> +
There was a comment from Stephen to use the pthread APIs directly; can you
please comment on that?
> @@ -713,7 +718,7 @@ int hinic_aeq_poll_msg(struct hinic_eq *eq, u32 timeout, void *param)
> }
>
> if (timeout != 0)
> - rte_delay_ms(1);
> + usleep(1000);
Why is this change required? Aren't these the same?
* Re: [dpdk-dev] [PATCH v2 1/1] net/hinic: use mutex replace spin lock
2019-07-05 6:47 Ziyang Xuan
2019-07-08 18:15 ` Ferruh Yigit
@ 2019-07-10 8:06 ` Ferruh Yigit
1 sibling, 0 replies; 4+ messages in thread
From: Ferruh Yigit @ 2019-07-10 8:06 UTC (permalink / raw)
To: Ziyang Xuan, dev; +Cc: cloud.wangxiaoyun, luoxianjun, tanya.brokhman
On 7/5/2019 7:47 AM, Ziyang Xuan wrote:
> A spin lock is used to protect the critical
> resources of sending mgmt messages. This causes
> high CPU usage in rte_delay_ms when mgmt
> messages are sent frequently. We can use a mutex
> to protect the critical resources and usleep to
> reduce CPU usage while keeping the driver
> functioning properly.
>
> Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
Applied to dpdk-next-net/master, thanks.