automatic DPDK test reports
From: checkpatch@dpdk.org
To: test-report@dpdk.org
Cc: <eagostini@nvidia.com>
Subject: [dpdk-test-report] |WARNING| pw103676 [PATCH v2 1/1] gpu/cuda: introduce CUDA driver
Date: Wed,  3 Nov 2021 18:52:23 +0100 (CET)	[thread overview]
Message-ID: <20211103175223.26BEC123056@dpdk.org> (raw)
In-Reply-To: <20211104020128.13165-2-eagostini@nvidia.com>

Test-Label: checkpatch
Test-Status: WARNING
http://dpdk.org/patch/103676

_coding style issues_


WARNING:TYPO_SPELLING: 'straighforward' may be misspelled - perhaps 'straightforward'?
#165: FILE: doc/guides/gpus/cuda.rst:20:
+is quite straighforward and doesn't create any compatibility problem.

WARNING:TYPO_SPELLING: 'enviroment' may be misspelled - perhaps 'environment'?
#176: FILE: doc/guides/gpus/cuda.rst:31:
+If the CUDA driver enviroment has been already initialized, the ``cuInit(0)``

ERROR:GLOBAL_INITIALISERS: do not initialise globals to NULL
#338: FILE: drivers/gpu/cuda/cuda.c:87:
+struct mem_entry *mem_alloc_list_head = NULL;

ERROR:GLOBAL_INITIALISERS: do not initialise globals to NULL
#339: FILE: drivers/gpu/cuda/cuda.c:88:
+struct mem_entry *mem_alloc_list_tail = NULL;

ERROR:GLOBAL_INITIALISERS: do not initialise globals to 0
#340: FILE: drivers/gpu/cuda/cuda.c:89:
+uint32_t mem_alloc_list_last_elem = 0;
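The three GLOBAL_INITIALISERS errors above stem from the C guarantee that objects with static storage duration are zero-initialized, so the explicit `= NULL` / `= 0` initializers are redundant; the fix is to drop them. A minimal sketch (the struct body here is an illustrative stand-in, not the driver's actual `mem_entry` layout):

```c
#include <stddef.h>
#include <stdint.h>

struct mem_entry {
	void *ptr;
	struct mem_entry *next;
};

/* Static-storage objects are zero-initialized by the C standard,
 * so the initializers checkpatch flags can simply be removed. */
struct mem_entry *mem_alloc_list_head;	/* was: = NULL */
struct mem_entry *mem_alloc_list_tail;	/* was: = NULL */
uint32_t mem_alloc_list_last_elem;	/* was: = 0 */

int globals_are_zeroed(void)
{
	return mem_alloc_list_head == NULL &&
		mem_alloc_list_tail == NULL &&
		mem_alloc_list_last_elem == 0;
}
```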

WARNING:LONG_LINE: line length of 103 exceeds 100 columns
#361: FILE: drivers/gpu/cuda/cuda.c:110:
+		mem_alloc_list_head = rte_zmalloc(NULL, sizeof(struct mem_entry), RTE_CACHE_LINE_SIZE);

WARNING:LONG_LINE: line length of 120 exceeds 100 columns
#371: FILE: drivers/gpu/cuda/cuda.c:120:
+		struct mem_entry *mem_alloc_list_cur = rte_zmalloc(NULL, sizeof(struct mem_entry), RTE_CACHE_LINE_SIZE);

WARNING:LONG_LINE: line length of 110 exceeds 100 columns
#496: FILE: drivers/gpu/cuda/cuda.c:245:
+		dev->mpshared->dev_private = rte_zmalloc(NULL, sizeof(struct cuda_info), RTE_CACHE_LINE_SIZE);
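The LONG_LINE warnings on these `rte_zmalloc()` calls are typically resolved by breaking the argument list after a comma onto a continuation line. A sketch of the wrapping style, using a local stub in place of `rte_zmalloc()` so it compiles without DPDK headers (the stub and `alloc_entry()` are hypothetical names, and the alignment argument is ignored here):

```c
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for rte_zmalloc(): zeroed allocation; the type name and
 * alignment arguments are accepted but unused in this sketch. */
void *zmalloc_stub(const char *type, size_t size, unsigned int align)
{
	(void)type;
	(void)align;
	return calloc(1, size);
}

struct mem_entry {
	void *ptr_h;
	size_t size;
	struct mem_entry *next;
};

/* Breaking the flagged call after the second argument keeps every
 * line under the 100-column limit without changing behavior. */
struct mem_entry *alloc_entry(void)
{
	return zmalloc_stub(NULL, sizeof(struct mem_entry),
			64 /* RTE_CACHE_LINE_SIZE */);
}
```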

WARNING:LONG_LINE: line length of 105 exceeds 100 columns
#587: FILE: drivers/gpu/cuda/cuda.c:336:
+	res = cuPointerSetAttribute(&flag, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, mem_alloc_list_tail->ptr_d);

WARNING:LONG_LINE: line length of 138 exceeds 100 columns
#589: FILE: drivers/gpu/cuda/cuda.c:338:
+		rte_gpu_log(ERR, "Could not set SYNC MEMOP attribute for GPU memory at %llx , err %d\n", mem_alloc_list_tail->ptr_d, res);

WARNING:LONG_LINE: line length of 147 exceeds 100 columns
#655: FILE: drivers/gpu/cuda/cuda.c:404:
+	res = cuMemHostRegister(mem_alloc_list_tail->ptr_h, mem_alloc_list_tail->size, CU_MEMHOSTREGISTER_PORTABLE | CU_MEMHOSTREGISTER_DEVICEMAP);

WARNING:LONG_LINE: line length of 113 exceeds 100 columns
#659: FILE: drivers/gpu/cuda/cuda.c:408:
+						err_string, mem_alloc_list_tail->ptr_h, mem_alloc_list_tail->size

WARNING:LONG_LINE: line length of 132 exceeds 100 columns
#666: FILE: drivers/gpu/cuda/cuda.c:415:
+									CU_DEVICE_ATTRIBUTE_CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM,

WARNING:LONG_LINE: line length of 130 exceeds 100 columns
#667: FILE: drivers/gpu/cuda/cuda.c:416:
+									((struct cuda_info *)(dev->mpshared->dev_private))->cu_dev

WARNING:LONG_LINE: line length of 110 exceeds 100 columns
#679: FILE: drivers/gpu/cuda/cuda.c:428:
+		res = cuMemHostGetDevicePointer(&(mem_alloc_list_tail->ptr_d), mem_alloc_list_tail->ptr_h, 0);

WARNING:LONG_LINE: line length of 103 exceeds 100 columns
#687: FILE: drivers/gpu/cuda/cuda.c:436:
+		if ((uintptr_t) mem_alloc_list_tail->ptr_d != (uintptr_t) mem_alloc_list_tail->ptr_h) {

WARNING:LONG_LINE: line length of 105 exceeds 100 columns
#696: FILE: drivers/gpu/cuda/cuda.c:445:
+	res = cuPointerSetAttribute(&flag, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, mem_alloc_list_tail->ptr_d);

WARNING:LONG_LINE: line length of 138 exceeds 100 columns
#698: FILE: drivers/gpu/cuda/cuda.c:447:
+		rte_gpu_log(ERR, "Could not set SYNC MEMOP attribute for GPU memory at %llx , err %d\n", mem_alloc_list_tail->ptr_d, res);

WARNING:LONG_LINE: line length of 102 exceeds 100 columns
#781: FILE: drivers/gpu/cuda/cuda.c:530:
+			rte_gpu_log(ERR, "cuMemHostUnregister current failed with %s.
", err_string);

WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#787: FILE: drivers/gpu/cuda/cuda.c:536:
+		return mem_list_del_item(hk);
+	} else {
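The UNNECESSARY_ELSE warning applies because the `if` branch ends in a `return`: the `else` wrapper only adds indentation, and its body can be hoisted to the outer level. A sketch of the transformation (function names and the integer return convention here are illustrative stand-ins, not the driver's actual code):

```c
/* Stand-in for the real deletion routine. */
int mem_list_del_item_stub(int key)
{
	return key;
}

/* When the if-branch returns, the remaining code needs no else:
 * control only reaches it in the other case anyway. */
int free_entry(int key, int is_gpu_mem)
{
	if (is_gpu_mem)
		return mem_list_del_item_stub(key);

	/* formerly the else branch, now at the outer level */
	return -1;
}
```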

WARNING:LONG_LINE: line length of 108 exceeds 100 columns
#873: FILE: drivers/gpu/cuda/cuda.c:622:
+	res = cuDeviceGetAttribute(&(processor_count), CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT, cu_dev_id);

WARNING:LONG_LINE: line length of 102 exceeds 100 columns
#898: FILE: drivers/gpu/cuda/cuda.c:647:
+	dev->mpshared->dev_private = rte_zmalloc(NULL, sizeof(struct cuda_info), RTE_CACHE_LINE_SIZE);

total: 3 errors, 19 warnings, 849 lines checked
Warning in drivers/gpu/cuda/cuda.c:
Using %l format, prefer %PRI*64 if type is [u]int64_t
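The final warning exists because `%lx`-style conversions are only correct on ABIs where `long` is 64 bits; the `PRI*64` macros from `<inttypes.h>` expand to the right conversion specifier for `[u]int64_t` everywhere. A sketch (the `format_addr()` helper is a hypothetical name for illustration):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* PRIx64 expands to "lx" or "llx" as the platform requires,
 * so the format matches uint64_t on every ABI. */
int format_addr(char *buf, size_t len, uint64_t addr)
{
	return snprintf(buf, len, "addr %" PRIx64, addr);
}
```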

Thread overview: 2+ messages
     [not found] <20211104020128.13165-2-eagostini@nvidia.com>
2021-11-03 17:52 ` checkpatch [this message]
2021-11-03 18:08 [dpdk-test-report] |WARNING| pw103676 [PATCH] [v2, " dpdklab
