On 12/12/2023 14:25, Koikkara Reeny, Shibin wrote:
Thank you Maryam for updating the document.

I have added some comments below.
Also, what do you think about changing the name of the document file "af_xdp_cni.rst" to "af_xdp_dp.rst"?

Yes - I can update.

<snip>

 Background
 ----------

-The standard :doc:`../nics/af_xdp` initialization process involves loading an eBPF program
-onto the kernel netdev to be used by the PMD.
-This operation requires root or escalated Linux privileges
-and thus prevents the PMD from working in an unprivileged container.
-The AF_XDP CNI plugin handles this situation
-by providing a device plugin that performs the program loading.
-
-At a technical level the CNI opens a Unix Domain Socket and listens for a client
-to make requests over that socket.
-A DPDK application acting as a client connects and initiates a configuration "handshake".
-The client then receives a file descriptor which points to the XSKMAP
-associated with the loaded eBPF program.
-The XSKMAP is a BPF map of AF_XDP sockets (XSK).
-The client can then proceed with creating an AF_XDP socket
-and inserting that socket into the XSKMAP pointed to by the descriptor.
-
-The EAL vdev argument ``use_cni`` is used to indicate that the user wishes
-to run the PMD in unprivileged mode and to receive the XSKMAP file descriptor
-from the CNI.
-When this flag is set,
-the ``XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD`` libbpf flag
-should be used when creating the socket
-to instruct libbpf not to load the default libbpf program on the netdev.
-Instead the loading is handled by the CNI.
+The standard :doc:`../nics/af_xdp` initialization process involves
+loading an eBPF program onto the kernel netdev to be used by the PMD.
+This operation requires root or escalated Linux privileges and prevents
+the PMD from working in an unprivileged container. The AF_XDP Device
+Plugin addresses this situation by providing an entity that manages
+the eBPF program lifecycle for Pod interfaces that wish to use AF_XDP,
+which in turn allows the Pod to be used without privilege escalation.
+
+In order for the pod to run without privilege escalation, the AF_XDP DP
It would be good to add that DP is an abbreviation for Device Plugin.


I will add the abbreviation further up.




+creates a Unix Domain Socket (UDS) and listens for Pods to make
+requests for XSKMAP(s) File Descriptors (FDs) for interfaces in their
+network namespace.
+In other words, the DPDK application running in the Pod connects to
+this UDS and initiates a "handshake" to retrieve the XSKMAP(s) FD(s).
+Upon a successful "handshake", the DPDK application receives the FD(s)
+for the XSKMAP(s) associated with the relevant netdevs. The DPDK
+application can then create the AF_XDP socket(s), and attach the socket(s)
+to the netdev queue(s) by inserting the socket(s) into the XSKMAP(s).
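
For illustration, here is a minimal C sketch of that client flow, assuming the FD arrives as SCM_RIGHTS ancillary data over the UDS. The handshake messages are elided, and the UDS path, netdev and queue index are placeholder values rather than the AF_XDP DP's actual protocol:

.. code-block:: c

   /* Hypothetical sketch: the real handshake strings are defined by the
    * AF_XDP DP; the UDS path, netdev and queue index are examples. */
   #include <bpf/bpf.h>
   #include <linux/types.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <sys/un.h>
   #include <unistd.h>

   /* Receive a file descriptor passed as SCM_RIGHTS ancillary data. */
   static int recv_fd(int sock)
   {
           char buf[64];
           struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
           union {
                   struct cmsghdr align;
                   char ctrl[CMSG_SPACE(sizeof(int))];
           } u;
           struct msghdr msg = {
                   .msg_iov = &iov, .msg_iovlen = 1,
                   .msg_control = u.ctrl, .msg_controllen = sizeof(u.ctrl),
           };
           struct cmsghdr *cmsg;
           int fd;

           if (recvmsg(sock, &msg, 0) < 0)
                   return -1;
           cmsg = CMSG_FIRSTHDR(&msg);
           if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
                   return -1;
           memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
           return fd;
   }

   int main(void)
   {
           struct sockaddr_un addr = { .sun_family = AF_UNIX };
           __u32 queue_idx = 0;    /* netdev queue served by this XSK */
           int sock, xskmap_fd, xsk_fd;

           strcpy(addr.sun_path, "/tmp/afxdp_dp/ens785f0/afxdp.sock");
           sock = socket(AF_UNIX, SOCK_STREAM, 0);
           connect(sock, (struct sockaddr *)&addr, sizeof(addr));

           /* ... request/response handshake with the AF_XDP DP elided ... */

           xskmap_fd = recv_fd(sock);  /* FD pointing to the XSKMAP */
           xsk_fd = -1;                /* FD of an AF_XDP socket created earlier */
           bpf_map_update_elem(xskmap_fd, &queue_idx, &xsk_fd, 0);
           close(sock);
           return 0;
   }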
+
+The EAL vdev argument ``uds_path`` is used to indicate that the user
+wishes to run the AF_XDP PMD in unprivileged mode and to receive the
+XSKMAP FD from the AF_XDP DP. When this param is used, the
+``XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD`` libbpf flag is used when
+creating the AF_XDP socket to instruct libbpf/libxdp not to load the
+default eBPF redirect program for AF_XDP on the netdev. Instead the
+lifecycle management of the eBPF program is handled by the AF_XDP DP.
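
As an aside, a minimal C sketch of what setting that flag looks like, assuming the xsk.h API that ships with libbpf (libxdp provides the equivalent ``XSK_LIBXDP_FLAGS__INHIBIT_PROG_LOAD``); UMEM setup and error handling are elided:

.. code-block:: c

   /* Sketch assuming the xsk.h API shipped with libbpf; libxdp's
    * equivalent flag is XSK_LIBXDP_FLAGS__INHIBIT_PROG_LOAD.
    * UMEM setup and error handling are elided. */
   #include <bpf/xsk.h>

   int create_xsk(const char *ifname, __u32 queue, struct xsk_umem *umem,
                  struct xsk_socket **xsk,
                  struct xsk_ring_cons *rx, struct xsk_ring_prod *tx)
   {
           const struct xsk_socket_config cfg = {
                   .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
                   .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
                   /* Skip loading the default redirect program;
                    * the AF_XDP DP owns the eBPF program lifecycle. */
                   .libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD,
           };

           return xsk_socket__create(xsk, ifname, queue, umem, rx, tx, &cfg);
   }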

 .. note::

-   The Unix Domain Socket file path appear in the end user is "/tmp/afxdp.sock".
-
+   The UDS file path inside the pod appears at "/tmp/afxdp_dp/<netdev>/afxdp.sock".

Initially the 'Note' was created since it was not explicitly known to the user where the sock was created inside the Pod. Now, since we are passing it as an argument, you can remove it if you want.

I think this is fine. It highlights the path explicitly.



 Prerequisites
 -------------

-Docker and container prerequisites:
-
-* Set up the device plugin
-  as described in the instructions for `AF_XDP Plugin for Kubernetes`_.
-
-* The Docker image should contain the libbpf and libxdp libraries,
-  which are dependencies for AF_XDP,
-  and should include support for the ``ethtool`` command.
+Device Plugin and DPDK container prerequisites:
+* Create a DPDK container image.
Formatting is needed here. It gets displayed as:
"Device Plugin and DPDK container prerequisites: * Create a DPDK container image."

I will update.

<snip>

-     CNI plugin has a dependence on the config.json.
+   .. _daemonset.yml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/deployments/daemonset.yml

-  Sample Config.json
+  .. code-block:: yaml

-  .. code-block:: json
+    image: afxdp-device-plugin:latest

-     {
-        "logLevel":"debug",
-        "logFile":"afxdp-dp-e2e.log",
-        "pools":[
-           {
-              "name":"e2e",
-              "mode":"primary",
-              "timeout":30,
-              "ethtoolCmds" : ["-L -device- combined 1"],
-              "devices":[
-                 {
-                    "name":"ens785f0"
-                 }
-              ]
-           }
-        ]
-     }
+  .. note::
"Config.json" is removed. Is it because "ethtoolCmds" is moved to the "nad.yaml"?
What about the "drivers or devices" ?


It's not needed, as the appropriate file to use is https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/deployments/daemonset.yml, as updated in the notes. And there's also plenty of configuration documentation in the AF_XDP DP project; I don't think it should be duplicated here.

<snip>


   .. note::

-     All the files that need to COPY-ed should be in the same directory as the Dockerfile
+    Ensure the Dockerfile is placed in the top level DPDK directory.
Do you mean the Dockerfile should be in the same directory where the "DPDK directory" is?


The Dockerfile should be placed in the top level DPDK directory.




 * Run the Pod

@@ -205,49 +226,52 @@ Howto run dpdk-testpmd with CNI plugin:

   .. code-block:: yaml

-     apiVersion: v1
-     kind: Pod
-     metadata:
-       name: afxdp-e2e-test
-       annotations:
-         k8s.v1.cni.cncf.io/networks: afxdp-e2e-test
-     spec:
-       containers:
-       - name: afxdp
-         image: afxdp-e2e-test:latest
-         imagePullPolicy: Never
-         env:
-         - name: LD_LIBRARY_PATH
-           value: /usr/lib64/:/usr/local/lib/
-         command: ["tail", "-f", "/dev/null"]
-         securityContext:
+    apiVersion: v1
+    kind: Pod
+    metadata:
+     name: dpdk
+     annotations:
+       k8s.v1.cni.cncf.io/networks: afxdp-network
+    spec:
+      containers:
+      - name: testpmd
+        image: dpdk:latest
+        command: ["tail", "-f", "/dev/null"]
+        securityContext:
           capabilities:
-             add:
-               - CAP_NET_RAW
-               - CAP_BPF
-         resources:
-           requests:
-             hugepages-2Mi: 2Gi
-             memory: 2Gi
-             afxdp/e2e: '1'
-           limits:
-             hugepages-2Mi: 2Gi
-             memory: 2Gi
-             afxdp/e2e: '1'
+            add:
+              - NET_RAW
+              - IPC_LOCK
Should we add both NET_RAW and IPC_LOCK to the Prerequisites?


We don't need to keep duplicating info across DPDK and AF_XDP DP. It's in all the Pod examples in the AF_XDP DP already.



+        resources:
+          requests:
+            afxdp/myPool: '1'
+          limits:
+            hugepages-1Gi: 2Gi
+            cpu: 2
+            memory: 256Mi
+            afxdp/myPool: '1'
+        volumeMounts:
+        - name: hugepages
+          mountPath: /dev/hugepages
+      volumes:
+      - name: hugepages
+        emptyDir:
+          medium: HugePages

   For further reference please use the `pod.yaml`_

-  .. _pod.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/v0.0.2/test/e2e/pod-1c1d.yaml
+  .. _pod.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/examples/pod-spec.yaml

-* Run DPDK with a command like the following:
+.. note::

-  .. code-block:: console
+   For Kernel versions older than 5.19 `CAP_BPF` is also required in
+   the container capabilities stanza.

-     kubectl exec -i <Pod name> --container <containers name> -- \
-           /<Path>/dpdk-testpmd -l 0,1 --no-pci \
-           --vdev=net_af_xdp0,use_cni=1,iface=<interface name> \
-           -- --no-mlockall --in-memory
+* Run DPDK with a command like the following:

-For further reference please use the `e2e`_ test case in `AF_XDP Plugin for Kubernetes`_
+  .. code-block:: console

-  .. _e2e: https://github.com/intel/afxdp-plugins-for-kubernetes/tree/v0.0.2/test/e2e
+     kubectl exec -i dpdk --container testpmd -- \
+           ./build/app/dpdk-testpmd -l 0-2 --no-pci --main-lcore=2 \
+           --vdev net_af_xdp,iface=<interface name>,start_queue=22,queue_count=1,uds_path=/tmp/afxdp_dp/<interface name>/afxdp.sock \
+           -- -i --a --nb-cores=2 --rxq=1 --txq=1 --forward-mode=macswap;

Do you think we should add "uds_path=<af_xdp UDS path>" in the command? And after that, add a note or example that uds_path is generally of the format "/tmp/afxdp_dp/<interface name>/afxdp.sock"?

I think what's there is pretty clear and the reader doesn't need to go looking for what the <af_xdp UDS path> is somewhere else. A note would be superfluous.


QQ: regarding the uds_path argument name, do you think we should add something to show that the UDS passed here is for AF_XDP, e.g. "cni_uds_path"? In future, other features may also use a UDS and want to pass the socket path.


We already agreed on the name as of v2 of the respin. I don't think this needs a respin. I'm not aware of any other future features that want to use the UDS.

<snip>