Interface Specific Socket/Protocol Buffers (ISPB)

IP.com Disclosure Number: IPCOM000244677D
Publication Date: 2016-Jan-06
Document File: 2 page(s) / 24K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method named Interface Specific Socket/Protocol Buffers (ISPB) to reduce CPU utilization during TCP/IP network I/O.

Interface Specific Socket/Protocol Buffers (ISPB)

The biggest CPU cost during network I/O is copying data from user-space buffers into kernel (socket) buffers, and then copying the socket buffers into buffers that were already Direct Memory Access (DMA) mapped at the Network Interface Card (NIC) driver level. DMA mapping a buffer on the fly is also costly and should be avoided if the buffer/mbuf is not reused.

In the existing Interface Specific Buffers (ISB) implementation, the NIC driver allocates and DMA maps buffers/mbufs on behalf of the socket layer.

The NIC driver allocates memory pools of different buffer sizes and then DMA maps the buffers at driver initialization time.
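
The following is a minimal C sketch of this initialization step, not the actual AIX NIC driver code; the names (isb_pool, isb_buf, dma_map) are illustrative assumptions, and dma_map is only an identity stub standing in for the platform DMA-mapping service. It shows a fixed number of buffers being allocated and mapped once, up front, so they can be reused without on-the-fly mapping.

    /* Hypothetical sketch only -- not the actual driver code. */
    #include <stdlib.h>
    #include <stdint.h>
    #include <stddef.h>

    struct isb_buf {
        void     *vaddr;     /* kernel virtual address the socket layer copies into */
        uint64_t  bus_addr;  /* bus/physical address handed to the adapter          */
        size_t    size;
        int       in_use;
    };

    struct isb_pool {
        size_t          buf_size;  /* e.g. 2K, 16K, 32K or 64K         */
        int             nbufs;     /* fixed count, chosen at init time */
        struct isb_buf *bufs;
    };

    static uint64_t dma_map(void *vaddr, size_t size)   /* stand-in for real DMA mapping */
    {
        (void)size;
        return (uint64_t)(uintptr_t)vaddr;
    }

    /* Called once from driver initialization: allocate and DMA map every buffer
       up front so the transmit path never has to map buffers on the fly. */
    static int isb_pool_init(struct isb_pool *pool, size_t buf_size, int nbufs)
    {
        pool->buf_size = buf_size;
        pool->nbufs    = nbufs;
        pool->bufs     = calloc((size_t)nbufs, sizeof(*pool->bufs));
        if (pool->bufs == NULL)
            return -1;

        for (int i = 0; i < nbufs; i++) {
            struct isb_buf *b = &pool->bufs[i];
            b->vaddr = malloc(buf_size);
            if (b->vaddr == NULL)
                return -1;
            b->size     = buf_size;
            b->bus_addr = dma_map(b->vaddr, buf_size);  /* mapped once, reused afterwards */
            b->in_use   = 0;
        }
        return 0;
    }

    int main(void)
    {
        struct isb_pool small_pool, large_pool;
        isb_pool_init(&small_pool, 2048, 256);   /* many small buffers            */
        isb_pool_init(&large_pool, 65536, 32);   /* fewer 64K buffers to save RAM */
        return 0;
    }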

When the socket layer needs to copy data from user space to the kernel without the ISB concept, it allocates buffers from the system pool, and those buffers are not DMA mapped. On the transmit side, the data in these non-DMA-mapped buffers provided by the socket layer must be copied by the NIC driver into its own buffers, which are already DMA mapped and whose bus/physical addresses can be given to the adapter to send the data out.

With the ISB concept, the socket layer makes an IOCTL call to the NIC driver, and the NIC driver returns a buffer that is already DMA mapped. The socket layer copies data directly into this DMA-mapped buffer, so the NIC driver can provide the buffer's physical address to the adapter directly.

Thus, with ISB, one buffer copy is avoided at the NIC driver level.
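
To make the difference in copy counts concrete, below is a hedged, user-space C sketch of the two transmit paths. The helper names (nic_ioctl_get_isb_buf, nic_private_dma_buf, nic_post_tx) are hypothetical stand-ins for the real IOCTL and driver services, stubbed here so the example is self-contained.

    /* Hypothetical sketch of the two transmit paths described above. */
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    struct isb_buf {
        void    *vaddr;     /* pre-DMA-mapped kernel buffer               */
        uint64_t bus_addr;  /* bus/physical address the adapter DMAs from */
        size_t   size;
    };

    static struct isb_buf *nic_ioctl_get_isb_buf(size_t len)   /* models the ISB IOCTL */
    {
        struct isb_buf *b = malloc(sizeof(*b));
        b->vaddr    = malloc(len);
        b->bus_addr = (uint64_t)(uintptr_t)b->vaddr;  /* pretend it is already mapped */
        b->size     = len;
        return b;
    }

    static void *nic_private_dma_buf(size_t len, uint64_t *bus_addr)
    {
        void *p = malloc(len);
        *bus_addr = (uint64_t)(uintptr_t)p;
        return p;
    }

    static void nic_post_tx(uint64_t bus_addr, size_t len)     /* hand frame to adapter */
    {
        (void)bus_addr;
        (void)len;
    }

    /* Without ISB: two copies before the adapter can DMA the data out. */
    static void xmit_without_isb(const void *user_data, size_t len)
    {
        void *sock_buf = malloc(len);            /* system pool buffer, not DMA mapped  */
        memcpy(sock_buf, user_data, len);        /* copy 1: user space -> socket buffer */

        uint64_t bus_addr;
        void *drv_buf = nic_private_dma_buf(len, &bus_addr);
        memcpy(drv_buf, sock_buf, len);          /* copy 2: socket buffer -> DMA buffer */
        nic_post_tx(bus_addr, len);
        free(sock_buf);
    }

    /* With ISB: the socket layer copies once, straight into a pre-mapped buffer. */
    static void xmit_with_isb(const void *user_data, size_t len)
    {
        struct isb_buf *b = nic_ioctl_get_isb_buf(len);  /* buffer is already DMA mapped */
        memcpy(b->vaddr, user_data, len);                /* the only copy                */
        nic_post_tx(b->bus_addr, len);                   /* physical address goes straight to the adapter */
    }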

A disadvantage of the ISB concept is that the NIC driver locks a large amount of memory even if a port is merely configured (ifconfig/mktcpip) and not used.

Another issue is that, when there is heavy TCP load on the ports, buffers of the required size can be exhausted quickly, because the number of buffers is constant/limited and is configured during NIC driver initialization.

With high TCP load, the chance of exhausting the ISB buffers is high, because data already sent and sitting in socket buffers is not dropped until an acknowledgement is received. Exhaustion forces the use of system buffers (non-DMA mapped), which results in a performance loss.
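
A small C sketch of that fallback, assumed for illustration only, follows; isb_pool_get and the tx_buf structure are hypothetical names, but the flow mirrors the described behavior: try the pre-mapped ISB pool first, and fall back to a non-DMA-mapped system buffer when the pool is exhausted.

    /* Hypothetical sketch of the exhaustion fallback path. */
    #include <stdlib.h>
    #include <stddef.h>

    struct tx_buf {
        void *vaddr;
        int   dma_mapped;   /* 1 if taken from the pre-mapped ISB pool */
    };

    /* Stub: returns NULL when every pre-mapped buffer of this size is still
       held by unacknowledged TCP data (pool exhausted). */
    static void *isb_pool_get(size_t len)
    {
        (void)len;
        return NULL;        /* simulate an exhausted pool in this sketch */
    }

    /* Prefer an ISB buffer; under sustained load, fall back to a plain system
       buffer and accept the extra copy and mapping cost later in the driver. */
    static struct tx_buf get_tx_buf(size_t len)
    {
        struct tx_buf b;
        b.vaddr      = isb_pool_get(len);
        b.dma_mapped = (b.vaddr != NULL);
        if (!b.dma_mapped)
            b.vaddr = malloc(len);   /* non-DMA-mapped system buffer: slower path */
        return b;
    }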

When there is a lot of TCP bulk transfer or large send offload activity, there is always a chance that the NIC driver's mapped large buffers (32K and 64K) will be exhausted, because the larger the buffer, the fewer are allocated and mapped at the NIC driver level in order to save memory.

So there is a trade-off between the memory locked by the NIC driver (the number of buffers) and performance.