https://docs.nvidia.com/cuda/npp/
NVIDIA 2D Image and Signal Processing Performance Primitives (NPP) Version 10.1.1
What is NPP?
NVIDIA NPP is a library of functions for performing CUDA accelerated 2D image and signal processing. The primary set of functionality in the library focuses on image processing and is widely applicable for developers in these areas. NPP will evolve over time to encompass more of the compute-heavy tasks in a variety of problem domains. The NPP library is written to maximize flexibility, while maintaining high performance.
NPP can be used in one of two ways:
- A stand-alone library for adding GPU acceleration to an application with minimal effort. Using this route allows developers to add GPU acceleration to their applications in a matter of hours.
- A cooperative library for interoperating with a developer's GPU code efficiently.
Either route allows developers to harness the massive compute resources of NVIDIA GPUs, while simultaneously reducing development times. After reading this Main Page it is recommended that you read the General API Conventions page below and either the Image-Processing Specific API Conventions page or Signal-Processing Specific API Conventions page depending on the kind of processing you expect to do. Finally, if you select the Modules tab at the top of this page you can find the kinds of functions available for the NPP operations that support your needs.
Documentation
- General API Conventions
- Image-Processing Specific API Conventions
- Signal-Processing Specific API Conventions
Files
NPP comprises the following files:
Header Files
All NPP header files are located in the CUDA Toolkit's /include/ directory.
Library Files
Starting with Version 5.5, NPP's functionality is split into 3 distinct library groups:
- A core library (NPPC) containing basic functionality from the npp.h header file as well as functionality shared by the other two libraries.
- The image processing library NPPI. Any function from the nppi.h header file (or the various header files named "nppi_xxx.h") is bundled into the NPPI library.
- The signal processing library NPPS. Any function from the npps.h header file (or the various header files named "npps_xxx.h") is bundled into the NPPS library.
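As a hedged illustration of how this split affects application code, the following minimal sketch uses only NPPS signal-processing functions plus the core library, so it would link against -lnppc and -lnpps. The function names are real NPPS entry points; the program is untested here and requires a CUDA-capable GPU.

```cpp
#include <cuda_runtime.h>
#include <npp.h>
#include <cstdio>

int main(void)
{
    // Allocate and fill a device signal with the NPPS support functions.
    const int nLength = 1024;
    Npp32f *pSignal = nppsMalloc_32f(nLength);
    nppsSet_32f(1.0f, pSignal, nLength);

    // NPPS reductions need a device scratch buffer whose size is queried first.
    int nBufferSize = 0;
    nppsSumGetBufferSize_32f(nLength, &nBufferSize);
    Npp8u  *pScratch = nppsMalloc_8u(nBufferSize);
    Npp32f *pSum     = nppsMalloc_32f(1);

    // Sum the signal on the device; the result stays in device memory.
    nppsSum_32f(pSignal, nLength, pSum, pScratch);

    Npp32f hSum = 0.0f;
    cudaMemcpy(&hSum, pSum, sizeof(Npp32f), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", hSum);  // a constant signal of 1.0f over 1024 samples sums to 1024

    nppsFree(pSum);
    nppsFree(pScratch);
    nppsFree(pSignal);
    return 0;
}
```

An equivalent image-processing program would instead call nppi functions and link against -lnppc plus the relevant nppi sub-libraries described below.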
On the Windows platform the NPP stub libraries are found in the CUDA Toolkit's library directory:
/lib/nppc.lib
/lib/nppial.lib
/lib/nppicc.lib
/lib/nppicom.lib
/lib/nppidei.lib
/lib/nppif.lib
/lib/nppig.lib
/lib/nppim.lib
/lib/nppist.lib
/lib/nppisu.lib
/lib/nppitc.lib
/lib/npps.lib
The matching DLLs are located in the CUDA Toolkit's binary directory. For example:
/bin/nppial64_101_<build_no>.dll // Dynamic image-processing library for 64-bit Windows.
On Linux and Mac platforms the dynamic libraries are located in the lib directory
/lib/libnppc.so.10.1.<build_no> // NPP dynamic core library for Linux
/lib/libnpps.10.1.dylib // NPP dynamic signal processing library for Mac
Library Organization
Note: The static NPP libraries depend on a common thread abstraction layer library called cuLIBOS (libculibos.a) that is now distributed as part of the toolkit. Consequently, cuLIBOS must be provided to the linker when linking against the static libraries. To minimize library loading and CUDA runtime startup times it is recommended to use the static libraries whenever possible.

To improve loading and runtime performance when using the dynamic libraries, NPP recently replaced the single nppi dynamic library with a full set of nppi sub-libraries. Linking to only the sub-libraries that contain functions your application uses can significantly improve load time and runtime startup performance. Some nppi functions make calls to other nppi and/or npps functions internally, so you may need to link to a few extra libraries depending on what function calls your application makes.

The nppi sub-libraries are split into sections corresponding to the way the nppi header files are split. The list of sub-libraries is as follows:
- nppc: NPP core library, which MUST be linked into any NPP application; functions are listed in nppCore.h
- nppial: arithmetic and logical operation functions in nppi_arithmetic_and_logical_operations.h
- nppicc: color conversion and sampling functions in nppi_color_conversion.h
- nppicom: JPEG compression and decompression functions in nppi_compression_functions.h
- nppidei: data exchange and initialization functions in nppi_data_exchange_and_initialization.h
- nppif: filtering and computer vision functions in nppi_filter_functions.h
- nppig: geometry transformation functions in nppi_geometry_transforms.h
- nppim: morphological operation functions in nppi_morphological_operations.h
- nppist: statistics and linear transform functions in nppi_statistics_functions.h and nppi_linear_transforms.h
- nppisu: memory support functions in nppi_support_functions.h
- nppitc: threshold and compare operation functions in nppi_threshold_and_compare_operations.h
For example, on Linux, to compile a small color conversion application foo using NPP against the dynamic libraries, the following command can be used:

nvcc foo.c -lnppc -lnppicc -o foo

Whereas to compile against the static NPP libraries, the following command has to be used:

nvcc foo.c -lnppc_static -lnppicc_static -lculibos -o foo

It is also possible to use the native host C++ compiler. Depending on the host operating system, some additional libraries like pthread or dl might be needed on the linking line. The following command on Linux is suggested:

g++ foo.c -lnppc_static -lnppicc_static -lculibos -lcudart_static -lpthread -ldl -I <cuda-toolkit-path>/include -L <cuda-toolkit-path>/lib64 -o foo
NPP is a stateless API. As of NPP 6.5, the ONLY state that NPP remembers between function calls is the current stream ID, i.e. the stream ID that was set in the most recent nppSetStream() call, along with a few bits of device-specific information about that stream. The default stream ID is 0. If an application intends to use NPP with multiple streams, it is the responsibility of the application either to use the fully stateless application managed stream context interface described below or to call nppSetStream() whenever it wishes to change stream IDs. Any NPP function call that does not use an application managed stream context will use the stream set by the most recent call to nppSetStream(); nppGetStream() and other "nppGet" type function calls that do not take an application managed stream context parameter will also always use that stream. All NPP functions should be thread safe except for the following functions:

- nppiDCTQuantFwd8x8LS_JPEG_8u16s_C1R
- nppiDCTQuantInv8x8LS_JPEG_16s8u_C1R
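To illustrate the nppSetStream() model, here is a minimal, untested sketch; it requires a CUDA-capable GPU and links against -lnppc, -lnppisu, and -lnppidei. nppiMalloc_8u_C1 and nppiSet_8u_C1R are real NPP functions, but treat the program as an assumption-laden example rather than a canonical recipe.

```cpp
#include <cuda_runtime.h>
#include <npp.h>

int main(void)
{
    // Create a non-default CUDA stream and make NPP use it.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    nppSetStream(stream);   // all subsequent non-_Ctx NPP calls launch on this stream

    // Allocate a small 8-bit single-channel image with the NPP support functions.
    int nPitch = 0;
    Npp8u *pImg = nppiMalloc_8u_C1(640, 480, &nPitch);

    // Fill the image with a constant value; this runs on the stream set above.
    NppiSize oRoi = {640, 480};
    nppiSet_8u_C1R(128, pImg, nPitch, oRoi);

    // NPP launches are asynchronous with respect to the host;
    // synchronize before reading or freeing the data.
    cudaStreamSynchronize(stream);

    nppiFree(pImg);
    cudaStreamDestroy(stream);
    return 0;
}
```

Because nppSetStream() sets process-wide state, applications driving NPP from multiple host threads with different streams should prefer the application managed stream context interface described below.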
Note: New to NPP 10.1 is support for the fp16 (__half) data type in some NPP image processing functions on GPU architectures Volta and beyond. NPP image functions that support pixels of the __half data type have function names containing 16f, and pointers to pixels of that data type need to be passed to NPP as the NPP data type Npp16f. Here is an example of how to pass image pointers of type __half to an NPP 16f function that should work on all compilers, including Armv7:

nppiAdd_16f_C3R(reinterpret_cast<const Npp16f *>((const void *)(pSrc1Data)), nSrc1Pitch,
                reinterpret_cast<const Npp16f *>((const void *)(pSrc2Data)), nSrc2Pitch,
                reinterpret_cast<Npp16f *>((void *)(pDstData)), nDstPitch,
                oDstROI);
Application Managed Stream Context
Note: Also new to NPP 10.1 is support for application managed stream contexts. Application managed stream contexts make NPP truly stateless internally, allowing for rapid, no-overhead stream context switching. While it is recommended that all new NPP application code use application managed stream contexts, existing application code can continue to use nppSetStream() and nppGetStream() to manage stream contexts (also with no overhead now), but over time NPP will likely deprecate the older non-application managed stream context API. Both the new and old stream management techniques can be intermixed in applications, but any NPP calls using the old API will use the stream set by the most recent call to nppSetStream(), and nppGetStream() calls will also return that stream ID.

All NPP function names ending in _Ctx expect an application managed stream context to be passed as a parameter. The NppStreamContext application managed stream context structure is defined in nppdefs.h and should be initialized by the application with the CUDA device ID and the values associated with a particular stream. Applications can use multiple fixed stream contexts or change the values in a particular stream context on the fly whenever a different stream is to be used.

Note that some of the "GetBufferSize" style functions now have application managed stream context variants and should be called with the same stream context that the associated NPP function will use. Also note that NPP does minimal checking of the parameters in an application managed stream context structure, so it is up to the application to ensure that they are correct and valid when passed to NPP functions.
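As a sketch of initializing the NppStreamContext structure, the helper below fills the structure's fields (named as in nppdefs.h) from CUDA runtime device-attribute queries. makeStreamContext is a hypothetical helper name, and the code is untested here; it requires a CUDA-capable GPU.

```cpp
#include <cuda_runtime.h>
#include <npp.h>

// Hypothetical helper: build an application managed stream context for a stream.
NppStreamContext makeStreamContext(cudaStream_t hStream)
{
    NppStreamContext ctx = {};
    ctx.hStream = hStream;
    cudaGetDevice(&ctx.nCudaDeviceId);

    // Per-device properties the context carries so NPP need not query them.
    cudaDeviceGetAttribute(&ctx.nMultiProcessorCount,
                           cudaDevAttrMultiProcessorCount, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nMaxThreadsPerMultiProcessor,
                           cudaDevAttrMaxThreadsPerMultiProcessor, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nMaxThreadsPerBlock,
                           cudaDevAttrMaxThreadsPerBlock, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMajor,
                           cudaDevAttrComputeCapabilityMajor, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMinor,
                           cudaDevAttrComputeCapabilityMinor, ctx.nCudaDeviceId);
    cudaStreamGetFlags(hStream, &ctx.nStreamFlags);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, ctx.nCudaDeviceId);
    ctx.nSharedMemPerBlock = prop.sharedMemPerBlock;
    return ctx;
}
```

The resulting context can then be passed to any _Ctx function, e.g. nppiSet_8u_C1R_Ctx(), with no global state involved; an application can keep one such context per stream and pass whichever one is appropriate for each call.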
Technical Specifications
Supported Platforms:
- Microsoft Windows 7, 8, and 10 (64-bit and 32-bit)
- Microsoft Windows Vista (64-bit and 32-bit)
- Linux (CentOS, Ubuntu, and several others) (64-bit and 32-bit)
- Mac OS X (64-bit)
- Android on Arm (32-bit and 64-bit)
Supported NVIDIA Hardware
NPP runs on all CUDA capable NVIDIA hardware. For details please see http://www.nvidia.com/object/cuda_learn_products.html