Metadata-Version: 2.1
Name: nvidia-nccl-cu11
Version: 2.14.3
Summary: NVIDIA Collective Communication Library (NCCL) Runtime
Home-page: https://developer.nvidia.com/cuda-zone
Author: Nvidia CUDA Installer Team
Author-email: cuda_installer@nvidia.com
License: NVIDIA Proprietary Software
Keywords: cuda,nvidia,runtime,machine learning,deep learning
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: Other/Proprietary License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: Microsoft :: Windows
Requires-Python: >=3
License-File: License.txt

NCCL (pronounced "Nickel") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, and reduce-scatter. It is optimized to achieve high bandwidth on platforms using PCIe, NVLink, and NVSwitch, as well as over networks using InfiniBand Verbs or TCP/IP sockets.
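
As a minimal sketch of the collectives listed above, the C program below runs a single-process, two-GPU all-reduce through the NCCL C API shipped as this package's shared library. It assumes a machine with two CUDA devices and that nccl.h and libnccl are visible to the compiler and linker (for example from a CUDA toolkit plus an NCCL development install); the device list and buffer size are arbitrary example values, not part of this package's metadata.

    /* Minimal sketch: single-process all-reduce across two GPUs using the
       NCCL C API (error checking omitted for brevity). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>
    #include <nccl.h>

    int main(void) {
      const int nDev = 2;                 /* assumed: two visible CUDA GPUs */
      int devs[2] = {0, 1};
      const size_t count = 1 << 20;       /* floats per device, arbitrary  */

      ncclComm_t comms[2];
      float *sendbuff[2], *recvbuff[2];
      cudaStream_t streams[2];

      /* Fill a host buffer with ones so each reduced element equals nDev. */
      float *host = (float *)malloc(count * sizeof(float));
      for (size_t j = 0; j < count; ++j) host[j] = 1.0f;

      /* Per-device buffers and streams. */
      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaMalloc((void **)&sendbuff[i], count * sizeof(float));
        cudaMalloc((void **)&recvbuff[i], count * sizeof(float));
        cudaMemcpy(sendbuff[i], host, count * sizeof(float),
                   cudaMemcpyHostToDevice);
        cudaStreamCreate(&streams[i]);
      }

      /* One communicator per device, all owned by this single process. */
      ncclCommInitAll(comms, nDev, devs);

      /* Launch the all-reduce on every device inside a group call. */
      ncclGroupStart();
      for (int i = 0; i < nDev; ++i)
        ncclAllReduce(sendbuff[i], recvbuff[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
      ncclGroupEnd();

      /* Wait for completion on each stream. */
      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaStreamSynchronize(streams[i]);
      }

      /* Cleanup. */
      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaFree(sendbuff[i]);
        cudaFree(recvbuff[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
      }
      free(host);
      printf("all-reduce complete\n");
      return 0;
    }

Multi-process or multi-node setups would instead create one communicator per rank with ncclGetUniqueId and ncclCommInitRank, but the collective call itself is the same.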