<img alt="C++20" src="https://img.shields.io/badge/C%2B%2B-20-blue?style=plastic&logo=cplusplus&logoColor=blue"> <img alt="CUDA-12" src="https://img.shields.io/badge/CUDA-12-green?style=plastic&logo=nvidia"> <img alt="Static Badge" src="https://img.shields.io/badge/python-3-blue?style=plastic&logo=python&logoColor=blue"> <img alt="Static Badge" src="https://img.shields.io/badge/pytorch-2-orange?style=plastic&logo=pytorch">
</div>

## Environment

The simplest way is to use my Docker image {{<href text="jamesnulliu/deeplearning:latest" url="https://hub.docker.com/r/jamesnulliu/deeplearning">}}, which contains all the software you need to build the project:

```bash
docker pull jamesnulliu/deeplearning:latest
```

> 📝**NOTE**
> Check my blog [Docker Container with Nvidia GPU Support](/blogs/docker-container-with-nvidia-gpu-support) if you need any help.

If you are planning to build your own environment instead, install the following software, with the versions listed below:

- Miniconda/Anaconda
- gcc >= 12.0, nvcc >= 12.0
- CMake >= 3.30
- Ninja
- vcpkg, pkg-config
- [managed by conda] python >= 3.10, pytorch >= 2.0
- [managed by vcpkg] cxxopts, fmt, spdlog, proxy, gtest, yaml-cpp
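
The minimum-version requirements above can also be checked programmatically. A small sketch (the helper name is mine, not part of the repo; it assumes purely numeric dotted versions):

```python
def meets_minimum(version: str, minimum: str) -> bool:
    # Compare dotted version strings numerically, component by component
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("3.30.2", "3.30"))  # True: CMake 3.30.2 satisfies >= 3.30
print(meets_minimum("11.4", "12.0"))    # False: gcc 11.4 is too old
```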

**🎯Miniconda**

Managing Python environments with Miniconda is always a good choice. Check [the official website](https://docs.anaconda.com/miniconda/install/#quick-command-line-install) for an installation guide.

After installation, if you do not intend to install all the packages into the `base` environment, create a new conda environment named `PMPP` (or whatever you like) and activate it:

```bash {linenos=true}
# Python version should be >= 3.10
conda create -n PMPP python=3.12
conda activate PMPP  # Activate this environment
# If your system gcc is newer than 12, you will most likely need to update
# libstdc++ inside conda so that the later compiled targets can run:
conda upgrade libstdcxx-ng -c conda-forge
```

**🎯PyTorch**

Install pytorch **with pip (not conda)** in environment `PMPP`, following the steps on [the official website](https://pytorch.org/get-started/locally/#start-locally). In my case I installed `torch-2.6.0 + cuda 12.6`.

> 📝**NOTE**
> All the Python packages you install can be found under `$CONDA_PREFIX/lib/python3.12/site-packages`.
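
That path can be derived from `$CONDA_PREFIX` and the Python version. A minimal sketch (the helper name is hypothetical; it assumes the standard conda layout on Linux):

```python
def site_packages_dir(conda_prefix: str, py_version: str) -> str:
    # Standard conda layout on Linux: <prefix>/lib/python<X.Y>/site-packages
    return f"{conda_prefix}/lib/python{py_version}/site-packages"

print(site_packages_dir("/opt/conda/envs/PMPP", "3.12"))
# /opt/conda/envs/PMPP/lib/python3.12/site-packages
```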

**🎯CUDA**

To compile CUDA code, you need the **CUDA toolkit** installed on your system. Usually, even if `torch-2.6.0 + cuda 12.6` is installed in your conda environment while `cuda 12.1` is installed on the system, torch runs in Python without any problems. In some cases, however, you still have to install `cuda 12.6` to exactly match the torch build you chose.
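
The rule of thumb above can be expressed as a tiny check; this is a sketch of the heuristic, not an official compatibility guarantee:

```python
def cuda_major_matches(torch_cuda: str, system_cuda: str) -> bool:
    # Heuristic: torch built against CUDA 12.x usually runs with any system
    # CUDA 12.y; a different major version is far more likely to break.
    return torch_cuda.split(".")[0] == system_cuda.split(".")[0]

print(cuda_major_matches("12.6", "12.1"))  # True
print(cuda_major_matches("12.6", "11.8"))  # False
```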

You can find all versions of CUDA on [the official website](https://developer.nvidia.com/cuda-toolkit-archive).

> 📝**NOTE**
> Installing and using multiple versions of CUDA side by side is possible by managing the `PATH` and `LD_LIBRARY_PATH` environment variables on Linux. You can do this manually, or refer to my method in [this blog](/blogs/environment-variable-management).
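
What such switching amounts to can be sketched in Python; the function name is hypothetical, and `/usr/local/cuda-<version>` is assumed to be the default toolkit install location:

```python
import os

def cuda_env_overrides(version: str) -> dict:
    # Build the environment overrides that point PATH and LD_LIBRARY_PATH
    # at one specific toolkit installed under /usr/local/cuda-<version>
    home = f"/usr/local/cuda-{version}"
    return {
        "CUDA_HOME": home,
        "PATH": f"{home}/bin:" + os.environ.get("PATH", ""),
        "LD_LIBRARY_PATH": f"{home}/lib64:" + os.environ.get("LD_LIBRARY_PATH", ""),
    }

env = cuda_env_overrides("12.6")
print(env["CUDA_HOME"])  # /usr/local/cuda-12.6
```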

## Quick Start

To build the C++ part only:

```bash
bash scripts/build.sh
```

You will find `./build/lib/libPmppTorchOps.so`, the operator library, and `./build/test/pmpp_test`, the test executable (built with gtest).

Execute the test executable to test the library manually:

```bash
./build/test/pmpp_test
```

Note that the tests are already integrated into the CMake build system (via ctest). In [scripts/build.sh](scripts/build.sh), the last line shows how to run them:

```bash
# $BUILD_DIR is "./build" by default

# If the library has not been built yet, the `all` target is required before `check`
GTEST_COLOR=yes cmake --build $BUILD_DIR -j $(nproc) --target all check
# If the library has already been built, `check` alone is enough
GTEST_COLOR=yes cmake --build $BUILD_DIR -j $(nproc) --target check
```

To build and install the Python package `pmpp` into the currently activated conda environment (the pmpp operator library will be built automatically if it has not been built yet):

```bash
pip3 install --no-build-isolation -v .
```

`torch.ops.pmpp.vector_add` will be available after installation; see [test.py](test/test.py) for an example.
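
For reference, the op's expected semantics are plain elementwise addition; a pure-Python sketch of that behavior (an assumption based on the op's name — the real `torch.ops.pmpp.vector_add` dispatches to the CUDA kernel in this repo):

```python
def vector_add_ref(a, b):
    # Pure-Python reference for elementwise vector addition
    assert len(a) == len(b), "inputs must have the same length"
    return [x + y for x, y in zip(a, b)]

print(vector_add_ref([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
```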