Wednesday, May 8, 2019
Installing PyTorch on ppc64le Ubuntu 18.04
Here we walk through installing PyTorch v1.0.1 inside a docker image based on Ubuntu 18.04 with CUDA 10.0.
First, use nvidia-docker2 to run an Ubuntu 18.04-based docker image that has CUDA 10.0 and Anaconda3 v4.4 installed.
[root@ac922 ~]# docker run --runtime=nvidia -ti --rm -v /data/files:/mnt bsyu/ubuntu18.04_cuda10-0_python352_ppc64le:v0.1
Inside the docker container, first install the required software such as openblas.
root@d7a720117c80:/# cd /mnt
root@d7a720117c80:/mnt# apt-get install -y libblas-dev libopenblas-base
root@d7a720117c80:/mnt# conda install numpy pyyaml setuptools cmake cffi openblas
Install ONNX first, as follows.
root@d7a720117c80:/mnt# git clone --recursive https://github.com/onnx/onnx.git
root@d7a720117c80:/mnt# pip install -e onnx/
Now fetch the pytorch source with git clone.
root@d7a720117c80:/mnt# git clone https://github.com/pytorch/pytorch.git
root@d7a720117c80:/mnt# cd pytorch
root@d7a720117c80:/mnt/pytorch# git submodule update --init
root@d7a720117c80:/mnt/pytorch# git checkout tags/v1.0.1
root@d7a720117c80:/mnt/pytorch# export CMAKE_PREFIX_PATH=/opt/anaconda3
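CMAKE_PREFIX_PATH only needs to point at the conda prefix, so if your Anaconda lives somewhere other than /opt/anaconda3, a generic alternative (a sketch equivalent to the export above) is:

export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"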
At this point one source file must be edited. The change is to comment out 4 lines with //, as shown below. Without this edit you will run into the error "fatal error: onnx/onnx.pb.h: No such file or directory".
root@d7a720117c80:/mnt/pytorch# vi third_party/onnx/onnx/onnx_pb.h
...
//#ifdef ONNX_ML
#include "onnx/onnx-ml.pb.h"
//#else
//#include "onnx/onnx.pb.h"
//#endif
Now run setup.py to build and install, as follows.
root@d7a720117c80:/mnt/pytorch# python setup.py install
The installation is now done. Move to a different directory (importing torch from inside the source tree would pick up the local torch/ folder instead of the installed package), start python, and try import torch.
root@d7a720117c80:/mnt/pytorch# cd ..
root@d7a720117c80:/mnt# python
Python 3.5.6 |Anaconda custom (64-bit)| (default, Aug 26 2018, 22:03:11)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from __future__ import print_function
>>> x = torch.Tensor(5, 3)
>>> print(x)
tensor([[0.0000e+00, 0.0000e+00, 7.1479e+22],
[7.3909e+22, 2.5318e-12, 8.1465e-33],
[1.3563e-19, 1.8888e+31, 4.7414e+16],
[2.5171e-12, 8.0221e+17, 1.3556e-19],
[1.3563e-19, 1.3563e-19, 1.8561e-19]])
To check whether the GPUs are actually usable, try cuda() as shown below. (Note that y is not defined in the session above; create it first, for example with y = torch.rand(5, 3), before running the addition.)
>>> if torch.cuda.is_available():
... x = x.cuda()
... y = y.cuda()
... x + y
...
tensor([[-1.3789e-07, 4.3664e-41, -1.3789e-07],
[ 7.3909e+22, 4.8930e-12, 1.1625e+33],
[ 8.9605e-01, 1.1632e+33, 5.6003e-02],
[ 7.0374e+22, 1.5301e+10, 1.0795e+30],
[ 6.1205e+10, 1.8812e+31, 1.3567e-19]], device='cuda:0')
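If you want a couple of extra checks on what PyTorch sees (these are standard torch.cuda calls and were not part of the original session), you can also try:

>>> torch.cuda.device_count()      # number of GPUs visible to PyTorch
>>> torch.cuda.get_device_name(0)  # model name of the first GPU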
Friday, November 10, 2017
Testing tensorflow 1.3, caffe2, and pytorch with nvidia-docker
Here is how to test tensorflow 1.3, caffe2, and pytorch using nvidia-docker.
1) tensorflow v1.3
Start the tensorflow 1.3 docker image as follows.
root@minsky:~# nvidia-docker run -ti --rm -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
First, check the various PATH environment variables.
root@67c0e6901bb2:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/anaconda3/lib/python3.6/site-packages
Move to the directory that contains the cifar10 example code.
root@67c0e6901bb2:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
Modify part of the cifar10_multi_gpu_train.py code before running it. (In principle this should be adjustable through command-line parameters such as --train_dir, but in practice the source seems to need editing directly for it to run properly.)
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512 --num_gpus 2
usage: cifar10_multi_gpu_train.py [-h] [--batch_size BATCH_SIZE]
[--data_dir DATA_DIR] [--use_fp16 USE_FP16]
cifar10_multi_gpu_train.py: error: unrecognized arguments: --num_gpus 2
To avoid the error above, edit the code directly as follows.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--train_dir', type=str, default='/tmp/cifar10_train',
parser.add_argument('--train_dir', type=str, default='/data/imsi/test/tf1.3',
help='Directory where to write event logs and checkpoint.')
#parser.add_argument('--max_steps', type=int, default=1000000,
parser.add_argument('--max_steps', type=int, default=10000,
help='Number of batches to run.')
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=4,
help='How many GPUs to use.')
Now run it as follows. Here batch_size is set to 512, but a larger value would probably work as well.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 6.1%
...
2017-11-10 01:20:23.628755: step 9440, loss = 0.63 (15074.6 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:25.052011: step 9450, loss = 0.64 (14615.4 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:26.489564: step 9460, loss = 0.55 (14872.0 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:27.860303: step 9470, loss = 0.61 (14515.9 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:29.289386: step 9480, loss = 0.54 (13690.6 examples/sec; 0.037 sec/batch)
2017-11-10 01:20:30.799570: step 9490, loss = 0.69 (15940.8 examples/sec; 0.032 sec/batch)
2017-11-10 01:20:32.239056: step 9500, loss = 0.54 (12581.4 examples/sec; 0.041 sec/batch)
2017-11-10 01:20:34.219832: step 9510, loss = 0.60 (14077.9 examples/sec; 0.036 sec/batch)
...
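While the training above is running, it is easy to confirm from another terminal on the host that all four GPUs are busy (not part of the original run; plain periodic polling with nvidia-smi):

watch -n 1 nvidia-smi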
Next, we run a docker container that is assigned only half of the CPUs, i.e. one 8-core chip out of the two-chip, 16-core total, and only 2 of the 4 GPUs. When controlling CPU resources with --cpuset-cpus, the CPU numbers are given in groups of two like this because IBM POWER8 supports SMT (hyperthreading) of up to 8 threads per core, so 8 logical CPU numbers are assigned to each core. Here SMT has been set to 2 rather than 8 to optimize deep learning performance.
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
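As an aside, the SMT mode mentioned above can be verified on the host before or after starting the container (a hedged check; ppc64_cpu comes from the powerpc-utils package and these commands were not part of the original session):

ppc64_cpu --smt                      # e.g. prints SMT=2
lscpu | grep 'Thread(s) per core'    # should agree with the SMT setting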
root@3b2c2614811d:~# nvidia-smi
Fri Nov 10 02:24:14 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 38C P0 30W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 40C P0 33W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@3b2c2614811d:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
Since there are now 2 GPUs instead of 4, modify cifar10_multi_gpu_train.py accordingly, as follows.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=2,
help='How many GPUs to use.')
It runs fine.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 1.7%
...
2017-11-10 02:35:50.040462: step 120, loss = 4.07 (15941.4 examples/sec; 0.032 sec/batch)
2017-11-10 02:35:50.587970: step 130, loss = 4.14 (19490.7 examples/sec; 0.026 sec/batch)
2017-11-10 02:35:51.119347: step 140, loss = 3.91 (18319.8 examples/sec; 0.028 sec/batch)
2017-11-10 02:35:51.655916: step 150, loss = 3.87 (20087.1 examples/sec; 0.025 sec/batch)
2017-11-10 02:35:52.181703: step 160, loss = 3.90 (19215.5 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:52.721608: step 170, loss = 3.82 (17780.1 examples/sec; 0.029 sec/batch)
2017-11-10 02:35:53.245088: step 180, loss = 3.92 (18888.4 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:53.777146: step 190, loss = 3.80 (19103.7 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:54.308063: step 200, loss = 3.76 (18554.2 examples/sec; 0.028 sec/batch)
...
2) caffe2
This time we start the docker container from the outset with only 2 GPUs and 8 CPU cores.
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/caffe2-ppc64le:v0.3 bash
As shown below, only 2 GPUs come up.
root@dc853a5495a0:/# nvidia-smi
Fri Nov 10 07:22:21 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 32C P0 29W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 35C P0 32W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Check the environment variables. Since caffe2 is installed under /opt/caffe2 here, LD_LIBRARY_PATH and PYTHONPATH are set to match.
root@dc853a5495a0:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/opt/caffe2/lib:/opt/DL/nccl/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/caffe2/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/caffe2
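Before running the trainer, a quick way to confirm that this caffe2 build has GPU support and sees the two GPUs assigned to the container (a hedged check using the standard caffe2 workspace API; it was not part of the original session):

python -c "from caffe2.python import workspace; print(workspace.NumCudaDevices())"   # expected to print 2 here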
caffe2 is tested with the resnet50_trainer.py shown below. Before that, to work around the lmdb creation problem described in https://github.com/caffe2/caffe2/issues/517, modify part of the code as suggested there.
root@dc853a5495a0:/# cd /data/imsi/caffe2/caffe2/python/examples
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# vi lmdb_create_example.py
...
flatten_img = img_data.reshape(np.prod(img_data.shape))
# img_tensor.float_data.extend(flatten_img)
img_tensor.float_data.extend(flatten_img.flat)
Next, create the lmdb as follows. Since this has already been run once, it finishes very quickly when run again.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# python lmdb_create_example.py --output_file /data/imsi/test/caffe2/lmdb
>>> Write database...
Inserted 0 rows
Inserted 16 rows
Inserted 32 rows
Inserted 48 rows
Inserted 64 rows
Inserted 80 rows
Inserted 96 rows
Inserted 112 rows
Checksum/write: 1744827
>>> Read database...
Checksum/read: 1744827
Then run training as follows. Because only 2 GPUs are visible in this environment, --gpus must be given 0,1 instead of 0,1,2,3.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# time python resnet50_trainer.py --train_data /data/imsi/test/caffe2/lmdb --gpus 0,1 --batch_size 128 --num_epochs 1
When it runs, 'not a valid file' warning messages appear as below, but judging from github discussions they can safely be ignored.
Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:file_store_handler_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:redis_store_handler_ops as it is not a valid file.
INFO:resnet50_trainer:Running on GPUs: [0, 1]
INFO:resnet50_trainer:Using epoch size: 1499904
INFO:data_parallel_model:Parallelizing model for devices: [0, 1]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:data_parallel_model:Model for GPU : 1
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
WARNING:data_parallel_model:------- DEPRECATED API, please use data_parallel_model.OptimizeGradientMemory() -----
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.252535104752 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.253523111343 secs
INFO:resnet50_trainer:Starting epoch 0/1
INFO:resnet50_trainer:Finished iteration 1/11718 of epoch 0 (27.70 images/sec)
INFO:resnet50_trainer:Training loss: 7.39205980301, accuracy: 0.0
INFO:resnet50_trainer:Finished iteration 2/11718 of epoch 0 (378.51 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 3/11718 of epoch 0 (387.87 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 4/11718 of epoch 0 (383.28 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 5/11718 of epoch 0 (381.71 images/sec)
...
However, there is the problem visible above: accuracy is reported as 1.0 right from the start. This resnet50_trainer.py issue has been discussed on caffe2's github (below), but there is no clear fix yet. Still, it does not get in the way of relative system performance measurement.
https://github.com/caffe2/caffe2/issues/810
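If all you need is the relative throughput, it is enough to average the images/sec figures printed in the trainer log; a rough sketch (assuming the output was redirected to a file, here hypothetically named resnet50.log):

grep 'images/sec' resnet50.log \
  | sed 's/.*(\([0-9.]*\) images\/sec).*/\1/' \
  | awk 'NR>1 {sum+=$1; n++} END {if (n) printf "%.1f images/sec (average, warm-up iteration excluded)\n", sum/n}'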
3) pytorch
This time we test with the pytorch image.
root@8ccd72116fee:~# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
First, start the docker image as follows. Note that the --ipc=host option is used here to avoid the hang described at https://discuss.pytorch.org/t/imagenet-example-is-crashing/1363/2.
root@minsky:~# nvidia-docker run -ti --rm --ipc=host -v /data:/data bsyu/pytorch-ppc64le:v0.1 bash
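If you would rather not share the host IPC namespace, enlarging the container's shared memory is a commonly used alternative, since the hang comes from PyTorch DataLoader workers exhausting the default 64MB /dev/shm (a hedged variant of the command above; the 8g value is an arbitrary choice):

nvidia-docker run -ti --rm --shm-size=8g -v /data:/data bsyu/pytorch-ppc64le:v0.1 bash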
Run the simplest example, mnist, as follows. Running 10 epochs takes roughly 1 minute 30 seconds.
root@8ccd72116fee:/data/imsi/examples/mnist# time python main.py --batch-size 512 --epochs 10
...
rain Epoch: 9 [25600/60000 (42%)] Loss: 0.434816
Train Epoch: 9 [30720/60000 (51%)] Loss: 0.417652
Train Epoch: 9 [35840/60000 (59%)] Loss: 0.503125
Train Epoch: 9 [40960/60000 (68%)] Loss: 0.477776
Train Epoch: 9 [46080/60000 (76%)] Loss: 0.346416
Train Epoch: 9 [51200/60000 (85%)] Loss: 0.361492
Train Epoch: 9 [56320/60000 (93%)] Loss: 0.383941
Test set: Average loss: 0.1722, Accuracy: 9470/10000 (95%)
Train Epoch: 10 [0/60000 (0%)] Loss: 0.369119
Train Epoch: 10 [5120/60000 (8%)] Loss: 0.377726
Train Epoch: 10 [10240/60000 (17%)] Loss: 0.402854
Train Epoch: 10 [15360/60000 (25%)] Loss: 0.349409
Train Epoch: 10 [20480/60000 (34%)] Loss: 0.295271
...
This example, however, uses only a single GPU. To use multiple GPUs, the imagenet example below has to be run, and for that the ilsvrc2012 dataset must be downloaded and extracted. The data was extracted as JPEG files into /data/imagenet_dir/train and /data/imagenet_dir/val as follows.
root@minsky:/data/imagenet_dir/train# while read SYNSET; do
> mkdir -p ${SYNSET}
> tar xf ../../ILSVRC2012_img_train.tar "${SYNSET}.tar"
> tar xf "${SYNSET}.tar" -C "${SYNSET}"
> rm -f "${SYNSET}.tar"
> done < /opt/DL/caffe-nv/data/ilsvrc12/synsets.txt
root@minsky:/data/imagenet_dir/train# ls -1 | wc -l
1000
root@minsky:/data/imagenet_dir/train# du -sm .
142657 .
root@minsky:/data/imagenet_dir/train# find . | wc -l
1282168
root@minsky:/data/imagenet_dir/val# ls -1 | wc -l
50000
If you run main.py as-is at this point, you hit the following error. The reason is that main.py expects the val directory, too, to contain the JPEG files organized into per-label subdirectories.
RuntimeError: Found 0 images in subfolders of: /data/imagenet_dir/val
Supported image extensions are: .jpg,.JPG,.jpeg,.JPEG,.png,.PNG,.ppm,.PPM,.bmp,.BMP
Therefore, use preprocess_imagenet_validation_data.py from the inception directory to redistribute the JPEG files into per-label directories, as follows.
root@minsky:/data/models/research/inception/inception/data# python preprocess_imagenet_validation_data.py /data/imagenet_dir/val imagenet_2012_validation_synset_labels.txt
Looking again, you can see that the files have been redistributed per label.
root@minsky:/data/imagenet_dir/val# ls | head -n 3
n01440764
n01443537
n01484850
root@minsky:/data/imagenet_dir/val# ls | wc -l
1000
root@minsky:/data/imagenet_dir/val# find . | wc -l
51001
Now run main.py as follows.
root@8ccd72116fee:~# cd /data/imsi/examples/imagenet
root@8ccd72116fee:/data/imsi/examples/imagenet# time python main.py -a resnet18 --epochs 1 /data/imagenet_dir
=> creating model 'resnet18'
Epoch: [0][0/5005] Time 11.237 (11.237) Data 2.330 (2.330) Loss 7.0071 (7.0071) Prec@1 0.391 (0.391) Prec@5 0.391 (0.391)
Epoch: [0][10/5005] Time 0.139 (1.239) Data 0.069 (0.340) Loss 7.1214 (7.0515) Prec@1 0.000 (0.284) Prec@5 0.000 (1.065)
Epoch: [0][20/5005] Time 0.119 (0.854) Data 0.056 (0.342) Loss 7.1925 (7.0798) Prec@1 0.000 (0.260) Prec@5 0.781 (0.930)
...
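Note that main.py from the pytorch examples repository wraps the model in torch.nn.DataParallel, so it uses every GPU visible inside the container. To restrict it to a subset, the usual approach is the CUDA_VISIBLE_DEVICES environment variable (an illustrative variant, not part of the original run):

CUDA_VISIBLE_DEVICES=0,1 python main.py -a resnet18 --epochs 1 /data/imagenet_dir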
* The docker images used above were backed up as follows.
root@minsky:/data/docker_save# docker save --output caffe2-ppc64le.v0.3.tar bsyu/caffe2-ppc64le:v0.3
root@minsky:/data/docker_save# docker save --output pytorch-ppc64le.v0.1.tar bsyu/pytorch-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output tf1.3-ppc64le.v0.1.tar bsyu/tf1.3-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda2-ppc64le.v0.1.tar bsyu/cudnn6-conda2-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda3-ppc64le.v0.1.tar bsyu/cudnn6-conda3-ppc64le:v0.1
root@minsky:/data/docker_save# ls -l
total 28023280
-rw------- 1 root root 4713168896 Nov 10 16:48 caffe2-ppc64le.v0.3.tar
-rw------- 1 root root 4218520064 Nov 10 17:10 cudnn6-conda2-ppc64le.v0.1.tar
-rw------- 1 root root 5272141312 Nov 10 17:11 cudnn6-conda3-ppc64le.v0.1.tar
-rw------- 1 root root 6921727488 Nov 10 16:51 pytorch-ppc64le.v0.1.tar
-rw------- 1 root root 7570257920 Nov 10 16:55 tf1.3-ppc64le.v0.1.tar
In an emergency, these images can be loaded back with the docker load command.
(e.g.) docker load --input caffe2-ppc64le.v0.3.tar
Wednesday, November 1, 2017
Building a docker image with caffe2, tensorflow 1.3, and pytorch installed
Here we take the nvidia/cuda-ppc64le image provided on docker hub, install the packages we want on top of it, and create a new image with the docker commit command.
First, on the parent ubuntu OS, gather the required files into a directory named docker, as below.
root@firestone:~/docker# ls -l
total 2369580
-rwxr-xr-x 1 root root 284629257 Oct 31 22:27 Anaconda2-4.4.0.1-Linux-ppc64le.sh
-rwxr-xr-x 1 root root 299425582 Oct 31 22:28 Anaconda3-4.4.0.1-Linux-ppc64le.sh
-rw-r--r-- 1 root root 1321330418 Oct 31 22:35 cuda-repo-ubuntu1604-8-0-local-ga2v2_8.0.61-1_ppc64el.deb
-rwxr-xr-x 1 root root 8788 Oct 31 21:40 debootstrap.sh
-rw-r--r-- 1 root root 68444212 Oct 31 22:35 libcudnn6_6.0.21-1+cuda8.0_ppc64el.deb
-rw-r--r-- 1 root root 59820704 Oct 31 22:35 libcudnn6-dev_6.0.21-1+cuda8.0_ppc64el.deb
-rw-r--r-- 1 root root 6575300 Oct 31 22:35 libcudnn6-doc_6.0.21-1+cuda8.0_ppc64el.deb
-rw-r--r-- 1 root root 386170568 Oct 31 22:36 mldl-repo-local_4.0.0_ppc64el.deb
drwxr-xr-x 21 root root 4096 Oct 31 21:55 ubuntu
The nvidia/cuda-ppc64le image has already been pulled with the docker pull command.
root@firestone:~/docker# docker images | grep nvidia
nvidia-docker build 405ee913a07e About an hour ago 1.02GB
nvidia/cuda-ppc64le 8.0-cudnn6-runtime-ubuntu16.04 bf28cd22ff84 6 weeks ago 974MB
nvidia/cuda-ppc64le latest 9b0a21e35c66 6 weeks ago 1.72GB
Now start nvidia/cuda-ppc64le:latest in interactive mode, mounting the docker directory as /docker.
root@firestone:~/docker# docker run -ti -v ~/docker:/docker nvidia/cuda-ppc64le:latest bash
We are now inside nvidia/cuda-ppc64le:latest. Go to /docker and confirm that the same files are visible.
root@deeed8ce922f:/# cd /docker
root@deeed8ce922f:/docker# ls
Anaconda2-4.4.0.1-Linux-ppc64le.sh libcudnn6-doc_6.0.21-1+cuda8.0_ppc64el.deb
Anaconda3-4.4.0.1-Linux-ppc64le.sh libcudnn6_6.0.21-1+cuda8.0_ppc64el.deb
cuda-repo-ubuntu1604-8-0-local-ga2v2_8.0.61-1_ppc64el.deb mldl-repo-local_4.0.0_ppc64el.deb
debootstrap.sh ubuntu
libcudnn6-dev_6.0.21-1+cuda8.0_ppc64el.deb
Install libcudnn6 first. Also install the local repo of PowerAI 4.0 (mldl-repo-local_4.0.0_ppc64el.deb), since NCCL, bazel, and the like may be needed.
root@deeed8ce922f:/docker# dpkg -i libcudnn6_6.0.21-1+cuda8.0_ppc64el.deb libcudnn6-dev_6.0.21-1+cuda8.0_ppc64el.deb mldl-repo-local_4.0.0_ppc64el.deb
root@deeed8ce922f:/docker# apt-get update
Now install cuda, nccl, openblas, and so on.
root@deeed8ce922f:/docker# apt-get install cuda
root@deeed8ce922f:/docker# apt-get install -y libnccl-dev libnccl1 python-ncclient bazel libopenblas-dev libopenblas libopenblas-base
Next, in another ssh session on the parent OS, use the docker ps command to find the ID of the container we are currently using.
root@firestone:~# docker ps | grep -v k8s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
deeed8ce922f nvidia/cuda-ppc64le:latest "bash" About an hour ago Up About an hour gracious_bhaskara
Run the docker commit command against that container ID as follows.
root@firestone:~# docker commit deeed8ce922f bsyu/libcudnn6-ppc64le:xenial
You can now see that a new docker image has been created.
root@firestone:~# docker images | grep -v ibm
REPOSITORY TAG IMAGE ID CREATED SIZE
bsyu/libcudnn6-ppc64le xenial 6d621d9d446b 48 seconds ago 7.52GB
nvidia-docker build 405ee913a07e 2 hours ago 1.02GB
nvidia/cuda-ppc64le 8.0-cudnn6-runtime-ubuntu16.04 bf28cd22ff84 6 weeks ago 974MB
nvidia/cuda-ppc64le latest 9b0a21e35c66 6 weeks ago 1.72GB
ppc64le/golang 1.6.3 6a579d02d32f 14 months ago 705MB
Tag it appropriately, log in to docker, and push it to docker hub.
root@firestone:~# docker tag bsyu/libcudnn6-ppc64le:xenial bsyu/libcudnn6-ppc64le:latest
root@firestone:~# docker login -u bsyu
Password:
Login Succeeded
root@firestone:~# docker push bsyu/libcudnn6-ppc64le:xenial
The push refers to a repository [docker.io/bsyu/libcudnn6-ppc64le]
de3b55a17936: Pushed
9eb05620c635: Mounted from nvidia/cuda-ppc64le
688827f0a03b: Mounted from nvidia/cuda-ppc64le
a36322f4fa68: Mounted from nvidia/cuda-ppc64le
6665818dfb83: Mounted from nvidia/cuda-ppc64le
4cad4acd0601: Mounted from nvidia/cuda-ppc64le
f12b406a6a23: Mounted from nvidia/cuda-ppc64le
bb179c8bb840: Mounted from nvidia/cuda-ppc64le
cd51df595e0c: Mounted from nvidia/cuda-ppc64le
4a7a95d650cf: Mounted from nvidia/cuda-ppc64le
22c3301fbf0b: Mounted from nvidia/cuda-ppc64le
xenial: digest: sha256:3993ac50b857979694cdc41cf12d672cc078583f1babb79f6c25e0688ed603ed size: 2621
Now install caffe2 into this image as well. We extract the /opt/caffe2 directory that was built in an earlier posting (http://hwengineer.blogspot.kr/2017/10/minsky-caffe2-jupyter-notebook-mnist.html) and archived as a tar file.
root@deeed8ce922f:/docker# ls
Anaconda2-4.4.0.1-Linux-ppc64le.sh libcudnn6-doc_6.0.21-1+cuda8.0_ppc64el.deb
Anaconda3-4.4.0.1-Linux-ppc64le.sh libcudnn6_6.0.21-1+cuda8.0_ppc64el.deb
caffe2.tgz mldl-repo-local_4.0.0_ppc64el.deb
cuda-repo-ubuntu1604-8-0-local-ga2v2_8.0.61-1_ppc64el.deb site-packages.tgz
debootstrap.sh ubuntu
libcudnn6-dev_6.0.21-1+cuda8.0_ppc64el.deb
root@deeed8ce922f:/docker# cd /opt
root@deeed8ce922f:/opt# tar -zxf /docker/caffe2.tgz
root@deeed8ce922f:/opt# vi ~/.bashrc
...
export LD_LIBRARY_PATH=/opt/DL/nccl/lib:/opt/DL/openblas/lib:/usr/local/cuda-8.0/lib64:/usr/lib:/usr/local/lib:/opt/caffe2/lib:/usr/lib/powerpc64le-linux-gnu
export PATH=/opt/anaconda2/bin:/opt/caffe2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PYTHONPATH=/opt/caffe2
Install the additional packages that caffe2 needs to work properly.
root@deeed8ce922f:/opt# conda install protobuf future
root@deeed8ce922f:/opt# apt-get install libprotobuf-dev python-protobuf libgoogle-glog-dev libopenmpi-dev liblmdb-dev python-lmdb libleveldb-dev python-leveldb libopencv-core-dev libopencv-gpu-dev python-opencv libopencv-highgui-dev libopencv-dev
Now run docker commit again from the parent OS, this time under a different name.
root@firestone:~# docker commit deeed8ce922f bsyu/caffe2-ppc64le-xenial:v0.1
Now start it with nvidia-docker so the GPUs can be used. For that, the nvidia-docker-plugin must first be running in the background (if it is not already).
root@firestone:~# nohup nvidia-docker-plugin &
root@firestone:~# nvidia-docker run -ti --rm -v ~/docker:/docker bsyu/caffe2-ppc64le-xenial:v0.1 bash
In the bsyu/caffe2-ppc64le-xenial:v0.1 container, confirm that caffe2 imports successfully.
root@0e58f6f69c44:/# python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
Success
Using this image, we also build a docker image containing tensorflow 1.3 and pytorch 0.2.0.
root@firestone:~# docker run -ti --rm -v ~/docker:/docker bsyu/caffe2-ppc64le-xenial:v0.1 bash
root@8cfeaf93f28b:/# cd /opt
root@8cfeaf93f28b:/opt# ls
DL anaconda2 anaconda3 caffe2
root@8cfeaf93f28b:/opt# rm -rf caffe2
root@8cfeaf93f28b:/opt# vi ~/.bashrc
...
export LD_LIBRARY_PATH=/opt/DL/nccl/lib:/opt/DL/openblas/lib:/usr/local/cuda-8.0/lib64:/usr/lib:/usr/local/lib:/usr/lib/powerpc64le-linux-gnu
export PATH=/opt/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PYTHONPATH=/opt/anaconda3/lib/python3.6/site-packages
root@8cfeaf93f28b:~# apt-get install libcupti-dev openjdk-8-jdk openjdk-8-jdk-headless git
root@8cfeaf93f28b:~# conda install bazel numpy
root@8cfeaf93f28b:~# git clone --recursive https://github.com/tensorflow/tensorflow.git
root@8cfeaf93f28b:~# cd tensorflow/
root@8cfeaf93f28b:~/tensorflow# git checkout r1.3
root@8cfeaf93f28b:~/tensorflow# ./configure
root@8cfeaf93f28b:~/tensorflow# bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
root@8cfeaf93f28b:~/tensorflow# bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
root@8cfeaf93f28b:~/tensorflow# pip install /tmp/tensorflow_pkg/tensorflow-1.3.1-cp36-cp36m-linux_ppc64le.whl
root@8cfeaf93f28b:~/tensorflow# conda list | grep tensor
tensorflow 1.3.1 <pip>
tensorflow-tensorboard 0.1.8 <pip>
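Once this image is later started under nvidia-docker, a quick sanity check that the freshly built wheel can see the GPUs (hedged; device_lib is a standard TF 1.x utility and this check was not part of the original session):

python -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"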
Now that tensorflow 1.3 is installed, save it with docker commit.
root@firestone:~# docker ps | grep -v k8s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8cfeaf93f28b bsyu/caffe2-ppc64le-xenial:v0.1 "bash" 2 hours ago Up 2 hours vigilant_ptolemy
root@firestone:~# docker commit 8cfeaf93f28b bsyu/tf1.3-caffe2-ppc64le-xenial:v0.1
Install pytorch on top of this image.
root@8cfeaf93f28b:~# git clone --recursive https://github.com/pytorch/pytorch.git
root@8cfeaf93f28b:~# cd pytorch
root@8cfeaf93f28b:~/pytorch# export CMAKE_PREFIX_PATH=/opt/anaconda3
root@8cfeaf93f28b:~/pytorch# conda install numpy pyyaml setuptools cmake cffi openblas
root@8cfeaf93f28b:~/pytorch# python setup.py install
root@8cfeaf93f28b:~# python
Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 15:31:35)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import print_function
>>> import torch
>>>
Finally, commit the result as the image bsyu/pytorch-tf1.3-caffe2-ppc64le-xenial:v0.1.
root@firestone:~# docker commit 8cfeaf93f28b bsyu/pytorch-tf1.3-caffe2-ppc64le-xenial:v0.1
root@firestone:~# docker push bsyu/pytorch-tf1.3-caffe2-ppc64le-xenial:v0.1
Tuesday, October 10, 2017
Installing PyTorch on Minsky, and running MNIST
PowerAI, IBM's toolkit of major open-source deep learning frameworks, does not include PyTorch yet. However, riding the current popularity of python, PyTorch usage keeps growing.
Does that mean PyTorch cannot be used yet on the ppc64le-based IBM Minsky server? No. What is open source good for, after all? Just build it from source and use it. Here we go through that process. Since I do not have a Minsky of my own, I built and tested on a Firestone server (POWER8 + K80) briefly lent by Kolon Benit. Ubuntu 16.04.3 with CUDA 8.0.61 was used.
root@ubuntu:/data/examples/mnist# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
root@ubuntu:/data/examples/mnist# dpkg -l | grep cuda
ii cuda 8.0.61-1 ppc64el CUDA meta-package
ii cuda-8-0 8.0.61-1 ppc64el CUDA 8.0 meta-package
First, install anaconda following the URL below.
http://hwengineer.blogspot.kr/2017/09/minsky-anaconda-installer-full-package.html
Next, download the source from github.
root@ubuntu:/data# git clone https://github.com/pytorch/pytorch.git
Cloning into 'pytorch'...
remote: Counting objects: 40225, done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 40225 (delta 33), reused 30 (delta 24), pack-reused 40162
Receiving objects: 100% (40225/40225), 15.52 MiB | 6.41 MiB/s, done.
Resolving deltas: 100% (30571/30571), done.
Checking connectivity... done.
root@ubuntu:/data# cd pytorch
Confirm that conda from Anaconda is installed properly, and set CMAKE_PREFIX_PATH accordingly.
root@ubuntu:/data/pytorch# which conda
/opt/anaconda2/bin/conda
root@ubuntu:/data/pytorch# export CMAKE_PREFIX_PATH=/opt/anaconda2
Next, install the python packages PyTorch needs, such as numpy and pyyaml, with the conda command. The original build instructions (https://github.com/pytorch/pytorch#from-source) also tell you to conda install mkl, which exists only for intel; on ppc64le, install openblas instead.
root@ubuntu:/data/pytorch# conda install numpy pyyaml setuptools cmake cffi openblas
...
Package plan for installation in environment /opt/anaconda2:
The following NEW packages will be INSTALLED:
bzip2: 1.0.6-3
certifi: 2016.2.28-py27_0
cmake: 3.6.3-0
The following packages will be UPDATED:
anaconda: 4.4.0-np112py27_0 --> custom-py27_0
astropy: 1.3.2-np112py27_0 --> 2.0.1-np113py27_1
bottleneck: 1.2.1-np112py27_0 --> 1.2.1-np113py27_1
conda: 4.3.21-py27_0 --> 4.3.27-py27_0
h5py: 2.7.0-np112py27_0 --> 2.7.0-np113py27_1
matplotlib: 2.0.2-np112py27_0 --> 2.0.2-np113py27_0
numexpr: 2.6.2-np112py27_0 --> 2.6.2-np113py27_1
numpy: 1.12.1-py27_0 --> 1.13.1-py27_1
pandas: 0.20.1-np112py27_0 --> 0.20.3-py27_1
pytables: 3.2.2-np112py27_4 --> 3.4.2-np113py27_0
pywavelets: 0.5.2-np112py27_0 --> 0.5.2-np113py27_1
scikit-image: 0.13.0-np112py27_0 --> 0.13.0-np113py27_0
scikit-learn: 0.18.1-np112py27_1 --> 0.19.0-np113py27_1
scipy: 0.19.0-np112py27_0 --> 0.19.1-np113py27_1
setuptools: 27.2.0-py27_0 --> 36.4.0-py27_1
statsmodels: 0.8.0-np112py27_0 --> 0.8.0-np113py27_1
Proceed ([y]/n)? y
bzip2-1.0.6-3. 100% |################################| Time: 0:00:00 10.23 MB/s
anaconda-custo 100% |################################| Time: 0:00:00 15.66 MB/s
certifi-2016.2 100% |################################| Time: 0:00:01 147.40 kB/s
cmake-3.6.3-0. 100% |################################| Time: 0:00:36 225.32 kB/s
numpy-1.13.1-p 100% |################################| Time: 0:00:04 1.68 MB/s
bottleneck-1.2 100% |################################| Time: 0:00:01 224.70 kB/s
h5py-2.7.0-np1 100% |################################| Time: 0:00:02 1.09 MB/s
numexpr-2.6.2- 100% |################################| Time: 0:00:01 288.12 kB/s
pywavelets-0.5 100% |################################| Time: 0:00:05 1.08 MB/s
scipy-0.19.1-n 100% |################################| Time: 0:01:25 459.82 kB/s
setuptools-36. 100% |################################| Time: 0:00:01 347.78 kB/s
pandas-0.20.3- 100% |################################| Time: 0:00:56 407.34 kB/s
pytables-3.4.2 100% |################################| Time: 0:00:41 168.51 kB/s
scikit-learn-0 100% |################################| Time: 0:01:19 158.86 kB/s
astropy-2.0.1- 100% |################################| Time: 0:00:15 644.67 kB/s
statsmodels-0. 100% |################################| Time: 0:00:44 178.04 kB/s
conda-4.3.27-p 100% |################################| Time: 0:00:00 44.12 MB/s
matplotlib-2.0 100% |################################| Time: 0:00:04 2.51 MB/s
scikit-image-0 100% |################################| Time: 0:02:18 245.94 kB/s
- https://repo.continuum.io/pkgs/free/noarch
- https://repo.continuum.io/pkgs/r/linux-ppc64le
- https://repo.continuum.io/pkgs/r/noarch
- https://repo.continuum.io/pkgs/pro/linux-ppc64le
- https://repo.continuum.io/pkgs/pro/noarch
Now it is time to install pytorch.
root@ubuntu:/data/pytorch# python setup.py install
Could not find /data/pytorch/torch/lib/gloo/CMakeLists.txt
Did you run 'git submodule update --init'?
Oops, this error appears because --recursive was not passed to git clone. Run the git submodule command as instructed and the error goes away.
root@ubuntu:/data/pytorch# git submodule update --init
Submodule 'torch/lib/gloo' (https://github.com/facebookincubator/gloo) registered for path 'torch/lib/gloo'
Submodule 'torch/lib/nanopb' (https://github.com/nanopb/nanopb.git) registered for path 'torch/lib/nanopb'
Submodule 'torch/lib/pybind11' (https://github.com/pybind/pybind11) registered for path 'torch/lib/pybind11'
Cloning into 'torch/lib/gloo'...
remote: Counting objects: 1922, done.
remote: Compressing objects: 100% (61/61), done.
remote: Total 1922 (delta 28), reused 64 (delta 24), pack-reused 1837
Receiving objects: 100% (1922/1922), 567.77 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1422/1422), done.
Checking connectivity... done.
Submodule path 'torch/lib/gloo': checked out '7fd607e2852c910f0f1320d2aaa92f1da2291109'
Cloning into 'torch/lib/nanopb'...
remote: Counting objects: 4384, done.
...
Resolving deltas: 100% (6335/6335), done.
Checking connectivity... done.
Submodule path 'torch/lib/pybind11': checked out '9f6a636e547fc70a02fa48436449aad67080698f'
Now install PyTorch again. As you can see, the build output prints plenty of warnings, for example that the x86-only SSE2 extension is missing or that mkl_intel cannot be found; on ppc64le these can safely be ignored.
root@ubuntu:/data/pytorch# python setup.py install
running install
running build_deps
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Checking if C linker supports --verbose
-- Checking if C linker supports --verbose - yes
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Checking if CXX linker supports --verbose
-- Checking if CXX linker supports --verbose - yes
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp
-- Compiling with OpenMP support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
...
-- Performing Test C_HAS_SSE1_1
-- Performing Test C_HAS_SSE1_1 - Failed
-- Performing Test C_HAS_SSE1_2
-- Performing Test C_HAS_SSE1_2 - Failed
-- Performing Test C_HAS_SSE1_3
...
-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf_lp64 - mkl_intel_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: not found
...
-- MKL library not found
-- Checking for [openblas]
-- Library openblas: /opt/anaconda2/lib/libopenblas.so
-- Looking for sgemm_
-- Looking for sgemm_ - found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Success
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (open).
-- Looking for cheev_
-- Looking for cheev_ - found
-- Found a library with LAPACK API. (open)
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
...
Scanning dependencies of target TH
[ 12%] Building C object CMakeFiles/TH.dir/THSize.c.o
[ 12%] Building C object CMakeFiles/TH.dir/THHalf.c.o
[ 18%] Building C object CMakeFiles/TH.dir/THGeneral.c.o
[ 25%] Building C object CMakeFiles/TH.dir/THAllocator.c.o
[ 31%] Building C object CMakeFiles/TH.dir/THStorage.c.o
[ 37%] Building C object CMakeFiles/TH.dir/THRandom.c.o
[ 43%] Building C object CMakeFiles/TH.dir/THFile.c.o
[ 50%] Building C object CMakeFiles/TH.dir/THTensor.c.o
[ 56%] Building C object CMakeFiles/TH.dir/THDiskFile.c.o
[ 62%] Building C object CMakeFiles/TH.dir/THMemoryFile.c.o
[ 75%] Building C object CMakeFiles/TH.dir/THLogAdd.c.o
[ 75%] Building C object CMakeFiles/TH.dir/THLapack.c.o
[ 81%] Building C object CMakeFiles/TH.dir/THBlas.c.o
[ 87%] Building C object CMakeFiles/TH.dir/THVector.c.o
[ 93%] Building C object CMakeFiles/TH.dir/THAtomic.c.o
...
/data/pytorch/torch/lib/tmp_install/include/THC/THCNumerics.cuh(38): warning: integer conversion resulted in a change of sign
...
Compiling src/reduce_scatter.cu > /data/pytorch/torch/lib/build/nccl/obj/reduce_scatter.o
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
...
byte-compiling /opt/anaconda2/lib/python2.7/site-packages/torch/utils/data/__init__.py to __init__.pyc
byte-compiling /opt/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataset.py to dataset.pyc
byte-compiling /opt/anaconda2/lib/python2.7/site-packages/torch/utils/data/distributed.py to distributed.pyc
byte-compiling /opt/anaconda2/lib/python2.7/site-packages/torch/_utils.py to _utils.pyc
running install_egg_info
running egg_info
creating torch.egg-info
writing requirements to torch.egg-info/requires.txt
writing torch.egg-info/PKG-INFO
writing top-level names to torch.egg-info/top_level.txt
writing dependency_links to torch.egg-info/dependency_links.txt
writing manifest file 'torch.egg-info/SOURCES.txt'
reading manifest file 'torch.egg-info/SOURCES.txt'
writing manifest file 'torch.egg-info/SOURCES.txt'
Copying torch.egg-info to /opt/anaconda2/lib/python2.7/site-packages/torch-0.2.0+efe91fb-py2.7.egg-info
running install_scripts
As you can see, there were only warning messages and in the end it compiles fine. Now let's import torch in Python and run a few simple tests. I am no great programmer, so I simply ran part of the tutorial from the PyTorch homepage (http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py) as-is.
root@ubuntu:/data/pytorch# python
Python 2.7.13 |Anaconda custom (64-bit)| (default, Mar 16 2017, 18:34:18)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> from __future__ import print_function
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: No module named _C
>>>
An error right off the bat? This one is actually rather silly (https://github.com/pytorch/pytorch/issues/7). The PyTorch source directory (here /data/pytorch) contains a subdirectory named torch, so if you run `import torch` from inside that directory, Python picks up the local directory instead of the installed package and fails. Simply move to any other directory and run python there, and the error does not occur (a quick way to verify this is sketched after the listing below).
root@ubuntu:/data/pytorch# ls
build DLConvertor.h LICENSE test tox.ini
cmake dlpack.h README.md tools
CONTRIBUTING.md Dockerfile requirements.txt torch
DLConvertor.cpp docs setup.py torch.egg-info
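If you want to see the shadowing for yourself, you can ask Python which "torch" it would resolve. This is just a minimal diagnostic sketch (assuming Python 2.7, as in this container), not part of the original session:

import imp  # Python 2.7 module finder; on Python 3 use importlib.util.find_spec

# Run this once inside /data/pytorch and once from any other directory.
f, path, desc = imp.find_module('torch')
print('torch resolves to: %s' % path)
# Inside /data/pytorch this prints the local ./torch source directory;
# elsewhere it prints /opt/anaconda2/lib/python2.7/site-packages/torch.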
Now let's move to some other directory and run python there.
root@ubuntu:/data/pytorch# cd
root@ubuntu:~# python
Python 2.7.13 |Anaconda custom (64-bit)| (default, Mar 16 2017, 18:34:18)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> from __future__ import print_function
>>> import torch
This time it imports without any problem. Let's try the Tensor and rand functions that Torch provides.
>>> x = torch.Tensor(5, 3)
>>> print(x)
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 9.1957e+35 2.2955e-41
9.2701e+35 2.2955e-41 1.1673e+36
2.2955e-41 9.2913e+35 2.2955e-41
0.0000e+00 0.0000e+00 0.0000e+00
[torch.FloatTensor of size 5x3]
>>> x = torch.rand(5, 3)
>>> print(x)
0.7949 0.8651 0.0330
0.5913 0.2181 0.9074
0.7759 0.0349 0.9361
0.3618 0.9953 0.8532
0.2193 0.1514 0.6486
[torch.FloatTensor of size 5x3]
>>> print(x.size())
(5L, 3L)
>>> y = torch.rand(5, 3)
>>> print(x + y)
0.8520 1.0601 0.7188
0.7161 0.3146 1.0981
1.4604 1.0081 0.9696
1.1450 1.7239 1.2189
0.2487 0.9476 1.6199
[torch.FloatTensor of size 5x3]
>>> print(torch.add(x, y))
0.8520 1.0601 0.7188
0.7161 0.3146 1.0981
1.4604 1.0081 0.9696
1.1450 1.7239 1.2189
0.2487 0.9476 1.6199
[torch.FloatTensor of size 5x3]
>>> result = torch.Tensor(5, 3)
>>> torch.add(x, y, out=result)
0.8520 1.0601 0.7188
0.7161 0.3146 1.0981
1.4604 1.0081 0.9696
1.1450 1.7239 1.2189
0.2487 0.9476 1.6199
[torch.FloatTensor of size 5x3]
>>> y.add_(x)
0.8520 1.0601 0.7188
0.7161 0.3146 1.0981
1.4604 1.0081 0.9696
1.1450 1.7239 1.2189
0.2487 0.9476 1.6199
[torch.FloatTensor of size 5x3]
Everything works. Let's also try converting a Torch Tensor to a numpy array.
>>> a = torch.ones(5)
>>> print(a)
1
1
1
1
1
[torch.FloatTensor of size 5]
>>> b = a.numpy()
>>> print(b)
[ 1. 1. 1. 1. 1.]
>>> a.add_(1)
2
2
2
2
2
[torch.FloatTensor of size 5]
>>> print(b)
[ 2. 2. 2. 2. 2.]
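Note that b changed along with a: a.numpy() shares the underlying memory rather than copying it. The conversion also works in the other direction with torch.from_numpy(), again sharing memory. A minimal sketch (not from the original session):

import numpy as np
import torch

a = np.ones(5)
b = torch.from_numpy(a)   # a torch.DoubleTensor that shares a's memory
np.add(a, 1, out=a)       # modify the numpy array in place
print(a)                  # [ 2.  2.  2.  2.  2.]
print(b)                  # the tensor shows the same values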
All good. Now let's run PyTorch on the GPU.
>>> if torch.cuda.is_available():
... x = x.cuda()
... y = y.cuda()
... x + y
...
1.6470 1.9252 0.7518
1.3074 0.5327 2.0054
2.2363 1.0430 1.9057
1.5068 2.7193 2.0721
0.4680 1.0990 2.2685
[torch.cuda.FloatTensor of size 5x3 (GPU 0)]
That works too. The moment .cuda() above runs, you can see below that the python process has already claimed a GPU.
Tue Oct 10 00:13:54 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 0000:03:00.0 Off | 0 |
| N/A 44C P0 58W / 149W | 200MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla K80 Off | 0000:04:00.0 Off | 0 |
| N/A 30C P8 31W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla K80 Off | 0020:03:00.0 Off | 0 |
| N/A 35C P8 26W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla K80 Off | 0020:04:00.0 Off | 0 |
| N/A 30C P8 29W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 40655 C python 198MiB |
+-----------------------------------------------------------------------------+
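Besides nvidia-smi, you can also query the GPUs from inside PyTorch using the standard torch.cuda calls. A minimal sketch (not part of the original session):

import torch

print(torch.cuda.is_available())    # True when a CUDA device is usable
print(torch.cuda.device_count())    # 4 on this K80 machine
print(torch.cuda.current_device())  # index of the default device used by .cuda()

# run on a specific GPU instead of the default one
with torch.cuda.device(1):
    z = torch.rand(5, 3).cuda()     # this tensor lands on GPU 1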
Now let's run MNIST, the simplest deep learning example, with PyTorch. First, download the example source code provided by the PyTorch project as shown below.
root@ubuntu:/data# git clone --recursive https://github.com/pytorch/examples.git
Cloning into 'examples'...
remote: Counting objects: 1461, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 1461 (delta 1), reused 2 (delta 0), pack-reused 1455
Receiving objects: 100% (1461/1461), 29.95 MiB | 6.56 MiB/s, done.
Resolving deltas: 100% (767/767), done.
Checking connectivity... done.
root@ubuntu:/data# cd examples/mnist
The downloaded requirements.txt contains just two lines, torch and torchvision, as shown below.
root@ubuntu:/data/examples/mnist# vi requirements.txt
torch
torchvision
Installing from it with pip installs torchvision anew, as follows.
root@ubuntu:/data/examples/mnist# pip install -r requirements.txt
Requirement already satisfied: torch in /opt/anaconda2/lib/python2.7/site-packages (from -r requirements.txt (line 1))
Collecting torchvision (from -r requirements.txt (line 2))
Downloading torchvision-0.1.9-py2.py3-none-any.whl (43kB)
100% |████████████████████████████████| 51kB 423kB/s
Requirement already satisfied: pyyaml in /opt/anaconda2/lib/python2.7/site-packages (from torch->-r requirements.txt (line 1))
Requirement already satisfied: numpy in /opt/anaconda2/lib/python2.7/site-packages (from torch->-r requirements.txt (line 1))
Requirement already satisfied: pillow in /opt/anaconda2/lib/python2.7/site-packages (from torchvision->-r requirements.txt (line 2))
Requirement already satisfied: six in /opt/anaconda2/lib/python2.7/site-packages (from torchvision->-r requirements.txt (line 2))
Requirement already satisfied: olefile in /opt/anaconda2/lib/python2.7/site-packages (from pillow->torchvision->-r requirements.txt (line 2))
Installing collected packages: torchvision
Successfully installed torchvision-0.1.9
All that remains is to run main.py in the mnist directory.
root@ubuntu:/data/examples/mnist# python main.py
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Processing...
...
Train Epoch: 10 [55040/60000 (92%)] Loss: 0.148484
Train Epoch: 10 [55680/60000 (93%)] Loss: 0.215679
Train Epoch: 10 [56320/60000 (94%)] Loss: 0.122693
Train Epoch: 10 [56960/60000 (95%)] Loss: 0.120907
Train Epoch: 10 [57600/60000 (96%)] Loss: 0.153347
Train Epoch: 10 [58240/60000 (97%)] Loss: 0.100982
Train Epoch: 10 [58880/60000 (98%)] Loss: 0.272780
Train Epoch: 10 [59520/60000 (99%)] Loss: 0.079338
Test set: Average loss: 0.0541, Accuracy: 9815/10000 (98%)
After 10 epochs (meaning the entire training dataset was iterated over 10 times), it reaches 98.15% accuracy. Of course only one GPU is used during this run, at roughly 30-35% utilization. (Note that these are K80s, not P100s.)
Tue Oct 10 00:23:02 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 0000:03:00.0 Off | 0 |
| N/A 44C P0 62W / 149W | 380MiB / 11441MiB | 30% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla K80 Off | 0000:04:00.0 Off | 0 |
| N/A 30C P8 31W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla K80 Off | 0020:03:00.0 Off | 0 |
| N/A 36C P8 26W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla K80 Off | 0020:04:00.0 Off | 0 |
| N/A 30C P8 29W / 149W | 2MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 40735 C python 378MiB |
+-----------------------------------------------------------------------------+
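For reference, the examples repository's main.py is essentially a small convolutional network with a standard training loop. What follows is only a compressed sketch of the same idea written against the current PyTorch API (the 0.2-era script additionally wraps tensors in Variable and adds details such as dropout, input normalization, and a test pass), not the actual script:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))   # 28x28 -> 12x12
        x = F.relu(F.max_pool2d(self.conv2(x), 2))   # 12x12 -> 4x4
        x = x.view(-1, 320)                          # flatten 20*4*4
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(1, 11):                 # 10 epochs, as in the run above
    for data, target in train_loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = F.nll_loss(model(data), target)
        loss.backward()
        optimizer.step()
    print('epoch %d done, last batch loss %.4f' % (epoch, loss.item()))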