
Tuesday, March 13, 2018

Building bvlc caffe on AC922 and running cifar10


I will assume an environment with Anaconda2 installed (for PyCaffe).  First, install the following packages.

[user1@ac922 ~]$ sudo yum install git gcc gcc-c++ python-devel python-enum34 numpy cmake automake snappy.ppc64le boost-python.ppc64le libgfortran4.ppc64le gtk+.ppc64le gtk+-devel.ppc64le gtk2.ppc64le gtk3.ppc64le gstreamer.ppc64le gstreamer-tools.ppc64le libdc1394.ppc64le libdc1394-tools.ppc64le

1. Install HDF5.

[user1@ac922 ~]$ wget https://support.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.10.1.tar
[user1@ac922 ~]$ tar -xf hdf5-1.10.1.tar
[user1@ac922 ~]$ cd hdf5-1.10.1
[user1@ac922 hdf5-1.10.1]$ ./configure --prefix=/usr/local --enable-fortran --enable-cxx --build=powerpc64le-linux-gnu
[user1@ac922 hdf5-1.10.1]$ make && sudo make install

2. Install boost.

[user1@ac922 ~]$ wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz
[user1@ac922 ~]$ tar -zxf boost_1_66_0.tar.gz
[user1@ac922 ~]$ cd boost_1_66_0
[user1@ac922 boost_1_66_0]$ ./bootstrap.sh --prefix=/usr/local
[user1@ac922 boost_1_66_0]$ ./b2
[user1@ac922 boost_1_66_0]$ sudo ./b2 install

3.  Install GFLAGS.

[user1@ac922 ~]$ wget https://github.com/schuhschuh/gflags/archive/master.zip
[user1@ac922 ~]$ unzip master.zip && cd gflags-master
[user1@ac922 gflags-master]$ mkdir build && cd build
[user1@ac922 build]$ cmake .. -DBUILD_SHARED_LIBS=ON -DBUILD_STATIC_LIBS=ON -DBUILD_gflags_LIB=ON
[user1@ac922 build]$ make && sudo make install

4.  Install GLOG.

[user1@ac922 ~]$ wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/google-glog/glog-0.3.3.tar.gz
[user1@ac922 ~]$ tar zxvf glog-0.3.3.tar.gz
[user1@ac922 ~]$ cd glog-0.3.3
[user1@ac922 glog-0.3.3]$ ./configure --build=powerpc64le-redhat-linux-gnu
[user1@ac922 glog-0.3.3]$ make && sudo make install


5.  Install LMDB.

[user1@ac922 ~]$ git clone https://github.com/LMDB/lmdb
[user1@ac922 ~]$ cd lmdb/libraries/liblmdb
[user1@ac922 liblmdb]$ make && sudo make install

6.  Install LEVELDB.

[user1@ac922 files]$ wget https://rpmfind.net/linux/epel/7/ppc64le/Packages/l/leveldb-1.12.0-11.el7.ppc64le.rpm
[user1@ac922 files]$ wget https://www.rpmfind.net/linux/epel/7/ppc64le/Packages/l/leveldb-devel-1.12.0-11.el7.ppc64le.rpm
[user1@ac922 files]$ sudo rpm -Uvh leveldb-1.12.0-11.el7.ppc64le.rpm
[user1@ac922 files]$ sudo rpm -Uvh leveldb-devel-1.12.0-11.el7.ppc64le.rpm

7.  Install OpenBLAS.

[user1@ac922 ~]$ git clone https://github.com/xianyi/OpenBLAS.git
[user1@ac922 ~]$ cd OpenBLAS
[user1@ac922 OpenBLAS]$ git checkout power8
[user1@ac922 OpenBLAS]$ make TARGET=POWER8 LDFLAGS="-fopenmp"
[user1@ac922 OpenBLAS]$ sudo make TARGET=POWER8 LDFLAGS="-fopenmp" install
...
Copying the static library to /opt/OpenBLAS/lib
Copying the shared library to /opt/OpenBLAS/lib
Generating OpenBLASConfig.cmake in /opt/OpenBLAS/lib/cmake/openblas
Generating OpenBLASConfigVersion.cmake in /opt/OpenBLAS/lib/cmake/openblas
Install OK!
make[1]: Leaving directory `/home/user1/OpenBLAS'


8.  Install OpenCV.

[user1@ac922 ~]$ git clone --recursive https://github.com/opencv/opencv.git
[user1@ac922 ~]$ git clone --recursive https://github.com/opencv/opencv_contrib.git
[user1@ac922 ~]$ cd opencv
[user1@ac922 opencv]$ git checkout tags/3.4.1
[user1@ac922 opencv]$ mkdir build && cd build
[user1@ac922 build]$ which protoc
~/anaconda2/bin/protoc
[user1@ac922 build]$ export PROTOBUF_PROTOC_EXECUTABLE="~/anaconda2/bin/protoc"
[user1@ac922 build]$ cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -DOPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules  -D WITH_EIGEN=OFF -DBUILD_LIBPROTOBUF_FROM_SOURCES=ON  ..
[user1@ac922 build]$ make && sudo make install
...
-- Installing: /usr/local/bin/opencv_visualisation
-- Set runtime path of "/usr/local/bin/opencv_visualisation" to "/usr/local/lib64:/usr/local/cuda/lib64"
-- Installing: /usr/local/bin/opencv_interactive-calibration
-- Set runtime path of "/usr/local/bin/opencv_interactive-calibration" to "/usr/local/lib64:/usr/local/cuda/lib64"
-- Installing: /usr/local/bin/opencv_version
-- Set runtime path of "/usr/local/bin/opencv_version" to "/usr/local/lib64:/usr/local/cuda/lib64"

9.  Build NCCL.

[user1@ac922 ~]$ git clone https://github.com/NVIDIA/nccl
[user1@ac922 ~]$ cd nccl
[user1@ac922 nccl]$ make
[user1@ac922 nccl]$ sudo make install

10.  Now, at last, we can build caffe.

[user1@ac922 ~]$ git clone https://github.com/BVLC/caffe.git
[user1@ac922 ~]$ cd caffe
[user1@ac922 caffe]$ cp Makefile.config.example Makefile.config
[user1@ac922 caffe]$ vi Makefile.config
...
# USE_CUDNN := 1
USE_CUDNN := 1
...
# OPENCV_VERSION := 3
OPENCV_VERSION := 3
...
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
                -gencode arch=compute_20,code=sm_21 \
                -gencode arch=compute_30,code=sm_30 \
                -gencode arch=compute_35,code=sm_35 \
                -gencode arch=compute_50,code=sm_50 \
                -gencode arch=compute_52,code=sm_52 \
                -gencode arch=compute_60,code=sm_60 \
                -gencode arch=compute_61,code=sm_61 \
                -gencode arch=compute_61,code=compute_61
CUDA_ARCH := -gencode arch=compute_60,code=sm_60 \
                -gencode arch=compute_61,code=sm_61 \
                -gencode arch=compute_61,code=compute_61
...
# BLAS := atlas
BLAS := open
...
#PYTHON_INCLUDE := /usr/include/python2.7 \
                /usr/lib/python2.7/dist-packages/numpy/core/include
ANACONDA_HOME := $(HOME)/anaconda2
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
                 $(ANACONDA_HOME)/include/python2.7 \
                 $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
...
# PYTHON_LIB := /usr/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib
...
# WITH_PYTHON_LAYER := 1
WITH_PYTHON_LAYER := 1
...
# USE_NCCL := 1
USE_NCCL := 1

LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda2/lib   # added to prevent "~/anaconda2/lib/libpng16.so.16 undefined reference to `inflateValidate@ZLIB_1.2.9" error


At this point, you need to create the following soft links to avoid errors such as "cannot find -lsnappy".

[user1@ac922 caffe]$ sudo ln -s /usr/lib64/libsnappy.so.1 /usr/lib64/libsnappy.so
[user1@ac922 caffe]$ sudo ln -s /usr/lib64/libboost_python.so.1.53.0 /usr/lib64/libboost_python.so

[user1@ac922 caffe]$ make all
...
CXX/LD -o .build_release/examples/cpp_classification/classification.bin
CXX examples/mnist/convert_mnist_data.cpp
CXX/LD -o .build_release/examples/mnist/convert_mnist_data.bin
CXX examples/siamese/convert_mnist_siamese_data.cpp
CXX/LD -o .build_release/examples/siamese/convert_mnist_siamese_data.bin

[user1@ac922 caffe]$ sudo mkdir /opt/caffe
[user1@ac922 caffe]$ sudo cp -r build/* /opt/caffe

11.  Now let's run cifar10.

[user1@ac922 caffe]$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/lib64:/usr/lib:/usr/lib64

[user1@ac922 caffe]$ export CAFFE_HOME=/home/user1/caffe/build/tools

[user1@ac922 caffe]$ cd data/cifar10/

[user1@ac922 cifar10]$ ./get_cifar10.sh
Downloading...
--2018-03-13 14:13:14--  http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
Resolving www.cs.toronto.edu (www.cs.toronto.edu)... 128.100.3.30
Connecting to www.cs.toronto.edu (www.cs.toronto.edu)|128.100.3.30|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 170052171 (162M) [application/x-gzip]
Saving to: ‘cifar-10-binary.tar.gz’

100%[==================================================================>] 170,052,171 11.0MB/s   in 15s

2018-03-13 14:13:29 (11.0 MB/s) - ‘cifar-10-binary.tar.gz’ saved [170052171/170052171]

Unzipping...
Done.

[user1@ac922 cifar10]$ ls -la
total 180084
drwxrwxr-x. 2 user1 user1      213 Mar 13 14:13 .
drwxrwxr-x. 5 user1 user1       50 Mar 13 10:50 ..
-rw-r--r--. 1 user1 user1       61 Jun  5  2009 batches.meta.txt
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 data_batch_1.bin
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 data_batch_2.bin
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 data_batch_3.bin
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 data_batch_4.bin
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 data_batch_5.bin
-rwxrwxr-x. 1 user1 user1      506 Mar 13 10:50 get_cifar10.sh
-rw-r--r--. 1 user1 user1       88 Jun  5  2009 readme.html
-rw-r--r--. 1 user1 user1 30730000 Jun  5  2009 test_batch.bin
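
Incidentally, the file sizes above are no accident: each CIFAR-10 binary batch is 10,000 records of 1 label byte followed by 3,072 pixel bytes (32x32x3), i.e. 10,000 x 3,073 = 30,730,000 bytes.  A minimal parser sketch, run here on a synthetic record rather than the real file:

```python
RECORD = 1 + 32 * 32 * 3   # label byte + 3072 pixel bytes = 3073 per record

def parse_record(buf, i):
    """Return (label, pixels) for the i-th record of a CIFAR-10 binary batch."""
    rec = buf[i * RECORD:(i + 1) * RECORD]
    return rec[0], rec[1:]   # pixels: 1024 R, then 1024 G, then 1024 B bytes

# Synthetic one-record "batch": label 7, all pixels set to 255
fake = bytes([7]) + bytes([255]) * 3072
label, pixels = parse_record(fake, 0)
print(label, len(pixels))   # -> 7 3072
```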

[user1@ac922 caffe]$ ./examples/cifar10/create_cifar10.sh

[user1@ac922 caffe]$ ls -l examples/cifar10/*_lmdb
examples/cifar10/cifar10_test_lmdb:
total 35656
-rw-rw-r--. 1 user1 user1 36503552 Mar 13 14:14 data.mdb
-rw-rw-r--. 1 user1 user1     8192 Mar 13 14:14 lock.mdb

examples/cifar10/cifar10_train_lmdb:
total 177992
-rw-rw-r--. 1 user1 user1 182255616 Mar 13 14:14 data.mdb
-rw-rw-r--. 1 user1 user1      8192 Mar 13 14:14 lock.mdb

[user1@ac922 caffe]$ vi ./examples/cifar10/train_full.sh
#!/usr/bin/env sh
set -e

TOOLS=./build/tools

$TOOLS/caffe train \
    --solver=examples/cifar10/cifar10_full_solver.prototxt $@

# reduce learning rate by factor of 10
$TOOLS/caffe train \
    --solver=examples/cifar10/cifar10_full_solver_lr1.prototxt \
    --snapshot=examples/cifar10/cifar10_full_iter_60000.solverstate.h5 $@
#    --snapshot=examples/cifar10/cifar10_full_iter_60000.solverstate $@ 

# reduce learning rate by factor of 10
$TOOLS/caffe train \
    --solver=examples/cifar10/cifar10_full_solver_lr2.prototxt \
    --snapshot=examples/cifar10/cifar10_full_iter_65000.solverstate.h5 $@
#    --snapshot=examples/cifar10/cifar10_full_iter_65000.solverstate $@

# For some reason, a file named cifar10_full_iter_60000.solverstate.h5 is created instead of cifar10_full_iter_60000.solverstate, so the file names were changed accordingly

[user1@ac922 caffe]$ time ./examples/cifar10/train_full.sh
I0313 14:15:55.463438 114263 caffe.cpp:204] Using GPUs 0
I0313 14:15:55.529319 114263 caffe.cpp:209] GPU 0: Tesla V100-SXM2-16GB
...
I0313 14:34:16.333791 126407 solver.cpp:239] Iteration 69800 (177.976 iter/s, 1.12375s/200 iters), loss = 0.332006
I0313 14:34:16.333875 126407 solver.cpp:258]     Train net output #0: loss = 0.332006 (* 1 = 0.332006 loss)
I0313 14:34:16.333892 126407 sgd_solver.cpp:112] Iteration 69800, lr = 1e-05
I0313 14:34:17.436130 126413 data_layer.cpp:73] Restarting data prefetching from start.
I0313 14:34:17.453459 126407 solver.cpp:478] Snapshotting to HDF5 file examples/cifar10/cifar10_full_iter_70000.caffemodel.h5
I0313 14:34:17.458664 126407 sgd_solver.cpp:290] Snapshotting solver state to HDF5 file examples/cifar10/cifar10_full_iter_70000.solverstate.h5
I0313 14:34:17.461360 126407 solver.cpp:331] Iteration 70000, loss = 0.294117
I0313 14:34:17.461383 126407 solver.cpp:351] Iteration 70000, Testing net (#0)
I0313 14:34:17.610864 126424 data_layer.cpp:73] Restarting data prefetching from start.
I0313 14:34:17.612763 126407 solver.cpp:418]     Test net output #0: accuracy = 0.8169
I0313 14:34:17.612794 126407 solver.cpp:418]     Test net output #1: loss = 0.533315 (* 1 = 0.533315 loss)
I0313 14:34:17.612810 126407 solver.cpp:336] Optimization Done.
I0313 14:34:17.612821 126407 caffe.cpp:250] Optimization Done.

real    6m51.615s
user    7m30.483s
sys     1m5.158s
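
As a quick sanity check, the wall-clock time is consistent with the throughput the solver logged above: 70,000 iterations in about 411.6 seconds of real time is roughly 170 iter/s, close to the reported 177.976 iter/s (the gap being start-up, snapshotting, and the test passes):

```python
iters = 70000                     # 60000 + 5000 + 5000 across the three solver runs
real_seconds = 6 * 60 + 51.615    # "real 6m51.615s" above
print(round(iters / real_seconds, 1))   # -> 170.1
```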

Saturday, January 13, 2018

Building tensorflow 1.4.1 from source in a python3 environment on AC922 with Redhat

As you saw in the previous post, the official way to use tensorflow 1.4 in an AC922 Redhat 7.4 environment is to use the Tensorflow Technical Preview, which IBM provides separately and only to customers who purchased an AC922.  However, it currently supports only python2, so it cannot be used with python3.  (Full support is planned for 2Q 2018.)

Still, that does not mean there is no way at all to use tensorflow 1.4 with python3: you can build it yourself.

Here we can use the bazel 0.5.4 included in the Tensorflow Technical Preview.  First set the path to Anaconda3 as below, and then put the directory of that bazel 0.5.4 at the very front of PATH.

[root@ac922 nvme]# export PATH="/opt/DL/bazel/bin:/opt/anaconda3/bin:$PATH"
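
The ordering matters: the shell resolves commands by scanning PATH left to right, so prepending /opt/DL/bazel/bin guarantees its bazel 0.5.4 shadows any other bazel on the system.  A tiny illustration of the mechanism, using throwaway dummy scripts under /tmp rather than the real bazel:

```shell
# Illustration only: PATH is searched left to right, so the
# directory listed first wins.
mkdir -p /tmp/dl_bazel /tmp/other_bazel
printf '#!/bin/sh\necho bazel-0.5.4\n' > /tmp/dl_bazel/bazel
printf '#!/bin/sh\necho bazel-other\n' > /tmp/other_bazel/bazel
chmod +x /tmp/dl_bazel/bazel /tmp/other_bazel/bazel
PATH="/tmp/dl_bazel:/tmp/other_bazel:$PATH" bazel    # prints bazel-0.5.4
```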

Next, install protobuf and the other required packages.

[root@ac922 ~]# conda install protobuf

[root@ac922 ~]# which protoc
/opt/anaconda3/bin/protoc

[root@ac922 ~]# export PROTOC=/opt/anaconda3/bin/protoc

[root@ac922 nvme]# yum install apr-util-devel.ppc64le ant cmake.ppc64le automake.noarch ftp libtool.ppc64le libtool-ltdl-devel.ppc64le apr-util-openssl.ppc64le openssl-devel.ppc64le  golang.ppc64le golang-bin.ppc64le


(Optional:  instead of using the bazel 0.5.4 included in the Tensorflow Technical Preview, you can also download the bazel-*-dist.zip of a recent bazel release and build it yourself, as follows.

[root@ac922 nvme]# wget https://github.com/bazelbuild/bazel/releases/download/0.8.1/bazel-0.8.1-dist.zip

[root@ac922 nvme]# mkdir bazel-0.8.1 && cd bazel-0.8.1

[root@ac922 bazel-0.8.1]# unzip ../bazel-0.8.1-dist.zip

[root@ac922 bazel-0.8.1]# ./compile.sh

[root@ac922 bazel-0.8.1]# cp output/bazel /usr/local/bin

End of optional part.)


Now download the tensorflow source.

[root@ac922 nvme]# git clone https://github.com/tensorflow/tensorflow

[root@ac922 nvme]# cd tensorflow

[root@ac922 tensorflow]# git checkout tags/v1.4.1

[root@ac922 tensorflow]# conda install wheel numpy six

[root@ac922 tensorflow]# export LD_LIBRARY_PATH=/usr/local/cuda-9.1/lib64:/usr/lib64:/usr/lib:/usr/local/lib64:/usr/local/lib:$LD_LIBRARY_PATH

[root@ac922 tensorflow]# export PATH=/opt/DL/bazel/bin:$PATH

Next you would normally just run ./configure followed by bazel build... but then the following boringssl-related error occurs.

[root@ac922 tensorflow]# bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
....
ERROR: /root/.cache/bazel/_bazel_root/c33b26ecf6ca982d66935dcfbfc79c56/external/boringssl/BUILD:118:1: C++ compilation of rule '@boringssl//:crypto' failed (Exit 1).
In file included from external/boringssl/src/crypto/fipsmodule/bcm.c:92:0:
external/boringssl/src/crypto/fipsmodule/sha/sha1.c:125:6: error: static declaration of 'sha1_block_data_order' follows non-static declaration
 void sha1_block_data_order(uint32_t *state, const uint8_t *data, size_t num);
      ^
In file included from external/boringssl/src/crypto/fipsmodule/bcm.c:91:0:
external/boringssl/src/crypto/fipsmodule/sha/sha1-altivec.c:190:6: note: previous definition of 'sha1_block_data_order' was here
 void sha1_block_data_order(uint32_t *state, const uint8_t *data, size_t num) {
      ^
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 77.133s, Critical Path: 20.67s


To fix this problem, the following two patches are needed.   I will attach the patch contents separately at the very bottom.

[root@ac922 tensorflow]# patch < 120-curl-build-fix.patch
can't find file to patch at input line 5
Perhaps you should have used the -p or --strip option?
The text leading up to this was:
--------------------------
|diff --git a/third_party/curl.BUILD b/third_party/curl.BUILD
|index 882967d..3c48dfa 100644
|--- a/third_party/curl.BUILD
|+++ b/third_party/curl.BUILD
--------------------------
File to patch: third_party/curl.BUILD
patching file third_party/curl.BUILD
[root@ac922 tensorflow]# patch < 140-boring-ssl.patch
can't find file to patch at input line 5
Perhaps you should have used the -p or --strip option?
The text leading up to this was:
--------------------------
|diff --git a/third_party/boringssl/add_boringssl_s390x.patch b/third_party/boringssl/add_boringssl_s390x.patch
|index 8b42d10..26c51a3 100644
|--- a/third_party/boringssl/add_boringssl_s390x.patch
|+++ b/third_party/boringssl/add_boringssl_s390x.patch
--------------------------
File to patch: third_party/boringssl/add_boringssl_s390x.patch
patching file third_party/boringssl/add_boringssl_s390x.patch


Even after applying these patches, a "fatal error: math_functions.hpp: No such file or directory" occurs.  This can be fixed by modifying tensorflow/workspace.bzl as follows, referring to the URLs below.

# from https://github.com/tensorflow/tensorflow/issues/15389 & https://github.com/angersson/tensorflow/commit/599dc70e9e478b4bc24fb2329c175ea978ef620a

[root@ac922 tensorflow]# vi tensorflow/workspace.bzl
...
  native.new_http_archive(
      name = "eigen_archive",
      urls = [
#          "https://bitbucket.org/eigen/eigen/get/429aa5254200.tar.gz",
#          "http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/429aa5254200.tar.gz",
          "https://bitbucket.org/eigen/eigen/get/034b6c3e1017.tar.gz",
          "http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/034b6c3e1017.tar.gz",
      ],
#      sha256 = "61d8b6fc4279dd1dda986fb1677d15e3d641c07a3ea5abe255790b1f0c0c14e9",
#      strip_prefix = "eigen-eigen-429aa5254200",
      sha256 = "0a8ac1e83ef9c26c0e362bd7968650b710ce54e2d883f0df84e5e45a3abe842a",
      strip_prefix = "eigen-eigen-034b6c3e1017",
      build_file = str(Label("//third_party:eigen.BUILD")),
  )
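
Note that the sha256 in workspace.bzl has to match the archive bazel actually downloads, or the build stops with a checksum mismatch.  If you ever substitute yet another eigen tarball, the value can be computed like this (demonstrated on a dummy file, not the real tarball):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through sha256, as bazel does when verifying archives."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Dummy stand-in for eigen-eigen-034b6c3e1017.tar.gz
with open("/tmp/dummy.tar.gz", "wb") as f:
    f.write(b"not a real tarball")
print(sha256_of("/tmp/dummy.tar.gz"))
```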

Now run ./configure and bazel build.

[root@ac922 tensorflow]# ./configure
...
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
...
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
...
Do you wish to build TensorFlow with CUDA support? [y/N]: y
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]: 9.1
Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.1
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]: 7
Please specify the location where cuDNN 7.0 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.1]:/usr/local/cuda-9.1/targets/ppc64le-linux/lib
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]7.0
...
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -mcpu=native]: -mcpu=power8
(gcc does not yet support -mcpu=power9, so it must be replaced with power8.  If you leave this empty, the default becomes power9 and an error occurs.)
...


[root@ac922 tensorflow]# bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

[root@ac922 tensorflow]# bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

Install the resulting tensorflow-1.4.1-cp36-cp36m-linux_ppc64le.whl with pip.

[root@ac922 tensorflow]# ls -l /tmp/tensorflow_pkg
total 67864
-rw-r--r--. 1 root root 69491907 Jan 13 12:23 tensorflow-1.4.1-cp36-cp36m-linux_ppc64le.whl
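
The long filename is meaningful: per PEP 427 it encodes the distribution name, version, python tag, ABI tag, and platform tag, and pip will only install a wheel whose tags match the running interpreter, here CPython 3.6 on linux_ppc64le.  A small parser sketch (ignoring the optional build tag):

```python
def parse_wheel_name(filename):
    """Split a PEP 427 wheel filename into its tag components."""
    stem = filename[:-len(".whl")]
    name, version, py_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": platform_tag}

info = parse_wheel_name("tensorflow-1.4.1-cp36-cp36m-linux_ppc64le.whl")
print(info["python"], info["platform"])   # -> cp36 linux_ppc64le
```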

[root@ac922 tensorflow]# which pip
/opt/anaconda3/bin/pip

[root@ac922 tensorflow]# pip install /tmp/tensorflow_pkg/tensorflow-1.4.1-cp36-cp36m-linux_ppc64le.whl

[root@ac922 tensorflow]# conda list | grep tensor
tensorflow                1.4.1                     <pip>
tensorflow-tensorboard    0.4.0rc3                  <pip>

Now you can use tensorflow 1.4.1 with python3.


PS.  Here are the contents of the patch files (140-boring-ssl.patch & 120-curl-build-fix.patch) applied above.

[root@ac922 tensorflow]# cat 140-boring-ssl.patch
diff --git a/third_party/boringssl/add_boringssl_s390x.patch b/third_party/boringssl/add_boringssl_s390x.patch
index 8b42d10..26c51a3 100644
--- a/third_party/boringssl/add_boringssl_s390x.patch
+++ b/third_party/boringssl/add_boringssl_s390x.patch
@@ -131,3 +131,19 @@ index 6b645e61..c90b7beb 100644
          "//conditions:default": ["-lpthread"],
      }),
      visibility = ["//visibility:public"],
+diff --git a/src/crypto/fipsmodule/sha/sha1.c b/src/crypto/fipsmodule/sha/sha1.c
+index 7ce0193..9791fa5 100644
+--- a/src/crypto/fipsmodule/sha/sha1.c
++++ b/src/crypto/fipsmodule/sha/sha1.c
+@@ -63,9 +63,9 @@
+ #include "../../internal.h"
+
+
+-#if !defined(OPENSSL_NO_ASM) &&                         \
++#if (!defined(OPENSSL_NO_ASM) &&                         \
+     (defined(OPENSSL_X86) || defined(OPENSSL_X86_64) || \
+-     defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64) || \
++     defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64)) || \
+      defined(OPENSSL_PPC64LE))
+ #define SHA1_ASM
+ #endif


[root@ac922 tensorflow]# cat 120-curl-build-fix.patch
diff --git a/third_party/curl.BUILD b/third_party/curl.BUILD
index 882967d..3c48dfa 100644
--- a/third_party/curl.BUILD
+++ b/third_party/curl.BUILD
@@ -479,7 +479,12 @@ genrule(
         "#  define HAVE_SSL_GET_SHUTDOWN 1",
         "#  define HAVE_STROPTS_H 1",
         "#  define HAVE_TERMIOS_H 1",
+        "#if defined(__powerpc64__) || defined(__powerpc__)",
+        "#  define OS \"powerpc64le-ibm-linux-gnu\"",
+        "#  undef HAVE_STROPTS_H",
+        "#else",
         "#  define OS \"x86_64-pc-linux-gnu\"",
+        "#endif",
         "#  define RANDOM_FILE \"/dev/urandom\"",
         "#  define USE_OPENSSL 1",
         "#endif",


In addition, I will upload the wheel file of the tensorflow 1.4.1 for python3 built in the process above to the google drive below.  Please understand that this is not a file whose quality I can guarantee.

https://drive.google.com/open?id=1_C2BZJ9G6HekxV2U6mil2sVf3WJIlh-n

Installing the Tensorflow Technical Preview on AC922 "Newell" with Redhat 7.4

First, to support environments without an internet connection, create a redhat local repository from the ISO image.

[root@ac922 nvme]# ls -l *.iso
-rw-r--r--. 1 root root 3187027968 Jan 10 12:42 rhel-alt-server-7.4-ppc64le-dvd.iso

[root@ac922 nvme]# mount -t iso9660 -o loop rhel-alt-server-7.4-ppc64le-dvd.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only

[root@ac922 nvme]# vi /etc/yum.repos.d/local.repo
[local]
baseurl=file:///mnt/
gpgcheck=0
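
The three lines above are the minimum yum needs; a slightly fuller version with the optional keys spelled out might look like this (the name value is only a label, and enabled=1 is already the default):

```ini
[local]
name=RHEL 7.4 ALT local ISO repository
baseurl=file:///mnt/
enabled=1
gpgcheck=0
```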


Install the prerequisite packages for installing CUDA.

[root@ac922 ~]# sudo yum -y install wget nano bzip2

[root@ac922 ~]# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

[root@ac922 ~]# rpm -ihv epel-release-latest-7.noarch.rpm

[root@ac922 ~]# yum update kernel kernel-tools kernel-tools-libs kernel-bootwrapper gcc openssh-server openssh-clients openssl-devel python-devel kernel-devel-uname-r elfutils-libelf-devel

[root@ac922 ~]# yum update


Then, as required by the Tensorflow Technical Preview, edit the file below and reboot.

[root@ac922 ~]# vi /lib/udev/rules.d/40-redhat.rules   # comment out the following line
...
#SUBSYSTEM=="memory", ACTION=="add", PROGRAM="/bin/uname -p", RESULT!="s390*", ATTR{state}=="offline", ATTR{state}="online"

[root@ac922 ~]# shutdown -r now


Now download and install Anaconda.  For reference, the Tensorflow Technical Preview from IBM still officially supports only TF1.4 with python2; official support, including python3, is scheduled for 2Q 2018.   Since we will also build TF1.4.1 for python3 here, install Anaconda3 as well.

[root@ac922 nvme]# wget https://repo.continuum.io/archive/Anaconda2-5.0.0-Linux-ppc64le.sh
[root@ac922 nvme]# wget https://repo.continuum.io/archive/Anaconda3-5.0.0-Linux-ppc64le.sh

[root@ac922 nvme]# ./Anaconda2-5.0.0-Linux-ppc64le.sh
...
[/root/anaconda2] >>> /opt/anaconda2

[root@ac922 nvme]# ./Anaconda3-5.0.0-Linux-ppc64le.sh
...
[/root/anaconda3] >>> /opt/anaconda3

Here anaconda2 must be used, because this Tensorflow ESP supports only python 2.x.

[root@ac922 nvme]# export PATH="/opt/anaconda2/bin:$PATH"


Now install CUDA.

[root@ac922 nvme]# rpm -Uvh cuda-repo-rhel7-9-1-local-9.1.85-1.ppc64le.rpm

[root@ac922 nvme]# wget ftp://fr2.rpmfind.net/linux/fedora-secondary/development/rawhide/Everything/ppc64le/os/Packages/d/dkms-2.4.0-1.20170926git959bd74.fc28.noarch.rpm

[root@ac922 nvme]# rpm -Uvh dkms-2.4.0-1.20170926git959bd74.fc28.noarch.rpm

[root@ac922 nvme]# yum install cuda

[root@ac922 nvme]# cd /usr/local

[root@ac922 local]# tar -ztvf /nvme/cudnn-9.1.tgz
-r--r--r-- erisuser/erisuser 107140 2017-11-01 21:13 cuda/targets/ppc64le-linux/include/cudnn.h
-r--r--r-- erisuser/erisuser  38963 2017-10-20 21:28 cuda/targets/ppc64le-linux/NVIDIA_SLA_cuDNN_Support.txt
lrwxrwxrwx erisuser/erisuser      0 2017-11-17 14:24 cuda/targets/ppc64le-linux/lib/libcudnn.so -> libcudnn.so.7
lrwxrwxrwx erisuser/erisuser      0 2017-11-17 14:24 cuda/targets/ppc64le-linux/lib/libcudnn.so.7 -> libcudnn.so.7.0.5
-rwxrwxr-x erisuser/erisuser 282621088 2017-11-17 13:23 cuda/targets/ppc64le-linux/lib/libcudnn.so.7.0.5
-rw-rw-r-- erisuser/erisuser 277149668 2017-11-17 14:05 cuda/targets/ppc64le-linux/lib/libcudnn_static.a

This is a bit odd, but cudnn.h is sometimes looked for later in /usr/local/cuda/targets/ppc64le-linux/lib, so create the following soft link for now.

[root@ac922 local]# ln -s /usr/local/cuda/targets/ppc64le-linux/include/cudnn.h /usr/local/cuda/targets/ppc64le-linux/lib/cudnn.h

Once CUDA is installed, install the JDK, cmake, and so on.  These will be needed later when building TF1.4.1 in the python3 environment.

[root@ac922 ~]# yum install java-1.8.0-openjdk.ppc64le java-1.8.0-openjdk-headless.ppc64le java-1.8.0-openjdk-devel.ppc64le cmake.ppc64le automake.noarch ftp libtool.ppc64le libtool-ltdl-devel.ppc64le apr-util-devel.ppc64le openssl-devel.ppc64le



Now let's install the Tensorflow Technical Preview provided by IBM.  It turns out this is provided separately, only to customers who have purchased an AC922.

[root@ac922 tf_tech_preview]# ls -l
total 21080
-rw-r--r--. 1 root root  5868971 Jan 11 10:02 ibm_smpi-10.02.00.00eval-rh7_20171214.ppc64le.rpm
-rw-r--r--. 1 root root   498399 Jan 11 10:02 ibm_smpi-devel-10.02.00.00eval-rh7_20171214.ppc64le.rpm
-rw-r--r--. 1 root root  4380209 Jan 11 10:02 ibm_smpi_lic_s-10.02.00eval-rh7_20171214.ppc64le.rpm
-rw-r--r--. 1 root root 10816940 Jan 11 10:02 mldl-repo-local-esp-5.0.0-20.7e4ad85.ppc64le.rpm
-rw-r--r--. 1 root root    13569 Jan 11 10:02 README.md

As shown below, install the smpi-related packages first, then install mldl-repo-local.

[root@ac922 tf_tech_preview]# rpm -Uvh ibm_smpi*.rpm mldl-repo-local-esp-5.0.0-20.7e4ad85.ppc64le.rpm

[root@ac922 tf_tech_preview]# yum update

[root@ac922 tf_tech_preview]# yum install power-mldl-esp


To use the installed Tensorflow Technical Preview, you must first run *-activate, just as with the older PowerAI.  Note that there is now also a license agreement step and an automatic dependency installation step.

[root@ac922 ~]# /opt/DL/license/bin/accept-powerai-license.sh   # this must be done first, or the following steps will not proceed

[root@ac922 ~]# /opt/DL/tensorflow/bin/install_dependencies    # this step is also required before you can use the Tensorflow Technical Preview
Fetching package metadata ...........
Solving package specifications: .

Package plan for installation in environment /opt/anaconda2:

The following NEW packages will be INSTALLED:

    backports.weakref: 1.0rc1-py27_0
    libprotobuf:       3.4.0-hd26fab5_0
    mock:              2.0.0-py27_0
    pbr:               1.10.0-py27_0
    protobuf:          3.4.0-py27h7448ec6_0

Proceed ([y]/n)? y

libprotobuf-3. 100% |##########################################################| Time: 0:00:00  42.42 MB/s
backports.weak 100% |##########################################################| Time: 0:00:00   9.48 MB/s
protobuf-3.4.0 100% |##########################################################| Time: 0:00:00 551.85 kB/s
pbr-1.10.0-py2 100% |##########################################################| Time: 0:00:00 286.73 kB/s
mock-2.0.0-py2 100% |##########################################################| Time: 0:00:00 249.62 kB/s

After that, running /opt/DL/tensorflow/bin/tensorflow-activate as before gets tensorflow ready to use.

[root@ac922 ~]# source /opt/DL/tensorflow/bin/tensorflow-activate