* The hadoop-2.7.4.tar.gz built below has been uploaded to Google Drive so that it can be downloaded from the following link.
https://drive.google.com/open?id=1W0QYAD5DkSeY_vBHRHmu_iril4t9svJz
Compiling hadoop on a POWER (ppc64le) chip is very simple: just follow the steps in https://github.com/apache/hadoop/blob/trunk/BUILDING.txt. The only addition is installing protobuf 2.5 separately, as shown below, because of a protobuf version mismatch.
First, install the basic packages required on Ubuntu.
u0017649@sys-90043:~$ sudo apt-get install software-properties-common
u0017649@sys-90043:~$ sudo apt-get -y install maven build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev protobuf-compiler snappy libsnappy-dev
u0017649@sys-90043:~$ sudo apt-get install libjansson-dev bzip2 libbz2-dev fuse libfuse-dev zstd
For protoc, download the protobuf 2.5.0 source and install it as follows. The version shipped with the OS and installable via apt-get is 2.6.1, but oddly enough hadoop insists on exactly 2.5.0.
u0017649@sys-90043:~$ git clone --recursive https://github.com/ibmsoe/Protobuf.git
u0017649@sys-90043:~$ cd Protobuf
u0017649@sys-90043:~/Protobuf$ ./configure
u0017649@sys-90043:~/Protobuf$ make
u0017649@sys-90043:~/Protobuf$ sudo make install
u0017649@sys-90043:~/Protobuf$ which protoc
/usr/local/bin/protoc
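Not part of the original log: after make install it may be worth refreshing the linker cache and confirming that protoc now reports the version hadoop expects, roughly like this.
$ sudo ldconfig        # refresh the shared library cache so the freshly installed libprotobuf is found
$ protoc --version     # should print: libprotoc 2.5.0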
* For reference, if you do not install protobuf separately as above, you will hit the following error.
[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.1.0-SNAPSHOT:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1]
Set the environment variables as well.
u0017649@sys-90043:~$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-ppc64el
u0017649@sys-90043:~$ export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
u0017649@sys-90043:~$ export MAVEN_OPTS="-Xmx2048m"
Then download the hadoop-2.7.4 source as follows. The latest version at the moment is 3.0, but it does not appear to be a stable release yet, so we will use 2.7.4, the version included in recent HortonWorks distributions.
u0017649@sys-90043:~$ wget http://apache.tt.co.kr/hadoop/common/hadoop-2.7.4/hadoop-2.7.4-src.tar.gz
u0017649@sys-90043:~$ tar -zxf hadoop-2.7.4-src.tar.gz
u0017649@sys-90043:~$ cd hadoop-2.7.4-src
The build itself is done with maven; it takes a while, but is relatively very simple. Running the command below produces the built binaries bundled into a tar.gz under the hadoop-dist/target directory.
u0017649@sys-90043:~/hadoop-2.7.4-src$ mvn package -Pdist -DskipTests -Dtar
...
main:
[exec] $ tar cf hadoop-2.7.4.tar hadoop-2.7.4
[exec] $ gzip -f hadoop-2.7.4.tar
[exec]
[exec] Hadoop dist tar available at: /home/u0017649/hadoop-2.7.4-src/hadoop-dist/target/hadoop-2.7.4.tar.gz
[exec]
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /home/u0017649/hadoop-2.7.4-src/hadoop-dist/target/hadoop-dist-2.7.4-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 4.780 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 2.711 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 1.633 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 2.645 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.386 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 2.546 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 6.019 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 11.630 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 12.236 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 9.364 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [02:21 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 11.743 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 16.980 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 3.316 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [02:42 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 34.161 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 13.819 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 5.306 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.080 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.073 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [ 39.900 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [ 41.698 s]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.160 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 13.859 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 16.781 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 5.143 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 10.619 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 25.832 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 6.436 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 9.209 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [ 4.691 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.052 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 4.187 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 2.589 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.052 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [ 8.977 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 4.737 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.271 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 28.766 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 18.916 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 6.326 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 12.547 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 8.090 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 10.544 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 2.727 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 7.638 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 3.216 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 6.935 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 15.235 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 4.425 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 7.658 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 5.281 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 3.525 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 2.382 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 4.387 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 0.031 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 6.296 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 14.099 s]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 6.764 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 8.594 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 1.873 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 8.096 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 8.972 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.039 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:00 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15:09 min
[INFO] Finished at: 2017-11-21T22:31:05-05:00
[INFO] Final Memory: 205M/808M
[INFO] ------------------------------------------------------------------------
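For reference (this was not done above), BUILDING.txt also documents a native profile that additionally compiles the native hadoop libraries using the cmake/zlib/openssl/snappy packages installed earlier; a sketch, in case native libraries are wanted:
$ mvn package -Pdist,native -DskipTests -Dtar   # same build as above, plus native libraries under lib/native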
Now let's do a basic hadoop setup (on a single node, admittedly) with the tar.gz we just built.
For example, if you are using a virtual machine on the Nimbix cloud, which provides docker-based cloud services on Minsky servers, you have to install hadoop under /data, the only persistent storage. Anything installed in other directories is wiped out when the Nimbix instance is rebooted, because a Nimbix instance is actually a docker instance rather than a real virtual machine.
Installation is finished simply by extracting the hadoop tar.gz under /data as follows.
u0017649@sys-90043:~$ cd /data
u0017649@sys-90043:/data$ tar -zxf /home/u0017649/hadoop-2.7.4-src/hadoop-dist/target/hadoop-2.7.4.tar.gz
u0017649@sys-90043:/data$ cd hadoop-2.7.4
Now set the basic environment variables. JAVA_HOME also has to be set properly, as shown earlier.
u0017649@sys-90043:/data/hadoop-2.7.4$ export HADOOP_INSTALL=/data/hadoop-2.7.4
u0017649@sys-90043:/data/hadoop-2.7.4$ export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
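These exports only last for the current shell; a minimal sketch of making them persistent, assuming bash and the paths used above, is to append them to ~/.bashrc:
$ cat >> ~/.bashrc << EOF
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-ppc64el
export HADOOP_INSTALL=/data/hadoop-2.7.4
export PATH=\$PATH:\$HADOOP_INSTALL/bin:\$HADOOP_INSTALL/sbin
EOF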
First, check that the hadoop binary works properly.
u0017649@sys-90043:/data/hadoop-2.7.4$ hadoop version
Hadoop 2.7.4
Subversion Unknown -r Unknown
Compiled by u0017649 on 2017-11-22T03:17Z
Compiled with protoc 2.5.0
From source with checksum 50b0468318b4ce9bd24dc467b7ce1148
This command was run using /data/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar
Then go into the configuration directory and apply the basic settings as follows.
u0017649@sys-90043:/data/hadoop-2.7.4$ cd etc/hadoop
u0017649@sys-90043:/data/hadoop-2.7.4/etc/hadoop$ vi hadoop-env.sh
...
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-ppc64el
...
u0017649@sys-90043:/data/hadoop-2.7.4/etc/hadoop$ vi core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop-2.7.4/hadoop-${user.name}</value>
</property>
</configuration>
u0017649@sys-90043:/data/hadoop-2.7.4/etc/hadoop$ vi mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>${hadoop.tmp.dir}/mapred/local</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
</property>
</configuration>
In the slaves file, put localhost, i.e. this machine itself. That way the same machine acts as both the namenode and a datanode.
u0017649@sys-90043:/data/hadoop-2.7.4/etc/hadoop$ cat slaves
localhost
Now we will start hadoop. Before that, ssh-keygen and ssh-copy-id have to be run so that both "ssh localhost" and "ssh 0.0.0.0" work without a password for localhost itself; a sketch is given below.
Now format the namenode.
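The exact commands are not in the original write-up; a minimal sketch, assuming no key pair exists yet for this user:
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair with an empty passphrase
$ ssh-copy-id localhost                      # append the public key to ~/.ssh/authorized_keys
$ ssh localhost hostname                     # should return the hostname without a password prompt
$ ssh 0.0.0.0 hostname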
u0017649@sys-90043:~$ hadoop namenode -format
...
17/11/21 23:53:38 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop-2.7.4/hadoop-u0017649/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 325 bytes saved in 0 seconds.
17/11/21 23:53:38 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/21 23:53:38 INFO util.ExitUtil: Exiting with status 0
17/11/21 23:53:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sys-90043/172.29.160.241
************************************************************/
Then start hadoop and yarn.
u0017649@sys-90043:~$ start-all.sh
...
node-sys-90043.out
starting yarn daemons
starting resourcemanager, logging to /data/hadoop-2.7.4/logs/yarn-u0017649-resourcemanager-sys-90043.out
localhost: starting nodemanager, logging to /data/hadoop-2.7.4/logs/yarn-u0017649-nodemanager-sys-90043.out
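Not shown above: a quick way to confirm that the daemons actually came up is jps from the JDK, which in a working single-node setup typically lists the HDFS and YARN processes.
$ jps
# expected entries (PIDs will differ): NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, Jps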
Run some basic hdfs commands as follows. You can see that everything works fine.
u0017649@sys-90043:~$ hadoop fs -df
Filesystem Size Used Available Use%
hdfs://localhost:9000 36849713152 24576 5312647168 0%
u0017649@sys-90043:~$ hadoop fs -mkdir -p /user/u0017649
u0017649@sys-90043:~$ hadoop fs -mkdir input
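The step that uploaded the hosts file listed below is not in the original transcript; presumably it was something along these lines:
$ hadoop fs -put /etc/hosts input   # copy the local /etc/hosts into the HDFS input directory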
u0017649@sys-90043:~$ hadoop fs -ls -R
drwxr-xr-x - u0017649 supergroup 0 2017-11-21 23:58 input
-rw-r--r-- 3 u0017649 supergroup 258 2017-11-21 23:58 input/hosts
u0017649@sys-90043:~$ hadoop fs -text input/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu1604-dr-01.dal-ebis.ihost.com ubuntu1604-dr-01
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.29.160.241 sys-90043
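When you are done, the daemons started with start-all.sh can be stopped with the matching script (or with stop-dfs.sh and stop-yarn.sh separately):
$ stop-all.sh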
Monday, November 20, 2017
Building JCuda 0.8.0 (for CUDA 8) on a Minsky server (ppc64le)
Building JCuda is just a matter of following the github instructions below.
https://github.com/jcuda/jcuda-main/blob/master/BUILDING.md
First, git clone the following 9 projects.
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcuda-main.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcuda-common.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcuda.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcublas.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcufft.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcusparse.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcurand.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jcusolver.git
u0017649@sys-90043:~/jcuda$ git clone https://github.com/jcuda/jnvgraph.git
Go into each directory and check out version-0.8.0, the version matching CUDA 8, in each of them (the same steps are sketched as a loop right after this listing).
u0017649@sys-90043:~/jcuda$ ls
jcublas jcuda jcuda-common jcuda-main jcufft jcurand jcusolver jcusparse jnvgraph
u0017649@sys-90043:~/jcuda$ cd jcublas
u0017649@sys-90043:~/jcuda/jcublas$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcublas$ cd ../jcuda
u0017649@sys-90043:~/jcuda/jcuda$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcuda$ cd ../jcuda-common
u0017649@sys-90043:~/jcuda/jcuda-common$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcuda-common$ cd ../jcuda-main
u0017649@sys-90043:~/jcuda/jcuda-main$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcuda-main$ cd ../jcufft
u0017649@sys-90043:~/jcuda/jcufft$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcufft$ cd ../jcurand
u0017649@sys-90043:~/jcuda/jcurand$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcurand$ cd ../jcusolver
u0017649@sys-90043:~/jcuda/jcusolver$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcusolver$ cd ../jcusparse
u0017649@sys-90043:~/jcuda/jcusparse$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jcusparse$ cd ../jnvgraph
u0017649@sys-90043:~/jcuda/jnvgraph$ git checkout tags/version-0.8.0
u0017649@sys-90043:~/jcuda/jnvgraph$ cd ..
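For reference, the clone and checkout steps above can also be scripted; a sketch that does the same thing in a loop, with the same repository list and tag as above:
$ for repo in jcuda-main jcuda-common jcuda jcublas jcufft jcusparse jcurand jcusolver jnvgraph; do
>   git clone https://github.com/jcuda/${repo}.git
>   (cd ${repo} && git checkout tags/version-0.8.0)
> done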
I built this on Ubuntu 16.04 ppc64le. The build looks for GL/gl.h, so libmesa-dev has to be installed in advance as follows.
u0017649@sys-90043:~/jcuda$ sudo apt-get install libmesa-dev
Set the basic environment variables as follows.
u0017649@sys-90043:~/jcuda$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/targets/ppc64le-linux/lib:$LD_LIBRARY_PATH
u0017649@sys-90043:~/jcuda$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-ppc64el
Now run cmake. Add the -D option as follows so that cmake knows where CUDA_nvrtc_LIBRARY is.
u0017649@sys-90043:~/jcuda$ cmake ./jcuda-main -DCUDA_nvrtc_LIBRARY="/usr/local/cuda-8.0/targets/ppc64le-linux/lib/libnvrtc.so"
...
-- Found CUDA: /usr/local/cuda/bin/nvcc
-- Found JNI: /usr/lib/jvm/java-8-openjdk-ppc64el/jre/lib/ppc64le/libjawt.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/u0017649/jcuda
Next, run make all.
u0017649@sys-90043:~/jcuda$ make all
...
/home/u0017649/jcuda/jnvgraph/JNvgraphJNI/src/JNvgraph.cpp:292:26: warning: deleting ‘void*’ is undefined [-Wdelete-incomplete]
delete nativeObject->nativeTopologyData;
^
[100%] Linking CXX shared library ../../nativeLibraries/linux/ppc_64/lib/libJNvgraph-0.8.0-linux-ppc_64.so
[100%] Built target JNvgraph
Once that finishes, go into the jcuda-main directory and run clean install with mvn. Note that the maven tests are all skipped here: although CUDA 8.0 is installed on this machine, no actual GPU is present, so the tests fail with errors about not finding a cuda device. On a machine with GPUs installed, drop the "-Dmaven.test.skip=true" option and just run "mvn clean install".
u0017649@sys-90043:~/jcuda$ cd jcuda-main
u0017649@sys-90043:~/jcuda/jcuda-main$ mvn -Dmaven.test.skip=true clean install
...
[INFO] Configured Artifact: org.jcuda:jnvgraph-natives:linux-ppc_64:0.8.0:jar
[INFO] Copying jcuda-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcuda-0.8.0.jar
[INFO] Copying jcuda-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcuda-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jcublas-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcublas-0.8.0.jar
[INFO] Copying jcublas-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcublas-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jcufft-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcufft-0.8.0.jar
[INFO] Copying jcufft-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcufft-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jcusparse-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcusparse-0.8.0.jar
[INFO] Copying jcusparse-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcusparse-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jcurand-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcurand-0.8.0.jar
[INFO] Copying jcurand-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcurand-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jcusolver-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jcusolver-0.8.0.jar
[INFO] Copying jcusolver-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jcusolver-natives-0.8.0-linux-ppc_64.jar
[INFO] Copying jnvgraph-0.8.0.jar to /home/u0017649/jcuda/jcuda-main/target/jnvgraph-0.8.0.jar
[INFO] Copying jnvgraph-natives-0.8.0-linux-ppc_64.jar to /home/u0017649/jcuda/jcuda-main/target/jnvgraph-natives-0.8.0-linux-ppc_64.jar
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ jcuda-main ---
[INFO] Installing /home/u0017649/jcuda/jcuda-main/pom.xml to /home/u0017649/.m2/repository/org/jcuda/jcuda-main/0.8.0/jcuda-main-0.8.0.pom
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] JCuda .............................................. SUCCESS [ 2.596 s]
[INFO] jcuda-natives ...................................... SUCCESS [ 0.596 s]
[INFO] jcuda .............................................. SUCCESS [ 13.244 s]
[INFO] jcublas-natives .................................... SUCCESS [ 0.120 s]
[INFO] jcublas ............................................ SUCCESS [ 6.343 s]
[INFO] jcufft-natives ..................................... SUCCESS [ 0.029 s]
[INFO] jcufft ............................................. SUCCESS [ 2.843 s]
[INFO] jcurand-natives .................................... SUCCESS [ 0.036 s]
[INFO] jcurand ............................................ SUCCESS [ 2.428 s]
[INFO] jcusparse-natives .................................. SUCCESS [ 0.085 s]
[INFO] jcusparse .......................................... SUCCESS [ 7.853 s]
[INFO] jcusolver-natives .................................. SUCCESS [ 0.066 s]
[INFO] jcusolver .......................................... SUCCESS [ 4.158 s]
[INFO] jnvgraph-natives ................................... SUCCESS [ 0.057 s]
[INFO] jnvgraph ........................................... SUCCESS [ 2.932 s]
[INFO] jcuda-main ......................................... SUCCESS [ 1.689 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45.413 s
[INFO] Finished at: 2017-11-20T01:41:41-05:00
[INFO] Final Memory: 53M/421M
[INFO] ------------------------------------------------------------------------
As shown above, the build finishes cleanly, and you can see 14 jar files created in the jcuda-main/target directory.
u0017649@sys-90043:~/jcuda/jcuda-main$ cd target
u0017649@sys-90043:~/jcuda/jcuda-main/target$ ls -ltr
total 1680
-rw-rw-r-- 1 u0017649 u0017649 318740 Nov 20 01:41 jcuda-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 149350 Nov 20 01:41 jcuda-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 297989 Nov 20 01:41 jcublas-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 30881 Nov 20 01:41 jcublas-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 196292 Nov 20 01:41 jnvgraph-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 11081 Nov 20 01:41 jnvgraph-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 307435 Nov 20 01:41 jcusparse-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 38335 Nov 20 01:41 jcusparse-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 248028 Nov 20 01:41 jcusolver-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 22736 Nov 20 01:41 jcusolver-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 26684 Nov 20 01:41 jcurand-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 8372 Nov 20 01:41 jcurand-0.8.0.jar
-rw-rw-r-- 1 u0017649 u0017649 27414 Nov 20 01:41 jcufft-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- 1 u0017649 u0017649 11052 Nov 20 01:41 jcufft-0.8.0.jar
Now just move these 14 jar files to a suitable directory and bundle them with tar. I created a directory named JCuda-All-0.8.0-bin-linux-ppc64le under /tmp, moved the jar files there, and then tarred that directory as follows.
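The directory creation and jar copy themselves are not shown in the transcript; they would have been roughly:
$ mkdir /tmp/JCuda-All-0.8.0-bin-linux-ppc64le
$ cp ~/jcuda/jcuda-main/target/*.jar /tmp/JCuda-All-0.8.0-bin-linux-ppc64le/
$ cd /tmp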
u0017649@sys-90043:/tmp$ tar -zcvf JCuda-All-0.8.0-bin-linux-ppc64le.tgz JCuda-All-0.8.0-bin-linux-ppc64le
The contents are as follows.
u0017649@sys-90043:/tmp$ tar -ztvf JCuda-All-0.8.0-bin-linux-ppc64le.tgz
drwxrwxr-x u0017649/u0017649 0 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/
-rw-rw-r-- u0017649/u0017649 30881 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcublas-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 11052 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcufft-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 196292 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jnvgraph-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 149350 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcuda-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 307435 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcusparse-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 248028 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcusolver-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 11081 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jnvgraph-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 27414 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcufft-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 297989 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcublas-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 38335 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcusparse-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 8372 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcurand-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 26684 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcurand-natives-0.8.0-linux-ppc_64.jar
-rw-rw-r-- u0017649/u0017649 22736 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcusolver-0.8.0.jar
-rw-rw-r-- u0017649/u0017649 318740 2017-11-20 01:46 JCuda-All-0.8.0-bin-linux-ppc64le/jcuda-natives-0.8.0-linux-ppc_64.jar
This JCuda-All-0.8.0-bin-linux-ppc64le.tgz file can be downloaded from the link below.
https://drive.google.com/open?id=1CnlvJARkRWPDTbynUUlNBL_TQbu1-xbn
* For reference, I downloaded the x86 binaries from jcuda.org and they contain 14 jar files just like mine, so the build appears to have been done correctly.
** For reference, the reason for adding the -DCUDA_nvrtc_LIBRARY="/usr/local/cuda-8.0/targets/ppc64le-linux/lib/libnvrtc.so" option to cmake above is to avoid the following error.
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nvrtc_LIBRARY
linked by target "JNvrtc" in directory /home/u0017649/jcuda/jcuda/JNvrtcJNI
Friday, November 10, 2017
Testing tensorflow 1.3, caffe2, and pytorch with nvidia-docker
Here is how to test tensorflow 1.3, caffe2, and pytorch using nvidia-docker.
1) tensorflow v1.3
Start the tensorflow 1.3 docker image as follows.
root@minsky:~# nvidia-docker run -ti --rm -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
First, check the various PATH environment variables.
root@67c0e6901bb2:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/anaconda3/lib/python3.6/site-packages
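As a quick sanity check (not in the original), the device list that tensorflow sees can be printed from inside the container; device_lib is part of tensorflow itself.
$ python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
# the output should include one GPU entry per GPU visible to the container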
Move to the directory containing the cifar10 example code.
root@67c0e6901bb2:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
Modify part of the cifar10_multi_gpu_train.py code before running it. (In principle this should be adjustable through command-line parameters such as --train_dir, but in practice it seems the source has to be edited directly for it to run properly.)
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512 --num_gpus 2
usage: cifar10_multi_gpu_train.py [-h] [--batch_size BATCH_SIZE]
[--data_dir DATA_DIR] [--use_fp16 USE_FP16]
cifar10_multi_gpu_train.py: error: unrecognized arguments: --num_gpus 2
To avoid the error above, edit the code directly as follows.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--train_dir', type=str, default='/tmp/cifar10_train',
parser.add_argument('--train_dir', type=str, default='/data/imsi/test/tf1.3',
help='Directory where to write event logs and checkpoint.')
#parser.add_argument('--max_steps', type=int, default=1000000,
parser.add_argument('--max_steps', type=int, default=10000,
help='Number of batches to run.')
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=4,
help='How many GPUs to use.')
Now just run it as follows. Here batch_size is set to 512, but it could probably be set even larger.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 6.1%
...
2017-11-10 01:20:23.628755: step 9440, loss = 0.63 (15074.6 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:25.052011: step 9450, loss = 0.64 (14615.4 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:26.489564: step 9460, loss = 0.55 (14872.0 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:27.860303: step 9470, loss = 0.61 (14515.9 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:29.289386: step 9480, loss = 0.54 (13690.6 examples/sec; 0.037 sec/batch)
2017-11-10 01:20:30.799570: step 9490, loss = 0.69 (15940.8 examples/sec; 0.032 sec/batch)
2017-11-10 01:20:32.239056: step 9500, loss = 0.54 (12581.4 examples/sec; 0.041 sec/batch)
2017-11-10 01:20:34.219832: step 9510, loss = 0.60 (14077.9 examples/sec; 0.036 sec/batch)
...
Next, we run a docker container that is given only half of the CPUs, i.e. one 8-core chip out of the two chips (16 cores in total), and only 2 of the 4 GPUs. When controlling CPU resources with --cpuset-cpus, the CPU numbers are given in groups of two like this because IBM POWER8 supports SMT (hyperthreading) with up to 8 threads per core, so 8 logical CPU numbers are assigned per core. Here SMT is currently set to 2 instead of 8 to optimize deep learning performance.
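As an aside on the SMT setting just mentioned (this is not in the original transcript): on the host, the SMT mode can be checked or changed with ppc64_cpu from the powerpc-utils package, assuming it is installed.
$ ppc64_cpu --smt        # shows the current SMT mode, e.g. SMT=2
$ sudo ppc64_cpu --smt=2 # sets SMT to 2 threads per core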
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
root@3b2c2614811d:~# nvidia-smi
Fri Nov 10 02:24:14 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 38C P0 30W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 40C P0 33W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@3b2c2614811d:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
Since there are now 2 GPUs instead of 4, modify cifar10_multi_gpu_train.py accordingly, as follows.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=2,
help='How many GPUs to use.')
Running it works fine.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 1.7%
...
2017-11-10 02:35:50.040462: step 120, loss = 4.07 (15941.4 examples/sec; 0.032 sec/batch)
2017-11-10 02:35:50.587970: step 130, loss = 4.14 (19490.7 examples/sec; 0.026 sec/batch)
2017-11-10 02:35:51.119347: step 140, loss = 3.91 (18319.8 examples/sec; 0.028 sec/batch)
2017-11-10 02:35:51.655916: step 150, loss = 3.87 (20087.1 examples/sec; 0.025 sec/batch)
2017-11-10 02:35:52.181703: step 160, loss = 3.90 (19215.5 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:52.721608: step 170, loss = 3.82 (17780.1 examples/sec; 0.029 sec/batch)
2017-11-10 02:35:53.245088: step 180, loss = 3.92 (18888.4 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:53.777146: step 190, loss = 3.80 (19103.7 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:54.308063: step 200, loss = 3.76 (18554.2 examples/sec; 0.028 sec/batch)
...
2) caffe2
Here we start the docker container from the beginning with only 2 GPUs and 8 CPU cores.
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/caffe2-ppc64le:v0.3 bash
As you can see, only 2 GPUs come up.
root@dc853a5495a0:/# nvidia-smi
Fri Nov 10 07:22:21 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 32C P0 29W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 35C P0 32W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Check the environment variables. Here caffe2 is installed in /opt/caffe2, so LD_LIBRARY_PATH and PYTHONPATH are set to match.
root@dc853a5495a0:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/opt/caffe2/lib:/opt/DL/nccl/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/caffe2/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/caffe2
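A quick sanity check (not in the original) that this caffe2 build sees both GPUs, using the workspace module from the /opt/caffe2 install already on PYTHONPATH:
$ python -c "from caffe2.python import workspace; print(workspace.NumCudaDevices())"
# should print 2 in this container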
caffe2 is tested with resnet50_trainer.py below. Before that, to work around the lmdb creation problem described in https://github.com/caffe2/caffe2/issues/517, edit part of the code as suggested in that URL, as follows.
root@dc853a5495a0:/# cd /data/imsi/caffe2/caffe2/python/examples
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# vi lmdb_create_example.py
...
flatten_img = img_data.reshape(np.prod(img_data.shape))
# img_tensor.float_data.extend(flatten_img)
img_tensor.float_data.extend(flatten_img.flat)
Then create the lmdb as follows. It has already been run once here, so running it again finishes very quickly.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# python lmdb_create_example.py --output_file /data/imsi/test/caffe2/lmdb
>>> Write database...
Inserted 0 rows
Inserted 16 rows
Inserted 32 rows
Inserted 48 rows
Inserted 64 rows
Inserted 80 rows
Inserted 96 rows
Inserted 112 rows
Checksum/write: 1744827
>>> Read database...
Checksum/read: 1744827
After that, run training as follows. Since only 2 GPUs are visible in this environment, --gpus must be given 0,1 instead of 0,1,2,3.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# time python resnet50_trainer.py --train_data /data/imsi/test/caffe2/lmdb --gpus 0,1 --batch_size 128 --num_epochs 1
When you run it, you get 'not a valid file' warning messages as shown below, but according to discussions on github and elsewhere they can be ignored.
Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:file_store_handler_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:redis_store_handler_ops as it is not a valid file.
INFO:resnet50_trainer:Running on GPUs: [0, 1]
INFO:resnet50_trainer:Using epoch size: 1499904
INFO:data_parallel_model:Parallelizing model for devices: [0, 1]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:data_parallel_model:Model for GPU : 1
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
WARNING:data_parallel_model:------- DEPRECATED API, please use data_parallel_model.OptimizeGradientMemory() -----
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.252535104752 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.253523111343 secs
INFO:resnet50_trainer:Starting epoch 0/1
INFO:resnet50_trainer:Finished iteration 1/11718 of epoch 0 (27.70 images/sec)
INFO:resnet50_trainer:Training loss: 7.39205980301, accuracy: 0.0
INFO:resnet50_trainer:Finished iteration 2/11718 of epoch 0 (378.51 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 3/11718 of epoch 0 (387.87 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 4/11718 of epoch 0 (383.28 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 5/11718 of epoch 0 (381.71 images/sec)
...
However, as seen above, there is a problem where accuracy comes out as 1.0 from the very beginning. This resnet50_trainer.py issue has been discussed on the caffe2 github (see below), but there is no clear solution yet. It is not a problem for relative system performance measurements, though.
https://github.com/caffe2/caffe2/issues/810
3) pytorch
This time we test with the pytorch image.
root@8ccd72116fee:~# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
First, start the docker image as follows. Note that the --ipc=host option is used here, to avoid the hang described in https://discuss.pytorch.org/t/imagenet-example-is-crashing/1363/2.
root@minsky:~# nvidia-docker run -ti --rm --ipc=host -v /data:/data bsyu/pytorch-ppc64le:v0.1 bash
Run mnist, the simplest example, as follows. Running 10 epochs takes roughly 1 minute 30 seconds.
root@8ccd72116fee:/data/imsi/examples/mnist# time python main.py --batch-size 512 --epochs 10
...
rain Epoch: 9 [25600/60000 (42%)] Loss: 0.434816
Train Epoch: 9 [30720/60000 (51%)] Loss: 0.417652
Train Epoch: 9 [35840/60000 (59%)] Loss: 0.503125
Train Epoch: 9 [40960/60000 (68%)] Loss: 0.477776
Train Epoch: 9 [46080/60000 (76%)] Loss: 0.346416
Train Epoch: 9 [51200/60000 (85%)] Loss: 0.361492
Train Epoch: 9 [56320/60000 (93%)] Loss: 0.383941
Test set: Average loss: 0.1722, Accuracy: 9470/10000 (95%)
Train Epoch: 10 [0/60000 (0%)] Loss: 0.369119
Train Epoch: 10 [5120/60000 (8%)] Loss: 0.377726
Train Epoch: 10 [10240/60000 (17%)] Loss: 0.402854
Train Epoch: 10 [15360/60000 (25%)] Loss: 0.349409
Train Epoch: 10 [20480/60000 (34%)] Loss: 0.295271
...
Note that this is an example that uses only a single GPU. To use multiple GPUs, the imagenet example below has to be run, which requires downloading and extracting the ilsvrc2012 dataset. That data was extracted as JPEG files into /data/imagenet_dir/train and /data/imagenet_dir/val as follows.
root@minsky:/data/imagenet_dir/train# while read SYNSET; do
> mkdir -p ${SYNSET}
> tar xf ../../ILSVRC2012_img_train.tar "${SYNSET}.tar"
> tar xf "${SYNSET}.tar" -C "${SYNSET}"
> rm -f "${SYNSET}.tar"
> done < /opt/DL/caffe-nv/data/ilsvrc12/synsets.txt
root@minsky:/data/imagenet_dir/train# ls -1 | wc -l
1000
root@minsky:/data/imagenet_dir/train# du -sm .
142657 .
root@minsky:/data/imagenet_dir/train# find . | wc -l
1282168
root@minsky:/data/imagenet_dir/val# ls -1 | wc -l
50000
If you run main.py as-is at this point, you hit the following error. The reason is that main.py expects the JPEG files under the val directory to be organized into per-label subdirectories as well.
RuntimeError: Found 0 images in subfolders of: /data/imagenet_dir/val
Supported image extensions are: .jpg,.JPG,.jpeg,.JPEG,.png,.PNG,.ppm,.PPM,.bmp,.BMP
Therefore, redistribute the JPEG files into per-label directories using preprocess_imagenet_validation_data.py from the inception directory, as follows.
root@minsky:/data/models/research/inception/inception/data# python preprocess_imagenet_validation_data.py /data/imagenet_dir/val imagenet_2012_validation_synset_labels.txt
Looking again, you can see the files have been redistributed by label.
root@minsky:/data/imagenet_dir/val# ls | head -n 3
n01440764
n01443537
n01484850
root@minsky:/data/imagenet_dir/val# ls | wc -l
1000
root@minsky:/data/imagenet_dir/val# find . | wc -l
51001
Now run main.py as follows.
root@8ccd72116fee:~# cd /data/imsi/examples/imagenet
root@8ccd72116fee:/data/imsi/examples/imagenet# time python main.py -a resnet18 --epochs 1 /data/imagenet_dir
=> creating model 'resnet18'
Epoch: [0][0/5005] Time 11.237 (11.237) Data 2.330 (2.330) Loss 7.0071 (7.0071) Prec@1 0.391 (0.391) Prec@5 0.391 (0.391)
Epoch: [0][10/5005] Time 0.139 (1.239) Data 0.069 (0.340) Loss 7.1214 (7.0515) Prec@1 0.000 (0.284) Prec@5 0.000 (1.065)
Epoch: [0][20/5005] Time 0.119 (0.854) Data 0.056 (0.342) Loss 7.1925 (7.0798) Prec@1 0.000 (0.260) Prec@5 0.781 (0.930)
...
* The docker images used above were backed up as follows.
root@minsky:/data/docker_save# docker save --output caffe2-ppc64le.v0.3.tar bsyu/caffe2-ppc64le:v0.3
root@minsky:/data/docker_save# docker save --output pytorch-ppc64le.v0.1.tar bsyu/pytorch-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output tf1.3-ppc64le.v0.1.tar bsyu/tf1.3-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda2-ppc64le.v0.1.tar bsyu/cudnn6-conda2-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda3-ppc64le.v0.1.tar bsyu/cudnn6-conda3-ppc64le:v0.1
root@minsky:/data/docker_save# ls -l
total 28023280
-rw------- 1 root root 4713168896 Nov 10 16:48 caffe2-ppc64le.v0.3.tar
-rw------- 1 root root 4218520064 Nov 10 17:10 cudnn6-conda2-ppc64le.v0.1.tar
-rw------- 1 root root 5272141312 Nov 10 17:11 cudnn6-conda3-ppc64le.v0.1.tar
-rw------- 1 root root 6921727488 Nov 10 16:51 pytorch-ppc64le.v0.1.tar
-rw------- 1 root root 7570257920 Nov 10 16:55 tf1.3-ppc64le.v0.1.tar
In an emergency, these images can be loaded back with the docker load command.
(e.g.) docker load --input caffe2-ppc64le.v0.3.tar
1) tensorflow v1.3
다음과 같이 tensorflow 1.3 docker image를 구동합니다.
root@minsky:~# nvidia-docker run -ti --rm -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
먼저 각종 PATH 환경 변수를 확인합니다.
root@67c0e6901bb2:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/anaconda3/lib/python3.6/site-packages
cifar10 관련된 example code가 들어있는 directory로 이동합니다.
root@67c0e6901bb2:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
수행할 cifar10_multi_gpu_train.py code를 일부 수정합니다. (원래는 --train_dir 등의 명령어 파라미터로 조정이 가능해야 하는데, 실제로는 직접 source를 수정해야 제대로 수행되는 것 같습니다.)
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512 --num_gpus 2
usage: cifar10_multi_gpu_train.py [-h] [--batch_size BATCH_SIZE]
[--data_dir DATA_DIR] [--use_fp16 USE_FP16]
cifar10_multi_gpu_train.py: error: unrecognized arguments: --num_gpus 2
위와 같은 error를 막기 위해, 아래와 같이 직접 code를 수정합니다.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--train_dir', type=str, default='/tmp/cifar10_train',
parser.add_argument('--train_dir', type=str, default='/data/imsi/test/tf1.3',
help='Directory where to write event logs and checkpoint.')
#parser.add_argument('--max_steps', type=int, default=1000000,
parser.add_argument('--max_steps', type=int, default=10000,
help='Number of batches to run.')
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=4,
help='How many GPUs to use.')
이제 다음과 같이 run 하시면 됩니다. 여기서는 batch_size를 512로 했는데, 더 크게 잡아도 될 것 같습니다.
root@67c0e6901bb2:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 6.1%
...
2017-11-10 01:20:23.628755: step 9440, loss = 0.63 (15074.6 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:25.052011: step 9450, loss = 0.64 (14615.4 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:26.489564: step 9460, loss = 0.55 (14872.0 examples/sec; 0.034 sec/batch)
2017-11-10 01:20:27.860303: step 9470, loss = 0.61 (14515.9 examples/sec; 0.035 sec/batch)
2017-11-10 01:20:29.289386: step 9480, loss = 0.54 (13690.6 examples/sec; 0.037 sec/batch)
2017-11-10 01:20:30.799570: step 9490, loss = 0.69 (15940.8 examples/sec; 0.032 sec/batch)
2017-11-10 01:20:32.239056: step 9500, loss = 0.54 (12581.4 examples/sec; 0.041 sec/batch)
2017-11-10 01:20:34.219832: step 9510, loss = 0.60 (14077.9 examples/sec; 0.036 sec/batch)
...
다음으로는 전체 CPU, 즉 2개 chip 총 16-core의 절반인 1개 chip 8-core와, 전체 GPU 4개 중 2개의 GPU만 할당한 docker를 수행합니다. 여기서 --cpuset-cpus을 써서 CPU 자원을 control할 때, 저렇게 CPU 번호를 2개씩 그룹으로 줍니다. 이는 IBM POWER8가 SMT(hyperthread)가 core당 8개씩 낼 수 있는 특성 때문에 core 1개당 8개의 logical CPU 번호를 할당하기 때문입니다. 현재는 deep learning의 성능 최적화를 위해 SMT를 8이 아닌 2로 맞추어 놓았습니다.
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/tf1.3-ppc64le:v0.1 bash
root@3b2c2614811d:~# nvidia-smi
Fri Nov 10 02:24:14 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 38C P0 30W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 40C P0 33W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@3b2c2614811d:/# cd /data/imsi/tensorflow/models/tutorials/image/cifar10
이제 GPU가 4개가 아니라 2개이므로, cifar10_multi_gpu_train.py도 아래와 같이 수정합니다.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# vi cifar10_multi_gpu_train.py
...
#parser.add_argument('--num_gpus', type=int, default=1,
parser.add_argument('--num_gpus', type=int, default=2,
help='How many GPUs to use.')
수행하면 잘 돌아갑니다.
root@3b2c2614811d:/data/imsi/tensorflow/models/tutorials/image/cifar10# time python cifar10_multi_gpu_train.py --batch_size 512
>> Downloading cifar-10-binary.tar.gz 1.7%
...
2017-11-10 02:35:50.040462: step 120, loss = 4.07 (15941.4 examples/sec; 0.032 sec/batch)
2017-11-10 02:35:50.587970: step 130, loss = 4.14 (19490.7 examples/sec; 0.026 sec/batch)
2017-11-10 02:35:51.119347: step 140, loss = 3.91 (18319.8 examples/sec; 0.028 sec/batch)
2017-11-10 02:35:51.655916: step 150, loss = 3.87 (20087.1 examples/sec; 0.025 sec/batch)
2017-11-10 02:35:52.181703: step 160, loss = 3.90 (19215.5 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:52.721608: step 170, loss = 3.82 (17780.1 examples/sec; 0.029 sec/batch)
2017-11-10 02:35:53.245088: step 180, loss = 3.92 (18888.4 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:53.777146: step 190, loss = 3.80 (19103.7 examples/sec; 0.027 sec/batch)
2017-11-10 02:35:54.308063: step 200, loss = 3.76 (18554.2 examples/sec; 0.028 sec/batch)
...
2) caffe2
여기서는 처음부터 GPU 2개와 CPU core 8개만 가지고 docker를 띄워 보겠습니다.
root@minsky:~# NV_GPU=0,1 nvidia-docker run -ti --rm --cpuset-cpus="0,1,8,9,16,17,24,25,32,33,40,41,48,49" -v /data:/data bsyu/caffe2-ppc64le:v0.3 bash
As you can see, only two GPUs come up.
root@dc853a5495a0:/# nvidia-smi
Fri Nov 10 07:22:21 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.119 Driver Version: 361.119 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-SXM2... On | 0002:01:00.0 Off | 0 |
| N/A 32C P0 29W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-SXM2... On | 0003:01:00.0 Off | 0 |
| N/A 35C P0 32W / 300W | 0MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Check the environment variables. In this image caffe2 is installed under /opt/caffe2, so LD_LIBRARY_PATH and PYTHONPATH are set to match.
root@dc853a5495a0:/# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/opt/caffe2/lib:/opt/DL/nccl/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/caffe2/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH=/opt/caffe2
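Before running the benchmark, it can be handy to confirm that caffe2 itself only sees the two GPUs passed to the container. A minimal sketch (assuming the caffe2 Python package under /opt/caffe2 is importable, as the PYTHONPATH above suggests):

# Quick check that caffe2 sees exactly the GPUs exposed by nvidia-docker (NV_GPU=0,1).
from caffe2.python import workspace

print("CUDA devices visible to caffe2:", workspace.NumCudaDevices())
# Expected to print 2 in this container.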
caffe2 is tested with the resnet50_trainer.py script below. Before that, to work around the lmdb creation problem reported at https://github.com/caffe2/caffe2/issues/517, modify the code as suggested in that issue:
root@dc853a5495a0:/# cd /data/imsi/caffe2/caffe2/python/examples
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# vi lmdb_create_example.py
...
flatten_img = img_data.reshape(np.prod(img_data.shape))
# img_tensor.float_data.extend(flatten_img)
img_tensor.float_data.extend(flatten_img.flat)
Next, create the lmdb as follows. Since this was already run once, it will finish very quickly if run again.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# python lmdb_create_example.py --output_file /data/imsi/test/caffe2/lmdb
>>> Write database...
Inserted 0 rows
Inserted 16 rows
Inserted 32 rows
Inserted 48 rows
Inserted 64 rows
Inserted 80 rows
Inserted 96 rows
Inserted 112 rows
Checksum/write: 1744827
>>> Read database...
Checksum/read: 1744827
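If you want to double-check the generated lmdb outside of the trainer, a small sketch like the following can count its entries (this assumes the lmdb Python module is available in the image; it is not part of the original post):

# Count the records written by lmdb_create_example.py above.
import lmdb

env = lmdb.open("/data/imsi/test/caffe2/lmdb", readonly=True, lock=False)
with env.begin() as txn:
    print("entries:", txn.stat()["entries"])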
Then run the training as follows. Since only 2 GPUs are visible in this environment, --gpus must be given 0,1 instead of 0,1,2,3.
root@dc853a5495a0:/data/imsi/caffe2/caffe2/python/examples# time python resnet50_trainer.py --train_data /data/imsi/test/caffe2/lmdb --gpus 0,1 --batch_size 128 --num_epochs 1
When it runs, you will see 'not a valid file' warning messages like the ones below; judging from discussions on GitHub and elsewhere, they can safely be ignored.
Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:file_store_handler_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/distributed:redis_store_handler_ops as it is not a valid file.
INFO:resnet50_trainer:Running on GPUs: [0, 1]
INFO:resnet50_trainer:Using epoch size: 1499904
INFO:data_parallel_model:Parallelizing model for devices: [0, 1]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:data_parallel_model:Model for GPU : 1
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
WARNING:data_parallel_model:------- DEPRECATED API, please use data_parallel_model.OptimizeGradientMemory() -----
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.252535104752 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.253523111343 secs
INFO:resnet50_trainer:Starting epoch 0/1
INFO:resnet50_trainer:Finished iteration 1/11718 of epoch 0 (27.70 images/sec)
INFO:resnet50_trainer:Training loss: 7.39205980301, accuracy: 0.0
INFO:resnet50_trainer:Finished iteration 2/11718 of epoch 0 (378.51 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 3/11718 of epoch 0 (387.87 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 4/11718 of epoch 0 (383.28 images/sec)
INFO:resnet50_trainer:Training loss: 0.0, accuracy: 1.0
INFO:resnet50_trainer:Finished iteration 5/11718 of epoch 0 (381.71 images/sec)
...
However, as seen above, there is a problem where accuracy comes out as 1.0 right from the start. This resnet50_trainer.py issue has been discussed on the caffe2 GitHub (link below), but there is no clear fix yet. It does not, however, interfere with measuring relative system performance.
https://github.com/caffe2/caffe2/issues/810
3) pytorch
This time we test with the pytorch image. Start the docker image as follows; note that the --ipc=host option is used here to avoid the hang described at https://discuss.pytorch.org/t/imagenet-example-is-crashing/1363/2 .
root@minsky:~# nvidia-docker run -ti --rm --ipc=host -v /data:/data bsyu/pytorch-ppc64le:v0.1 bash
Inside the container, the relevant environment variables look like this:
root@8ccd72116fee:~# env | grep PATH
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/opt/anaconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
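You can also quickly confirm which GPUs PyTorch sees inside the container (no NV_GPU restriction was applied here, so all GPUs should be visible). A minimal sketch of my own, not from the original post:

# Check which GPUs PyTorch can see inside the container.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count     :", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print("  GPU %d: %s" % (i, torch.cuda.get_device_name(i)))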
Run the simplest example, mnist, as follows. Ten epochs take roughly 1 minute 30 seconds.
root@8ccd72116fee:/data/imsi/examples/mnist# time python main.py --batch-size 512 --epochs 10
...
Train Epoch: 9 [25600/60000 (42%)] Loss: 0.434816
Train Epoch: 9 [30720/60000 (51%)] Loss: 0.417652
Train Epoch: 9 [35840/60000 (59%)] Loss: 0.503125
Train Epoch: 9 [40960/60000 (68%)] Loss: 0.477776
Train Epoch: 9 [46080/60000 (76%)] Loss: 0.346416
Train Epoch: 9 [51200/60000 (85%)] Loss: 0.361492
Train Epoch: 9 [56320/60000 (93%)] Loss: 0.383941
Test set: Average loss: 0.1722, Accuracy: 9470/10000 (95%)
Train Epoch: 10 [0/60000 (0%)] Loss: 0.369119
Train Epoch: 10 [5120/60000 (8%)] Loss: 0.377726
Train Epoch: 10 [10240/60000 (17%)] Loss: 0.402854
Train Epoch: 10 [15360/60000 (25%)] Loss: 0.349409
Train Epoch: 10 [20480/60000 (34%)] Loss: 0.295271
...
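The mnist script above trains on just one GPU. For reference, making a PyTorch model use several GPUs is usually a one-line change with nn.DataParallel; the following is a minimal sketch of my own, not part of the example:

import torch
import torch.nn as nn
from torch.autograd import Variable

# Any nn.Module can be wrapped in DataParallel; each input batch is then split
# across the visible GPUs and the outputs are gathered back on GPU 0.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

x = Variable(torch.randn(512, 784)).cuda()   # a dummy batch of 512 samples
out = model(x)
print(out.size())                            # (512, 10)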
To exercise multiple GPUs with the stock examples, though, you need to run the imagenet example below, which requires downloading and unpacking the ilsvrc2012 dataset. The data was unpacked as JPEG files into /data/imagenet_dir/train and /data/imagenet_dir/val as follows.
root@minsky:/data/imagenet_dir/train# while read SYNSET; do
> mkdir -p ${SYNSET}
> tar xf ../../ILSVRC2012_img_train.tar "${SYNSET}.tar"
> tar xf "${SYNSET}.tar" -C "${SYNSET}"
> rm -f "${SYNSET}.tar"
> done < /opt/DL/caffe-nv/data/ilsvrc12/synsets.txt
root@minsky:/data/imagenet_dir/train# ls -1 | wc -l
1000
root@minsky:/data/imagenet_dir/train# du -sm .
142657 .
root@minsky:/data/imagenet_dir/train# find . | wc -l
1282168
root@minsky:/data/imagenet_dir/val# ls -1 | wc -l
50000
If you run main.py as-is at this point, you will hit the error below. The reason is that main.py expects the val directory, too, to contain per-label subdirectories with the JPEG files inside.
RuntimeError: Found 0 images in subfolders of: /data/imagenet_dir/val
Supported image extensions are: .jpg,.JPG,.jpeg,.JPEG,.png,.PNG,.ppm,.PPM,.bmp,.BMP
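The imagenet main.py builds its datasets with torchvision's ImageFolder, which treats every immediate subdirectory as one class label; that is why a flat val directory fails. A minimal sketch of that behaviour (my own illustration, using the paths above):

import torchvision.datasets as datasets

# ImageFolder maps each immediate subdirectory to one class; with a flat val/
# directory it finds no class folders and raises the error shown above.
val_dataset = datasets.ImageFolder('/data/imagenet_dir/val')
print("classes:", len(val_dataset.classes))   # expected: 1000 after the preprocessing below
print("images :", len(val_dataset))           # expected: 50000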
Therefore, redistribute the JPEG files into per-label directories with preprocess_imagenet_validation_data.py from the inception directory, as follows.
root@minsky:/data/models/research/inception/inception/data# python preprocess_imagenet_validation_data.py /data/imagenet_dir/val imagenet_2012_validation_synset_labels.txt
Checking again, you can see the files have been redistributed by label.
root@minsky:/data/imagenet_dir/val# ls | head -n 3
n01440764
n01443537
n01484850
root@minsky:/data/imagenet_dir/val# ls | wc -l
1000
root@minsky:/data/imagenet_dir/val# find . | wc -l
51001
Now run main.py as follows.
root@8ccd72116fee:~# cd /data/imsi/examples/imagenet
root@8ccd72116fee:/data/imsi/examples/imagenet# time python main.py -a resnet18 --epochs 1 /data/imagenet_dir
=> creating model 'resnet18'
Epoch: [0][0/5005] Time 11.237 (11.237) Data 2.330 (2.330) Loss 7.0071 (7.0071) Prec@1 0.391 (0.391) Prec@5 0.391 (0.391)
Epoch: [0][10/5005] Time 0.139 (1.239) Data 0.069 (0.340) Loss 7.1214 (7.0515) Prec@1 0.000 (0.284) Prec@5 0.000 (1.065)
Epoch: [0][20/5005] Time 0.119 (0.854) Data 0.056 (0.342) Loss 7.1925 (7.0798) Prec@1 0.000 (0.260) Prec@5 0.781 (0.930)
...
* The docker images used above were backed up as follows.
root@minsky:/data/docker_save# docker save --output caffe2-ppc64le.v0.3.tar bsyu/caffe2-ppc64le:v0.3
root@minsky:/data/docker_save# docker save --output pytorch-ppc64le.v0.1.tar bsyu/pytorch-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output tf1.3-ppc64le.v0.1.tar bsyu/tf1.3-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda2-ppc64le.v0.1.tar bsyu/cudnn6-conda2-ppc64le:v0.1
root@minsky:/data/docker_save# docker save --output cudnn6-conda3-ppc64le.v0.1.tar bsyu/cudnn6-conda3-ppc64le:v0.1
root@minsky:/data/docker_save# ls -l
total 28023280
-rw------- 1 root root 4713168896 Nov 10 16:48 caffe2-ppc64le.v0.3.tar
-rw------- 1 root root 4218520064 Nov 10 17:10 cudnn6-conda2-ppc64le.v0.1.tar
-rw------- 1 root root 5272141312 Nov 10 17:11 cudnn6-conda3-ppc64le.v0.1.tar
-rw------- 1 root root 6921727488 Nov 10 16:51 pytorch-ppc64le.v0.1.tar
-rw------- 1 root root 7570257920 Nov 10 16:55 tf1.3-ppc64le.v0.1.tar
In an emergency, these images can be restored with the docker load command.
(e.g.) docker load --input caffe2-ppc64le.v0.3.tar
Thursday, November 9, 2017
Installing R 3.4.1 and integrating it with the existing RStudio
Sometimes you need to use a new R 3.4.1 while leaving an older version of R (say, 3.3.2) in place. In that case, install it following the steps below.
First, download the R server 3.4.1 that I built on ppc64le (POWER8) from the link here, upload it to your home directory on the existing server, and unpack it there.
Here it was unpacked in the home directory of the user u0017649, but any location will do. The existing version is typically installed under /usr/local/lib/R.
u0017649@sys-89830:~$ tar -zxf R341.tgz
u0017649@sys-89830:~$ cd R
u0017649@sys-89830:~/R$ pwd
/home/u0017649/R
To hook the newly unpacked R into RStudio, move the existing /usr/bin/R out of the way and link the new R binary to /usr/bin/R as follows.
u0017649@sys-89830:~/R$ sudo mv /usr/bin/R /usr/bin/R.old
u0017649@sys-89830:~/R$ sudo ln /home/u0017649/R/bin/R /usr/bin/R
Now restart RStudio Server.
u0017649@sys-89830:~/R$ sudo systemctl stop rstudio-server.service
u0017649@sys-89830:~/R$ sudo systemctl start rstudio-server.service
u0017649@sys-89830:~/R$ sudo systemctl status rstudio-server.service
● rstudio-server.service - LSB: RStudio Server
Loaded: loaded (/etc/init.d/rstudio-server; bad; vendor preset: enabled)
Active: active (running) since Wed 2017-11-08 20:26:40 EST; 7s ago
Docs: man:systemd-sysv-generator(8)
Process: 12456 ExecStop=/etc/init.d/rstudio-server stop (code=exited, status=0/SUCCESS)
Process: 12469 ExecStart=/etc/init.d/rstudio-server start (code=exited, status=0/SUCCESS)
Tasks: 3
Memory: 10.1M
CPU: 661ms
CGroup: /system.slice/rstudio-server.service
└─12477 /usr/local/lib/rstudio-server/bin/rserver
Nov 08 20:26:40 sys-89830 systemd[1]: Starting LSB: RStudio Server...
Nov 08 20:26:40 sys-89830 systemd[1]: Started LSB: RStudio Server.
Then connect to http://<server address>:8787 as usual; it works as expected.
Note 1) The existing RStudio version (Version 99.9.9) integrates fine with the new R 3.4.1, as described above.
Note 2) The CRAN R 3.4.1 used here can be built by following the steps in the earlier post http://hwengineer.blogspot.kr/2017/06/ppc64le-ubuntu-cran-r-package-rstudio.html
Note 3) The R packages installed inside this R341.tgz are as follows.
> installed.packages()
The full output of installed.packages() is very wide; only the Package and Version columns are reproduced here. The remaining columns (LibPath, Priority, Depends, Imports, LinkingTo, Suggests, Enhances, License, NeedsCompilation, Built, etc.) are omitted for readability, and the listing was cut off by max.print after these packages.

 Package        Version
 anim.plots     0.2
 animation      2.5
 assertthat     0.2.0
 BH             1.65.0-1
 bindr          0.1
 bindrcpp       0.2
 bitops         1.0-6
 boot           1.3-20
 Boruta         5.2.0
 caret          6.0-77
 caretEnsemble  2.0.0
 caTools        1.17.1
 class          7.3-14
 cluster        2.0.6
 codetools      0.2-15
 colorspace     1.3-2
 curl           3.0
 CVST           0.2-1
 data.table     1.10.4-3
 ddalpha        1.3.1
 DEoptimR       1.0-8
 dichromat      2.0-0
 digest         0.6.12
 dimRed         0.1.0
 dplyr          0.7.4
 DRR            0.0.2
 e1071          1.6-8
 evaluate       0.10.1
 foreach        1.4.3
 foreign        0.8-69
 gdata          2.18.0
 ggplot2        2.2.1
 glue           1.2.0
 gower          0.1.2
 gplots         3.0.1
 gridExtra      2.3
 gtable         0.2.0
 gtools         3.5.0
 highr          0.6
 ipred          0.9-6
 iterators      1.0.8
 kernlab        0.9-25
 KernSmooth     2.23-15
 knitr          1.17
 labeling       0.3
 lattice        0.20-35
 lava           1.5.1
 lazyeval       0.2.1
 lubridate      1.7.1
 magrittr       1.5
 markdown       0.8
 MASS           7.3-47
 Matrix         1.2-11
 mgcv           1.8-22
 mime           0.5
 ModelMetrics   1.1.0
 munsell        0.4.3
 nlme           3.1-131
 numDeriv       2016.8-1
 pbapply        1.3-3
 pkgconfig      2.0.1
 plogr          0.1-1
 [ reached getOption("max.print") -- omitted 61 rows ]