Tuesday, October 27, 2020

Configuring Spectrum Scale (GPFS) v5 on Red Hat 7 on IBM POWER9 (ppc64le)

 


The setup described here registers the gw server (2.1.1.5) as the only GPFS server, and the two servers tac1 and tac2 (2.1.1.3 and 2.1.1.4, respectively) as GPFS client nodes.  In other words, the physical GPFS disks are attached directly to the gw server, while tac1 and tac2 receive the GPFS filesystems served by gw over the network in the form of NSDs (Network Shared Disks).
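For reference, a minimal /etc/hosts sketch for the GPFS network, assuming the private hostnames gwp, tac1p, and tac2p that appear in the command output later map to these addresses (this mapping is an assumption, not something configured in this post):

2.1.1.5   gwp    # GPFS server / NSD server
2.1.1.3   tac1p  # GPFS client
2.1.1.4   tac2p  # GPFS client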


First, disable firewalld on all servers.  In addition, set up passwordless SSH between the servers in advance (a sketch of the key setup follows the firewalld commands below).


[root@gw ~]# systemctl stop firewalld


[root@gw ~]# systemctl disable firewalld
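
A minimal sketch of the passwordless SSH setup, assuming root SSH logins are allowed and the hostnames gw, tac1, and tac2 resolve on every node; repeat on tac1 and tac2 if you want passwordless SSH in every direction:

[root@gw ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa              # generate a key pair with no passphrase
[root@gw ~]# for h in gw tac1 tac2; do ssh-copy-id root@$h; done   # push the public key to every node, including gw itself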


Here I will install using the GPFS (now renamed IBM Spectrum Scale) installer.  Starting with GPFS v5, the install toolkit is convenient: you run the installation from a single node and it installs the other cluster nodes automatically.  Running the install file first starts a self-extraction that creates the installation files.


[root@gw SW]# ./Spectrum_Scale_Advanced-5.0.4.0-ppc64LE-Linux-install

Extracting Product RPMs to /usr/lpp/mmfs/5.0.4.0 ...

tail -n +641 ./Spectrum_Scale_Advanced-5.0.4.0-ppc64LE-Linux-install | tar -C /usr/lpp/mmfs/5.0.4.0 --wildcards -xvz  installer gpfs_rpms/rhel/rhel7 hdfs_debs/ubuntu16/hdfs_3.1.0.x hdfs_rpms/rhel7/hdfs_2.7.3.x hdfs_rpms/rhel7/hdfs_3.0.0.x hdfs_rpms/rhel7/hdfs_3.1.0.x zimon_debs/ubuntu/ubuntu16 ganesha_rpms/rhel7 ganesha_rpms/rhel8 gpfs_debs/ubuntu16 gpfs_rpms/sles12 object_rpms/rhel7 smb_rpms/rhel7 smb_rpms/rhel8 tools/repo zimon_debs/ubuntu16 zimon_rpms/rhel7 zimon_rpms/rhel8 zimon_rpms/sles12 zimon_rpms/sles15 gpfs_debs gpfs_rpms manifest 1> /dev/null

   - installer

   - gpfs_rpms/rhel/rhel7

   - hdfs_debs/ubuntu16/hdfs_3.1.0.x

   - hdfs_rpms/rhel7/hdfs_2.7.3.x

...

   - gpfs_debs

   - gpfs_rpms

   - manifest


Removing License Acceptance Process Tool from /usr/lpp/mmfs/5.0.4.0 ...

rm -rf  /usr/lpp/mmfs/5.0.4.0/LAP_HOME /usr/lpp/mmfs/5.0.4.0/LA_HOME


Removing JRE from /usr/lpp/mmfs/5.0.4.0 ...

rm -rf /usr/lpp/mmfs/5.0.4.0/ibm-java*tgz


==================================================================

Product packages successfully extracted to /usr/lpp/mmfs/5.0.4.0


   Cluster installation and protocol deployment

      To install a cluster or deploy protocols with the Spectrum Scale Install Toolkit:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale -h

      To install a cluster manually:  Use the gpfs packages located within /usr/lpp/mmfs/5.0.4.0/gpfs_<rpms/debs>


      To upgrade an existing cluster using the Spectrum Scale Install Toolkit:

      1) Copy your old clusterdefinition.txt file to the new /usr/lpp/mmfs/5.0.4.0/installer/configuration/ location

      2) Review and update the config:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config update

      3) (Optional) Update the toolkit to reflect the current cluster config:

         /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config populate -N <node>

      4) Run the upgrade:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale upgrade -h


      To add nodes to an existing cluster using the Spectrum Scale Install Toolkit:

      1) Add nodes to the clusterdefinition.txt file:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node add -h

      2) Install GPFS on the new nodes:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale install -h

      3) Deploy protocols on the new nodes:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale deploy -h


      To add NSDs or file systems to an existing cluster using the Spectrum Scale Install Toolkit:

      1) Add nsds and/or filesystems with:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale nsd add -h

      2) Install the NSDs:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale install -h

      3) Deploy the new file system:  /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale deploy -h


      To update the toolkit to reflect the current cluster config examples:

         /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config populate -N <node>

      1) Manual updates outside of the install toolkit

      2) Sync the current cluster state to the install toolkit prior to upgrade

      3) Switching from a manually managed cluster to the install toolkit


==================================================================================

To get up and running quickly, visit our wiki for an IBM Spectrum Scale Protocols Quick Overview:

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Protocols%20Quick%20Overview%20for%20IBM%20Spectrum%20Scale

===================================================================================



First, designate the gw server, i.e. 2.1.1.5, as the installer node with the spectrumscale command, as shown below.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale setup -s 2.1.1.5


Next, register the gw server as a manager node (-m), an NSD server (-n), and an admin node (-a).


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node add 2.1.1.5 -m -n -a

[ INFO  ] Adding node gw as a GPFS node.

[ INFO  ] Adding node gw as a manager node.

[ INFO  ] Adding node gw as an NSD server.

[ INFO  ] Configuration updated.

[ INFO  ] Tip :If all node designations are complete, add NSDs to your cluster definition and define required filessytems:./spectrumscale nsd add <device> -p <primary node> -s <secondary node> -fs <file system>

[ INFO  ] Setting gw as an admin node.

[ INFO  ] Configuration updated.

[ INFO  ] Tip : Designate protocol or nsd nodes in your environment to use during install:./spectrumscale node add <node> -p -n



Register each node as a quorum node.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node add 2.1.1.3 -q


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node add 2.1.1.4 -q


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node add 2.1.1.5 -q

[ INFO  ] Adding node gwp as a quorum node.



Check the node list.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale node list

[ INFO  ] List of nodes in current configuration:

[ INFO  ] [Installer Node]

[ INFO  ] 2.1.1.5

[ INFO  ]

[ INFO  ] [Cluster Details]

[ INFO  ] No cluster name configured

[ INFO  ] Setup Type: Spectrum Scale

[ INFO  ]

[ INFO  ] [Extended Features]

[ INFO  ] File Audit logging     : Disabled

[ INFO  ] Watch folder           : Disabled

[ INFO  ] Management GUI         : Disabled

[ INFO  ] Performance Monitoring : Enabled

[ INFO  ] Callhome               : Enabled

[ INFO  ]

[ INFO  ] GPFS  Admin  Quorum  Manager   NSD   Protocol  Callhome   OS   Arch

[ INFO  ] Node   Node   Node     Node   Server   Node     Server

[ INFO  ] gw      X       X       X       X                       rhel7  ppc64le

[ INFO  ] tac1p           X                                       rhel7  ppc64le

[ INFO  ] tac2p           X                                       rhel7  ppc64le

[ INFO  ]

[ INFO  ] [Export IP address]

[ INFO  ] No export IP addresses configured



Register sdc and sdd as NSDs.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale nsd add /dev/sdc -p 2.1.1.5 --name data_nsd -fs data


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale nsd add /dev/sdd -p 2.1.1.5 --name backup_nsd -fs backup


Check the NSDs.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale nsd list

[ INFO  ] Name       FS     Size(GB) Usage   FG Pool    Device   Servers

[ INFO  ] data_nsd   data   400      Default 1  Default /dev/sdc [gwp]

[ INFO  ] backup_nsd backup 400      Default 1  Default /dev/sdd [gwp]


Check the filesystems.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale filesystem list

[ INFO  ] Name   BlockSize   Mountpoint   NSDs Assigned  Default Data Replicas     Max Data Replicas     Default Metadata Replicas     Max Metadata Replicas

[ INFO  ] data   Default (4M)/ibm/data    1              1                         2                     1                             2

[ INFO  ] backup Default (4M)/ibm/backup  1              1                         2                     1                             2

[ INFO  ]



Define the GPFS cluster name.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config gpfs -c tac_gpfs

[ INFO  ] Setting GPFS cluster name to tac_gpfs


Specify ssh and scp as the commands used for communication with the other nodes.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config gpfs -r /usr/bin/ssh

[ INFO  ] Setting Remote shell command to /usr/bin/ssh


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config gpfs -rc /usr/bin/scp

[ INFO  ] Setting Remote file copy command to /usr/bin/scp


Verify the settings.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config gpfs --list

[ INFO  ] Current settings are as follows:

[ INFO  ] GPFS cluster name is tac_gpfs.

[ INFO  ] GPFS profile is default.

[ INFO  ] Remote shell command is /usr/bin/ssh.

[ INFO  ] Remote file copy command is /usr/bin/scp.

[ WARN  ] No value for GPFS Daemon communication port range in clusterdefinition file.
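
The warning about the daemon communication port range can be ignored here: with firewalld disabled on a closed network, the GPFS defaults are fine.  If your environment does require a fixed port range, the toolkit exposes an option for it; check the built-in help rather than relying on a specific flag name:

[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale config gpfs -h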



By default, GPFS servers have a callhome feature that contacts IBM when a failure occurs.  Since these nodes are not connected to the Internet, disable it.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale callhome disable

[ INFO  ] Disabling the callhome.

[ INFO  ] Configuration updated.


Everything is now ready for the install.  Run a precheck before installing.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale install -pr

[ INFO  ] Logging to file: /usr/lpp/mmfs/5.0.4.0/installer/logs/INSTALL-PRECHECK-23-10-2020_21:13:23.log

[ INFO  ] Validating configuration

...

[ INFO  ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable

[ INFO  ] Pre-check successful for install.

[ INFO  ] Tip : ./spectrumscale install


If there are no problems, run the install.  This installs GPFS not only on gw but also on tac1 and tac2.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale install

...

[ INFO  ] GPFS active on all nodes

[ INFO  ] GPFS ACTIVE

[ INFO  ] Checking state of NSDs

[ INFO  ] NSDs ACTIVE

[ INFO  ] Checking state of Performance Monitoring

[ INFO  ] Running Performance Monitoring post-install checks

[ INFO  ] pmcollector running on all nodes

[ INFO  ] pmsensors running on all nodes

[ INFO  ] Performance Monitoring ACTIVE

[ INFO  ] SUCCESS

[ INFO  ] All services running

[ INFO  ] StanzaFile and NodeDesc file for NSD, filesystem, and cluster setup have been saved to /usr/lpp/mmfs folder on node: gwp

[ INFO  ] Installation successful. 3 GPFS nodes active in cluster tac_gpfs.tac1p. Completed in 2 minutes 52 seconds.

[ INFO  ] Tip :If all node designations and any required protocol configurations are complete, proceed to check the deploy configuration:./spectrumscale deploy --precheck



For reference, if you see an error like the one below, it is because the disk was previously used as a GPFS NSD.


[ FATAL ] gwp failure whilst: Creating NSDs  (SS16)

[ WARN  ] SUGGESTED ACTION(S):

[ WARN  ] Review your NSD device configuration in configuration/clusterdefinition.txt

[ WARN  ] Ensure all disks are not damaged and can be written to.

[ FATAL ] FAILURE REASON(s) for gwp:

[ FATAL ] gwp ---- Begin output of /usr/lpp/mmfs/bin/mmcrnsd -F /usr/lpp/mmfs/StanzaFile  ----

[ FATAL ] gwp STDOUT: mmcrnsd: Processing disk sdc

[ FATAL ] gwp mmcrnsd: Processing disk sdd

[ FATAL ] gwp STDERR: mmcrnsd: Disk device sdc refers to an existing NSD

[ FATAL ] gwp mmcrnsd: Disk device sdd refers to an existing NSD

[ FATAL ] gwp mmcrnsd: Command failed. Examine previous error messages to determine cause.

[ FATAL ] gwp ---- End output of /usr/lpp/mmfs/bin/mmcrnsd -F /usr/lpp/mmfs/StanzaFile  ----

[ INFO  ] Detailed error log: /usr/lpp/mmfs/5.0.4.0/installer/logs/INSTALL-23-10-2020_21:20:05.log

[ FATAL ] Installation failed on one or more nodes. Check the log for more details.


This can be fixed by overwriting the first part of the disk with dd, as shown below.


[root@gw SW]# dd if=/dev/zero of=/dev/sdc bs=1M count=100

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 0.0736579 s, 1.4 GB/s


[root@gw SW]# dd if=/dev/zero of=/dev/sdd bs=1M count=100

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 0.0737598 s, 1.4 GB/s



Now check the state of each node.


[root@gw SW]# mmgetstate -a


 Node number  Node name        GPFS state

-------------------------------------------

       1      tac1p            active

       2      tac2p            active

       3      gwp              active
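
To see the full cluster definition (cluster name, node roles, and the remote shell/copy commands configured earlier), mmlscluster can be used as well (a sketch, output omitted):

[root@gw ~]# mmlscluster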


Check the NSD state.  Notice, however, that the NSDs show up as (free disk), i.e. they are not assigned to any GPFS filesystem.


[root@gw SW]# mmlsnsd


 File system   Disk name    NSD servers

---------------------------------------------------------------------------

 (free disk)   backup_nsd   gwp

 (free disk)   data_nsd     gwp


Listing the filesystems again with the spectrumscale filesystem list command does show the filesystem definitions, but the mount points are wrong: /ibm/data and so on.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale filesystem list

[ INFO  ] Name   BlockSize   Mountpoint   NSDs Assigned  Default Data Replicas     Max Data Replicas     Default Metadata Replicas     Max Metadata Replicas

[ INFO  ] data   Default (4M)/ibm/data    1              1                         2                     1                             2

[ INFO  ] backup Default (4M)/ibm/backup  1              1                         2                     1                             2

[ INFO  ]


Fix the incorrect mount points.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale filesystem modify data -m /data

[ INFO  ] The data filesystem will be mounted at /data on all nodes.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale filesystem modify backup -m /backup

[ INFO  ] The backup filesystem will be mounted at /backup on all nodes.


Verify the change.  However, the filesystems are still not mounted.


[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale filesystem list

[ INFO  ] Name   BlockSize   Mountpoint   NSDs Assigned  Default Data Replicas     Max Data Replicas     Default Metadata Replicas     Max Metadata Replicas

[ INFO  ] data   Default (4M)/data        1              1                         2                     1                             2

[ INFO  ] backup Default (4M)/backup      1              1                         2                     1                             2

[ INFO  ]
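
For reference, the toolkit route from this point would normally be the deploy phase, which is what actually creates the file systems defined above (a sketch; this is not the route taken here):

[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale deploy --precheck
[root@gw SW]# /usr/lpp/mmfs/5.0.4.0/installer/spectrumscale deploy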


To sort this out, I will configure the GPFS filesystems the old way, using the mmcrnsd and mmcrfs commands.  First, create disk description files as shown below.


[root@gw ~]# vi /home/SW/gpfs/disk.desc1

/dev/sdc:gwp::dataAndMetadata:1:nsd_data


[root@gw ~]# vi /home/SW/gpfs/disk.desc2

/dev/sdd:gwp::dataAndMetadata:1:nsd_backup
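
For reference, newer GPFS releases prefer the stanza format for the same information; an equivalent stanza for the data disk might look like this (a sketch reusing the names above):

%nsd:
  device=/dev/sdc
  nsd=nsd_data
  servers=gwp
  usage=dataAndMetadata
  failureGroup=1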


Then, to erase the old NSD format, overwrite the beginning of sdc and sdd with dd as shown below.


[root@gw ~]# dd if=/dev/zero of=/dev/sdc bs=1M count=100

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 0.0130229 s, 8.1 GB/s


[root@gw ~]# dd if=/dev/zero of=/dev/sdd bs=1M count=100

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 0.0128207 s, 8.2 GB/s


Run the mmcrnsd and mmcrfs commands to create the NSDs and the GPFS filesystems.


[root@gw ~]# mmcrnsd -F /home/SW/gpfs/disk.desc1


[root@gw ~]# mmcrnsd -F /home/SW/gpfs/disk.desc2


[root@gw ~]# mmcrfs /data /dev/nsd_data -F /home/SW/gpfs/disk.desc1


The following disks of nsd_data will be formatted on node gw:

    nsd_data: size 409600 MB

Formatting file system ...

Disks up to size 3.18 TB can be added to storage pool system.

Creating Inode File

Creating Allocation Maps

Creating Log Files

Clearing Inode Allocation Map

Clearing Block Allocation Map

Formatting Allocation Map for storage pool system

Completed creation of file system /dev/nsd_data.

mmcrfs: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.



[root@gw ~]# mmcrfs /backup /dev/nsd_backup -F /home/SW/gpfs/disk.desc2


The following disks of nsd_backup will be formatted on node gw:

    nsd_backup: size 409600 MB

Formatting file system ...

Disks up to size 3.18 TB can be added to storage pool system.

Creating Inode File

Creating Allocation Maps

Creating Log Files

Clearing Inode Allocation Map

Clearing Block Allocation Map

Formatting Allocation Map for storage pool system

Completed creation of file system /dev/nsd_backup.

mmcrfs: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.
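
Before mounting, it is worth confirming that the NSDs are now tied to the new filesystems and that the mount points are the ones we set; a quick check (a sketch, output omitted):

[root@gw ~]# mmlsnsd            # the NSDs should now list their filesystem instead of (free disk)
[root@gw ~]# mmlsfs all -T      # shows the default mount point of each filesystem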


Now mount the filesystems on all nodes.


[root@gw ~]# mmmount all -a

Sat Oct 24 09:45:43 KST 2020: mmmount: Mounting file systems ...


[root@gw ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

devtmpfs        1.7G     0  1.7G   0% /dev

tmpfs           1.8G   18M  1.8G   1% /dev/shm

tmpfs           1.8G   96M  1.7G   6% /run

tmpfs           1.8G     0  1.8G   0% /sys/fs/cgroup

/dev/sda5        50G  5.4G   45G  11% /

/dev/sda6       345G  8.6G  337G   3% /home

/dev/sda2      1014M  178M  837M  18% /boot

tmpfs           355M     0  355M   0% /run/user/0

/dev/sr0        3.4G  3.4G     0 100% /home/cdrom

nsd_backup      400G  2.8G  398G   1% /backup

nsd_data        400G  2.8G  398G   1% /data
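
To confirm that every node has mounted the filesystems without logging in to each one, mmlsmount can also be used (a sketch, output omitted):

[root@gw ~]# mmlsmount all -L   # lists, per filesystem, the nodes that currently have it mounted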


For testing, copy the /etc/hosts file into /data.


[root@gw ~]# cp /etc/hosts /data


[root@gw ~]# ls -l /data

total 1

-rw-r--r--. 1 root root 298 Oct 24 09:49 hosts



Check that the filesystems are mounted on the client nodes as well, and that the hosts file copied earlier is visible there.


[root@gw ~]# ssh tac1

Last login: Sat Oct 24 09:33:46 2020 from gwp


[root@tac1 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

devtmpfs         28G     0   28G   0% /dev

tmpfs            28G     0   28G   0% /dev/shm

tmpfs            28G   15M   28G   1% /run

tmpfs            28G     0   28G   0% /sys/fs/cgroup

/dev/sda5        50G  3.3G   47G   7% /

/dev/sda6       321G  2.8G  319G   1% /home

/dev/sda2      1014M  178M  837M  18% /boot

tmpfs           5.5G     0  5.5G   0% /run/user/0

nsd_data        400G  2.8G  398G   1% /data

nsd_backup      400G  2.8G  398G   1% /backup


[root@tac1 ~]# ls -l /data

total 1

-rw-r--r--. 1 root root 298 Oct 24 09:49 hosts




[root@gw ~]# ssh tac2

Last login: Sat Oct 24 09:33:46 2020 from gwp


[root@tac2 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

devtmpfs         28G     0   28G   0% /dev

tmpfs            28G     0   28G   0% /dev/shm

tmpfs            28G   15M   28G   1% /run

tmpfs            28G     0   28G   0% /sys/fs/cgroup

/dev/sda5        50G  3.2G   47G   7% /

/dev/sda6       321G  3.3G  318G   2% /home

/dev/sda2      1014M  178M  837M  18% /boot

tmpfs           5.5G     0  5.5G   0% /run/user/0

nsd_backup      400G  2.8G  398G   1% /backup

nsd_data        400G  2.8G  398G   1% /data



[root@tac2 ~]# ls -l /data

total 1

-rw-r--r--. 1 root root 298 Oct 24 09:49 hosts





For reference, you can tell which disks are GPFS NSDs with the fdisk command, as shown below.  In the output of fdisk -l, a disk that shows a partition of type "IBM General Par" named "GPFS:", as below, is a GPFS NSD.



[root@tac1 ~]# fdisk -l | grep sd

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 429.5 GB, 429496729600 bytes, 838860800 sectors

/dev/sda1   *        2048       10239        4096   41  PPC PReP Boot

/dev/sda2           10240     2107391     1048576   83  Linux

/dev/sda3         2107392    60829695    29361152   82  Linux swap / Solaris

/dev/sda4        60829696   838860799   389015552    5  Extended

/dev/sda5        60831744   165689343    52428800   83  Linux

/dev/sda6       165691392   838860799   336584704   83  Linux

Disk /dev/sdb: 429.5 GB, 429496729600 bytes, 838860800 sectors

Disk /dev/sdc: 429.5 GB, 429496729600 bytes, 838860800 sectors

Disk /dev/sdd: 429.5 GB, 429496729600 bytes, 838860800 sectors

Disk /dev/sde: 429.5 GB, 429496729600 bytes, 838860800 sectors

Disk /dev/sdf: 429.5 GB, 429496729600 bytes, 838860800 sectors

Disk /dev/sdg: 429.5 GB, 429496729600 bytes, 838860800 sectors



[root@tac1 ~]# fdisk -l /dev/sdb

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.


Disk /dev/sdb: 429.5 GB, 429496729600 bytes, 838860800 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

Disk identifier: 236CE033-C570-41CC-8D2E-E20E6F494C38



#         Start          End    Size  Type            Name

 1           48    838860751    400G  IBM General Par GPFS:



[root@tac1 ~]# fdisk -l /dev/sdc

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.


Disk /dev/sdc: 429.5 GB, 429496729600 bytes, 838860800 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

Disk identifier: 507A299C-8E96-49E2-8C25-9D051BC9B935



#         Start          End    Size  Type            Name

 1           48    838860751    400G  IBM General Par GPFS:



An ordinary disk shows up plainly, like this:


[root@tac1 ~]# fdisk -l /dev/sdd


Disk /dev/sdd: 429.5 GB, 429496729600 bytes, 838860800 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes
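
Alternatively, on a cluster node you can ask GPFS itself which local devices back the NSDs (a sketch, output omitted):

[root@gw ~]# mmlsnsd -m         # maps each NSD name to its /dev device on the NSD server node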


