ZFS

Zpool

To check zpool status

zpool status [<volume>]
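
For example, to show verbose error detail for a single pool (using the performance pool named elsewhere on this page):

zpool status -v performance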

To clear errors on a drive that you believe were reported incorrectly.

zpool clear <volume name>
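
A specific device can also be named to clear errors on just that drive, for example (combining the pool and disk names used elsewhere on this page):

zpool clear performance c8t5000C5005785138Bd0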

To add a Zeus drive as a log (ZIL) device to the performance pool

zpool add performance log <drive>
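
For example, with hypothetical device names, a mirrored log pair can be added in one step:

zpool add performance log mirror c1t2d0 c1t3d0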

Adding a spare drive to a volume

zpool add performance spare <drive>
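
For example, with a hypothetical device name:

zpool add performance spare c1t4d0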

Detaching a spare drive that is attached to a volume

zpool detach <volume> <Drive>
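
For example (an illustrative combination of the pool and disk names used elsewhere on this page):

zpool detach hermes c8t5000C5005785138Bd0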

Removing a drive from a volume (required to remove spare drives after the RAID has rebuilt)

zpool remove <Volume> <Drive>
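
For example, removing the hypothetical spare added above:

zpool remove performance c1t4d0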

List all devices/drives on controller 0

sas2ircu 0 DISPLAY

Display drive information by serial number, then turn on the locate LED for enclosure 3, slot 7

sas2ircu 0 DISPLAY | grep -B 9 -A 4 <Serial Number>
sas2ircu 0 locate 3:7 ON  
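
To turn the locate LED back off afterwards:

sas2ircu 0 locate 3:7 OFF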

Offline a drive (by pool name and device GUID)

zpool offline hermes 15935140517898495532
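
If the drive was offlined by mistake, it can be brought back with the matching online subcommand:

zpool online hermes 15935140517898495532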

Replace a disk in a ZFS pool

zpool replace hermes 15935140517898495532 /dev/disk/by-id/ata-ST3500320AS_9QM03ATQ
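
Resilver progress can then be watched on the same pool:

zpool status -v hermes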

Nexenta

Disk / Lun

To blink a drive's locate LED, use the following command in NMC

show lun c8t5000C5005785138Bd0 blink -y
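
To list the LUNs and find the device name to blink (assuming the bare show lun form in NMC lists them all):

show lun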

Nexenta Mgmt.

To create a Nexenta collector report, run the following command from a root prompt

nexenta-collector --no-upload

Printing Each JBOD Slotmap

nmc -c "show jbod jbod:1 slotmap" | less

Check HA Status from command line

/opt/HAC/RSF-1/bin/rsfcli status

Synchronize Disk Location

nmc -c "lunsync -r -y"

Migrate Pool to another server

/opt/HAC/RSF-1/bin/rsfcli -i0 move <volume> <hostname>
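
For example, moving the hermes pool to a hypothetical partner node named node2:

/opt/HAC/RSF-1/bin/rsfcli -i0 move hermes node2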

Solaris

List all faults on the system

fmadm faulty
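
To also list faults that have already been repaired or acquitted (assuming the standard Solaris -a flag):

fmadm faulty -a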

Clearing Faults

fmadm repair <faulty_id>

Checking Service status

svcs nm{s,v,cd} dbus rmvolmgr nmdtrace

Restarting Services

svcadm restart nms
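
The other management services listed above can be restarted the same way, for example the web GUI service:

svcadm restart nmv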

Loading/unloading the zfs-diagnosis module

fmadm load /usr/lib/fm/fmd/plugins/zfs-diagnosis.so
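
The matching unload (assuming fmadm unload accepts the module name as listed by fmadm config):

fmadm unload zfs-diagnosis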

Clearing dangling /dev links

devfsadm -Cv

Check status of the ARC and L2ARC

kstat -p zfs:0:arcstats
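
Individual statistics can also be queried directly, for example the current ARC size and target size:

kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c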