
RAID 1 resync incredibly slow

RealityDev
Aspirant

RAID 1 resync incredibly slow

I would've replied to this thread but I don't see any options to reply.
https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/ReadyNAS-4312X-Initial-Sync-Incredi...

 

When I try to use blockdev I get an error that says:

-bash: blockdev: command not found

 

What am I missing here?

 

 

Model: RN214|4 BAY Desktop ReadyNAS Storage
Message 1 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

 

When I try to use blockdev I get an error that says:

-bash: blockdev: command not found

 


It's on my RN202 (and my RN526)

root@RN202:~# which blockdev
/sbin/blockdev

What firmware are you running?  Are you logging in as root?

 


@RealityDev wrote:

https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/ReadyNAS-4312X-Initial-Sync-Incredi...

 


Of course, using the tips there is at your own risk.

  • I wouldn't disable NCQ myself. All the posts I've seen suggesting it are quite old, and they seem to target hardware RAID controllers (not software RAID).
  • I don't have any experience tuning read-ahead, so if I changed it during the sync, I'd change it back to the original value when it's done.  You can see the current value with blockdev --getra /dev/md127
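For reference, while a sync is running, /proc/mdstat already reports the current speed and the estimated finish time. A minimal shell sketch of pulling those two numbers out (the sample line below is a hypothetical stand-in; on the NAS you would read the real file with cat /proc/mdstat):

```shell
# Hypothetical /proc/mdstat progress line (stand-in for the real file)
mdstat_line='[>....................]  resync =  2.8% (112640000/3902000000) finish=8400.5min speed=7520K/sec'

# Pull out the reported speed (KB/s) and the estimated finish time (minutes)
speed=$(echo "$mdstat_line" | grep -o 'speed=[0-9]*K' | tr -d 'speed=K')
finish=$(echo "$mdstat_line" | grep -o 'finish=[0-9.]*min' | sed 's/finish=//;s/min//')
echo "resync speed: ${speed} KB/s, ETA: ${finish} minutes"
```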
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

The firmware version is 6.10.0, and logging in as root allowed me to change the settings. However, the settings don't seem to make any difference; it's still ultra slow, estimating over 140 hours to sync a 4TB mirrored RAID. I can't believe Netgear doesn't optimize the device to sync faster than this. Are people really okay with taking a week or more to do something that should take less than a day?

yxue
NETGEAR Expert

Re: RAID 1 resync incredibly slow

Can you use get_disk_info to see if there are errors on the disks?

StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

I can't believe Netgear does not optimize the device to sync faster than this, people are okay with taking a week or more to do something that should take less than a day? 


It shouldn't take a week for 2x4TB, so I think something is wrong.
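A quick back-of-the-envelope check (assumed numbers, not measurements): a healthy 7200 rpm drive should sustain sequential transfers somewhere around 100 MB/s, so mirroring 4TB ought to take on the order of half a day:

```shell
# Rough estimate of a 2x4TB RAID 1 sync time at an assumed 100 MB/s
size_mb=$((4 * 1000 * 1000))   # 4 TB expressed in MB
speed_mb_s=100                 # assumed sustained sync rate (MB/s)
secs=$((size_mb / speed_mb_s))
hours=$((secs / 3600))
echo "expected sync time: roughly ${hours} hours"
```

At 140 hours, the observed rate works out to under 10 MB/s, which points at a drive struggling with retries rather than a tuning problem.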

 

What disks are you using (manufacturer and model)?

 

Have you looked at the smart stats for both disks?

# smartctl -x /dev/sda
# smartctl -x /dev/sdb

The -x will give you some data on recent failed commands, which I think is relevant here.
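If the full -x output is overwhelming, the attributes that usually matter for a slow sync are 5 (Reallocated_Sector_Ct), 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable); any non-zero raw value deserves attention. A small sketch of filtering them (the sample text is a hypothetical stand-in; pipe the real smartctl output instead):

```shell
# Hypothetical excerpt of `smartctl -A` attribute rows (stand-in text)
smart_sample='  5 Reallocated_Sector_Ct   PO--CK   194   194   140    -    211
197 Current_Pending_Sector  -O--CK   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   100   253   000    -    0'

# Print attribute name and raw value (last column) for IDs 5, 197, 198
echo "$smart_sample" | awk '$1==5 || $1==197 || $1==198 {print $2, $NF}'
```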

RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

The drives we are using are the following:

Western Digital WD4000FYYZ ENTERPRISE 4TB 7200RPM, 64MB Cache SATA 6.0Gb/s 3.5"

RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Drive 1:


root@nas1:~# smartctl -x /dev/sda
smartctl 6.6 2017-11-05 r4594 [armv7l-linux-4.4.163.alpine.1] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Re
Device Model: WDC WD4000FYYZ-05UL1B0
Serial Number: WD-WMC130D3C11E
LU WWN Device Id: 5 0014ee 0ae925986
Firmware Version: 00.0NS05
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue May 21 09:13:42 2019 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Disabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (46080) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 497) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-K 200 200 051 - 0
3 Spin_Up_Time POS--K 253 253 021 - 5541
4 Start_Stop_Count -O--CK 100 100 000 - 12
5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0
7 Seek_Error_Rate -OSR-K 100 253 000 - 0
9 Power_On_Hours -O--CK 100 100 000 - 256
10 Spin_Retry_Count -O--CK 100 253 000 - 0
11 Calibration_Retry_Count -O--CK 100 253 000 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 12
16 Total_LBAs_Read -O---K 000 200 000 - 7480455366
183 Runtime_Bad_Block -O--CK 100 100 000 - 0
192 Power-Off_Retract_Count -O--CK 200 200 000 - 11
193 Load_Cycle_Count -O--CK 200 200 000 - 0
194 Temperature_Celsius -O---K 112 106 000 - 40
196 Reallocated_Event_Count -O--CK 200 200 000 - 0
197 Current_Pending_Sector -O--CK 200 200 000 - 0
198 Offline_Uncorrectable ----CK 100 253 000 - 0
199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0
200 Multi_Zone_Error_Rate ---R-- 100 253 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning

General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 5 Comprehensive SMART error log
0x03 GPL R/O 6 Ext. Comprehensive SMART error log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 1 Current Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xa0-0xa7 GPL,SL VS 16 Device vendor specific log
0xa8-0xb1 GPL,SL VS 1 Device vendor specific log
0xb2 GPL VS 65535 Device vendor specific log
0xb2 SL VS 255 Device vendor specific log
0xb3-0xb7 GPL,SL VS 1 Device vendor specific log
0xbd GPL,SL VS 1 Device vendor specific log
0xc0 GPL,SL VS 1 Device vendor specific log
0xc1 GPL VS 24 Device vendor specific log
0xd0 GPL VS 1 Device vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version: 3
SCT Version (vendor specific): 258 (0x0102)
SCT Support Level: 1
Device State: Active (0)
Current Temperature: 40 Celsius
Power Cycle Min/Max Temperature: 40/43 Celsius
Lifetime Min/Max Temperature: 24/46 Celsius
Under/Over Temperature Limit Count: 0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/60 Celsius
Min/Max Temperature Limit: -41/85 Celsius
Temperature History Size (Index): 478 (126)

Index Estimated Time Temperature Celsius
127 2019-05-21 01:16 41 **********************
... ..(257 skipped). .. **********************
385 2019-05-21 05:34 41 **********************
386 2019-05-21 05:35 40 *********************
... ..(217 skipped). .. *********************
126 2019-05-21 09:13 40 *********************

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 1 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 1 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x8000 4 67670 Vendor specific

RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Drive 2:

 


root@nas1:~# smartctl -x /dev/sdb
smartctl 6.6 2017-11-05 r4594 [armv7l-linux-4.4.163.alpine.1] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Re
Device Model: WDC WD4000FYYZ-05UL1B0
Serial Number: WD-WCC13JSVRE5D
LU WWN Device Id: 5 0014ee 2b54a011d
Firmware Version: 00.0NS05
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue May 21 09:14:53 2019 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Disabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (46800) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 506) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-K 106 067 051 - 310
3 Spin_Up_Time POS--K 253 251 021 - 5583
4 Start_Stop_Count -O--CK 100 100 000 - 36
5 Reallocated_Sector_Ct PO--CK 194 194 140 - 211
7 Seek_Error_Rate -OSR-K 200 200 000 - 0
9 Power_On_Hours -O--CK 100 100 000 - 257
10 Spin_Retry_Count -O--CK 100 253 000 - 0
11 Calibration_Retry_Count -O--CK 100 253 000 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 27
16 Total_LBAs_Read -O---K 000 200 000 - 6988914804
183 Runtime_Bad_Block -O--CK 100 100 000 - 0
192 Power-Off_Retract_Count -O--CK 200 200 000 - 23
193 Load_Cycle_Count -O--CK 200 200 000 - 12
194 Temperature_Celsius -O---K 111 105 000 - 41
196 Reallocated_Event_Count -O--CK 199 199 000 - 1
197 Current_Pending_Sector -O--CK 200 200 000 - 0
198 Offline_Uncorrectable ----CK 100 253 000 - 0
199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0
200 Multi_Zone_Error_Rate ---R-- 100 253 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning

General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 5 Comprehensive SMART error log
0x03 GPL R/O 6 Ext. Comprehensive SMART error log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 1 Current Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xa0-0xa7 GPL,SL VS 16 Device vendor specific log
0xa8-0xb1 GPL,SL VS 1 Device vendor specific log
0xb2 GPL VS 65535 Device vendor specific log
0xb2 SL VS 255 Device vendor specific log
0xb3-0xb7 GPL,SL VS 1 Device vendor specific log
0xbd GPL,SL VS 1 Device vendor specific log
0xc0 GPL,SL VS 1 Device vendor specific log
0xc1 GPL VS 24 Device vendor specific log
0xd0 GPL VS 1 Device vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 310 (device log contains only the most recent 24 errors)
CR = Command Register
FEATR = Features Register
COUNT = Count (was: Sector Count) Register
LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8
LH = LBA High (was: Cylinder High) Register ] LBA
LM = LBA Mid (was: Cylinder Low) Register ] Register
LL = LBA Low (was: Sector Number) Register ]
DV = Device (was: Device/Head) Register
DC = Device Control Register
ER = Error register
ST = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 310 [21] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cc 10 5f 40 00 Error: UNC at LBA = 0x02cc105f = 46927967

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 02 90 00 78 00 00 02 cc 2d d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 05 40 00 70 00 00 02 cc 28 90 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 02 c0 00 68 00 00 02 cc 25 d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 05 40 00 60 00 00 02 cc 20 90 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 02 c0 00 58 00 00 02 cc 1d d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED

Error 309 [20] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb d0 e0 40 00 Error: UNC at LBA = 0x02cbd0e0 = 46911712

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 80 00 00 02 cb d0 e0 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 78 00 00 02 cb d0 d8 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 70 00 00 02 cb d0 d0 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 68 00 00 02 cb d0 c8 40 08 1d+04:45:30.252 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb d0 c0 40 08 1d+04:45:30.252 READ FPDMA QUEUED

Error 308 [19] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb d0 e0 40 00 Error: UNC at LBA = 0x02cbd0e0 = 46911712

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 04 20 00 f0 00 00 02 cb cc f0 40 08 1d+04:45:29.069 READ FPDMA QUEUED
60 00 08 00 e8 00 00 02 cb ca e8 40 08 1d+04:45:29.067 READ FPDMA QUEUED
60 00 08 00 e0 00 00 02 cb ca e0 40 08 1d+04:45:29.067 READ FPDMA QUEUED
60 00 08 00 d8 00 00 02 cb ca d8 40 08 1d+04:45:29.066 READ FPDMA QUEUED
60 00 08 00 d0 00 00 02 cb ca d0 40 08 1d+04:45:29.066 READ FPDMA QUEUED

Error 307 [18] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb c6 64 40 00 Error: UNC at LBA = 0x02cbc664 = 46909028

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 d0 00 00 02 cb c6 60 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 c8 00 00 02 cb c6 58 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 c0 00 00 02 cb c6 50 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 b8 00 00 02 cb c6 48 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 b0 00 00 02 cb c6 40 40 08 1d+04:45:27.355 READ FPDMA QUEUED

Error 306 [17] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb bb e8 40 00 Error: UNC at LBA = 0x02cbbbe8 = 46906344

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 f0 00 00 02 cb bb e8 40 08 1d+04:45:23.847 READ FPDMA QUEUED
60 00 08 00 e8 00 00 02 cb bb e0 40 08 1d+04:45:23.847 READ FPDMA QUEUED
61 00 08 00 e0 00 00 02 cb bb e0 40 08 1d+04:45:23.290 WRITE FPDMA QUEUED
ef 00 10 00 02 00 00 00 00 00 00 a0 08 1d+04:45:23.268 SET FEATURES [Enable SATA feature]
27 00 00 00 00 00 00 00 00 00 00 e0 08 1d+04:45:23.268 READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

Error 305 [16] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb bb e7 40 00 Error: UNC at LBA = 0x02cbbbe7 = 46906343

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 80 00 00 02 cb bb e0 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 78 00 00 02 cb bb d8 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 70 00 00 02 cb bb d0 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 68 00 00 02 cb bb c8 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb bb c0 40 08 1d+04:45:22.138 READ FPDMA QUEUED

Error 304 [15] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb c6 64 40 00 Error: UNC at LBA = 0x02cbc664 = 46909028

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 05 40 00 68 00 00 02 cb c5 b0 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb b6 78 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 58 00 00 02 cb b6 70 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 50 00 00 02 cb b6 68 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 48 00 00 02 cb b6 60 40 08 1d+04:45:20.985 READ FPDMA QUEUED

Error 303 [14] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb b1 6b 40 00 Error: UNC at LBA = 0x02cbb16b = 46903659

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 b8 00 00 02 cb b1 68 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 b0 00 00 02 cb b1 60 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 a8 00 00 02 cb b1 58 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 a0 00 00 02 cb b1 50 40 08 1d+04:45:19.254 READ FPDMA QUEUED
60 00 08 00 98 00 00 02 cb b1 48 40 08 1d+04:45:19.254 READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version: 3
SCT Version (vendor specific): 258 (0x0102)
SCT Support Level: 1
Device State: Active (0)
Current Temperature: 41 Celsius
Power Cycle Min/Max Temperature: 40/43 Celsius
Lifetime Min/Max Temperature: 18/46 Celsius
Under/Over Temperature Limit Count: 0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/60 Celsius
Min/Max Temperature Limit: -41/85 Celsius
Temperature History Size (Index): 478 (202)

Index Estimated Time Temperature Celsius
203 2019-05-21 01:17 41 **********************
... ..(292 skipped). .. **********************
18 2019-05-21 06:10 41 **********************
19 2019-05-21 06:11 40 *********************
... ..( 23 skipped). .. *********************
43 2019-05-21 06:35 40 *********************
44 2019-05-21 06:36 41 **********************
... ..(157 skipped). .. **********************
202 2019-05-21 09:14 41 **********************

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 1 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 1 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x8000 4 67742 Vendor specific

RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Why are my replies being removed?

StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

Why are my replies being removed?


There is an automated spam filter that might be kicking in.  Mods should be looking in there periodically to release false positives - but if there is a flood of spam that isn't always practical.  You can PM me if you have this problem, and I will release your post as soon as I can.

RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Disk info:

 

root@nas1:~# get_disk_info
Device: sda
Controller: 0
Channel: 0
Model: WDC WD4000FYYZ-05UL1B0
Serial: WD-WMC130D3C11E
Firmware: 00.0NS05
Class: SATA
RPM: 7200
Sectors: 7814037168
Pool: data
PoolType: RAID 1
PoolState: 3
PoolHostId: 405641b2
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 41
Start/Stop Count: 12
Power-On Hours: 256
Power Cycle Count: 12
Load Cycle Count: 0

Device: sdb
Controller: 0
Channel: 1
Model: WDC WD4000FYYZ-05UL1B0
Serial: WD-WCC13JSVRE5D
Firmware: 00.0NS05
Class: SATA
RPM: 7200
Sectors: 7814037168
Pool: data
PoolType: RAID 1
PoolState: 3
PoolHostId: 405641b2
Health data
ATA Error Count: 0
Reallocated Sectors: 211
Reallocation Events: 1
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 41
Start/Stop Count: 36
Power-On Hours: 257
Power Cycle Count: 27
Load Cycle Count: 12

StephenB
Guru

Re: RAID 1 resync incredibly slow

The ~300 errors on sdb are concerning.   The ones in the drive's error log are all UNCs (uncorrectable read errors).  That is likely enough to explain the slow sync time.

 

It isn't clear why they aren't showing up in the SMART stats - though I have run into this before on one of my own disks.

 

Anyway, I think this disk needs to be replaced - and if you test it with Lifeguard (both the long read test and the full write-zeros test), you will find that it fails.  At least that was the outcome with my own drive.
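For what it's worth, the UNC addresses in that log are plain LBAs, and with this drive's 512-byte logical sectors they convert directly to byte offsets on disk; a small sketch using the first logged error:

```shell
# First UNC from the error log: LBA = 0x02cc105f (decimal 46927967)
lba=$((0x02cc105f))
# 512-byte logical sectors, so byte offset = LBA * 512
offset=$((lba * 512))
echo "LBA $lba -> byte offset $offset"
```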

 


@RealityDev wrote:

Drive 2:

 

Error 310 [21] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cc 10 5f 40 00 Error: UNC at LBA = 0x02cc105f = 46927967

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 02 90 00 78 00 00 02 cc 2d d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 05 40 00 70 00 00 02 cc 28 90 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 02 c0 00 68 00 00 02 cc 25 d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 05 40 00 60 00 00 02 cc 20 90 40 08 1d+04:45:31.977 READ FPDMA QUEUED
60 02 c0 00 58 00 00 02 cc 1d d0 40 08 1d+04:45:31.977 READ FPDMA QUEUED

 

 

Error 309 [20] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb d0 e0 40 00 Error: UNC at LBA = 0x02cbd0e0 = 46911712

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 80 00 00 02 cb d0 e0 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 78 00 00 02 cb d0 d8 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 70 00 00 02 cb d0 d0 40 08 1d+04:45:30.253 READ FPDMA QUEUED
60 00 08 00 68 00 00 02 cb d0 c8 40 08 1d+04:45:30.252 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb d0 c0 40 08 1d+04:45:30.252 READ FPDMA QUEUED

 

 

Error 308 [19] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb d0 e0 40 00 Error: UNC at LBA = 0x02cbd0e0 = 46911712

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 04 20 00 f0 00 00 02 cb cc f0 40 08 1d+04:45:29.069 READ FPDMA QUEUED
60 00 08 00 e8 00 00 02 cb ca e8 40 08 1d+04:45:29.067 READ FPDMA QUEUED
60 00 08 00 e0 00 00 02 cb ca e0 40 08 1d+04:45:29.067 READ FPDMA QUEUED
60 00 08 00 d8 00 00 02 cb ca d8 40 08 1d+04:45:29.066 READ FPDMA QUEUED
60 00 08 00 d0 00 00 02 cb ca d0 40 08 1d+04:45:29.066 READ FPDMA QUEUED

 

 

Error 307 [18] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb c6 64 40 00 Error: UNC at LBA = 0x02cbc664 = 46909028

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 d0 00 00 02 cb c6 60 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 c8 00 00 02 cb c6 58 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 c0 00 00 02 cb c6 50 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 b8 00 00 02 cb c6 48 40 08 1d+04:45:27.356 READ FPDMA QUEUED
60 00 08 00 b0 00 00 02 cb c6 40 40 08 1d+04:45:27.355 READ FPDMA QUEUED

 

 

Error 306 [17] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb bb e8 40 00 Error: UNC at LBA = 0x02cbbbe8 = 46906344

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 f0 00 00 02 cb bb e8 40 08 1d+04:45:23.847 READ FPDMA QUEUED
60 00 08 00 e8 00 00 02 cb bb e0 40 08 1d+04:45:23.847 READ FPDMA QUEUED
61 00 08 00 e0 00 00 02 cb bb e0 40 08 1d+04:45:23.290 WRITE FPDMA QUEUED
ef 00 10 00 02 00 00 00 00 00 00 a0 08 1d+04:45:23.268 SET FEATURES [Enable SATA feature]
27 00 00 00 00 00 00 00 00 00 00 e0 08 1d+04:45:23.268 READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

Error 305 [16] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb bb e7 40 00 Error: UNC at LBA = 0x02cbbbe7 = 46906343

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 80 00 00 02 cb bb e0 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 78 00 00 02 cb bb d8 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 70 00 00 02 cb bb d0 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 68 00 00 02 cb bb c8 40 08 1d+04:45:22.138 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb bb c0 40 08 1d+04:45:22.138 READ FPDMA QUEUED

Error 304 [15] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb c6 64 40 00 Error: UNC at LBA = 0x02cbc664 = 46909028

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 05 40 00 68 00 00 02 cb c5 b0 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 60 00 00 02 cb b6 78 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 58 00 00 02 cb b6 70 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 50 00 00 02 cb b6 68 40 08 1d+04:45:20.985 READ FPDMA QUEUED
60 00 08 00 48 00 00 02 cb b6 60 40 08 1d+04:45:20.985 READ FPDMA QUEUED

Error 303 [14] occurred at disk power-on lifetime: 179 hours (7 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 02 cb b1 6b 40 00 Error: UNC at LBA = 0x02cbb16b = 46903659

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 08 00 b8 00 00 02 cb b1 68 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 b0 00 00 02 cb b1 60 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 a8 00 00 02 cb b1 58 40 08 1d+04:45:19.255 READ FPDMA QUEUED
60 00 08 00 a0 00 00 02 cb b1 50 40 08 1d+04:45:19.254 READ FPDMA QUEUED
60 00 08 00 98 00 00 02 cb b1 48 40 08 1d+04:45:19.254 READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
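For anyone decoding logs like the one above: each UNC entry reports the failing LBA in both hex and decimal, and multiplying by the 512-byte logical sector size gives the byte offset of the unreadable sector. A minimal sketch, using the first UNC value from the log above (the `lba_info` helper is illustrative, not part of any ReadyNAS tool):

```python
# Convert a SMART UNC LBA from hex to decimal, and to a byte offset,
# assuming 512-byte logical sectors (typical for these WD drives).
def lba_info(lba_hex, sector_size=512):
    lba = int(lba_hex, 16)
    return lba, lba * sector_size

lba, offset = lba_info("0x02cbd0e0")  # first UNC entry in the log above
print(lba)     # → 46911712, matching the decimal value in the log
print(offset)  # byte offset of the unreadable sector on disk
```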

Message 12 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Okay, running the extended test now with Lifeguard. I'm assuming this would give me some kind of report I could give to WD to get a replacement for a defective drive?

Message 13 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Definitely a bad drive; the extended test stopped with the following. Funny thing is, it's a 4TB drive but the report says it's only 1801.76 GB.

Test Option: EXTENDED TEST
Model Number: WDC WD40 00FYYZ-05UL1B0
Unit Serial Number:
Firmware Number: NS05
Capacity: 1801.76 GB
SMART Status: Not Available
Test Result: FAIL
Test Error Code: 08-Too many bad sectors detected.
Test Time: 11:22:47, May 21, 2019

Message 14 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

I'm assuming this would give me some kind of report I could give to WD to get a replacement for a defective drive?


WD will give you a recertified replacement (1 year warranty).  So return it to the seller if you can - that way you'll get a new drive with the full warranty.

As far as documentation goes, just telling WD that it failed Lifeguard is enough.  They of course reserve the right to test the returned disk themselves - but neither Seagate nor WD have ever challenged one of my returns.

Message 15 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

So I replaced the bad drive and did an extended test with Lifeguard to confirm that it was good. I also ran an extended test on the other original drive just to make sure, and it passed as well. No dice; still just as slow. I then made the following setting changes as outlined in the original post I referenced:

blockdev --setra 65536 /dev/md127

for drive in sd{a..l}
do
echo 1 > /sys/block/$drive/device/queue_depth
done

Still no dice; just as slow. I guess I'm out of options at this point; I wish we could go back and get another NAS. Also, I notice that when I restart the NAS the setting changes do not stick and are reset to the original values; I'm not sure if this could have something to do with it.
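For what it's worth, the sd{a..l} loop came from a thread about a 12-bay 4312X; an RN214 only has sda through sdd, so the loop as written also trips over nonexistent devices. A guarded sketch (the `set_qd1` name and the parameterized sysfs root are illustrative, not part of the ReadyNAS firmware; on the real box you would pass /sys/block):

```shell
# A guarded version of the queue_depth loop above: only touches drives
# that actually exist. The sysfs root is a parameter so the logic can
# be exercised outside the NAS.
set_qd1() {
    root="${1:-/sys/block}"
    for drive in sda sdb sdc sdd; do
        qfile="$root/$drive/device/queue_depth"
        if [ -w "$qfile" ]; then
            echo 1 > "$qfile"
        fi
    done
}

# On the NAS itself, as root:
# set_qd1 /sys/block
```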

Message 16 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

So I replaced the bad drive, did an extended test with Lifeguard to confirm that it was good. I also did an extended test on the other original good drive just to make sure and it passed the extended test as well. No dice, still just as slow.


Do you mean that it is still saying 140 hours to sync?

Message 17 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

It initially said over 140 hours, it's currently sitting at about 127 hours.

Message 18 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow

Are you actively using the NAS?  That will slow it down a lot.

Message 19 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

I could see that, but no. We haven't started moving files to it yet, we're waiting for the RAID to sync.

Message 20 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow

Normally it will complete in about 12 hours, so I don't know why it's so slow in your case.

You can watch the process with ssh using

# watch cat /proc/mdstat

You could use this to see the current speed limits:

# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max

Raising the min speed limit could help, but I don't know whether it will apply if the sync is already underway.

# sysctl -w dev.raid.speed_limit_min=200000

If you try it, I suggest setting it back when the sync completes.
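The resync line in /proc/mdstat already reports the current speed and an ETA, which makes sanity-checking easy. A sketch that pulls those fields out of one such line (the sample line is made up for illustration, but follows mdadm's progress format):

```python
import re

# Parse the ETA and speed fields from a /proc/mdstat resync line.
# The sample below is illustrative, not taken from this NAS.
sample = ("[=>...................]  resync =  8.2% "
          "(320000000/3902187456) finish=2400.5min speed=24800K/sec")

m = re.search(r"finish=([\d.]+)min speed=(\d+)K/sec", sample)
finish_min = float(m.group(1))
speed_kps = int(m.group(2))
print(f"~{finish_min / 60:.1f} hours left at {speed_kps / 1024:.1f} MB/s")
# → ~40.0 hours left at 24.2 MB/s
```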

Message 21 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

I tried raising the min to 200000, but still no dice; it actually says about 130 hours now. I also notice the setting doesn't stick when you restart, so I'm not sure changing the settings is actually changing them.

Message 22 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow

I think it does change them, but that the system restores the original settings on reboot.

You could potentially remove a disk, change the setting, and reinsert the disk (and likely re-add it to the array).  That of course starts the process over, and might not help.

FWIW, my own NAS is expanding at the moment (and also happens to be syncing a 2x4TB RAID group).  It's slower than it should be - around 25 MB/sec.  So it appears to be applying the min speed. Still, that should take about 45 hours - not 140.
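The 45-hour estimate is straightforward arithmetic: one 4 TB member copied at roughly 25 MB/sec. As a quick check:

```python
# Rough resync ETA: member capacity divided by sustained sync speed.
capacity_bytes = 4e12   # 4 TB drive (decimal TB, as marketed)
speed_bps = 25e6        # ~25 MB/sec observed sync speed
hours = capacity_bytes / speed_bps / 3600
print(f"{hours:.1f} hours")  # → 44.4 hours
```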

Message 23 of 38
RealityDev
Aspirant

Re: RAID 1 resync incredibly slow

Are the disks hot swappable? What I've been doing is shutting the thing down before I change a disk, and then when it starts up it automatically starts syncing. If I remember right, I didn't even get presented with any options to create a RAID to begin with; it just decided to make a mirrored RAID on its own.

Message 24 of 38
StephenB
Guru

Re: RAID 1 resync incredibly slow


@RealityDev wrote:

Are the disks hot swappable? 


They are - you don't need to power down (and my normal recommendation is not to).


@RealityDev wrote:

If I remember right I didn't even get presented with any options to create a RAID to begin with, it just decided to make a mirrored RAID on its own.


Yes, that is the default behavior with XRAID.  Though if the disk is formatted, the system won't add it until you select the disk and choose "format" again on the volume tab.

With FlexRAID you explicitly add the disk to the array (and select the RAID mode).  You shift to FlexRAID by clicking on the XRAID control on the volume tab.  If a green stripe shows on the control, you are in XRAID.  If there is no stripe, you are in FlexRAID.

Message 25 of 38