1002030001 - Commit Failed #27399265
ReadyNAS 312, current firmware 6.7.4. For the second time in under a month, this ReadyNAS has become read-only. The first time was after a firmware upgrade, and I was also setting up access rights for ReadyNAS Remote at the time, so I thought I'd caused the issue. This time it happened after a power failure and restart of the NAS.
I've checked the logs and there's no report of disk errors or problems. However, the share is read-only when accessed from user PCs, and when I try to change any of the security settings via the Admin page I get the error: 1002030001 Commit Failed. I can't create a new share (same error) or create or rename folders in this share (no error via the Admin page; a 'disk full' error from a PC).
I have tried to run the backup manually (there's an overnight backup to USB as well as a continuous backup to ReadyVault), but the USB backup job's status remains 'In Queue'.
I have the data secure and I know I can restore to factory settings, set it up again and copy all the data back, as that's what I did last time. However, I need to know it's not going to happen again. I'm guessing there could be some underlying hardware issue (a disk error?) but there's nothing I can see in the logs.
Is there any way of running a check or test? Currently all users can access the files they need; they are saving changes to their local PCs and will copy them back once the NAS is working again. It's Friday, so I have a day or two to resolve this.
Any help or advice would be greatly appreciated.
All Replies
Re: 1002030001 - Commit Failed #27399265
You can run a disk check from the volume settings wheel. What disks are you using?
There's also a memory diagnostic that you can run from the boot menu. See pages 31-32 here: http://www.downloads.netgear.com/files/GDC/READYNAS-100/RN%20OS%206%20Desktop%20HW%20UM_15Oct2013.pd... There's a guide for understanding progress/results here: https://kb.netgear.com/25632/RN312-Memory-Test-Analysis
There's a disk check in the boot menu too, but I think you're better off using the on-line test.
I am thinking paid support (my.netgear.com) might be useful in finding the root cause.
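For reference, if SSH were enabled, the same disk health data the GUI disk test relies on can also be read directly with smartctl. This is just a sketch; it assumes smartmontools is present on the OS 6 Debian base and that the two disks appear as /dev/sda and /dev/sdb, neither of which is confirmed in this thread:

smartctl -H -A /dev/sda    # overall health verdict plus the SMART attribute table (reallocated/pending sectors etc.)
smartctl -H -A /dev/sdb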
Re: 1002030001 - Commit Failed #27399265
If the whole volume is read only, it could be that it's mounted read only. Have you tried rebooting the NAS?
It could also be that the BTRFS volume was corrupted by the power loss. Download the logs from the GUI and check dmesg.log and systemd-journal.log for BTRFS messages; btrfs.log also contains valuable info.
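A hedged example of what that check can look like once the downloaded log zip is extracted (the file names are the ones mentioned above; the grep and mount checks are generic Linux, not ReadyNAS-specific, and the /data mount point is an assumption based on the btrfs.log posted later in this thread):

grep -i btrfs dmesg.log systemd-journal.log | less    # search the downloaded logs for BTRFS warnings/errors
grep ' /data ' /proc/mounts                           # on the NAS itself (needs SSH): "ro" in the options means the data volume is mounted read-only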
Re: 1002030001 - Commit Failed #27399265
Thanks, I didn't realise there was so much more information in the logs when downloaded. The problem now is understanding them. There are messages, but I'm not sure if this is an error. For example:
Jun 23 08:55:33 KNAPMAN-NAS kernel: ------------[ cut here ]------------
Jun 23 08:55:33 KNAPMAN-NAS kernel: WARNING: CPU: 2 PID: 4147 at fs/btrfs/qgroup.c:2967 btrfs_qgroup_free_meta+0x92/0xa0()
Jun 23 08:55:33 KNAPMAN-NAS kernel: Modules linked in: ufsd(PO) jnl(O) vpd(PO)
Jun 23 08:55:33 KNAPMAN-NAS kernel: CPU: 2 PID: 4147 Comm: smbd Tainted: P W O 4.4.68.x86_64.1 #1
Jun 23 08:55:33 KNAPMAN-NAS kernel: Hardware name: NETGEAR ReadyNAS 312/ReadyNAS 312 , BIOS 4.6.5 01/07/2013
Jun 23 08:55:33 KNAPMAN-NAS kernel: 0000000000000000 ffff880077f9bd50 ffffffff883d3e98 0000000000000000
Jun 23 08:55:33 KNAPMAN-NAS kernel: ffffffff88cf2bad ffff880077f9bd88 ffffffff8806b27c ffff88007827e000
Jun 23 08:55:33 KNAPMAN-NAS kernel: 0000000000000050 ffff88007827e000 ffffffffffffffe4 ffff8800390250e8
Jun 23 08:55:33 KNAPMAN-NAS kernel: Call Trace:
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff883d3e98>] dump_stack+0x4d/0x65
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff8806b27c>] warn_slowpath_common+0x7c/0xb0
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff8806b365>] warn_slowpath_null+0x15/0x20
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff88359fd2>] btrfs_qgroup_free_meta+0x92/0xa0
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff882eef92>] start_transaction+0x3f2/0x440
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff881444d4>] ? generic_permission+0x164/0x190
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff882ef446>] btrfs_start_transaction_fallback_global_rsv+0x26/0xc0
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff882f8890>] btrfs_unlink+0x30/0xb0
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff88144834>] vfs_unlink+0x104/0x180
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff881493ed>] do_unlinkat+0x23d/0x290
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff88149b01>] SyS_unlink+0x11/0x20
Jun 23 08:55:33 KNAPMAN-NAS kernel: [<ffffffff88a50617>] entry_SYSCALL_64_fastpath+0x12/0x6a
Jun 23 08:55:33 KNAPMAN-NAS kernel: ---[ end trace cea84c9661a966aa ]---
That says smbd Tainted, which I guess means an issue with the share access. Is that right?
Re: 1002030001 - Commit Failed #27399265
Thanks - I didn't realise that option was there. I've set a scrub running and am still waiting for results (30 mins in and <8% complete, so I'll need to check back later).
I can't check the disks physically as I'm not on site, but the model number reported is ST2000VN0001-1SF174 if that helps.
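As an aside (this assumes SSH access, which the thread itself doesn't rely on), scrub progress can also be queried from the command line rather than waiting on the GUI; /data is the data volume mount point shown in the btrfs.log later in this thread:

btrfs scrub status /data    # shows running/finished state and any checksum errors found so far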
Re: 1002030001 - Commit Failed #27399265
Can you post the top of btrfs.log please?
Re: 1002030001 - Commit Failed #27399265
Same issue here (same error, Commit Failed Code: 1002030001, after upgrading to 6.7.4). Only I can't even get the logs (clicking the button opens a new page, which then times out after several minutes). I have another RN104 unit that is not upgraded and runs OK (logs are collected normally). I can't even turn on SSH on the faulty unit after the upgrade, so I can't access it via SSH. A reset does not help either. Please help.
Re: 1002030001 - Commit Failed #27399265
As requested (sorry for the delay)
Label: '5e27460a:root' uuid: a29577df-3106-405e-a5bc-5541569e4a6d
  Total devices 1 FS bytes used 373.25MiB
  devid 1 size 4.00GiB used 1.24GiB path /dev/md0
Label: '5e27460a:data' uuid: 3efd78ce-168b-44fb-bb95-19dc29dc7836
  Total devices 1 FS bytes used 48.17GiB
  devid 1 size 1.81TiB used 51.02GiB path /dev/md127

=== filesystem /data ===
Data, single: total=48.01GiB, used=47.36GiB
System, DUP: total=8.00MiB, used=32.00KiB
Metadata, DUP: total=1.50GiB, used=825.97MiB
GlobalReserve, single: total=42.66MiB, used=0.00B

=== subvolume /data ===
ID 257 gen 13891 top level 5 path home
ID 258 gen 18428 top level 5 path .apps
ID 259 gen 17852 top level 5 path .vault
ID 260 gen 17354 top level 5 path ._share
ID 265 gen 18428 top level 5 path .timemachine
ID 270 gen 18379 top level 5 path SHARE
ID 271 gen 18428 top level 270 path SHARE/.snapshots
ID 272 gen 48 top level 257 path home/scanner
ID 273 gen 62 top level 5 path .purge
ID 320 gen 899 top level 257 path home/margaret
ID 326 gen 1041 top level 257 path home/christine
ID 331 gen 1266 top level 257 path home/rebecca@knapmansolicitor.co.uk
ID 347 gen 2976 top level 271 path SHARE/.snapshots/24/snapshot
ID 372 gen 5901 top level 271 path SHARE/.snapshots/48/snapshot
ID 402 gen 7327 top level 271 path SHARE/.snapshots/72/snapshot
ID 412 gen 7465 top level 257 path home/lynne
ID 414 gen 7523 top level 257 path home/jackie
ID 415 gen 7535 top level 257 path home/ray
ID 430 gen 8022 top level 271 path SHARE/.snapshots/96/snapshot
ID 444 gen 8469 top level 257 path home/eve
ID 456 gen 8774 top level 271 path SHARE/.snapshots/120/snapshot
ID 481 gen 9406 top level 271 path SHARE/.snapshots/144/snapshot
ID 506 gen 10077 top level 271 path SHARE/.snapshots/168/snapshot
ID 531 gen 10773 top level 271 path SHARE/.snapshots/192/snapshot
ID 556 gen 10920 top level 271 path SHARE/.snapshots/216/snapshot
ID 581 gen 10996 top level 271 path SHARE/.snapshots/240/snapshot
ID 608 gen 11590 top level 271 path SHARE/.snapshots/264/snapshot
ID 633 gen 12357 top level 271 path SHARE/.snapshots/288/snapshot
ID 658 gen 12953 top level 271 path SHARE/.snapshots/312/snapshot
ID 683 gen 13550 top level 271 path SHARE/.snapshots/336/snapshot
ID 697 gen 13892 top level 257 path home/rebecca
ID 709 gen 14145 top level 271 path SHARE/.snapshots/360/snapshot
ID 734 gen 14419 top level 271 path SHARE/.snapshots/384/snapshot
ID 759 gen 14517 top level 271 path SHARE/.snapshots/408/snapshot
ID 784 gen 15113 top level 271 path SHARE/.snapshots/432/snapshot
ID 809 gen 15899 top level 271 path SHARE/.snapshots/456/snapshot
ID 819 gen 15956 top level 271 path SHARE/.snapshots/465/snapshot
ID 820 gen 16033 top level 271 path SHARE/.snapshots/466/snapshot
ID 821 gen 16113 top level 271 path SHARE/.snapshots/467/snapshot
ID 822 gen 16189 top level 271 path SHARE/.snapshots/468/snapshot
ID 823 gen 16241 top level 271 path SHARE/.snapshots/469/snapshot
ID 824 gen 16307 top level 271 path SHARE/.snapshots/470/snapshot
ID 825 gen 16384 top level 271 path SHARE/.snapshots/471/snapshot
ID 826 gen 16444 top level 271 path SHARE/.snapshots/472/snapshot
ID 827 gen 16468 top level 271 path SHARE/.snapshots/473/snapshot
ID 828 gen 16471 top level 271 path SHARE/.snapshots/474/snapshot
ID 829 gen 16487 top level 271 path SHARE/.snapshots/475/snapshot
ID 830 gen 16502 top level 271 path SHARE/.snapshots/476/snapshot
ID 831 gen 16514 top level 271 path SHARE/.snapshots/477/snapshot
ID 832 gen 16532 top level 271 path SHARE/.snapshots/478/snapshot
ID 833 gen 16535 top level 271 path SHARE/.snapshots/479/snapshot
ID 834 gen 16538 top level 271 path SHARE/.snapshots/480/snapshot
ID 836 gen 16546 top level 271 path SHARE/.snapshots/481/snapshot
ID 837 gen 16549 top level 271 path SHARE/.snapshots/482/snapshot
ID 838 gen 16552 top level 271 path SHARE/.snapshots/483/snapshot
ID 839 gen 16556 top level 271 path SHARE/.snapshots/484/snapshot
ID 840 gen 16559 top level 271 path SHARE/.snapshots/485/snapshot
ID 841 gen 16572 top level 271 path SHARE/.snapshots/486/snapshot
ID 842 gen 16623 top level 271 path SHARE/.snapshots/487/snapshot
ID 843 gen 16661 top level 271 path SHARE/.snapshots/488/snapshot
ID 844 gen 16709 top level 271 path SHARE/.snapshots/489/snapshot
ID 845 gen 16787 top level 271 path SHARE/.snapshots/490/snapshot
ID 846 gen 16857 top level 271 path SHARE/.snapshots/491/snapshot
ID 847 gen 16940 top level 271 path SHARE/.snapshots/492/snapshot
ID 848 gen 16976 top level 271 path SHARE/.snapshots/493/snapshot
ID 849 gen 17018 top level 271 path SHARE/.snapshots/494/snapshot
ID 850 gen 17104 top level 271 path SHARE/.snapshots/495/snapshot
ID 851 gen 17158 top level 271 path SHARE/.snapshots/496/snapshot
ID 852 gen 17186 top level 271 path SHARE/.snapshots/497/snapshot
ID 853 gen 17194 top level 271 path SHARE/.snapshots/498/snapshot
ID 854 gen 17201 top level 271 path SHARE/.snapshots/499/snapshot
ID 855 gen 17224 top level 271 path SHARE/.snapshots/500/snapshot
ID 856 gen 17230 top level 271 path SHARE/.snapshots/501/snapshot
ID 857 gen 17250 top level 271 path SHARE/.snapshots/502/snapshot
ID 858 gen 17255 top level 271 path SHARE/.snapshots/503/snapshot
ID 859 gen 17258 top level 271 path SHARE/.snapshots/504/snapshot
ID 861 gen 17266 top level 271 path SHARE/.snapshots/505/snapshot
ID 862 gen 17269 top level 271 path SHARE/.snapshots/506/snapshot
ID 863 gen 17272 top level 271 path SHARE/.snapshots/507/snapshot
ID 864 gen 17276 top level 271 path SHARE/.snapshots/508/snapshot
ID 865 gen 17301 top level 271 path SHARE/.snapshots/509/snapshot
ID 866 gen 17323 top level 271 path SHARE/.snapshots/510/snapshot
ID 867 gen 17353 top level 271 path SHARE/.snapshots/511/snapshot

=== btrfs dump-super /dev/md0 /dev/md127
superblock: bytenr=65536, device=/dev/md0
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0xe6a3050c [match]
bytenr 65536
flags 0x1 ( WRITTEN )
magic _BHRfS_M [match]
fsid a29577df-3106-405e-a5bc-5541569e4a6d
label 5e27460a:root
generation 11962
root 72564736
sys_array_size 129
chunk_root_generation 11816
root_level 0
chunk_root 20987904
chunk_root_level 0
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 4290772992
bytes_used 391380992
sectorsize 4096
nodesize 16384
leafsize 16384
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x161 ( MIXED_BACKREF | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA )
cache_generation 18446744073709551615
uuid_tree_generation 11962
dev_item.uuid 36fbd781-4dc6-439c-aee2-9bca64d8983b
dev_item.fsid a29577df-3106-405e-a5bc-5541569e4a6d [match]
dev_item.type 0
dev_item.total_bytes 4290772992
dev_item.bytes_used 1326579712
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
  item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
    length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
    io_align 65536 io_width 65536 sector_size 4096
    num_stripes 2 sub_stripes 0
    stripe 0 devid 1 offset 20971520
    dev_uuid 36fbd781-4dc6-439c-aee2-9bca64d8983b
    stripe 1 devid 1 offset 29360128
    dev_uuid 36fbd781-4dc6-439c-aee2-9bca64d8983b
backup_roots[4]:
  backup 0:
    backup_tree_root: 72564736 gen: 11962 level: 0
    backup_chunk_root: 20987904 gen: 11816 level: 0
    backup_extent_root: 72531968 gen: 11962 level: 1
    backup_fs_root: 72466432 gen: 11962 level: 2
    backup_dev_root: 39845888 gen: 11816 level: 0
    backup_csum_root: 71794688 gen: 11958 level: 1
    backup_total_bytes: 4290772992
    backup_bytes_used: 391380992
    backup_num_devices: 1
  backup 1:
    backup_tree_root: 72024064 gen: 11959 level: 0
    backup_chunk_root: 20987904 gen: 11816 level: 0
    backup_extent_root: 71991296 gen: 11959 level: 1
    backup_fs_root: 71925760 gen: 11959 level: 2
    backup_dev_root: 39845888 gen: 11816 level: 0
    backup_csum_root: 71794688 gen: 11958 level: 1
    backup_total_bytes: 4290772992
    backup_bytes_used: 391380992
    backup_num_devices: 1
  backup 2:
    backup_tree_root: 72187904 gen: 11960 level: 0
    backup_chunk_root: 20987904 gen: 11816 level: 0
    backup_extent_root: 72138752 gen: 11960 level: 1
    backup_fs_root: 72220672 gen: 11961 level: 2
    backup_dev_root: 39845888 gen: 11816 level: 0
    backup_csum_root: 71794688 gen: 11958 level: 1
    backup_total_bytes: 4290772992
    backup_bytes_used: 391380992
    backup_num_devices: 1
  backup 3:
    backup_tree_root: 72400896 gen: 11961 level: 0
    backup_chunk_root: 20987904 gen: 11816 level: 0
    backup_extent_root: 72368128 gen: 11961 level: 1
    backup_fs_root: 72466432 gen: 11962 level: 2
    backup_dev_root: 39845888 gen: 11816 level: 0
    backup_csum_root: 71794688 gen: 11958 level: 1
    backup_total_bytes: 4290772992
    backup_bytes_used: 391380992
    backup_num_devices: 1

superblock: bytenr=65536, device=/dev/md127
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0x8625c149 [match]
bytenr 65536
flags 0x1 ( WRITTEN )
magic _BHRfS_M [match]
fsid 3efd78ce-168b-44fb-bb95-19dc29dc7836
label 5e27460a:data
generation 18461
root 362577920
sys_array_size 129
chunk_root_generation 17380
root_level 1
chunk_root 21004288
chunk_root_level 0
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 1995432787968
bytes_used 51719938048
sectorsize 4096
nodesize 32768
leafsize 32768
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x161 ( MIXED_BACKREF | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA )
cache_generation 18446744073709551615
uuid_tree_generation 18461
dev_item.uuid afa517f2-42c0-4a7c-9782-07687fd89b59
dev_item.fsid 3efd78ce-168b-44fb-bb95-19dc29dc7836 [match]
dev_item.type 0
dev_item.total_bytes 1995432787968
dev_item.bytes_used 54785998848
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
  item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
    length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
    io_align 65536 io_width 65536 sector_size 4096
    num_stripes 2 sub_stripes 0
    stripe 0 devid 1 offset 20971520
    dev_uuid afa517f2-42c0-4a7c-9782-07687fd89b59
    stripe 1 devid 1 offset 29360128
    dev_uuid afa517f2-42c0-4a7c-9782-07687fd89b59
backup_roots[4]:
  backup 0:
    backup_tree_root: 362577920 gen: 18459 level: 1
    backup_chunk_root: 21004288 gen: 17380 level: 0
    backup_extent_root: 363003904 gen: 18459 level: 2
    backup_fs_root: 41877504 gen: 62 level: 0
    backup_dev_root: 152240128 gen: 17380 level: 0
    backup_csum_root: 224034816 gen: 17891 level: 0
    backup_total_bytes: 1995432787968
    backup_bytes_used: 51719938048
    backup_num_devices: 1
  backup 1:
    backup_tree_root: 362545152 gen: 18460 level: 1
    backup_chunk_root: 21004288 gen: 17380 level: 0
    backup_extent_root: 364806144 gen: 18460 level: 2
    backup_fs_root: 41877504 gen: 62 level: 0
    backup_dev_root: 152240128 gen: 17380 level: 0
    backup_csum_root: 224034816 gen: 17891 level: 0
    backup_total_bytes: 1995432787968
    backup_bytes_used: 51719938048
    backup_num_devices: 1
  backup 2:
    backup_tree_root: 362577920 gen: 18461 level: 1
    backup_chunk_root: 21004288 gen: 17380 level: 0
    backup_extent_root: 363003904 gen: 18461 level: 2
    backup_fs_root: 41877504 gen: 62 level: 0
    backup_dev_root: 152240128 gen: 17380 level: 0
    backup_csum_root: 224034816 gen: 17891 level: 0
    backup_total_bytes: 1995432787968
    backup_bytes_used: 51719938048
    backup_num_devices: 1
  backup 3:
    backup_tree_root: 362545152 gen: 18458 level: 1
    backup_chunk_root: 21004288 gen: 17380 level: 0
    backup_extent_root: 364806144 gen: 18458 level: 2
    backup_fs_root: 41877504 gen: 62 level: 0
    backup_dev_root: 152240128 gen: 17380 level: 0
    backup_csum_root: 224034816 gen: 17891 level: 0
    backup_total_bytes: 1995432787968
    backup_bytes_used: 51719938048
    backup_num_devices: 1
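For readers trying to make sense of the dump above: judging by the section headers, btrfs.log appears to aggregate the output of standard btrfs commands along these lines (an inference from the log layout, not something stated in the thread):

btrfs filesystem show                                   # the Label/uuid/devid summary
btrfs filesystem df /data                               # the Data/System/Metadata/GlobalReserve usage
btrfs subvolume list /data                              # the long list of subvolumes and snapshots
btrfs inspect-internal dump-super /dev/md0 /dev/md127   # the two superblock dumps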
Re: 1002030001 - Commit Failed #27399265
@Jon_Roberts wrote:
Label: '5e27460a:data' uuid: 3efd78ce-168b-44fb-bb95-19dc29dc7836 Total devices 1 FS bytes used 48.17GiB devid 1 size 1.81TiB used 51.02GiB path /dev/md127
Metadata, DUP: total=1.50GiB, used=825.97MiB
While the metadata allocation isn't very high in absolute terms (I've seen volumes with 500GiB of metadata and more), the metadata-to-data ratio is surprisingly high: roughly 826MiB of metadata against 47.36GiB of data, around 1.7%. In comparison, on my 1.81TiB backup volume containing 1.20TiB of data, the metadata allocation is 368.75MiB (about 0.03%).
Do you have bit rot protection enabled? Do you really need all of these snapshots? Is the data heavily fragmented? Is there an enormous number of small files? What's the write pattern?
At 50GB, I think it would be just as easy to do a Factory Default, reconfigure, and reimport the data.
Re: 1002030001 - Commit Failed #27399265
Thanks for your help looking into this. I have already done as you suggested: rebuilt the NAS and restored the data, as it needed to be up and working again by the start of this week. As it was the second time this had happened in recent weeks, I wanted to be sure there was no underlying hardware fault.
Re: 1002030001 - Commit Failed #27399265
Hi Jon.
Can you please share how you recovered the system? Thanks in advance.
Hi Eddie,
The long way, I'm afraid. I had a USB backup from the night before. The files on the NAS were read-only but still accessible over the network, so I ran robocopy on a connected PC and secured any files that had changed since the last backup by copying them to the local PC (I just copied anything that had changed in the last two days, to be safe).
I made sure I could read the data from the USB drive by connecting it to a PC and copying all the files over. Fortunately, in my case, there was only around 50GB of data, so it was manageable. This step wasn't really needed, but I was being ultra-cautious.
Once I was 100% certain I had all the data and could recover it, I performed a factory reset on the NAS and set it up again. Once it was working, I reconnected the USB drive and used the admin interface to copy the contents of the USB drive back to the share. Once that was done, I copied the 'recently changed' files back to the NAS.
Nothing technical or clever, so not really a practical solution to this problem, I'm afraid.
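For anyone repeating that rescue step, here is a sketch of the kind of robocopy run described above. The NAS name and share name are taken from elsewhere in this thread (KNAPMAN-NAS appears in the kernel log, SHARE in the subvolume list); the destination folder, log file path and the two-day window are illustrative, not the poster's exact command:

robocopy \\KNAPMAN-NAS\SHARE D:\nas-rescue /E /MAXAGE:2 /R:1 /W:1 /LOG:C:\nas-rescue.log
:: /E copies subfolders (including empty ones), /MAXAGE:2 limits the copy to files
:: changed in the last two days, /R:1 /W:1 keeps retries short, /LOG records what was copied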
Re: 1002030001 - Commit Failed #27399265
Hi Jon,
Thank you for the answer. I was afraid it would require evacuating all the data and starting clean. In my case we are talking terabytes of data, so it will not be so easy.
And one more thing. I have a message for the person who marked your last reply as the "solution" to this issue. I'm a technical support engineer myself, and if I was "RESOLVING" my cases like this, I would be out of a job in no time. So, dear Netgear community, no, this issue is not resolved. Upgrading to firmware 6.7.4 can evidently make the device unusable and cause business interruption. While I understand the lack of motivation for further troubleshooting, I cannot help but take this "resolution" as an insult.
Re: 1002030001 - Commit Failed #27399265
Hi Eddie, I agree. I don't see that as a solution to the error, just a pretty time-consuming workaround. I also suspect it was the firmware, so I'm not going to upgrade any of the other drives I support until the next release, just in case.