Alternatively titled, the evolution of hobbies and fulfilling their needs.
This time last month (to the day) I started a blog entry about building a network-attached storage PC, working on it for a week and a half. I noted within the first few paragraphs that this would not be an inexpensive project, and now I am ready to talk about cost (what it takes), other options, and in general how hobbies (and their needs) develop over time. But let's start with an epilogue to the last entry.
The final setup of Konor, including the volume configuration, occurred Friday night (with the YouTube video recorded late in the evening), while the final addition to the FreeNAS blog happened sometime Saturday. On Sunday, everything started to fall apart, with this email:
Subject: konor.node: Critical Alerts
The volume tank (ZFS) state is DEGRADED: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a degraded state.
I ran a few checks and tests, and I was able to determine with a good degree of certainty that one of the drives (coincidentally the same one I had trouble firing up early on Friday night) was defective. The ZFS scrub discovered several hundred checksum errors, and although I could not get it to report any SMART errors, the drive was bad (or possibly the bottom drive bay was).
I was about to start the RMA process with Western Digital (I register all serial numbers with the company when I purchase drives now) when I remembered I was probably still well within the return policy with my supplier. So I issued an RMA with them, turned off Konor, pulled the drive, and shipped it back in the original packaging with a supplier-provided label. About two weeks later, the replacement drive arrived in the mail. I screwed the drive to the bay bracket, inserted it, powered on, and performed a replace and "resilver" (populating the new drive by recovering the missing pieces that would have been on the bad drive, using redundancy information on the other two good drives) through the FreeNAS interface. This process took a while, but in the end, no data was lost, and everything seems back to normal (even as I write this).
The degradation problem (and its solution) taught me that I can adapt to problems as they come up, pinpoint the exact issue (for a while I worried it was a caddy connection problem), and carry out the fix. Which is good, as there are other things that can go wrong as well.
Not up to Parity
Another issue (potential or real) that I came across early on (the Saturday before the "disaster") was that the system rebooted by itself once. A log of the issue was difficult to find; when I did find something, it pointed back to a memory (RAM) issue.
Now, I am using a third-generation i5 processor, a line that seems to be allergic to ECC (error-correcting code) RAM. Lack of ECC RAM isn't that big a deal in the desktop world, but it is a big thing for enterprise systems. For FreeNAS, it is almost a requirement, as the scheduled checking and repair of the volume for data errors like "flipped bits" (known as scrubs) relies on trusting that the RAM isn't going to lie. Something more serious may await, but for now I will keep rolling the dice every day.
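To make the scrub idea concrete, here is a toy sketch of the principle: ZFS stores a checksum alongside every block and recomputes it when the block is read back, so a single flipped bit stands out. (ZFS actually defaults to fletcher4 checksums; SHA-256 below is just a convenient stand-in, and the "block" is made-up data.)

```python
# Why scrubs can catch "flipped bits": a stored checksum no longer
# matches once even one bit of the data changes.
import hashlib

block = bytearray(b"some block of file data")
stored_checksum = hashlib.sha256(block).hexdigest()

block[3] ^= 0x01  # flip a single bit, e.g. from bad RAM or a failing drive

# On read (or during a scrub), the recomputed checksum disagrees,
# and ZFS can repair the block from a redundant copy.
mismatch = hashlib.sha256(block).hexdigest() != stored_checksum
print("checksum mismatch detected:", mismatch)
```

The catch the blog entry alludes to: if non-ECC RAM corrupts the data *before* the checksum is computed, the bad data checksums as "good" and the scrub cannot help.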
Just Another Day, and Another Project
All of this to say nothing is infallible or immune to issues, and none of this could have been prevented had I gone another way with this project. Still, this can be considered a stepping stone toward redundant storage solutions. The next step I need not worry about for a while, except to take note of what is important and how I can achieve it next time.
No, I won’t be rebuilding the server any time soon.
I did, however, have a brain fart over a suggestion made by a business equipment supply store over a month back when I was looking for an SSD bracket. Why could I not have piggy-backed a solution off my main PC, which runs 24/7 anyway? That is when I took a quick look online and discovered one of these bad boys:
No, I did not buy it.
But what is it? My FreeNAS adventure started after watching a video of a build, shortly followed by my finding and buying a caddy (the Icy Dock), an internal device. The chassis posted above is purely external, connected to the computer by USB 3.0 (there is a version that can do eSATA as well). It can handle five drives instead of three (although later on I will suggest why not to use all five for the same array), and most important of all, it does not require another computer to be on all the time.
That last bit sounds like a deal-maker to me. When I was determining whether my power supply was going to be sufficient (I eventually swapped it for good measure, then later found out I needed to anyway for the connections), I estimated using the Cooler Master website that I would need a power supply capable of at least a 175W load (since I got the host bus adapter card, this has now increased to 200W). There are at least two variables, however:
- Power supplies are not 100% efficient and
- Parts do not require full load all the time
Let’s assume the two factors balance each other out, and that the server downstairs is sucking up 200W 24/7. That would mean I am paying approximately an extra $20 a month on power. By comparison, three 5400RPM hard drives and a cooling fan use about 30W, or $3 extra a month.
All this for a part that (in the estimate phase) would have cost the project $100 more, but in the end would have saved about the same amount (I am not proud of this, but I spent around $400 on cables and parts, excluding the drives themselves). This oversight will continue to cost me about $200 a year in power.
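The math above is easy to redo for any wattage. A minimal sketch, assuming an electricity rate of about $0.14 CAD per kWh (a made-up figure chosen to match the "$20 a month for 200W" estimate; your utility's rate will differ):

```python
# Back-of-the-envelope cost of an always-on load over a 30-day month.
RATE_PER_KWH = 0.14  # assumed CAD per kWh

def monthly_cost(watts, rate=RATE_PER_KWH, hours=24 * 30):
    """Dollars to run a constant load of `watts` for a 30-day month."""
    kwh = watts * hours / 1000  # energy used, in kilowatt-hours
    return kwh * rate

server = monthly_cost(200)    # the full server at its estimated load
enclosure = monthly_cost(30)  # three 5400RPM drives plus a fan

print(f"Server:    ${server:.2f}/month, ${server * 12:.2f}/year")
print(f"Enclosure: ${enclosure:.2f}/month, ${enclosure * 12:.2f}/year")
print(f"Difference: ${(server - enclosure) * 12:.2f}/year")
```

At these numbers the server runs about $20 a month and the enclosure about $3, which works out to roughly a $200-a-year gap, matching the estimate above.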
But…this is all part of the learning process to a new hobby.
A Budding Hobby Needs a Turnkey Solution
I can think of three reasons why I should not beat myself up for "jumping the gun" and building an unnecessary server, and they apply to any new hobby with a learning process:
- You are not going to know absolutely everything from the get-go
- If you try to know exactly what you need from the get-go, you will drive yourself wild in the process and still eventually find things wrong and
- If you knew everything there is to know about a hobby and had actualized it, the hobby would be over and done with, as a hobby is naturally a never-ending process
This also applies to buying a wide range of products for a purpose. For all the smartphones I have gone through since my first purchase in 2014, I have had many requirements. Some I have given up on (RIP physical keyboard; long live the BlackBerry Priv), others I have prioritized. I realize my current phone will (morbidness aside) not be my last, and my needs will continue to evolve.
From looking at the empty PC case about six weeks ago to now (looking at another empty case) – yes, I did manage to reuse an SSD – my first NAS project is still a success. I have learned quite a bit about FreeNAS and the underlying FreeBSD. I learned a bit about the NFS protocol as well. All from a DIY project that uses an operating system I would consider a "turnkey" solution. FreeNAS is built to make setting up a NAS easy, and had I gone the cheaper route, I would have overwhelmed myself with a process that is alien to someone so new to the game. On the alternative route, I would have had lingering and sometimes critical questions:
- How do I place the drives in an array?
- What file system will I use?
- Does smartmontools detect errors on drives connected over USB?
In regard to the second question, I will keep an occasional watch on btrfs (unless ZFS officially makes it to Linux), as it sounds the most promising for my setup. Right now, though, btrfs is only meant for RAID-0 and RAID-1 scenarios (whereas I am currently using a type of RAID-5). Maybe by the time I am ready to take a crack at a solution that feels "more proper", btrfs will be ready for striped parity arrays.
At that point, I would revisit how much money I would be willing to put down on disks for a new array. I had three factors in play last time around:
- Final Storage
Three drives seemed to provide the perfect balance, with a lot of weight placed on cost (since drives are expensive). I used 3TB Western Digital Red 5400RPM drives at $140 + tax each. This left me with (just under) 6TB of redundant storage for around $500 CAD. Had I gone for four 2TB models to get the same storage, I would have paid out $560, and in terms of redundancy, it would put me at significantly greater risk of total data loss (more drives in a RAID-5 increase the chance of two failing at the same time, and RAID-5 can recover from only one disk failure), even though my early failure was a rare occurrence.
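The trade-off is easy to put in numbers. A rough sketch, assuming $140 per drive for both sizes (the 2TB price is inferred from the $560 figure above) and an illustrative, made-up per-drive failure probability of 5% per window — not a Western Digital spec:

```python
# Compare RAID-5 layouts: usable space, drive cost, and the rough
# chance of losing the array (2+ drives failing in the same window).

def raid5_usable_tb(drives, size_tb):
    """RAID-5 spends one drive's worth of space on parity."""
    return (drives - 1) * size_tb

def p_array_loss(drives, p_drive):
    """Probability that 2 or more of `drives` fail together, which
    RAID-5 cannot survive. Assumes independent failures."""
    p_none = (1 - p_drive) ** drives
    p_one = drives * p_drive * (1 - p_drive) ** (drives - 1)
    return 1 - p_none - p_one

PRICE = 140       # assumed CAD per drive, either size
P_FAIL = 0.05     # illustrative per-drive failure probability

for n, size in [(3, 3), (4, 2)]:
    print(f"{n} x {size}TB: {raid5_usable_tb(n, size)}TB usable, "
          f"${n * PRICE} in drives, "
          f"array-loss risk {p_array_loss(n, P_FAIL):.4f}")
```

Both layouts give 6TB usable, but the four-drive version costs more and roughly doubles the chance of a two-drive coincidence, which is the point made above.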
Five drives in RAID-5 and you are really asking for trouble, as well as shelling out a lot of coin. At that point you will want to use RAID-6, and apparently that is not a possibility with the enclosure I showcased above. Maybe it will be an option by the time I purchase one. Even then, for the cost, I think I will stick with just three drives.