/hsg/ - home server
- consumer grade hardware servers
- no rack mounts
- atx based
- show me your best non-rack server cases (pic related)
- this relates to NASes too
- linux stuff
Need you guys' advice.
I want to into home servers, but still stay as cheap as possible. I'm good at gaymen builds, but I've never built a server, so I have no idea what to look for. What parts are the most important? What would you say are the minimum acceptable specs?
That is beautiful.
This is my server.
Does pretty well for me.
Start with an SBC (a *Pi for example) or a NUC/Brix. Don't drop $500+ until you know what you are doing and what you want.
I just installed a Ubiquiti Edge Router X a few days ago. The following has popped up in my logs twice now. Should I be worried?
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: WARNING: Host declarations are global. They are not limited to the scope you declared them in.
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: lease 10.0.1.[old netbook I use for testing]: no subnet.
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd:
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: No subnet declaration for eth0 ([My-public-IP]).
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: ** Ignoring requests on eth0. If this is not what
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: you want, please write a subnet declaration
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: in your dhcpd.conf file for the network segment
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd: to which interface eth0 is attached. **
<27>1 2016-01-16T07:37:35-05:00 Edge-Router-X dhcpd - - - dhcpd:
I think it's fine / cosmetic. I'm assuming eth0 is your outside - and under the hood dhcpd is running on all interfaces. You don't want to serve DHCP clients on eth0 - as that's your ISP. Hence no subnet declaration / config for it.
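If you want the warning gone rather than just ignored, declare the LAN segment explicitly. A minimal dhcpd.conf sketch, with the subnet and range assumed from the 10.0.1.x lease in your log:

# assumed LAN addressing; adjust to your actual setup
subnet 10.0.1.0 netmask 255.255.255.0 {
    range 10.0.1.100 10.0.1.200;
    option routers 10.0.1.1;
}

No declaration for eth0's segment means dhcpd keeps ignoring it, which is what you want. On EdgeOS you'd normally set this through the config tree rather than editing dhcpd.conf by hand.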
>tfw you spec out a server-grade build yourself and it's about double the price you'd pay for an entry-level server box, and you don't even know if everything will work together properly
Also does anyone know if the HP Microserver is due for a refresh anytime soon?
>Also does anyone know if the HP Microserver is due for a refresh anytime soon?
Same question got asked last thread. IMHO soon, it looks like HP is trying to sell out their stock of Gen 8's first though.
I post fairly often in these but never posted guts. This is before I ran the extra SATA for my drives, but I can assure you that the wiring is still this messy. Will fix some day maybe.
Little passively cooled thing.
It runs on like 10 or 15W with that pico psu
Hoping to upgrade some of these to 2TB drives at some point.
Better capacity, although I'll be getting Ultrastars so I guess that's an upgrade in brand. Right now it's 4 random WDs and 2 Deskstars.
The eSATA chipset has 2 ports but the mobo only exposes one eSATA port on the back, so it gives you another one.
>disk in raid 5 array fails
>rebuild also fails
>two 'healthy' disks with rising bad sectors
Guess it's time to put together a new system. From the looks of it, ZFS is the more popular choice now anyway.
It was just a old desktop Ubuntu machine I crammed with disks and set up as a server.
This time I think I'll build with running it as a NAS in mind. Something much more power-efficient, probably running FreeNAS. ZFS does seem a bit overkill, but I can't find any other particularly convenient ways to ensure data integrity.
Your only other options are snapraid, btrfs, and maybe some automated par2 run. In any event, if you're going for the "data integrity" meme, at least use ECC memory and reliable HDDs.
Also has a 6x 3TB RAID 5 and a 2x 2TB RAID 1; the controller has been passed through to a VM.
4xWD RED 4TB in zfs pool
2xSeagate 4TB for offline backup
Are there any real downsides to just using your main desktop as your home server? I'm currently using an old thinkpad, but I'm not sure of the benefits since I just leave my desktop on all the time anyways.
Apache, SSH, and FTP server. Uptime is 1m because I don't use it; I don't know what use it could have. It's my test server to learn how to sysadmin.
freenas is expensive as fuck, and if you try to get the same features on a regular debian install you have to spend a good amount of time learning, since it will require a lot of unorthodox tweaking.
currently changing my server os from debian to alpine linux.
>p. good package manager
is dis gud? Any of you happen to have one?
Yeah it's expensive, but I don't mind that if it can also give me plex, owncloud, and some VMs if they're needed. I have a board that supports ECC, 16GB of ECC, and a E3-1231 just waiting to get ordered. The question now is choosing whether to go with 6 4TB WD Reds on RAIDz2 or going with the retard route of 4 8TB Seagate drives on RAIDz.
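For what it's worth, the rough numbers on those two layouts (usable space before filesystem overhead; device names assumed):

# 6x4TB raidz2: ~16TB usable, any two drives can die
$ zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# 4x8TB raidz: ~24TB usable, only one drive can die, and resilvering an 8TB disk is a long vulnerability window

FreeNAS would do this through the GUI, but the pool underneath is the same.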
I think I'll just install Debian on the desktop and learn as I go.
Prebuilts are massively overpriced, and the way they handle disks makes it very hard if not impossible to recover. For example, in a normal server, if your HBA or RAID controller dies you can swap it out and hope it still just werks.
If you're just looking for 4 bays get a microserver. That at least gives you a full computer and shit like iLO.
Sounds like FreeNAS is the better bet with those requirements and that hardware. I have it and for the most part it just werks.
Also, I see Toki up there. Better be the Kancolle variety.
reposting for the 9483092870432th time.
>1TB+500GB+16GB SSD drives
>Dual core AMD G-t56n CPU after undervolting
>8GB DDR3 RAM
>WiFi module acting as tor AP
>completely noiseless, fan is disabled ATM
>15W of power consumption with two VMs at 50% load

$ sensors; uptime; free -h; grep "model name" /proc/cpuinfo
Adapter: PCI adapter
temp1: +67.6°C (high = +70.0°C)
(crit = +100.0°C, hyst = +97.0°C)
22:01:18 up 33 days, 1:08, 1 user, load average: 0.34, 0.32, 0.46
total used free shared buffers cached
Mem: 7.8G 7.6G 158M 36M 109M 1.9G
-/+ buffers/cache: 5.6G 2.2G
Swap: 7.4G 59M 7.4G
model name : AMD G-T56N Processor
model name : AMD G-T56N Processor
Also got a Dell Optiplex 760 USFF with dual core E7400 CPU and 4GB RAM/2TB HDD acting as a test server. Oh, and a Dell Optiplex 360 MT acting as an LTO3 archiver (meant to read and write LTO3 tapes).
Just because something is rated for some temperature doesn't mean it's optimal.
Surely, you want your drive to last more than 1 year, especially if it has important data on it?
This was my server for 2 years
>Celeron 1007u 1.5ghz dual core
>OCZ 60GB SSD
>2x4tb WD Red in external enclosure
Still just werks.
I literally have not a single fuckin clue about home servers.
Do you just store your data on an Apache Server and open a port so that it can be accessed through the Internet?
Why is no one using a GUI on a server? Do you just use shell scripts to manage your server? Don't some providers block home servers?
What's your point?
I'm saying a drive that's only 1 year old says nothing about its reliability under high temperature, because 1 year is nothing in terms of drive lifetime.
The top one is an 850 Evo, the 49k and 54k hour ones are HGST, and the rest are WD.
HGSTs are built to last.
hey guys- does anyone here know how to rig up something like a personal cloud storage system? What I mean is like this- a server with tons of disk space and some kind of dropbox-like sync program that lets you store all your files on all your devices 'in the cloud' except rather than being on dropbox or apple's servers, it would be on your own personal home server.
does anyone know how I could learn to do this?
Mah dell t20
Dual core with 4GB of RAM, mobo, and power supply, $140 shipped.
Prebuilt prosumer and home NASes are reasonably competitive in terms of pricing (if your alternative is building your own), though practically none of them offer ECC unless you shell out double or more over the already significant price.
I'd second the microserver recommendation. It's just about the only prebuilt with ECC and easily accessible drive bays which is important for consistency. Plus it's miles cheaper than all the prosumer nas's.
I got mine for £170, plus £100 for a xeon 1260v2, £100ish for 16gb ram and a 120gb ssd for like £45.
Thinking of getting a second one, leaving it standard and use it as a backup for the main one.
I was looking to get a small NAS just for storage. The T20 is actually cheaper than buying a WD or QNAP NAS system, and offers double the hard drive slots, more RAM, and a faster processor.
I think I might go for that.
For everyone interested in OP's picture, I found the origin: http://www.stephenyeong.idv.hk/wp/2011/10/home-server-oct-2011/
Unfortunately there's no info on the chassis or hard drive cages or fan controller.
I'm installing CentOS 7 on a desktop (Dell Vostro 230).
My main goals are to:
>Secure the system
>Enable SSH from outside of the network (so I can access from anywhere)
>Enable transmission daemon (so I can remotely download torrents)
>Enable FTP (so I can transfer downloaded content to laptop)
How can I achieve these goals?
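A rough starting point on CentOS 7, assuming default ports and EPEL for transmission:

$ sudo yum install epel-release
$ sudo yum install transmission-daemon vsftpd
$ sudo systemctl enable sshd transmission-daemon vsftpd
$ sudo systemctl start transmission-daemon vsftpd
$ sudo firewall-cmd --permanent --add-service=ssh --add-service=ftp
$ sudo firewall-cmd --permanent --add-port=9091/tcp    # transmission web UI
$ sudo firewall-cmd --reload

You'll also need to loosen rpc-whitelist in transmission's settings.json before it accepts remote connections, and harden sshd (key-only login, no root) before you forward anything.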
it's been pretty good to me so far. honestly, i think alpine is going to be the next big thing on /g/. small, no systemd, growing number of packages. just about everything any autist would want.
not actively using this at the moment.
Just checked out that guy's blog and stumbled across this rather cool-looking NAS case.
A pity I can't find any reseller in the UK though. Probably costs a fortune in shipping.
4x 2tb internal
4x 4tb external
iLO 4 Advanced license, Smart Array Advanced (P421). Runs Windows. Deal with it.
I have a 2ghz c2d 2gb ram machine running Windows server. I recently ordered a pci USB3 card so I can connect my 2x3tb hard-drives.
It currently only does torrents, ts3 and web server. It'll be a nas once I get the card.
>not just storing the drivers on your HP Microserver.
ownCloud is what you're looking for, but it's woefully slow for me.
Best thing would be a Windows share. You can access it by connecting to your home network with a VPN (any good router should have this).
FTP works great with Android clients, but on Windows there doesn't seem to be a way to mount FTP drives that works well.
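If you go the share route, mounting it from a Linux client once the VPN is up is a one-liner (needs cifs-utils; server and share names here are made up):

$ sudo mount -t cifs //server/share /mnt/share -o username=me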
Was going to pull the trigger on a home file server, a few questions I have though.
looking at this cpu/mobo combo
A) any better boards for the price?
B) is it gonna be powerful enough for a plex and samba server? (not gonna be doing much transcoding)
Already have a Node 304 w/ a bunch of WD Greens, power supply, and adaptec raid card.
my all in one (two) solution for server shit. these bad boys run all my shit in VMs.
They're both full ATX boards with consumer ATX PSUs, in 4U cases in a custom half rack.
This is fucking awesome
I can remotely update my CentOS server from my phone
Meh. I'm running a 2-bay one right now until I buy/build myself something better. It's alright as a stopgap measure, but I wouldn't bother investing in a 4-bay model.
What do you need two switches stacked on top of it for?
For the lulz:
E5-1650v2 @ 3.5g x 6 core
1 x PCI NVMe Intel 750 SSD 1.2 TB for VMs
2 x 120 GB SSD RAID0 for Boot
1 x Crucial M550 1 TB for VMs, Vidya
2 x Seagate 3TBs striped for daily backups
4 x Seagate 2TBs for a 4 TB Mirror for archival storage
NTFS on Boot, ReFS on everything else so that when vidya crashes the workstation, my filesystems are still consistent.
Laptop on top of the CCTV host with external hdds connected
Xeon e5 2680
128GB ECC RDIMMS
2TB local storage
Some shitty cooler master case
5x2TB WD reds raidz1
1x256GB SSD, iSCSI for the ESX host
4x128GB mirrored SSDs, NFS for the ESX host
32GB ECC RAM
In a lianli q25b
FreeNAS on CentOS is literally:

$ sudo yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
$ sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
$ sudo yum install kernel-devel zfs
$ sudo zpool create -f muhpool sda sdb sdc sdd sde sdf    # note: this makes a plain stripe; put raidz or raidz2 before the disk list for redundancy
It's easy as shit
>Do you just store your data on an Apache Server and open a port so that it can be accessed through the Internet?
Apache is just a webserver. I store mine to access some data locally through simple folder mounting, and some files are stored for remote access using ownCloud.
>Why is no one using a GUI interface on a server?
Using ssh is usually easier and faster. It allows you to have your server connected to only a power supply and an ethernet. No need for extra peripherals at hand like a screen, keyboard, mouse. Connecting to an SSH session is also faster and more reliable than VNC or Teamviewer.
There's also the element of having as little bloat as possible on your server. Why make the server run a GUI when you're not using it 99% of the time?
>Dont some providers block home Servers?
Some providers may block certain ports, but you should always be able to host a simple webserver through port 80 or 443. If not, your provider is horrible and you should switch.
Here are mine.
Using one of these with my old comp's parts running it. No pic of my own since my phone's ROM ruined camera focus support, but the performance and battery life are worth it.
Wide case, but it allows me to cool the whole thing with a single 200mm fan, leaving it nice and quiet. First near-silent build I've done, but gotta say it's nice. Probably wouldn't have paid more than $30 for it though.
On a daily basis, not a lot, but most run services.
Main: Compute, seedbox, and databases.
nginx: what it says on the tin, runs about 7 websites.
pineapple: Student society (developer society) server, hosted by uni
[???]: University server. General compute, useful for keeping an eye on lecturers. Has my details all over it.
fez: The next university over has a computing society, this is one of the machines in that cluster. I help administrate it.
euro-vm: Runs XMPP (prosody), nginx, and certain other services for a social group for EVE Online
SDF: http://sdf.lonestar.org/ . This was my first shell account, hosting web stuff and my irc connection.
Rigel: one of the servers in the rack in pic. You can't see it, it's down behind the desk. It's misc, it has serial ports so it manages the switches etc.
Pollux: Thinkpad X240, day to day machine.
Keep it consistent yo. Almost everything is Debian jessie, so that's that. Also, I only really need to use about half of these in any given week, mainly "main".
>Acquire SSD/laptop HDD
>Replace ODD with said drive
>Only assign the one drive to the RAID
>Everything else runs in AHCI mode
>No random fan ramping, almost silent
Running Debian on mine, mostly for Samba and torrenting.
My laptop has i3 for "normal computing", but other than Chromium I don't need it for much.
There's also a big black PC you can see at the bottom of the rack, with a keyboard across it, that's actually got windows on it, with steam etc.
I was legitimately considering getting a thin client from those Chinese wholesaler sites as my own little seedbox, but then I just went and got a server in the Netherlands.
is dat assrocks soldered cpu board? I thought about buying one and having an entirely passively cooled HTPC, but pussied out and got a qubi instead; summers get pretty hot and risking instability wasn't an option. Really happy with it for now.
>tfw home server is thrown together from old parts
>tfw RAID5 is all different drives of different capacities and form factors
Waiting for one of my drives to fail.
>Built my first house.
>Moving in soon.
>Currently using a G630 with 8GB RAM and a 15TB JBOD for everything.
>Going to buy some rackmount shit and build a real server.
>Reconditioned UPS - $150
>Wall mounted rackmount case - $50
>Dual 5540 w/ 48GB of ram and 6 SAS caddies - $400
>6* 4TB SAS drives - $1500
>Decent second hand switch - $100
Just have to pull the trigger and then spend a weekend putting it together, making sure nothing is going to explode.
I'll be using it to stream my Plex to up to 7 concurrent users, torrent, run my home security network and virtualise a few development environments.
I might cannibalise an old gaymen computer (2600k/680) and slave it as a steam streaming machine.
Is there any way I can use my server to power the gaymen computer on and off? I've seen Arduino controllers that connect straight to the power button circuit and a USB port, but if there's something that doesn't require an intermediary device, I'd be happier.
Also, should I just stick with Debian, or would I get more functionality/capability out of a WS variant?
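On the power-on question: Wake-on-LAN covers it with no extra hardware, assuming the NIC and BIOS support it; power-off can just be a remote shutdown. A rough sketch (hostname and MAC made up):

$ sudo apt-get install etherwake
$ sudo etherwake -i eth0 aa:bb:cc:dd:ee:ff    # MAC of the gaymen box's NIC
$ ssh user@gaymen-box 'shutdown /s /t 0'      # assumes an ssh server on the Windows side

Enable WoL in the BIOS and the NIC settings first, or the magic packet goes nowhere.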
Yeah, I had IRN BRU and in the bowl I had borscht, the two best coloured foods out there.
Also please save me from this shit project management class
I call it the clunkpad on account of the fucking awful floating touchpad. Also, apart from the spacebar, the keyboard is pretty nice.
4GB RAM, 120GB SSD, i5-4300U, 10h battery make it worth it though.
I also got the monitor in >>52487361 , and a dock for it, plus lenovo keyboard and mouse for £480.
I'm pretty pleased with it overall, and it's a nice portable laptop for school. My bigger one was a bit shit to lug around and was always needing recharged.
what do you intend on doing with the server? what you should get depends totally on how you want to use the server. the easiest thing that comes to mind is a media or file server. you can get away with using a regular pc for that and it would work well. for other specialized stuff, it depends.
I'll take new photos when the APCs are in.
Here is an old one for now.
Got a watchguard firewall for free.
It's an xtm505. There's some docs on installing pfsense but I'm wondering if linux can support the packet accelerator that comes with it.
So far no problems. Bought it used off craigslist for like 80 bucks about six months ago. It probably does need new batteries, but it holds up fine.
I am not using the 750VA because the batteries are bad.
Getting a home server has ruined my life
>all money goes on drives
>server is 7 months old
>that would mean getting a switch
Does it just get worse from there? I keep seeing people post fucking racked home servers and I keep saying to myself that I don't need it but I know that I'll end up with that shit soon.
I'm not even particularly bothered about processing power, I just want shitloads of storage.
>Image and filename related.
Makes my dick hard thinking about it. But at what point am I shooting myself in the foot by having 8+ microservers, if I want 30+ drives, instead of just a decent JBOD rack?
I can get the basic gen8 microserver for £170 new.
>Get the dl360
>Install Software raid card
>Boot 1 instance of a NAS OS
>Boot a mail server instance (for reasons)
>Boot a website portfolio
>Get a Plex instance separate or running in the NAS OS
And you're pretty much halfway done with the power.
Also remember: get software RAID, not hardware RAID cards. Don't go full Linus.
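If you go the Linux software RAID route, that's just mdadm. A sketch with assumed device names:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
$ sudo mkfs.ext4 /dev/md0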
>Is there any way I can use my server to power on and off the gaymen computer - I've seen arduino controllers that connect straight to the power button circuit and a USB port, but if there's something that doesn't require an intermediary device, I'd be happier?
BUMPING FOR ANSWER
So, it turns out I did get my end of year bonus this year.
And I've ordered the replacement for the HP. Not sure what I'm going to do switch wise yet, but I'm leaning to changing out the fans in one of the 48 port HP's I have, and telling it to ignore the fan speed errors.
If you're not worried about power or noise, but just want storage, look at an HP DL380G6 and some HP MSA 60's (or Dell PE 2950's and MD1000's)
Unless you want 2.5" drives for everything, in which case you'd go MSA 70 or MD1220.
Those drives are the weird shingling ones that need special drivers because they write data in a nonstandard way. I think. Either way, only get those if your workload is write once and never modify.
Never had that issue with them. We have thousands deployed at work and they get hammered. HARD. Failure rate seems to be about .7%, but I'm sure that will go up with age.
Possible a firmware thing?
Not in terms of failure rates. Was under the impression those ones didn't allow for changing a bit once it's written without rewriting the entire track, making for poor home usage.
If I'm misinformed that's even better, those are super cheap for the capacity.
New to this but considering something like a Gigabyte Brix or Intel NUC with Kodi or PLEX (not sure yet) with some kind of USB3 RAID array.
Mostly for media streaming. But will mounting the drives on my PC be slow?
Or perhaps just a micro ATX machine for it all?
Not working in IT at the moment, just a student that likes to keep informed.
Is that the preferred way to use storage spaces in enterprise? I was looking at using WS2012 and storage spaces, but I don't know how stable it is.
We remove upwards of 200 at a time, and yes, the storage space LUN's stay online without error. If you insist on commenting on something you should really at least get a baseline idea of how it works first.
Yes. Storage Spaces wants JBOD. Any type of redundancy just adds overhead, unless you're provisioning SAN as the storage spaces pool, but that's beyond the scope of this discussion.
When you carve out a volume, you can choose the level of redundancy there, as well as how much tiered storage you wish to present.
As for stability, I've only ever seen one array go down, but it was 4 3TB WD drives connected to the same USB3 hub. But even then, it took a bit to get it to break (and that was the point of that test).
The volume metadata is stored on all the disks in the pool, and I personally have lost 5 drives in an 8 drive pool and been able to recover most of what I needed.
Protip - Dropping running hard drives is bad for them.
Good to know. My server is slightly more robust than a USB hub thankfully.
Reading about it, seems similar to how most software RAID solutions work. Currently using ZFS on FreeNAS, so SS doesn't sound like too much of a jump at least.
Now I just need to cobble enough storage together to migrate everything....
Yeah, it's pretty straight forward. It's also nice because it eliminates another potential failure point (RAID card). If the drives are just presented as AHCI or SAS JBOD, as long as the drives can be read by Windows it doesn't matter.
In my case I moved my 3 remaining good drives + one that was bad but at least being ID'd to a USB enclosure, and the pool was still read.
It's just so damn easy...
>In my case I moved my 3 remaining good drives + one that was bad but at least being ID'd to a USB enclosure, and the pool was still read.
That's quite lucky. I have an eSATA controller and a 2-bay enclosure, and the controller in that enclosure somehow fucks up hard drive IDs. I can only add the first drive into a pool, but can access both when individually formatted.
>That's quite lucky
The hard drive ID doesn't matter. Once the drives are allocated to a pool, there's metadata tagged on all of them.
That's the point with Storage Spaces, the drive can literally live anywhere, and move anywhere.
Though I have seen something similar to what you describe with eSATA. In fact, the 4 drive canister I have on the micro server is the same.
The individual drives will report with Get-Disk, but not with Get-PhysicalDisk. It's strange.
do prebuilts count? I've a synology ds214se. It's sold as a "home NAS" but it's really just a low power server box, i could pretty much do anything I could want to with it, web server, DNS, VPN, asterisk, etc you name it, yet all it does is download porn. Almost feels like a waste.
One thing I'm still unclear on. Does it support dynamically resizing the pool? If I add another few drives to the pool or change the level of redundancy of a space will everything just kinda automatically reshuffle around?
>The individual drives will report with Get-Disk, but not with Get-PhysicalDisk. It's strange.
Yes exactly that. Got tired of figuring it out so now those 2x3tb disks are striped backup.
Definitely something that made me regret not going with an SAS enclosure.
I got my current licenses through Dreamspark, so should be able to get 2016. Not sure if I'm not reading into it far enough, but these features don't seem to apply to a single server situation like mine. Is it mostly the ability to build SS over a network? Might have multiple boxes in the future though....
Awesome, this was the flexibility I was missing out on with ZFS.
What do I need to read up on if I wanted to safely open my server up to the internet? I have a programming background and I'm familiar with Linux, but I wouldn't call myself a system administrator or anything and I don't know much about network security.
> How would I go about adding a second one with equal storage as a backup?
Are you trying to provide the storage as usable space, or just as a backup location for your existing server?
> All too often on here people see WS and wonder why you aren't using Linux
And Linux has its place too. It's one of those things. Why does it have to be one OR the other? Why not both?
It's actually either a checksum offload, or a crypto accelerator.
>Dell Powervault MD1000
Why do people even bother with £400+ 4 bay solutions? Is it really as simple as getting a perc raid controller and drop some disks in?
I think there are some SansDigital or Norco cases around, but for rack gear, sound doesn't matter, because it's all in a data center.
I've heard that fans can be changed and the BMC's (Baseboard Management Controller) tuned for a lower fan RPM, but I've never done it.
Eh, I might hold out until I finally buy a house and can dedicate a room or a corner to not-so-quiet servers. Currently the Gen8 and all other shit is next to my bedroom and I can hear the fans when I'm transcoding videos for days on end.
Also then I'll be one step closer to having ethernet in every fucking room.
No when it's idle the xeon sits at 35-40C at 11% fan speed, under load it spikes to 80C until the system fan picks up, then goes down to 65C at 45-50% fan speed.
It's not too bad, just trying to imagine what a 12 bay JBOD enclosure sounds like when all you have is a 4 bay box with one fan is kinda difficult.
So I can get an IBM X3650 M1 for 60 euros, shall I get it?
That was the toppest of keks. three separate hardware controlled raid 5 arrays all striped.
Jesus Christ Linus what are you even doing?
As soon as he explained his setup I was just like u wot m8.
Yeah, the DDR2 is actually a huge letdown. An HP SE326M1 sounds interesting as fuck.
Unfortunately all the ones I can see are on Ebay, so I don't know how shipping of it is gonna be to my Country. One of them has a shipping of just 13 euro which sounds kinda fishy.
I've got pic related, two WD Greens inside. Using the USB backup from an external drive I can get write speeds of up to 90MB/s, but over wireless it tops out at around 5MB/s. If I'm streaming something to my TV and try to do a file operation on my laptop, the file operation will be slow and time out at around 10%. The stream will work fine though. If I'm doing a file operation I won't be able to stream, so I can really only do one thing at a time, and there are usually 2-3 people trying to use the server at once to stream stuff or whatever.
My router is only rated to 65Mbit/s; would it be worth upgrading that to 100, or even gigabit? Would that alleviate those timeout issues? The actual NAS itself only has a tiny amount of RAM too, could that be an issue? I might have to upgrade to a self-built solution with more space.
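Back-of-the-envelope: 65 Mbit/s is about 8 MB/s of theoretical throughput, so the ~5 MB/s you're seeing over wireless is roughly what that router can deliver. Gigabit manages ~110 MB/s in practice, which covers the ~90 MB/s the drives can do, so the network is the first bottleneck worth fixing; the tiny RAM probably only matters after that.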
>secure the system
no root login via ssh, 4096-bit RSA keys for ssh login, no forwarded services other than ssh
>enable SSH from outside of the network
port forward and do the above security
>enable transmission daemon
>enable FTP
ftp sucks and you should use sshfs
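Concretely, the sshd bits and the sshfs replacement for FTP look something like this (paths are assumptions):

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no

# on the client: make a key, copy it over, then mount over ssh instead of ftp
$ ssh-keygen -t rsa -b 4096
$ ssh-copy-id user@server
$ sshfs user@server:/srv/downloads ~/downloads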
Just bought an Intel S2600IP4 motherboard for a home server.
It uses a custom 14.2" x 15" form factor.
Anyone know a case that it can fit in? It's about 2 inches longer in both dimensions compared to an EATX motherboard.
IRN BRU is fantastic, I went to Scotland once, shit was cash. I order 2 liters of that shit to murica whenever I get the itch.
They tend to have lots of high-grade hardware. Hotswap PSUs, hotswap drive bays, 2P (sometimes even 4P) mobos, xeons, ECC, sometimes HW raid.
You can get all of those things individually but it ends up costing more for less quality.
Just finished setting up my seedbox that I share with a friend who moved to another country, on a Kimsufi box I got for cheap. Not exactly a home server but still.
>secure iptables, only allow port 22, 80, 443 and two random 49XXX UDP for rtorrent as incoming, drop everything else from the outside
>ssh key-based login, non-root, password-less, explicitly allow only our accounts to login
>install docker, run 2 rtorrent+rutorrent containers, map the containers' Download dir to each user's ~/Downloads
>each of us can access the UI in different ports
>install simple certificates and only use https
Bretty good senpai, I'm auto-mounting my remote /home to my desktop with sshfs, running like a champ.
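Roughly what one of the two containers looks like (image name and port numbers here are assumptions):

$ docker run -d --name rtorrent-user1 \
    -p 8081:80 -p 49161:49161/udp \
    -v /home/user1/Downloads:/downloads \
    some/rutorrent-image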
I did a lot of research into this. AMD can provide some very basic, very cheap ECC capable builds. Asrock seems quite generous in giving even lowend motherboards ECC capabilities. Also possible with Intel but CPU and motherboard are a bit pricier.
USB3 doesn't play nice with more than one device using a lot of bandwidth. I have 2 drives on usb3 too and when one is reading/writing the other one pretty much crawls to a halt.
You either need esata or more usb3 controllers.
>How does Debian with ZFS on Linux fare?
My own experience is that it doesn't work on debian testing. I tried to set it up a few months ago, but failed.
Now that ZOL is finally in debian's repos, things might change in the near future.
Hopefully my next file server runs on debian with zfs, instead of the bsd I'm currently using.
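Once it's in the repos proper, the debian setup should be roughly this (package names and pool layout assumed):

$ sudo apt-get install linux-headers-$(uname -r) zfs-dkms zfsutils-linux
$ sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd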