ITT we sit by the warm glow of our LACK racks, the rhythmic blinking of HDD indicators reflecting off our faces, and discuss:
- Virtualization Technology: Do you use Xen, KVM or ESXi? Do you think virtualization is for pussies and keep all your hosts bare metal? Or are you an OpenVZ/LXC type of guy?
- NFS, Samba or iSCSI?
- Whitebox or last-gen commercial?
- Do you host Subsonic? Plex? rtorrent? An IRC server or bouncer? A wiki? What's your killer nginx reverse proxy config?
And remember, no matter what you think, there is only 1 right answer to these questions.
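Since the OP asks: none of the posters actually shared theirs, but a minimal nginx reverse proxy looks something like the sketch below. The hostname and backend port are placeholders (32400 happens to be Plex's default), not anyone's real config:

```nginx
# Proxy media.example.com to a service listening locally on port 32400.
# server_name and proxy_pass are placeholders for your own setup.
server {
    listen 80;
    server_name media.example.com;

    location / {
        proxy_pass http://127.0.0.1:32400;
        # Preserve the original host and client address for the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Add one server block per service and you have the classic "everything behind one box on port 80" homelab setup.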
Just finished migrating my Debian + libvirt setup to Proxmox. I skipped full virtualization in favor of LXC containers this time because all my hosts are Debian-based anyway. So far I really like it, though I do feel dirty after switching to a proprietary solution.
Depends on what I need
>- NFS, Samba or iSCSI
Depends on my hardware
>- Whitebox or last-gen commercial
Depends on my budget
>- What do you host
>Xpenology box (HP MicroServer G7, 8 GB of non-ECC RAM)
Currently I'm hosting Plex, SMB, Download Station (a hacked-up version of Transmission for Synology), and Surveillance Station (IP cameras).
>pfSense box (i5, 4 GB of non-ECC RAM)
Local website caching, an AV scanner for all downloaded files, a firewall (duh), and a few more things.
I could probably just boot Xen on the G7 with 4 GB of ECC RAM and do all this from there with a decent network card, right?
Trying to set up and run a Debian-based Plex server, except I have no idea what I'm doing. It's all physically set up, but I can't seem to get anything to work. Right now I'm trying to get remote desktop (xrdp) working, but when I log in from my Windows laptop all I see is pic related.
That's not even to mention the trouble I'm having getting Plex installed. Being a first-time Linux/Debian user is hard.
Anyone using KVM with VT-d GPU passthrough for Linux guests? I don't get why I can't get it working. I've tried Ubuntu, Debian and Arch as guests. Windows guests work well and rebooting is no problem, but Linux guests hang, and after a few reboots eventually the host does too. Nothing in dmesg. Should I use OVMF or SeaBIOS for Linux guests? I've tried everything.
Seriously, just use SSH and learn to deal with it. Everything is very easy after you get the hang of it and doing something like getting Linux remote desktop working properly may very well be more complicated than getting Plex installed in the first place. In fact, installing Plex is nearly trivial.
Find the link to the latest Ubuntu Plex package (*.deb file) on the Plex website, then in a terminal on your server do this:
wget <link_to_deb>
sudo dpkg -i downloaded-file-name.deb
That's it. Some things REALLY are much easier and more straightforward from the command line. The first command downloads the Plex package, the second one installs it.
A crappy Optiplex 760 with 5.5GB RAM running ESXi with pfSense and CentOS as a samba fileserver + other testing VMs
Got an old tiny computer running Ubuntu desktop which I can control using PuTTY/Webmin from my other computers.
Just using it as a NAS pretty much.
Yeah, there really isn't anything difficult about it. Just read the installation instructions and make sure you apt-get all the dependencies (though you could just run apt-get -f install after dpkg).
I love the concept of strong computers feeding thin clients. I use this as my powerhouse for video editing/game streaming, along with a giant Plex server, TeamSpeak, and mail/nginx/etc. servers.
Runs WS2012 + Ubuntu Server
More like dog-proofing. He's a small, but very curious dog. The button is a perfect fit for his paw so when he goes climbing and jumping around, his paw pad can sometimes press the button in. There used to be one on the UPS too but he tore it off.
UPS's still not in.
>- Virtualization Technology: Do you use Xen, KVM or ESX? Do you think virtualization is for pussies and keep all your hosts baremetal? Or are you an openvz/lxc type guy.
I use ESX with vcenter to manage them all.
>- NFS, Samba or iSCSI?
most stuff still uses iSCSI
>- Whitebox or last-gen commercial?
Current are last-gen commercial barebones for me.
>- Do you host subsonic? plex? rtorrent? IRC server or bouncer? wiki? What's your killer nginx reverse proxy config?
Running Plex (though I don't use it myself), torrents, terminal server, backup servers, legacy servers, game servers, IPAM, NTP, mail, Observium, VPN, WDS, UniFi, websites, AD, FTP, etc.
For aesthetic reasons you could wire the reset button as a power button; just swap the leads on the mobo header. Or disable the power button, either in the BIOS or altogether, and just a) keep it always on (from the BIOS) or b) set up WOL with magic packets.
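Option (b) is simpler than it sounds: a magic packet is just six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, sent as broadcast UDP (port 9 by convention). A sketch in plain shell, assuming bash; the MAC below is a placeholder, and for actually transmitting the packet a tool like wakeonlan is the usual route, since a plain write to a broadcast address needs SO_BROADCAST:

```shell
# Build a Wake-on-LAN magic packet: 6 x 0xFF, then the target MAC 16 times.
# aa:bb:cc:dd:ee:ff is a placeholder -- substitute the sleeping machine's MAC.
MAC="aa:bb:cc:dd:ee:ff"
RAW=$(echo "$MAC" | sed 's/:/\\x/g; s/^/\\x/')    # -> \xaa\xbb\xcc\xdd\xee\xff
PAYLOAD='\xff\xff\xff\xff\xff\xff'
for i in $(seq 1 16); do PAYLOAD="$PAYLOAD$RAW"; done
printf "$PAYLOAD" > magic.pkt                      # 102 bytes total
# Send it with e.g.: wakeonlan aa:bb:cc:dd:ee:ff
# (a raw /dev/udp write to a broadcast address usually fails without SO_BROADCAST)
```

The NIC also has to be told to listen for it (something like ethtool -s eth0 wol g, interface name is a placeholder) and the BIOS has to allow wake-on-LAN.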
4 GB isn't that much (especially for full virtualization). Where do you host your files? If it's on the G7 then you might run into some problems if you like streaming/torrenting on your server (they can choke even with fast protocols like NFS). I would look into containers (LXC), which give you a clean environment with minimal wasted RAM and possibilities for fast disk access (via mountpoints).
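The mountpoints are what make the container route cheap for file serving: the container sees the host's storage directly, with no network filesystem in between. A sketch for a classic LXC config file; the paths and container name are examples, not defaults:

```
# /var/lib/lxc/fileserver/config (paths are placeholders)
# Bind-mount the host's storage pool into the container; 'none bind' means
# no extra filesystem layer, so disk access runs at native speed.
# The target path is relative to the container's rootfs.
lxc.mount.entry = /tank/media media none bind,create=dir 0 0
```

Compare that with NFS-mounting the same directory into a VM: one kernel, no protocol overhead, no duplicated page cache.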
Looking to build a home server, currently picking parts. I'm used to building computers, but I'm a bit lost when it comes to picking parts for a fileserver.
My main focus is reliability and power saving (and of course money). The plan is to put CentOS on it and run ZFS with 3x2TB disks. I may also put Plex on it.
Here's my list so far. Opinions? Can I jew more on the processor?
>Redundant filestoring for safe data storage
>A webserver that allows my girlfriend to easily download anime from me because torrent is blocked in her dorm
>Torrent server, controlled via webui from my main desktop
>Hentai@Home for mad exhentai points
>Privately hosted cloudstorage (ownCloud)
I could run most of this on my main desktop, but I dislike the idea of not being able to shut down my computer when I want to.
Everyone will probably tell you that you need ECC RAM if you want to go with ZFS. At which point you'll realize that you need to start making compromises either by getting an older generation cpu+mb kit, or by shelling out a lot more money.
Am I the only one that's had very bad luck with Headphones? My music collection is very well organized (just tagged via MusicBrainz), but it hardly recognizes any of it, and there's no obvious way to match the rest.
Planning on mirroring two sets of two disks for data and media on winblows server 2012 and storage spaces.
ReFS or NTFS? The only thing putting me off ReFS is that I can't just pull a drive and read it on other machines.
For 2012 I'd go NTFS. I don't think ReFS is quite mature enough yet.
Also, if you're going to use Storage Spaces let it handle the RAID levels, and present the disks to it as JBOD.
Considering replacing my ProLiant Micro G8.
Toying with http://pcpartpicker.com/p/bsDwRB
Otherwise, I'll go full retard and pick up a pair of Dell R610's and an MD3200 + a couple MD1200's.
Would prefer to stay simple-ish
Leaf blower? You mean jet engine.
I'm thinking of getting a quiet server instead.
I don't use it because of the noise.
Quite a waste of money, unfortunately
The top one is a ZyXEL GS1920-24 that replaced the bottom one, 3com Baseline 2948.
>Also, if you're going to use Storage Spaces let it handle the RAID levels, and present the disks to it as JBOD.
I'm a bit apprehensive of committing to something like that for now. I'm used to my retard method, syncing disks using freefilesync.
It gives me the opportunity to undo changes (alongside shadow copies), lets me just pull a drive and read it on any other machine I have if the server were to die, and I can't destroy all my data by being a retard.
If I had spare disks to fuck around with I'd mess around with storage spaces a bit more and get some experience.
Storage spaces puts metadata on the disk. Any Win 8 machine can read a storage spaces set.
I've had to do DR a few times when servers blew up.
Protip - you can attach the disks via USB and as long as enough of them are there, the data is perfectly transferable.
Ah shit, I forgot about that. It wouldn't be the case if I used ReFS, but I forgot you can use NTFS too.
I suppose since my plan is to use 2-3 sets of 2 mirrors I could pull either drive on its own and it should be readable right? Is it possible to read them on winblows 7 too?
4x4tb and 2x3tb.
I'm currently backing up the 4tb drives onto the empty 3tb drives so that I can reformat the 4tb drives, copy everything back over and then set up the 3tb mirror drives last.
Add the 4tb drives as JBOD to storage spaces, and then create a mirror volume. Once the data is back on the 4tb drives, add the 3's to the storage pool and expand the volume.
I understand why you want to do things with mirrors, but you're adding an additional level of complexity, and what happens when you can't use the RAID controller?
As for Windows 7 and storage spaces, I don't recall, and I'm too lazy to search it.
>I understand why you want to do things with mirrors, but you're adding an additional level of complexity, and what happens when you can't use the RAID controller?
That's the beauty of it, I'm not relying on a RAID controller. I'm in AHCI mode and all my disks are individual volumes. I sync them using freefilesync, it's a pretty nifty piece of software.
Also, the only reason I'd rather not include the 3TB drives with the rest is that they're running through a single USB3 controller and they start acting up when they're accessed simultaneously. Currently waiting on an eSATA controller that should take care of that though, then we'll see.
>I sync them using freefilesync
Apparently I can't into reading today. Sorry.
Including the 3TB drives is not required at all. I know I'll get shit for it because Windows, but I'm a huge fan of Storage Spaces. I use it at home on small (4+ drive) arrays, and at work in storage clusters (72-500+ drives per node).
Don't get me wrong, I've had a quick play with 2 empty drives I had, but that doesn't really give you the real experience. I'd love to mess around with it with 10+ drives and see what it all actually means. I simply don't have the drives and spare data to experiment with.
In my 2016 testing I'm using ReFS, but everything in production is 2012 R2.
I think I'll end up rebuilding a lot of my Storage Spaces environment when I deploy 2016, because thus far I really like the way it handles clusters and local storage.
currently waiting for a 3tb WD Red drive.
>dual Xeon 5160
>4gb ram NON ECC
>Quadro FX 4800
>40gb OS HDD (old)
>360gb HDD (old)
I have no idea what the power draw is, but it's at very low CPU/GPU usage since I use it mainly as a TeamSpeak / home storage server.
>>dual Xeon 5160
Per HP, for the xw8400 (guessing from pics)
Memory module features :
● Eight memory slots for DIMMs
● 512-MB, 1-GB, 2-GB, 4-GB pairs
● 32 GB maximum configuration with 4-GB DIMMs
● Configurable for Single Channel (one DIMM), Dual Channel (two DIMMs), or Quad Channel (four to eight DIMMs)
● DDR2-667 or DDR2-533, Fully Buffered DIMMs (FBD)
● No support for mirroring
● No spare DIMM support
● Standard FBD, ECC (72-bit ECC)
I'd definitely recommend the GS1920-24 over the Netgear GS724T. I've used both, but the Zyxel is better, in that price segment. If you have a slightly bigger budget, there are some nice Procurves with better support and warranty.
Don't bother with the old 3coms, they're loud and power hungry.
What distro are you running on that poweredge? I'm actually having some trouble getting centos 7 running on it, and I'm curious if you might have experienced similar. Debian gave me driver trouble, so I'm out of preferred distros to run on the thing.
Not that anon, but what errors were you getting, and I'm guessing you're using x64? I can give it a shot when I get home, and sometimes OEM's will provide custom installation media. Dell is especially good about this with VMWare.
Specifically, upon booting to the usb drive that has a dd'd centos 7 image, I get "isolinux.bin missing or corrupted". I know it's NOT corrupted because the iso works on my laptop and desktop just fine.
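Not that anon, but the usual self-inflicted causes of that error are writing the image to a partition (/dev/sdX1) rather than the whole device, or pulling the stick before the write-back cache has flushed. A sketch of the dd invocation; the ISO filename and /dev/sdX are placeholders, so triple-check the device with lsblk first:

```shell
# Write the ISO to the whole device, not a partition (no trailing digit).
# conv=fsync forces a flush before dd exits, so the write actually completes
# before you pull the stick.
sudo dd if=CentOS-7-x86_64-Minimal.iso of=/dev/sdX bs=4M conv=fsync
sync
```

If the stick still won't boot after that, the drive's BIOS boot settings (UEFI vs legacy) are the next suspect.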
So nobody else controls the machine.
If someone wants to know what is going on on your server then they have to get a warrant.
The provider of a rented box might not be allowed to tell you it has been raided.
>- Virtualization Technology:
>- NFS, Samba or iSCSI?
>- Whitebox or last-gen commercial?
>- Do you host ?
Flexget, Logitech Media Server, transmission-daemon, OpenVPN, nginx reverse proxy, etc. (all containers)
I guess if I'm spending a lot of money on this already, I might as well go all out. Most of the ECC-compatible motherboards require a Xeon processor, so the price goes up even more. I guess I'll save up for a month before making the purchase.
>Most of the ECC compatible motherboards require a Xeon
ASRock makes quite a few non-server boards that take ECC. Intel is funny with ECC compatibility when it comes to their processors.
Certain Celerons, Pentiums, i3s and i5s are ECC compatible, but very few. Intel ARK isn't overly accurate either.
It's cheaper to go AMD if you want the cheapest ECC route, but I was deterred by the performance.
Buy it? Seriously though what do you need it for? I've been using the trial and it just werks. I use team viewer for 99% of remote management and during install and BIOS I use iLO.
It disconnects me every minute or two but its like 2 clicks to reconnect.
It isn't valid.
I am not giving HP one more cent after they decided a support contract is needed in order to be able to download BIOS and firmware for hardware I already paid for.
Virtual media requires the advanced version.
1 custom caseless server and a Dell Vostro 410, both with 8 GB RAM. 3x Windows Server 2008 R2 VMs, and the Vostro runs the Plex one.
Anon got raided for illegal content on his server. He used a credit card to purchase the server. We got the son of a bitch.
Also, would it be better to build my own server and run my gambling website on it? CSGO.
Just curious, why would you use Team Viewer for remote access? Are you managing servers on a network that you don't own (i.e. no VPN)? It wouldn't make sense to use it in a LAN environment.
Thanks for the reply.
After reading quite a bit about ZFS and ECC, I decided that it's better to just go for it. It's more future proof anyway, and if I'm building something I might as well try to be serious about it.
Here's my updated list. Price went up about $200 but I can manage that.
>support contract is needed in order to be be able to download BIOS and firmwares
Since when? I bought my microserver in ~july and got all the drivers, BIOS updates and firmwares for free after registering my product number.
You can save more on the CPU if you want. For example, my MicroServer came with a Celeron G1610T; it supports ECC and is more than enough for simple file sharing. It even handled 1-2 Plex transcodes.
Also the PSU could be a bit cheaper. In the UK that PSU is roughly £50, but there are 450-500W ones for £35ish, by EVGA for example. It's not like your PSU will be stressed with a low-power CPU and a few drives.
Red drives are okay, but they're really just overpriced Green drives with different firmware. I recently bought 2x3TB Toshibas that run just as quiet and cool as my 4TB Reds, are 7200 RPM, and use about 1 watt more each.
There's not a lot to save on PSU or the drives for me, unless I went with Seagate Barracuda drives. I can save a little bit by going WD Blues instead of Red, and as far as I heard the WD Blues aren't that bad anymore.
As for the CPU I am being cautious. My choice is mostly based on the fact that its power footprint is really low, at 35W while others are at 54W.
Another alternative would be the Intel Celeron G1840 for less money.
Hey /hst/, what would be the optimal small form factor home server? I'm running mine off a ThinkPad (T410) because in the event of power failure the battery carries it over, and I slapped 2x2TB drives in it, but I'm looking to build something a bit more extensible.
It's an excellent idea actually, because it's very easy to update and has the most recent software, as we all know. No more sitting nervously as you wait for a specific patch (or compiling it yourself), and no more hunting around for software you want to run, because of the AUR.
Faggots like to talk about 'muh updates breaking things' and 'muh unstable software'. It's all BS. You decide when to upgrade, and updates very rarely break anything. For example, Arch just recently switched to PHP 7; a simple yaourt -C later, diffing the new config files against my old and tested ones, and the issue was solved.
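That config-diffing step is worth spelling out, since pacman leaves *.pacnew files next to any config it didn't want to clobber. A minimal sketch of walking them; merge tool of choice is up to you:

```shell
# List updated configs pacman saved as .pacnew and show what changed.
for new in /etc/*.pacnew; do
    [ -e "$new" ] || continue          # glob matched nothing, skip
    old="${new%.pacnew}"
    echo "== $old =="
    diff -u "$old" "$new" || true      # nonzero exit just means they differ
done
```

Once a file is merged by hand, delete the .pacnew so the next sweep stays short.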
>tfw installed ubuntu server on an old ideapad that was given to me
It's blazing fast since it's a newer laptop but after setting up nginx I have no idea what else to do with it. I feel like a failure.
Holy shit, I didn't think that included the drivers and firmware. I was waiting for something like this to show itself; everything seemed a bit too easy and cheap for HP. The damn thing only cost me £170 new.
Then again, I've seen people share the files, on HomeServerShow for example, which has a decent Gen8 section. Also, I don't think we'll see many more driver updates, except maybe iLO.
Apparently Mushkin is coming out with 4TB SSDs this year, and are aiming at a $500 price point.
I'd say by the end of this year or sometime next year SSDs will be economically feasible and have enough capacity to replace traditional hard drives.
no, just some virtualbox w/w7 and ubuntu to test things
>- NFS, Samba or iSCSI
none, i guess
>- Whitebox or last-gen commercial
>- Do you host subsonic? plex? rtorrent? IRC server or bouncer? wiki? Whats your killer nginx reverse proxy config?
qtorrent, which I access from the browser; nothing else
I'm quite new to this, so any tips/ideas for working with servers would be much appreciated.
>wouldn't make sense in a LAN environment
Are you for real? My server is in another room, my desktop is upstairs, and my GF's shitposting box is downstairs, and I have TeamViewer running on all machines, including my tablet and phone.
I can manage all those devices from every device I own, whether it's doing something on the server, which is headless, to fixing GFs retarded problems with 2 mouse clicks. I don't think I can live without it.
/RaspberryPi/ NAS here
This thing is slow as shit but it's something.
I've been toying with replacing my microserver myself, but finding a suitable case (i.e. which has some sort of tray system with hotswap) is a little troublesome. Looks like you have made a good choice there anon!
How do I into backing up an 8TB Windows Storage Spaces drive?
I have 9TB over 3 external hard drives that I could use. Should I just split the files among the drives, or create another storage pool with the externals as backup? Thanks senpaitachi
unless you buy a server case no.
both, got a 1090t and a xeon/supermicro setup
It's not hot swap, but for what I'm using it for, I don't need hot swap.
Funny thing: while my test environment stays mostly off, when I run a build test I can't put a maintenance window into the schedule, so I do need hot swap in that environment.
My server isn't doing anything at the moment. Any suggestions as to what software I should run? It's fairly decent: 8 cores, 32 GB of DDR2 ECC, and 2TB of RAIDed drives.
Inb4 plex and torrent
I have 4x4TB drives (WD Red) and I want to make a RAID array out of them. Should I go RAID 10 or 6? I like the fact that ANY two HDDs can die with RAID 6, but I'm nervous about potential drive failures during resilvering and construction of the array (plus everyone touts RAID 10 as the nectar of the gods). Any insight?
I have the same drives as you and set up raid 10 using windows storage spaces. it's bretty gud lad but you still need backup.
Raid 10 will be faster than 6 with 4 drives, I'd suspect even with a good controller.
Speed isn't too much of a concern with me, I just want to be able to survive a drive failing. Ideally I'd have another server for backing up all my shit, but I don't feel comfortable shelling out another $600-700 to do that.
RAID 6 and 10 can both survive the loss of two drives, but both have their disadvantages: RAID 10 only survives two failures if they land in different mirror pairs, and resilvering RAID 6 has a good potential to fail because it stresses every remaining drive. I can't decide which to pick.
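Worth noting: with exactly four drives the capacity math is identical between the two layouts, so the decision really is about rebuild behavior and speed, not space. A quick sanity check, with the drive count and size hardcoded to the 4x4TB setup above:

```shell
# Usable capacity for 4 x 4 TB under each layout.
DRIVES=4; SIZE_TB=4
RAID10_TB=$(( DRIVES * SIZE_TB / 2 ))    # stripe of mirrors: half the raw space
RAID6_TB=$(( (DRIVES - 2) * SIZE_TB ))   # two drives' worth go to parity
echo "raid10: ${RAID10_TB} TB usable, raid6: ${RAID6_TB} TB usable"
```

Both come out to 8 TB; RAID 6 only pulls ahead on capacity at five or more drives.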
I have 2 drives in an eSATA enclosure and a USB3 hard drive that I would like to pool under Storage Spaces, but the USB3 drive doesn't show up in the primordial pool.
I can format it and it works fully, but I can't add it to a pool.
I thought it would be better than a £12 StarTech thing. It works fine, and the only problem seems to be related to Storage Spaces. None of the reviews I checked mentioned it except a couple, and I happened to miss those.