Curious what you’ve got installed on it. What do you use a lot but took a while to find? What do you recommend?

  • Drew@sopuli.xyz · 1 year ago

    I’ve just been using an old laptop with Jellyfin, Radarr, Sonarr, and Transmission.

  • Error Lab@infosec.pub · 1 year ago

    I got a DS920+ and have been using it for file storage, backups, Plex, and running Docker for all my *arrs. I really like Synology as an entry level; it got me to dig deeper and learn more. I’m behind CGNAT, so setting up a VPN solution that would work on DSM was a pain. I’m in the process of setting up my own homelab and building a TrueNAS box as I learn more about ZFS.

  • DM_Gold@beehaw.org · 1 year ago

    I’d like to build a NAS. Does anyone have a simple guide I could follow? I do have experience building my own PCs. I could search online for a guide, but a lot of the time small communities like this will have the end-all, be-all guide that isn’t well known.

    • Parsnip8904@beehaw.org · 1 year ago

      I don’t have one offhand, but a NAS at the homelab level is not that different from a server.

      I have had success with getting a second-hand server with a moderately powerful processor (an old i5, maybe?), a good 1/10GbE network card (which can be set up with bonding if you have multiple ports), and lots of SATA ports or a RAID card (you need PCIe slots for the cards as well).

      I would go with an even lower-power processor for power savings if that’s an option. ECC RAM would be great, especially for ZFS/btrfs/XFS.

  • DEADBEEF@beehaw.org · 1 year ago

    Mine currently runs on an old Pi 3 with an external hard drive plugged in via a powered USB hub. I’m using OpenMediaVault at the moment, but I’m probably going to swap it over to plain NFS when I get the chance. I’m also planning to swap out the single external drive for four drives in a soft RAID through LVM.
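An LVM soft RAID across four drives, as described above, can be sketched roughly like this. This is a minimal sketch: the device names (`/dev/sd[b-e]`), volume names, and mount point are all assumptions for illustration, and it uses raid5 as one plausible choice.

```shell
# Register the four drives as LVM physical volumes
# (device names are assumptions -- check yours with lsblk first).
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate nas-vg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# One raid5 logical volume across all four drives:
# three data stripes plus distributed parity.
lvcreate --type raid5 -i 3 -l 100%FREE -n nas-lv nas-vg

# Format and mount it like any other block device.
mkfs.ext4 /dev/nas-vg/nas-lv
mount /dev/nas-vg/nas-lv /srv/nas
```

From there, exporting `/srv/nas` over NFS is one line in `/etc/exports`.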

  • Muscular_Michael@beehaw.org · 1 year ago

    Synology RS1221+ and RX418 for main personal storage, media, and device backups. Synology DS918 as a backup of the personal data on the RS1221+. Synology DS423 plus Synology TC500/BC500 cameras for NVR surveillance. Synology DS220+ offsite as a backup of personal data.

    I like Synology for their ease of setup. I just replaced an older Synology and IP cameras with the DS423.

  • Dandroid@dandroid.app · 1 year ago

    I just have a Synology with 4 drives. Super basic, very easy to set up, and it takes up very little space in a closet. I mount it on my Ubuntu server using Samba, and then any data processing that needs to be done on that data (e.g. Plex, a music server, etc.) happens on the server, which is much more powerful than the little Celeron CPU in the Synology.

  • Parsnip8904@beehaw.org · 1 year ago

    I have a Proxmox host with a privileged LXC container offering Samba and NFS shares. Drives are pooled as btrfs RAID 1 (which lets me use different-sized drives easily).
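The btrfs RAID 1 pooling mentioned above can be sketched like this. Device names and the mount point are assumptions; btrfs raid1 keeps two copies of every block, which is why mixed drive sizes work as long as no single drive holds most of the total capacity.

```shell
# Create a pool with data and metadata both mirrored across
# two different-sized drives (device names are assumptions).
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /srv/pool

# Later, a drive can be added and the pool rebalanced in place:
btrfs device add /dev/sdd /srv/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/pool
```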

    I tried OMV etc., but found all of those options to be not really convenient once you start directly modifying config files. I love the convenience they offer, but for something like a NAS, fewer moving parts mean less breakage in my experience.

    I use RAID 1 for actual data but no RAID for movies and media that are disposable (not photos, etc.). I didn’t find RAID worthwhile in that scenario after having tried it. Might be because I didn’t have enough $$ to truly spend on it.

  • donio@beehaw.org · 1 year ago

    No dedicated NAS. I have a main Linux system that’s always on for other purposes, so that also serves as main storage. Remote access is entirely via SSH-based methods: sshfs, TRAMP in Emacs, git, and occasional copying of stuff around.
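The sshfs approach mentioned above needs no server-side setup beyond sshd. A sketch, where the host name and paths are assumptions:

```shell
# Mount a directory from any ssh-reachable box as a local
# filesystem (host and paths are assumptions).
sshfs user@homeserver:/srv/storage /mnt/storage \
    -o reconnect,ServerAliveInterval=15

# ...use /mnt/storage like a local directory...

# Unmount when done.
fusermount -u /mnt/storage
```

The `reconnect` and keepalive options help the mount survive laptop suspends and flaky Wi-Fi.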

  • YuzuDrink@beehaw.org · 1 year ago

    I just set up my first NAS – a Synology 1522+, with 4x6TB drives in whatever Synology’s custom raid is called where I lose one drive to redundancy.

    So far, I mostly use it for my in-home Plex server (which finally lets me shut down my gaming rig at night while still letting me watch TV shows and listen to music to sleep, so big energy win there, I imagine). I’m considering running my own Matrix chat, PixelFed(?), and maybe Mastodon and/or Lemmy on it for my local friends and family, but I’m still looking into solutions.

    Would like to get off “Discord for everything”, and I kind of miss what Facebook used to be in terms of just posting updates about my life and reading updates about other people’s lives.

    Not sure I trust Synology’s built-in Chat and other replacement options to be as good as I’d like, though, in terms of both quality and privacy/security. Happy to hear from folks with more experience!

  • jeansburger@beehaw.org · 1 year ago

    Currently running an R710 in RAID6 with 32TB usable, but between the data in Plex and backups of things in the rack, I’m low on space.

    I’m looking at getting 8 Odroid HC4s and some refurbished 20TB drives to build a GlusterFS cluster that will host all of my VM disks and backups. At least with that I’ll have 80–120TB depending on how much fault tolerance I configure. Because they have two HDD slots, I can double the storage when it runs low and just add more boards to expand the array when I’m tight on space again.

    • nodiet@feddit.de · 1 year ago

      I don’t have any experience with the Odroid HC4, but I used to have an N2, and while I am sympathetic towards Odroid, I can’t help but feel their software/firmware support is lacking. I always had issues with the GPU driver, and there was either a hardware or firmware fault in the USB controller which led to random access errors.

      • jeansburger@beehaw.org · 1 year ago

        Oh, I’m not going to use the trash OS Odroid supplies. I’m going to use Armbian, which is much more stable and has better support for the tooling I want to use.

        • OzoneThePirate@sopuli.xyz · 1 year ago

          Thank you for saying that. I’ve been struggling with my HC4 on the Odroid-supplied OS for a while and need to start fresh. Definitely going down this path this time. Cheers!

  • Swintoodles@beehaw.org · 1 year ago

    I built a massive overkill NAS with the intention of turning it into a full-blown home server. That fizzled out after a while (partially because the setup I went with didn’t have GPU power options on the server PSUs, and finagling an ATX PSU in there was too sketchy for me), so now it’s a power hog that just holds files. I just turn it on to use the files, then flip it back off to save on its ridiculous idle power costs.

    In hindsight I’d have gone with a lighter motherboard/CPU combo and kept the server-grade stuff for a separate unit. The NAS doesn’t need more than a beefy NIC and a SAS drive controller, and those are only x8 PCIe slots at most.

    Also, I use TrueNAS Scale; it’s more work to set up than Unraid, but the ZFS architecture seemed too good to ignore.

    • Parsnip8904@beehaw.org · 1 year ago

      A GPU isn’t really necessary for a home server unless you want to do lots of transcoding for clients. I have a power-hungry server that runs a VM offering Samba and NFS shares as well as a bunch of other VMs, LXC containers, and Docker containers, with a full *arr stack, Plex, Jellyfin, a JupyterLab instance, Pi-hole, and a bunch of other stuff.

      • Swintoodles@beehaw.org · 1 year ago

        I was trying to do some fancy stuff like GPU passthrough to make the ultimate all-in-one unit that could hold 2 or 3 GPUs and have several VMs running games independently, or at least the option to spin one up for a friend if they came over. I’m probably not quite sophisticated enough to pull that off anyway, and the use case was too uncommon to bother with after unga-bungaing a power distribution board after a hard day of work.

        • Parsnip8904@beehaw.org · 1 year ago

          Ah, now I get it. You’ll probably need an expensive PSU to make that work. I’m sure there would be some option, though, in the server segment for people building GPU clusters.

          • Swintoodles@beehaw.org · 1 year ago

            Yeah, I was trying to go all the way when I should have compartmentalized it a bit and just had two computers instead of one superbeast. The server PSUs aren’t super expensive, relatively speaking; 1U hot-swap 1200W PSUs with 94% efficiency are like $100. The problem was that the power distribution board I had didn’t have GPU power connectors, only CPU power connectors, and tired me wasn’t going to accept no for an answer and thus let out the magic smoke in it. I got lucky, and the distribution board seems to be the intended failure point in these things, so the expensive motherboard and components got by unscathed (I think; I never used the GPU, and it was just some cheap eBay thing). Still a fairly costly mistake that I should have avoided, but I was tired that night and wanted something to just work out.

            • Parsnip8904@beehaw.org · 1 year ago

              That’s quite interesting. I would have thought that they were more expensive than that. I’ve been there too. You’re doing a bunch of stuff, tired and just want it to somehow work. What have you been doing with the build after that, if you don’t mind me asking?

              • Swintoodles@beehaw.org · 1 year ago

                Was going to make it a central machine that could handle all the computing for several members of the family. Was hoping to get a basic laptop that could hook into the unit and play games/program on a virtual machine with graphics far above what the laptop could have handled on its own, plus the aforementioned spin-up of more machines for friends. Craft Computing had a lot of fun computing setups I wanted to learn from and emulate. I would have also had the standard suite of video services and general tomfoolery. Maybe dip into crypto mining with idle time later on. Lots of ideas that somewhat fizzled out.

                • Parsnip8904@beehaw.org · 1 year ago

                  That sounds really interesting. I have some VMs set up in a similar way for family members, though they’re very low-power. They’re mostly used to ease the transition from Windows to Linux. I hope you get to do it again sometime :)

  • blackstrat@lemmy.fwgx.uk · 1 year ago

    I have an Ubuntu VM running on my Proxmox server. It just exports some folders over NFS, which I mount from my laptops and PC. Then I have Nextcloud running in a separate VM so my phone can upload photos. The NC storage is all the NFS-mounted folders from the NAS. Simple, and it works.
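The NFS side of a setup like this is just a couple of export lines plus a mount on each client. A sketch, where the paths and subnet are assumptions:

```shell
# /etc/exports on the NAS VM -- paths and subnet are assumptions.
# /srv/photos     192.168.1.0/24(rw,sync,no_subtree_check)
# /srv/documents  192.168.1.0/24(rw,sync,no_subtree_check)

# Reload the export table after editing the file:
exportfs -ra

# On a client (laptop, PC, or the Nextcloud VM):
mount -t nfs nas-vm:/srv/photos /mnt/photos
```

Pointing Nextcloud’s data directory (or an External Storage entry) at those mounts gives the phone-upload workflow described above.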

  • SoftestVoid@beehaw.org · 1 year ago (edited)

    I’ve got an HP DL360 G9 running Ubuntu Server LTS and ZFS on Linux, with 8× 1.2TB 10k disks and an external enclosure (connected via external SAS) with 8× 2TB 3.5" SATA disks. The 1.2TB disks are in a ZFS RAID10-style array, which holds all our personal and shared documents, photos, etc. The 2TB disks are in a raidz2 and store larger files.
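For reference, ZFS builds a RAID10-style pool from striped mirror vdevs, and its double-parity (RAID6-equivalent) vdev type is called raidz2. The two pools described above might be created roughly like this; the pool names and device names are assumptions:

```shell
# RAID10-style pool: four mirrored pairs, striped together
# (pool and device names are assumptions).
zpool create fastpool \
    mirror sda sdb  mirror sdc sdd \
    mirror sde sdf  mirror sdg sdh

# Double-parity pool over the eight disks in the external
# enclosure: any two can fail without data loss.
zpool create bulkpool raidz2 sdi sdj sdk sdl sdm sdn sdo sdp
```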

    It uses a stupid amount of power, though (mainly the 10k disks), so it’s going to be replaced this year with something newer; not sure what that will look like yet.

  • pineapplelover@infosec.pub · 1 year ago (edited)

    I bought a 2-bay DS220+ with 2× 4TB drives. I’ve been happy with it so far. I’ve got Jellyfin on here and use Synology Photos and Drive to back up stuff. I also use AdGuard Home, which has been amazing and has blocked many weird Microsoft and Amazon pings. Yes, it’s proprietary, but when I was putting this together, it seemed to be a decent choice and had lots of support. As I get more experience, I will probably build my own NAS.