• ThorrJo@lemmy.sdf.org

    Go with used & refurb business PCs right out of the gate instead of fucking around with SBCs like the Pi.

    Go with “1-liter” (aka Ultra Small Form Factor) machines right away instead of starting with SFF. (I don’t have a permanent residence at the moment, so this makes sense for me.)

    • constantokra@lemmy.one

      Ah, but now you have a stack of Pis to screw around with, separate from all the stuff you actually use.

  • stanleytweedle@lemmy.world

    Buy an actual NAS instead of a rat’s nest of USB hubs and drives. But now it works, so I’m too lazy and cheap to migrate it off.

  • chickenfingersub@discuss.tchncs.de

    Use actual NAS drives. Do not use shucked external drives; they are cheaper for a reason and not meant for 24/7 use. Though I guess they did get me through a couple of years, and hard drive prices seem to keep falling.

  • z3bra@lemmy.sdf.org

    I already have to do it every now and then, because I insisted on buying bare-metal servers (at Scaleway) rather than VMs. These things die very abruptly, and I learnt the hard way how important backups and config management systems are.

    If I had to redo EVERYTHING, I would use Terraform to provision servers (a rough sketch below) and go with a “backup, automate and deploy” approach. Documentation would be a plus, but with the config management I feel like I don’t need it anymore.
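
    Purely to illustrate that Terraform idea, a rough sketch using the Scaleway provider; the resource type, instance size and image label are from memory and meant as placeholders (check the provider docs), and credentials/zone are assumed to come from the SCW_* environment variables:

    ```hcl
    terraform {
      required_providers {
        scaleway = {
          source = "scaleway/scaleway"
        }
      }
    }

    # Illustrative only: a small cloud instance rather than bare metal;
    # the type and image labels are placeholders.
    resource "scaleway_instance_server" "home" {
      name  = "home-1"
      type  = "DEV1-S"
      image = "ubuntu_jammy"
    }
    ```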

    Also I’d encrypt all disks.

    • vegetaaaaaaa@lemmy.world

      > Also I’d encrypt all disks.

      What’s the point of that on a rented VPS? The provider can just dump the decryption key from RAM.

      > bare-metal servers (at Scaleway) rather than VMs. These things die very abruptly

      Had this happen to me with two Dedibox (Scaleway) servers over a few months (I had backups, no big deal, but annoying). WTF do they do with their machines to burn through them at this rate??

      • z3bra@lemmy.sdf.org

        I don’t know if they can “just” dump the key from RAM on a bare-metal server. Nevertheless, it covers my ass when they retire the server after I’ve used it.

        And yeah, I’ve had quite a few servers die on me (usually the hard drive). At this point I’m wondering if it isn’t planned obsolescence to force you into buying their new hardware every now and then. Regardless, I’m slowly moving off Scaleway, as their support is now mediocre in these cases and their cheapest servers don’t support console access anymore, which means you’re stuck using their distros.

        • vegetaaaaaaa@lemmy.world

          > I’d encrypt all disks. Nevertheless, it covers my ass when they retire the server after I’ve used it.

          Good point. How do you unlock the disk at boot time? dropbear-initramfs, entering the passphrase manually every time it boots? An unencrypted /boot/ with the decryption key stored there in plaintext?

          • z3bra@lemmy.sdf.org

            I run OpenBSD on all my servers, so I would be entering the passphrase manually at boot time. Saving the key on an unencrypted /boot is basically locking your door and leaving the key in it :)

  • misaloun@reddthat.com

    I always redo it lol, which is kind of a waste but I enjoy it.

    Maybe a related question is what I wish I could do if I had the time (which I will do eventually; some of it I plan to do very soon):

    • self-host WireGuard instead of using Tailscale
    • self-host an ACME-like setup for self-signed certificates for TLS and HTTPS (see the first sketch below)
    • self-host an encrypted Git server for private stuff
    • set up a file watcher on clients to sync my notes on save automatically using rsync, as in the second sketch below (yes, I know I can use Syncthing. Don’t wanna!)
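
    For the certificates item, a minimal sketch of the cert-minting piece only (none of the ACME-style automation), assuming Python with the third-party cryptography package; the hostname and file names are placeholders I made up:

    ```python
    # Sketch: mint a self-signed certificate for a LAN service.
    # Requires the third-party "cryptography" package; all names are placeholders.
    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    host = "myservice.home.lan"  # placeholder hostname

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, host)])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    # Write the key and certificate as PEM files for the service to load.
    with open("myservice.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("myservice.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    ```

    Point the service at the resulting .crt/.key pair and trust the certificate (or a small CA built the same way) on the clients.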
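
    And for the notes-sync item, a rough sketch of the on-save watcher, assuming the third-party watchdog package and an SSH-reachable host; the directory and host names are placeholders:

    ```python
    # Sketch: watch a notes directory and push it with rsync whenever a file changes.
    # Requires the third-party "watchdog" package; NOTES_DIR and REMOTE are placeholders.
    import subprocess
    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    NOTES_DIR = "/home/me/notes"
    REMOTE = "notes-server:/srv/notes/"  # any host reachable over SSH

    class SyncOnSave(FileSystemEventHandler):
        def on_modified(self, event):
            if event.is_directory:
                return
            # Push the whole tree; rsync only transfers what actually changed.
            subprocess.run(["rsync", "-az", "--delete", NOTES_DIR + "/", REMOTE], check=False)

    observer = Observer()
    observer.schedule(SyncOnSave(), NOTES_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
    ```

    In practice you would probably want to debounce the rsync calls, but that is the gist.
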
      • misaloun@reddthat.com

        I don’t think there are any significant downsides. I suppose you are dependent on their infrastructure and uptime: if they ever go down, or for any reason stop offering their services, then you’re out of luck. But yeah, that’s not significant.

        The reason I want to do this is that it gives me more control over the setup in case I ever want to customize it or the WireGuard config, and it also teaches me more in general, which will help me debug things better.

  • Showroom7561@lemmy.ca

    Instead of a 4-bay NAS, I would have gone with a 6-bay.

    You only realize just how expensive it is to expand your storage when you have to REPLACE HDDs rather than simply add more.

  • Anarch157a@lemmy.world

    I already did, a few months ago. My setup was a mess: everything was tacked onto the host OS, some stuff installed directly, other things as Docker containers, and the firewall was just a bunch of hand-written iptables rules…

    I got a newer motherboard and CPU to replace my ageing i5-2500K, so I decided to start from scratch.

    First order of business: Something to manage VMs and containers. Second: a decent firewall. Third: One app, one container.

    I ended up with:

    • Proxmox as VM and container manager
    • OPNSense as firewall. Server has 3 network cards (1 built-in, 2 on PCIe slots), the 2 add-ons are passed through to OPNSense, the built in is for managing Proxmox and for the containers .
    • A whole bunch of LXC containers running all sorts of stuff.

    Things look a lot more professional and clean, and it’s all much easier to manage.

      • Anarch157a@lemmy.world

        Can’t say anything about CUDA because I don’t have Nvidia cards nor do I work with AI stuff, but I was able to pass the built-in GPU on my Ryzen 2600G to the Jellyfin container so it could do hardware transcoding of videos.

        You need the drivers for the GPU installed on the host OS, then you link the devices under /dev into the container (a sketch of the relevant config is below). For AMD this is easy, because the drivers are open source and included in the distro (Proxmox is Debian-based); for Nvidia you’d have to deal with the proprietary stuff both on the host and in the containers.
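
        For reference, a rough sketch of what that device linking can look like in a Proxmox LXC config, assuming an AMD or Intel GPU exposed under /dev/dri; the container ID (101) is a placeholder and the exact lines may differ per setup:

        ```
        # /etc/pve/lxc/101.conf  (illustrative container ID)
        # Allow DRM character devices (major 226) and bind-mount /dev/dri into the container.
        lxc.cgroup2.devices.allow: c 226:* rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
        ```

        The render node under /dev/dri should then show up inside the container for VAAPI transcoding.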