• 1 Post
  • 23 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • I’m using step-ca. It’s running on a dedicated SBC. ACME certs are created for each service and renew automatically every day; honestly this setup wouldn’t be worth it if it weren’t for the daily cert rotation. I’m not using wildcard certs with my own CA, as that’s bad practice and defeats the purpose. There are a bunch of different ACME renewal scripts/services: cert-manager handles Kubernetes services automatically, OPNsense has an ACME cert plugin, and Nginx Proxy Manager uses an external cert managed by a script. I’m validating certs over DNS using TSIG. step-ca has several integrations with different DNS services; I chose TSIG because it’s universal, and there is a Pi-hole integration if you use that. Buying a valid domain isn’t needed as long as you have internal DNS. You do need to install the root CA on every machine that will connect to the services; if you have many VMs, configuration management is the way to go.
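
    As a concrete sketch, renewal is just a normal ACME client pointed at the internal directory URL. The hostnames, provisioner name, and credentials path below are hypothetical, and this assumes the certbot-dns-rfc2136 plugin for TSIG-based DNS validation:

    # DNS-01 validation via TSIG (RFC 2136) against an internal step-ca ACME provisioner named "acme"
    certbot certonly \
      --server https://ca.home.internal/acme/acme/directory \
      --dns-rfc2136 --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
      -d service.home.internal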


  • LUKS full-disk encryption and encrypted offsite backups. This protects against the most common smash-and-grab scenario.

    I had issues where system upgrades would lose the encryption keys and a full restore from backup was my only option. Nextcloud has issues with encryption; some features are not available if you enable it (I don’t remember which ones now).

    Generally speaking, if someone has physical access to your system you’re screwed. There are many ways physical access can be used to get at your data, including denying you access to it.
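
    As a rough sketch of the two pieces (device name and repository path are hypothetical, adjust to your setup): keep a LUKS header backup somewhere safe so a botched upgrade can’t lock you out, and use a tool with built-in encryption such as restic for the offsite copies.

    # back up the LUKS header of the encrypted partition
    cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file /root/sda3-luks-header.img
    # encrypted offsite backups over SSH with restic
    restic -r sftp:backup@offsite.example.com:/srv/restic init
    restic -r sftp:backup@offsite.example.com:/srv/restic backup /srv/data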


  • Yeah, I looked at the tutorial. Port 81 is only for management (the NPM admin GUI). Then you have your traffic ports for proxied services, normally 80 and 443; you would need to expose those to the Internet if you want to reach NPM and the services it proxies. Port 81 should not be exposed on your public interface. Make sure it isn’t, or at least have a firewall rule that allows only the local network (ideally a management network/VLAN).
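
    A minimal sketch with ufw, assuming a hypothetical 192.168.10.0/24 management subnet (adapt it to whatever firewall actually sits in front of NPM):

    # admin GUI reachable only from the management subnet, blocked everywhere else
    ufw allow from 192.168.10.0/24 to any port 81 proto tcp
    ufw deny 81/tcp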


  • It’s not clear what the purpose of NPM is in your case. Do you want to serve the internal network or expose services to the Internet? If it’s the latter, you need to check which interface you exposed the NPM port on (it has to be your public network, the VPS IP), and your firewall needs to allow incoming connections on that port. Most likely you will be using port 443, and maybe 80 for the redirect (the always-use-TLS checkbox in NPM). Use the IP address first to eliminate DNS issues. Once the IP works, test DNS with nslookup/dig to see if it resolves to your IP.

    The openssl command needs to be executed from the VPS to eliminate network issues and just validate the certificate setup. The IP and port depend on which port you exposed; 127.0.0.1 should work from that context. Once you see the certificate, you can run the openssl command from your local machine and use the WireGuard tunnel IP to connect to the service. This is for the internal network.
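
    Concretely, the two checks might look like this (npm.example.com stands in for your real hostname):

    dig +short npm.example.com        # should print the VPS public IP
    openssl s_client -connect 127.0.0.1:443 -servername npm.example.com -showcerts </dev/null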


  • Can you elaborate more on what is not working? What are you testing to conclude it’s not working?

    From my understanding you’re running a VPS. You have a tunnel set up to connect to the server, and you’re trying to set up NPM with Let’s Encrypt certs validated via DNS.

    To continue troubleshooting you should eliminate all network paths and test from the VPS itself (SSH into the system). Once you have NPM set up, you should be able to test the certificate locally by connecting to the port NPM exposes.

    Assuming you exposed port 443:

    openssl s_client -connect 127.0.0.1:443 -showcerts

    If you can confirm that NPM is serving the endpoint with the correct certificate, you can move on to troubleshooting your network path.
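
    One caveat if NPM serves several hosts via SNI: add -servername so you get the certificate for the intended proxy host rather than the default one (your.proxied.host is a placeholder):

    openssl s_client -connect 127.0.0.1:443 -servername your.proxied.host -showcerts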





  • Yes! I have a split-DNS setup with Technitium using the Advanced Forwarding plugin. You can set a different upstream based on client IP or subnet, so you can send queries to the VPN’s DNS to prevent leaking.

    You can also run multiple Pi-holes (the poor man’s setup) and configure each one’s filtering for a dedicated VLAN, for instance stricter for the guest/kids network and looser for the adults’. AdGuard can do that without needing multiple instances, but then AdGuard can’t forward traffic based on origin IP. You can build any kind of logic and send different clients to different upstreams. As far as I know only BIND provides this functionality natively, through views, but it’s a more complicated setup and there’s no lovely GUI. You can always send all traffic through the tunnel, but then some results may not be ideal if you’re detected as being in a different country and content is served in another language; I think results will vary based on the VPN endpoint. You don’t need to tunnel DNS through the VPN if you use DNS over HTTPS, which is invisible to the ISP. The VPN is more of a use case when you want your exit IP and DNS queries to be consistent.
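
    For reference, the BIND views approach looks roughly like this; the subnets and upstream resolvers are made up, and a real config also needs your zones declared inside each view:

    // named.conf fragment: per-subnet forwarding, roughly per-client upstreams
    view "kids" {
        match-clients { 192.168.30.0/24; };
        forwarders { 1.1.1.3; };    // family-filtered upstream
        recursion yes;
    };
    view "default" {
        match-clients { any; };
        forwarders { 9.9.9.9; };
        recursion yes;
    };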


  • citizen@sh.itjust.works to Privacy@lemmy.ml · Pi-Hole vs AdGuard vs NextDNS

    Pi-hole is the most popular among self-hosters. It has a nice GUI, it’s capable, and it’s solid. It’s basic in terms of DNS features; you need to edit config files from the terminal to customize it, and even then it’s limited.

    AdGuard in my experience has more advanced blocking features. Its DNS side also allows a little more flexibility, like wildcard records, and you can have separate configs for different clients (like blocking for a guest/kids network).

    NextDNS is SaaS only. It has the most advanced blocking features, but a free account only gets you a limited number of queries per month. You can choose to keep your logs on specific servers or not keep them at all. From a privacy perspective it’s arguably worse because you have to trust another company, but it’s a good middle ground. Self-hosted still needs an upstream DNS, though that can be tunneled through a VPN to anonymize the traffic; NextDNS is itself an upstream DNS and can’t distinguish internal network sources.

    I would throw Zenarmor into the mix: https://www.zenarmor.com/. The paid home license costs $10/month and allows 3 different profiles. It is more advanced since it inspects all network packets, not only DNS, so it doesn’t replace DNS. It has great reports/dashboards.

    For the best DNS capabilities I would recommend Technitium: https://technitium.com/dns/. It’s free. You get a GUI, DNS blocking, and full DNS capability with some advanced plugins. Its dashboards aren’t as fancy as Pi-hole’s or AdGuard’s.

    You could use a combination of solutions, and NextDNS could be your upstream if you don’t mind paying them. If privacy is your thing, you want a more generic upstream that everyone uses, like Quad9.


  • If your goal is to improve security, you would have to look into end-to-end encryption. This means network traffic needs to be encrypted both between client and proxy and between proxy and service, and your volumes should also be encrypted. You didn’t elaborate on your Proxmox/network setup, so I will assume you have multiple Proxmox hosts and an external router, perhaps with a switch between them; traffic then flows between multiple devices. With a security mindset you assume the network can’t be trusted. You need a layered approach: separation of physical devices, VLANs, ACLs, separate network interfaces for management and services on their respective networks, and firewall rules on the router, Proxmox, and the VMs.

    Some solutions:

    • Separate network for VMs/CTs. Instead of using a routable IP that goes through your router, you can create a new bridge on a separate CIDR without specifying a gateway (rough sketch below). Add the bridge to every VM that needs connectivity and use the new bridge IPs for VM-to-VM traffic. Further, you can configure Proxmox nodes to communicate over a point-to-point ring network instead of going through a switch/router; this requires at least 2 dedicated NICs per Proxmox host. This separates the network but doesn’t encrypt anything.
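
    The gateway-less bridge is just a few lines in /etc/network/interfaces on the Proxmox host; the CIDR here is arbitrary:

    # isolated bridge: no gateway, no physical port attached
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0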

    Encryption:

    • You could run another proxy on the same VM as the service just to encrypt traffic, if the service doesn’t support TLS itself. Then have your main proxy connect to that proxy instead of the service directly, so unencrypted traffic never leaves the VM (see the nginx sketch after this list). A step up would be certificate validation; a step up from there would be an internal certificate authority that issues the certificates and whose CA cert is used for validation.
    • Another alternative is an overlay network between the proxy and the VMs. There are a bunch of options; HashiCorp Consul’s service mesh could be an interesting project, and there are more advanced projects combining zero-trust concepts, like Nebula.
    • If you start building advanced overlay networks, you may as well look at Kubernetes, as it streamlines deployment of both the services and the underlying infrastructure. You could deploy Calico with WireGuard-encrypted pod traffic. The setup does get complicated for a simple home lab.
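
    A minimal sketch of the sidecar-proxy idea from the first bullet above, with hypothetical ports and cert paths; the service binds only to loopback and nginx on the same VM terminates TLS:

    server {
        listen 8443 ssl;
        ssl_certificate     /etc/ssl/private/service.crt;
        ssl_certificate_key /etc/ssl/private/service.key;
        location / {
            proxy_pass http://127.0.0.1:8080;   # plaintext never leaves the VM
        }
    }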

    It all boils down to why you self-host. If it’s to learn new tech, then go all the way: experiment and fail often so you learn what works and what doesn’t. If you want to focus on reliability and simplicity, don’t overcomplicate things, or you will spend too much time troubleshooting and have your services unavailable. Many people run everything on a single node with plain Docker, using Docker networks to separate internal services from proxy traffic. Simplicity trumps everything if you can’t configure complex networks securely.


  • If you want to look into enterprise-grade equipment, I recommend Ruckus with the Unleashed firmware. You can get older models (R510, R610) on eBay for around $100-150 and then flash Unleashed downloaded from the official Ruckus website. The R610 requires more power, so if you do PoE you need a switch that supports 802.3at (30 W); the R510 is less power-hungry and suits most setups. You can do all sorts of network configurations with them: meshing with other Ruckus Unleashed APs is supported, as are a guest portal, VLANs, and client isolation. They are not the newest and don’t support Wi-Fi 6, but they are rock solid, support hundreds of clients, and perform well in high-congestion environments. There are newer models (R550, R650) but they are expensive. I used to recommend Ubiquiti equipment in the past, but it’s not the best for a privacy-focused deployment, and arguably the hardware of Ubiquiti APs is far inferior to enterprise gear like Ruckus or Aruba.


  • If the terrain is mostly flat and your antenna is somewhat elevated, you should be good. Having more friends in the neighborhood helps, since every radio relays messages in the mesh by default. The ideal setup would be a base station at home (terminal) plus a handheld device connected to your mobile phone, so 2 devices per person; this way it’s more reliable. If you have a ham license you can use a higher-power device like the Station G1: https://meshtastic.org/docs/hardware/devices/station-g1/ The only thing is that you can’t legally use encryption with higher-power transmission. In my experience the reliability of this setup varies. It works when users actively maintain and check the Meshtastic app; if you have less technical users, or users who just want things to work all the time, it may not be the best solution. I found that radios sometimes disconnect from Bluetooth, especially when charging: the radio works but the Bluetooth connection isn’t established, so a message you send is received and ACKed by the radio, but the person doesn’t see it until they open the Meshtastic app and re-establish the Bluetooth connection. Messages are visible on radios that have a display and are not in relay mode, which means you don’t know for sure whether a message was read. For day-to-day use in a normal urban setting I find it a little finicky and not reliable enough: you have to carry an additional device with you, and the antenna needs to be in a good position. Some radios have built-in antennas optimized for on-body carry. This is just my experience and it will vary by people and situation. There is also the Nano Explorer radio with a dedicated notification bell that could be useful: https://meshtastic.org/docs/hardware/devices/Nano Series/

    This solution works best when you don’t have cell network reception and all users actively check the status of their radios (charge, messages, connection to the app). It fits perfectly with recreational outdoor activities in remote areas. Search and rescue is very niche, and unless you engage in such activities regularly it’s not something you need. It can also serve as a backup solution for emergencies (neighborhood watch / prepping).

    If you’re also considering a Wi-Fi/intranet solution, I recommend looking into MikroTik Wireless Wire products. There is more equipment cost involved and it’s a completely different use case, since it’s a stationary solution. https://mikrotik.com/products/group/60-ghz-products


  • Meshtastic lets you send text messages, sensor metrics, and GPS coordinates to nodes in the mesh. It’s like a walkie-talkie on steroids. Meshtastic has 2 components: firmware and software. You flash the firmware onto a compatible device, then install the software on your mobile phone (Android or iOS); there is also a web UI that can be used on a PC. You connect to the radio using Bluetooth, Wi-Fi, or a USB cable (depending on the device; some don’t have Wi-Fi, but then they drain less power). Range varies vastly depending on many factors: just like with any radio, antenna quality and position are everything. In practice, if you have only 2 handheld devices carried by people, the range will be a few miles depending on terrain; adding another device improves both range and dependability. Meshtastic tries to send each message 3 times, and if it doesn’t get an ACK reply it marks the message as failed. You can set up a radio as a relay, and also have it store messages so that connecting nodes can still retrieve them even if they missed the original transmission. You can also set up a node to be administered remotely (sending configuration through another radio).

    Meshtastic supports AES-128 and AES-256 encryption. The weakness is that if any of the radios were ever compromised, an adversary could get the key from the device and decrypt future communications. 100+ mile range can be achieved with terrain like an elevated hill or mountain where there is line of sight; see the docs: https://meshtastic.org/docs/overview/range-tests I’ve read that some people use balloons to improve range during events. Both methods require a dedicated relay node. Currently there is a limit to how many nodes you can have in the mesh.


  • The nice thing about a VM with Nginx Proxy Manager (or plain nginx) running on the same host as the rest (or the majority) of your VMs is that internal traffic doesn’t traverse other devices. This only matters if your backend services are not configured with TLS, so you’re effectively terminating at the proxy and running unencrypted traffic to the backend. That said, the chance of a packet sniffer sitting on your internal network between the proxy and the destination VM is low.

    I’m in a similar situation to you: I run an overpowered router that barely sees any CPU usage.

    I tried the nginx OPNsense plugin, but it looks like the GUI doesn’t support proxying by host header (locations are path-based), and I don’t want to SSH in and mess with raw config files. So I’m running HAProxy on the OPNsense router; from the community forums, that’s what most people use. After going through a tutorial for one service, the configuration concept is pretty easy to grasp and replicate for other services. The one confusing bit is that backend pools and rules can each have backends configured, but only one is actually in use when you assign rules to a public service. The test-syntax button ensures you don’t make mistakes, and HAProxy has more powerful backend options than you will probably ever need. I moved the router management port to a higher number and set the proxy to listen on 443; a wildcard DNS entry points at the router, which lets me keep adding services as needed.
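
    The wildcard entry is just a catch-all record pointing at the router, e.g. in zone-file syntax with a made-up domain and router IP:

    *.home.example.com.   IN  A   192.168.1.1   ; HAProxy on the router answers for every service name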



    • Enable MFA on all accounts that support it
    • For important accounts, use a hardware key like a YubiKey
    • Ditch SMS MFA; use an authenticator app or a hardware key
    • Use custom email aliases (Proton has SimpleLogin): a separate email for every account, just like a password
    • Change your browsing habits from YouTube/Instagram/Twitter to privacy alternatives (there is a Firefox plugin, Privacy Redirect)
    • Use a separate VM for higher-risk browsing, or a separate computer (Tails)
    • Get a VoIP phone number and redirect your current phone to VoIP
    • Use a prepaid phone only for internet, never for calls or SMS. For the more paranoid: activate it away from home using a fake name (Mint Mobile, for instance, doesn’t check whether it’s real)
    • Use a phone that was never registered to your name (don’t reuse old phones)
    • Set up an always-on VPN on your home router with a kill switch so you never reveal your IP accidentally
    • Use a privacy-oriented DNS service

    If you’re into privacy, I recommend the book Extreme Privacy, which goes over many of these things. The lengths you go to to protect your privacy will depend on your threat model; privacy is expensive, unfortunately.


  • I think there are many levels at which to approach this problem. First, the obvious: investigate why your org’s DNS is having issues. This is an IT request; they should fix it. They should have an SLA on this critical service, and failure to fix it should escalate to management. There may be many reasons why the resolver is not working, especially in complex multi-site setups. This is the best option, as it solves this and probably other DNS-related issues.

    The rogue approach: on the other hand, if you only host the service for a handful of users you personally know, and you have the ability to edit your hosts file, you can bypass DNS completely. This isn’t ideal, as it has to be done on every system, and if your IP changes you will have to do it again. It also largely depends on your level of access to the system, i.e. whether you can even change the hosts file.
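
    A hosts-file entry is a single line per client (the name and IP below are placeholders):

    # /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
    203.0.113.10    myservice.corp.example.com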

    An alternative, crazier idea is to host your own DNS: change the DNS setting in your network configuration to point at it, and have it forward to your org’s DNS. Same problem as the hosts file: you will need to do that on every system that needs connectivity.

    Expanding on the own-DNS approach, you could go as far as hosting your own network: Wi-Fi, or a switch if you need wired Ethernet. You can buy used enterprise equipment cheap, plug it in, configure it to point to your own DNS, and anyone connected to your network gets your settings. Of course this is super shadow IT and I would discourage pursuing it.

    A less crazy and less rogue option is to use something like Tailscale (or similar), which comes with its own DNS (MagicDNS). You would need the agent installed on every client.


  • Here is my security point of view. A second instance would be too much overhead for just one use case of sharing a file. You have to decide how comfortable you are with exposing anything in your private network. I would personally not expose a Nextcloud instance, because it’s a complex application with many modules, each possibly having 0-day exploits. If your goal is to share a file and self-host, I would look into dedicated apps for that purpose. You can set up a simple MicroBin/PrivateBin on dedicated hardware in a DMZ network behind the firewall. You should run IDS/IPS on your open ports (pfSense/OPNsense have that, and it pairs nicely with CrowdSec). You could also look into Cloudflare Tunnels to expose your dedicated file-sharing app, but I would still use as much isolation as possible (ideally physical hardware) so that a breach can’t easily compromise your local network. Regardless, a self-hosted solution will always pose risks and management overhead if you want to run a tight setup; it’s much easier to use a public cloud solution. For example, Proton Drive is encrypted and you can share files with people via links.
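
    For the dedicated file-sharing app, a PrivateBin container is a small footprint to expose; the image name, port, and volume below are from memory, so check the project docs:

    docker run -d --name privatebin -p 8080:8080 -v privatebin-data:/srv/data privatebin/nginx-fpm-alpine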