Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 1.41K Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • It does, I wrote it in corrupted text for a reason, but if you want something functional you can use it, then see how it set things up for you, and still go set up the rest of the services yourself.

    When I switched to Arch, it used the Arch Install Framework, which predates even pacstrap, and I still learned a fair bit. The now-standard pacstrap really doesn’t hide how the bootstrapping works, which is really nice, especially for learning.

    Point is mostly that if OP is too terrified, they can test the waters with archinstall (ideally in a VM).


  • Max-P@lemmy.max-p.me to Linux@lemmy.world · Graduating from user to power user
    3 days ago

    I DONT want to build a system from the ground up, which I expect to be a common suggestion.

    Arch kind of is building from the ground up, but without all the compiling and stuff. It’s really not as hard as it sounds especially if you use a̶r̴c̷h̴i̵n̵s̴t̷a̶l̷l̵ and you do get the experience of learning how it all fits together through the great ArchWiki.

    That said, one can learn a lot even on Debian/Ubuntu/Pop!_OS. I graduated to Arch after I felt like apt was more in my way than convenient and kept breaking on me, so I was itching for a more reliable distro. But stuff like managing systemd services and messing with Wayland is definitely doable on a Debian/Ubuntu/Pop distro. Just use the terminal more, really, and it’ll come slowly through exposure.



  • I think we’re still deeply into the “shove it everywhere we can” hype era of AI, and it’ll eventually die down a bit, as it does with any major new technological leap. The same fears and thoughts were present when computers came along, then affordable home computers, and then affordable Internet access.

    AI can be useful if used correctly, but right now we’re trying to put it everywhere for rather dubious gains. I’ve seen coworkers mess with AI until it generates the right code for much longer than it would have taken to hand-write it.

    I’ve seen it used quite successfully in the tech support field, because an AI is perfectly good at asking the customer if they’ve tried turning it off and back on again, and making sure it’s plugged in. People would hate it on principle, I’m sure, but “the user can’t figure out something super basic” tickets are very common in tech support, and handling them automatically would let techs focus a lot of their time on actual problems. It’s actually smarter than many T1 techs I’ve worked with: at least the AI won’t send the Windows instructions to a Mac user and then accuse them of not wanting to try the troubleshooting steps (yes, I’ve actually seen that happen). But you’ll still need humans for anything that’s not a canned answer or known issue.

    One big problem is that when AI won’t work can be somewhat unpredictable, especially if you’re not yourself fairly knowledgeable about how the AIs actually work. So something you think would normally take you, say, 4 hours, and that you expect done in 2 with AI, might end up being an 8-hour task anyway. It’s the eternal layoff/hire cycle in tech: oh, we have React Native now, we can just have the web team do the mobile apps and fire the iOS and Android teams. And then they end up hiring another iOS and Android team, because it’s a pain in the ass to maintain and make work anyway and you still need the specialized knowledge.

    We’re still quite some ways out from being able to remove the human pilot in front. It’s easy to miss how much an experienced worker implicitly guides the AI in the right direction. “Rewrite this with the XYZ algorithm” still needs the human worker to have experience with that algorithm and enough knowledge to know it’s the better solution. Putting inexperienced people at the helm with AI works for a while, but eventually it’s gonna be a massive clusterfuck only the best will be able to undo. For now it’s still just going to be a useful tool to have.


  • It works so well that if you stretch a window across monitors with different refresh rates, it’ll vsync to all of them at once. I’m not sure whether it’ll VRR across multiple monitors at once, but it’s definitely possible. Fullscreen on a single monitor definitely VRRs properly.

    With my 60+144+60 setup and glxgears stretched across all of them, the framerate locks to somewhere between 215-235 as the monitors go in and out of sync with each other, and none of them show any skips or tears. Some games get a little confused if their timing logic is tied to frame rate, but triple-monitor Minecraft works great apart from the lack of FOV correction for the side monitors.

    This is compositor dependent but I think most of the big compositors these days have it figured out. I’m on the latest KDE release with KWin.








  • The website requests an image or whatever from 27748626267848298474.example.com, where the number is unique to the visitor. To load the content, the browser has to resolve the DNS for that name, and the randomness ensures it won’t be cached anywhere, since it exists just for you. So the browser queries its DNS server, which queries your DNS provider, which queries the website’s DNS server. From there, the website’s DNS server can see where the request came from, and the website can tell you where it came from and who it’s associated with, if known.
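    A minimal sketch of that unique-subdomain trick in Python (the `track.example.com` domain and the label format are hypothetical, just for illustration):

    ```python
    import uuid

    def tracking_hostname(base: str = "track.example.com") -> str:
        """Build a one-off hostname unique to this visitor/page load."""
        visitor_id = uuid.uuid4().hex  # 32 hex chars, effectively never repeats
        return f"{visitor_id}.{base}"

    a = tracking_hostname()
    b = tracking_hostname()
    # Every generated name is different, so no resolver can have it cached;
    # resolving it (e.g. socket.gethostbyname(a)) is guaranteed to reach the
    # site's authoritative DNS server, which then learns which resolver
    # (and roughly which network) the visitor is using.
    ```

    The actual resolution call is left out here since it would require a real domain serving these records; the point is just that the randomness forces a cache miss every time.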

    Yes it absolutely can be used for fingerprinting. Everything can be used for fingerprinting, and we refuse to fix it because “but who thinks of the ad companies???”.




  • It’s going to depend on how the access is set up. It could be set up such that the only way into that network is via that browser thing.

    You can always connect to yourself from the Windows machine and tunnel SSH over that, but it’s likely you’ll hit a firewall or possibly even a TLS MitM box.

    Virtual desktops like that are usually used for security, it would be way cheaper and easier to just VPN your workstation in. Everything about this feels like a regulated or certified secure environment like payment processing/bank/government stuff.


  • but I’m curious if it’s hitting the server, then going the router, only to be routed back to the same machine again. 10.0.0.3 is the same machine as 192.168.1.14

    No. When you talk to yourself, it doesn’t go out over the network. But you can always check using utilities like tracepath, traceroute and mtr; they’ll show you the exact path taken.

    Technically you could make the 172.18.0.0/16 subnet accessible directly to the VPS over WireGuard and skip the double DNAT on the game server’s side but that’s about it. The extra DNAT really won’t matter at that scale though.

    It’s possible to do without any connection tracking or NAT, but at the expense of significantly more complicated routing for the containers. I would only do that on a busy 10Gbit router, or if I really needed the public IP of the connecting client to not get mangled. The biggest downside of your setup is that the game server will see every player as coming from 192.168.1.14 or 172.18.0.1. With the subnet routed over WireGuard, it would instead appear to come from the VPN IP of the VPS (guessing 10.0.0.2). It’s possible to get the real IP forwarded, but then the routing needs to be adjusted so that the reply path doesn’t go Client -> VPS -> VPN -> Game Server -> Home router -> Client.
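    For reference, the routed-subnet variant is mostly a one-line change on the VPS’s WireGuard config. This is a sketch only, assuming the VPS is 10.0.0.2 and the home server 10.0.0.3 as guessed above; keys and interface details are elided:

    ```ini
    # /etc/wireguard/wg0.conf on the VPS (sketch, keys elided)
    [Interface]
    Address = 10.0.0.2/24
    PrivateKey = <vps-private-key>

    [Peer]
    # Home server: besides its VPN address, also route the Docker
    # container subnet through this peer. wg-quick installs the route
    # automatically, so the VPS can reach 172.18.x.x containers
    # directly and the double DNAT on the game server's side goes away.
    PublicKey = <home-public-key>
    AllowedIPs = 10.0.0.3/32, 172.18.0.0/16
    ```

    The home side would also need to allow forwarding from wg0 into the Docker bridge for this to work.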





  • The fediverse is plainly just not appropriate for this. ActivityPub makes too many assumptions that the data is fully public.

    End-to-end encryption: Encrypt all user communications, private messages, and sensitive data

    That could probably work; it’s a lot of work and will break interoperability, but it could be done. You’d still have to vet your users very well though, which might contradict the next point: it takes one user to leak everything.

    Anonymous accounts: Allow users to create accounts without requiring personally identifiable information (PII), such as email or phone numbers. How can we balance this with the need to combat spam?

    There’s a fair number of instances already that will let you sign up with a disposable email.

    Tor and VPN Integration: Ensure compatibility with privacy tools like Tor, and provide guidance on using VPNs.

    A fair chunk of instances already allow VPN/Tor traffic. The bigger ones don’t because of spam and CSAM and all that crap, but even Reddit is fully functional over a VPN.

    Remove or minimize data collection, including IP addresses, geolocation, and device information. No web server logs.

    That’d be very hard to enforce, and instance owners have to do some collection for the sake of being able to handle lawsuits and pass on the blame. But you can protect yourself using a VPN or Tor.

    Ephemeral content: auto-deleting posts, messages, etc after a set period.

    As an admin, I can literally just restore last month’s backup and undelete everything that got deleted. If someone’s seen it, you must assume it has at minimum been screenshotted.

    Instance chooser that flags which instances are in unsafe countries.

    Anyone can get a VPS in just about any country, so you’d have to personally verify the owner, which is PII and probably one of the most vulnerable parts of the group: take down the owner and you take down the whole thing.

    Once again, however, users already have plenty of choices for that, if you trust your instance’s admins.

    Defederate from instances in unsafe countries?

    Same as previous point. Plus, one can still use the API to fetch the content anyway.

    Better opsec around instance owners, admins and moderators

    Also pretty hard to enforce.