• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • If Arch wanted to make things more stable, it would end up looking like Tumbleweed. If it wanted to be even more stable, it would end up looking like Debian. Arch wants to be at the level of bleeding edge that it is, and this is roughly what it looks like when you make that choice.

    That’s actually a fair point and reading this does change my perspective a little. Tumbleweed gets me 95% of the way to where Arch is, but a lot can go wrong in that last 5%. People who choose that understand it. I think we’re in agreement that those who genuinely need that last 5% of bleeding edge are a very small group. Back about 10 years ago I was a massive Gentoo fanboy, and I admit that Gentoo was my hobby rather than simply a tool to get work done. I suspect a lot of Arch users are using it for the hobby aspect rather than necessity too, which is fine; I’ve been there myself. I sometimes wonder if there is a certain type of person who just gets bored using something stable, and the constant threat/thrill of breakage gives them the drama they crave. I think that describes me fairly well in my Gentoo days.

    I still think Tumbleweed is the best compromise between “my grub blew up” and “my kernel is 2 years old”, especially when it comes to laptops and gaming. I’ve not really run into problems with a lack of software, but I do make good use of distrobox environments and flatpak. I’ll use OBS builds only when necessary, namely for Mullvad, which can’t be run sandboxed.


  • Thanks for the Tumbleweed shout out. I’m always curious about Arch people’s opinion of Tumbleweed. Arch seems to cast a large shadow over it. But man do I swear by Tumbleweed. There is nothing in Tumbleweed that you can’t do in Arch, but I guess my main question is why would you want to? TW has all the benefits of Arch without the problems. Rather than updating each package individually, TW bundles all the new versions into a snapshot and tests that snapshot to ensure everything works within it. This way no random rogue update conflicts with anything else within that specific snapshot. As a user, when you update you just move from snapshot to snapshot. With Arch you can set up snapper rollback, but you’d better make sure you’ve partitioned everything correctly or you’ll need to reinstall; TW just enables rollback by default.

    Some people can’t seem to live without AUR, but I feel like distrobox is a much safer way to install software that isn’t available on your distro. If you need something that only comes as a .deb, you can do something like:

    distrobox create --image ubuntu:<version>

    And now you have a super minimal version of Ubuntu you can run your software inside, using the official packages instead of something someone else has hacked together or compiled. It also makes setting up custom dev environments trivial without littering your install with dependencies. I get the allure of the AUR, but I’d rather use distrobox or, if I must, flatpak.
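    Day to day that looks roughly like this (the container name and .deb file are just examples, not anything from a specific setup):

    # jump into the container you created above (`distrobox list` shows its name)
    distrobox enter ubuntu
    # inside it, use Ubuntu's own tooling as normal
    sudo apt update
    sudo apt install ./some-package.deb
    # optionally export the app so it shows up in the host's menus
    distrobox-export --app some-application

    The nice part is that an exported app launches from your normal menu like any native package, but all of its dependencies stay inside the container.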

    The main defense I see of Arch is “it’s not Arch’s fault, I did <dumb thing>”. I guess with TW I don’t ever really worry about <dumb thing> because the OS really just sort of takes care of itself. And even if I did do a stupid <dumb thing>, rollback is there to reverse my boneheaded idea instantly. I say all this after having experimented with Arch for a little bit now. It felt like taking a vacation: everything was new and different and you start thinking about how cool it would be to live here, but then you start to notice the little things, and after a while you just want to go home and sleep in your own bed.

    I have nothing against Arch but the constant defense of “Arch broke, but it’s not Arch’s fault” seems like a meme. Just read this comment section and take a shot for every person who says it. Meanwhile I’m over here on TW running the same versions of everything Arch has, and all I’ve ever done is run “zypper dup” and, maybe 1-2 times a year, “snapper rollback”. I don’t know if I sound defensive, maybe I do, but I feel like Tumbleweed is criminally underrated, and judging by the forums/Reddit a large portion of people on Arch would probably be better served by something like it.


  • I think your experience has more to do with nvidia + Wayland than anything OS-specific. Other distros have done a lot of patching and coding around nvidia’s incompetence to get Wayland working better, and Arch doesn’t really do that sort of thing. Definitely seems like you unwittingly took on a project.

    I also use nvidia but I have no desire to move to Wayland any time soon. X11 works just fine unless you get into esoteric setups like multiple monitors with different refresh rates. My first boot into KDE with Arch was completely broken and I thought “okay, here comes the hard part” until I realized it was defaulting to Wayland. Changed it to X11 in sddm and it’s perfect. I use my ForceCompositionPipeline script on login and set kwin to force lowest latency and it’s smooth as butter.
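    For anyone curious, that login script is basically a one-liner along these lines (the MetaMode string is a generic single-monitor example, not my exact setup):

    # ask the driver to force the (full) composition pipeline on the current X11 MetaMode
    nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceCompositionPipeline = On, ForceFullCompositionPipeline = On }"

    It trades a tiny bit of latency for tear-free output, which is why I also set KWin to the lowest-latency option.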

    Wayland is the future but nvidia is definitely gatekeeping that future. I’ve got a 3080 in this machine that is going to last a pretty long time I suspect, but unless nvidia can manage to remove head from ass I see AMD in my future.


  • Ah, I see. That sounds like a completely fair scenario for using something a little more automated. Thanks for sharing.

    Arch seems fine and I’ll probably stay here for at least another few months, out of laziness if nothing else. If I’m not completely happy I’ll probably end up back on Tumbleweed, which is my usual daily driver, but I can’t say I’ve had any problems that would drive me back immediately.


  • I guess I used a whole lot of words to say what you just did in a few sentences. Thanks for summarizing my thoughts. Just out of curiosity though, why EndeavourOS? See, this is also something that tripped me up. I see quite a few Arch spinoffs that all claim to be easier versions, which naturally led me to believe Arch itself was complicated. Which, again, is probably a community/communication problem and has nothing to do with the OS itself.


  • The Gentoo install isn’t hard, it’s just very methodical. But it is a much more in-depth process than Arch, that’s for sure. Granted, these days Gentoo only does Stage 3 installs, which is half the system in a tarball anyway. The way people spoke about getting through the Arch install, I was expecting a step-by-step process like Gentoo’s. It’s really not.


  • That’s exactly how I installed it. The install media boots to a CLI. You partition your disks, install the boot loader, add a user, and then pacman does the rest. I didn’t really find this all that “hands-on”. Sure, it’s not the same as clicking Next in an installer, but none of it is very complicated at all. Don’t get me wrong, as someone else replied, being needlessly difficult is stupid. But when people say “advanced users only, DIY, etc.” I’m thinking of something like a Gentoo install. I was surprised how simple it was given all the hype and evangelizing that goes on around Arch. pacman is a good package manager, and the AUR seems interesting even if I don’t really need it. But you must admit the hype is a bit overboard.
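    For reference, the whole thing boils down to something like this (a rough sketch for a UEFI machine with GRUB; the device names and username are placeholders, not a copy of my actual install):

    # format and mount the partitions you created (device names are placeholders)
    mkfs.ext4 /dev/nvme0n1p2 && mount /dev/nvme0n1p2 /mnt
    mkfs.fat -F32 /dev/nvme0n1p1
    mkdir -p /mnt/boot && mount /dev/nvme0n1p1 /mnt/boot
    # let pacman pull in the base system
    pacstrap /mnt base linux linux-firmware grub efibootmgr networkmanager
    genfstab -U /mnt >> /mnt/etc/fstab
    # chroot in, install the boot loader, add a user
    arch-chroot /mnt
    grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
    grub-mkconfig -o /boot/grub/grub.cfg
    useradd -m -G wheel someuser && passwd someuser

    That’s more or less the entire “advanced” part; everything after that is just installing a DE with pacman.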


  • Yeah, I get that. I’m running it as we speak. I suppose my expectations were set more by the community than the distro itself. Arch users, by and large (and perhaps not you specifically), talk about Arch as if Jesus Christ himself built pacman. I didn’t find it hard to install, but as you say I’ve been using Linux for nearly 30 years and I know exactly what I want. I got caught up in the hype and the DIY aspect, I suppose, and people evangelized pretty hard to get me to try it. Maybe it’s people new to Linux using fdisk for the first time and thinking they did something cool? They talk about “getting through the install” like it’s some rite of passage.

    I think I probably still prefer Tumbleweed but I’m not going to bother changing again any time soon unless Arch gives me a reason to because it’s not worth the hassle. Arch and Tumbleweed are pretty similar but I think Tumbleweed has a few extra touches that I appreciate.

    Just to reiterate my position, I’m not saying anything is wrong with Arch but the hype is enormous and I’m not fully convinced it’s deserved. Something like NixOS on the other hand is starting to gain a lot of buzz and I think that’s warranted because it’s so radically different it deserves to be talked about. So far Nix is my “learning in a VM” distro.


  • I know you’re making a joke, but I was convinced recently to try out Arch. I’m running it right now. I was told it’s a DIY distro for advanced users and you really have to know what you’re doing, etc. etc. I had the system up and running in 20 minutes, and it took about an hour to copy my backup to /home and configure a few things. I copied the various pacman commands to a text file to use as a cheat sheet until muscle memory kicked in.

    …and that was it. What is so advanced about Arch? It’s literally the same as every other distro. “pacman -Syu” is no different from “zypper dup” in Tumbleweed. I don’t get the hype. I mean, it’s fine. I don’t have any overwhelming desire to use something else at the moment because it’s annoying to change distros. It’s working and everything is fine, as I would expect it to be. But people talk about Arch like it’s something to be proud of? I guess the relentless “arch btw” attitude made me think it would be something special.
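    The cheat sheet really is that short. Something like this covers most day-to-day use (firefox is just an example package):

    # full system upgrade
    sudo pacman -Syu            # Arch
    sudo zypper dup             # Tumbleweed
    # install / remove a package
    sudo pacman -S firefox      # Arch
    sudo zypper in firefox      # Tumbleweed
    sudo pacman -Rns firefox    # Arch
    sudo zypper rm firefox      # Tumbleweed
    # search the repos
    pacman -Ss firefox
    zypper se firefox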

    I guess the install is hard for some people? But you just create some partitions, install a boot loader, and then an automated system installs your DE. That’s DIY? You want DIY? Go install NixOS or Void, or hell, go OG with Slackware. Arch is way overrated. That doesn’t mean it’s bad, but it’s just Linux and it’s no different from anything else. KDE is KDE no matter who packages it.


  • I don’t know, man. Unless you’re running on ancient hardware, do a few gigs even really matter? I’ve got a 1 TB NVMe in my box and I’m using like 300 gigs of it, 200 gigs of which are two Steam games and a few different Proton versions. Surely the 2 gigs shown in that screenshot is almost meaningless on a modern system. I mean, you can get a 1 TB Samsung EVO for like 60 bucks on Amazon these days.


  • Simply referencing Christianity isn’t propaganda. Like it or not, it’s a touchstone most everyone can relate to, so it gets used in plenty of plots, from sci-fi to horror. Would you call Kevin Smith’s Dogma Christian propaganda because it references Christianity? Bruh. Actual God showed up in that one too. I don’t get upset when religion is referenced, so long as I’m not being beaten over the head with it. I mean, Marvel has a literal Norse god as a main character. Why would it be any different to have a Christian god as a main character?

    Anyway, even if you map this movie 1:1 with Christian tradition it has nothing good to say about it. That is the opposite of propaganda.


  • I think you’re missing the point. Oracle and SUSE have quite successful commercial offerings already; they don’t need to sell a RHEL clone as their core business. I don’t know why you think SUSE is unable to “create or maintain a Linux distribution”, they’re one of the oldest distro makers out there. SLES and SLED are extremely well regarded, and SUSE is doing further work and research on immutable server distros for the future. They can certainly “create a Linux distribution”. Oracle has a more mixed history, but by most measures they’ve been successful overall.

    No, what they’re actually doing is creating a clone for the downstream packagers so they aren’t suddenly cut off by Red Hat’s (IBM’s) decision. They’re trying to give the community back what was lost: a collaborative effort to mitigate the damage done by commercial interests. They’re not really doing anything other than restoring things to the way they were. Anyone who was using a distro downstream of RHEL wasn’t looking for enterprise-level support in the first place, so I don’t really understand your complaint there.

    I mean, really, the whole Linux ethos is community. These two companies coming together to give back what the community lost, for free, is what FOSS is all about. Somehow I feel like that has gone right over your head.


  • Sorry if I wasn’t clear about that. My essential thinking with the NAS was: Cloud is nice, but how vulnerable are you if the Cloud provider turns evil?

    With Apple and Google, you’re basically screwed and there is nothing you can do.

    With a NAS, you own the server. You don’t rent it. You own it. You can hold the thing that stores all your private data in your own two hands.

    So what if the data center I host my backups in turns evil? Well, then they find a bunch of encrypted blobs they can’t access while I move my backups to a different host. I’m not sure even the server hosting you’re talking about is as secure as that. What if they turn evil? How much access do they have to your data? All “evil” takes is a single policy change from a suit who has no idea about actual tech. It happens all the time.

    Maybe that comes off as paranoid, but with all the data breaches and enshittification happening lately I feel much more secure having my data literally in my own two hands, with a built-in defense against evil policy changes/government overreach for anything that must be hosted externally. Coupled with Tailscale for remote access, I believe this is about as secure as you can get.
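    The Tailscale part is genuinely trivial, by the way. Roughly (the “nas” hostname is just an example; Synology also has a Tailscale package, so the script below is only needed on a generic Linux box):

    # on the NAS (or any Linux machine standing in for one) and on each client device:
    curl -fsSL https://tailscale.com/install.sh | sh   # official install script
    sudo tailscale up                                  # log the node into your tailnet
    # then reach the NAS from anywhere over the tailnet, e.g.
    tailscale status                                   # list devices and their tailnet IPs
    ssh admin@nas                                      # assumes MagicDNS and a host named "nas"

    No port forwarding, no exposing the NAS to the open internet.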

    And again, Synology was my choice for ease of use, but you can build a capable NAS from an old OptiPlex on eBay for 200 bucks plus drives.


  • I don’t really understand your comment.

    PC breaks? House burns down? My data is encrypted in a datacenter. My account gets cancelled? My data is on my NAS.

    I don’t store much data on my PCs or devices at all. Any data that is there I treat as transient. The NAS acts as permanent storage. So if the devices die, I can quite literally restore them to the state they were in within hours of their death from the NAS. If my house is hit by a tornado and my NAS dies, my data is safely encrypted in an external location. I’ve lost nothing. If my NAS, devices, and Wasabi’s data center are all hit by tornadoes at the same time we have bigger problems to worry about. If that ridiculous scenario happened your server would not be immune either.

    I’m not seeing the advantage of your rented server vs having backups in the cloud. Is it because the server will keep running? But if you’ve lost your devices in a fire you still can’t access it whether it’s running or not. When you replace your device you can then connect to your server, but I can simply download my data again. HyperBackup Explorer is available for every platform and can do a full restore back to a NAS, or individual file downloads for anything else.


  • I felt this “prison” very strongly with iCloud. Don’t get me wrong, I think iCloud functions exceptionally well. It’s an extremely well-integrated cloud and works seamlessly with all Apple products. It’s just that after a while I started to realize how much of my life was sitting on Apple servers and what a dependency I had on Apple, hoping they were the good guy (narrator: they were not, in fact, the good guy) or at least not as bad as the next best option (I feel Google has legitimately become evil at this point). I was constantly reading about security and getting myself worried, etc.

    Finally I just bought a NAS. Synology is my current choice, but use whatever you prefer. A NAS can replicate anything the “cloud” can do; it’s faster, it’s safer, and it doesn’t rely on the good graces of any cloud provider. YOU hold the access to your data, as it should be. I still use the “cloud” for my backups, with HyperBackup sending encrypted backups to Wasabi, but that is a different matter. Even if Wasabi decided to be evil, my data is encrypted before it ever leaves the NAS, and Wasabi could never see my raw data the way Apple/Google can.
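    HyperBackup does the encryption through its GUI, but the same client-side-encrypted idea looks roughly like this with a generic tool such as restic pointed at a Wasabi bucket (bucket name, path, and credentials are placeholders, not my setup):

    # S3-compatible credentials for the bucket (placeholders)
    export AWS_ACCESS_KEY_ID="<wasabi-access-key>"
    export AWS_SECRET_ACCESS_KEY="<wasabi-secret-key>"
    export RESTIC_PASSWORD="<encryption-passphrase>"   # never leaves the NAS; Wasabi only sees ciphertext
    # initialise the encrypted repository once, then back up
    restic -r s3:s3.wasabisys.com/my-nas-backups init
    restic -r s3:s3.wasabisys.com/my-nas-backups backup /volume1/data

    The point is the same either way: the provider stores opaque blobs and the key stays home.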

    The only thing holding people back from this, I guess, is price. Apple charges $0.99/month for 50 gigs, while just the NAS itself with no drives will cost you several hundred. But man, not being worried about the latest cloud drama, government overreach, privacy scandals, etc. is worth every cent. A Synology NAS with Tailscale is just about the safest place to put my data. All the Synology mobile apps even pass the gf test for features and ease of use. I recommend a small 2-bay NAS to everyone I can.

    Turn off the cloud, and take your data back.


  • I see people say they turn off update notifications and just do it once a week, but man, if I open Discover and see 30 updates sitting there I cannot ignore it. I get real twitchy about it. So my update routine is daily. Every morning with my fresh cup of coffee I run “zypper dup”. If all goes well, I start my day. If all does not go well, I roll back to the previous state with snapper and then start my day. Using snapper takes about 30 seconds, and frankly nvidia is the only reason I can remember ever having had to use rollback.

    Tumbleweed is really painless to maintain, even if you update every day. You don’t have to update daily, but my particular brand of update OCD doesn’t allow me to wait a week, it seems.
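    Concretely, the routine and the escape hatch look like this (this is just the stock Tumbleweed snapshot workflow, nothing exotic):

    # the morning update
    sudo zypper dup
    # if the new snapshot misbehaves: reboot, pick an older read-only snapshot
    # from the bootable-snapshots entry in the GRUB menu, and from inside it run
    sudo snapper rollback      # promotes the booted snapshot to the new default
    reboot
    # `sudo snapper list` shows the pre/post snapshots if you want to see what changed

    That last bit is the 30 seconds I was talking about.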


  • The problem is that the blocking will have to go layers deep. If your instance has defederated from Meta but is federated with an instance that does federate with Meta, then Meta still has access to all your data through that mutual server. So people wouldn’t just have to defederate from Meta, they’d have to defederate from anyone who federates with Meta. If everyone isn’t on board with this, it’ll cause a huge fracture.

    Make no mistake: Meta wants to sell your data. They know all it takes is one server to federate with them and they’ve unlocked the entire fediverse to be harvested. I would not be shocked to see large amounts of cash flowing in exchange for federation rights.