IDK if there will ever be one, but I think this year or next year is it. Steam Deck seems to have really hit the mainstream, and Linux is overtaking macOS in some stats. GNOME Wayland also works well and has finally solved my variable refresh rate issues (one monitor @60Hz, one with FreeSync at higher refresh). That’s pretty amazing, and worth recognition!
I don’t think Linux will ever become #1 on desktop, nor do I think that’s the intention behind “the year of the Linux desktop.” Linux as a desktop platform is as or more viable than macOS for a majority of users, and it’s competitive with Windows for many if not most.
The only thing left for me is to see major software vendors natively support Linux. That means:
- AAA games support Linux directly - kinda happening with Steam Deck
- Adobe products - probably won’t happen, but I can dream
- Microsoft Office - I guess cloud counts, but I’d like to see desktop client support like exists on macOS
- another game store - EGS, GOG, Origin, or Xbox Game Pass should make a native Linux client
And so on. Once major software starts releasing on Linux, I think we’ve won.
GOG not having a native Linux client baffles me. Like, there’s this whole group of people who clearly care about software freedom, and a store built around selling DRM-free games is just going to ignore them? Oh well. At least we have Heroic.
Exactly, and last I checked, it was the most highly upvoted feature on their user voice.
If they made a native Linux client that worked well on Steam Deck, they’d get a ton of customers. In fact, I’d switch from Steam to GOG for most of my purchases.
Probably because GOG/CDPR don’t actually give a fuck about Linux. They made that perfectly clear with the whole “Witcher 3 coming to Linux” fiasco. Maybe I am just bitter, but I feel like even the DRM-free aspect of their business model isn’t through any values they hold. It is just a business decision to corner a niche market.
Hopefully by the time MS and Adobe port their cash cows to Linux, no one will think they need their closed stuff any more. Moving to an open platform but still running closed software loses some of the advantages of an open system.
I guess in an abstract sense sure, but not from a practical one. I really enjoy using Steam on Linux, and that absolutely isn’t open source, nor are any of the games I’ve launched with it. I’ve been on Linux for ~15 years now and used various proprietary software on it, and I really like the flexibility of having options.
If Linux is going to truly go mainstream, it needs to have those options. If I really want to run Adobe products (I don’t), that should work, ideally through something like Flathub so I can keep it separate from the rest of my system.
I don’t play games, so it’s a non-issue to me. Steam has probably been good for Linux adoption, and games being closed is a different issue than tools. The problem with closed stuff is that it requires frozen dependencies. One of the great things about an all-open system is that everything can be compiled against the same versions of its dependencies, so you have one copy of each in use. This saves disk and memory as well as being more secure, because that one copy can be the latest patched version. And you can read, fix, or add to anything you like.
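If you want to see that sharing for yourself, here’s a rough Python sketch that counts how many running processes have a given shared library mapped, by scanning /proc/*/maps. The library name is just an example, and processes you can’t inspect are skipped:

```python
#!/usr/bin/env python3
"""Rough sketch: count how many running processes map a given shared
library, by scanning /proc/*/maps on Linux. Processes that map the
same .so share its read-only pages in memory - one copy in use."""
import os

LIB = "libssl.so"  # substring to look for in mapped file paths (example choice)

users = set()
for pid in os.listdir("/proc"):
    if not pid.isdigit():
        continue
    try:
        with open(f"/proc/{pid}/maps") as maps:
            if any(LIB in line for line in maps):
                users.add(pid)
    except OSError:
        # Processes we aren't allowed to read, or that exited mid-scan.
        continue

print(f"{len(users)} running processes currently map {LIB}")
```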
I bet each Steam game has a complete copy of its dependencies. It is the easiest thing to do, though compared with all the art assets, that’s probably a drop in the ocean. Plus it is OK for a game to consume the whole machine. The problem with a closed box like that is that it can’t have everything. It has to interface with stuff, those interfaces will change, and one day the game will no longer work. As it is closed, the game can’t be updated to new interfaces, and compatibility layers may bring out bugs in the game itself. Then the game can be lost. Which is lost culture.
Closed tools are different. The problem is a power imbalance, which comes out as vendor lock-in and high prices. Plus, you get the same closed-box issues of duplicate, out-of-date dependencies. It isn’t acceptable for tools to use up lots of the machine when you use multiple of them. It’s part of the reason Windows is so bloated.
As long as I can keep a clean open system myself, I don’t care what others are doing. Them giving me their non-standard closed tool files pisses me off, but I either find a way to import it or refuse it until they send me a proper format.
Disk space and memory are relatively cheap, so I don’t see having duplicate libraries as particularly problematic, especially when most applications are dominated by assets (e.g. icons and other images).
My larger concern is security. There are two major ways of thinking here. One is the shared library model, where everything uses the same handful of libraries, so when a vulnerability is found, that handful of libraries can be updated and everything is secure again. On the other hand, every application that links to the vulnerable library is potentially exposed until that update lands, so it’s kind of like putting all your eggs in one basket.
The alternative solution is containerization. Basically, put all of an application’s dependencies in a container and restrict what access the application has to the outside world. This way, if the application has a vulnerability, the attacker has to also break out of the container, and there’s not as high of a chance that the rest of the applications have a similar vulnerability if they’re using different versions of the library.
I personally believe containerization is on net more secure than having all applications use the same version of a dependency, provided the interface between the container and the rest of the system can be limited.
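As a concrete sketch of that kind of restriction (not anyone’s official setup): here’s Python launching a program inside a minimal bubblewrap (bwrap) sandbox, assuming bwrap is installed. It shares the host’s /usr read-only, so the libraries stay current and aren’t duplicated, while cutting off the network and the rest of the filesystem; the exact binds a real program needs will vary:

```python
#!/usr/bin/env python3
"""Rough sketch: launch an untrusted program inside a bubblewrap (bwrap)
sandbox. The sandbox sees a read-only /usr from the host, its own /tmp,
and no network. Assumes bwrap is installed; the binds shown are only a
starting point - real programs usually need a few more."""
import subprocess
import sys

def run_sandboxed(argv):
    cmd = [
        "bwrap",
        "--ro-bind", "/usr", "/usr",   # share the host's (patched) libraries, read-only
        "--symlink", "usr/lib", "/lib",
        "--symlink", "usr/lib64", "/lib64",
        "--symlink", "usr/bin", "/bin",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--unshare-all",               # new namespaces: no network, no host PIDs, etc.
        "--die-with-parent",
    ] + argv
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # e.g. python3 sandbox.py /usr/bin/some-program  (hypothetical target)
    sys.exit(run_sandboxed(sys.argv[1:]))
```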
> game can be lost
This is also fixed with containerization. I can still play old Windows games through WINE because WINE essentially packages those dependencies. For Steam games, Steam ships a directory of libraries for Steam games to use, because many Steam games don’t ship every dependency.
Steam could take this a step further and run Steam games within a container to limit access to the rest of the system. You could also run Steam within a container and limit its access yourself.
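For the Flatpak build of Steam (com.valvesoftware.Steam on Flathub), flatpak override lets you tighten its access yourself. A rough sketch, assuming that Flatpak is installed; the ~/Games directory is just a hypothetical example of what you might choose to expose:

```python
#!/usr/bin/env python3
"""Rough sketch: tighten the sandbox of the Flatpak build of Steam
(com.valvesoftware.Steam) with `flatpak override`. Which permissions you
can safely drop depends on your setup; undo everything later with
`flatpak override --user --reset com.valvesoftware.Steam`."""
import subprocess

APP = "com.valvesoftware.Steam"

overrides = [
    "--nofilesystem=host",   # drop broad host filesystem access, if the app had it
    "--filesystem=~/Games",  # hypothetical: expose only a dedicated games directory
]

subprocess.run(["flatpak", "override", "--user", APP, *overrides], check=True)
print(f"Applied overrides; inspect them with: flatpak override --user --show {APP}")
```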
> tools to use up lots of the machine
But how much is “lots”?
My system uses well under 10GB for libraries, and many of those are only used by one or two applications. I could duplicate the entire system 3-4 times before I’d start to notice the disk usage. If I converted everything to Flatpak, I’m guessing I’d still be well under 50GB, which is my personal limit for considering disk usage a problem. Many packages use a similar base, so many would share the same base image layer.
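If you want to check those numbers on your own machine, here’s a rough sketch; the paths are common defaults and may differ per distro, and hard links (e.g. Flatpak’s ostree store) are only counted once:

```python
#!/usr/bin/env python3
"""Rough sketch: measure how much disk goes to shared libraries and to
Flatpak installs. The paths are typical defaults; ~/.local/share/flatpak
only exists if you have per-user installs."""
import os

def dir_size_gib(path):
    """Total size of regular files under path, in GiB (0 if missing)."""
    total, seen = 0, set()
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                st = os.lstat(os.path.join(root, name))
            except OSError:
                continue
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # hard-linked file (e.g. ostree dedup) counts once
            seen.add(key)
            total += st.st_size
    return total / (1024 ** 3)

for label, path in [
    ("system libraries (/usr/lib)", "/usr/lib"),
    ("system libraries (/usr/lib64)", "/usr/lib64"),
    ("Flatpak, system-wide (/var/lib/flatpak)", "/var/lib/flatpak"),
    ("Flatpak, per-user (~/.local/share/flatpak)",
     os.path.expanduser("~/.local/share/flatpak")),
]:
    print(f"{label}: {dir_size_gib(path):.1f} GiB")
```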
If your priority is security, you’d probably be better off containerizing everything and managing each app’s access to the rest of the system than trying to secure a handful of shared libraries. If your priority is preservation, again, containerizing is better because you can freeze dependencies as needed. If your priority is space, then you won’t want containerization. But given how cheap storage is, that doesn’t seem like it should be the target.
This debate is older than Docker/Flatpak/Snap/etc.
It’s basically the same as static linking everything vs dynamic linking. Or even the app folders of the RISC OS of my youth.
Disk and RAM usage does add up; see Windows and its WinSxS directory. Also, having all those old interfaces hanging around doesn’t scale. RISC OS taught me the value of centralized libs as compared to every app having its own (which was RISC OS’s norm).
WINE does an amazing job trying to match Windows “bug for bug” so stuff runs, but it’s a thankless task that doesn’t scale either. Unfortunately WINE gets the blame when stuff doesn’t run, not the concept of closed software, or the vendor of that software, or Microsoft for the platform design. I’m glad WINE exists (and its code is well written), but it’s fighting a war from a weak position.
I want up-to-date libs, and some containerization on things that are exposed. Containers don’t have to mean duplicate old libs. We can have the best of both. If it’s open.
If it’s closed, it all gets bloated and stale and crusty.
Taken to either extreme it becomes problematic. On one hand, if you try to dynamically link everything, you can end up with huge issues when you upgrade one of those common dependencies. On the other hand, if you statically link everything (or keep separate copies of dynamically linked libraries), you use a lot more disk and memory.
So the sensible thing is to take the middle road. Dynamically link all of your system packages, like your desktop environment, core utilities, etc., and containerize the rest of your apps. That way all of your riskier applications (closed source, or stuff with a big attack surface like a browser) can have a layer of security between them and the rest of the OS, and also get the separate set of libraries the vendor ships. You’ll pay a small penalty for duplicate libraries, but you should only have a handful of them.
I think every containerized application should have its own copy of its libs. You want these exposed applications to use whatever the vendor has vetted, and you want to make sure they’re only interacting with other containerized libs.
This has basically been solved for decades with package management. Dependencies are in a database and things are rebuilt accordingly.
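To make that concrete, here’s a rough sketch of querying that database on a Debian/Ubuntu-style system; the package name is just an example, and other distros have their own equivalents (e.g. pacman -Qi, dnf repoquery --whatrequires):

```python
#!/usr/bin/env python3
"""Rough sketch: ask the package manager's database which packages depend
on a given library package. Debian/Ubuntu shown (apt-cache rdepends);
'libssl3' is an example name that differs between releases and distros."""
import subprocess
import sys

lib = sys.argv[1] if len(sys.argv) > 1 else "libssl3"  # example package name

result = subprocess.run(
    ["apt-cache", "rdepends", lib],
    capture_output=True, text=True, check=True,
)

# Output starts with the package name and a "Reverse Depends:" header line.
rdeps = [line.strip() for line in result.stdout.splitlines()[2:] if line.strip()]
print(f"{len(rdeps)} packages declare a dependency on {lib}")
for name in rdeps[:10]:
    print(" ", name)
```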
Some things are already run in a container, without lib duplication. I agree that there is an argument for more network-facing things to be run like that.
This isn’t just static vs dynamic linking, it’s the whole app folder thing again.
Far too often this whole thing is just an excuse to avoid packaging properly. Instead they gift-wrap their environment of old libs. Closed stuff has no choice, so it always champions ways of shipping with old libs. It really pisses me off when open stuff does it so they can avoid the work of porting to the current version of, say, Python. When it’s Docker, it’s often most of some hacked old Debian/Ubuntu. It’s the exact opposite of “reproducible builds” and means that software will never make it into things like Buildroot or Yocto. Never mind Debian/etc. proper.
Closed source could document its system dependencies; it just can’t be rebuilt on demand to target a different set of libraries. So it’s usually easier to give it a separate set of libraries than to expect the system to accommodate it.
> porting to the current version of, say, Python
It kinda goes both ways. To properly work with package management, software needs to support the oldest and newest versions of a library found in popular distros. So for Python, that may have meant Python 2.x and 3.x around the launch of Python 3. Supporting both is possible; it’s just more work for the developers for what could be considered a pretty minor benefit.
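For a small, concrete example of what that extra work looks like today: Python 3.11+ ships a TOML parser in the standard library (tomllib), while older interpreters still found in stable distros need the third-party tomli backport. The feature-detect-then-fall-back pattern is what packagers hope upstreams do instead of pinning one environment:

```python
"""Rough sketch: support both the newest and older Python versions found
in popular distros by falling back to the tomli backport when the
stdlib tomllib (Python >= 3.11) isn't available."""
try:
    import tomllib  # standard library on Python >= 3.11
except ModuleNotFoundError:
    import tomli as tomllib  # third-party backport on older interpreters

def load_config(path):
    # tomllib/tomli require the file to be opened in binary mode.
    with open(path, "rb") as f:
        return tomllib.load(f)
```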
This isn’t just a Python or an interpreted language thing, it also happens for compiled shared libraries. If you need a feature from the latest libc, Debian can’t ship your package until it ships a new enough libc, but maybe it’ll ship with Ubuntu or Arch.
That said, most “system” packages are willing to go through this effort, so I expect things like KDE, git, the GNU utils, etc. to all use the same set of shared libraries. I think browsers are special enough that they should be containerized, if only because the attack surface is so large that you should use the exact libraries the vendor recommends instead of perhaps an older or newer one that happens to work.
Shipping your own dependencies should absolutely be the exception, not the rule, but I think it’s a good thing for some types of applications. Bloating an install from something like 30MB to 300MB is fine if it’s only for a handful of applications that tend to use a ton of resources anyway, like a browser, video game, or web service.