Microsoft has Copilot Plus PCs loaded with AI, and rumors are that Apple is all in on AI, too, but if you don't want AI in everything you do, there is another option: Linux.
People keep pointing the finger at AI but miss the fact that the problem is ~~corporate greed~~ capitalism. AI has the potential to help us solve problems; ~~corporate greed~~ capitalism will gatekeep the solutions and cause us suffering.
People keep pointing the finger at AI but miss the fact that the problem is corporate greed. AI has the potential to help us solve problems; corporate greed will gatekeep the solutions and cause us suffering.
Sure. But then, Linux may well be a solution against corporate greed.
Linux is a solution against corporate greed: it directly takes market share away from Microsoft and is a viable, competitive alternative with few drawbacks.
What drawbacks?
Photoshop is a birch to get running
Those trees always getting in the way…
How does Premiere Pro do?
Not sure, but DaVinci Resolve works
I recently watched this video where the guy says it doesn’t work, or rather the whole Adobe suite doesn’t.
They switched to Kdenlive and seem to be happy with it, but it sounds like it was a bit of a project to learn the new editor.
Mainly incompatibilities, manual setup requirements, and the need for a deeper understanding of the technology. Not necessarily Linux’s fault, but still drawbacks.
I want all the cool AI shit, but I want to be in charge of it 100%. I don’t want a data-mining company with an OS side project spying on me for profit.
Enshittification is the result of the user not being in control. Markets have a natural tendency to become dominated by a few companies (or even just one) whenever there are significant barriers to entry, and those barriers include things like network effects. Once those companies consolidate control over a large enough share of the market, they become less and less friendly and more and more extractive towards customers, simply because those customers no longer have any other options. That is what we now call enshittification.
At the same time, Linux (and most open-source software) is mainly about the owner being in control of their own stuff, rather than some corporate provider of software for your hardware, or of a hardware-plus-software “solution” (i.e. most modern electronics).
So we’re getting to see more and more Linux-based full solutions for taking control of one’s devices back from the corporations: not just Linux on the desktop to wrest control back from an increasingly anti-customer Microsoft, but also, for example, stuff like OpenELEC (for TV boxes) and OPNsense (for firewalls/routers).
LLMs in particular are unlikely to solve really any problems, much less a meaningful share of the problems they are currently being thrown at.
Tell that to the code I have it write and debug daily. I was skeptical at first, but it’s been a huge help for that, as well as for learning new (development) languages.
I do not agree with @FiniteBanjo@lemmy.today’s take. LLMs, as they are used today, at the very least reduce the number of steps required to consume previously documented information. So they are solving at least one problem, especially on today’s Internet, where one has to wade through a cruft of irrelevant paragraphs and annoying pop-ups to reach the actual nugget of information.
Having said that, since you have shared an anecdote, I would like to share a counter(?) anecdote.
Ever since our workplace allowed the use of LLM-based chatbots, I have never seen them actually help debug any undocumented error or non-standard environment/configuration. They have always hallucinated when I used them to debug such errors.
In fact, I am now so sceptical of the responses that I just avoid these chatbots entirely and debug errors the “old school” way, with traditional search engines.
Similarly, while using them to learn new programming languages or technologies, I always got incorrect responses to indirect questions, and I only learn that they have hallucinated after verifying the response through implementation. That makes the entire exercise futile.
I do try out the latest launches and improvements, as I know the responses will eventually get better. Most recently, I tried GPT-4o when it was announced. But I still don’t find these models useful for the purposes mentioned above.
That’s an interesting anecdote. Usually my code sorta works and I just have to debug it a little bit, and it’s way faster to get to a viable starting point than starting from scratch.
Oftentimes it doesn’t know my issue when I’m debugging, but sometimes it helps to find stupid mistakes.
I’d probably give it a 50% success rate, but I’ll take the help.
Seems like you agreed with everything I said, tho.
Mate, all it does is predict the next word or phrase. It doesn’t know what you’re trying to do, and it doesn’t have any ethics. When it fucks up, it’s going to be your fuckup, and since you relied on the bot rather than learning to do it yourself, you’re not going to be able to fix it.
I understand how it works, but that’s irrelevant if it does work as a tool in my toolkit. I’m also not relying on the LLM; I take it with a massive grain of salt. It usually gets most of the way there, and I have to fix issues or have it revise the code. For simple stuff that’d be busy work for me, it does pretty well.
It would be my fuck-up if it fucks up and I don’t catch it. I’m not putting code it writes directly into production; I’m not stupid.
I think they do help, but not nearly as dramatically as the companies earning money from them want us to think. It’s just a tool that helps, the same way a good IDE has helped in the past.
Oh absolutely, I agree with that comparison. That said, I’d take an IDE over AI 11 times out of 10.
I mean, if LLMs really make software engineering easier, we should also expect Linux apps to improve dramatically. But I’m not betting on it.
It’s not greed. It’s violence in disguise being allowed, plus centralization, impunity, and general corruption, all supported by various IP, patent, and “child protection” laws.
No separate component is necessary; it’s a redundant system built very slowly and carefully.
Referencing that quote about the blood of patriots, and another about the difference between journalism and public relations lying in outrage and offense, or the difference between a protest and a demonstration lying in openly breaking the rules.
EDIT: I meant that it’s a general tendency. But IT today is as important as the police station, post office, and telegraph were in 1917. One can also refer to that “means of production” controversy.
No need to thank me.
We don’t have capitalism in the US; we have late-stage crony capitalism. Regulated capitalism is fine, but we are in a crony-capitalist system that feeds corporate greed. Our government is controlled by a handful of megacorps that pull the strings through the lobbying system. It wasn’t always this way, which is why I don’t blame capitalism; I blame human greed.
So… capitalism.
Sooo… capitalism?
So just bog-standard capitalism, then?
The Soviets tried that and failed. The Chinese tried it too, and it turned into… bog-standard capitalism.
Nope, wrong. Entirely.
It’s always been crony capitalism. There is no other kind of capitalism - never has been.
It’s greed. Whether the regime is socialist, capitalist, communist, or something else, all it takes to destroy the system is for greedy people in power to force it open by buying judges and politicians. Capitalism is in no way a prerequisite.
There is no such thing as a “socialist” regime… not in the way we generally use the term regime, anyway. And the regimes that (falsely) attributed to themselves the characteristics of socialism never claimed to make a virtue out of human greed like our neoliberal ones do.
Are you trying to say that a disjointed and incoherent jumble of pretexts, justifications and outright lies masquerading as an ideology that specifically exists to justify said human greed will (somehow) be destroyed by human greed?
Looks to me like it’s working as designed… and not “destroyed” at all.
AI can’t solve problems. This should be abundantly clear by now from the number of laughable and even dangerous “solutions” it gives while stealing content, destroying privacy, and sucking up tons of power to do so. Just ban AI.
You really need to specify what you mean by “AI”. AI has been used in tons of applications for decades. Do you mean LLMs? Because not all AI is LLMs.
At this point there’s barely a difference in practical use. And both are the same amount of stupid, sucking up tons of power, destroying privacy, and stealing information. It’s all bullshit and should be banned.
How is this not something that is a common sense, widely accepted worldview? Are people this dense?
Because you clearly do not have any technical understanding of the field, of what machine learning even is, of how it can be useful, or of the dozens of different things that are also called AI.
Says the person with zero understanding of the terrible problems with AI. It’s just rampant, ignorant fanboyism at this point. Ban it.
Just take some time to look up the benefits of AI and what it is being used to solve. It’s easy to focus on how corporations are abusing the technology for profit, but it’s a bland, weak perspective to think that AI can’t solve problems.
It can’t even solve simple queries correctly half the time. Exactly what “benefits” can come from such a flawed system that steals its information, destroys privacy, and uses tons of resources?
Grow up and admit you’re fascinated by some sci-fi bullshit poorly implemented by garbage corporations.
Lie and lie again; you don’t even realize there are open-source LLMs. You keep yelling to ban it when nothing you write even matters.
Have you seen how many garbage query results people get? It’s completely ineffectual unless you just treat it like a plain, non-“AI” search engine query, in which case why bother wasting time with AI?
And do you realize how much power and time are needed to create a local LLM? The reason AI is generally implemented online is that it’s so incredibly complex and computationally heavy to do locally for any decent amount of data. So unless you’re a fucking pervert with 20 overpowered PCs, paying thousands a month in electricity to generate AI porn art, what’s the point?
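For context on the “open-source LLMs” point above: the part that needs server farms is training a model, not running one. Running a small open model locally for inference looks roughly like the sketch below, assuming Python with the Hugging Face `transformers` library and an illustrative small model; both are assumptions on my part, not something anyone in this thread mentioned.

```python
# Minimal sketch of local LLM inference (assumes `transformers` and `torch`
# are installed and a few GB of disk/RAM are free).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    # Illustrative small open model; the first run downloads roughly 2 GB of weights.
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Explain in one paragraph what a segmentation fault is."
result = generator(prompt, max_new_tokens=120, do_sample=False)

# The pipeline returns a list of dicts; the generated text echoes the prompt at the start.
print(result[0]["generated_text"])
```

Models in the tens of billions of parameters do start to want a beefy GPU (or heavy quantization), which is where the cost argument above has more bite.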
Ah yes. Let’s ban AI so the cartels take over the market for AI. What a great plan.
The living fuck are you on about? What cartels? What?
You people are so brainwashed by the AI bullshit being spouted by these rich corporations that you can’t see its huge problems.