- cross-posted to:
- hackernews@derp.foo
Thanks for introducing me to this creator, I ended up watching his video on crypto games too and it was really good. Definitely a new subscribe.
Since it seems like a lot of people weren’t able to watch the video, I got AI to write me a script to summarize YouTube transcripts; here is the result:
The AI Revolution is Rotten to the Core
Minecraft, a game well known for its open world, creativity, and unique graphics, has become a cultural phenomenon with millions of players worldwide. But behind the scenes, machine learning, a technology now popular across industries including gaming, brings its own set of challenges.
Neural networks, which are simplified models of brains, take in stimuli and produce outputs. However, the lack of explainability in AI poses a problem. Training involves comparing a network’s guesses against known, labeled images and adjusting its weights accordingly. AI can even generate media from text descriptions.
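(A toy aside, my own sketch and not anything from the video: the “compare the guess to the label, adjust the weights” loop described above looks roughly like this for a single linear neuron.)

```python
# Minimal illustration of the training loop the summary describes:
# guess, compare to the known answer, nudge the weights. Hypothetical
# example code, not anything from the video.

def train(examples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0  # weights start arbitrary
    for _ in range(epochs):
        for x, label in examples:
            guess = w * x + b      # the network's output for this input
            error = guess - label  # compare guess to the labeled answer
            w -= lr * error * x    # adjust weights to shrink the error
            b -= lr * error
    return w, b

# Learn y = 2x + 1 from a few labeled examples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)
```

Real networks run the same loop over millions of weights via automatic differentiation; the explainability problem the summary mentions comes from that scale, not from the loop itself.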
But the practices behind datasets and platforms like ImageNet and Amazon’s Mechanical Turk, which rely on questionable consent and low wages, come under scrutiny. Appen, a company that hires Venezuelan refugees for data-labeling work, pays irregularly but thrives as the demand for AI grows. Unity, another player in the gaming industry, embraces machine learning but lacks transparency.
AI-generated art, while impressive in its own right, falls short when it comes to originality and creativity. It relies heavily on existing images and lacks the ability to invent new concepts. AI can only mimic patterns from training data and does not possess the capacity to generate something entirely new.
Moreover, AI-generated images lack precise control, and prompt engineering influences the results. Legal issues arise when scraped datasets are used without proper consent. While AI may democratize art, it hinders creative expression and takes away from the authorial vision found in exceptional works. AI simply cannot replicate the intricate design and writing of true art.
OpenAI, a prominent AI company, has its own controversies. Worldcoin, a separate venture from OpenAI’s CEO, scans irises to identify individuals, while OpenAI profits from scraped training data for projects like DALL-E and GPT. With the legality of using copyrighted data still unsettled, OpenAI plans to continue data collection, which raises concerns about copyright protection for solo artists. Direct regulation may be a more effective way to govern AI art. It is also essential to address the biases in the models and datasets OpenAI uses, and more diverse datasets can help reduce those biases.
In the face of this AI revolution, it is crucial to unite against machine learning to safeguard workers from being replaced. The WGA and SAG strikes aim to protect their members’ careers from AI abuse. Organizing and striking for improved conditions is vital. The AI revolution brings with it rigid systems and arbitrary rules, emphasizing the need to set boundaries and prioritize life over games. OpenAI’s refusal to take action leaves the responsibility in our hands.
I also got it to pick out quotes, to have something in the author’s own words:
“It’s a pretty goddamn cool idea, but there are no miracles here.”
“We’re at a point where we need to choose between building a world for money to live in or a world for people to live in.”
“These people are invisible, poor and very easy to exploit.”
“The generative AI era has arrived.”
“Everything we make is ultimately going to be shaped by the world we live in.”
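(An aside on the script mentioned at the top of this comment: the commenter’s version presumably feeds the transcript to an LLM API. As a self-contained stand-in, here is a naive frequency-based extractive summarizer; the function names and scoring scheme are my own invention, purely for illustration.)

```python
# Hypothetical sketch of a transcript-summarizing script (not the
# commenter's actual code, which presumably calls an LLM API).
# Stand-in approach: score sentences by word frequency, keep the top few.

import re
from collections import Counter

def summarize(transcript: str, n_sentences: int = 2) -> str:
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

    # A sentence's score is the total frequency of its words.
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

text = ("Minecraft is a popular game. Machine learning is used in games. "
        "Machine learning raises labor and consent questions. "
        "The weather is nice.")
result = summarize(text, 2)
```

An LLM would produce an abstractive summary instead; this only picks out the sentences that look most central.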
Maybe I missed it, but what does Minecraft have to do with all of this?
“Minecraft is an experience that everypony should try at least once. Help. Help. Help.”
[Jim]: “ChatGPT wrote that video and decided what it would look like. AI is not magic. Sorry. The performance isn’t great and the writing is boring, but passable for a totally inoffensive, middle-of-the-road game summary.”
So basically they made a crappy AI Minecraft video and briefly ascribed its flaws to the limitations of AI in order to justify having a Minecraft-related thumbnail for enhanced clickbait.
He’s doing a Dan Olson-type thing where he starts with a seemingly irrelevant “cold open” and then ties it back in. It’s worth watching, as there is a payoff.
Whenever I see headlines like this I read “technological advancement is bad cause I don’t understand it”.
Whenever I see replies like this I read either
“I have no ability to think through, or no interest in considering, the ramifications or evaluating risks before evangelizing a piece of tech”
or
“I’m too selfish to consider anyone else but myself”
Or best of all
“I’m too lazy to watch the video and would rather be contrarian in the comments”
As a rule, I don’t watch or read things with clickbait headlines and this is clearly one of those.
I’m happy to take part in the discussion though - because it is an interesting (and important) debate.
I’ve watched/read plenty of articles on this topic. But if they can’t write a good headline, then I’m not going to encourage more of that by giving them an ad impression.
The problem is almost always with our society and not the technology itself. AI is fine, the way people are using it and the fact that people need jobs to exist in society isn’t.
Do you think that you can’t take a critical view of “technological advancement” without understanding it? I understand if you think the title is too clickbaity or something but it sounds a bit like you’re dismissing criticism about AI out of hand.
Technological advancement is cool. Widening the power divide between megacorporations and the general population and allowing rich assholes to have greater power and control over the average person is not cool, and unfortunately, that’s what technological advancement is doing.
People truly don’t comprehend the profound horror of a dystopia that people behind AI/proponents of AI/capitalists and grifters clinging to the next big exploitation tool/etc. are already causing. AGI isn’t a thing, and it’s not even a consideration. Skynet isn’t going to ruin everything. Greedy, toxic, soulless people are the problem, and AI is the tool they are wielding to do it.
AI training isn’t only for mega-corporations. We can already train our own open source models, so why should we erode our own rights and let people put up barriers that will keep out all but the ultra-wealthy?
I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.
You should also read this open letter by artists who have been using generative AI for years, some for decades. I’d like to hear your thoughts.
did you watch it?
Why watch something that is most likely worthless drivel based on the clickbait title being similar to other worthless drivel?
Why watch or read anything? Just have your prejudices and stick to them, that’s what I say. Trying to learn why other people think the way they do, especially when those people are actively trying to engage you, is a waste of time. Better to shit on their work while having zero knowledge of its content.
“Don’t judge a book by its cover”… except maybe when it’s part of a series of books with the same kind of cover.
If you haven’t read any, then by all means, go ahead and learn what that cover represents.
Yes, that’s exactly what I said. Just look at the cover then get to posting online reviews. Reading it is unnecessary so long as you are pretty sure you have an idea what it is maybe about probably I’ve seen similar stuff elsewhere after all. Because just a glance at the cover is enough to decide not only whether you are interested in reading it but whether you can go online and declare the content to be “worthless drivel”.
There is literally no difference between a meme post on a microblog and a 90 minute video essay. It’s all exactly the same and definitely worthless.
It’s not just that you should glance at the cover/title and decide whether or not you’re interested in it. No way. It’s that you should declare the quality of the entire work based solely on the title/cover.
You can skip the snark.
If you’re young, edgy, and you know it, then go watch every single 90-minute video essay with a clickbait cover you can find; it’s going to be highly educational.
Have fun.
You don’t have to watch every video. No one does. Look at the thumbnail, the title, and who’s recommending it, and decide if it interests you.
But don’t review shit you haven’t even pretended to attempt to consume. That’s fucking philistine shit.
Ok, go spend some hours watching Incel videos and let me know if it was time well spent.
I mean, how can you know it wasn’t worth your time if you didn’t watch it?
Why even comment at all if you didn’t watch it? Did you just come to the comments to shit talk people watching it?
I don’t want to watch that, same as you; neither of us wants to watch this video (it’s 90 minutes long, goddamn).
Difference is, you’re seeking the stuff out just to review it as garbage without even pretending to click on the link. You aren’t even adding your own opinion about the topic; literally just saying ANY content ANYONE produces that disagrees with your view is automatically garbage. You’re a total partisan on the issue and you see absolutely nothing wrong with that.
No, I am disagreeing with the idea that you cannot make assumptions based on similarities to other things and must view everything to have an opinion on it. I did not say the video is garbage, but that it is reasonable for someone who has seen similarly click bait titled articles that were garbage to assume this one is too.
You do realize the very first post in this thread was the poster saying they assumed, right? They admitted it was an assumption. You’re turning an assumption into a negative review.
I replied to you.
What do you think the authors of the video don’t understand? You must have some insights if you say you understand AI better than everyone criticizing it.
What do you think the authors of the video don’t understand?
-
Nuance. It’s clear they’re trying to turn a complex issue into a simple black and white one.
-
Futility. The “AI Revolution” is happening and nothing will stop it or meaningfully slow it down. If you’re worried about it (and everyone should be) then you need to think about how it can be made better, not how it can be stopped.
You must have some insights if you say you understand AI better than everyone criticizing it.
I’m not the person you replied to, but I do agree with them. I don’t claim to understand AI better than everyone who’s critical and I doubt the person you replied to would make that claim either.
Your argument of futility is truly horrifying.
AI must be stopped. We must fight it at every turn. To do otherwise is to willingly accept a horrific dystopian future
How are you going to convince ten billion people not to use AI? You can’t do it.
So no matter how awful and demonstrably harmful something is, we should just accept it because it’s “inevitable”?
Concluding that something is scary or dystopian doesn’t make it less true. There is literally no way that we can stop AI. It’s code that anyone can download and run.
So no matter how awful and demonstrably harmful something is, we should just accept it because it’s “inevitable”?
If and only if it’s inevitable, then yes.
-
Whenever I get worried about new technology I think of the Luddites; then I am less worried.
Why? They were right. The advent of mechanization in textile factories led to a profound weakening of labor and a steep decline in working conditions generally.
They were demanding worker protections both in terms of safety and livability and they didn’t get them. They were demanding fair wages and were correctly concerned that the machine operators would be so thoroughly subservient to the machine owners that they would never again have significant power over their own profession.
And again, they were right. That’s what happened.
All technological advancements have caused changes, many have made entire professions obsolete.
One could even be allowed to imagine that science itself ought to have put priests out of a job, yet that hasn’t happened either.
“AI” is a generic term that’s being thrown around a lot.
There’s a huge distance from today’s AI, which at its best is generative AI built on large language models, to actual general AI that is able to learn, understand, and adapt.
Sure, you can train a language model further, but that doesn’t make it “smarter” in any general sense.