• 0 Posts
  • 202 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I agree it’s a case of “hate the game, not the player”. The issue is how much influence he could have to steer the market to favor his product over the competition. It’s happened so many times in history: the better product fails because its maker can’t play the game like the inferior company can.

    To quote “Pirates of Silicon Valley”:

    Steve Jobs: We’re better than you are! We have better stuff.

    Bill Gates: You don’t get it, Steve. That doesn’t matter!

    So is it fair to consumers for big companies to be able to influence the game itself rather than just play within the same rules? I’d say no.


  • Sam started this. The comparisons would have come up anyway, but it’s a lot harder to dismiss the claims from users when your CEO tweeted “her” right before the release. I myself don’t think the voice in the demos sounded exactly like her, just closer in seamlessness and attitude, which is itself a problem down the road for easily convinced users.

    AI companions will be both great and dangerous for people with issues. Wow, it’s another AI safety path that apparently no company is bothering to explore.







  • I have a laptop that’s suffered from that for a while now, so it’s not just one update but a trend. I’ve tried a number of things, from clearing space to even a manual download onto a USB drive to force it. It always reverts: churning away trying to complete the update, restarting, and then rolling it back. The irony is the laptop works fine until it’s time for it to check again, then repeat ad nauseam.


  • There are two dangers in the current race to get to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals, while concern for AI safety and alignment in case of success has taken a back seat (if it’s even considered anymore).

    Then there is number two: we don’t even have to succeed at AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it’s still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable even against other AI detection tools. We’re just getting to that point - the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text generation evaluators into thinking its stories were 100% human created.

    In short, we don’t need full general AI to end up with catastrophe; we’ll easily do it ourselves with the “lesser” ones. Which will really fuel things if AGI comes along and sees what we’ve done.





  • Rhaedas@kbin.social to linuxmemes@lemmy.world · Old Head · 19 points · 9 months ago

    That’s about the speed at which you can read text… it’s why pre-internet services like BBSes weren’t all flashy; you had to keep things loadable. Actual downloads you would plan overnight and hope you didn’t lose the connection. The first big breakthrough was resumable downloading that picked up where you left off. Huge.
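
    For anyone curious, the same resume-where-you-left-off idea survives today as HTTP range requests: check how many bytes are already on disk, then ask the server for the rest. A minimal sketch (the URL and file path are whatever you’re downloading; not any specific tool’s implementation):

    ```python
    import os
    import urllib.request

    def range_header(start: int) -> dict:
        """Build the HTTP Range header asking for bytes from `start` onward."""
        return {"Range": f"bytes={start}-"}

    def resume_download(url: str, dest: str) -> None:
        """Download url to dest, resuming from any partial file already on disk."""
        # If a partial file exists, request only the remaining bytes.
        start = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url, headers=range_header(start) if start else {})
        with urllib.request.urlopen(req) as resp, open(dest, "ab") as f:
            # Append chunks to the existing partial file.
            while chunk := resp.read(64 * 1024):
                f.write(chunk)
    ```

    Servers that honor the range reply with status 206 and just the missing tail; back in the modem days the equivalent magic was ZMODEM’s crash recovery.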