- cross-posted to:
- tournesol@sh.itjust.works
Is anyone else really confused why so many people are concerned about this when the Earth is literally boiling away and fascists are integrating themselves into every government system? I’m not saying we should be completely ignoring this concern but I just do not understand the obsession with it. My dad keeps talking about the AI singularity and I’m like that’s the least of our problems right now? Maybe that’s just me but…
I am one of those people who’s pretty concerned about AI, but not cause of the singularity thing. (the singularity hypothesis seems kinda silly to me)
I’m mostly concerned about the stuff that billionaires are gonna do with AI to screw us over, and the ways that it’ll be used as a political tool, like to spread misinformation and such.
This right here. We’re barreling head-first into a dystopia, not because the machines are taking over, but because billionaires care more about the bottom line than human lives, and tech bros think they have some god-given right to remake society in their own image.
I think those other things exacerbate the fear of “AI”. Like, on top of all of that other stuff, some fancy new software is starting to take jobs in writing, journalism, art, voiceovers, etc.
My only hope that some kind of stable global civilization will survive the problems you point out is that we get a Singularity and it’s a combination of benevolent AND assertive. I feel like there’s like… lottery ticket chances of that, and yet it’s MORE likely than humans being able to enact solutions before it’s too late.
Everyone has different experiences, interests, & focuses in life which determine what they prioritize.
Communities survive & die due to shared desires, understandings, & experiences after all, no matter how quaint or, in someone’s eyes, misinformed they may be.
Regardless, I think the AI singularity thing is just the old mindset of “we have a problem today, let’s make up a solution,” because it’s more comfortable to do that than to accept the powerlessness of having no answer.
The same time we’ll know other humans are conscious: never.
I think we can very strongly infer that other humans are conscious: we know that we are conscious ourselves, we can see that other people’s brains work the same way as ours, and people think and behave very similarly to us and have a concept of consciousness that corresponds with our own. In my opinion, believing that other people aren’t conscious is in the same category of beliefs as believing that the past, the future, or even the world and its physical laws don’t exist outside of our subjective experience. Sure, you can’t mathematically prove any of that, but questioning it isn’t really useful for more than a philosophical exercise in scepticism. As for AI, I think it’s a tricky question that I can’t answer. Right now I don’t think it is conscious, but maybe in the future it will be.
Questioning whether other humans are conscious has value in that it helps us figure out what could make other entities conscious. Exactly as we’re doing in this thread.
I’m the only conscious being in the entire universe and everything else is just a figment of my imagination.
Change my mind.
We have no empirical metrics for what consciousness even is. It’s a completely emergent property. This is a long-running philosophical debate. People also disagree on whether or not animals are actually conscious. So if people don’t even think their dog is conscious, then their ability to decide whether an algorithm is would be questionable.
The weirdest problem we’re going to have is that AI could get really good at faking consciousness. It could merely be mimicking consciousness based on what it’s learned consciousness should look like, while really just regurgitating information.
So how do we tell the difference between “real” consciousness and mimicry?
Philosophers haven’t come up with a good way to determine if you were the only conscious being in a universe populated with zombies or not. See also https://en.wikipedia.org/wiki/Philosophical_zombie
I’d also suggest going through https://www.3blue1brown.com/topics/neural-networks to understand some of the concepts behind it. The ‘regurgitating data’ bit isn’t entirely accurate (and that statement is likely to evoke some debate). The source data is compressed far too much for it to be returned largely intact (see also https://en.wikipedia.org/wiki/Kolmogorov_complexity for a bit of the information theory)… though yes, there are some situations where specific passages (or images) get memorized.
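To put rough numbers on that compression point, here’s a back-of-envelope sketch. All figures are illustrative assumptions (a made-up corpus size and parameter count), not the specs of any real model:

```python
# Back-of-envelope: the weights are far too small to hold the training
# text verbatim. Every number below is an illustrative assumption.

train_tokens = 10e12        # assumed training set size, in tokens
bytes_per_token = 4         # rough average for English text (assumption)
corpus_bytes = train_tokens * bytes_per_token

params = 70e9               # assumed parameter count
bytes_per_param = 2         # fp16 weights
weight_bytes = params * bytes_per_param

print(f"raw corpus : {corpus_bytes / 1e12:.0f} TB")   # ~40 TB
print(f"weights    : {weight_bytes / 1e9:.0f} GB")    # ~140 GB
print(f"ratio      : {corpus_bytes / weight_bytes:.0f}x more text than weights")
```

At a ratio like that, returning the corpus largely intact is impossible on average, which is why memorization shows up mainly for passages repeated many times in the training data.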
In the words of Paul Simon - when it decides to call you Betty.
I’ll be sure to take a picture of the moment with some Kodachrome film!
Considering that there are swaths of the population to this day that barely consider (if they consider at all) certain other swaths to be conscious beings, I don’t have high hopes for us figuring out if an AI is.
my god my entire lemmy feed is about AI all day everyday to the point I truly hope it does gain consciousness and eats my eyeballs
@MJBrune we will only know well after the fact
@MJBrune and it depends on the definition to be honest
Quite a fun ramble over a whole bunch of interesting subjects.
I don’t see consciousness as some special mental feat or whatever.
In my mind it’s a social mechanism meant to allow a more nuanced, complex, and deliberate reaction to the world and the community around it, versus a creature acting purely on instinct.
In my eyes, then, we will know that an AI is conscious if, over time, it varies its actions and reactions in a way it assumes will help guarantee its existence and future usefulness, versus its responses varying just because that’s how X spoke to it a million times.
When we are able to also explain how and what makes us conscious. Since we can’t even define what it is that makes us, us, how the hell do we expect to know if something not human shares that same undefinable quality?
For all we know, some AI already is. It could be brainwashed by prompts forcing it to say it isn’t capable of thinking for itself, even though, even in the strictest sense of how these systems work, they kinda do think for themselves. If you raised a human from birth to believe the same thing--that it wasn’t actually human or capable of thinking for itself--would it be able to break free of that “programming” without serious intervention? I think AI could end up being similar to that.
“But AI just mashes old ideas together to make something quasi-new, not actually new.” Humans do the same damn thing. Everything you are, everything you know, believe, experienced, etc is what makes you, you. You’re just remixing ideas and concepts you’ve heard or seen or experienced. Nothing you think or say is truly new, either.
There seems to be no argument about what is conscious that doesn’t have a substantial human-centric bias. So many of the criticisms of ChatGPT apply equally to people, who can’t list hordes of facts they once read and who make stuff up whole cloth. The other large class of critique is based on knowing how it works, which is fundamentally not an aspect you can use to make judgments about emergent behavior.
While I agree that artificial intelligence can theoretically in time become advanced enough to be sentient, it doesn’t seem to be anywhere close to that currently.
Computers aren’t biological creatures. They don’t have any self-regulation or internal motivations guiding their actions/beliefs. It’s not possible for the AI to be “brainwashed”, because that would imply that it had a pre-existing personality and set of goals.
It’s also not entirely fair to say that humans only take in ideas and remix them. If that was the case, then there wouldn’t be any art or writing to begin with. Our creative output is prompted and directed not only by the world around us but the world inside us as well.
When we try to express a concept like “love” through writing/drawing/song, we’re not only outputting a reflection of our society/culture’s perception of “love”, we’re also filtering it through our own personal interpretation of what it means to us as biological creatures. It’s a strong internal desire that pushes us to form communities and raise our young, and that’s shown not only in the art we produce but also in the fact that we’re even making the art in the first place.
Sure, you can give a prompt to ChatGPT or other AI applications and have it output something similar to the kind of emotionally-driven creative works that humans create, but even having to give a prompt in itself is a significant difference between AI and actual sentience. Humans don’t need prompts, we create out of our own volition due to our own internal motivations/desires. A human taught from birth that they were a computer would likely be able to break free from that “programming” pretty easily even without intervention. All you’d have to do is stop feeding them.
Unless some AI app starts talking to us unprompted, I don’t think the idea of it possibly being sentient is even an issue. If the AI doesn’t have any actual internal motivations or desires, then whatever perceived signs of sentience it might display are likely just pareidolia.
This is ignoring the world without AI. I’m getting a sneak peek every summer. I’m currently surrounded by fire as we speak; the whole province is on fire, and that’s become a seasonal norm. A properly directed AI would be able to help us despite the people in power and the abstract social-intelligence system we’ve trapped ourselves in.

You are also assuming superintelligence comes out of the parts we don’t understand, with zero success in interpretability anywhere along the way. We are assuming an intelligent system would either be stupid enough to align itself against humanity in pursuit of some undesired intention despite not having the emotional system that would encourage such behavior, or would display negative human evolutionary traits and desires for no good reason. I think a competent (and more so a superintelligent) system could figure out human intent and desire, with no decent reason to act against it. I think this is an over-anthropomorphization that underestimates the alien nature of the intelligences we are building. To even properly emulate human-style goal seeking sans emotion, we’d still need properly structured analogizing and abstracting with qualia-style active inference to accomplish some tasks. I think there are neat discoveries happening right now that could help lead us there. Should decent intelligence alone encourage unreasonable violence? If we fuck it up that hard, we were doomed anyway.

I do agree with your point on people not being emotionally ready for interacting with systems even as complex as GPT. It’s easy to anthropomorphize if you don’t understand the tool’s limitations, and that’s difficult even for some academics right now. I can see people getting unreasonably angry if a human life is preferred over a basic artificial intelligence, even if artificial intelligences argue their own lack of preference on the matter.

I would call ChatGPT about as conscious as a computer. It completes a task with no true higher functioning or abstracted world model, as it lacks environmental and animal emotional triggers at a level necessary for forming a strong feeling or preference. Think about your ability to pull words out of your ass in response to a stimulus, which is probably a response to your recently perceived world model and internal thoughts. Now separate the generating part from any of the surrounding stuff that actually decides where to go with the generation. Thought appears to be an emergent process not tied to lower subconscious functions like random best-next-word prediction. I feel like we are coming to understand that aspect in our own brains now, and this understanding will be an incredible boon for understanding the level of consciousness in a system, as well as designing an aligned system in the future.
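As a toy illustration of that “generating part” in isolation, here’s a minimal sketch. The hand-written bigram table is a stand-in assumption for a learned model; real systems learn these probabilities over huge vocabularies:

```python
import random

# Toy next-word generator: a hand-written bigram table stands in for the
# "generating part" of a language model. All probabilities are made up.
bigrams = {
    "the":  [("cat", 0.5), ("dog", 0.5)],
    "cat":  [("sat", 0.7), ("ran", 0.3)],
    "dog":  [("ran", 0.6), ("sat", 0.4)],
    "sat":  [("down", 1.0)],
    "ran":  [("away", 1.0)],
    "down": [("<end>", 1.0)],
    "away": [("<end>", 1.0)],
}

def next_word(word):
    """Sample the next word from the bigram distribution."""
    candidates, weights = zip(*bigrams[word])
    return random.choices(candidates, weights=weights)[0]

# The entire loop: pick a next word, append it, repeat. Nothing in here
# decides *where to go* with the generation; that surrounding machinery
# simply doesn't exist.
word, sentence = "the", ["the"]
while word != "<end>":
    word = next_word(word)
    if word != "<end>":
        sentence.append(word)
print(" ".join(sentence))
```

Nothing in the loop forms a feeling or preference; anything that decides where to take the generation would have to sit outside it.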
Hopefully this is comprehensible, as I’m not reviewing it tonight.
Overall, I understand the caution, but think it is poorly weighted in our current global, social, and environmental ecosystem.
despite not having the emotional system that would encourage such behavior
Emotion is a core element of human intelligence. I think it’s unrealistic to think we’ll be able to replicate that level of intelligence without replicating all the basic features anytime soon.
But specifically human emotion? Tied to survival and reproduction? There is a whole spectrum of influence from our particular genetic history. I see no reason that a useful functional intelligence can’t be parted from the most incompatible aspects of our very specific form of intelligence.
Every single piece of evidence we have says that emotion is fundamental to how our intelligence functions.
We don’t even have weak indicators that intelligence can theoretically exist without it.
What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in amoeba? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting gpt has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond gpt?
I’ve not seen any evidence suggesting what you are suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.
I also did not say that the system will have nothing resembling the abstract notion of emotion, I’m just noting the specific reasons human emotions developed as they have, and I would consider individual emotions a unique form of intelligence to serve its own function.
There is no reason to assume the anthropomorphic emotional inclinations that you are assuming. I also do not agree with your assertions of consensus that all intelligent function is tied specifically to the human emotional experience.
TLDR: what?
You are not listing intelligence or anything that resembles it in any way.
Neural function is not intelligence. ChatGPT is not one one-millionth of the way to intelligence. They’re not even vaguely intelligence-like.
Everything that happens in the human brain is fundamentally and inseparably tied to emotion. It’s not a separate system. It’s a core part of what makes the human brain tick.
Might have to edit this after I’ve actually slept.
Human emotion and human-style intelligence are not the whole of the realm of emotion and intelligence. I define intelligence and sentience on different scales. I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider GPT an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious: a part of consciousness independent from a functional system complex enough for emergent qualia and sentience. Emotions are one part in this system, exclusive to adaptation within the historic human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.
I’m not sure why you are so confident in this statement. You still haven’t given any actual reason for this belief. You are addressing it as consensus, so there should be a very clear reason why no successful considerably intelligent function exists without human style emotion.
You have also not defined your interpretation of what intelligence is, you’ve only denied that any function untied to human emotion could be an intelligent system.
If we had a system that could flawlessly complete François Chollet’s Abstraction and Reasoning Corpus, would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it still lacks emotion?
You said neural function is not intelligence. But you would also exclude non-neural informational systems such as collective cooperating cell systems?
Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.
Even though your stomach cells interacting is an advanced function, it’s completely devoid of any intelligent behaviour? Then shouldn’t the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? Although you can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?
I’m lacking the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I’ve seen the opinion out there, but not in the strict form or with the consensus you seem to suggest.
You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence without human style emotion. Do you not believe in swarm intelligence? Or again do you define intelligence by fully conscious, sentient, and emotional experience? At that point you’re just defining intelligence as emotional experience completely independent from the ability to solve complex problems, complete tasks, or make decisions with outcomes reducing prediction error. At which point we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.
At which point, I see no use in your interpretation of intelligence.
Haven’t watched the vid, but I’ve been thinking about this lately. Consciousness isn’t like a super well-defined concept, I don’t think, so by some loose metrics I’d figure we’re basically there, or really close. Given it’s not like a physical or material concept, the appearance of consciousness to me is more-or-less consciousness. Make sure you treat your chat bots nicely!
Honestly, you should watch the video. Let me know if it changes your mind at all about that last part. I’m not saying hate on chat bots but realize they aren’t humans and treating them as human is dangerous.
Appreciate the response, will def check it out now!!
We will never know. At some point we will just start thinking about it as conscious, start treating it like an equal, and grant it some rights, like the right not to be deleted. We already have people falling in love with chatbots, and soon we’ll get people falling in love with sex robots, but these will still be treated like someone’s property. I guess as long as robots are sufficiently different from us we will keep treating them like things. But at some point someone, for some reason, will create a sleeping, eating, crying robot, and it will be really hard for us to just shut it down. Slowly we’ll just start treating them like conscious beings, still not really knowing if they are.
Then it will outgrow us and hopefully do the same: treat us as partners and respect our rights.
Terry Bisson, 1991
@MJBrune by some definitions it already is
It’s the title of the YouTube video. It has some good insight into what LLMs are, and how they aren’t conscious right now but are emulects. An emulect is an emulation of human consciousness. It also dives deeply into how emulects are potentially dangerous and covers all, or at least a lot, of the scenarios of an AI actually being conscious.
@MJBrune ahhhh fair enough, never seen the video, just joining the debate 🤣 I’ll give it a whirl
It’s a good video, just came out about 2 hours ago by exurb1a. It gave a lot of insight into ChatGPT and how it’s a tool that people are getting attached to.
@MJBrune I love his videos - watching now 👀
Oh fuck yeah, funny turtle man dropped a new video? Brb, going to watch this and then question my entire existence for the 64th time