• 0 Posts
  • 73 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I don’t disagree, but don’t pretend you haven’t effectively set up the equal and opposite thing here. No mods will ban anyone but other than that every comment section is an implicit competition for best pro-Palestinian talking point, even when decency demands otherwise. We don’t talk about Oct 7, and if we do it was friendly fire, and if it wasn’t it was a natural consequence of Israeli policy in Gaza and that is the real issue. Yeah fine we admit the attack was not a hundred percent morally sound if you insist so much, but we don’t assign a moral weight to it or linger on it because hey when you make innocents suffer, you sow the wind and eventually reap the whirlwind, oh sure Hamas’ response was ugly but what can you do, you know, be a bastard and it comes around. Now it is our moral duty to call loud and clear for a ceasefire – the cycle of violence must stop.

    I know what you’re thinking: that’s not fair! That’s not my opinion! Yeah, the circlejerk doesn’t care about your private opinion. You know better than to contradict any of the above around here in writing, and that’s enough. I’m sure a lot of people privately think “oh… tbh that last IDF strike was unconscionable” before posting on /r/worldnews the part of their opinion they know the crowd will like better.









  • The prime problem is that every social space eventually becomes a circlejerk. Bots and astroturfing exacerbate the problem, but it exists perfectly fine on its own – in the early 2000s I had the misfortune of running across plenty of gigantic, years-long circlejerks where definitely no bots or nefarious foreign manipulators were involved (I’m talking console wars, Harry Potter ship wars, stupid shit like that). People form circlejerks in the same way that salts form crystals. It’s just in their nature.

    The thing with circlejerks isn’t that there’s overwhelming agreement on some subject. You’ll get dunked on in most any social media space for claiming that the Earth is flat or that Putin is a swell guy; that in itself is obviously not a problem. What makes a circlejerk is that takes get cheered for and upvoted not in proportion to how much they are anchored in reality, but in proportion to how useful they are in galvanizing allies and disrupting enemies. Whoever shouts “glory to the cause” in the most compelling way gets all the oxygen. At that point the amount of brain rot is only going to increase. No matter how righteous the cause, inevitably there comes the point where you can go on the Righteous Cause Forum and post “2+2=5, therefore all glory to the cause” and get 400 upvotes.

    Everyone talks a big game about how much they like truth, reason and moral consistency, but in the end when it’s just them and the upvote button and “do I stop and honestly examine this argument that gives me warm fuzzy feelings”, “is it really fair to dunk on Hated Group X by applying a standard I would never apply to anyone else” – the true colors show. It’s depressing and it makes most of social media into information silos where totalizing ideologies go to get validated, and if you feel alienated by this then clearly that space isn’t for you.




  • I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.
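    To make those levels concrete, here is a toy sketch. Everything in it is invented for illustration – the assembly fragment is a hypothetical x86-ish routine, not taken from any real binary. Reading the assembly line by line gets you the first function; recognizing what it actually computes gets you the second, far more concise one; and knowing, say, that the loop’s running time scales with the number of set bits rather than the bit width is yet another level on top of that.

```python
# Hypothetical x86-ish fragment (invented for illustration):
#
#   xor  eax, eax        ; count = 0
# loop:
#   test edi, edi        ; while x != 0:
#   jz   done
#   lea  edx, [rdi-1]    ;   t = x - 1
#   and  edi, edx        ;   x &= t
#   inc  eax             ;   count += 1
#   jmp  loop
# done:

def transliterated(x: int) -> int:
    """A line-by-line transliteration: you can 'read' it without
    yet knowing what it is for."""
    count = 0
    while x != 0:
        x &= x - 1      # clears the lowest set bit each pass
        count += 1
    return count

def concise(x: int) -> int:
    """The concise logical representation of the same routine:
    it is a population count (number of set bits)."""
    return bin(x).count("1")

# Both agree on every input, but only the second form tells you
# at a glance what the routine means.
assert all(transliterated(n) == concise(n) for n in range(1024))
```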


  • This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and stuff like that, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
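    A toy sketch of the contrast (every number, threshold, and “support vector” below is invented for illustration, not a trained model): a decision tree’s verdict is the chain of threshold tests, so the explanation comes for free, while an RBF-kernel score is a weighted sum of Gaussian similarities to stored examples, and the “why” is buried in the geometry.

```python
import math

def tree_predict(income: float, debt: float):
    """Interpretable: the verdict IS the path of threshold tests."""
    if income > 50_000:
        if debt < 10_000:
            return 1, "income > 50k AND debt < 10k -> approve"
        return 0, "income > 50k BUT debt >= 10k -> deny"
    return 0, "income <= 50k -> deny"

# Hypothetical "learned" support vectors, weights, and kernel width.
SUPPORT_VECTORS = [(62_000, 4_000), (30_000, 18_000), (55_000, 12_000)]
WEIGHTS = [0.9, -1.1, -0.4]
GAMMA = 1e-9

def rbf_score(income: float, debt: float) -> float:
    """Opaque: a weighted sum of Gaussian similarities to stored
    examples; no single term maps to a real-world feature."""
    return sum(
        w * math.exp(-GAMMA * ((income - sx) ** 2 + (debt - sy) ** 2))
        for w, (sx, sy) in zip(WEIGHTS, SUPPORT_VECTORS)
    )

verdict, reason = tree_predict(60_000, 5_000)
print(reason)                    # a human-readable explanation
print(rbf_score(60_000, 5_000))  # just a number; no explanation attached
```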







  • I’ve always viewed this as a politics problem in disguise.

    The cook wants to oust the king. He has no allies and no claim, but swears profusely that once he is king, every person who failed to back him is going to pay. Do you back the coup? What if you say yes and the cook’s assistant, who overheard you, proclaims that whatever punishment the cook had in store for your lack of cooperation, he’s going to do even worse? Do you switch your allegiance to the assistant then?

    What if this is a hypothetical cook, who the assistants are speculating they could bring over from abroad and are also speculating would mete out the punishment to end all punishments to his non-backers, because he is petty like that? They haven’t even met him, but they figure surely a petty enough cook to fit this description has to exist out there somewhere, and inevitably someone will find him, and bring him over, and he will surely attain power once everyone understands that this is inevitable? Do you throw yourself behind their coup and challenge the king? What if the jesters overhear you and proclaim “oh wait until you hear about our hypothetical jester, he is even worse than that hypothetical cook” – do you switch your allegiance to the jesters then?

    If implicit, empty “once I have power!” threats were horses, beggars would ride.