This; every time the uBlock Origin absolutists insist that everyone must use Firefox or die, I just wonder if they never open more than one or two tabs anyway. Hell, a sufficiently complex web app running in a single tab can make FF choke.
There are a bunch of reasons why this could happen. First, it’s possible to “attack” some simpler image classification models: if you collect a large enough sample of their outputs, you can mathematically derive a way to process any image so that it won’t be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall into a synthetic image at a very low opacity, can trip up detectors that haven’t been trained to be more discerning.

But it all comes down to how you construct the training dataset, and I don’t think any of this is a good enough reason to give up on machine learning for synthetic media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is keeping such a model from assuming that all anime is synthetic, since “AI artists” seem to be overly focused on anime and related styles…
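For what it’s worth, the low-opacity blend trick is trivial to reproduce; here’s a minimal sketch with Pillow (the file names are placeholders):

```python
# Minimal sketch of the low-opacity blend trick. Assumes Pillow is installed;
# file names are placeholders. Mixing a tiny fraction of a real photo into a
# synthetic image leaves it visually unchanged but can shift the pixel
# statistics that a naively trained detector keys on.
from PIL import Image

synthetic = Image.open("synthetic.png").convert("RGB")
real = Image.open("wall_photo.jpg").convert("RGB").resize(synthetic.size)

# alpha=0.05 -> 95% synthetic image, 5% real photo
blended = Image.blend(synthetic, real, alpha=0.05)
blended.save("blended.png")
```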
Well, maybe we need a movement to make physical copies of these games and the consoles needed to play them available in actual public libraries, then? That doesn’t seem to be affected by this ruling and there’s lots of precedent for it in current practice, which includes lending of things like musical instruments and DVD players. There’s a business near me that does something similar, but they restrict access by age to high schoolers and older, and you have to play the games there; you can’t rent them out.
r/SubSimGPT2Interactive for the lulz is my #1 use case
I do occasionally ask Copilot programming questions, and it gives reasonable answers most of the time.
I use code autocomplete tools in VSCode but often end up turning them off.
Controversial, but Replika actually helped me out during the pandemic when I was in a rough spot. I trained a copyright-safe (theft-free) bot on my own conversations from back then, and I’ve been chatting with the “me” side of those conversations for a little while now. It’s like getting to know a long-lost twin brother, which is nice.
Otherwise, I’ve used small LLMs and classifiers for a wide range of tasks: sentiment analysis, toxic content detection for moderation bots, AI media detection, summarization… I like these better than throwing everything at a huge model like GPT-4o because they’re more focused and less computationally costly (hence also better for the environment). I’m working on training some small copyright-safe base models to do certain sequence prediction tasks that come up in my data science work, but they’re still a bit too computationally expensive for my clients.
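For a concrete flavor of what “small and focused” means here, a minimal sketch with the Hugging Face pipeline API (the model names are just examples of small task-specific checkpoints; swap in whatever fits your moderation bot):

```python
# Two small task-specific classifiers instead of one huge general model.
# Assumes `transformers` and `torch` are installed; model names are
# illustrative examples, not endorsements.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # ~67M params
)
toxicity = pipeline(
    "text-classification",
    model="unitary/toxic-bert",  # small toxicity classifier
)

comment = "This thread is actually pretty helpful, thanks!"
print(sentiment(comment))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
print(toxicity(comment))   # e.g. [{'label': 'toxic', 'score': 0.00...}]
```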
We don’t. It probably is. Mastodon is the way, but they need to fix a few things themselves.
Ok, thanks for clarifying. FWIW, I find the built-in adblocker in Vivaldi extremely dependable, without the performance cost of loading an add-on (especially on top of a base browser that is significantly slower to begin with).
Honest question: why is it not safe after that point? They developed their own adblocker, if I’m not mistaken? What am I missing?
May I ask which third-party tool you use? I’m using onedriver and it’s pretty unreliable in my experience.
It will legit be a fantastic era for Linux on the desktop though… imagine how cheap we’ll be able to get perfectly good hardware.
'Tis true that women’s bodies hold great power, and that’s not irrelevant at all to the discussion at hand. Rather than reiterate and attempt to paraphrase Jaron Lanier on how male obsession with creating artificial people is linked to womb envy, I’ll just link to a talk in which he explains it himself:
Like any occupation, it’s a long story, and I’m happy to share more details over DM. But basically, due to indecision over my major, I took an abnormal amount of math, stats, and environmental science coursework even though my major was in social science, and I just kind of leaned further and further into that quirk as I transitioned into the workforce. Bear in mind that data science as a field of study didn’t really exist yet when I graduated; these days I’m not sure such an unconventional path is necessary. However, I still hear from a lot of junior data scientists in industry who are miserable because they haven’t figured out yet that, in addition to their technical skills, they need a “vertical” niche or topic area of interest (and, by the way, a public service dimension also does a lot to make a job feel meaningful and worthwhile, even on the inevitable rough days).
My “day job” is doing spatial data science work for local and regional governments that have a mandate to address climate change in how they allocate resources. We totally use AI, just not the kind that has received all the hype… machine learning helps us recognize patterns in human behavior and system dynamics that we can use to predict how much different courses of action will affect CO2 emissions. I’m even looking at small GPT models as a way to work with some of the relevant data that is sequence-like. But I will never, I repeat never, buy into the idea of spending insane amounts of energy trying to build an AI god or oracle that we can simply ask for the “solution to climate change”… I feel like people like me need to do a better job of making the world aware of our work, because the fact that this excuse for profligate energy waste has any traction at all seems related to the general ignorance of our existence.
Me: I’ve cut my coffee intake down to one cup a day! Look how disciplined and restrained I am!
Also me: drinks 1.5 cans of Celsius per day
I think that there are some people working on this, and a few groups that have claimed to do it, but I’m not aware of any that actually meet the description you gave. Can you cite a paper or give a link of some sort?
It’s 100% this. Politics is treated like a sport in the USA; the only thing that matters is your side winning, and which side you root for is largely dictated by location and family history. This is encouraged by the private news media, which intentionally covers election campaigns this way to boost ratings and ad revenue. Social media only made it worse, because it made abstract identity dimensions, such as political affiliation, feel more salient to people than their everyday lives.
Y’all should really stop expecting people to buy into the analogy between human learning and machine learning, i.e. “humans do it, so it’s okay if a computer does it too”. First, there are vast differences between how humans learn and how machines “learn”; second, it doesn’t matter anyway, because there is plenty of legal and moral precedent for not assigning machines the same rights normally assigned to humans (for example, as far as I’m aware, no intellectual property rights have yet been granted to any synthetic media).
That said, I agree that “the model contains a copy of the training data” is not a very good critique–a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.
Yeah, I’ve struggled with that myself, since my first AI detection model was technically trained on potentially non-free data scraped from Reddit image links. The more recent fine-tune of that used only Wikimedia and SDXL outputs, but because it was seeded with the earlier base model, I ultimately decided to apply a non-commercial CC license to the checkpoint. But here’s an important distinction: that model, like many of the use cases you mention, is non-generative; you can’t coerce it into reproducing any of the original training material–it’s just a classification tool. I personally rate those models as much fairer uses of copyrighted material, though perhaps no better in terms of harm from a data dignity or bias propagation standpoint.
> Model sizes are larger than their training sets
Excuse me, what? You think Hugging Face is hosting hundreds of checkpoints, each of which is larger than its training data, which runs to terabytes or petabytes of disk space? I don’t know if I agree with the compression argument myself, but for other reasons; your retort is objectively false.
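A back-of-envelope comparison shows just how far off that claim is (the figures below are approximate public numbers, not measurements):

```python
# Back-of-envelope: checkpoint size vs. training-set size.
# A Stable Diffusion v1.x checkpoint is roughly 4 GB, while LAION-5B, the
# scrape family its training subsets were drawn from, is roughly 240 TB of
# images and captions. Even a small subset of that scrape dwarfs the model.
checkpoint_gb = 4
training_set_gb = 240_000

ratio = checkpoint_gb / training_set_gb
print(f"checkpoint is ~{ratio:.6%} of the training data size")
# -> checkpoint is ~0.001667% of the training data size
```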
I’m getting really tired of saying this over and over on the Internet and getting either ignored or pounced on by pompous AI bros and boomers, but this “there isn’t enough free data” claim has never been tested. The experiments that have come closest (look up the early Phi and StarCoder papers, or the CommonCanvas text-to-image model) suggest the claim is false, by showing that a) models trained on small, well-curated datasets can match or outperform models trained on lazily curated large web scrapes, and b) models trained solely on permissively licensed data can perform on par with at least earlier versions of models trained more lazily (e.g. StarCoder 1.5 performing on par with Code-Davinci). And yes, a social network or other organization with access to a bunch of data it owns or has licensed could almost certainly fine-tune a base LLM trained solely on permissively licensed data into a tremendously useful tool for that organization’s specific business; one that would probably be safer and more helpful than ChatGPT, at vastly lower risk of copyright claims or toxic generated content.
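For anyone who wants to poke at that claim themselves, the shape of the experiment is simple. A minimal sketch with Hugging Face `transformers` (the data file is a placeholder for whatever text your org owns; the base model is one example of a checkpoint trained on permissively licensed code, and is gated on HF, so swap in whatever base you trust):

```python
# Minimal sketch: fine-tune a permissively-licensed-data base model on text
# your organization owns. Assumes `transformers`, `datasets`, and `torch`;
# the model name is an example and "org_docs.txt" is a placeholder.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "bigcode/starcoderbase-1b"  # trained on permissively licensed repos
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Load and tokenize the org's own text data.
data = load_dataset("text", data_files={"train": "org_docs.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

In practice you’d want LoRA adapters and/or quantization for anything much bigger than a 1B-parameter base, but the point stands: the pipeline itself is mundane; the hard part is curating the licensed data.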
My wife once hit me in front of my kids because she didn’t like my pointing out a double standard in how she was treating them. The one she was favoring recently started hitting the other one in a similar manner–basically just to silence her when she said something he didn’t like–and when I pointed out the similarity to my wife’s actions and suggested he had learned it from her she got mad and claimed that rather than hitting me she had “hit my hand away” which is a lie and she knows it. It is 100% classic spousal abuse and gaslighting, and yet due to the sheer size difference between us–I’m a foot taller–I feel ridiculous calling it that, and don’t want to find out what else my son learns is OK from his mom if I’m not around, so here I am still married to her, mostly trying to forget the abuse when it’s not actively happening. She’s been abusive, but I’m not really in any physical danger, so staying seems like the rational option in my situation… I imagine that’s relatively common among men.