cross-posted from: https://programming.dev/post/3974080

Hey everyone. I made a casual survey to see if people can tell the difference between human-made and AI generated art. Any responses would be appreciated; I’m curious to see how accurately people can tell the difference (especially those familiar with AI image generation).

  • @LovableBastard@slrpnk.net
    link
    fedilink
    English
    18
    edit-2
    9 months ago

    I did bad.

    But I expected to do bad. AI generation has become too good.

    You tell yourself you can identify them, because sometimes you notice weird artifacts and spot the AI quickly. But we’re really only noticing the bad ones. We’ll never even know the good ones were AI most of the time, so we can’t balance how good we think we are at spotting them against how often we were actually wrong.

    • @scarabic@lemmy.world
      link
      fedilink
      English
      2
      9 months ago

      Hands are often a giveaway. The first image, for example, shows perfect hand proportions, even through a glass. AI isn’t there yet.

      • @CoderKat@lemm.ee
        link
        fedilink
        English
        1
        edit-2
        9 months ago

        Hands are only a giveaway for bad AI art. There’s no shortage of examples with great hands (especially when using features like Stable Diffusion’s ControlNet, which lets you give the AI hints about the shape something should match). It’s just that many people posting AI art generate once or twice and post the result. If you’re more selective and regenerate the weak parts, you can get much more believable results.

        This is also a rapidly changing area, with the most cutting edge AI being way better than something from even a year ago. Used to be that no AI could do even remotely believable text, but in recent weeks, I’ve been seeing many examples of AI art that got small amounts of text perfect.
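
        For anyone curious, here is a minimal sketch of what “hinting the shape” with ControlNet can look like using the Hugging Face diffusers library. The model IDs, file names, and prompt below are illustrative assumptions, not anything specified in this thread.

        ```python
        # Sketch: guiding Stable Diffusion with a ControlNet edge map so that
        # structure (e.g. a hand pose) follows a reference image.
        # Assumes the `diffusers`, `torch`, `opencv-python`, and `Pillow` packages.
        import cv2
        import numpy as np
        import torch
        from PIL import Image
        from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

        # Turn a reference photo (hypothetical file) into a Canny edge map
        # that the ControlNet will follow.
        reference = np.array(Image.open("hand_reference.png").convert("RGB"))
        edges = cv2.Canny(reference, 100, 200)
        edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        # The prompt describes the content; the edge map constrains the shapes,
        # which is what keeps hands and other structure plausible.
        result = pipe(
            "a detailed pencil drawing of a hand holding a glass",
            image=edge_image,
            num_inference_steps=30,
        ).images[0]
        result.save("guided_output.png")
        ```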

    • wjrii
      link
      fedilink
      8
      9 months ago

      Same. On the plus side, I guess we will happily trundle along to our inevitable doom, led by our impossible-to-identify AI overlords!

    • @CoderKat@lemm.ee
      link
      fedilink
      English
      3
      9 months ago

      I only guessed a single one as AI generated, and I was wrong about that (the shaping and shading in the mouse-in-a-boat drawing felt unusual for a human). I really couldn’t identify any telltale signs of AI in the rest, so I answered honestly that none of the others looked AI generated.

      To be honest, I expected that. The telltale signs people often talk about are only problems for bad AI art. Well done AI art really is indistinguishable. Stuff like weird fingers, faces, and teeth are only problems if the prompter is lazy and just picks the first thing generated (and doesn’t selectively regenerate). If you’re selective, you can get AI art without the things some people claim make AI art easy to recognize.

      It’s like photoshop or movie CGI. Anyone can spot a bad photoshop, and we’re used to seeing those. But well done photoshops by experts can be near impossible to detect (short of careful pixel-level inspection, which doesn’t really apply to AI art). Yet a lot of people are overconfident in how well they can spot photoshops.

      I wonder if this will change anyone’s mind? I’ve always wanted to do this for a few topics, including AI. I’ve also wanted to do it for trans vs cis people (so many transphobes claim they can “always tell”), movie CGI vs practical effects (see also: Captain Disillusion videos), and various kinds of food and drink (so many people insist one version tastes better; a simple example I’ve actually seen disproven is eggs, store-bought vs locally sourced).

  • @watersnipje@lemmy.blahaj.zone
    link
    fedilink
    English
    12
    9 months ago

    13/20, I work in AI. The paintings were the hardest for me, because the art style obfuscates some of the AI artefacts that can be tells.

    • LUHG
      link
      fedilink
      English
      2
      9 months ago

      Yeah, I don’t work in AI but got 12 because the art was difficult. It’s still a while before it becomes impossible to tell.

  • @Kissaki@feddit.de
    link
    fedilink
    English
    10
    9 months ago

    I was missing a “don’t know”/“can’t determine” option.

    For photographs specifically, and some types of paintings and artificial stuff, there are things you can look for. But for other things, at least to my knowledge, you can’t.

    Like the pencil drawing. There just aren’t enough things it could be doing wrong. It’s a sketch, with simplistic but “error-excusing”, diffuse, transformable content.

    • @popcar2@programming.devOP
      link
      fedilink
      English
      12
      edit-2
      9 months ago

      The goal isn’t really to be a quiz, but rather to see how susceptible people are to AI generated art. Many of the images I chose are intentionally ambiguous; 80% of people so far got the line art sketch wrong, and that’s while knowing that many of these are AI generated. The results are definitely interesting to see.

      A “don’t know” option would defeat the point, since most people would just choose that. I want to see which way people lean.

      • @lloram239@feddit.de
        link
        fedilink
        English
        1
        9 months ago

        The line art one still has a stock photo watermark visible in the bottom right corner by the shoe (a subtle white cross pattern). That’s the only thing that gave it away as being real.

        • @CoderKat@lemm.ee
          link
          fedilink
          English
          2
          9 months ago

          I don’t think that’s necessarily a dead giveaway, because there has been controversy about AI art that included watermarks; the controversy was that it implied the AI was scraping images it definitely wasn’t allowed to use.

    • @modeler@lemmy.world
      link
      fedilink
      English
      5
      edit-2
      9 months ago

      The back left leg of the bench in the pencil drawing is in the wrong place - at least that was what I considered the ‘tell’.

      But I found it really hard to spot the AI.

  • @Gork@lemm.ee
    link
    fedilink
    English
    10
    9 months ago

    14/20 here. I dunno why there are so many people, particularly on Reddit, who absolutely hate AI art. Yeah, some of it can look janky or uncanny-valley, but a lot of it looks really damn cool.

    And not all of us have the talent to create visual art of our own, so generating images from text is a much more accessible way to explore our imaginations. Or we lack the money to commission pieces from human artists.

    • FaceDeer
      link
      fedilink
      10
      9 months ago

      I suspect they hate it not because of any features of the actual images themselves, but for what it means to how society as a whole treats art.

      For some it’s simply financial. Their career is at stake, an industry that they thought was a stable source of employment is now on the leading edge of a huge shake-up that might not need them at all in the future.

      For others it’s seen as an attack on their personal self-worth. For years, for generations even, there has been a steady drumbeat of insistence that art is what makes humans “special”, both specific artists and humanity in general. It was supposed to be a special skill that set us above the animals and the machines. And now that’s been usurped.

      It’s like the old folk tale of John Henry, the steel-driving man who made a heroic last stand against Skynet’s forces in the railroad construction industry. People want to think humans are irreplaceable, and art seemed like a rock-solid anchor for that. Turns out it wasn’t.

      • @CoderKat@lemm.ee
        link
        fedilink
        English
        1
        9 months ago

        Agreed, and I sympathize with all of those points.

        On the financial point, we, as a society, badly need to stop depending on jobs for survival before it’s too late. But I know that we’re unlikely to change until a lot of people get hurt.

        And on the self-worth point, it feels awful to be replaced, even if the money isn’t an issue. People take pride in their work and want their work to be celebrated. Yet we’re quickly approaching a point where it will be very difficult for people to create art by hand that can hold a candle to AI art. Sure, there are still many master artists, but they got where they are through hard work. How many new potential artists will be willing to put in that hard work when any random Joe Blow can generate something better in seconds? Human-made art (from scratch) won’t go away, but it’s harder to feel good about what you create when it feels like your art no longer has a place.

        • FaceDeer
          link
          fedilink
          2
          9 months ago

          I suspect that society isn’t going to stop depending on jobs for survival until it’s too late. That is, it’ll only implement something like UBI or an equivalent solution once most jobs have been replaced and there’s a legion of permanently unemployed people forcing the issue to be addressed. Unfortunately that just seems to be the way of things; very few problems ever get addressed preemptively.

          IMO this isn’t really a reason to try to slow down AI, because that will only slow down the eventual UBI-like solution to it. At this point I don’t think “change human nature first” is a viable approach.

    • @Pregnenolone@lemmy.world
      link
      fedilink
      English
      1
      9 months ago

      A lot of Redditors don’t even know why they think a certain way; they think that way because everyone else around them thinks that way. There are some legit criticisms of AI art, but most of the time it’s just bullshit lip service to artists from people who don’t actually care.

      • @Gork@lemm.ee
        link
        fedilink
        English
        1
        9 months ago

        Yeah I’ve had posts deleted on Reddit before because “ew AI art”. Like, I’m just trying to share interesting images. I’m not profiting off them in any way. But they take it so personally.

    • @Sekoia@lemmy.blahaj.zone
      link
      fedilink
      English
      -1
      9 months ago

      Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you’re just exploiting labor without consent.

      (Also if you’re just making random images for yourself, w/e)

      ((Also also, text models are a separate debate and imo much worse considering they’re literally misinformation generators))

      Note: if anybody wants to reply with “actually AI models learn like people so it’s fine”, please don’t. No, they don’t. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf

      • @Even_Adder@lemmy.dbzer0.com
        link
        fedilink
        English
        3
        9 months ago

        This paper is just about stock photos and video game art with enough dupes or variations that they didn’t get cut from the training set. The repeated images appeared frequently enough to overfit, which is something we already knew. That doesn’t really prove whether diffusion models learn like humans or not. Not that I think they do.

        • @Sekoia@lemmy.blahaj.zone
          link
          fedilink
          English
          1
          9 months ago

          Sure, it’s not proof, but it’s a good starting point. Non-overfitted images would still show this effect (to a lesser extent), and this would never happen to a human. And it’s not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case of the painting).

          • @Even_Adder@lemmy.dbzer0.com
            link
            fedilink
            English
            1
            9 months ago

            Non-overfitted images would still have this effect (to a lesser extent),

            This is a bold claim to make with no evidence, when every trained image accounts for less than one byte of data in the model. Even the tiniest image files contain many thousands of bytes, and a single byte isn’t enough to hold even one character of text beyond basic ASCII; most extended Latin characters and many symbols take two bytes.
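
            To put rough numbers on that, here’s a back-of-the-envelope calculation. The checkpoint size and training-set count are assumptions based on commonly cited figures for Stable Diffusion v1.x and LAION, not figures from this thread.

            ```python
            # Back-of-the-envelope: bytes of model weights per training image.
            # Assumed figures: ~2 GB fp16 Stable Diffusion v1.x checkpoint,
            # ~2.3 billion LAION image-text pairs (both approximate).
            checkpoint_bytes = 2 * 1024**3
            training_images = 2_300_000_000

            bytes_per_image = checkpoint_bytes / training_images
            print(f"~{bytes_per_image:.2f} bytes of weights per training image")
            # Prints roughly 0.93, versus many thousands of bytes for even a
            # heavily compressed thumbnail of any one training image.
            ```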

            and this would never happen to a human.

            There are plenty of artists who get stuck with same-face, Sam Yang for instance. Then there are others who can’t draw disabled people or people of color; if it isn’t a beautiful white female character, they can’t do it. It can take a lot of additional training for people to break out of their rut, and some never do.

            I’m not going to tell you that latent diffusion models learn like humans, but they are still learning. Have a source: https://arxiv.org/pdf/2306.05720.pdf

            I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that recently won a historic case: border guards in the US now need a warrant to search your phone.

            This guy also does a pretty good job of explaining how latent diffusion models work; you should give it a watch too.

  • @rDrDr@lemmy.world
    link
    fedilink
    English
    9
    9 months ago

    8/20. I’m pretty good with photorealistic images, but the random drawings… honestly, I tagged a lot of the human-made ones as AI generated because I thought they kinda sucked.

    • @Itsamelemmy@lemmy.zip
      link
      fedilink
      English
      2
      9 months ago

      10/20. I thought I got the photorealistic ones right, but I got the first two wrong. The back of the bench being different in the girl-with-a-drink photo made me think AI. I still don’t know how that one is real; bokeh can’t explain that difference.

  • @ImpossibilityBox@lemmy.world
    link
    fedilink
    English
    8
    9 months ago

    I got 17 out of 20. I pegged the Berserk drawing as generated because the bottom part of the armor lacked symmetry and didn’t make any sense. I got the other three line drawings wrong.

    I have spent WAAAAY too much of my free time generating images and apparently have picked up an eye for the weird kinds of artifacts these generators produce. The hardest one to articulate is that generated images have a very specific type of noise. Photos have a fine, grainy noise, while digital images show more of the blocky JPEG artifacts and banding; generated images end up with a weird hybrid of the two that isn’t consistent across the whole image.
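
    One rough way to poke at that noise idea is to compare high-frequency residual energy across tiles of an image. The sketch below is only an illustration of the check, not a reliable detector, and the file name and tile size are arbitrary assumptions.

    ```python
    # Sketch: measure how consistent the high-frequency "grain" is across an image.
    # Real camera grain tends to be fairly uniform; generated images often are not.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    img = np.asarray(Image.open("suspect.png").convert("L"), dtype=np.float32)

    # High-frequency residual: original minus a blurred copy.
    residual = img - gaussian_filter(img, sigma=2)

    # Noise energy per tile; a large spread hints at inconsistent noise.
    tile = 64
    h, w = residual.shape
    energies = [
        residual[y:y + tile, x:x + tile].std()
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ]
    print(f"tile noise std: mean={np.mean(energies):.2f}, spread={np.std(energies):.2f}")
    ```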

  • @techlaito@lemmy.world
    link
    fedilink
    English
    6
    9 months ago

    Score bragging aside, it takes some technical knowledge and pixel peeping to really be able to tell, and even then I can’t guarantee you can. I’d imagine your average Joe wouldn’t know any better.

  • @lloram239@feddit.de
    link
    fedilink
    English
    6
    9 months ago

    10/20, and most of it was just guesswork. It has become pretty much impossible to tell when the real images are chosen in the style of AI images; you really have to go pixel hunting to find artifacts, and even that is getting difficult when things get blurry or you hit JPEG artifacts as you zoom in.

    That said, there are still plenty of images AI has a very hard time creating. Anything involving real-world products will look wonky. Interesting framing, where the subject isn’t centered, is hard. Unusual aspect ratios are hard, since the models are mostly trained on squares. Complex scenes with multiple characters rarely work. Facial expressions are hard. And normal everyday photos don’t really work either; AI output always looks like people posing for a stock photo.

  • @ComicalMayhem@lemmy.world
    link
    fedilink
    English
    5
    9 months ago

    Got 10/20. The second photo really threw me for a loop. All the texture on the skin and hair led me to believe it was human; I noticed the weird patch on the shoulder and the unnatural shine on the ear, but excused them as technical flaws or something and chose human in the end. I really thought the corporate-logo-style drawing of the avocado was human; it wasn’t even a question for me, and the fact that it was AI really surprised me.

    • @ante@lemmy.world
      link
      fedilink
      English
      1
      9 months ago

      I also got 10/20. The second one is fairly obvious, though, in my opinion. Look at the shape of the glasses – the lenses are uneven and don’t match.

  • jpj007
    link
    fedilink
    4
    9 months ago

    13/20, but there was a lot of guessing in there. I would have believed any of them going either way.

  • @garyyo@lemmy.world
    link
    fedilink
    English
    4
    edit-2
    9 months ago

    Idk about anyone else, but it’s a bit long. Up to Q10 I took it seriously and actually looked for AI-gen artifacts (and got all of them up to 10 correct), and then I just sort of winged it, guessed, and got about 50% of the rest right. OP, if you are going to use this data anywhere, I would first recommend getting all of your sources together, since some of those did not have a good source, but also watch out for people doing what I did: getting tired of the task and just wanting to see how well they did on the part they actually tried. I got about 15/20.

    For anyone wanting to get good at seeing the tells, focus on discontinuities across edges: the number or intensity of wrinkles across the edge of eyeglasses, or the positioning of a railing behind a subject (especially if a corner is hidden from view; you can imagine where it is, but the image generator cannot). Another tell is a noisy mess where you expect noise that is organized: cross-hatching trips it up, especially in boundary cases where two hatched areas meet, where two trees or other organic-looking things meet, or wherever lines have a very specific way of resolving when they come together. Finally, look for real-life objects that are slightly out of proportion; these models are trained on drawn images, photos, and everything else, and they cross those influences a lot more than a human artist would. The eyes on the Lego figures gave that one away, though it also shows the discontinuity-across-edges problem with the woman’s scarf.

  • NessD
    link
    fedilink
    English
    4
    9 months ago

    The avocado had real text. Is Dall-E 3 capable of creating legible text?

    • @popcar2@programming.devOP
      link
      fedilink
      English
      4
      9 months ago

      Yes, it’s the only model that manages to get text right, and the results are usually pretty consistent. It’s a big step forward.

      [Image: AI generated photo of a cat saying "I'm king of the world!"]
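
      For reference, an image like this can be requested through OpenAI’s image API. The sketch below uses the standard openai Python client; the prompt and output handling are illustrative, not the exact setup behind the survey images.

      ```python
      # Sketch: asking DALL-E 3 for an image that contains short, legible text.
      # Requires the `openai` package and an OPENAI_API_KEY environment variable.
      from openai import OpenAI

      client = OpenAI()
      result = client.images.generate(
          model="dall-e-3",
          prompt='A photo of a cat holding a sign that reads "I\'m king of the world!"',
          size="1024x1024",
          n=1,  # DALL-E 3 generates one image per request
      )
      print(result.data[0].url)  # URL of the generated image
      ```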

        • FaceDeer
          link
          fedilink
          2
          9 months ago

          Control nets are kind of “cheating”, though; they’re a form of image-to-image where you provide the model with something to trace over or otherwise guide it. I think in this area the open-source field has (briefly) fallen behind, and we’ll need another round of catch-up. That’s fine, though. Let competition drive hard.

    • @lloram239@feddit.de
      link
      fedilink
      English
      1
      9 months ago

      Kind of. It can generate readable text, but not all the time. It will frequently turn parts of your prompt into text that aren’t meant to be text or mix perfectly readable text with AI gibberish: