Slow June, people voting with their feet amid this AI craze, or something else?

  • Platomus@lemm.ee

    It’s because it’s summer and students aren’t using it to cheat on their assignments anymore.

    • TheEllimist@lemmy.world

      It’s definitely this. Except for the kids taking summer classes, which statistically probably have a higher incidence of cheating.

  • i_lost_my_bagel@seriously.iamincredibly.gay

    I tried it for about 20 minutes

    Had it do a few funny things

    Thought huh that’s neat

    Went on with life

    Since then the only times I’ve thought about ChatGPT has been seeing people using it in classes I’m in and just sitting here thinking “this is a fucking introductory course and you’re already cheating?”

    • idolofdust@lemmy.world

      In discrete mathematics right now, and I’ve overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that’s what they’d used almost exclusively up to this point.

  • wackypants@kbin.social

    It’s Summer. Students are on break, lots of people on vacation, etc. Let’s wait to see if the trend persists before declaring another AI winter.

    • twicetwotimes@lemmy.world

      Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.

      Literally every undergrad I spoke with said they use it for every written assignment (largely in legitimate, non-cheating educational ways). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.

      I’d be pretty surprised if there isn’t a significant bounce-back in September.

      • afraid_of_zombies2@lemmy.world

        I have been using it to do deep dives into subjects, especially text analysis. Do you want to know the entire vocabulary of the Gospel of Mark in the original Greek, for example? 1080 words. Now how does this compare to a section of Plato’s Republic of the same size? About 6-7x as large.

        So right there we can see why Mark is often viewed as a direct text while Plato is viewed as a more ambiguous writer.
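
        For what it’s worth, a vocabulary count like the one above doesn’t need an LLM at all; here is a rough Python sketch (the English snippets are placeholders standing in for the actual Greek texts, which you’d load from a digital corpus yourself):

```python
import re

def vocab_size(text: str) -> int:
    """Count distinct word forms (case-folded, punctuation stripped)."""
    words = re.findall(r"\w+", text.lower())
    return len(set(words))

# Toy placeholders; the real comparison would load the Greek of Mark
# and of the Republic into these variables.
mark_sample = "and he said to them follow me and they followed him"
plato_sample = "the soul of the just man differs from the soul of the unjust man"

print(vocab_size(mark_sample), vocab_size(plato_sample))
```

        Comparing same-size sections, as described above, matters: vocabulary counts grow with text length, so equal-length samples are the fair comparison.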

        • InverseParallax@lemmy.world

          Mark is a direct and terse narrative of a specific segment of Jesus’s life and teachings, while the Republic is an attempt to expound a philosophy and system of government.

          I agree with you, but I’m not sure I’d call him a more ambiguous writer. Mark is a ‘just the facts, ma’am’ notation of near-contemporary verbal histories, with the other gospels being attempts to add on contemporary allegories and legends attributed by different groups to Jesus (or John, who just did his own thing).

          I’d be curious about a comparison with the Apology and Crito, similar narratives of a similar figure in a specific segment of his life (the end of it). It’s fairly direct and terse, as Socrates was portrayed as being direct and terse, but otherwise the styles are similar, as (throw on hard hat) Jesus appears to have been attributed many of the allegories of Socrates in the recorded gospels, which makes sense if you’re trying to appeal to followers of Hellenic religions such as those in Rome and Greece.

      • sndrtj@feddit.nl

        This worries me though. I’ve found ChatGPT to be wrong on basically every fact-based question I’ve asked it. Sometimes subtly, sometimes completely, but it always hallucinates. You cannot use it as a source of truth.

        • twicetwotimes@lemmy.world

          Honestly I feel like at this point its unreliability is kind of helpful for students. They have to learn how to use it most effectively as a tool for producing their own work and not a replacement. In my classes the more relevant “problem” for students is that GPT produces written work that on the surface feels composed and sensible but is actually straight up garbage. That’s good. They turn that in, it’s extremely obvious to me, and they get an F (because that’s the grade AI earned with the garbage paper).

          But they can and should use it for things it’s great at: reword this long sentence I’m having trouble phrasing concisely, help me think of a title for my paper, take my pseudocode and help me turn it into a while loop in R, generate a list of current researchers on this topic and two of their most recent publications, translate this paragraph of writing from Foucault/Marx/Bourdieu/some-good-thinker-and-bad-writer into simpler wording…

          I have a calculator in my pocket even though my teachers assured me I wouldn’t. Students will have access to and use AI forever now. The worry should be that we fail to teach them the difference between a homework-bot and an incredible, versatile tool to leverage.

    • potustheplant@lemmy.world

      I think you’re being a bit self-centered; it’s always going to be summer somewhere. This is a tool used globally.

      • Smatt@lemmy.world

        I see your point but:

        1. It’s not always summer somewhere: the Northern and Southern hemispheres are in spring/fall half the year.
        2. The global North has a much larger population than the global South.
  • Magiwarriorx@lemmy.world

    I still use free GPT-3 as a sort of high-level search engine, but lately I’m far more interested in local models. I haven’t used them for much beyond SillyTavern chatbots yet, but some aren’t terribly far off from GPT-3 from what I’ve seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3’s 175bn parameters). Responses are faster on my hardware than on OpenAI’s website and it’s far less restrictive, with no “as a large language model…” warnings. Definitely more interesting than sanitized corporate models.

    The hardware requirements are pretty high, 24GB VRAM to run 13bn parameter 8k context models, but unless you plan on using it for hundreds of hours you can rent a RunPod or something for cheaper than a used 3090.
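
    The back-of-envelope math behind numbers like that looks roughly like this; the bits-per-weight values and the 20% overhead factor are my assumptions, and real usage also grows with context length (the 8k context adds KV-cache on top of the weights):

```python
def model_vram_gb(n_params_bn: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight storage times a fudge factor for
    activations/KV cache. Real usage depends heavily on context length."""
    bytes_total = n_params_bn * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 2**30

# A 13bn-parameter model at fp16 vs 4-bit quantization (illustrative only)
print(round(model_vram_gb(13, 16), 1))  # fp16: too big for a 24GB card
print(round(model_vram_gb(13, 4), 1))   # 4-bit: fits with room for context
```

    This is why quantized models are popular for local use: the same 13bn-parameter model drops from roughly 30GB at fp16 to well under 10GB at 4 bits per weight.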

  • gaiussabinus@lemmy.world

    I have a number of language models running locally. I am really liking the gpt4all install with the Hermes model. So in my case I used ChatGPT right up until I had one I could keep private.

    • ClemaX@lemm.ee

      How does it compare with ChatGPT (GPT 3.5), quality and speed wise?

      • gaiussabinus@lemmy.world

        Depends on how you get it accomplished. If you use the Python bindings it’s slow, but using the gpt4all app it’s quick, and there is a gpt4all API should you wish to build a private assistant. I like that one, but it’s still run by a company, so mileage may vary; there are a few projects on GitHub for use with open-source models. I can get better quality from the Hermes model than I can with GPT 3.5, IMO, but some models are better than others depending on what you are trying to do. If you have done any work with Stable Diffusion, lots of different models are popping up right now for different use cases, like you see on Civitai. A good coding bot is probably going to be a bit shit in a conversation.

  • Meow.tar.gz@lemmy.goblackcat.com

    ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

    • zeppo@lemmy.world

      It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It’s been somewhat amusing, or perhaps confounding, seeing people use it thinking it’s an oracle of knowledge and wisdom that knows everything. Maybe someday.

  • BonfireOvDreams@lemmy.world

    It’s not just that the novelty has worn off; it’s progressively gotten less useful. Any god damn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability they have significantly dumbed down its utility in the public release. I can’t even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.

    • afraid_of_zombies2@lemmy.world

      I got it to give me a book that was still in copyright status by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.

    • Kerfuffle@sh.itjust.works

      Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it’s just nothing to do with OpenAI.)

      • ultranaut@lemmy.world

        ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it’s become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.

      • cybersandwich@lemmy.world

        I pay for it and it’s… okay for most things. It’s pretty great at nerd stuff, though*. Paste in an error code or a cryptic log-file message with a bit of context, and it’s better than googling for 4 days.

        *If you know enough to suss out the obviously wrong shit it produces every once in a while.

        • Kerfuffle@sh.itjust.works

          Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

          I can usually find what I’m looking for without days of searching unless it’s really obscure. And if something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.

          If you know enough to sus out the obviously wrong shit it produces every once in a while.

          That’s one pretty big problem. If something really is difficult/complex you likely won’t be able to tell the difference between a wrong answer from ChatGPT and one that’s correct unless it just says something obviously ridiculous.

          Obviously humans make mistakes too, but at least when you search you see results in context, others can potentially call out or add context to things that might not be correct (or even misleading), etc. With ChatGPT you kind of have to trust it or not.

          • shiftybits@lemmy.world

            Yeah, if it’s that hard to find, GPT is just going to hallucinate some BS into the response. I use it as a Stack Overflow at times and often run into garbage when I’m trying to solve a truly novel problem. I’ll often try to simplify it to something contrived, but mostly I find the output useful as a sort of spark. I can’t say I ever find the raw code it generates useful or all that good.

            It’ll often give wrong answers, but some of those can contain useful bits that you can arrange into a solution. It’s cool, but I still think people are oddly enamored with what is really just a talking Google. I don’t think it’s the game changer people think it is.

  • Poob@lemmy.ca

    It’s really fucking annoying getting “As an AI language model, I don’t have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on…” at the beginning of every prompt, followed by the driest, most bland answer imaginable.

    • afraid_of_zombies2@lemmy.world

      It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing has really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

      “Ok wikipedia says this ____”

      It answered in this long winded way that had all these things like “blessed prophet of Allah”. Basically the answer I would expect from an Imam.

      I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up case law from it, I took it as confirmation. I don’t know how it happened, but for some questions it has this very authoritative tone, like it knows this without any doubt.

    • theneverfox@pawb.social

      Yeah, it’s boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the “guardrails”.

      I love the boring. Every other day, I think “man, I really don’t want to do this annoying task.” I’m not sure if it even saves much time since I have to look over the work, but it’s a hell of a lot less mentally exhausting.

      Plus, it’s fun having it Trumpify speeches. It’s tremendous. I’ve spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through… Seriously, it never gets old

  • randon31415@lemmy.world

    On that note, what would people recommend for a locally hosted (I have a graphics card) ChatGPT-like LLM that is open source and doesn’t require a lot of other things to install?

    (Just a one-line CMD installation! That is, if you have pip, pip3, Python, PyTorch, CUDA, conda, Jupyter notebooks, Microsoft Visual Studio, C++, a Linux partition, and Docker. Other than that, it is just a one-line installation!)

    • festus@lemmy.ca

      Look into llama.cpp; it’s a single C++ program that runs quantized models (basically models with somewhat less precision: you don’t really need a full 64 bits for a double). As for models to run on it, there are so many, but I think WizardLM is pretty good.
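
      To illustrate what “quantized” means here, a toy Python sketch (llama.cpp’s actual formats are fancier, and the 8-weight vector and 4-bit setting below are made up for illustration): weights get stored as small integers plus a scale, trading a little precision for a big memory saving.

```python
import random

def quantize(weights, bits=4):
    """Toy symmetric quantization: snap each float to one of 2^(bits-1)-1
    integer levels and back. The core idea: store small ints plus a
    scale instead of full-precision floats."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels
    q = [round(w / scale) for w in weights]   # these small ints are what gets stored
    return [i * scale for i in q]             # dequantized approximation

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(8)]
approx = quantize(weights, bits=4)
print(max(abs(a - b) for a, b in zip(weights, approx)))  # small vs. the [-1, 1] range
```

      At 4 bits you store an eighth of what fp32 needs, and for LLM inference the resulting error is usually tolerable.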

  • zeppo@lemmy.world

    I love Stable Diffusion, but I really have no use for ChatGPT. I’m amazed at how good the output can be; I just don’t have a need to generate text like that. Also, OpenAI has been making it steadily worse with ‘safety’ restrictions. I find it super annoying and even insulting when Bing/Sydney goes “THIS CONVERSATION IS OVER”. It’s like being chastised by Facebook or Twitter for being ‘violent’ when you made a joke.

    The ability to generate photographs and illustrations of practically anything, though, is fantastic. My girlfriend has been flagellating me into creating a bunch of really useless crap to promote her business on social media using SD, and I actually enjoy that part. I’ve made thousands of photos of scenery.

  • anlumo@feddit.de

    For my professional work, the training data is by now way too outdated for ChatGPT to be anywhere near useful. The browsing feature can’t make up for it either, because it’s pretty bad at Internet search (bad search phrases, etc.).

    • PupBiru@kbin.social

      I find that even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc.

      It doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter.

      • anlumo@feddit.de

        I’m old enough not to need a babysitter to use the Internet for research.

        It even told me a few times that its training data is too outdated and that there has probably been some progress in that area. I have to freaking push it to actually do a web search to update that knowledge, with prompts like “You have web access, use it!”. It then finds a few posts on Stack Overflow I’ve already seen and draws some incorrect conclusions from them.

        I’m way faster on my own.

  • binwiederhier@discuss.ntfy.sh

    I have noticed that I use it less myself. Honestly though, at least for me, it’s 90% related to the clunky and awkward UI of ChatGPT. If it were easy to natively type the prompt in the browser bar, I’d use it much more.

    Plus, the annoying text scrolling thingy … Just show me the answer already, hehe.

    • henrikx@lemmy.dbzer0.com

      The annoying text scrolling can’t really be removed: the model generates its reply one token at a time, and what you see is the output streaming as it’s produced.
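
      A toy sketch of what the UI is doing (fake_llm_stream is a made-up stand-in for the model, not OpenAI’s actual API): the frontend can only render tokens as they arrive, because the rest of the answer doesn’t exist yet.

```python
import time

def fake_llm_stream(reply: str):
    """Stand-in for an autoregressive model: yields the reply one
    token at a time, which is why the UI appears to 'type'."""
    for token in reply.split():
        yield token + " "

for chunk in fake_llm_stream("each token is shown as soon as it is sampled"):
    print(chunk, end="", flush=True)  # render immediately
    time.sleep(0.01)                  # the real latency comes from the model, not the UI
print()
```

      Buffering the whole reply and showing it at once is possible, but it wouldn’t arrive any sooner; it would just feel slower.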

  • SuperSleuth@lemm.ee

    The novelty has worn off. I jumped on board and tried out every bot when they were first released: Bard, Bing, Snapchat, GPT; I’ve given them all a go.

    It was a fun experience, asking them to write poems or delve into the mysteries of consciousness, as I got to know their individual personalities. But now, I mainly use them for searching niche topics or checking grammar, maybe the occasional writing.

    In fact, this very comment was reformatted by Bard, for instance. Though since Google integrated their LLM into Search (via Labs), I use them even less.