ChatGPT has a style-over-substance trick that seems to dupe people into thinking it’s smart, researchers found

Developers often prefer ChatGPT’s responses about code to those submitted by humans, despite the bot frequently being wrong, researchers found.

    • the_medium_kahuna@lemmy.world · 1 year ago

      But the fact is that you need to check every time to be sure it isn’t the rare inaccuracy. Even if it could cite sources, how would you know it was interpreting the source’s statements accurately?

      imo, it’s useful for outlining and getting ideas flowing, but anything beyond that high level, the utility falls off pretty quickly

      • DreamButt@lemmy.world · 1 year ago

        Ya it’s great for exploring options. Anything that’s purely textual is good enough to give you a general idea, and more often than not it will catch a mistake in its explanation if you ask for clarification. But actual code? Nah, it’s about 50/50 whether it gets it right the first time, and even then the style is never to my liking