According to the analytics firm’s report, worldwide desktop and mobile web traffic to ChatGPT dropped 9.7% from May to June, and 10.3% in the US alone. Users are also spending less time on the site overall: the amount of time visitors spent on chat.openai.com was down 8.5%.

The decline, according to David F. Carr, senior insights manager at Similarweb, is an indication of a drop in interest in ChatGPT and that the novelty of AI chat has worn off. “Chatbots will have to prove their worth, rather than taking it for granted, from here on out,” Carr wrote in the report.

Personally, I’ve noticed a sharp decline in my usage. What felt like a massive shift in technology a few months ago now feels mostly like a novelty. For my work, there just isn’t much ChatGPT can help me with that I can’t do better myself, and with less frustration. I can’t trust it for factual information or research. The written material it generates is so generic, formal, and missing the nuances I need that I either end up rewriting it or spend more time instructing ChatGPT on the changes I need than it would have taken to just write it myself in the first place. It’s not great at questions involving logic or any kind of grey area. It’s sometimes useful for brainstorming, but that’s about it. ChatGPT has simply fallen out of my workflow. That’s my experience, anyway.

  • Rhaedas@kbin.social · 1 year ago

    I didn’t either, actually. It seems to me that LLMs excel in situations where there is a large consensus on a topic, so the training weights hit close to 100%. Anyone who has read through or Googled for programming answers across the various sources online has seen how, among the correct answers, there are lots of deviations that muddy the waters even for a human browsing. That’s where the specialized fine-tuned versions, which hone in and eliminate a lot of the training noise, come in handy.