• Jo Miran@lemmy.ml · 28 points · 16 hours ago

    Although I agree, I think AI code generation is the follow-up mistake. The original mistake was offshoring coding so companies could fire qualified engineers.

    Not all offshore work is terrible, that’d be a dumb generalization, but there are some terrible shops out there. A few of our clients that opted to offshore are being drowned in absolute trash code. Given that we always have to clean it up anyway, I can see the use case for AI instead of that shop.

    • jacksilver@lemmy.world · 19 points · 14 hours ago

      I think the core takeaway is you shouldn’t outsource core capabilities. If the code is that critical to your bottom line, pay for quality (which usually means no contractors, local or not).

      If you outsource to other developers or to AI, they will most likely care less about the result, and someone else can just as easily come along and do the same thing.

      • Optional@lemmy.world · 4 points · 12 hours ago

        The core takeaway is that, with a few exceptions, the executives still don’t understand jack shit, and when a smooth-talking huckster dazzles them with ridiculous magic to make them super rich, they all follow him to the poke.

        Judges and executives understand nothing about computers in 2025. That’s the fucked-up part. AI is just how we’re doing it this time.

        • FauxLiving@lemmy.world · 1 point · 4 hours ago

          Companies that are incompetently led will fail, and companies that integrate new AI tools in a productive and useful manner will succeed.

          Worrying about AI replacing coders is pointless. Anyone who writes code for a living understands the limitations that these models have. It isn’t going to replace humans for quite a long time.

          Language models are hitting some hard limitations, and we’re unlikely to see improvements continue at the same pace.

          Transformers, Mixture of Experts, and some training-efficiency breakthroughs all happened around the same time, which gave the impression of an AI explosion. But the current models are already taking advantage of all of those, and we’re seeing pretty strong diminishing returns on larger training sets.
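
          For a rough sense of what those diminishing returns look like, here’s a sketch in Python using a Chinchilla-style power-law fit (loss ≈ E + A/N^α + B/D^β, after Hoffmann et al. 2022). The constants are approximately the paper’s fitted values, but treat the numbers as illustrative only:

          ```python
          # Chinchilla-style scaling law (Hoffmann et al., 2022):
          #   loss(N, D) = E + A / N**alpha + B / D**beta
          # Constants are roughly the paper's fitted values; illustrative, not authoritative.
          E, A, B = 1.69, 406.4, 410.7
          alpha, beta = 0.34, 0.28

          def loss(n_params: float, n_tokens: float) -> float:
              """Predicted pretraining loss for a model with n_params weights
              trained on n_tokens tokens."""
              return E + A / n_params**alpha + B / n_tokens**beta

          # Hold model size fixed and keep doubling the training set:
          N = 70e9  # 70B-parameter model
          prev = None
          for D in [1e12, 2e12, 4e12, 8e12]:  # 1T -> 8T tokens
              cur = loss(N, D)
              gain = "" if prev is None else f"  (improvement: {prev - cur:.4f})"
              print(f"{D:.0e} tokens: loss ~ {cur:.3f}{gain}")
              prev = cur
          # Each doubling of data shrinks the data term by only 2**-beta ~ 0.82,
          # so the absolute gains keep getting smaller.
          ```

          The exact numbers don’t matter; the point is that the data term decays as a power law, so each doubling of tokens buys roughly 18% less improvement than the last.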

          So language models, absent a new revolutionary breakthrough, are largely as good as they’re going to get for the foreseeable future.

          They’re not replacing software engineers; at best they’re slightly more advanced syntax checkers/LSPs. They may help with junior-developer-level tasks like refactoring or debugging… but they’re not designing applications.