The US Department of Defense has deployed machine learning algorithms to identify targets in more than 85 air strikes in Iraq and Syria this year.

The Pentagon has pursued this sort of capability since at least 2017, when it launched Project Maven, which sought suppliers capable of developing object-recognition software for footage captured by drones. Google pulled out of the project after its own employees revolted against using AI for warfare, but other tech firms have been happy to help out.

  • pearsaltchocolatebar@discuss.online · 10 months ago

    It depends on how well trained your foundation model (FM) is, really. AI/ML already outperforms humans at narrow tasks like certain cancer diagnoses, so there’s really no reason to think that using it in this instance would create more of a risk to civilians than a human operator.

    Most people’s experience with AI is ChatGPT or similar, but ChatGPT really isn’t a very good LLM. Plus, an LLM is only as good as your prompt engineering.

    All that being said, there should always be a human double-checking the targets in order to catch hallucinations.

    • AbouBenAdhem@lemmy.world · 10 months ago (edited)

      The issue behind the Jevons effect isn’t that the technology in question doesn’t work as advertised—it’s that, by reducing the negative consequences associated with a decision, people become increasingly willing to make that decision until the aggregate negative consequences more than cancel out the effect of the “improvement”.
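The arithmetic behind this can be made concrete with a toy sketch (the numbers below are made up purely for illustration, not real casualty or strike data): if an improvement halves the expected harm per decision, but the lower perceived cost makes the decision three times as common, aggregate harm still goes up.

```python
# Toy illustration of the Jevons effect. "Harm" is an abstract
# expected-cost unit and all numbers are hypothetical.

def aggregate_harm(harm_per_decision: float, num_decisions: int) -> float:
    """Total expected harm summed across all decisions made."""
    return harm_per_decision * num_decisions

# Before the "improvement": each decision carries harm 1.0, made 100 times.
before = aggregate_harm(harm_per_decision=1.0, num_decisions=100)

# After: per-decision harm is halved, but the decision, now cheaper to
# make, happens three times as often.
after = aggregate_harm(harm_per_decision=0.5, num_decisions=300)

print(before, after)  # 100.0 150.0 — aggregate harm rises despite the gain
```

The point is not that the per-decision improvement is illusory; it is that the improvement changes behavior, and the behavioral change can dominate.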

      • pearsaltchocolatebar@discuss.online · 10 months ago

        There’s really no reason to think this technology will fall victim to the Jevons paradox. These strikes are already happening remotely, and if AI/ML can better distinguish targets from civilians, there’s absolutely no reason to think civilian casualties will increase because of it.

        That’s like saying using AI/ML to screen for cancer will result in more people dying from cancer.

        You’re trying to apply an economic theory about the consumption of finite resources to a completely unrelated field.