• theComposer@beehaw.org · 1 day ago

    But it’s not just that “they effectively trained their model using OpenAI’s model”. The point Ed goes on to make is: if distillation is that effective, why hasn’t OpenAI done the same thing? The marvel of DeepSeek is how much more efficient it is, whereas Big Tech keeps insisting it needs ever bigger data centers.

    • masterspace@lemmy.ca · 16 hours ago

      They HAVE done that. It’s one of the techniques they use to produce things like o1-mini and the other mini models that run on-device.

      But that’s not a valid technique for creating new foundation models, only for creating refined versions of existing ones. You would never have been able to create, for instance, an o1 model from GPT-3.5 using distillation.
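
      For anyone curious what distillation actually looks like, here’s a minimal sketch in PyTorch. The tiny stand-in models and the temperature value are illustrative assumptions, not anything from OpenAI’s actual pipeline; the core idea is just that the small “student” is trained to match the large “teacher’s” softened output distribution instead of hard labels:

      ```python
      # Minimal knowledge-distillation sketch. The teacher/student here are
      # tiny stand-in MLPs, NOT real LLMs; sizes, temperature, and training
      # loop are illustrative assumptions only.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      torch.manual_seed(0)

      # A large frozen "teacher" and a small trainable "student".
      teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
      student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

      optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
      T = 2.0  # temperature: higher T softens the teacher's distribution

      for step in range(100):
          x = torch.randn(64, 32)  # unlabeled inputs; the teacher provides the targets

          with torch.no_grad():  # teacher is only queried, never trained
              teacher_logits = teacher(x)

          student_logits = student(x)

          # KL divergence between temperature-softened distributions;
          # the T*T factor keeps gradient magnitudes comparable across temperatures.
          loss = F.kl_div(
              F.log_softmax(student_logits / T, dim=-1),
              F.softmax(teacher_logits / T, dim=-1),
              reduction="batchmean",
          ) * (T * T)

          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
      ```

      Which is exactly the point above: this only works when you already have a strong teacher model to copy from.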