• BB_C@programming.dev · 2 months ago

      I already mentioned why! It’s a common pitfall. For example, try a large HTTP/2 transfer over a socket where TCP_NODELAY is not set (or rather, explicitly unset), and see how the transfer rate is limited because of it.
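
      For reference, a minimal sketch of how the option is toggled on a connected POSIX socket (error handling trimmed; the helper name is mine, for illustration):

          #include <netinet/in.h>
          #include <netinet/tcp.h>
          #include <sys/socket.h>

          /* Enable or disable TCP_NODELAY on a connected TCP socket.
           * enabled = 1 turns Nagle's algorithm off; enabled = 0 turns
           * it back on, i.e. "explicitly unsets" the option. */
          int set_nodelay(int sockfd, int enabled)
          {
              return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                                &enabled, sizeof(enabled));
          }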

      • lysdexic@programming.dev (OP) · 2 months ago

        The only thing that TCP_NODELAY does is disable packet batching/merging through Nagle’s algorithm. Supposedly that batching increases throughput by reducing the volume of redundant header information required to send small data payloads in individual packets, at the cost of higher latency. It’s a tradeoff between latency and throughput. I don’t see any reason for transfer rates to drop; quite the opposite. In fact, the few benchmarks I saw showed exactly that: TCP_NODELAY causing a drop in the transfer rate.
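
        To illustrate the batching being discussed, here is a hypothetical sketch of the small-write pattern that Nagle’s algorithm targets (the function and its arguments are mine, for illustration only):

            #include <string.h>
            #include <sys/socket.h>

            /* Many small application writes. With Nagle enabled
             * (TCP_NODELAY off), the kernel coalesces these into fewer,
             * fuller packets while an earlier packet is unacknowledged:
             * less header overhead, more latency. With TCP_NODELAY set,
             * each write may leave as its own small packet. */
            void send_fields(int sockfd, const char **fields, int n)
            {
                for (int i = 0; i < n; i++)
                    send(sockfd, fields[i], strlen(fields[i]), 0);
            }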

        There are also articles on the cargo cult behind TCP_NODELAY.

        But feel free to show your data.

            • BB_C@programming.dev · 2 months ago

              I specifically mentioned HTTP/2 because it should have been easy for everyone to both test and find the relevant info.

              But anyway, here is a short explanation, and the curl-library thread where the issue was first encountered.

              You should also find plenty of blog posts where “unexplainable delay”/“unexplainable slowness”/“something is stuck” is the premise, and then, after a lot of story development and “suspense”, the big reveal is that Nagle’s algorithm was at fault.
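
              The usual culprit in those stories is a write-write-read pattern, where Nagle’s algorithm interacts badly with the peer’s delayed ACKs. A hypothetical sketch (the names and buffers are mine, for illustration):

                  #include <stddef.h>
                  #include <sys/socket.h>

                  /* Classic pathological pattern with Nagle enabled: the
                   * second small write sits in the kernel until the first
                   * is ACKed, but the peer delays that ACK, waiting for
                   * more data or a response to piggyback on. The read then
                   * stalls for the delayed-ACK timeout (tens to hundreds
                   * of milliseconds) on every request/response exchange. */
                  void request_response(int sockfd,
                                        const char *hdr, size_t hlen,
                                        const char *body, size_t blen,
                                        char *reply, size_t rlen)
                  {
                      send(sockfd, hdr, hlen, 0);   /* small write: sent */
                      send(sockfd, body, blen, 0);  /* small write: held back */
                      recv(sockfd, reply, rlen, 0); /* stalls on delayed ACK */
                  }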

              As with many things TCP: a technique that may have been useful once ends up being counterproductive when used with modern protocols, workflows, and networks.