I’m currently watching the progress of a 4 TB rsync file transfer, and I’m curious why the speeds are lower than the theoretical read/write maximums of the drives involved. I know a lot can affect transfer speeds, so I’m not really asking why my particular transfer isn’t going faster; I’m more curious what the typical bottlenecks are.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~210 MB/s read/write speeds (edited: the 5.0 GB/s / 5.0 Gb/s figure I originally listed was the mistake; I was reading the SATA III protocol speed as the disk speed)
  • Files are being transferred using a simple rsync command
  • There are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor be limiting the speed? The available memory? Or the layout of the files themselves (whether or not they’re fragmented on the volumes)?
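
If it helps frame the question, here is roughly how I’d ballpark what each drive can actually sustain outside of rsync; the mount points are placeholders and this only gives a crude sequential write figure:

```python
import os
import time

def rough_seq_write(path, gib=1):
    """Write `gib` GiB to `path`, fsync, delete it, and report MiB/s.
    Crude: filesystem layout and caching still influence the number."""
    chunk = b"\0" * (4 * 1024 * 1024)       # 4 MiB per write()
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(gib * 256):          # 256 x 4 MiB = 1 GiB
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                # force the data onto the disk before stopping the clock
    elapsed = time.monotonic() - start
    os.remove(path)
    print(f"{path}: ~{gib * 1024 / elapsed:.0f} MiB/s sequential write")

# placeholder mount points for the source and destination drives
rough_seq_write("/mnt/src/_speedtest.tmp")
rough_seq_write("/mnt/dst/_speedtest.tmp")
```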

  • MNByChoice
    10 months ago

    Looks like you have your answer, but there are a crazy number of possible issues.

    The biggest cause is misreading the performance specs.
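
    To put rough numbers on that: SATA III’s 6 Gb/s is the link rate, not the disk rate, and a spinning disk sustains far less. A quick back-of-the-envelope comparison, using the ~210 MB/s drive figure from the question:

```python
# SATA III link rate vs. what a spinning disk actually sustains (ballpark figures)
link_rate_gbps = 6.0                       # SATA III line rate, gigabits per second
usable_mbps = link_rate_gbps * 1000 / 10   # 8b/10b encoding: 10 bits on the wire per data byte
disk_mbps = 210                            # sustained rate quoted in the question

print(f"SATA III usable bandwidth: ~{usable_mbps:.0f} MB/s")     # ~600 MB/s
print(f"Drive's sustained rate:    ~{disk_mbps} MB/s")
print(f"Headroom on the link:       {usable_mbps / disk_mbps:.1f}x")
```

    So for a single mechanical drive the cable is essentially never the limit; the platters are.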

    A partial list of other options:
    • Mechanical drives store data in rings; outer rings have higher speeds than inner ones due to constant angular velocity.
    • Seeks cost a lot of throughput on mechanical drives.
    • Oversubscribed drive cables.
    • HBA issues.
    • PCIe data path conflicts.
    • Slow RAM.
    • RAM full or busy.
    • Extra copies within RAM.
    • NUMA path issues (if drives are connected to different NUMA nodes; not an issue on desktops).
    • CPU too busy.
    • Transfer software doing extra things.
    • File system doing extra work.
    • RAID doing extra work.
    • NIC on a different NUMA node than the HBA (can be good or bad).
    • NIC sharing the data path in a conflicting way.

    There are others. Start with checking theoretical performance from data sheets.
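
    Beyond the data sheets, watching what each drive actually does while the transfer runs narrows it down quickly. A minimal sketch that samples /proc/diskstats on Linux; the device names are placeholders:

```python
import time

def sector_counts(devices):
    """Read cumulative sectors read/written per device from /proc/diskstats."""
    counts = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] in devices:
                # field 5 = sectors read, field 9 = sectors written (512-byte sectors)
                counts[fields[2]] = (int(fields[5]), int(fields[9]))
    return counts

def watch(devices=("sda", "sdb"), interval=2):
    """Print per-device read/write throughput every `interval` seconds."""
    prev = sector_counts(devices)
    while True:
        time.sleep(interval)
        cur = sector_counts(devices)
        for dev in devices:
            rd = (cur[dev][0] - prev[dev][0]) * 512 / interval / 1e6
            wr = (cur[dev][1] - prev[dev][1]) * 512 / interval / 1e6
            print(f"{dev}: read {rd:6.1f} MB/s  write {wr:6.1f} MB/s")
        prev = cur

watch()   # replace "sda"/"sdb" with the actual source and destination disks
```

    iostat -x (from sysstat) reports the same counters plus utilization and average wait times, which makes seek-bound transfers (lots of small files) easy to spot.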

    Also, details matter, and I don’t have enough of them to guess.