Most of the problems in the current internet landscape are caused by the cost of centralized servers. What is stopping us from running the fediverse on a peer-to-peer, torrent-based network? I would assume latency, but couldn’t that be solved by more aggressive pre-caching in clients? Of course interaction and authentication should be handled centrally, but media sharing, which is the largest strain on servers, could be eased by clients sending media between each other. What am I missing? Torrenting seems like such an elegant solution.

  • taladar@sh.itjust.works · 1 year ago

    Authentication and authorization cannot be handled centrally if the host performing the actual action you want to apply them to cannot be trusted.

    Media sharing is mainly a legal problem. With decentralized solutions you couldn’t easily delete illegal content and anyone hosting it would potentially be legally liable.

    • uis@lemmy.world · 1 year ago

      and anyone hosting it would potentially be legally liable.

      Bullshit in many countries. Well, I know two of them: the USA and Russia. The first requires intent; the second explicitly excludes P2P from liability.

  • Endorkend@kbin.social · 1 year ago

    Quite a few systems use torrent-style distribution.

    Heck, even Windows uses a distributed bandwidth system where you can set it to download chunks of updates from local networked systems.

    Technologies like BitTorrent, NoSQL databases, blockchain, AI and the like end up being used as invisible parts of systems once the idiotic hype about them wanes.

  • TCB13@lemmy.world · 1 year ago

    Better question: how come we aren’t all using XMPP for chat instead of the current mess we have? Why can’t we just bring back XMPP with encryption? I don’t get people: all for federation and shit, but when it comes to messaging everyone suddenly forgets that XMPP is the original and truly open messaging solution. Just like you can message anyone by email no matter what or where their server is. Pretty much like Lemmy: true interoperability.

    • asudox@lemmy.world · 1 year ago

      Currently, attempts at bringing back XMPP have been horrible. Some instances have different plugins, which often break things. Some have E2EE, some don’t. XMPP is great, but the current attempts are bad. Matrix is good enough for now.

      • TCB13@lemmy.world · 1 year ago

        I believe the worst part of XMPP isn’t the instances but the lack of a decent cross-platform client that actually supports everything and has a decent UI. The iOS clients, for example, are all shit. Without decent clients and push notifications, people won’t ever be using XMPP.

        Matrix is good enough for now.

        Questionable…

        • Apollo2323@lemmy.dbzer0.com · 1 year ago

          What’s wrong with Matrix, in your opinion? I find it works fine for chat and group chat. Video and audio calls may be lacking, but other than that it works fine.

          • asudox@lemmy.world · 1 year ago

            There are some issues, such as metadata leakage, and the server isn’t the best at being lightweight.

            • Apollo2323@lemmy.dbzer0.com · 1 year ago

              I have read that they have improved on the metadata leakage, and making the server lightweight is being worked on too, in the new version.

            • TCB13@lemmy.world · 1 year ago

              Exactly my concerns. Also, Matrix isn’t an open standard, truly standardized; it is far more prone to being taken over by some company or ecosystem later on. For what it’s worth, what even is the Matrix Foundation? Where does the money come from?

        • Chobbes@lemmy.world · 1 year ago

          The iOS clients have gotten miles better in recent years. It’s still far from perfect, but I’m grateful for the improvements.

              • TCB13@lemmy.world · 1 year ago

                Great. I used Monal a long time ago and it was buggy; maybe I’ll try it again.

                • Chobbes@lemmy.world · 1 year ago

                  I stopped using Monal in favour of Siskin, but I think it’s gotten a lot better recently too. My problem was that at some point Monal started sending notifications for every message in every chat room, so I uninstalled it. I assume this has been resolved.

    • mark@programming.dev · 1 year ago

      I feel the same way about RSS feeds. It’s a technology meant to keep up with updates on nearly anything across the internet. Even social media sites. It’s been available for ages. But no one is pushing for sites to provide them. 🤷‍♂️

    • wagesj45@kbin.social · 1 year ago

      The software landscape for XMPP isn’t the best. I twisted the arms of my immediate family and have them using XMPP messaging with a Snikket server I set up, and we’ve had lots of issues between OMEMO support and the lack of good messaging clients for iOS. It works, but it isn’t the smooth-out-of-the-box experience that non-techies want/need.

      • Apollo2323@lemmy.dbzer0.com · 1 year ago

        That’s another reason why I will never buy an iPhone: there are no apps available for niche stuff such as XMPP or managing torrents for my Linux ISOs.

        • bravemonkey@lemmy.ca · 1 year ago

          Do you download ISOs directly on your phone? If not, lots of clients have web interfaces that make it trivial to manage from any device with a browser.

    • uis@lemmy.world · 1 year ago

      SMTP is federation too. But a certain megacorp has basically fenced off a huge chunk of users.

      • TCB13@lemmy.world · 1 year ago

        Yes, that’s exactly my point. SMTP is federation: you can set up your own server and have interoperability with others, with 100% of its features working right, so you aren’t locked in to those megacorps. Chat applications should use XMPP to get the same for chat/video.

  • Bobby Turkalino@lemmy.yachts · 1 year ago

    Torrenting requires way more resources than people realize. It’s easy to look at your torrents’ download speeds and think “oh, that’s less than a normal download, like from Steam, so it must not take nearly as many resources” – it’s not all about bandwidth. The amount of encryption and hashing involved in torrenting is fairly CPU heavy (every ~4 MB piece has to be hashed and verified), especially if your CPU doesn’t have onboard encryption hardware (think mobile devices). The sheer number of connections involved even in just one torrent can also bog down a network like you wouldn’t believe – anyone who runs a home seedbox can attest.
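    To make that per-piece cost concrete, here is a minimal sketch of BitTorrent-v1-style piece hashing (the piece size and helper names are illustrative, not taken from any real client):

```python
import hashlib

PIECE_SIZE = 4 * 1024 * 1024  # ~4 MB, a common piece size

def split_pieces(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    # Chop the payload into fixed-size pieces, as the .torrent metadata does.
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    # The .torrent "pieces" field is the concatenation of these 20-byte digests.
    return [hashlib.sha1(p).digest() for p in split_pieces(data, piece_size)]

def verify_piece(piece: bytes, expected_digest: bytes) -> bool:
    # Run on every piece received from a peer; a mismatch means the piece is
    # discarded and re-requested elsewhere. This hash is the recurring CPU cost.
    return hashlib.sha1(piece).digest() == expected_digest
```

    A client does this for every piece of every active torrent, which is why the cost scales with the number of concurrent downloads, not just bandwidth.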

    • uis@lemmy.world · 1 year ago

      “oh, that’s less than a normal download, like from Steam, so it must not take nearly as many resources”

      For me it’s always more.

      The amount of encryption and hashing involved in torrenting is fairly CPU heavy

      The same amount of encryption HTTPS requires. Hashing is completely optional and not required for operation; encryption is optional too, but other peers may require it.

      every ~4 MB piece has to be hashed and verified

      Which is the same. I’m not sure, but Steam probably verifies files at some stage too.

      The sheer number of connections involved even in just one torrent can also bog down a network like you wouldn’t believe – anyone who runs a home seedbox can attest.

      There is no difference, for the network itself, between 4k packets across 500 connections and 4k packets on one connection. IP and UDP don’t even have such a concept; the network is stateless. But shitty routers with small conntrack tables, on the other hand…

  • makeasnek@lemmy.ml · 1 year ago

    The short answer is that while torrents show great promise for content distribution (as an alternative to CDNs, for example), they inherently rely on some centralized resources and don’t make sense for a lot of use cases. Most websites are a bunch of small files, and torrenting is really much more useful for offloading large bandwidth loads; on small files, the overhead of torrents is a waste. That’s why your favourite Linux ISO has a torrent but your favourite website doesn’t.

    One major issue is the difficulty of accurately tracking the contribution of each member of the swarm. I download a file and I seed it to the next person. Sounds great, right? But what if the next person doesn’t come along for a long time? Do I keep that slot open for them just in case? For how long? How do I prove I actually “paid my dues”, whether that was waiting for peers or actually delivering to them? How do we track users across different swarms? Do we want a single user ID to be tracked across all the content they’ve ever downloaded? When you get into the weeds with these kinds of questions, you can see how quickly torrenting becomes a poor fit for a number of use cases.

    Being somewhat centralized, by the way, is how BitTorrent solved the spam issue which plagued P2P networks prior to it. Instead of searching the entire network and everything it contains (and everything every spammer added to it), you instead rely on a trusted messenger like a torrent index to find your content. The torrent file or magnet link points to an entry in the DHT and there you go: no need to worry about trusting peers, since you are downloading a file by hash, not by name. And you know the hash is right because some trusted messenger gave it to you. Without some form of centralization (as in previous P2P networks), your view of the network was whatever your closest peers wanted it to be, which you essentially got assigned at random and had no reason to trust or not trust. You couldn’t verify they were accurately letting you participate in the wider network. Even a 100% trustworthy peer was only as good as the other peers they were connected to. For every one peer passing you bad data, you needed at least two peers to prove them wrong.
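    The “download by hash” idea can be sketched in a few lines: the infohash in a magnet link is just the SHA-1 of the bencoded info dictionary, so data from any untrusted peer can be checked against it. (A toy bencoder with made-up field values; a real info dict also carries the piece-hash list.)

```python
import hashlib

def bencode(value) -> bytes:
    # Minimal bencoder covering the four types the format defines.
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # Dictionary keys must be emitted in sorted raw-byte order so the
        # encoding, and therefore the hash, is deterministic.
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value).__name__}")

def infohash(info: dict) -> str:
    # The 40-hex-digit identifier you see in magnet links.
    return hashlib.sha1(bencode(info)).hexdigest()
```

    Because the encoding is canonical, everyone who hashes the same info dict gets the same identifier, regardless of who handed them the metadata.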

    Blockchain gets us close to solving some of these problems, as we now have technology for establishing distributed ledgers which could track things like network behavior over time, upload/download ratio, etc. This solves the “who do I trust to say this other peer is a good one?” problem: you trust the ledger. But an underlying problem in applying blockchain here is that ultimately people are just going to be self-reporting their bandwidth. Get another peer to validate it, you say? Of course! But how do we know that peer is not the same person (how do we avoid Sybil attacks)? Until we have a solid way to do “proof of bandwidth” or “proof of network availability”, that problem will remain. Many people are working on it (proof of storage has already been solved, so perhaps this could be solved in a similar way), but as of right now I know of no good working implementation that protects against Sybil attacks. Then again, if you can use blockchain or some other technology to establish some kind of decentralized datastore for humanity, you don’t need torrents at all, as you would instead be using that other base-layer protocol for storage and retrieval.

    IPFS was intended as a decentralized replacement for much of the way the current internet works. It was supposed to be this “other protocol”, but the system is byzantinely complex and seems to have suffered from a lack of usability, good leadership, and promotion. When you have an awesome technology and nobody uses it, there are always good reasons for the lack of adoption. I don’t know enough about those reasons to really cover them here, but suffice to say they do exist. Then again, IPFS has been around for a while now (since 2015) and people use it for stuff, so clearly it has some utility.

    That said, if you want to code on this problem and contribute to helping solve data storage/transmission problems, there are certainly many OSS projects which could use your help.

  • onlinepersona@programming.dev · 1 year ago

    You’re basically talking about IPFS. I think their problem is that they gave it a local HTTP interface and the documentation is… in need of improvement.
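    For reference, that local HTTP interface looks roughly like this with the Kubo (go-ipfs) daemon’s default addresses; the ports and paths below are the daemon’s defaults and may be configured differently, and the CID in the test is made up:

```python
from urllib.parse import urlencode

# Kubo defaults: an RPC API on port 5001 and a read-only gateway on 8080.
RPC_BASE = "http://127.0.0.1:5001/api/v0"
GATEWAY_BASE = "http://127.0.0.1:8080"

def rpc_url(command: str, **params: str) -> str:
    # Kubo RPC commands are POSTed to paths like /api/v0/cat?arg=<cid>.
    query = urlencode(params)
    return f"{RPC_BASE}/{command}" + (f"?{query}" if query else "")

def gateway_url(cid: str) -> str:
    # The gateway serves content-addressed data over plain HTTP.
    return f"{GATEWAY_BASE}/ipfs/{cid}"
```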

    • Acters@lemmy.world · 1 year ago

      I always shy away from newer tech because of lackluster documentation and poor leadership, though the latter is rare enough. Without proper documentation, I feel like I have to read the code and make my own notes to put into their documentation platform, which is not what I want to do when I use something. Contributing is nice, but doing the work a core member would do, without credit, dissuades me from participating.

      • onlinepersona@programming.dev · 1 year ago

        I know that feeling. Curiosity often gets the better of me though: I’m a Nix/NixOS user, and it’s amongst the worst-documented projects I’ve come across.

  • Eggymatrix@sh.itjust.works · 1 year ago

    I would want neither to deal with the security issues nor to pay the data costs associated with an app being able to connect to my phone to download media.

        • henrikx@lemmy.dbzer0.com · 1 year ago

          Don’t see how this is much different from today’s way of doing things, where pretty much everyone is a freeloader on the centralized server. The major benefit is that it doesn’t have to be just one server anymore.

          • Centralized servers have a vested interest in serving content; they’re built around expecting freeloaders.

            Torrent is designed around popularity. If a user seeks less popular content, it can easily be unavailable or slow. The fact that many expert seeders have leecher-hostile configurations exacerbates this.

            To answer the original question: because it’s a worse experience for casual users, which drives them to a centralized model. There’s also the issue that by changing the networking technology you cut off a huge percentage of your target audience, creating an adoption hurdle; cf. Gemini, which, despite enormous user benefits, has foundered in creating a critical mass that would make it worthwhile for content creators to bother publishing on it. Mastodon and Lemmy have been successful in part because they don’t require users to download a bespoke app: they’re built on the web, which is centralized in design.

            Fix the Torrent content-availability design issues and you end up with something like Freenet, which ends up not only slower (if, in the end, more reliable) but far more resource-hungry and unsuitable for mobile. IPFS might end up being a good alternative, but right now it’s still a bit technical for casual users, is slower than the centralized web, and can also be resource-heavy without client tuning.

            Freenet has a good design and addressed some other Torrent weaknesses, such as the lack of content anonymity, but it also has some inherent flaws which may be unresolvable.

          • folkrav@lemmy.ca · 1 year ago

            The trackers themselves are centralized. The .torrent file you download from a private tracker has a unique private ID tied to your account, which the torrent client advertises to the tracker when it phones home to the announce URL, alongside your leech/seed metadata.
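            That phone-home request can be sketched like so; the host, passkey, and peer ID below are made up, and the passkey-in-path layout varies per tracker, though it is a common convention:

```python
from urllib.parse import urlencode

def announce_url(tracker_base: str, passkey: str, info_hash: bytes,
                 peer_id: bytes, uploaded: int, downloaded: int,
                 left: int, port: int = 6881) -> str:
    # Standard announce parameters; note that `uploaded`/`downloaded` are
    # self-reported by the client, which is what the tracker's ratio
    # bookkeeping is built on.
    params = urlencode({
        "info_hash": info_hash,   # 20-byte SHA-1 of the info dict
        "peer_id": peer_id,       # 20-byte client identifier
        "port": port,
        "uploaded": uploaded,
        "downloaded": downloaded,
        "left": left,
    })
    return f"{tracker_base}/{passkey}/announce?{params}"
```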

  • andruid@lemmy.ml · 1 year ago

    For me, I’ve had issues getting organizational support for using anything close to P2P, with things like “keep that botnet off my system” being said. On the personal side, I had issues with ISPs assuming the traffic was illegal in nature and sending me bogus cease-and-desist notices.

    Agreed though. At least WebRTC has a strong market. IPFS and other web3 things have also tried to find footholds in common use, so the fight isn’t over for sure!

    • uis@lemmy.world · 1 year ago

      On personal side I had issues with ISPs assuming traffic was illegal in nature and sending me bogus cease and desist notices.

      On the other hand, check whether you can sue them for bogus cease-and-desists. If you can, do it after changing ISPs.

  • Dunstabzugshaubitze@feddit.de · 1 year ago

    Most clients are web browsers, and to a browser a torrent fetched over HTTP is just another file like any other.

    So that would only give us a use for torrents as a form of content distribution platform, to get the actual files closer to the client.

    In cases where we have actual non-browser clients: I like to curate what I am distributing and don’t want to share anything I happen to stumble upon. Or would you be willing to store and, more importantly, share everything you find on 4chan or whatever might show up in your Mastodon feed?

    • xoggy@programming.dev · 1 year ago

      Gotta love the heavy use of buzzword technologies and no actual information on what it actually is. Then you click the “How does it work?” button and it takes you to a Google powerpoint… so much for the sleek website design.

    • blackfire@lemmy.world · 1 year ago

      This is the only system I am aware of using torrent-based content sharing. It’s not a great system though, as you are essentially downloading a whole archive every time you connect, so it just grows and grows unless you set some retention policy.

  • gila@lemm.ee · 1 year ago

    The instance is the aggregator; if it’s P2P, then the aggregation is done by the client. In a torrent swarm you contribute bandwidth, not processing power.

    • bigboismith@lemmy.world (OP) · 1 year ago

      I’m not necessarily experienced with server hosting, but isn’t bandwidth the primary cost of fediverse instances, for example? There shouldn’t be too much logical work compared to delivering content?

      • themoonisacheese@sh.itjust.works · 1 year ago

        Fediverse instances with image hosting are bandwidth limited, but that’s just a normal result of image hosting. If you remove image hosting then the bottleneck becomes processing power again.

        • cogman@lemmy.world · 1 year ago

          AFAIK, the main bottleneck is data storage. Related to processing power, but also IO and having a central source of truth.

          • themoonisacheese@sh.itjust.works · 1 year ago

            Sure, but data storage is quite cheap these days. I’m not saying it isn’t a problem, but a 12 TB RAID goes a really long way, or AWS S3 charges pennies per GB per month and solves all your problems if you’re prepared to spend tens of dollars per month.

            Bandwidth, on the other hand, is either inaccessible (read: you have a normie ISP that has at most two speeds to sell you, neither of them with guarantees) or extremely expensive, on the order of thousands per month. On top of that, if you happen to pay AWS for storage, each request must be forwarded to AWS, converted in some way by your server, and then sent to the client, which means it eats both up and down bandwidth. Of course, if you know what you’re doing you can use Amazon’s CDN, but at that point administering your instance is a full-time job and your expenses are those of a small company.
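            Back-of-envelope numbers for that tradeoff (unit prices are assumptions for illustration, roughly S3-like; real pricing varies by region, tier, and provider):

```python
# Assumed unit prices, for illustration only.
STORAGE_PER_GB_MONTH = 0.023   # object storage, $/GB-month
EGRESS_PER_GB = 0.09           # data transfer out, $/GB

def monthly_cost(stored_gb: float, served_gb: float) -> float:
    return stored_gb * STORAGE_PER_GB_MONTH + served_gb * EGRESS_PER_GB

# Storing 1 TB of media costs about $23/month; serving 5 TB of it costs
# about $450/month. Bandwidth, not storage, dominates the bill.
storage_part = monthly_cost(1000, 0)
egress_part = monthly_cost(0, 5000)
```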

      • gila@lemm.ee · 1 year ago

        Sure, but this is largely because currently each client doesn’t need to aggregate the whole fediverse. In a decentralised network, you can’t split the total processing required to run the fediverse equally amongst peers: each peer would need to do more or less the same aggregation job, so the total processing required would be far more than with the current setup. You could still argue it’s a negligible processing cost per client, but it’s certainly way less efficient overall, even if we assume perfect I/O etc. in the P2P system and even if the client only needs to federate the user-selected content.

        Also, practically deploying a client app that can federate/aggregate constantly in the background (kind of required for full participation) and scale with the growth of the fediverse without becoming a resource hog would, I imagine, be pretty tough. Maybe possible, yeah, but I feel like it makes sense why it isn’t like that.

  • deur@feddit.nl · 1 year ago

    I was thinking about the possibility of torrent-based clones of public git repos, but that isn’t going to pan out.