Today kbin.social is blocking a huge list of domains just to get federation working again.
The reason for this temporary block is not to defederate, but rather to get the large backlog of ~500k messages in the messenger queue processed again. Anyway, this does mean that kbin.social is federating again with other instances.
This is a temporary measure. Several users / developers are looking into how to better optimize the failed message queue as we speak. Hopefully Ernest will eventually have time to dive into real solutions as well, instead of workarounds, once his instance is migrated to Kubernetes. See my previous thread: https://kbin.melroy.org/m/updates/t/4257/Kbin-federation-issues-and-infra-upgrade
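To make the workaround a bit more concrete: kbin itself is a PHP/Symfony application (hence the "messenger queue" above), so the snippet below is only an illustrative Python sketch with hypothetical names, not kbin's actual code. It shows the basic idea of a delivery worker that skips destinations on a temporary blocklist, so failed deliveries stop being retried and the backlog can finally drain.

```python
# Illustrative sketch only (not kbin's real code): a delivery worker that
# skips destinations on a temporary blocklist instead of retrying them.
from urllib.parse import urlparse

import requests  # any HTTP client would do; assumed installed

BLOCKED_DOMAINS = {"lemmy.world", "beehaw.org"}  # excerpt of the list below
MAX_RETRIES = 3

def deliver(job: dict, queue: list) -> None:
    """Try to POST one queued activity to a remote inbox."""
    domain = urlparse(job["inbox"]).hostname
    if domain in BLOCKED_DOMAINS:
        return  # drop silently: no retries, so the backlog can drain
    try:
        requests.post(job["inbox"], json=job["activity"], timeout=10)
    except requests.RequestException:
        job["attempts"] = job.get("attempts", 0) + 1
        if job["attempts"] < MAX_RETRIES:
            queue.append(job)  # unreachable hosts keep re-entering the queue
```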
List of the domains causing trouble:
lemmygrad.ml, eientei.org, vive.im, lemmy.ml, lemmynsfw.com, kbin.lol, lemmy.webgirand.eu, tuna.cat, posta.no, lemmy.atay.dev, sh.itjust.works, kbin.stuffie.club, kbin.dssc.io, bolha.social, dataterm.digital, kbindev.lerman-development.com, test.fedia.io, mer.thekittysays.icu, lemmy.stark-enterprise.net, kbin.rocks, kbin.cocopoops.com, kbin.lgbt, lemmy.deev.io, lemmy.lucaslower.com, lemmy.norbz.org, social.jrruethe.info, digitalgoblin.uk, pwzle.com, lemmy.friheter.com, federated.ninja, lemmy.shtuf.eu, u.fail, arathe.net, lemmy.click, thekittysays.icu, lemmy.ubergeek77.chat, lemmy.maatwo.com, faux.moe, eslemmy.es, seriously.iamincredibly.gay, test.dataharvest.social, programming.dev, kbin.knocknet.net, pawb.social, lucitt.social, longley.ws, kbin.dentora.social, atay.dev, lemmy.kozow.com, ck.altsoshl.com, pawoo.net, techy.news, lemmy.vergaberecht-kanzlei.de, lemmyonline.com, beehaw.org, pouet.chapril.org, kbin.pcft.eu, fl0w.cc, lemmy.sdf.org, lemmy.zip, feddit.dk, fedi.shadowtoot.world, lemmy.noogs.me, lemmy.kemomimi.fans, social.agnitum.co.uk, fediverse.boo, hive.atlanten.se, forkk.me, lemmy.ghostplanet.org, lemmy.mayes.io, lemmy.mats.ooo, lemmy.world, lemmy.sdfeu.org, lemmy.death916.xyz, geddit.social, masto.fediv.eu
Does this point to an inherent problem with the federated approach; i.e. that every instance has to be able to handle the load of the content on all other instances it federates with?
Pardon if I’m misunderstanding something. But it seems like a big barrier to entry for new instances if e.g. an instance with 100 users has to sync the contents from 100,000 other users to work properly. As the fediverse keeps growing and the requirements to host instances keep increasing, won’t it end up where only a few instances have the money / resources to handle the load?
My understanding is that they don't have to handle all traffic from all instances, but rather all the content that anyone on that instance interacts with. So if you made one for your own personal use, the requirements only scale with the number of instances you interact with.
It does seem like it’s going to be a big issue specifically when interfacing with the really large instances, though. We’ll see how it goes.
Well, actually… big instances like kbin.social (and the same goes for other big instances or software) need to process not only the ActivityPub outbox but also the inbox. So this thread and its comments need to be sent to several instances, and each of those instances then needs to process them. The other way around is also true: I created this thread, but it needed to be sent out to multiple instances (which, from their server's perspective, lands in their inbox). Kbin.social mainly had issues with sending out messages (the outbox), because many instances are not responding or are blocked, causing retries and eventually an ever-growing queue.
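To illustrate that fan-out (again a hedged Python sketch with hypothetical names, not kbin's actual implementation): every local comment turns into one delivery job per remote instance that follows the magazine, and every remote comment arrives as an inbox activity that also has to be processed locally, so a single slow or dead peer multiplies into retries across many jobs.

```python
# Sketch of ActivityPub fan-out (hypothetical names, not kbin's code).
from collections import deque

def fan_out(activity: dict, subscriber_inboxes: list[str], outbox_queue: deque) -> None:
    """One local comment -> one delivery job per remote subscriber."""
    for inbox in subscriber_inboxes:
        outbox_queue.append({"inbox": inbox, "activity": activity, "attempts": 0})

def handle_incoming(activity: dict, local_store: list) -> None:
    """The reverse direction: remote instances POST to our inbox,
    and every one of those activities has to be processed locally too."""
    local_store.append(activity)

# Example: one comment on a magazine followed by 75 remote instances
# immediately creates 75 outbox jobs; if many of those hosts time out,
# each job is retried and the queue grows far faster than it drains.
outbox_queue: deque = deque()
fan_out({"type": "Create", "object": "a comment"},
        [f"https://instance{i}.example/inbox" for i in range(75)],
        outbox_queue)
print(len(outbox_queue))  # 75
```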
So it’s two-fold: the underlying technology and the amount of data it can handle.
Expect growing pains as they (the instances) find tech that works.
The underlying technology (ActivityPub) indeed has quite some downsides in terms of scalability. At the same time, large fediverse instances need to process large amounts of data (not only local data but also external data from remote instances). Plus, /kbin was still in an early development phase and not fully ready to scale yet, so the big migration (due to Reddit …) came unexpectedly. All the things I just mentioned are now coming together, all at once.