For anyone wanting to contribute on a smaller, more feasible scale, you can help distribute their database by seeding torrents.
I know the last time this came up there was a lot of user resistance to the torrent scheme. I’d be willing to seed 200–500 GB, but minimum torrent archive sizes of 1.5 TB and larger really limit the number of people willing to give up that storage, and defeat a lot of the resiliency of torrents with how bloody long it takes to get a complete copy. I know that 1.5 TB takes a massive chunk out of my already pretty full NAS, and I passed on seeding the first time for that reason.
It feels like they didn’t really subdivide the database as much as they should have…
There are plenty of small torrents. Use the torrent generator and tell the script how much space you have and it will give you the “best” (least seeded) torrents whose sum is the size you give it. It doesn’t have to be big, even a few GB is suitable for some smaller torrents.
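The selection logic described above (pick the least-seeded torrents whose total size fits your space budget) amounts to a simple greedy pass. Here’s a rough sketch of the idea — the field names and the example catalog are made up for illustration, not the actual script’s format:

```python
# Hypothetical sketch of a least-seeded-first torrent picker.
# The catalog schema (name / size_gb / seeders) is assumed, not
# Anna's Archive's real torrent list format.

def pick_torrents(torrents, budget_gb):
    """Greedily select the least-seeded torrents whose combined
    size stays within budget_gb."""
    selected = []
    remaining = budget_gb
    # Prioritise torrents with the fewest seeders.
    for t in sorted(torrents, key=lambda t: t["seeders"]):
        if t["size_gb"] <= remaining:
            selected.append(t)
            remaining -= t["size_gb"]
    return selected

catalog = [
    {"name": "aa_part_01", "size_gb": 300, "seeders": 12},
    {"name": "aa_part_02", "size_gb": 150, "seeders": 2},
    {"name": "aa_part_03", "size_gb": 80,  "seeders": 1},
    {"name": "aa_part_04", "size_gb": 500, "seeders": 0},
]

# With a 250 GB budget, the 500 GB torrent is skipped even though
# it has zero seeders, and the two small under-seeded ones win.
print([t["name"] for t in pick_torrents(catalog, 250)])
# → ['aa_part_03', 'aa_part_02']
```

The point is that even a small budget (a few GB) can still match some under-seeded torrent, which is why it’s worth running the generator at all.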
Almost all the small torrents I see pop up are already seeded relatively well (~10 seeders) though, which reinforces the fact that A. the torrents most desperately needing seeders are the older, largest ones, and B. large torrents don’t attract seeders because of unreasonable space requirements.
Admittedly, newer torrents seem to be split into pieces of 300 GB or less, which is good, but there are still a lot of monster torrents in that list.
Thx.
Do you know how useful it is to host such a torrent? Who is accessing the content via that torrent?
Anyone who wants to. I think a lot of LLM trainers access them.
Doesn’t sound like I should host some of it. I’d be more down to host it for end users.
how big is the database?
books can’t be that big, but i’m guessing the selection is simply huge?
The selection is literally all books that can be found on the internet.
So how big is that?
I guess more than 5?
I imagine a couple of terabytes at the very least, though I could be underestimating how many books have been de-DRMed so far.
Apparently it’s 900TB
Girl, what? No wonder they’re having trouble hosting their archive. Does Anna’s Archive host copyrighted content as well or is all that copyleft?
They host academic papers and books, most of which are copyrighted content. They recently got in trouble for scraping a book metadata service to generate a list of books that haven’t been archived yet: https://torrentfreak.com/lawsuit-accuses-annas-archive-of-hacking-worldcat-stealing-2-2-tb-data-240207/
Is hosting all that stuff even legal? I mean, they’re not making any money off of it, but they’re still a “piracy” hub. How have they survived this long?
They index, not host, no? (Unless you count the torrents, which are distributed)
The archive includes copyrighted works. Often multiple copies of each work, across different formats.
bigger than zlib or project Gutenberg?
It is huge! They claimed to have preserved about 5% of the world’s books.
oh i actually thought it was way more! there wasn’t a single book i wanted (or even thought to look up) that i didn’t find in there.
Could anyone broad-stroke the security requirements for something like this? Looks like they’ll pay for hosting up to a certain amount, and between that and a pipeline to keep the mirror updated I’d think it wouldn’t be tough to get one up and running.
Just looking for theory - what are the logistics behind keeping a mirror like this secure?
Could be worth asking on selfhosted (how do I link a sub on lemmy?) They probably have more relevant experience at this sort of thing.
Edit
Does this work?
!selfhosted@lemmy.world might work for more people.
Is probably more suitable. I’d be interested in the total size, though.
900 TB, according to other comments here.
Is it all or nothing sort of deal?
There are partial torrents, also according to the other comments.
It does. 😉
They outline it pretty well here:
This is a fascinating read
Also link any ways to donate if they’re accepting that.
I had no idea about this project. Is it like a better search engine for libgen etc?
It searches through libgens, Z-Library and has its own mirrors of the files they serve on top of that. I think it was created as a response to Z-Library’s domains getting seized, but I could be wrong.
It has way more content than Libgen