Deleted something I shouldn’t have. I learned my lesson, but I had to revert to a backup that was about 3 days old. My bad.

  • seahorse [Ohio] (OP) · 1 year ago

    I’m not sure what I could do with it unfortunately. If it can recover itself, that would be awesome.

    • Wander@yiffit.net · 1 year ago

      I don’t think there’s an easy way to “replay” them. But in theory, you should be able to take the entries in that table related to midwest social from any other instance and start broadcasting them anew. The remote instances will reject them because to them they are duplicates, but you would be able to recover lost content.

      Now I realize this is far more complex, but in theory it should be possible to create a tool that does this specifically for these scenarios.

      I’m sorry this happened to you :(

      • seahorse [Ohio] (OP) · 1 year ago

        Thanks. Yeah, this is what happens when my need to try everything to resolve an issue gets the better of me too fast. My filesystem was 90% full and I didn’t want it to run out, so I deleted something I shouldn’t have. If you have any idea why my disk would be using up lots of storage even though I’m using an S3 bucket and have my logs limited, let me know.

        • trafguy · 1 year ago

          Thanks for the transparency! I don’t mind, mistakes happen, but I understand it’s frustrating and that the lost content is a bit of a problem.

          There was a post about that on a Lemmy admin community a few days ago. Someone with a userbase of ~1k was eating up a GB/day on average. IIRC there were lots of logs, but also, if I understand correctly, every server stores mirrors of the data from anything its users subscribe to. That could eat up a lot of storage pretty quickly as the fediverse scales up.

          If you wanted to suggest a shift for improved scalability, maybe servers could form tight pacts with a few others that mirror each other’s content, and more loosely federated servers would load data directly from the pact. A compromise between ultimate content preservation and every larger server having to host the entire fediverse.

          So basically, a few servers would form a union. Each union would be a smaller fediverse (like cells in an organ), and the unions would connect to each other to form the fediverse/body.


          Also, are users who joined in the past few days affected? I suppose they might need to sign up again.

          • seahorse [Ohio] (OP) · 1 year ago

            Yeah, if they signed up in the last few days they’ll need to do it again. Ugh.

        • Wander@yiffit.net · 1 year ago

          Yes, that might actually be the activitypub table in the database. You can safely delete older entries, e.g. anything from two weeks ago or older. Otherwise it just keeps growing with the logs of all activitypub objects that the server has sent out or received.

            • seahorse [Ohio] (OP) · 1 year ago

            Do you know where I can find the query to do that? Databases are not my forte.

              • Wander@yiffit.net · edited · 1 year ago

              Yes, let me log in to my server and try to retrieve the exact query I used. BRB

              Edit: here it is, the table is actually called activity

              DELETE FROM activity WHERE published < '2023-06-27';

              Just make sure to change the date to whatever you need. Leaving two weeks is more than enough to detect and refuse duplicates.
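              If you’d rather not hardcode a date, the same cleanup can be written with a rolling two-week window (an untested sketch of the same idea, using the published column from the query above):

              ```sql
              -- Optional sanity check: see how many rows would be removed
              SELECT count(*) FROM activity WHERE published < now() - interval '14 days';

              -- Same cleanup as above, but always keeps the last two weeks
              DELETE FROM activity WHERE published < now() - interval '14 days';
              ```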

              To get access to the database you should probably be able to run docker exec -it midwestsocial_postgres_1 busybox /bin/sh, and then open Postgres with psql -U username; the default username is ‘lemmy’.

              Then connect to the database with \c lemmy

              You can list tables with \dt and view definitions for each table with \d+ tablename. For example \d+ activity.

              You can get some sample data from the table with SELECT * FROM activity LIMIT 10; You’ll see that the activity table holds activitypub logs and should be cleared out regularly, as mentioned by dessalines in this post: https://github.com/LemmyNet/lemmy/issues/1133
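              If you prefer a one-shot, non-interactive variant, the steps above can be collapsed into a single command (just a sketch; the container name is the one assumed earlier in the thread, so check docker ps for yours):

              ```shell
              # Run the cleanup without opening an interactive shell.
              # Container name as assumed above; verify with `docker ps`.
              docker exec -i midwestsocial_postgres_1 \
                psql -U lemmy -d lemmy \
                -c "DELETE FROM activity WHERE published < now() - interval '14 days';"
              ```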

              Important

              After deleting the entries (which could take a few minutes depending on how much data the table holds) you will not see a difference in the filesystem. The database keeps that freed-up space for itself, but your backups should get much lighter, and the filesystem itself will stop growing, at least until the table has grown back to its previous size.

              If you want to release that space back to the filesystem you need to run a “vacuum full”, but that requires downtime and could take several minutes, even up to an hour, depending on how much space was used up and how much is still free. I haven’t done this myself, since my backups have gone down in size and I don’t need the extra free space in the filesystem as long as I stop the database from growing out of control again.
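              For reference, the two vacuum variants behave quite differently (this is standard Postgres behavior, not Lemmy-specific):

              ```sql
              -- VACUUM FULL rewrites the table and returns the space to the
              -- filesystem, but takes an exclusive lock: plan for downtime
              VACUUM FULL activity;

              -- A plain VACUUM only marks dead rows as reusable inside the
              -- database files; it doesn't shrink them, but it doesn't block
              -- normal reads and writes either
              VACUUM activity;
              ```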