• 0 Posts
  • 128 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • VoterFrog@lemmy.world to Political Memes@lemmy.world · Crazy · edited 2 days ago

    Thing is, normalizing stupidity and shittiness doesn’t help anything either. It actively makes things worse. I don’t know what the actual answer is for improving things with that kind of person, though. I just think pretending that being a shitbag is acceptable behavior ain’t it.



  • I’ve got something similar except the number of items I want to produce is set by the constant combinator. The new logistics groups are awesome for this because I can have the combinator synced with things I want in my inventory.

    I also have the radar network set up where trains report the items they have and stations request trains with items they want so I can request more materials be delivered to my omni-assembler when I need them.

    The downside I’ve been wanting to fix is the need to specify all of the intermediates that are needed. That’s not too hard to fix, of course, just attempt to make ingredients that are missing (like you’re doing).

    I’ve also been wanting to try and change from using a constant combinator to using the requests on the logistics network. So then all you’d have to do to get something added to the recipe list is start requesting it.
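
    As a rough illustration of the request logic described above (this is my own simplified sketch, not actual Factorio combinator circuitry; the recipes, item names, and yields are made up), the idea of "craft whatever is below target, and recurse into missing intermediates" looks something like:

```python
# Hypothetical sketch: targets come from a constant combinator (or logistics
# requests), inventory from the logistics network. Recipes are illustrative
# only and ignore crafting yields for simplicity.

RECIPES = {
    "electronic-circuit": {"iron-plate": 1, "copper-cable": 3},
    "copper-cable": {"copper-plate": 1},
}

def crafting_queue(targets, inventory):
    """Return (item, count) pairs to craft, missing intermediates first."""
    queue = []

    def request(item, needed):
        have = inventory.get(item, 0)
        shortfall = needed - have
        if shortfall <= 0:
            return  # already have enough, nothing to craft
        # Recurse: make sure missing ingredients get crafted first.
        for ingredient, per_craft in RECIPES.get(item, {}).items():
            request(ingredient, shortfall * per_craft)
        queue.append((item, shortfall))

    for item, target in targets.items():
        request(item, target)
    return queue

print(crafting_queue(
    {"electronic-circuit": 10},
    {"iron-plate": 20, "copper-cable": 5, "copper-plate": 100},
))
# -> [('copper-cable', 25), ('electronic-circuit', 10)]
```

    Switching the `targets` source from a constant combinator to live logistics-network requests is then just a matter of feeding a different signal into the same logic.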





  • I have a son who’s learning to read right now, so I’ve got some firsthand experience with this. This article makes a lot out of the contextual-clues part of the method but consistently downplays or ignores that phonics is still part of what the kids are taught. It’s a bit of a fallback, sure, but my son isn’t being taught to skip words when he can’t figure them out.

    He’s bringing home the kinds of books mentioned in the article. The sentence structure is pretty repetitive, and when he comes across a word he doesn’t know, he tries to look at the picture to figure out what it is. Sometimes that works and he says the right word. Other times, like when there’s a picture of a bear and the word is “cub” (and I don’t think my son knew what a bear cub was), he still falls back on “cuh uh buh” to sound it out.

    So he still knows the relationship between letters and sounds. He just has some other tools in his belt as well. I can’t say I find that especially concerning.




  • I’m using the radar network for dispatch and priority for tie breaking/to make sure the resources are distributed evenly.

    All my loading stations are simply called “Cargo Pickup” and all of my cargo trains go to any of them with an opening. Once there, the station reports on the red wire the ID of the train in the channel corresponding to the item being loaded (unless another train is already being reported by another station with the same items).

    On the demand side, stations look for the ID on the item they need. They copy the ID into the green network on the channel corresponding to their station name. In the simple case, a station serving copper ore to copper smelters copies the train ID from copper on the red network to copper on the green network. But stations can also request multiple ingredients in which case they have some other symbol in their name besides copper ore. (Of course, here too the copying only happens if no other station is requesting a train on that same channel).

    Back on the supply side, the station looks through all the IDs on the green network and sends the ones that match the waiting train to the train. The train uses the symbols to activate an interrupt to go to the corresponding station to deliver the goods.

    I just set this up today, so I haven’t perfected it yet. One minor hiccup is that there’s no way to atomically access a channel. Two stations could request on the same channel at the same time, corrupting the ID, but that only happens if both stations make a demand on the exact same tick. It’s not that it’s a constant problem; it just bothers me that it could happen.
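
    To make the handshake concrete, here’s a rough sketch (my own simplification, not actual combinator logic; the names are made up, and real circuit signals on a shared wire sum rather than evaluate sequentially) of the red/green channel copy described above:

```python
# Hypothetical model: one channel per item type. A train ID of 0 means
# "channel empty". Supply stations publish on red; demand stations claim
# by copying red -> green.

red = {}    # supply side: item -> ID of a waiting loaded train
green = {}  # demand side: item -> ID claimed by a requesting station

def report_pickup(item, train_id):
    """Loading station publishes its waiting train, unless another
    station is already reporting a train with the same item."""
    if red.get(item, 0) == 0:
        red[item] = train_id

def request_delivery(item):
    """Demand station copies the train ID from red to green, unless
    another station has already claimed that channel. Returns the
    claimed train ID, or 0 if nothing was claimed."""
    if red.get(item, 0) != 0 and green.get(item, 0) == 0:
        green[item] = red[item]
        return green[item]
    return 0

report_pickup("copper-ore", 42)
report_pickup("copper-ore", 77)              # ignored: channel occupied
print(request_delivery("copper-ore"))        # -> 42
print(request_delivery("copper-ore"))        # -> 0, channel already claimed
```

    The same-tick race falls out of this model: if two demand stations both pass the "channel empty" check on the same tick, both write, and on a real wire their IDs would sum into a garbage value. The sequential evaluation in this sketch is exactly the atomicity the game doesn’t give you.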



  • The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.

    I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just ask it to write Harry Potter and the Goblet of Fire for you. That’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5-10 years, if not sooner.