• JakenVeina

    I’d say “Separation of Responsibilities” is probably my #1. Others here have mentioned that you shouldn’t code for future contingencies, and that’s true, but a solid baseline of Separation of Responsibilities means you’re setting yourself up for future refactors without having to anticipate and plan for them all now. I.e., if your application already has clear barriers between different small components, it’s a lot easier to modify just one or two of them in the future. For me, those barriers mean horizontal layers (i.e., data storage, data access, business logic, user interfacing) and vertical slicing (i.e., features and/or business domains).
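
    To make that concrete, here’s a rough sketch of the kind of barrier I mean between business logic and data access. All the names here (IOrderRepository, OrderService, and so on) are invented for illustration; the point is just that the service never knows how orders are stored, so the storage layer can change without touching it:

    ```csharp
    using System;

    public enum OrderStatus { Pending, Shipped }

    public class Order
    {
        public int Id { get; set; }
        public OrderStatus Status { get; set; }
    }

    // The barrier: business logic only sees this interface, so swapping the
    // storage implementation later is a change to one layer, not the whole app.
    public interface IOrderRepository
    {
        Order? GetById(int id);
        void Save(Order order);
    }

    public class OrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders) => _orders = orders;

        public void MarkShipped(int orderId)
        {
            var order = _orders.GetById(orderId)
                ?? throw new InvalidOperationException($"Order {orderId} not found.");

            order.Status = OrderStatus.Shipped;
            _orders.Save(order);
        }
    }
    ```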

    Next, I’ll say “Self-Documenting Code”. That is, you should be able to intuit what most code does just by looking at how it’s named and organized (which ties into the separation of responsibilities above). That’s not to say you should follow Clean Code; that takes the idea WAY too far: a method or class that has only one call site is a method or class you should roll into that call site, unless it’s a separation-of-responsibilities thing. It’s also not to say you should never document or comment, just that those things should provide context the code can’t: design intent, non-obvious pitfalls, or how different pieces are supposed to fit together. They should not describe structure or basic function; those are things the code itself should convey.
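
    Here’s a toy example of what I mean by comments carrying context rather than structure (the names and numbers are invented): the method name says what happens, and the comment only carries the why that the code can’t express on its own:

    ```csharp
    using System;

    public record Invoice(decimal Amount, DateTime DueDate);

    public static class LateFees
    {
        public static decimal CalculateLateFee(Invoice invoice, DateTime now)
        {
            // Three days of grace, because billing runs are batched over weekends;
            // charging inside that window generates support tickets, not revenue.
            // (The comment explains intent; the code already says *what* it does.)
            var gracePeriod = TimeSpan.FromDays(3);

            if (now - invoice.DueDate <= gracePeriod)
                return 0m;

            return invoice.Amount * 0.05m;
        }
    }
    ```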

    I’ll also drop in “Human Readability”. It’s a classic piece of wisdom that code is easier to write than it is to read. Even if you’re only coding for yourself, if you want ANY amount of maintainability in your code, you have to write it with the intent that a human is gonna need to read and understand it someday. Of course, that’s arguably what I already said with both of the above points, but for this one, what I really mean is formatting. There’s a REASON most languages ignore most or all whitespace: it’s not that it’s not important, it’s BECAUSE it’s important to humans that languages allow for it, even when machines don’t need it. Don’t optimize it away, and don’t hand control over when and where to use it to a machine. Machines don’t read, humans do. I.e., don’t use linters. It’s a fool’s errand to try to describe what’s best for human readability, in all scenarios, within a set of machine-enforceable rules.

    “Implement now, Optimize later” is a good one, as well. And in particular, optimize when you have data that proves you need it. I’m not saying you should intentionally choose inefficient implementations just because they’re simpler, but if they’re DRASTICALLY simpler… like, is it really worth writing extra code to dump an array into a hashtable in order to do repeated lookups against it, if you’re never gonna have more than 20 items in that array at a time? Even if you think you can predict where your hot paths are gonna be, you’re still better off implementing them with the KISS principle until after you have a minimum viable product, because by then you’ll probably have tests to support you doing optimizations without breaking anything.
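
    As a sketch of that array-vs-hashtable example (the types and codes are invented), the linear scan is the version I’d ship first; the dictionary only earns its keep once profiling shows the lookup is actually hot, or the list grows well past a handful of items:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    public record StatusCode(string Code, string Description);

    public static class StatusCodes
    {
        private static readonly StatusCode[] All =
        {
            new("NEW", "Newly created"),
            new("SHIP", "Shipped"),
            new("DONE", "Completed"),
        };

        // The simple version: a linear scan over ~20 items is plenty fast, and
        // there's nothing extra to build or keep in sync.
        public static string? Describe(string code)
        {
            foreach (var status in All)
                if (status.Code == code)
                    return status.Description;
            return null;
        }

        // The "optimized" version: only worth the extra moving part when the
        // data says this lookup is a real hot path.
        private static readonly Dictionary<string, string> ByCode =
            All.ToDictionary(s => s.Code, s => s.Description);

        public static string? DescribeFast(string code) =>
            ByCode.TryGetValue(code, out var description) ? description : null;
    }
    ```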

    I’ll also go with “Don’t be afraid to write code”, or alternatively “Nobody likes magic”. If I’m working on a chunk of code, I should be able to trace exactly how it gets called, all the way up to the program’s entry point. Conversely, if I have an interface into a program that I know is getting called (like, say, an API endpoint) I should be able to track down the code it corresponds to by starting at the entry point and working my way down. None of this “Well, this framework we’re using automatically looks up every function in the application that matches a certain naming pattern and figures out the path to map it to during startup.” If you’re able to write 30 lines of code to implement this endpoint, you can write one more line of code that explicitly registers it to the framework and defines its path. Being able to definitively search for every reference to a piece of code is CRITICAL to refactoring. Magic that introduces runtime-only references is a disaster waiting to happen.
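
    Here’s the kind of “one more line” I mean, sketched with ASP.NET Core minimal APIs (the handler class and routes are made up): the registration is an explicit reference you can grep for, instead of a naming convention the framework resolves at startup:

    ```csharp
    // Assumes a standard ASP.NET Core web project (implicit usings enabled).
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Explicit registration: searching for "GetServiceOrder" finds both the
    // handler and the exact path it's mapped to.
    app.MapGet("/api/service-orders/{id}", ServiceOrderEndpoints.GetServiceOrder);
    app.MapPost("/api/service-orders", ServiceOrderEndpoints.CreateServiceOrder);

    app.Run();

    public static class ServiceOrderEndpoints
    {
        public static IResult GetServiceOrder(int id) =>
            Results.Ok(new { Id = id });

        public static IResult CreateServiceOrder() =>
            Results.Created("/api/service-orders/1", new { Id = 1 });
    }
    ```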

    As an honorable mention: it’s not really software design, but it’s something I’ve had to hammer into co-workers and tutees, many many times, when it comes to debugging: “Don’t work around a problem. Work the problem.” It boggles my mind how many times I’ve been able to fix other people’s issues by being the first one to read the error logs, or look at a stack trace, or (my favorite) read the error message from the compiler.

    “Hey, I’m getting an error ‘Object reference not set to an instance of an object’. I’ve tried making sure the user is logged in and has a valid session.”

    “Well, that’s probably because you have an object reference that’s not set to an instance of an object. Is the object reference that’s not set related to the user session?”

    “No, it’s a ServiceOrder object that I’m trying to call .Save() on.”

    “Why are you looking at the user session then? Is the service order loaded from there?”

    “No, it’s coming from a database query.”

    “Is the database query returning the correct data?”

    “I don’t know, I haven’t run it.”

    I’ve seen people dance around an issue for hours by just guessing about things that may or may not be related, instead of taking a few minutes to TRACE the problem from its effect backwards to its cause. Or, because they never actually IDENTIFIED the problem in the first place, they spent hours tracing and troubleshooting the wrong thing.
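
    To tie that back to the ServiceOrder example above, working the problem just means following the null reference to its source instead of guessing. A rough sketch (the ServiceOrder and Database types are stand-ins so it compiles on its own):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Stand-in types so the sketch is self-contained; the real ones live in the app.
    public class ServiceOrder
    {
        public int Id { get; init; }
        public void Save() => Console.WriteLine($"Saved order {Id}.");
    }

    public static class Database
    {
        private static readonly Dictionary<int, ServiceOrder> Rows = new()
        {
            [1] = new ServiceOrder { Id = 1 },
        };

        public static ServiceOrder? LoadServiceOrder(int id) =>
            Rows.TryGetValue(id, out var order) ? order : null;
    }

    public static class Troubleshooting
    {
        public static void SaveServiceOrder(int orderId)
        {
            // Work the problem: the exception says a reference is null and the
            // stack trace says it blew up at .Save(), so check the thing being
            // saved (what the query actually returned) before poking at the session.
            var order = Database.LoadServiceOrder(orderId);

            if (order is null)
                throw new InvalidOperationException(
                    $"Query returned no ServiceOrder for id {orderId}.");

            order.Save();
        }
    }
    ```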