A young computer scientist and two colleagues show that searches within data structures called hash tables can be much faster than previously deemed possible.
I’ve only used Java, but Java hash tables are stupid fast in my experience; everything else in my crappy programs was 1000 times slower than the hash table access or storage.
Just reading the title, it’s talking about searching hash tables, which wasn’t something I was specifically doing.
If you use a hash table, you search every time you retrieve an object.
If you didn’t retrieve, why would you be storing the data in the first place?
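Concretely, every get() is a search: the key is hashed to one bucket, and that bucket is then searched for a matching key. Here’s a minimal sketch of the idea in Java (a toy chained table, not the actual java.util.HashMap implementation; TinyMap and its fields are made-up names just for illustration):

    // Toy chained hash table -- illustrative only, not the real java.util.HashMap.
    // The point: get() hashes to one bucket and then searches that bucket, so
    // "accessing" the table and "searching" it are the same operation.
    class TinyMap<K, V> {
        private static final int BUCKETS = 16;

        private static class Node<K, V> {
            final K key;
            V value;
            Node<K, V> next;
            Node(K key, V value, Node<K, V> next) {
                this.key = key;
                this.value = value;
                this.next = next;
            }
        }

        @SuppressWarnings("unchecked")
        private final Node<K, V>[] table = (Node<K, V>[]) new Node[BUCKETS];

        public void put(K key, V value) {
            int i = Math.floorMod(key.hashCode(), BUCKETS);
            for (Node<K, V> n = table[i]; n != null; n = n.next) {
                if (n.key.equals(key)) { n.value = value; return; } // key exists: overwrite
            }
            table[i] = new Node<>(key, value, table[i]);            // key absent: prepend
        }

        public V get(K key) {
            int i = Math.floorMod(key.hashCode(), BUCKETS);          // hash to one bucket...
            for (Node<K, V> n = table[i]; n != null; n = n.next) {   // ...search only that bucket
                if (n.key.equals(key)) return n.value;
            }
            return null;                                             // not found
        }
    }

The loop in get() only walks the few entries that landed in the same bucket, not the whole table, which is why lookups feel instant. But it is still a search.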
I know that, but phrased that way it sort of sounds like they’re iterating over the entire thing.
Sorry to be blunt, but you don’t know what you’re talking about.
How much do you think this speeds up an average program? Enlighten us with your knowledge.
I wrote my comment not to antagonize you but to point out that you’re asking the wrong questions. I failed to articulate that, and I’m sorry for being harsh.
Your earlier comment said that you have used hash tables in Java and found them very fast, and that your program accessed the hash tables but did not “search” them. Accessing and searching are the same operation, which led me to believe you’re out of your depth.
Your latest comment asks how much this paper’s contribution speeds up an average program. You’re asking the wrong question, and you seem to be implying the work is useless if it doesn’t have an immediate practical impact. This is a theoretical breakthrough far over my head. I scanned the paper, and I’m not surprised they haven’t quantified the real-world impact yet. It’s entirely possible that despite the asymptotic improvement, the constant factors (elided by the big-O analysis) are so large as to be impractical… or maybe not! I think we need to stay tuned.
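To make the constant-factor point concrete (toy numbers, not anything measured from this paper): if a clever method cost about 1000 · log2(n) steps while the plain method cost n steps, the “asymptotically worse” plain method would still be faster until n grew past roughly 14,000, since 1000 · log2(14,000) ≈ 13,800. Whether anything like that applies here is exactly what hasn’t been quantified yet.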
Again, sorry for being blunt. We all have to start somewhere. My advice is to be mindful of where the edge of your expertise lies and try to err on the side of not devaluing others’ work.