ing seems the way to go.
As far as cache invalidation goes, I don't know how this is all going to work, but is there a way to make it like a FIFO stack, such that when a user creates a writeup, we could hit the DB to POP (?) the stack and PUSH the new writeup onto it? This could save us a few nanoseconds in regeneration; however, if we've forgotten a place where the cache could go out of sync, it could make the problem worse.
jb sez: A cache set up like a FIFO stack is really what I was planning. It's fairly trivial to just tack the id of a new writeup onto the beginning. Deletion is more of a bitch, but I can deal with that: it'll be a quick string replace on the cache. At any rate, that's probably how it's going to be done. It'll be a Perl-based solution, as I can't see any way to cache them directly in SQL.
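The prepend-and-string-replace scheme jb describes might look something like this. This is only a sketch under assumptions not in the source: the cache is a single comma-separated string of writeup ids, newest first, and the function names are hypothetical.

```perl
use strict;
use warnings;

# "Push": tack the new writeup's id onto the beginning of the cache string.
sub cache_add {
    my ($cache, $new_id) = @_;
    return length($cache) ? "$new_id,$cache" : "$new_id";
}

# Deletion: a quick string replace that strips the id wherever it sits,
# whether at the start, middle, or end of the list.
sub cache_remove {
    my ($cache, $dead_id) = @_;
    $cache =~ s/(?:^|,)\Q$dead_id\E(?=,|$)//;
    $cache =~ s/^,//;    # tidy up a leading comma if the first id was removed
    return $cache;
}

my $cache = "42,17,99";
$cache = cache_add($cache, 123);      # "123,42,17,99"
$cache = cache_remove($cache, 17);    # "123,42,99"
```

The lookahead in the delete regex keeps id 17 from eating a chunk of id 177, which is the kind of sync bug worried about above.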
Deletion seems like a good time to set the invalid bit.
If we're storing the stats in there too, so that when users search themselves they get C! and vote status, that cache is going to be invalidated every few minutes for large users, which seems impractical. Maybe just store C! status, so when a regular user hits it they can see the number of C!s; then we'd only need to invalidate when one of that user's nodes gets a C!.
jb sez: We wouldn't be storing the vote data and such. It wouldn't be tough to pull that info out of the db. At the very least, individual node lookups BY ID (which is the information we'd have) are really trivial.
I'm not sure what extra info the Gods can see by default when they user search, but we should probably also hide the hidden and marked-for-destruction bits in there too; then again, if deletion invalidates the cache, what's the point of caching the destruction bit.
Odd thing: is it faster to grab _just_ the nodes' vote status, unsorted, into an associative array (or hash table), and then, when generating the page, merge the cache and the votes together using a custom and preferably fast method?
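The merge being asked about could be a single pass over the cached order, looking each id up in the unsorted vote hash. A sketch only; the `reputation` key and function name are assumptions, not anything from the actual codebase:

```perl
use strict;
use warnings;

# The cache gives an ordered list of writeup ids; a separate unsorted
# hash maps node id => vote status pulled fresh from the db. Merging
# preserves the cached order without any sorting in Perl.
sub merge_cache_and_votes {
    my ($cached_ids, $votes) = @_;    # arrayref of ids, hashref id => rep
    my @rows;
    for my $id (@$cached_ids) {
        push @rows, { id => $id, reputation => $votes->{$id} // 0 };
    }
    return \@rows;
}

my $rows = merge_cache_and_votes([123, 42, 99], { 42 => 7, 99 => -1 });
# $rows->[0] is { id => 123, reputation => 0 }
```

Note this only merges; it deliberately leaves sorting to the database, per the point below about not sorting a million objects in Perl.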
jb sez: There's no sane way to cache stuff like votes and cools, really. We also do NOT want to be doing any sort of sorting in Perl itself. That's kind of crazy and will eat our webserver alive. Let's let MySQL do its own internal stuff, such as indexing and the like, to get the sorting speed boosts that Perl with a million objects won't get.
replies to what jb sez: well then, maybe just cache the C!s, because those are fairly static, and make a C! invalidate the cache. That would mean anyone who is not a god and not the actual user being searched could just use the cache.
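The eligibility rule suggested here is small enough to sketch directly. The field names (`is_god`, `user_id`) are hypothetical, not taken from the real user object:

```perl
use strict;
use warnings;

# Gods see extra bits, and the searched user sees their own vote data,
# so both fall through to a fresh regeneration; everyone else can be
# served the cached page.
sub can_use_cache {
    my ($viewer, $searched_user_id) = @_;
    return 0 if $viewer->{is_god};
    return 0 if $viewer->{user_id} == $searched_user_id;
    return 1;
}

print can_use_cache({ user_id => 7, is_god => 0 }, 42) ? "cached\n" : "fresh\n";
```

Since C!s are the only per-node extra the cache would carry, a C! on any of the searched user's nodes is then the only event (besides creation and deletion) that has to invalidate it.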