So, when I posted the silly spectral analysis meme, I already suspected that the images were generated based solely on the string provided to the system. Here's the full explanation, courtesy of the creator (and yes, I did get permission to share it):
So, now you know. :-)
As you've already determined, the graphic output is based solely on the username string and doesn't actually search for a journal or anything. (That would take a ridiculous amount of extra processing, of course.) However, I think you will find the actual algorithm less interesting than you might have expected. Essentially, there is a fixed random routine for assembling a complex function, and the random number generator is seeded by the username (fed through an MD5 hash, truncated, and converted from hex to decimal).
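The seeding step might look something like this sketch in Python. The function name, the truncation length (8 hex digits), and the use of Python's `random` module are my assumptions; the original only says MD5, truncated, and converted from hex to decimal:

```python
import hashlib
import random

def seed_from_username(username: str, digits: int = 8) -> int:
    # MD5 the username, truncate the hex digest, and convert the
    # remaining hex digits to a decimal integer. The truncation
    # length is a guess; the original doesn't specify it.
    digest = hashlib.md5(username.encode("utf-8")).hexdigest()
    return int(digest[:digits], 16)

# Seed a dedicated generator so every run for the same username
# produces the same sequence of "random" choices.
rng = random.Random(seed_from_username("example_user"))
```

The point of the dedicated `random.Random` instance is that the whole image is a pure function of the username: same name in, same picture out.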
A random color (and realize that from now on when I say "random", I don't actually mean random, since we have seeded with the username) is chosen, and one of its rgb components is maxed to 255. This becomes the base color for the entire image. Every pixel will be colored as a brightness percentage of this color.
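The base-color choice described above can be sketched like so (function and variable names are mine, not the original script's):

```python
import random

def base_color(rng: random.Random):
    # Pick three random channel values, then force one randomly
    # chosen channel to 255 so the base color is always at full
    # intensity in at least one of R, G, or B.
    color = [rng.randrange(256) for _ in range(3)]
    color[rng.randrange(3)] = 255
    return tuple(color)
```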
The 100x100 image represents the area of the xy plane from (-2,-2) to (2,2). We iterate through every pixel and feed the coordinates to a shading function which will return a value between 0 and 1 (inclusive). We then look at all of these shade values that have been returned and normalize them to the [0,1] interval (in case the max was less than 1) and color each pixel as that proportion of the aforementioned base color.
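A minimal sketch of that pixel loop, assuming a `shade(x, y)` callable as described (the exact pixel-to-coordinate endpoint handling is my guess):

```python
def render(shade, size=100):
    # Map pixel (i, j) onto [-2, 2] x [-2, 2], collect the raw shade
    # values, then rescale so the brightest pixel lands at exactly 1.0.
    values = []
    for j in range(size):
        row = []
        for i in range(size):
            x = -2 + 4 * i / (size - 1)
            y = -2 + 4 * j / (size - 1)
            row.append(shade(x, y))
        values.append(row)
    peak = max(max(row) for row in values)
    if peak == 0:
        peak = 1.0  # avoid dividing by zero on an all-black image
    return [[v / peak for v in row] for row in values]
```

Each normalized value would then be multiplied channel-wise into the base color to produce the final pixel.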
Obviously the shading function is the point of interest here. It takes in the x and y values, performs a series of random manipulations on them (the "modification" functions), sums them, performs a series of random manipulations on that (again a "modification" function), and then converts it into a value between 0 and 1 through yet another series of random manipulations (erroneously termed the "normalization" function).
The modification functions randomly choose from an assortment of manipulations such as performing a sine or cosine, adding a random value from -.75 to .75, multiplying by a random value whose magnitude is between .33 and 3 (either sign), raising to a random power from 2 to 4, taking the absolute value, or taking the square root. They may perform up to 6 such modifications.
The normalization function first either performs a sine or cosine to get the value within the [-1,1] range, then performs either an absolute value or a (1-x)/2 to move it into the [0,1] range. From here it may choose to raise to a power or take the square root, but at any rate it returns a value within [0,1] as the final result. As mentioned far above, this is the value used to determine shading for a pixel.
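Putting the last three paragraphs together, the shading pipeline might be reconstructed roughly as follows. The selection weights, the integer power range, and the overflow guard are all my assumptions; only the general shape (modify x, modify y, sum, modify, normalize) comes from the description above:

```python
import math
import random

def make_modifier(rng: random.Random):
    # Assemble a chain of up to 6 random manipulations, drawn from
    # the assortment described above.
    ops = []
    for _ in range(rng.randrange(1, 7)):
        choice = rng.randrange(7)
        if choice == 0:
            ops.append(math.sin)
        elif choice == 1:
            ops.append(math.cos)
        elif choice == 2:
            k = rng.uniform(-0.75, 0.75)
            ops.append(lambda v, k=k: v + k)
        elif choice == 3:
            k = rng.uniform(0.33, 3) * rng.choice([-1, 1])
            ops.append(lambda v, k=k: v * k)
        elif choice == 4:
            p = rng.randrange(2, 5)  # integer powers are my assumption
            ops.append(lambda v, p=p: v ** p)
        elif choice == 5:
            ops.append(abs)
        else:
            ops.append(lambda v: math.sqrt(abs(v)))  # abs() guard is mine

    def modify(v: float) -> float:
        for op in ops:
            v = op(v)
            v = max(-1e6, min(1e6, v))  # sketch-only guard against overflow
        return v

    return modify

def make_normalizer(rng: random.Random):
    # sin/cos squashes into [-1, 1]; abs or (1 - v) / 2 folds into [0, 1].
    trig = rng.choice([math.sin, math.cos])
    fold = rng.choice([abs, lambda v: (1 - v) / 2])
    return lambda v: fold(trig(v))

def make_shade(rng: random.Random):
    # Modify x and y separately, sum them, modify the sum, normalize.
    mod_x, mod_y, mod_sum = (make_modifier(rng) for _ in range(3))
    norm = make_normalizer(rng)
    return lambda x, y: norm(mod_sum(mod_x(x) + mod_y(y)))
```

Note that all the random choices happen once, at construction time, so the resulting shade function is a fixed formula that gets evaluated at every pixel.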
As I coded this back in August 2004, I don't entirely remember what I was thinking at the time, but I do know that I played with different versions until I settled on something that made interesting graphics fairly consistently. The fact that the final "normalization" has to go through a trig function combined with the small scale of the values being input tends to yield fairly smooth looking transitions, except when the random scaling gets out of hand. But even in those cases, the "noise" generally looks interesting, if not pretty.
When a meld is generated, there are actually two versions saved in the cache. One is the normal solo meld, and one is the meld with 75% alpha transparency. The alpha meld is layered over other people's solo melds to create those influence images. The image-generator script checks the HTTP referrer to see if the meld is being viewed from an LJ friends list, and if so it will perform this combination with the name seen in the referrer URL. This explains how people can see their own influence melds with all the people on their friends list even though there is just a single image URL being used by each poster.
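The layering step is presumably an ordinary per-channel alpha blend, something like this (names are illustrative; the original script's internals aren't shown here):

```python
def composite(base_rgb, overlay_rgb, alpha=0.75):
    # Standard "over" blend for one pixel: the overlay contributes
    # alpha of its value, the base the remaining (1 - alpha).
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for b, o in zip(base_rgb, overlay_rgb))
```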
The script caches the solo meld, the alpha meld, the solo placard, and every single combination placard. So basically, every graphic that the image-generator script can output gets cached on the server. The first time I released the meme, in August 2004, it only cached the 100x100 part of the graphics, and the dynamic image generation ended up pegging the CPU. This time around we've mainly been restricted by upstream bandwidth (since we are hosting on a 384 kbps up DSL).
I hope you found this as interesting to read as it was to type. Maybe I'll save it and publish it somewhere later.
Thanks for the donation :)