I recently upgraded my personal wiki to MediaWiki 1.21, and I was annoyed to find that built-in LaTeX support had been removed, though it and MathJax are available as extensions.
Once I installed it, I was vexed by sporadic typesetting failures: texvc would succeed in producing an image only about 20% of the time.
Eventually I found out why: the memory limit on the wfShellExec call was set pretty low, so most of the time texvc failed to malloc the memory it needed.
Increasing the global default for shell calls in LocalSettings.php fixed this:
$wgMaxShellMemory = 402400;
If you’re having the same problem, give it a try.
I think this is worth highlighting, because I’ve seen so many cases where programmers “parse” text using Java tools like StringTokenizer or split() with a hand-picked set of punctuation characters:
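The kind of hand-rolled tokenizing I mean looks something like this (my own sketch of the anti-pattern, not a recommendation):

```java
import java.util.Arrays;
import java.util.List;

public class NaiveTokenize {
    public static void main(String[] args) {
        String text = "Don't you love the JDK? It's locale-aware.";
        // Splitting on a hand-picked set of punctuation and whitespace:
        // this shreds "Don't" into "Don" and "t", and it has no notion of
        // sentence boundaries or of languages that don't separate words
        // with spaces.
        List<String> words = Arrays.asList(text.split("[\\s.,?!']+"));
        System.out.println(words);
    }
}
```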
Java already has a built-in, locale-aware way to get sentences from text, and words from sentences:
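A minimal sketch of using BreakIterator for both jobs (the class and method names here are my own; the BreakIterator API calls are the standard java.text ones):

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class BreakDemo {
    // Extract words with the locale-aware word iterator. BreakIterator
    // reports every boundary, including the ones around spaces and
    // punctuation, so we keep only segments containing a letter or digit.
    static List<String> words(String text, Locale locale) {
        BreakIterator it = BreakIterator.getWordInstance(locale);
        it.setText(text);
        List<String> out = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            String chunk = text.substring(start, end);
            if (chunk.codePoints().anyMatch(Character::isLetterOrDigit)) {
                out.add(chunk);
            }
        }
        return out;
    }

    // Sentences work the same way via getSentenceInstance.
    static List<String> sentences(String text, Locale locale) {
        BreakIterator it = BreakIterator.getSentenceInstance(locale);
        it.setText(text);
        List<String> out = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            out.add(text.substring(start, end).trim());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sentences("It was cheap. It worked.", Locale.ENGLISH));
        // Unlike the split() approach, this keeps "Don't" as one word.
        System.out.println(words("Don't panic.", Locale.ENGLISH));
    }
}
```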
Anything you write yourself to parse text will likely miss corner cases✝ and be unprepared for other languages.
BreakIterator does the job, isn’t difficult to use, and has been around since JDK 1.2, so why not use it?
✝ Odd punctuation like this dagger, for example, when reading words.
Having previously read about hash-collision attacks, it occurred to me that generating collisions is not trivial.
In particular, I’m looking for strings with a hashCode() of 0: String caches its hash in a field whose default value is 0, so a string that actually hashes to 0 forces hashCode() to recompute the hash on every call.
What do colliding alphanumeric Strings look like in Java? Here’s a sample.
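The sample itself isn’t reproduced above, but here’s a sketch of my own illustrating both kinds of string: since "Aa" and "BB" share a hashCode (2112), any concatenation of those two-character blocks collides with every other, and "f5a5a608" is an alphanumeric string whose hashCode() is exactly 0:

```java
public class ZeroHash {
    public static void main(String[] args) {
        // 31*'A' + 'a' == 31*'B' + 'B' == 2112, so concatenating n of these
        // two-char blocks yields 2^n mutually colliding strings.
        for (String s : new String[] {"AaAa", "AaBB", "BBAa", "BBBB"}) {
            System.out.println(s + " -> " + s.hashCode()); // all four print 2031744
        }
        // An alphanumeric string whose hash is exactly 0 -- the worst case
        // for String's hash caching:
        System.out.println("f5a5a608".hashCode()); // prints 0
    }
}
```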
Amit Patel’s article on procedural map generation is a real gem.
It even includes a Flash application for creating your own procedural maps.
In one article he manages to tie together:
It’s part of a larger body of useful game programming links he maintains on his site.