In MediaWiki 1.6, a job queue was introduced to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks and process them in batches. Up to MediaWiki 1.16, an estimate of the length of the job queue was shown at Special:Statistics. By default, one job is taken from the job queue and executed on each web request.
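For concreteness, this per-request rate corresponds to MediaWiki's $wgJobRunRate setting; a minimal LocalSettings.php sketch (the values other than the default of 1 are illustrative):

```php
<?php
// LocalSettings.php (sketch): $wgJobRunRate controls how many jobs run
// per web request; 1 matches the default behavior described above.
$wgJobRunRate = 1;      // run one job from the queue on each request
// $wgJobRunRate = 0.01; // busy wiki: run a job on roughly 1% of requests
// $wgJobRunRate = 0;    // never run jobs during requests; instead run the
//                       // maintenance script periodically, for example:
//                       //   php maintenance/runJobs.php --maxjobs 1000
```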
Since MediaWiki 1.6, changing a template adds a job to the job queue for each article using that template. Each job is a command to read an article, expand any templates, and update the links tables accordingly. Null edits are therefore no longer necessary, although large operations may take a while to complete; spreading the work out in this way eases the strain on the servers.
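Conceptually, such a job looks something like the sketch below. This is a hypothetical illustration loosely modeled on MediaWiki's Job class (the real work is done by the refreshLinks job type); the class name and the exact calls are assumptions, not the actual implementation:

```php
<?php
// Hypothetical sketch of a link-refresh job; ExampleRefreshLinksJob is
// illustrative, not MediaWiki's actual refreshLinks class.
class ExampleRefreshLinksJob extends Job {
	public function __construct( Title $title, array $params = [] ) {
		parent::__construct( 'exampleRefreshLinks', $title, $params );
	}

	public function run() {
		$page = WikiPage::factory( $this->title ); // read the article
		$content = $page->getContent();
		if ( $content === null ) {
			return true; // page no longer exists; nothing to update
		}
		// Re-parse the page (expanding any templates) and write the
		// resulting links into the links tables.
		foreach ( $content->getSecondaryDataUpdates( $this->title ) as $update ) {
			$update->doUpdate();
		}
		return true;
	}
}
```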
A wider class of operations can invalidate the HTML cache for a large number of pages: for example, changing a template, uploading a file, or creating, moving, or deleting a page that other pages link to.
Except for template changes and uploads of files that did not previously exist, these operations do not invalidate the links tables, but they do invalidate the HTML cache of every page linking to the affected page or using the affected image. Invalidating the cache of a single page is a short operation: it only requires updating one database field and sending a multicast packet to clear the caches. But when there are more than about 1,000 pages to invalidate, it takes a long time. By default, when more than 500 pages need to be invalidated, the work is deferred to the job queue, one job per 500 pages.
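The batch size corresponds to MediaWiki's $wgUpdateRowsPerJob setting, and the "single database field" is the page_touched timestamp; a sketch, assuming the 500-page figure above:

```php
<?php
// LocalSettings.php (sketch): $wgUpdateRowsPerJob controls how many page
// invalidations are bundled into each htmlCacheUpdate job.
$wgUpdateRowsPerJob = 500; // one job per 500 pages, as described above
// The invalidation itself just bumps one field, roughly equivalent to:
//   UPDATE page SET page_touched = <current timestamp>
//   WHERE page_id IN ( ...pages linking to the changed page... );
// A page whose cached HTML is older than page_touched is re-rendered on
// the next view, and a purge is broadcast to the front-end caches.
```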
During periods of low load, the job queue length might be zero. At Wikimedia, in practice, it is almost never zero. In off-peak hours, it might be a few hundred to a thousand. During a busy day, it might be a few hundred thousand (values of several million are no cause for alarm), and it can quickly fluctuate by 10% or more. As mentioned above, different servers may hold different estimates of this value, so the reported figure can appear to fluctuate even more.
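For reference, the current estimate can be read from a wiki's API, whose siteinfo statistics include a jobs count; a minimal sketch using the English Wikipedia endpoint as an example:

```php
<?php
// Sketch: fetch the job queue length estimate from a wiki's API.
// As noted above, this is only an estimate and can vary between servers.
$url = 'https://en.wikipedia.org/w/api.php'
	. '?action=query&meta=siteinfo&siprop=statistics&format=json';
$data = json_decode( file_get_contents( $url ), true );
echo 'Jobs queued: ' . $data['query']['statistics']['jobs'] . "\n";
// On the server itself, a maintenance script prints the same estimate:
//   php maintenance/showJobs.php
```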