Layout Engine Visual Tests (reftest)
L. David Baron <dbaron@dbaron.org>, Mozilla Corporation
July 19, 2006

This code is designed to run tests of Mozilla's layout engine.  These
tests consist of an HTML (or other format) file along with a reference
in the same format.  The tests are run based on a manifest file, and for
each test, PASS or FAIL is reported, and UNEXPECTED is reported if the
result (PASS or FAIL) was not the expected result noted in the manifest.

Images of the display of both tests are captured, and most test types
involve comparing these images (e.g., test types == or !=) to determine
whether the test passed.  The captures of the tests are taken in a
viewport that is 800 pixels wide and 1000 pixels tall, so any content
outside that area will be ignored (except for any scrollbars that are
displayed).  Ideally, however, tests should be written so that they fit
within 600x600, since we may in the future want to switch to 600x600 to
match http://lists.w3.org/Archives/Public/www-style/2012Sep/0562.html .

Why this way?

Writing HTML tests where the reference rendering is also in HTML is
harder than simply writing bits of HTML that can be regression-tested by
comparing the rendering of an older build to that of a newer build
(perhaps using stored reference images from the older build).  However,
comparing across time has major disadvantages:

 * Comparisons across time either require two runs for every test, or
   they require stored reference images appropriate for the platform and
   configuration (often limiting testing to a very specific
   configuration).

 * Comparisons across time may fail due to expected changes, for
   example, changes in the default style sheet for HTML, changes in the
   appearance of form controls, or changes in default preferences like
   default font size or default colors.

Using tests for which the pass criteria were explicitly chosen allows
running tests at any time to see whether they still pass.

Manifest Format

The test manifest is a plain text file.  A line starting with a "#" is
a comment.  Lines may also be commented using whitespace followed by a
"#" and the comment.  Each non-blank line (after removal of comments)
must be one of the following:

1. Inclusion of another manifest

   <failure-type>* include <relative_path>

   <failure-type> is the same as listed below for a test item.  As for 
   test items, multiple failure types listed on the same line are 
   combined by using the last matching failure type listed.  However, 
   the failure type on a manifest is combined with the failure type on 
   the test (or on a nested manifest) with the rule that the last in the
   following list wins:  fails, random, skip.  (In other words, skip 
   always wins, and random beats fails.)
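
   For example (the paths here are invented for illustration), a manifest
   can pull in other manifests like this:

      include ../css-tables/reftest.list
      skip-if(Android) include slow-pages/reftest.list

   Annotations on the include line are combined with the annotations
   inside the included manifest as described above, so every test in
   slow-pages/ is skipped on Android even if it carries its own fails or
   random annotation.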

2. A test item

   [ <failure-type> | <preference> ]* [<http>] <type> <url> <url_ref>


   a. <failure-type> (optional) is one of the following:

      fails  The test passes if the images of the two renderings DO NOT
             meet the conditions specified in the <type>.

      fails-if(condition) If the condition is met, the test passes if the 
                          images of the two renderings DO NOT meet the 
                          conditions of <type>. If the condition is not met,
                          the test passes if the conditions of <type> are met.

      needs-focus  The test fails or times out if the reftest window is not
                   focused.

      random  The results of the test are random and therefore not to be
              considered in the output.

      random-if(condition) The results of the test are random if a given
                           condition is met.

      silentfail This test may fail silently, and if that happens it should
                 count as if the test passed. This is useful for cases where
                 silent failure is the intended behavior (for example, in
                 an out of memory situation in JavaScript, we stop running
                 the script silently and immediately, in hopes of reclaiming
                 enough memory to keep the browser functioning).

      silentfail-if(condition) This test may fail silently if the condition
                               is met.

      skip  This test should not be run. This is useful when a test fails in a
            catastrophic way, such as crashing or hanging the browser. Using
            'skip' is preferred to simply commenting out the test because we
            want to report the test failure at the end of the test run.

      skip-if(condition) If the condition is met, the test is not run. This is
                         useful if, for example, the test crashes only on a
                         particular platform (i.e. it allows us to get test
                         coverage on the other platforms).

      slow  The test may take a long time to run, so run it if slow tests are
            either enabled or not disabled (test manifest interpreters may
            choose whether or not to run such tests by default).

      slow-if(condition) If the condition is met, the test is treated as if
                         'slow' had been specified.  This is useful for tests
                         which are slow only on particular platforms (e.g. a
                         test which exercised out-of-memory behavior might be
                         fast on a 32-bit system but inordinately slow on a
                         64-bit system).

      fuzzy(maxDiff, diffCount)
          This allows a test to pass if the pixel value differences are <=
          maxDiff and the total number of different pixels is <= diffCount.
          It can also be used with '!=' to ensure that the difference is
          greater than maxDiff.

      fuzzy-if(condition, maxDiff, diffCount)
          If the condition is met, the test is treated as if 'fuzzy' had been
          specified. This is useful if there are differences on particular
          platforms.

      require-or(cond1&&cond2&&...,fallback)
          Require some particular setup be performed or environmental
          condition(s) made true (eg setting debug mode) before the test
          is run. If any condition is unknown, unimplemented, or fails,
          revert to the fallback failure-type.
          Example: require-or(debugMode,skip)

      asserts(count)
          Loading the test and reference is known to assert exactly
          count times.
          NOTE: An asserts() notation with a non-zero count or maxCount
          suppresses use of a cached canvas for the test with the
          annotation.  However, if later occurrences of the same test
          are not annotated, they will use the cached canvas
          (potentially from the load that asserted).  This allows
          repeated use of the same test or reference to be annotated
          correctly (which may be particularly useful when the uses are
          in different subdirectories that can be tested independently),
          but does not force them to be, nor does it force suppression
          of caching for a common reference when it is the test that
          asserts.

      asserts(minCount-maxCount)
          Loading the test and reference is known to assert between
          minCount and maxCount times, inclusive.
          NOTE: See above regarding canvas caching.

      asserts-if(condition,count)
      asserts-if(condition,minCount-maxCount)
          Same as above, but only if condition is true.

      Conditions are JavaScript expressions *without spaces* in them.
      They are evaluated in a sandbox in which a limited set of
      variables are defined.  See the BuildConditionSandbox function in
      layout/tools/reftest.js for details.

      Examples of using conditions:
          fails-if(winWidget) == test reference
          asserts-if(cocoaWidget,2) load crashtest
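
      A few more illustrative combinations (the file names here are
      invented):
          fuzzy(2,1200) == gradient.html gradient-ref.html
          fuzzy-if(Android,4,250) == blur.html blur-ref.html
          skip-if(winWidget) slow == huge-table.html huge-table-ref.html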

   b. <preference> (optional) is a string of the form

      pref(<name>,<value>), test-pref(<name>,<value>), or
      ref-pref(<name>,<value>)

      where <name> is the name of a preference setting, as seen in
      about:config, and <value> is the value to which this preference should
      be set. <value> may be a boolean (true/false), an integer, or a
      quoted string *without spaces*, according to the type of the preference.

      The preference will be set to the specified value prior to
      rendering the test and/or reference canvases (pref() applies to
      both, test-pref() only to the test, and ref-pref() only to the
      reference), and will be restored afterwards so that following
      tests are not affected. Note that this feature is only useful for
      "live" preferences that take effect immediately, without requiring
      a browser restart.
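
      For example (the second preference name and the file names are
      invented for illustration; the first preference is real):

         pref(browser.display.foreground_color,"#00FF00") == text.html text-ref.html
         test-pref(layout.css.example-feature.enabled,true) == feature.html feature-ref.html

      The first line renders both the test and the reference with a green
      foreground color; the second sets the preference only while
      rendering the test page.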

   c. <http>, if present, is one of the strings (sans quotes) "HTTP" or
      "HTTP(..)" or "HTTP(../..)" or "HTTP(../../..)", etc. , indicating that
      the test should be run over an HTTP server because it requires certain
      HTTP headers or a particular HTTP status.  (Don't use this if your test
      doesn't require this functionality, because it unnecessarily slows down
      the test.)

      With "HTTP", HTTP tests have the restriction that any resource an HTTP
      test accesses must be accessed using a relative URL, and the test and
      the resource must be within the directory containing the reftest
      manifest that describes the test (or within a descendant directory).
      The variants "HTTP(..)", etc., can be used to relax this restriction by
      allowing resources in the parent directory, etc.

      To modify the HTTP status or headers of a resource named FOO, create a
      sibling file named FOO^headers^ with the following contents:

      [<http-status>]
      <http-header>*

      <http-status> A line of the form "HTTP ###[ <description>]", where
                    ### indicates the desired HTTP status and <description>
                    indicates a desired HTTP status description, if any.
                    If this line is omitted, the default is "HTTP 200 OK".
      <http-header> A line in standard HTTP header line format, i.e.
                    "Field-Name: field-value".  You may not repeat the use
                    of a Field-Name and must coalesce such headers together,
                    and each header must be specified on a single line, but
                    otherwise the format exactly matches that from HTTP.

      HTTP tests may also incorporate SJS files.  SJS files provide similar
      functionality to CGI scripts, in that the response they produce can be
      dependent on properties of the incoming request.  Currently these
      properties are restricted to method type and headers, but eventually
      it should be possible to examine data in the body of the request as
      well when computing the generated response.  An SJS file is a JavaScript
      file with a .sjs extension which defines a global |handleRequest|
      function (called every time that file is loaded during reftests) in this
      format:

      function handleRequest(request, response)
      {
        response.setStatusLine(request.httpVersion, 200, "OK");

        // You *probably* want this, or else you'll get bitten if you run
        // reftest multiple times with the same profile.
        response.setHeader("Cache-Control", "no-cache");

        response.write("any ASCII data you want");

        var outputStream = response.bodyOutputStream;
        // ...anything else you want to do, synchronously...
      }

      For more details on exactly which functions and properties are available
      on request/response in handleRequest, see the nsIHttpRe(quest|sponse)
      definitions in <netwerk/test/httpserver/nsIHttpServer.idl>.

   d. <type> is one of the following:

      ==     The test passes if the images of the two renderings are the
             same.
      !=     The test passes if the images of the two renderings are
             different.
      load   The test passes unconditionally if the page loads.  url_ref
             must be omitted, and the test cannot be marked as fails or
             random.  (Used to test for crashes, hangs, assertions, and
             leaks.)
      script The loaded page records the test's pass or failure status
             in a JavaScript data structure accessible through the following
             API:
             getTestCases() returns an array of test result objects
             representing the results of the tests performed by the page.

             Each test result object has two methods:

             testPassed() returns true if the test result object passed,
             otherwise it returns false.

             testDescription() returns a string describing the test
             result.

             url_ref must be omitted. The test may be marked as fails or
             random. (Used to test the JavaScript Engine.)

   e. <url> is either a relative file path or an absolute URL for the
      test page

   f. <url_ref> is either a relative file path or an absolute URL for
      the reference page

   The only difference between <url> and <url_ref> is that results of
   the test are reported using <url> only.

3. Specification of a url prefix

   url-prefix <string>

   <string> will be prepended to relative <url> and <url_ref> for all following
   test items in the manifest.

   <string> will not be prepended to the relative path when including another
   manifest, e.g. include <relative_path>.

   <string> will not be prepended to any <url> or <url_ref> matching the pattern
   /^\w+:/. This will prevent the prefix from being applied to any absolute url
   containing a protocol such as data:, about:, or http:.

   While the typical use of url-prefix is expected to be as the first line of
   a manifest, it is legal to use it anywhere in a manifest. Subsequent uses
   of url-prefix overwrite any existing values.
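
   For example (paths invented for illustration):

      url-prefix bugs/
      == 123456-1.html 123456-1-ref.html     # loads bugs/123456-1.html
      == data:text/html,<p>x</p> x-ref.html  # data: URL is left as-is;
                                             # x-ref.html becomes bugs/x-ref.html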

4. Specification of default preferences

   default-preferences <preference>*

   where <preference> is defined above.

   The <preference> settings will be used for all following test items in the
   manifest file.

   If a test item includes its own preference settings, then they will override
   any settings for preferences of the same names that are set using
   default-preferences, just as later items within a line override earlier ones.

   A default-preferences line with no <preference> settings following it will
   reset the set of default preferences to be empty.

   As with url-prefix, default-preferences will often be used at the start of a
   manifest file so that it applies to all test items, but it is legal for
   default-preferences to appear anywhere in the manifest. A subsequent
   default-preferences will reset any previous default preference values and
   overwrite them with the specified <preference> values.
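
   For example (the preference name and file names are invented for
   illustration):

      default-preferences pref(ui.example.mode,2)
      == a.html a-ref.html      # runs with ui.example.mode set to 2
      test-pref(ui.example.mode,3) == b.html b-ref.html
      default-preferences       # following tests run with no extra prefs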

This test manifest format could be used by other harnesses, such as ones
that do not depend on XUL, or even ones testing other layout engines.

Running Tests

(If you're not using a DEBUG build, first set browser.dom.window.dump.enabled
to true (in about:config, in the profile you'll be using to run the tests).
Create the option as a new boolean if it doesn't exist already. If you skip
this step you won't get any output in the terminal.)

At some point in the future there will hopefully be a cleaner way to do
this.  For now, go to your object directory, and run (perhaps using
MOZ_NO_REMOTE=1 or the -profile <directory> option)

./firefox -reftest /path/to/srcdir/mozilla/layout/reftests/reftest.list > reftest.out

and then search/grep reftest.out for "UNEXPECTED".

There are two scripts provided to convert the reftest.out to HTML.
clean-reftest-output.pl converts reftest.out into simple HTML, stripping
lines from the log that aren't relevant.  reftest-to-html.pl converts
the output into HTML that makes it easier to visually check for
failures.
Testable Areas

This framework is capable of testing many areas of the layout engine.
It is particularly well-suited to testing dynamic change handling (by
comparison to the static end-result as a reference) and incremental
layout (comparison of a script-interrupted layout to one that was not).
However, it is also possible to write tests for many other things that
can be described in terms of equivalence, for example:

 * CSS cascading could be tested by comparing the result of a
   complicated set of style rules that makes a word green to <span
   style="color:green">.

 * <canvas> compositing operators could be tested by comparing the
   result of drawing using canvas to a block-level element with the
   desired color as a CSS background-color.

 * CSS counters could be tested by comparing the text output by counters
   with a page containing the text written out.

 * complex margin collapsing could be tested by comparing the complex
   case to a case where the margin is written out, or where the margin
   space is created by an element with 'height' and a transparent
   background.

When it is not possible to test by equivalence, it may be possible to
test by non-equivalence.  For example, testing justification in cases
with more than two words, or more than three different words, is
difficult.  However, it is simple to test that justified text is at
least displayed differently from left-, center-, or right-aligned text.
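
For instance (file names invented for illustration), such
non-equivalence tests could be written as:

   != text-justify.html text-left.html
   != text-justify.html text-center.html
   != text-justify.html text-right.html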

Writing Tests

When writing tests for this framework, it is important for the test to
depend only on behaviors that are known to be correct and permanent.
For example, tests should not depend on default font sizes, default
margins of the body element, the default style sheet used for HTML, the
default appearance of form controls, or anything else that can be
expected to change.

In general, the best way to achieve this is to make the test and the
reference identical in as many aspects as possible.  For example:

  Good test markup:
    <div style="color:green"><table><tr><td><span>green

  Good reference markup:
    <div><table><tr><td><span style="color:green">green

  BAD reference markup:
    <!-- 3px matches the default cellspacing and cellpadding -->
    <div style="color:green; padding: 3px">green

  BAD test markup:
    <!-- span doesn't change the positioning, so skip it -->
    <div style="color:green"><table><tr><td>green

Asynchronous Tests: class="reftest-wait"

Normally reftest takes a snapshot of the given markup's rendering right
after the load event fires for content. If your test needs to postpone
the moment the snapshot is taken, it should make sure a class
'reftest-wait' is on the root element by the moment the load event
fires. The easiest way to do this is to put it in the markup, e.g.:
    <html class="reftest-wait">

When your test is ready, you should remove this class from the root
element, for example using this code:
    document.documentElement.className = "";
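
Putting the pieces together, a minimal asynchronous test might look like
this (the file content and element names are illustrative, not
prescribed by the harness):

    <html class="reftest-wait">
    <body onload="runTest()">
    <div id="target">green</div>
    <script>
    function runTest() {
      // Make the dynamic change under test...
      document.getElementById("target").style.color = "green";
      // ...then unblock the snapshot.
      document.documentElement.className = "";
    }
    </script>
    </body>
    </html>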

Note that in layout tests it is often enough to trigger layout using 
    document.body.offsetWidth  // HTML example

When possible, you should use this technique instead of making your
test async.

Invalidation Tests: MozReftestInvalidate Event

When a test (or reference) uses reftest-wait, reftest tracks invalidation
via MozAfterPaint and updates the test image in the same way that
a regular window would be repainted. Therefore it is possible to test
invalidation-related bugs by setting up initial content and then
dynamically modifying it before removing reftest-wait. However, it is
important to get the timing of these dynamic modifications right so that
the test doesn't accidentally pass because a full repaint of the window
was already pending. To help with this, reftest fires one MozReftestInvalidate
event at the document root element for a reftest-wait test when it is safe to
make changes that should test invalidation. The event bubbles up to the
document and window so you can set listeners there too. For example,

function doTest() {
  document.body.style.border = "";
  document.documentElement.removeAttribute("class");
}
document.addEventListener("MozReftestInvalidate", doTest, false);

Painting Tests: class="reftest-no-paint"

If an element shouldn't be painted, set the class "reftest-no-paint" on it
when doing an invalidation test. Causing a repaint in your
MozReftestInvalidate handler (for example, by changing the body's background
colour) will accurately test whether the element is painted.

Opaque Layer Tests: class="reftest-opaque-layer"

If an element should be assigned to a PaintedLayer that's opaque, set the class
"reftest-opaque-layer" on it. This checks whether the layer is opaque during
the last paint of the test, and it works whether your test is an invalidation
test or not. In order to pass the test, the element has to have a primary
frame, and that frame's display items must all be assigned to a single painted
layer and no other layers, so it can't be used on elements that create stacking
contexts (active or inactive).

Layerization Tests: reftest-assigned-layer="layer-name"

If two elements should be assigned to the same PaintedLayer, choose any string
value as the layer name and set the attribute reftest-assigned-layer="yourname"
on both elements. Reftest will check whether all elements with the same
reftest-assigned-layer value share the same layer. It will also test whether
elements with different reftest-assigned-layer values are assigned to
different layers.

The same restrictions as with class="reftest-opaque-layer" apply: All elements
must have a primary frame, and that frame's display items must all be assigned
to the same PaintedLayer and no other layers. If these requirements are not
met, the test will fail.

Snapshot The Whole Window: class="reftest-snapshot-all"

In a reftest-wait test, to disable testing of invalidation and force the final
snapshot to be taken of the whole window, set the "reftest-snapshot-all"
class on the root element.

Avoid triggering flushes: class="reftest-no-flush"

The reftest harness normally triggers flushes by calling
getBoundingClientRect on the root element.  If the root element of the
test has class="reftest-no-flush", it doesn't do this.

This is useful for testing animations on the compositor thread, since
the flushing will cause a main thread style update.

Zoom Tests: reftest-zoom="<float>"

When the root element of a test has a "reftest-zoom" attribute, that zoom
factor is applied when rendering the test. This corresponds to the desktop "full
zoom" style zoom. The reftest document will be 800 device pixels wide by 1000
device pixels high. The reftest harness assumes that the CSS pixel dimensions
are 800/zoom and 1000/zoom. For best results therefore, choose zoom factors
that do not require rounding when we calculate the number of appunits per
device pixel; i.e. the zoom factor should divide 60, so 60/zoom is an integer.
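
The divisibility rule above can be sketched as a small check (the helper
name is invented here; it is not part of the reftest harness):

```javascript
// Hypothetical helper, not part of the reftest harness: a reftest-zoom
// factor is safe when 60 / zoom is an integer, so the number of appunits
// per device pixel needs no rounding.
function isSafeReftestZoom(zoom) {
  var appUnitsPerDevPixel = 60 / zoom;
  // Tolerate tiny binary floating-point error for factors like 1.2.
  return Math.abs(appUnitsPerDevPixel - Math.round(appUnitsPerDevPixel)) < 1e-9;
}

console.log([0.5, 1.1, 1.2, 1.5, 2].filter(isSafeReftestZoom));
// [ 0.5, 1.2, 1.5, 2 ]  -- 60/1.1 is not an integer
```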

Setting Scrollport Size: reftest-scrollport-w/h="<int>"

If either of the "reftest-scrollport-w" and "reftest-scrollport-h" attributes on
the root element are non-zero, sets the scroll-position-clamping scroll-port
size to the given size in CSS pixels. This does not affect the size of the
snapshot that is taken.

Setting Async Scroll Mode: reftest-async-scroll attribute

If the "reftest-async-scroll" attribute is set on the root element, we try to
enable async scrolling and zooming for the document. This is unsupported in many
configurations, and any tests using this will probably want to have
skip-if(!asyncPanZoom) on them.

Setting Displayport Dimensions: reftest-displayport-x/y/w/h="<int>"

If any of the "reftest-displayport-x", "reftest-displayport-y",
"reftest-displayport-w" and "reftest-displayport-h" attributes on the root
element are nonzero, sets the displayport dimensions to the given bounds in
CSS pixels. This does not affect the size of the snapshot that is taken.

When the "reftest-async-scroll" attribute is set on the root element, *all*
elements in the document are checked for "reftest-displayport-x/y/w/h" and have
displayports set on them when those attributes are present.

Testing Async Scrolling: reftest-async-scroll-x/y="<int>"

When the "reftest-async-scroll" attribute is set on the root element, for any
element where either the "reftest-async-scroll-x" or "reftest-async-scroll-y"
attributes are nonzero, at the end of the test take the snapshot with the given
offset (in CSS pixels) added to the async scroll offset.

Testing Async Zooming: reftest-async-zoom="<float>"

When the "reftest-async-zoom" attribute is present on the root element then at
the end of the test take the snapshot with the given async zoom on top of any
existing zoom. Content is not re-rendered at the new zoom level. This
corresponds to the mobile style "pinch zoom" style of zoom. This is unsupported
in many configurations, and any tests using this will probably want to have
skip-if(!asyncPanZoom) on them.

Printing Tests: class="reftest-print"

Now that the patch for bug 374050 has landed
(https://bugzilla.mozilla.org/show_bug.cgi?id=374050), it is possible to
create reftests that run in a paginated context.

The page size used is 5in wide and 3in tall (with the default half-inch
margins).  This is to allow tests to have less text and to make the
entire test fit on the screen.

There is a layout/reftests/printing directory for printing reftests; however,
there is nothing special about this directory.  You can put printing reftests
anywhere that is appropriate.

The suggested first line for any printing test is
<!DOCTYPE html><html class="reftest-print">

The reftest-print class on the root element triggers the reftest to
switch into page mode. Fixing the font size is suggested, although not
required, because the pages are a fixed size in inches. The switch to page mode
happens on load if the reftest-wait class is not present; otherwise it happens
immediately after firing the MozReftestInvalidate event.

The underlying layout support for this mode isn't really complete; it
doesn't use exactly the same codepath as real print preview/print. In
particular, scripting and frames are likely to cause problems; it is untested,
though.  That said, it should be sufficient for testing layout issues related
to pagination.

Plugin and IPC Process Crash Tests: class="reftest-expect-process-crash"

If you are running a test that causes an out-of-process plugin or IPC process
under Electrolysis to crash as part of a reftest, this will cause process
crash minidump files to be left in the profile directory.  The test
infrastructure that runs the reftests will notice these minidump files and
dump out information from them, and these additional error messages in the logs
can end up erroneously being associated with other errors from the reftest run.
They are also confusing, since the appearance of "PROCESS-CRASH" messages in
the test run output can seem like a real problem, when in fact it is the
expected behavior.

To indicate to the reftest framework that a test is expecting a plugin or
IPC process crash, have the test include "reftest-expect-process-crash" as
one of the root element's classes by the time the test has finished.  This will
cause any minidump files that are generated while running the test to be removed
and they won't cause any error messages in the test run output.

Skip Forcing A Content Process Layer-Tree Update: reftest-no-sync-layers attribute

Normally when a multi-process reftest test ends, we force the content process
to push a layer-tree update to the compositor before taking the snapshot.
Setting the "reftest-no-sync-layers" attribute on the root element skips this
step, enabling testing that layer-tree updates are being correctly generated.
However the test must manually wait for a MozAfterPaint event before ending.