Advogato blog for ssp
http://www.advogato.org/person/ssp/
Last build: Fri, 31 Jul 2015 09:23:21 GMT

First Class Goto
Sun, 2 Mar 2014 19:18:09 GMT
http://www.advogato.org/person/ssp/diary.html?start=29
http://ssp.impulsetrain.com/goto.html

<p><a href="http://cgit.freedesktop.org/~sandmann/oort/" >Oort</a> is an experimental
programming language I have been working on, on and off (mostly off),
since 2007. It is a statically typed, object-oriented, imperative
language, where classes, functions and methods can be nested
arbitrarily, and where functions and methods are full closures, i.e.,
they can be stored in variables and returned from functions. The
control structures are the usual ones: <strong>if</strong>, <strong>for</strong>, <strong>while</strong>,
<strong>do</strong>, <strong>goto</strong>, etc.</p>
<p>It also has an unusual feature: goto labels are <em>first class</em>.</p>
<p>What does it mean for labels to be first class? It means two things:
(1) they are lexically scoped so that they are visible from inside
nested functions. This makes it possible to jump from any point in
the program to any other location that is visible from that point,
even if that location is in another function. And (2) labels can be
used as values: They can be passed to and returned from functions and
methods, and they can be stored in data structures.</p>
<p>As a simple example, consider a data structure with a “foreach” method
that takes a callback function and calls it for every item in the data
structure. In Oort this might look like this:</p>
<div>
<pre>table: array[person_t];
table.foreach (fn (p: person_t) -> void {
print p.name;
print p.age;
});
</pre>
</div>
<p>A note about syntax. In Oort, anonymous functions are defined like this:</p>
<div>
<pre>fn (<arguments>) -> <return type> {
...;
}
</pre>
</div>
<p>and variables and arguments are declared like this:</p>
<div>
<pre><name>: <type>
</pre>
</div>
<p>so the code above defines an anonymous function that prints the name
and the age of a person and passes that function to the foreach method
of the table.</p>
<p>What if we want to stop the iteration? You could have the callback
return <code>true</code> to stop, or you could have it throw an
exception. However, both methods are a little clumsy: The first
because the return value might be useful for other purposes, the
second because stopping the iteration isn’t really an exceptional
situation.</p>
<p>With lexically scoped labels there is a direct solution – just use
<code>goto</code> to jump out of the callback:</p>
<div>
<pre> table.foreach (fn (p: person_t) -> void {
print p.name;
print p.age;
if (p.age > 50)
goto done;
});
@done:
</pre>
</div>
<p>Note what’s going on here: Once we find a person older than 50, we
jump out of the anonymous callback and back into the enclosing
function. The git tree has <a href="http://cgit.freedesktop.org/~sandmann/oort/tree/examples/foreach.nl" >a running
example</a>.</p>
<p><strong>Call/cc in terms of goto</strong><br/>
In Scheme and some other languages there is a feature called call/cc,
which is famous for being both powerful and mind-bending. What it does
is that it takes the concept of “where we are in the program” and
packages it up as a function. This function, called the
<em>continuation</em>, is then passed to another, user-defined, function. If
the user-defined function calls the continuation, the program will
resume from the point where call/cc was invoked. The mind-bending part
is that a continuation can be stored in data structures and called
multiple times, which means the call/cc invocation can in effect
return more than once.</p>
<p>Lexically scoped labels are at least as expressive as call/cc, because
if you have them, you can write call/cc as a function:</p>
<div>
<pre>call_cc (callback: fn (k: fn()->void)) -> void
{
callback (fn() -> void {
goto current_continuation;
});
@current_continuation:
}
</pre>
</div>
<p>Let’s see what’s going on here. A function called call_cc() is defined:</p>
<div>
<pre>call_cc (...) -> void
{
}
</pre>
</div>
<p>This function takes another function as argument:</p>
<div>
<pre>callback: fn (...) -> void
</pre>
</div>
<p>And that function takes the continuation as an argument:</p>
<div>
<pre>k: fn()->void
</pre>
</div>
<p>The body of call/cc calls the callback:</p>
<div>
<pre>callback (...);
</pre>
</div>
<p>passing an anonymous function (the continuation):</p>
<div>
<pre> fn() -> void {
goto current_continuation;
}
@current_continuation:
</pre>
</div>
<p>that just jumps to the point where <code>call_cc</code> returns. So when <code>callback</code>
decides to invoke the continuation, execution will resume at the point
where <code>call_cc</code> was invoked. Since there is nothing stopping
<code>callback</code> from storing the continuation in a data structure or from
invoking it multiple times, we have the full call/cc semantics.</p>
<p><strong>Cooperative thread system</strong><br/>
One of the examples on the <a href="http://en.wikipedia.org/wiki/Call-with-current-continuation" >Wikipedia page about
call/cc</a>
is a cooperative thread system. With the <code>call_cc</code> function above, we
could directly translate the Wikipedia code into Oort, but using the
second aspect of the first-class-ness of labels – that they can be
stored directly in data structures – makes it possible to write a
more straightforward version:</p>
<div>
<pre>run_list: list[label] = new list[label]();
thread_fork (child: fn() -> void)
{
run_list.append (me);
child();
goto run_list.pop_head();
@me:
}
thread_yield()
{
run_list.append (me);
goto run_list.pop_head ();
@me:
}
thread_exit()
{
if (!run_list.is_empty())
goto run_list.pop_head();
else
process_exit();
}
</pre>
</div>
<p>The <code>run_list</code> variable is a list of labels containing the current
positions of all the active threads. The keyword <code>label</code> in Oort is
simply a type specifier similar to <code>string</code>.</p>
<p>To create a new thread, <code>thread_fork</code> first saves the position of the
current thread on the list, and then it calls the child
function. Similarly, <code>thread_yield</code> yields to another thread by saving
the position of the current thread and jumping to the first label on
the list. Exiting a thread consists of jumping to the first thread if
there is one, and exiting the process if there isn’t.</p>
<p>The code above doesn’t actually run because the current Oort
implementation doesn’t support genericity, but
<a href="http://cgit.freedesktop.org/~sandmann/oort/tree/examples/pc.nl" >here</a>
is a somewhat uglier version that actually runs, while still
demonstrating the principle.</p>

Celebrities die 2.7218 at a time
Wed, 26 Jun 2013 19:09:44 GMT
http://www.advogato.org/person/ssp/diary.html?start=28
http://ssp.impulsetrain.com/2013-06-26_Celebrities_die_2_7218_at_a_time.html

<p>The claim that celebrities die in threes is usually dismissed as the
result of the human propensity to see patterns where there are
none. But celebrities don’t die at regularly spaced intervals
either. It would be very weird if a celebrity predictably died on the
14th of every month. And once you deviate from a regularly spaced
pattern, some amount of clustering is inevitable. Can we make this
more precise?</p>
<p>Rather than trying to define exactly what constitutes a celebrity,
I’ll simply assume that they die at a fixed rate and that they do so
independently of each other (<a href="http://www.geeksofdoom.com/2013/02/03/remembering-february-3-1959-the-day-the-music-died" >The Day the Music
Died</a>
notwithstanding). It follows that celebrity deaths form a <a href="http://en.wikipedia.org/wiki/Poisson_process" >Poisson
process</a> with intensity
<mathjax>$\lambda$</mathjax>, where <mathjax>$\lambda$</mathjax> is the average number of deaths in some
fixed time period.</p>
<p>As an example, suppose we define celebrityhood in such a way that
twelve celebrities die each year on average. Then <mathjax>$\lambda =
12/\text{year}$</mathjax>, and because the time between events in a Poisson
process is <a href="http://en.wikipedia.org/wiki/Exponential_distribution" >exponentially
distributed</a>
with parameter <mathjax>$\lambda$</mathjax>, the average time between two deaths is
<mathjax>$1/\lambda$</mathjax> = 1/12th year, or one month.</p>
<p>What does it mean for celebrities to die <mathjax>$n$</mathjax> at a time? We will simply
say that two celebrities die together if the period between their
deaths is shorter than expected. If the celebrity death rate is
12/year, then two celebrities died together if their deaths were less
than one month apart. Similarly, three celebrities died together if
the period between death 1 and death 2 and the period between death 2
and death 3 were both shorter than a month. In general, <mathjax>$k$</mathjax>
celebrities died together if the <mathjax>$k - 1$</mathjax> periods between their deaths
were all shorter than expected.</p>
<p>Here is a diagram of 10 years worth of randomly generated deaths with
12 deaths per year and clusters as defined above highlighted:
<img alt="" src="http://ssp.impulsetrain.com/celebrities/diagram.png"/></p>
<p><strong>Average cluster size</strong><br/>
Suppose a celebrity has just died after a longer than average
wait. This death starts a new cluster, and we want to figure out
its size.</p>
<p>In a Poisson process the waiting time between two events is
exponentially distributed with parameter <mathjax>$\lambda$</mathjax>, so it can be
modelled with a stochastic variable <mathjax>$W \sim Exp(\lambda)$</mathjax>. The cluster
size itself is modelled with another stochastic variable, <mathjax>$C$</mathjax>, whose
distribution is derived as follows.</p>
<p>The cluster size will be 1 when the waiting time for the next death is
larger than or equal to the average (which is <mathjax>$1/\lambda$</mathjax> for the
exponential distribution):</p>
<blockquote>
<p><mathjax>$\text{P}(C = 1) = \text{P}(W > 1/\lambda)$</mathjax></p>
</blockquote>
<p>The probability that the cluster will have size 2 is the same as the
probability that the next waiting time is shorter than average and the
next one after that is longer:</p>
<blockquote>
<p><mathjax>$\text{P}(C = 2) = \text{P}(W \le 1/\lambda)\cdot \text{P}(W > 1/\lambda)$</mathjax></p>
</blockquote>
<p>For size three, it’s the probability that the next two waiting times
are shorter and the third one longer:</p>
<blockquote>
<p><mathjax>$\text{P}(C = 3) = \text{P}(W \le 1/\lambda)^2\cdot \text{P}(W > 1/\lambda)$</mathjax></p>
</blockquote>
<p>In general, the probability that the next cluster will be size <mathjax>$k$</mathjax> is:</p>
<blockquote>
<p><mathjax>$\text{P}(C = k) = \text{P}(W \le 1/\lambda)^{k - 1}\cdot \text{P}(W > 1/\lambda)$</mathjax></p>
</blockquote>
<p>So what’s the average size of a Celebrity Death Cluster? The expected
value of <mathjax>$C$</mathjax> is given by:</p>
<blockquote>
<p><mathjax>$\displaystyle \text{E}[C] = \sum_{k=1}^\infty k \cdot \text{P}(C = k) = \sum_{k=1}^\infty k\cdot \text{P}(W \le 1/\lambda)^{k - 1}\cdot \text{P}(W > 1/\lambda)$</mathjax></p>
</blockquote>
<p>Plugging in the distribution function for the exponential
distribution, we get:</p>
<blockquote>
<p><mathjax>$
\begin{align*} \text{E}[C] &= \sum_{k=1}^\infty k \cdot (1 - e^{- \lambda \cdot (1/\lambda) })^{k - 1} \cdot (1 - (1 - e^{- \lambda \cdot (1 / \lambda)}))\\
&= \sum_{k=1}^\infty k \cdot (1 - e^{- 1})^{k - 1} \cdot e^{-1}
\end{align*}
$</mathjax></p>
</blockquote>
<p>It’s not hard to show that this infinite series has sum <mathjax>$e$</mathjax> (Hint: Use
the fact that <mathjax>$k x^{k - 1}$</mathjax> is the derivative of <mathjax>$x^k$</mathjax>), so on
average, celebrities die <mathjax>$e \approx 2.718$</mathjax> at a time.</p>

The Radix Heap
Sat, 25 May 2013 13:18:35 GMT
http://www.advogato.org/person/ssp/diary.html?start=27
http://ssp.impulsetrain.com/2013-05-25_The_Radix_Heap.html

<p>The <em>Radix Heap</em> is a priority queue that has better caching behavior
than the well-known <a href="http://en.wikipedia.org/wiki/Binary_Heap" >binary heap</a>, but also two restrictions: (a)
that all the keys in the heap are integers and (b) that you can never
insert a new item that is smaller than all the other items currently
in the heap.</p>
<p>These restrictions are not that severe. The Radix Heap still works in
many algorithms that use heaps as a subroutine: Dijkstra’s
shortest-path algorithm, Prim’s minimum spanning tree algorithm,
various sweepline algorithms in computational geometry.</p>
<p>Here is how it works. If we assume that the keys are 32 bit integers,
the radix heap will have 33 buckets, each one containing a list of
items. We also maintain one global value <code>last_deleted</code>, which is
initially <code>MIN_INT</code> and otherwise contains the last value extracted
from the queue.</p>
<p>The invariant is this:</p>
<blockquote>
<p>The items in bucket <mathjax>$k$</mathjax> differ from <code>last_deleted</code> in bit <mathjax>$k - 1$</mathjax>,
but not in bit <mathjax>$k$</mathjax> or higher. The items in bucket 0 are equal to
<code>last_deleted</code>.</p>
</blockquote>
<p>For example, if we compare an item from bucket 10 to <code>last_deleted</code>,
we will find that bits 31–10 are equal, bit 9 is different, and bits
8–0 may or may not be different.</p>
<p>Here is an example of a radix heap where the last extracted value was
7:</p>
<blockquote>
<p><img alt="" src="http://ssp.impulsetrain.com/radix-heap/radix1.png"/></p>
</blockquote>
<p>As an example, consider the item 13 in bucket 4. The bit pattern of 7
is 0111 and the bit pattern of 13 is 1101, so the highest bit that is
different is bit number 3. Therefore the item 13 belongs in bucket <mathjax>$3
+ 1 = 4$</mathjax>. Buckets 1, 2, and 3 are empty, but that’s because a number
that differs from 7 in bits 0, 1, or 2 would be smaller than 7 and so
isn’t allowed in the heap according to restriction (b).</p>
<p><strong>Operations</strong><br/>
When a new item is inserted, it has to be added to the correct
bucket. How can we compute the bucket number? We have to find the
highest bit where the new item differs from <code>last_deleted</code>. This is
easily done by <code>XOR</code>ing them together and then finding the highest bit
in the result. Adding one then gives the bucket number:</p>
<div>
<pre>bucket_no = highest_bit (new_element XOR last_deleted) + 1
</pre>
</div>
<p>where <code>highest_bit(x)</code> is a function that returns the highest set bit
of <code>x</code>, or <mathjax>$-1$</mathjax> if <code>x</code> is 0.</p>
<p>Inserting the item clearly preserves the invariant because the new
item will be in the correct bucket, and <code>last_deleted</code> didn’t change,
so all the existing items are still in the right place.</p>
<p>Extracting the minimum involves first finding the minimal item by
walking the lowest-numbered non-empty bucket and finding the minimal
item in that bucket. Then that item is deleted and <code>last_deleted</code> is
updated. Then the bucket is walked again and all the items are
redistributed into new buckets according to the new <code>last_deleted</code>
item.</p>
<p>The extracted item will be the minimal one in the data structure
because we picked the minimal item in the redistributed bucket, and
all the buckets with lower numbers are empty. And if there were a
smaller item in one of the buckets with higher numbers, it would be
differing from <code>last_deleted</code> in one of the more significant bits, say
bit <mathjax>$k$</mathjax>. But since the items in the redistributed bucket are equal to
<code>last_deleted</code> in bit <mathjax>$k$</mathjax>, the hypothetical smaller item would then
have to also be smaller than <code>last_deleted</code>, which it can’t be because
of restriction (b) mentioned in the introduction. Note that this
argument also works for two’s-complement signed integers.</p>
<p>We have to be sure this doesn’t violate the invariant. First note that
all the items that are being redistributed will satisfy the invariant
because they are simply being inserted. The items in a bucket with a
higher number <mathjax>$k$</mathjax> were all different from the old <code>last_deleted</code> in
the <mathjax>$(k-1)$</mathjax>th bit. This bit must then necessarily also be different
from the <mathjax>$(k-1)$</mathjax>th bit in the new <code>last_deleted</code>, because if it
weren’t, the new <code>last_deleted</code> would itself have belonged in bucket
<mathjax>$k$</mathjax>. And finally, since the bucket being redistributed is the
lowest-numbered non-empty one, there can’t be any items in a bucket
with a lower number. So the invariant still holds.</p>
<p>In the example above, if we extract the two ‘7’s from bucket 0 and the
‘8’ from bucket 4, the new heap will look like this:</p>
<blockquote>
<p><img alt="" src="http://ssp.impulsetrain.com/radix-heap/radix8.png"/></p>
</blockquote>
<p>Notice that bucket 4, where the ‘8’ came from, is now empty.</p>
<p><strong>Performance</strong><br/>
Inserting into the radix heap takes constant time because all we have
to do is add the new item to a list. Determining the highest set bit
can be done in constant time with an instruction such as <code>bsr</code>.</p>
<p>The performance of extraction is dominated by the redistribution of
items. When a bucket is redistributed, it ends up being empty. To see
why, remember that all the items are different from <code>last_deleted</code> in
the <mathjax>$(k - 1)$</mathjax>th bit. Because the new <code>last_deleted</code> comes from bucket
<mathjax>$k$</mathjax>, the items are now all <em>equal</em> to <code>last_deleted</code> in the <mathjax>$(k -
1)$</mathjax>th bit. Hence they will all be redistributed to a lower-numbered
bucket.</p>
<p>Now consider the life-cycle of a single element. In the worst case it
starts out being added to bucket 32 and every time it is
redistributed, it moves to a lower-numbered bucket. When it reaches
bucket 0, it will be next in line for extraction. It follows that the
maximum number of redistributions that an element can experience is
32.</p>
<p>Since a redistribution takes constant time per element distributed,
and since an element will only be redistributed <mathjax>$d$</mathjax> times, where <mathjax>$d$</mathjax>
is the number of bits in the element, it follows that the amortized
time complexity of extraction is <mathjax>$O(d)$</mathjax>. In practice we will often do
better though, because most items will not move through all the
buckets.</p>
<p><strong>Caching performance</strong><br/>
Some descriptions of the radix heap recommend implementing the buckets
as doubly linked lists, but that would be a mistake because linked
lists have terrible cache locality. It is better to implement them as
dynamically growing arrays. If you do that, the top of the buckets
will tend to be hot which means the per-item number of cache misses
during redistribution of a bucket will tend to be <mathjax>$O(1/B)$</mathjax>, where <mathjax>$B$</mathjax>
is the number of integers in a cache line. This means the amortized
cache-miss complexity of extraction will be closer to <mathjax>$O(d/B)$</mathjax> than to
<mathjax>$O(d)$</mathjax>.</p>
<p>In a regular binary heap, both insertion and extraction require
<mathjax>$\Theta(\log n)$</mathjax> swaps in the worst case, and each swap (except for
those very close to the top of the heap) will cause a cache miss.</p>
<p>In other words, if <mathjax>$d = \Theta(\log n)$</mathjax>, extraction from a radix heap will
tend to generate <mathjax>$\Theta(\log n / B)$</mathjax> cache misses, where a binary heap will
require <mathjax>$\Theta(\log n)$</mathjax>.</p>
<!--
Surprisingly, the English-language Wikipedia doesn't have an article
on radix heaps. If someone wants to fix that, feel free to use any
material in this post under whatever license is useful to that end.
-->

Fast Multiplication of Normalized 16 bit Numbers with SSE2
Thu, 16 May 2013 05:14:35 GMT
http://www.advogato.org/person/ssp/diary.html?start=26
http://ssp.impulsetrain.com/2011-07-03_Fast_Multiplication_of_Normalized_16_bit_Numbers_with_SSE2.html

<p>If you are compositing pixels with 16 bits per component, you often
need this computation:</p>
<div>
<pre>uint16_t a, b, r;
r = ((uint32_t)a * b + 0x7fff) / 65535;
</pre>
</div>
<p>There is a well-known way to do this quickly without a division:</p>
<div>
<pre>uint32_t t;
t = (uint32_t)a * b + 0x8000;
r = (t + (t >> 16)) >> 16;
</pre>
</div>
<p>Since we are compositing pixels we want to do this with SSE2
instructions, but because the code above uses 32 bit arithmetic, we
can only do four operations at a time, even though SSE registers have
room for eight 16 bit values. Here is a direct translation (note that
<code>pmulld</code> and <code>packusdw</code> are actually SSE4.1 instructions):</p>
<div>
<pre>a = punpcklwd (a, 0);
b = punpcklwd (b, 0);
a = pmulld (a, b);
a = paddd (a, 0x8000);
b = psrld (a, 16);
a = paddd (a, b);
a = psrld (a, 16);
a = packusdw (a, 0);
</pre>
</div>
<p>But there is another way that better matches SSE2:</p>
<div>
<pre>uint16_t lo, hi, t, r;
hi = (a * b) >> 16;
lo = (a * b) & 0xffff;
t = lo >> 15;
hi += t;
t = hi ^ 0x7fff;
if ((int16_t)lo > (int16_t)t)
lo = 0xffff;
else
lo = 0x0000;
r = hi - lo;
</pre>
</div>
<p>This version is better because it avoids the unpacking to 32
bits. Here is the translation into SSE2:</p>
<div>
<pre>t = pmulhuw (a, b);
a = pmullw (a, b);
b = psrlw (a, 15);
t = paddw (t, b);
b = pxor (t, 0x7fff);
a = pcmpgtw (a, b);
a = psubw (t, a);
</pre>
</div>
<p>This is not only shorter, it also makes use of the full width of the
SSE registers, computing eight results at a time.</p>
<p>Unfortunately SSE2 doesn’t have 8-bit variants of <code>pmulhuw</code>, <code>pmullw</code>, and
<code>psrlw</code>, so we can’t use this trick for the more common case where
pixels have 8 bits per component.</p>
<p>Exercise: Why does the second version work?</p>

Sysprof 1.1.8
Thu, 16 May 2013 05:14:35 GMT
http://www.advogato.org/person/ssp/diary.html?start=25
http://ssp.impulsetrain.com/2011-07-15_Sysprof_1_1_8.html

<p>A new version <a href="http://sysprof.com/sysprof-1.1.8.tar.gz" >1.1.8</a> of
<a href="http://sysprof.com" >Sysprof</a> is out.</p>
<p>This is a release candidate for 1.2.0 and contains mainly bug fixes.</p>

Gamma Correction vs. Premultiplied Pixels
Thu, 16 May 2013 05:14:35 GMT
http://www.advogato.org/person/ssp/diary.html?start=24
http://ssp.impulsetrain.com/2011-08-10_Gamma_Correction_vs__Premultiplied_Pixels.html

<p>Pixels with 8 bits per channel are normally sRGB encoded because that
allocates more bits to darker colors where human vision is the most
sensitive. (Actually, it’s really more of a historical accident, but
sRGB nevertheless remains useful for this reason). The relationship
between sRGB and linear RGB is that you get an sRGB pixel by raising
each component of a linear pixel to the power of <mathjax>$1/2.2$</mathjax>.</p>
<p>A lot of graphics software does alpha blending directly on these sRGB
pixels using alpha values that are linearly coded (i.e., an alpha value
of 0 means no coverage, 0.5 means half coverage, and 1 means full
coverage). Because alpha blending is best done with premultiplied
pixels, such systems store pixels in this format:</p>
<div>
<pre>[ alpha, alpha * red_s, alpha * green_s, alpha * blue_s ]
</pre>
</div>
<p>where alpha is linearly coded, and (<code>red_s</code>, <code>green_s</code>, <code>blue_s</code>) are
sRGB coded. As long as you are happy with blending in sRGB, this works
well. Also, if you simply discard the alpha channel of such pixels and
display them directly on a monitor, it will look as if the pixels were
alpha blended (in the sRGB space) on top of a black background, which
is the desired result.</p>
<p>But what if you want to blend in linear RGB? If you use the format
above, some expensive conversions will be required. To convert to
premultiplied linear, you have to first divide by alpha, then raise
each color to 2.2, then multiply by alpha. To convert back, you must
divide by alpha, raise to <mathjax>$1/2.2$</mathjax>, then multiply with alpha.</p>
<p>The conversions can be avoided if you store the pixels linearly, i.e.,
keeping the premultiplication, but coding red, green, and blue
linearly instead of as sRGB. This makes blending fast, but the
downside is that you need deeper pixels. With only 8 bits per pixel,
the linear coding loses too much precision in darker tones. Another
problem is that to display these pixels, you will either have to
convert them to sRGB, or if the video card can scan them out directly,
you have to make sure that the gamma ramp is set to compensate for the
fact that the monitor expects sRGB pixels.</p>
<p>Can we get the best of both worlds? Yes. The format to use is this:</p>
<div>
<pre>[ alpha, alpha_s * red_s, alpha_s * green_s, alpha_s * blue_s ]
</pre>
</div>
<p>That is, the alpha channel is stored linearly, and the color channels
are stored in sRGB, premultiplied with the alpha value raised to
1/2.2. I.e., the red component is now</p>
<div>
<pre>(red * alpha)^(1/2.2),
</pre>
</div>
<p>where before it was </p>
<div>
<pre>alpha * red^(1/2.2).
</pre>
</div>
<p>It is sufficient to use 8 bits per channel with this format because of
the sRGB encoding. Discarding the alpha channel and displaying the
pixels on a monitor will produce pixels that are alpha blended (in
linear space) against black, as desired.</p>
<p>You can convert to linear RGB simply by raising the R, G, and B
components to 2.2, and back by raising to <mathjax>$1/2.2$</mathjax>. Or, if you feel
like cheating, use an exponent of 2 so that the conversions become a
multiplication and a square root respectively.</p>
<p>This is also the pixel format to use with texture samplers that
implement the sRGB OpenGL extensions
(<a href="http://www.opengl.org/registry/specs/EXT/texture_sRGB.txt" >textures</a>
and
<a href="http://www.opengl.org/registry/specs/ARB/framebuffer_sRGB.txt" >framebuffers</a>). These
extensions say precisely that the R, G, and B components are raised to
2.2 before texture filtering, and raised to 1/2.2 after the final
raster operation.</p>

Over is not Translucency
Thu, 16 May 2013 05:14:35 GMT
http://www.advogato.org/person/ssp/diary.html?start=23
http://ssp.impulsetrain.com/2011-09-26_Over_is_not_Translucency.html

<p>The <a href="http://keithp.com/~keithp/porterduff/" >Porter/Duff</a> Over
operator, also known as the “Normal” blend mode in Photoshop, computes
the amount of light that is reflected when a pixel partially covers
another:</p>
<blockquote>
<p><img alt="The Porter/Duff OVER operator" src="http://ssp.impulsetrain.com/bg-fg.png"/></p>
</blockquote>
<p>The fraction of bg that is covered is denoted alpha. This operator is
the correct one to use when the foreground image is an opaque mask
that partially covers the background:</p>
<blockquote>
<p><img alt="Red mask on blue background" src="http://ssp.impulsetrain.com/big-over1.png"/></p>
</blockquote>
<p>A photon that hits this image will be reflected back to your eyes by
either the foreground or the background, but not both. For each
foreground pixel, the alpha value tells us the probability of each:</p>
<blockquote>
<p><mathjax>$a \cdot \text{fg} + (1 - a) \cdot \text{bg}$</mathjax></p>
</blockquote>
<p>This is the definition of the Porter/Duff Over operator for
non-premultiplied pixels.</p>
<p>But if alpha is interpreted as <em>translucency,</em> then the Over operator
is not the correct one to use. The Over operator will act as if each
pixel is partially covering the background:</p>
<blockquote>
<p><img alt="" src="http://ssp.impulsetrain.com/shaped-over.png"/></p>
</blockquote>
<p>Which is not how translucency works. A translucent material reflects
some light and lets other light through. The light that is let through
is reflected by the background and <em>interacts with the foreground
again</em>.</p>
<p><img align="right" src="http://ssp.impulsetrain.com/Translucency.png" alt="" width="256" height="329"/>Let’s look at this in more detail. Please follow along
in the diagram to the right. First with probability <mathjax>$a$</mathjax>, the
photon is reflected back towards the viewer:</p>
<blockquote>
<p><mathjax>$a \cdot \text{fg}$</mathjax></p>
</blockquote>
<p>With probability <mathjax>$(1 - a)$</mathjax>, it passes through the foreground, hits the
background, and is reflected back out. The photon now hits the
<em>backside</em> of the foreground pixel. With probability <mathjax>$(1 - a)$</mathjax>, the
foreground pixel lets the photon back out to the viewer. The result so
far:</p>
<blockquote>
<p><mathjax>$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)
\end{align*}
$</mathjax></p>
</blockquote>
<p>But we are not done yet, because with probability <mathjax>$a$</mathjax> the foreground pixel reflects the photon once again back towards the background pixel. There it will be reflected, hit the backside of the foreground pixel again, which lets it through to our eyes with probability <mathjax>$(1 - a)$</mathjax>. We get another term where the final <mathjax>$(1 - a)$</mathjax> is replaced with <mathjax>$a \cdot \text{fg} \cdot \text {bg} \cdot (1 - a)$</mathjax>:</p>
<blockquote>
<p><mathjax>$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a)
\end{align*}
$</mathjax></p>
</blockquote>
<p>And so on. In each round, we gain another term which is identical to
the previous one, except that it has an additional <mathjax>$a \cdot \text{fg}
\cdot \text{bg}$</mathjax> factor:</p>
<blockquote>
<p><mathjax>$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a) \\
+&\cdots
\end{align*}
$</mathjax></p>
</blockquote>
<p>or more compactly:</p>
<blockquote>
<p><mathjax>$\displaystyle
a \cdot \text{fg} + (1 - a)^2 \cdot \text{bg} \cdot
\sum_{i=0}^\infty (a \cdot \text{fg} \cdot \text{bg})^i
$</mathjax></p>
</blockquote>
<p>Because we are dealing with pixels, <mathjax>$a$</mathjax>, <mathjax>$\text{fg}$</mathjax>, and
<mathjax>$\text{bg}$</mathjax> are all less than 1, so the sum is a <a href="http://en.wikipedia.org/wiki/Geometric_series" >geometric
series</a>:</p>
<blockquote>
<p><mathjax>$\displaystyle
\sum_{i=0}^\infty x^i = \frac{1}{1 - x}
$</mathjax></p>
</blockquote>
<p>Putting them together, we get:</p>
<blockquote>
<p><mathjax>$\displaystyle
a \cdot \text{fg} + \frac{(1 - a)^2 \cdot \text{bg}}{1 - a \cdot \text{fg} \cdot \text{bg}}
$</mathjax></p>
</blockquote>
<p>I have sidestepped the issue of premultiplication by assuming that
background alpha is 1. The calculations with premultiplied colors are
similar, and for the color components, the result is simply:</p>
<blockquote>
<p><mathjax>$\displaystyle
r = \text{fg} + \frac{(1 - a_\text{fg})^2 \cdot \text{bg}}{1 - \text{fg}\cdot\text{bg}}
$</mathjax></p>
</blockquote>
<p>The issue of destination alpha is more complicated. With the Over
operator, both foreground and background are opaque masks, so the
light that survives both has the same color as the input light. With
translucency, the transmitted light has a different color, which means
the resulting alpha value must in principle be different for each
color component. But that’s not possible for ARGB pixels. A similar
argument to the above shows that the resulting alpha value would be:</p>
<blockquote>
<p><mathjax>$\displaystyle
r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - \text{fg} \cdot \text{bg}}
$</mathjax></p>
</blockquote>
<p>where <mathjax>$b$</mathjax> is the background alpha. The problem is the dependency on
<mathjax>$\text{fg}$</mathjax> and <mathjax>$\text{bg}$</mathjax>. If we simply assume for the purposes of
the alpha computation that <mathjax>$\text{fg}$</mathjax> and <mathjax>$\text{bg}$</mathjax> are equal to
<mathjax>$a$</mathjax> and <mathjax>$b$</mathjax>, we get this:</p>
<blockquote>
<p><mathjax>$\displaystyle
r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - a \cdot b}
$</mathjax></p>
</blockquote>
<p>which is equal to</p>
<blockquote>
<p><mathjax>$\displaystyle
a + \frac{(1 - a)^2 \cdot b}{1 - a \cdot b}
$</mathjax></p>
</blockquote>
<p>I.e., exactly the same computation as the one for the color
channels. So we can define the <em>Translucency Operator</em> as this:</p>
<blockquote>
<p><mathjax>$\displaystyle
r = \text{fg} + \frac{(1 - a)^2 \cdot \text{bg}}{1 - \text{fg} \cdot \text{bg}}
$</mathjax></p>
</blockquote>
<p>for all four channels.</p>
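<p>Applied channel-wise to a premultiplied ARGB pixel, the operator can be sketched like this (a toy implementation; the function name and tuple layout are mine, and the alpha channel uses the approximation above):</p>
<div>
<pre>```python
def translucency(src, dst):
    # src, dst: premultiplied (a, r, g, b) tuples, components in [0, 1).
    # Applies r = fg + (1 - a)^2 * bg / (1 - fg * bg) to every channel,
    # where a is the foreground alpha.
    a = src[0]
    return tuple(fg + (1 - a) ** 2 * bg / (1 - fg * bg)
                 for fg, bg in zip(src, dst))
```</pre>
</div>
<p>Compositing over a fully transparent background leaves the source unchanged, and the alpha channel reproduces the <mathjax>$a + (1 - a)^2 b / (1 - a b)$</mathjax> formula derived above.</p>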
<p>Here is an example of what the operator looks like. The image below is
what you will get if you use the Over operator to implement a
selection rectangle. Mouse over to see what it would look like if you
used the Translucency operator.</p>
<blockquote>
<p><a>
<img src="http://ssp.impulsetrain.com/select-over1.png" alt=""/></a></p>
</blockquote>
<p>Both were computed in linear RGB. Typical implementations will often
compute <a href="select-over-srgb.png" >the Over operator in sRGB</a>, so that’s
what you see if you actually select some icons in Nautilus. If you want to
compare all three, open these in tabs:</p>
<blockquote>
<p><a href="select-over-srgb.png" >Over, in sRGB</a></p>
<p><a href="select-trans.png" >Translucency, in linear RGB</a></p>
<p><a href="select-over.png" >Over, in linear RGB</a></p>
</blockquote>
<p>And for good measure, even though it makes zero sense to do this,</p>
<blockquote>
<p><a href="select-trans-srgb.png" >Translucency, in sRGB</a></p>
</blockquote>Thu, 16 May 2013 05:14:35 GMTSysprof 1.2.0
http://www.advogato.org/person/ssp/diary.html?start=22
http://ssp.impulsetrain.com/2012-09-08_Sysprof_1_2_0.html<p>A <a href="https://lkml.org/lkml/2012/9/8/143" >new stable release</a>
of <a href="http://sysprof.com/" >Sysprof</a> is now available. Download
<a href="http://sysprof.com/sysprof-1.2.0.tar.gz" >version 1.2.0</a>.</p>Thu, 16 May 2013 05:14:35 GMTBig-O Misconceptions
http://www.advogato.org/person/ssp/diary.html?start=21
http://ssp.impulsetrain.com/2012-10-16_Big-O_Misconceptions.html<p>In computer science and sometimes mathematics, big-O notation is used
to talk about how quickly a function grows while disregarding
multiplicative and additive constants. When classifying algorithms,
big-O notation is useful because it lets us abstract away the
differences between real computers as just multiplicative and additive
constants.</p>
<p>Big-O is not a difficult concept at all, but it seems to be common
even for people who should know better to misunderstand some aspects
of it. The following is a list of misconceptions that I have seen in
the wild.</p>
<p>But first a definition: We write</p>
<blockquote>
<p><mathjax>$f(n) = O(g(n))$</mathjax></p>
</blockquote>
<p>when <mathjax>$f(n) \le M g(n)$</mathjax> for sufficiently large <mathjax>$n$</mathjax>, for some positive constant <mathjax>$M$</mathjax>.</p>
<p><b>Misconception 1:</b> “The Equals Sign Means Equality”</p>
<p>The equals sign in</p>
<blockquote>
<p><mathjax>$f(n) = O(g(n))$</mathjax></p>
</blockquote>
<p>is a widespread travestry. If you take it at face value, you can
deduce that since <mathjax>$5 n$</mathjax> and <mathjax>$3 n$</mathjax> are both equal to <mathjax>$O(n)$</mathjax>, then <mathjax>$3 n$</mathjax>
must be equal to <mathjax>$5 n$</mathjax> and so <mathjax>$3 = 5$</mathjax>.</p>
<p>The expression <mathjax>$f(n) = O(g(n))$</mathjax> doesn’t type check. The left-hand-side
is a function, the right-hand-side is a … what, exactly? There is no
help to be found in the definition. It just says “we write” without
concerning itself with the fact that what “we write” is total
nonsense.</p>
<p>The way to interpret the right-hand side is as a <em>set</em> of functions:</p>
<blockquote>
<p><mathjax>$ O(f) = \{ g \mid g(n) \le M f(n) \text{ for some \(M > 0\) for large \(n\)}\}. $</mathjax></p>
</blockquote>
<p>With this definition, the world makes sense again: If <mathjax>$f(n) = 3 n$</mathjax>
and <mathjax>$g(n) = 5 n$</mathjax>, then <mathjax>$f \in O(n)$</mathjax> and <mathjax>$g \in O(n)$</mathjax>, but there
is no equality involved so we can’t make bogus deductions like
<mathjax>$3=5$</mathjax>. We can however make the correct observation that <mathjax>$O(n)
\subseteq O(n \log n)\subseteq O(n^2) \subseteq O(n^3)$</mathjax>, something
that would be difficult to express with the equals sign.</p>
<p><b>Misconception 2:</b> “Informally, Big-O Means ‘Approximately Equal’”</p>
<p>If an algorithm takes <mathjax>$5 n^2$</mathjax> seconds to complete, that algorithm is
<mathjax>$O(n^2)$</mathjax> because for the constant <mathjax>$M=7$</mathjax> and sufficiently large <mathjax>$n$</mathjax>, <mathjax>$5
n^2 \le 7 n^2$</mathjax>. But an algorithm that runs in constant time, say 3
seconds, is also <mathjax>$O(n^2)$</mathjax> because for sufficiently large <mathjax>$n$</mathjax>, <mathjax>$3 \le
n^2$</mathjax>.</p>
<p>So informally, big-O means <em>approximately less than or equal</em>,
not <em>approximately equal</em>.</p>
<p>If someone says “Topological Sort, like other sorting algorithms, is
<mathjax>$O(n \log n)$</mathjax>”, then that is <em>technically</em> correct, but severely
misleading, because Topological Sort is also <mathjax>$O(n)$</mathjax>, which is a subset
of <mathjax>$O(n \log n)$</mathjax>. Chances are whoever said it meant something false.</p>
<p>If someone says “In the worst case, any comparison based sorting
algorithm must make <mathjax>$O(n \log n)$</mathjax> comparisons” that is <em>not</em> a
correct statement. Translated into English it becomes:</p>
<blockquote>
<p>“In the worst case, any comparison based sorting algorithm must make
fewer than or equal to <mathjax>$M n \log (n)$</mathjax> comparisons”</p>
</blockquote>
<p>which is not true: You can easily come up with a comparison based
sorting algorithm that makes more comparisons in the worst case.</p>
<p>To be precise about these things we have other types of notation at
our disposal. Informally:</p>
<blockquote>
<p/><table><tr><td><mathjax>$O()$</mathjax>:</td><td>Less than or equal, disregarding constants</td></tr><tr><td><mathjax>$\Omega()$</mathjax>:</td><td>Greater than or equal, disregarding constants</td></tr><tr><td><mathjax>$o()$</mathjax>:</td><td>Strictly less than, disregarding constants</td></tr><tr><td><mathjax>$\Theta()$</mathjax>:</td><td>Equal to, disregarding constants</td></tr></table></blockquote>
<p>and <a href="http://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations" >some more</a>. The correct statement about lower bounds is this: “In the worst case,
any comparison based sorting algorithm must make <mathjax>$\Omega(n \log n)$</mathjax>
comparisons.” In English that becomes:</p>
<blockquote>
<p>“In the worst case, any comparison based sorting algorithm must make
at least <mathjax>$M n \log (n)$</mathjax> comparisons”</p>
</blockquote>
<p>which is true. And a correct, non-misleading statement about
Topological Sort is that it is <mathjax>$\Theta(n)$</mathjax>, because it has a lower
bound of <mathjax>$\Omega(n)$</mathjax> and an upper bound of <mathjax>$O(n)$</mathjax>.</p>
<p><b>Misconception 3:</b> “Big-O is a Statement About Time”</p>
<p>Big-O is used for making statements about functions. The functions can
measure time or space or cache misses or rabbits on an island or
anything or nothing. Big-O notation doesn’t care.</p>
<p>In fact, when used for algorithms, big-O is almost never about
time. It is about primitive operations.</p>
<p>When someone says that the time complexity of MergeSort is <mathjax>$O(n \log
n)$</mathjax>, they usually mean that the number of comparisons that MergeSort
makes is <mathjax>$O(n \log n)$</mathjax>. That in itself doesn’t tell us what the <em>time</em>
complexity of any particular MergeSort might be, because that would
depend on how much time it takes to make a comparison. In other words,
the <mathjax>$O(n \log n)$</mathjax> refers to <em>comparisons</em> as the primitive operation.</p>
<p>The important point here is that when big-O is applied to algorithms,
there is always an underlying model of computation. The claim that the
<em>time</em> complexity of MergeSort is <mathjax>$O(n \log n)$</mathjax>, is implicitly
referencing a model of computation where a comparison takes constant
time and everything else is free.</p>
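<p>To make “comparisons as the primitive operation” concrete, here is a toy MergeSort instrumented with a comparison counter (my own sketch; everything except the counted comparisons is treated as free):</p>
<div>
<pre>```python
import math
import random

def merge_sort(xs, count=None):
    # Returns (sorted list, number of key comparisons made so far).
    if count is None:
        count = [0]
    if len(xs) <= 1:
        return xs, count[0]
    mid = len(xs) // 2
    left, _ = merge_sort(xs[:mid], count)
    right, _ = merge_sort(xs[mid:], count)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        count[0] += 1                      # one primitive comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count[0]

if __name__ == "__main__":
    random.seed(0)
    n = 1024
    data = random.sample(range(10 * n), n)
    _, comparisons = merge_sort(data)
    print(comparisons, n * math.log2(n))   # comparisons <= n log2 n
```</pre>
</div>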
<p>Which is fine as far as it goes. It lets us compare MergeSort to other
comparison based sorts, such as QuickSort or ShellSort or BubbleSort,
and in many real situations, comparing two sort keys really does take
constant time.</p>
<p>However, it doesn’t allow us to compare MergeSort to RadixSort because
RadixSort is not comparison based. It simply doesn’t ever make a
comparison between two keys, so its time complexity in the comparison
model is 0. The statement that RadixSort is <mathjax>$O(n)$</mathjax> implicitly
references a model in which the keys can be lexicographically picked
apart in constant time. Which is also fine, because in many real
situations, you actually can do that.</p>
<p>To compare RadixSort to MergeSort, we must first define a shared model
of computation. If we are sorting strings that are <mathjax>$k$</mathjax> bytes long, we
might take “read a byte” as a primitive operation that takes constant
time with everything else being free.</p>
<p>In this model, MergeSort makes <mathjax>$O(n \log n)$</mathjax> string comparisons each
of which makes <mathjax>$O(k)$</mathjax> byte comparisons, so the time complexity is
<mathjax>$O(k\cdot n \log n)$</mathjax>. One common implementation of RadixSort will make
<mathjax>$k$</mathjax> passes over the <mathjax>$n$</mathjax> strings with each pass reading one byte, and
so has time complexity <mathjax>$O(n k)$</mathjax>.</p>
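<p>The byte-read model can be instrumented the same way. This is a toy LSD RadixSort over fixed-length byte strings that counts exactly the <mathjax>$n k$</mathjax> “read a byte” operations; the counter and structure are mine:</p>
<div>
<pre>```python
def radix_sort_bytes(strings):
    # LSD radix sort of n byte strings, all of the same length k.
    # Counts "read a byte" as the primitive operation; all else is free.
    k = len(strings[0])
    reads = 0
    for pos in reversed(range(k)):      # one stable pass per byte position
        buckets = [[] for _ in range(256)]
        for s in strings:
            reads += 1                  # read one byte
            buckets[s[pos]].append(s)
        strings = [s for b in buckets for s in b]
    return strings, reads

words = [b"banana", b"apple!", b"cherry", b"damson"]
out, reads = radix_sort_bytes(words)
print(reads)    # n * k = 4 * 6 = 24 byte reads
```</pre>
</div>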
<p><b>Misconception 4:</b> “Big-O Is About Worst Case”</p>
<p>Big-O is often used to make statements about functions that measure
the worst case behavior of an algorithm, but big-O notation doesn’t
imply anything of the sort.</p>
<p>If someone is talking about the randomized QuickSort and says that it
is <mathjax>$O(n \log n)$</mathjax>, they presumably mean that its <em>expected running
time</em> is <mathjax>$O(n \log n)$</mathjax>. If they say that QuickSort is <mathjax>$O(n^2)$</mathjax> they
are probably talking about its worst case complexity. Both statements
can be considered true depending on what type of running time the
functions involved are measuring.</p>Thu, 16 May 2013 05:14:35 GMTPorter/Duff Compositing and Blend Modes
http://www.advogato.org/person/ssp/diary.html?start=20
http://ssp.impulsetrain.com/2013-03-17_Porter_Duff_Compositing_and_Blend_Modes.html<p>In the Porter/Duff compositing algebra, images are equipped with an
alpha channel that determines on a per-pixel basis whether the image
is there or not. When the alpha channel is 1, the image is fully
there, when it is 0, the image isn’t there at all, and when it is in
between, the image is partially there. In other words, the alpha
channel describes the <em>shape</em> of the image, it does not describe
opacity. The way to think of images with an alpha channel is as
irregularly shaped pieces of cardboard, not as colored glass. Consider
these two images:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/source.png"/>
<img src="http://ssp.impulsetrain.com/dest.png"/></p>
</blockquote>
<p>When we combine them, each pixel of the result can be divided into four regions:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/diagram.png"/></p>
</blockquote>
<p>One region where only the source is present, one where only the
destination is present, one where both are present, and one where
neither is present.</p>
<p>By deciding on what happens in each of the four regions, various
effects can be generated. For example, if the destination-only region
is treated as blank, the source-only region is filled with the source
color, and the ‘both’ region is filled with the destination color like
this:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/destatop-diagram.png"/></p>
</blockquote>
<p>The effect is as if the destination image is trimmed to match the
source image, and then held up in front of it:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/destatop.png"/></p>
</blockquote>
<p>The Porter/Duff operator that does this is called “Dest Atop”.</p>
<p>There are twelve of these operators, each one characterized by its
behavior in the three regions: source, destination and both. The
‘neither’ region is always blank. The source and destination regions
can either be blank or filled with the source or destination colors
respectively.</p>
<p>The formula for the operators is a linear combination of the contents
of the four regions, where the weights are the areas of each region:</p>
<blockquote>
<p><mathjax>$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]$</mathjax></p>
</blockquote>
<p>Where <mathjax>$[s]$</mathjax> is either 0 or the color of the source pixel, <mathjax>$[d]$</mathjax>
either 0 or the color of the destination pixel, and <mathjax>$[b]$</mathjax> is either
0, the color of the source pixel, or the color of the destination
pixel. With the alpha channel being interpreted as coverage, the areas
are given by these formulas:</p>
<blockquote>
<p><mathjax>$A_\text{src} = \alpha_\text{s} \cdot (1 - \alpha_\text{d})$</mathjax><br/><mathjax>$A_\text{dest} = \alpha_\text{d} \cdot (1 - \alpha_\text{s})$</mathjax><br/><mathjax>$A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}$</mathjax></p>
</blockquote>
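<p>The three area weights, and an operator built from them, translate directly into code. A minimal sketch (names are mine) using Over, whose region choices are <mathjax>$[s]=s$</mathjax>, <mathjax>$[d]=d$</mathjax>, <mathjax>$[b]=s$</mathjax>; note that feeding unpremultiplied colors into the area-weighted sum yields a premultiplied result:</p>
<div>
<pre>```python
def areas(alpha_s, alpha_d):
    # Region areas under the coverage interpretation of alpha.
    a_src = alpha_s * (1 - alpha_d)
    a_dest = alpha_d * (1 - alpha_s)
    a_both = alpha_s * alpha_d
    return a_src, a_dest, a_both

def over(s, alpha_s, d, alpha_d):
    # A_src*[s] + A_dest*[d] + A_both*[b] with [s]=s, [d]=d, [b]=s.
    a_src, a_dest, a_both = areas(alpha_s, alpha_d)
    color = a_src * s + a_dest * d + a_both * s   # premultiplied result
    alpha = a_src * 1 + a_dest * 1 + a_both * 1
    return color, alpha
```</pre>
</div>
<p>For example, a half-covering white source over an opaque black destination gives color 0.5 and alpha 1, matching the familiar <mathjax>$\alpha_\text{s} + \alpha_\text{d}(1 - \alpha_\text{s})$</mathjax> for the alpha channel.</p>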
<p>The alpha channel of the result is computed in a similar way:</p>
<blockquote>
<p><mathjax>$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$</mathjax></p>
</blockquote>
<p>where <mathjax>$[\text{as}]$</mathjax> and <mathjax>$[\text{ad}]$</mathjax> are either 0 or 1 depending
on whether the source and destination regions are present, and where
<mathjax>$[\text{ab}]$</mathjax> is 0 when the ‘both’ region is blank, and 1 otherwise.</p>
<p>Here is a table of all the Porter/Duff operators:</p>
<table>
<tr><td/>
<td>$[\text{s}]$</td>
<td>$[\text{d}]$</td>
<td>$[\text{b}]$</td>
</tr>
<tr><td>Src</td>
<td>$s$</td>
<td>$0$</td>
<td>$s$</td>
</tr>
<tr><td>Atop</td>
<td>$0$</td>
<td>$d$</td>
<td>$s$</td>
</tr>
<tr><td>Over</td>
<td>$s$</td>
<td>$d$</td>
<td>$s$</td>
</tr>
<tr><td>In</td>
<td>$0$</td>
<td>$0$</td>
<td>$s$</td>
</tr>
<tr><td>Out</td>
<td>$s$</td>
<td>$0$</td>
<td>$0$</td>
</tr>
<tr><td>Dest</td>
<td>$0$</td>
<td>$d$</td>
<td>$d$</td>
</tr>
<tr><td>DestAtop</td>
<td>$s$</td>
<td>$0$</td>
<td>$d$</td>
</tr>
<tr><td>DestOver</td>
<td>$s$</td>
<td>$d$</td>
<td>$d$</td>
</tr>
<tr><td>DestIn</td>
<td>$0$</td>
<td>$0$</td>
<td>$d$</td>
</tr>
<tr><td>DestOut</td>
<td>$0$</td>
<td>$d$</td>
<td>$0$</td>
</tr>
<tr><td>Clear</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
</tr>
<tr><td>Xor</td>
<td>$s$</td>
<td>$d$</td>
<td>$0$</td>
</tr>
</table><p>And here is how they look:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/table.png"/></p>
</blockquote>
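<p>The table of operators translates naturally into a lookup from operator name to its <mathjax>$([s], [d], [b])$</mathjax> row. A toy rendering of the table above (naming and structure are mine), computing one color channel:</p>
<div>
<pre>```python
# "S" = source color, "D" = destination color, None = blank.
PORTER_DUFF = {
    "Src":      ("S", None, "S"),
    "Atop":     (None, "D", "S"),
    "Over":     ("S", "D", "S"),
    "In":       (None, None, "S"),
    "Out":      ("S", None, None),
    "Dest":     (None, "D", "D"),
    "DestAtop": ("S", None, "D"),
    "DestOver": ("S", "D", "D"),
    "DestIn":   (None, None, "D"),
    "DestOut":  (None, "D", None),
    "Clear":    (None, None, None),
    "Xor":      ("S", "D", None),
}

def composite(op, s, a_s, d, a_d):
    # Area-weighted sum over the three visible regions; the alpha
    # channel follows the same pattern with 1/0 in place of s/d.
    pick = {"S": s, "D": d, None: 0.0}
    cs, cd, cb = (pick[x] for x in PORTER_DUFF[op])
    return (a_s * (1 - a_d) * cs
            + a_d * (1 - a_s) * cd
            + a_s * a_d * cb)
```</pre>
</div>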
<p>Despite being referred to as alpha blending and despite alpha often
being used to model opacity, in concept Porter/Duff is not a way to
blend the source and destination shapes. It is a way to overlay, combine
and trim them as if they were pieces of cardboard. The only places
where source and destination pixels are actually <em>blended</em> is where
the antialiased edges meet.</p>
<p><strong>Blending</strong><br/>
Photoshop and the Gimp have a concept of layers which are images
stacked on top of each other. In Porter/Duff, stacking images on top
of each other is done with the “Over” operator, which is also what
Photoshop/Gimp use by default to composite layers:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/over-diagram.png"/>
<img src="http://ssp.impulsetrain.com/over.png"/></p>
</blockquote>
<p>Conceptually, two pieces of cardboard are held up with one in front of
the other. Neither shape is trimmed, and in places where both are
present, only the top layer is visible.</p>
<p>A layer in these programs also has an associated <em>Blend Mode</em> which
can be used to modify what happens in places where both are
visible. For example, the ‘Color Dodge’ blend mode computes a mix of
source and destination according to this formula:</p>
<blockquote>
<p><mathjax>$
\begin{equation*}
B(s,d)=
\begin{cases} 0 & \text{if \(d=0\),}
\\
1 & \text{if \(d \ge (1 - s)\),}
\\
d / (1 - s) & \text{otherwise}
\end{cases}
\end{equation*}
$</mathjax></p>
</blockquote>
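<p>The case formula is a one-liner in code. A small sketch of <mathjax>$B(s,d)$</mathjax> for Color Dodge (the function name is mine), operating on one color channel:</p>
<div>
<pre>```python
def color_dodge(s, d):
    # B(s, d) for the Color Dodge blend mode, per the case formula above.
    if d == 0:
        return 0.0
    if d >= 1 - s:
        return 1.0
    return d / (1 - s)
```</pre>
</div>
<p>This is what gets plugged into the ‘both’ region; the source-only and destination-only regions are unaffected by the blend mode.</p>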
<p>The result is this:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/colordodge-diagram.png"/>
<img src="http://ssp.impulsetrain.com/colordodge-both.png"/></p>
</blockquote>
<p>Unlike with the regular Over operator, in this case there is a
substantial chunk of the output where the result is actually a mix of
the source and destination.</p>
<p>Layers in Photoshop and Gimp are not tailored to each other (except
for layer masks, which we will ignore here), so the compositing of the
layer stack is done with the source-only and destination-only region
set to source and destination respectively. However, there is nothing
in principle stopping us from setting the source-only and
destination-only regions to blank, but keeping the blend mode in the
‘both’ region, so that tailoring could be supported alongside
blending. For example, we could set the ‘source’ region to blank, the
‘destination’ region to the destination color, and the ‘both’ region
to ColorDodge:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/colordodge-dest-diagram.png"/>
<img src="http://ssp.impulsetrain.com/colordodge-dest.png"/></p>
</blockquote>
<p>Here are the four combinations that involve a ColorDodge blend mode:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/colordodge-none.png"/>
<img src="http://ssp.impulsetrain.com/colordodge-source.png"/>
<img src="http://ssp.impulsetrain.com/colordodge-dest.png"/>
<img src="http://ssp.impulsetrain.com/colordodge-both.png"/></p>
</blockquote>
<p>In this model the original twelve Porter/Duff operators can be viewed
as the results of three simple blend modes:</p>
<table>
<tr><td>Source:</td>
<td>$B(s, d) = s$</td>
</tr>
<tr><td>Dest:</td>
<td>$B(s, d) = d$</td>
</tr>
<tr><td>Zero:</td>
<td>$B(s, d) = 0$</td>
</tr>
</table><p>In this generalization of Porter/Duff the blend mode is chosen from a
large set of formulas, and each formula gives rise to four new
compositing operators characterized by whether the source and
destination are blank or contain the corresponding pixel color.</p>
<p>Here is a table of the operators that are generated by various blend
modes:</p>
<blockquote>
<p><img src="http://ssp.impulsetrain.com/colordodge-table.png"/></p>
</blockquote>
<p>The general formula is still an area weighted average:</p>
<blockquote>
<p><mathjax>$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)$</mathjax></p>
</blockquote>
<p>where [s] and [d] are the source and destination colors respectively
or 0, but where <mathjax>$B(s, d)$</mathjax> is no longer restricted to one of <mathjax>$0$</mathjax>, <mathjax>$s$</mathjax>,
and <mathjax>$d$</mathjax>, but can instead be chosen from a large set of formulas.</p>
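<p>The generalized operator is then the same area-weighted sum with <mathjax>$B$</mathjax> plugged into the ‘both’ region, plus two flags saying whether the source-only and destination-only regions are kept or blank. A sketch with names of my own:</p>
<div>
<pre>```python
def blend_composite(B, s, a_s, d, a_d, keep_src=True, keep_dest=True):
    # A_src*[s] + A_dest*[d] + A_both*B(s, d), with [s] and [d]
    # optionally blanked to support tailoring alongside blending.
    return (a_s * (1 - a_d) * (s if keep_src else 0.0)
            + a_d * (1 - a_s) * (d if keep_dest else 0.0)
            + a_s * a_d * B(s, d))

# With B(s, d) = s this reduces to the classic Over operator.
over = blend_composite(lambda s, d: s, 0.8, 1.0, 0.2, 1.0)
```</pre>
</div>
<p>The three simple blend modes <mathjax>$B(s,d)=s$</mathjax>, <mathjax>$B(s,d)=d$</mathjax>, and <mathjax>$B(s,d)=0$</mathjax>, combined with the two flags, recover all twelve Porter/Duff operators.</p>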
<p>The output of the alpha channel is the same as before:</p>
<blockquote>
<p><mathjax>$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] +
A_\text{both} \cdot [\text{ab}]$</mathjax></p>
</blockquote>
<p>except that [ab] is now determined by the blend mode. For the Zero
blend mode there is no coverage in the both region, so [ab] is 0; for
most others, there is full coverage, so [ab] is 1.</p>