mbrubeck is currently certified at Journeyer level.

Name: Matt Brubeck
Member since: 2001-12-24 06:03:41
Last Login: 2011-01-07 00:31:21

Homepage: http://limpet.net/mbrubeck/

Notes:

I work for Mozilla on mobile Firefox; in the past I was a contributor to Debian and to Audacity, a free sound editor and recorder. I live in Seattle, WA, USA.

You can see my home page for more information, or contact me at mbrubeck@limpet.net.

Recent blog entries by mbrubeck

Let's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more.

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

    h1, h2, h3 { margin: auto; color: #cc0000; }
    div.note { margin-bottom: 20px; padding: 10px; }
    #answer { display: none; }

Now I’ll walk through some of the CSS code from my toy browser engine, robinson.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

    struct Stylesheet {
        rules: Vec<Rule>,
    }

A rule includes one or more selectors separated by commas, followed by a list of declarations enclosed in braces.

    struct Rule {
        selectors: Vec<Selector>,
        declarations: Vec<Declaration>,
    }

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

    enum Selector {
        Simple(SimpleSelector),
    }

    struct SimpleSelector {
        tag_name: Option<String>,
        id: Option<String>,
        class: Vec<String>,
    }

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

    struct Declaration {
        name: String,
        value: Value,
    }

My toy engine supports only a handful of CSS’s many value types.

    enum Value {
        Keyword(String),
        Color(u8, u8, u8, u8), // RGBA
        Length(f32, Unit),
        // insert more values here
    }

    enum Unit { Px, /* insert more units here */ }

All other CSS syntax is unsupported, including at-rules, comments, and any selectors/values/units not mentioned above.

Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

    /// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
    fn parse_simple_selector(&mut self) -> SimpleSelector {
        let mut result = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
        while !self.eof() {
            match self.next_char() {
                '#' => {
                    self.consume_char();
                    result.id = Some(self.parse_identifier());
                }
                '.' => {
                    self.consume_char();
                    result.class.push(self.parse_identifier());
                }
                '*' => {
                    // universal selector
                    self.consume_char();
                }
                c if valid_identifier_char(c) => {
                    result.tag_name = Some(self.parse_identifier());
                }
                _ => break
            }
        }
        result
    }

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.
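
For contrast, here’s a rough sketch of the kind of recovery a real parser might do, reusing the helper methods from the Part 2 parser (skip_invalid_selector is hypothetical; robinson doesn’t have it):

    /// Hypothetical sketch: rather than mis-parse an invalid selector,
    /// skip ahead to the next comma or to the start of the declaration
    /// block, loosely following the CSS error-recovery rules.
    fn skip_invalid_selector(&mut self) {
        while !self.eof() {
            match self.next_char() {
                ',' | '{' => break,
                _ => { self.consume_char(); }
            }
        }
    }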

Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

    pub type Specificity = (uint, uint, uint);

    impl Selector {
        pub fn specificity(&self) -> Specificity {
            // http://www.w3.org/TR/selectors/#specificity
            let Simple(ref simple) = *self;
            let a = simple.id.iter().len();
            let b = simple.class.len();
            let c = simple.tag_name.iter().len();
            (a, b, c)
        }
    }

[If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.]
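
A minimal sketch of that calculation, using the types above (chain_specificity is hypothetical; robinson has no such function yet):

    // Hypothetical: the specificity of a selector chain is the
    // component-wise sum of the specificities of its parts.
    fn chain_specificity(parts: &[Selector]) -> Specificity {
        let mut total = (0, 0, 0);
        for part in parts.iter() {
            let (a, b, c) = part.specificity();
            let (ta, tb, tc) = total;
            total = (ta + a, tb + b, tc + c);
        }
        total
    }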

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

    /// Parse a rule set: `<selectors> { <declarations> }`.
    fn parse_rule(&mut self) -> Rule {
        Rule {
            selectors: self.parse_selectors(),
            declarations: self.parse_declarations()
        }
    }

    /// Parse a comma-separated list of selectors.
    fn parse_selectors(&mut self) -> Vec<Selector> {
        let mut selectors = Vec::new();
        loop {
            selectors.push(Simple(self.parse_simple_selector()));
            self.consume_whitespace();
            match self.next_char() {
                ',' => { self.consume_char(); }
                '{' => break, // start of declarations
                c   => fail!("Unexpected character {} in selector list", c)
            }
        }
        // Return selectors with highest specificity first, for use in matching.
        selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
        selectors
    }

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with a “real” parser built on something like rust-peg.

Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.

Shortcuts

Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.
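
For example, here’s a sketch of a hard-coded equivalent of the rule `#answer { display: none; }`, built from the data structures defined above:

    // A hand-built Stylesheet, equivalent to `#answer { display: none; }`.
    let sheet = Stylesheet {
        rules: vec!(Rule {
            selectors: vec!(Simple(SimpleSelector {
                tag_name: None,
                id: Some("answer".to_string()),
                class: Vec::new(),
            })),
            declarations: vec!(Declaration {
                name: "display".to_string(),
                value: Keyword("none".to_string()),
            }),
        }),
    };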

To be continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some upcoming articles. I’ll keep them coming as fast as I can!

Syndicated 2014-08-13 19:30:00 from Matt Brubeck

Let's build a browser engine! Part 2: HTML

This is the second in a series of articles on building a toy browser rendering engine.

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

    <html>
        <body>
            <h1>Title</h1>
            <div id="main" class="test">
                <p>Hello <em>world</em>!</p>
            </div>
        </body>
    </html>

The following syntax is allowed:

  • Balanced tags: <p>...</p>
  • Attributes with quoted values: id="main"
  • Text nodes: <em>world</em>

Everything else is unsupported, including:

  • Namespaces: <html:body>
  • Self-closing tags: <br/> or <br> with no closing tag
  • Character encoding detection.
  • Escaped characters (like &amp;) and CDATA blocks.
  • Comments, processing instructions, and doctype declarations.
  • Error handling (e.g. unbalanced or improperly nested tags).

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

    struct Parser {
        pos: uint,
        input: String,
    }

We can use this to implement some simple methods for peeking at the next characters in the input:

    impl Parser {
        /// Read the next character without consuming it.
        fn next_char(&self) -> char {
            self.input.as_slice().char_at(self.pos)
        }

        /// Do the next characters start with the given string?
        fn starts_with(&self, s: &str) -> bool {
            self.input.as_slice().slice_from(self.pos).starts_with(s)
        }

        /// Return true if all input is consumed.
        fn eof(&self) -> bool {
            self.pos >= self.input.len()
        }

        // ...
    }

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at, which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

    /// Return the current character, and advance to the next character.
    fn consume_char(&mut self) -> char {
        let range = self.input.as_slice().char_range_at(self.pos);
        self.pos = range.next;
        range.ch
    }

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

    /// Consume characters until `test` returns false.
    fn consume_while(&mut self, test: |char| -> bool) -> String {
        let mut result = String::new();
        while !self.eof() && test(self.next_char()) {
            result.push_char(self.consume_char());
        }
        result
    }

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

    /// Consume and discard zero or more whitespace characters.
    fn consume_whitespace(&mut self) {
        self.consume_while(|c| c.is_whitespace());
    }

    /// Parse a tag or attribute name.
    fn parse_tag_name(&mut self) -> String {
        self.consume_while(|c| match c {
            'a'..'z' | 'A'..'Z' | '0'..'9' => true,
            _ => false
        })
    }

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

    /// Parse a single node.
    fn parse_node(&mut self) -> dom::Node {
        match self.next_char() {
            '<' => self.parse_element(),
            _   => self.parse_text()
        }
    }

    /// Parse a text node.
    fn parse_text(&mut self) -> dom::Node {
        dom::text(self.consume_while(|c| c != '<'))
    }

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

    /// Parse a single element, including its open tag, contents, and closing tag.
    fn parse_element(&mut self) -> dom::Node {
        // Opening tag.
        assert!(self.consume_char() == '<');
        let tag_name = self.parse_tag_name();
        let attrs = self.parse_attributes();
        assert!(self.consume_char() == '>');

        // Contents.
        let children = self.parse_nodes();

        // Closing tag.
        assert!(self.consume_char() == '<');
        assert!(self.consume_char() == '/');
        assert!(self.parse_tag_name() == tag_name);
        assert!(self.consume_char() == '>');

        dom::elem(tag_name, attrs, children)
    }

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

    /// Parse a single name="value" pair.
    fn parse_attr(&mut self) -> (String, String) {
        let name = self.parse_tag_name();
        assert!(self.consume_char() == '=');
        let value = self.parse_attr_value();
        (name, value)
    }

    /// Parse a quoted value.
    fn parse_attr_value(&mut self) -> String {
        let open_quote = self.consume_char();
        assert!(open_quote == '"' || open_quote == '\'');
        let value = self.consume_while(|c| c != open_quote);
        assert!(self.consume_char() == open_quote);
        value
    }

    /// Parse a list of name="value" pairs, separated by whitespace.
    fn parse_attributes(&mut self) -> dom::AttrMap {
        let mut attributes = HashMap::new();
        loop {
            self.consume_whitespace();
            if self.next_char() == '>' {
                break;
            }
            let (name, value) = self.parse_attr();
            attributes.insert(name, value);
        }
        attributes
    }

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

    /// Parse a sequence of sibling nodes.
    fn parse_nodes(&mut self) -> Vec<dom::Node> {
        let mut nodes = vec!();
        loop {
            self.consume_whitespace();
            if self.eof() || self.starts_with("</") {
                break;
            }
            nodes.push(self.parse_node());
        }
        nodes
    }

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

    /// Parse an HTML document and return the root element.
    pub fn parse(source: String) -> dom::Node {
        let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

        // If the document contains a root element, just return it. Otherwise, create one.
        if nodes.len() == 1 {
            nodes.swap_remove(0).unwrap()
        } else {
            dom::elem("html".to_string(), HashMap::new(), nodes)
        }
    }

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.
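
Using it is a one-liner. For example (a sketch, assuming the parser lives in a module named `html` as it does in robinson):

    // Parse a small document into a DOM tree.
    let root = html::parse("<html><body>Hello!</body></html>".to_string());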

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

    // <html><body>Hello, world!</body></html>
    let root = element("html");
    let body = element("body");
    root.children.push(body);
    body.children.push(text("Hello, world!"));

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

Syndicated 2014-08-11 15:00:00 from Matt Brubeck

Let's build a browser engine!

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles describing my project and how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

    struct Node {
        // data common to all nodes:
        children: Vec<Node>,

        // data specific to each node type:
        node_type: NodeType,
    }

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

    enum NodeType {
        Text(String),
        Element(ElementData),
    }

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

    struct ElementData {
        tag_name: String,
        attributes: AttrMap,
    }

    type AttrMap = HashMap<String, String>;

Finally, some constructor functions to make it easy to create new nodes:

    impl Node {
        fn new(children: Vec<Node>, node_type: NodeType) -> Node {
            Node { children: children, node_type: node_type }
        }
    }

    fn text(data: String) -> Node {
        Node::new(vec!(), Text(data))
    }

    fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
        Node::new(children, Element(ElementData {
            tag_name: name,
            attributes: attrs,
        }))
    }

And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.
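
As a quick example, here’s a sketch that builds the tree for `<html><body>Hello!</body></html>` using these constructors (assuming HashMap is in scope):

    // Build a tiny two-element tree with one text node by hand.
    let tree = elem("html".to_string(), HashMap::new(), vec!(
        elem("body".to_string(), HashMap::new(), vec!(
            text("Hello!".to_string()),
        )),
    ));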

Exercises

These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes.

References

Here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

Syndicated 2014-08-08 16:40:00 from Matt Brubeck

Better automated detection of Firefox performance regressions

Last spring I spent some of my spare time improving the automated script that detects regressions in Talos and other Firefox performance data. I’m finally writing up some of that work in case it’s useful or interesting to anyone else.

Talos is a system for running performance benchmarks; we use it to run a suite of benchmarks every time a change is pushed to the Firefox source repository. The Talos test harness reports these results to the graph server which stores them and can plot the recorded data to show how it changes over time.

Like most performance measurements, Talos benchmarks can be noisy. We need to use statistics to separate signal from noise. To determine whether a change to the source code caused a change in the benchmark results, an automated script takes multiple datapoints from before and after each push. It computes the average and standard deviation of the “before” datapoints and the “after” datapoints, and uses a Student’s t-test to estimate the likelihood that the datasets are significantly different. If the t-test exceeds a certain threshold, the script sends email to the author(s) of the responsible patches and to the dev-tree-management mailing list.
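
The script itself isn’t reproduced here, but the core statistic looks roughly like this (an illustrative sketch only, written in Rust to match the code elsewhere on this page; the unpooled form shown is Welch’s variant of the t-test, and the real script differs in details):

    // Two-sample t statistic: the difference between the window means,
    // scaled by the noise (variance) within each window. Assumes each
    // window has at least two datapoints.
    fn t_score(before: &[f64], after: &[f64]) -> f64 {
        let mean = |xs: &[f64]| xs.iter().fold(0.0, |a, &b| a + b) / xs.len() as f64;
        let var = |xs: &[f64], m: f64| {
            xs.iter().fold(0.0, |a, &b| a + (b - m) * (b - m)) / (xs.len() - 1) as f64
        };
        let (mb, ma) = (mean(before), mean(after));
        (ma - mb).abs() / (var(before, mb) / before.len() as f64
                         + var(after, ma) / after.len() as f64).sqrt()
    }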

By nature, these statistical estimates can never be 100% certain. However, we need to make sure that they are correct as often as possible. False negatives mean that real regressions go undetected. But false positives will generate extra work, and may cause developers to ignore future regression alerts. I started inspecting graph server data and regression alerts by hand, recording and investigating any false negatives or false positives I found, and filed bugs to fix the causes of those errors.

Some of these were straightforward implementation bugs, like one where an infinite t-test score (near certain likelihood of regression) was treated as a zero score (no regression at all). Others involved tuning the number of datapoints and the threshold for sending alerts.

Some fixes required more involved changes to the analysis. For example, if one code change actually caused a regression, the pushes right before or after that change will also appear somewhat likely to be responsible for the regression (because they will also have large differences in average score between their “before” and “after” windows). If multiple pushes in a row had t-test scores over the threshold, the script used to send an alert for the first of those pushes, even if it was not the most likely culprit. Now the script blames the push with the highest t-test score, which is almost always the guilty party. This change had the biggest impact in reducing incorrect regression alerts.

After those changes, there was still one common cause of false alarms that I found. The regression analyzer compares the 12 datapoints before each push to the 12 datapoints after it. But these 12-point moving averages could change suddenly not just at the point where a regression happened, but also at an unrelated point that happens to be 12 pushes earlier or later. This caused spooky action at a distance where a regression in one push would cause a false alarm in a completely different push. To fix this, we now compute weighted averages with “triangular” weighting functions that give more weight to the point being analyzed, and fall off gradually with increasing distance from that point. This smooths out changes at the opposite ends of the moving windows.
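
Here’s an illustrative sketch of that kind of triangular weighting (not the actual analysis code):

    // Weighted average over a window of datapoints, where window[0] is
    // the point nearest the push being analyzed. Weights fall off
    // linearly with distance: n, n-1, ..., 1.
    fn weighted_average(window: &[f64]) -> f64 {
        let n = window.len();
        let mut sum = 0.0;
        let mut total_weight = 0.0;
        for (i, &value) in window.iter().enumerate() {
            let weight = (n - i) as f64;
            sum += weight * value;
            total_weight += weight;
        }
        sum / total_weight
    }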

There are still occasional errors in regression detection, but as far as I can tell most of them are caused by genuinely misleading random noise or bimodal data. If you see any problems with regression emails, please file a bug (and CC :mbrubeck) and we’ll take a look at it.

Syndicated 2013-11-10 19:53:00 from Matt Brubeck

A good time to try Firefox for Metro

“Firefox for Metro” is our project to build a new Firefox user interface designed for touch-screen devices running Windows 8. (“Metro” was Microsoft’s code name for the new, touch-friendly user interface mode in Windows 8.) I’m part of the small team working on this project.

For the past year we’ve been fairly quiet, partly because the browser has been under heavy construction and not really suitable for regular use. It started as a fork of the old Fennec (mobile Firefox) UI, plus a new port of Gecko’s widget layer to Microsoft’s WinRT API. We spent part of that time ripping out and rebuilding old Fennec features to make them work on Windows 8, and finding and fixing bugs in the new widget code. More recently we’ve been focused on reworking the touch input layer. With a ton of help from the graphics team, we replaced Fennec’s old multi-process JavaScript touch support with a new off-main-thread compositing backend for the Windows Direct3D API, and added WinRT support to the async pan/zoom module that implements touch scrolling and zooming on Firefox OS.

All this work is still underway, but in the past week we finally reached a tipping point where I’m able to use Firefox for Metro for most of my everyday browsing. There are still bugs, and we are still actively working on performance and completing the UI work, but I’m now finding very few cases where I need to switch to another browser because of a problem with Firefox for Metro. If you are using Windows 8 (especially on a touch-screen PC) and are the type of brave person who uses Firefox nightly builds, this would be a great time to try Metro-style Firefox and let us know what you think!

Looking to the future, here are some of our remaining development priorities for the first release of Firefox for Metro:

  • Improve the installation and first-run experience, to help users figure out how to use the new UI and switch between “Metro” and desktop modes. (Our UX designer has user testing planned to help identify issues here and throughout the product.)

  • Fix any performance and rendering issues with scrolling and zooming, and add support for double-tap to zoom in on a specific page element.

  • Make the Metro and desktop interfaces share a profile, so they can seamlessly use the same bookmarks and other data without connecting to a Firefox Sync account.

And here are some things that I hope we can spend more time on once that work has shipped:

  • Improve the experience on pages with plugins, which currently require the user to switch to the desktop Firefox interface (bug 936907).

  • Implement a “Reader Mode,” like Firefox for Android. (A pair of students have started working on this project, and their work should also be useful for adding Reader Mode to Firefox for desktop.)

  • Add more features, and more ways to customize and tweak the Metro UI.

If you want to contribute to any of this work, please check out our developer documentation and come chat with us in #windev on irc.mozilla.org or on our project mailing list!

Syndicated 2013-11-10 17:20:00 from Matt Brubeck

mbrubeck certified others as follows:

  • mbrubeck certified mbrubeck as Journeyer
  • mbrubeck certified habes as Journeyer
  • mbrubeck certified cinamod as Master
  • mbrubeck certified samth as Journeyer
  • mbrubeck certified drow as Master
  • mbrubeck certified Sunir as Journeyer
  • mbrubeck certified hub as Journeyer
  • mbrubeck certified tod as Apprentice
  • mbrubeck certified Bram as Journeyer
  • mbrubeck certified JesseR as Journeyer
  • mbrubeck certified mwh as Journeyer
  • mbrubeck certified xiphmont as Master
  • mbrubeck certified whytheluckystiff as Master

Others have certified mbrubeck as follows:

  • mbrubeck certified mbrubeck as Journeyer
  • chipx86 certified mbrubeck as Apprentice
  • SteveMallett certified mbrubeck as Apprentice
  • thomasvs certified mbrubeck as Apprentice
  • hub certified mbrubeck as Apprentice
  • KlausWuestefeld certified mbrubeck as Apprentice
  • tod certified mbrubeck as Apprentice
  • trs80 certified mbrubeck as Apprentice
  • jao certified mbrubeck as Apprentice
  • salmoni certified mbrubeck as Journeyer
  • habes certified mbrubeck as Journeyer
  • mattr certified mbrubeck as Journeyer
  • cTaylor certified mbrubeck as Journeyer
  • sand certified mbrubeck as Journeyer
  • mpr certified mbrubeck as Journeyer
  • realblades certified mbrubeck as Apprentice
  • fxn certified mbrubeck as Journeyer
  • joshuat certified mbrubeck as Journeyer
  • zotz certified mbrubeck as Journeyer
  • ebf certified mbrubeck as Journeyer
  • mdupont certified mbrubeck as Journeyer
  • mishan certified mbrubeck as Journeyer
  • lerdsuwa certified mbrubeck as Journeyer
  • e8johan certified mbrubeck as Journeyer
  • pvanhoof certified mbrubeck as Journeyer
  • araeed156 certified mbrubeck as Journeyer
  • chakie certified mbrubeck as Journeyer
  • hiddenpower certified mbrubeck as Journeyer
  • ittner certified mbrubeck as Journeyer
  • softkid certified mbrubeck as Apprentice
