$str = '<ul>';
foreach ($talks as $talk) {
  $str .= '<li>' . $talk->name . '</li>';
}
$str .= '</ul>';

Way back in time, in the early days of Facebook when Mark Zuckerberg was still in his dorm room, the way you would build websites with PHP was string concatenation. It turns out that it's a very good way to build websites: whether you are back-end, front-end, or even have no programming experience at all, you can build a big website.
String Concatenation — 2004: XSS Injection!

The only issue with that way of programming is that it's insecure. If you use this exact code, an attacker can execute arbitrary JavaScript: if $talk->name contains something like <script>…</script>, it gets injected verbatim into the page. This is especially bad for Facebook since this code is going to be executed in the user context, so you can basically take over the user account and do anything; forget to escape in a single place and you are vulnerable. Worse, for most inputs it's actually going to render fine for the developer working on the feature, so there is very little incentive for them to add proper escaping.
And the property that's really bad is that you need every single call site, in millions of lines of code written by hundreds of engineers, to be safe. Make one mistake and you are subject to account takeover.
You can't over-escape

One idea to escape this impossible situation is to just escape everything, no matter what. Unfortunately it doesn't quite work: if you double-escape a string, the user sees the escape sequences themselves (for example, & escaped twice displays as &amp;). And if you accidentally escape markup, then it's going to show raw HTML to the user!
$content = <ul />;
foreach ($talks as $talk) {
  $content->appendChild(<li>{$talk->name}</li>);
}

The solution we came up with at Facebook is to extend the syntax of PHP to let the developer write markup directly. In this case <ul /> is not in a string anymore.
Now, everything that's markup is written using a different syntax, so we know not to escape it when generating the HTML. Everything else is considered an untrusted string and automatically escaped. We get to keep the ease of development while being secure.
foreach ($talks as $talk) {
  $content->appendChild(<talk talk={$talk} />);
}

Once XHP was introduced, it wasn't long until people realized that they could create custom tags. It turns out that custom tags let you build very big applications easily by composing many of those tags. This is one implementation of the ideas behind the Semantic Web and Web Components.
At some point we wanted to move more of the rendering to the client, to avoid the latency between client and server. We tried many techniques, like a cross-browser DOM library and a data-binding approach, but none of them really worked well for us.

Given this state of the world, Jordan Walke, a front-end engineer, pitched his manager the idea of porting XHP to JavaScript. He somehow managed to get six months to work on it in order to prove the concept.

The first time I heard about this project, I thought there was absolutely no way it was going to work, but that in the rare chance it did, it would be huge. When I finally got to play with it, I immediately started evangelizing it :)
var content = <TalkList>
  {talks.map(talk => <Talk talk={talk} />)}
</TalkList>;

ES6 Arrow Function

The first task was to write an extension of JavaScript that supports this weird XML syntax. It turns out that at Facebook we had been using JavaScript transforms for a while. In this example, I'm also using arrow functions, the alternative way to write functions from ES6, the next JavaScript standard.

It took about a week to implement JSX, and it's not really the most important part of React.
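To give an idea of what the transform does: JSX is just syntactic sugar for plain function calls that build a tree description. A rough sketch of the general shape (the exact output has changed across React versions; this is illustrative, not Facebook's original transform):

// What the JSX above roughly desugars to: nested function calls.
var content = React.createElement(
  TalkList,
  null,
  talks.map(function (talk) {
    return React.createElement(Talk, { talk: talk });
  })
);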
What was way more challenging was to reproduce the update mechanism of PHP. It's really simple: whenever anything changes, you navigate and the server sends back an entire new page. From a developer's point of view this makes writing apps extremely easy, as you don't have to worry about mutations and making sure everything is in sync when something changes in your UI.

However, the question that everybody asks: isn't it going to be super slow?
It turns out to be faster than the previous implementations! After two years of production usage, I can confidently say that it's surprisingly faster than most of the code we replaced with it. In the rest of this talk I'm going to explain the big optimizations that make this possible.
My teacher at school, Demaille, used to say that you need to be right before being good. What he meant is that if you are trying to build something performant, you have a much higher chance of succeeding if you first build a naive but working implementation and then iterate on its performance, rather than trying to build it the best way from the start.
Let's follow his advice and first implement the most naive version: whenever anything changes, we build an entire new DOM and replace the old one.
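As a minimal sketch of that naive strategy (renderApp and state are placeholder names, not React's API):

// Naive re-rendering: throw away the whole DOM on every change.
// renderApp(state) is assumed to return an HTML string for the app.
function rerender() {
  document.body.innerHTML = renderApp(state);
}

// Call rerender() after every state change.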
This kind of works, but there are a lot of edge cases. If you blow away the DOM, you lose the currently focused element and cursor, and the same goes for the text selection and scroll position. What this really means is that DOM nodes contain state.

The first attempt was to try to restore that state: we would remember the focused input and focus the new element, and do the same for the cursor and scroll position. Unfortunately, this isn't enough.
If you are using a Mac and scrolling, you get inertia. It turns out that there is no JavaScript API to read or write scrolling inertia. For iframes it's even worse: if the iframe is from another domain, the security policy disallows you from even looking at what's inside, so you cannot restore it. Not only is the DOM stateful, it contains hidden state!
To get around this, the idea is that instead of blowing away the DOM and recreating a new one, we're going to reuse the DOM nodes that stayed the same between two renders.
Instead of removing the previous DOM tree and replacing it with the new one, we're going to match nodes: if a node didn't change, we discard the new one and keep the old one, which is currently rendered on screen.
We repeat the process, and at some point we see a new node that wasn't there before. In this case, we move the new node into the old (and currently rendered on screen) DOM tree.
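A minimal sketch of this matching idea (a toy version, not React's actual algorithm; matching by tag name only is a simplification):

// Toy reconciliation: walk old and new trees together and keep
// old nodes whenever the tag name matches, so their state survives.
function updateTree(oldNode, newNode) {
  if (oldNode.tagName !== newNode.tagName) {
    // Different kind of node: give up and swap it out entirely.
    oldNode.replaceWith(newNode);
    return;
  }
  // Same kind of node: keep the old one, just sync its children.
  var oldChildren = Array.from(oldNode.children);
  var newChildren = Array.from(newNode.children);
  for (var i = 0; i < newChildren.length; i++) {
    if (i < oldChildren.length) {
      updateTree(oldChildren[i], newChildren[i]);
    } else {
      // A node that wasn't there before: move it into the old tree.
      oldNode.appendChild(newChildren[i]);
    }
  }
  // Remove old children that no longer exist in the new render.
  for (var j = newChildren.length; j < oldChildren.length; j++) {
    oldChildren[j].remove();
  }
}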
"…the DOM" — AdonisSMU

We now have a general idea of how we want React to work, but not a specific plan. This is the moment when I pull the Analogy card out of my hat.
Back in the dark ages of programming, if you wanted someone else to try out your code, you would create a zip file and send it to them. If you changed anything, you would send a whole new zip file.
Then source control came along. The way it works is that it takes those snapshots of the code and, using a diff algorithm, generates a list of mutations: "remove those 5 lines", "add 3 lines", "replace this word"…
Optimal Diff — O(n³)

So, like any good engineers, we looked at diff algorithms for trees and found that the optimal solution runs in O(n³).

Let's say we've got a page with 10,000 DOM nodes. It's big, but not unthinkable. To get an order of magnitude, assume we can do one operation per CPU cycle (not going to happen) on a 1 GHz machine: 10,000³ = 10¹² operations, which is about 1,000 seconds. A quarter of an hour to compute a single update is clearly not acceptable, so React instead uses an O(n) heuristic based on matching nodes.
In order to understand why, the best way is a small example. Let's put ourselves in React's shoes for a minute. We see that the first render had three <input /> elements and the next one only has two. The question is: how do you match them?
The obvious solution is to match the first two inputs and remove the third one. A less obvious solution, but still totally valid, is to remove all the previous elements and create two new ones. At this point, we don't have enough information to do the matching properly, and we want to be able to handle all of these use cases.
A more promising attribute is id. In a form context, it usually contains the id of the model that the input corresponds to, for example <input id="i5235" />.
With ids, we're able to match the two lists successfully! (Did you notice that this was yet another matching, different from the three examples I showed before?)
But if you are submitting the form via AJAX instead of letting the browser do it, you're unlikely to put that id attribute in the DOM.

So React introduces the key attribute, as in <input key="i5235" />. Its only job is to help the diff algorithm do the matching.
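In JSX, that typically looks like this when rendering a list (the talk.id field is illustrative):

// Each child in a list gets a stable key so the diff algorithm
// can match old and new elements even when the list is reordered.
var items = talks.map(function (talk) {
  return <li key={talk.id}>{talk.name}</li>;
});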
A DOM node is used for a lot of steps in the browser rendering pipeline. The browser first looks at the CSS rules, finds the ones that match the node, and stores a variety of metadata in the process to make this faster; for example, it maintains a map from id to DOM node.

Then it takes those styles and computes the layout, which gives each node a size and position on the screen. Again, lots of metadata: the browser avoids recomputing layout as much as possible and caches previously computed values.

Then, at some point, it actually paints pixels into a buffer, either on the CPU or the GPU.

All those steps require intermediate representations that use memory and CPU. Browsers do a very good job of optimizing this entire pipeline.
But if you think about what's happening in React, we only use those DOM nodes in the diff algorithm. So we can instead use a much lighter JavaScript object that just contains the tag name and attributes. We call it the virtual DOM.
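Conceptually, a virtual DOM node can be as simple as this (a sketch of the idea, not React's internal representation):

// A virtual DOM node: plain data, no styles, no layout, no pixels.
var vnode = {
  tag: 'input',
  attributes: { id: 'i5235' },
  children: [],
};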
[Chart: React's growth from 9/16/2013 to 7/1/2014]

React got extremely popular in just a year. If it continues to grow at this rate, it's going to be the biggest Facebook open source project within a couple of months!
People are not just starring the project, they also use it in production. For example, the New York Times is using React to spice up their big news coverage, like the Festival de Cannes and the World Cup.
Not only are people using React, they are contributing back! And it's not only typos in the docs: the next two optimizations were brought to life by the community.
Reflows and repaints: main contributors to sluggish JavaScript

We've talked about the DOM being slow; the second source of slowness is reflows and repaints. Those scary words just mean that when you modify the DOM, the browser has to update the position of elements and then update the actual pixels.

When you read certain attributes from the DOM, the browser, in order to give you a consistent view, has to trigger those expensive operations. So if you are doing a "read, write, read, write…" sequence of operations, you're going to trigger expensive reflows and repaints without knowing it.

To mitigate this, the idea is to reorder the "read, write, read, write…" sequence into "read, read, read…" then "write, write, write…". Just as string concatenation was insecure by default, writing JavaScript applications in the conventional way is very prone to triggering reflows and repaints.
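To illustrate the pattern (reading offsetWidth forces layout; the specific elements are illustrative):

// Interleaved reads and writes: each read can force a synchronous
// layout because the browser must flush the pending write first.
boxes.forEach(function (box) {
  var width = box.offsetWidth;            // read (forces layout)
  box.style.width = width / 2 + 'px';     // write (invalidates layout)
});

// Batched version: all reads first, then all writes,
// so layout is computed at most once.
var widths = boxes.map(function (box) {
  return box.offsetWidth;                 // reads
});
boxes.forEach(function (box, i) {
  box.style.width = widths[i] / 2 + 'px'; // writes
});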
When something changes, you call setState on a component. React will just mark the component as dirty but will not compute anything right away. If you call setState on the same component multiple times, it will be just as performant.
Then, in a single batch, we re-render the dirty components to produce a new virtual DOM and feed it to the diff algorithm, which outputs DOM mutations. Nowhere in this process do we have to read from the DOM: React is (outside of optimizations I'm not going to cover in this talk) write-only.
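A rough sketch of that batching behavior (toy code; pendingState and rerender are illustrative names, and React's actual scheduling is more involved):

// Toy version of setState batching: mark components dirty and
// flush them all at once on the next tick.
var dirtyComponents = new Set();
var flushScheduled = false;

function setState(component, partialState) {
  Object.assign(component.pendingState, partialState);
  dirtyComponents.add(component);   // marking twice is a no-op
  if (!flushScheduled) {
    flushScheduled = true;
    setTimeout(flush, 0);           // defer the work to the next tick
  }
}

function flush() {
  flushScheduled = false;
  dirtyComponents.forEach(function (component) {
    component.rerender();           // virtual DOM render + diff, then writes
  });
  dirtyComponents.clear();
}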
The mental model is "re-render everything when anything changes". In practice this is not exactly what happens: we only re-render the subtrees of components that have been flagged by setState.
On top of that, you can implement shouldComponentUpdate, which, given both the previous and next state/props, can say: "You know what, nothing changed, let's just skip re-rendering this subtree."
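For example (using the createClass style of the era; the talk prop is illustrative):

var Talk = React.createClass({
  shouldComponentUpdate: function (nextProps, nextState) {
    // Skip re-rendering this subtree when the talk prop is unchanged.
    return nextProps.talk !== this.props.talk;
  },
  render: function () {
    return <li>{this.props.talk.name}</li>;
  },
});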
But for a long time we did not quite know how to actually implement it correctly.

The problem is that in JavaScript you often use objects to hold state and mutate them directly. This means that the previous and next versions of the state are the same object reference. So when you compare the previous version with the next one, the comparison says nothing changed, even though something did.
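A small example of the pitfall:

// Mutation defeats reference comparison: prev and state point to
// the very same object, so no change is ever detected.
var state = { items: ['a'] };
var prev = state;            // not a copy, just another reference
state.items.push('b');       // mutate in place

console.log(prev === state); // true: it looks "unchanged"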
David Nolen, from the New York Times, figured out a good solution. In ClojureScript almost all values are immutable, meaning that when you update one, you get a new object and the old one is left untouched. This works very well with shouldComponentUpdate.

He wrote a library on top of React in ClojureScript, called Om, which uses immutable data structures to implement shouldComponentUpdate by default.
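The same idea in plain JavaScript (hand-rolled immutability, not Om or ClojureScript):

// Immutable update: build a new object instead of mutating, so a
// cheap reference comparison is enough to detect changes.
var prev = { items: ['a'] };
var next = { items: prev.items.concat('b') }; // fresh objects

console.log(prev === next); // false: something changed
console.log(prev.items);    // ['a'], the old value is untouched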
But immutability is a mental leap that not everyone is ready to take yet. So for now and the foreseeable future, React has to work without it, and therefore cannot implement shouldComponentUpdate by default.

Instead, we just released a performance tool. You play around with your application for a while, and every time a component is re-rendered but the diff doesn't output any DOM mutation, the tool remembers the time the render took. At the end, you get a nice table that tells you which components would benefit the most from shouldComponentUpdate!

This way, you can put it in a few key places and reap most of the perf wins.
So that's what React is doing: a diff algorithm, the virtual DOM, batching and pruning. I hope this shed some light on why these techniques exist and how they work.

React is used to build our desktop website, our mobile website and the Instagram website. It is so successful at Facebook that basically all the new front-end products are written using React. This is not a project that we only use in internal tools or small features; it powers the main page of Facebook, used by hundreds of millions of people every month!
Conclusion

I'd like to end by reflecting a bit on open source. We open sourced XHP in 2010, but we did a very bad job at it: we wrote a single blog post in four years, didn't go to conferences to explain it, didn't write documentation… And yet, inside of Facebook, we absolutely love it and use it everywhere.

When we open sourced React last year, it was much harder, because we had to explain at the same time both the benefits of XHP and all the crazy optimizations we had to do to make it work on the client.

We talk a lot about the benefits of open sourcing. This was a very good reminder that not open sourcing your core technologies can make it harder to open source other projects down the line.