WHAT'S NEW IN HTTP/2 BY DAWN PARZYCH
The front-end optimization (FEO) movement arose from the inefficiencies of HTTP/1.1. The availability of HTTP/2 may mean more work for companies that have already spent countless hours and hundreds of thousands of dollars implementing FEO strategies, and a decline in companies and offerings focused on FEO, as HTTP/2 will erode the benefits. There has already been a shift in the FEO space, with Google shutting down its PageSpeed Service in August. While the official reason is that they saw more interest in the offering through partners than through their own service, part of me thinks it has to do with the release of HTTP/2. The deprecation announcement came out on May 5, 2015, less than two weeks before HTTP/2 was published as RFC 7540. This post in our HTTP/2 series explores four key components of HTTP/2 - header compression, multiplexing and concurrency, priorities and dependencies, and server push - and what they mean for developers and companies that have previously implemented various FEO strategies.

HEADER COMPRESSION

HTTP is a stateless and highly repetitive protocol, which requires the conversation between a client and a server to contain many identical pieces of information. A typical HTTP/1.1 request header contains the following information:
GET / HTTP/1.1
Host: instartlogic.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Cookie:

During a typical browsing session, certain information here will not change, such as the User-Agent, the language preference, the ability to accept compressed content, and the types of content the browser can accept; yet this information is communicated on each and every request. Until SPDY and HTTP/2 there was no way to compress header content and reduce this unnecessary transmission of identical data. In HTTP/2, header compression is handled by the HPACK standard (RFC 7541). HPACK eliminates the redundancy and reduces the size of headers, helping to reduce overall page weight.
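The idea behind HPACK can be sketched with a toy model: both sides keep a table of header fields they have already exchanged, so a repeated field can be sent as a one-byte index instead of the full literal text. This is an illustration of the concept only - it is not the real HPACK wire format, and the sizes are rough estimates.

```python
# Toy illustration of HPACK-style header compression (NOT the real HPACK
# wire format): repeated header fields are replaced by a 1-byte index into
# a table that client and server build up together.

class ToyHeaderTable:
    def __init__(self):
        self.table = []  # shared indexing table, grows as fields are seen

    def encode(self, headers):
        """Return an encoded-size estimate: full bytes for new fields,
        one byte for fields already present in the table."""
        size = 0
        for field in headers:
            if field in self.table:
                size += 1                              # indexed: one byte
            else:
                size += len(field[0]) + len(field[1])  # literal name + value
                self.table.append(field)               # remember for next time
        return size

headers = [
    (":method", "GET"),
    (":path", "/"),
    ("user-agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4)"),
    ("accept-encoding", "gzip, deflate"),
    ("accept-language", "en-US,en;q=0.8"),
]

table = ToyHeaderTable()
first = table.encode(headers)    # first request: everything sent literally
second = table.encode(headers)   # repeat request: everything indexed
print(first, second)             # the repeat request shrinks to 5 bytes
```

Because a browsing session repeats the same User-Agent, Accept, and language headers on every request, almost all of the per-request header cost collapses to index references after the first request.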
Given the growth in the number of resources on a page, reducing the size of headers is much-needed functionality. Prior to SPDY there were no workarounds or options for reducing the size of headers, so this is a net new win in the performance optimization space. Earlier this year the team at HttpWatch ran tests comparing the performance of HTTPS, SPDY, and HTTP/2; for an object that returned no content (a 204 response code), overall header size was reduced by 66% with HTTP/2 header compression. For pages with hundreds of resources, those savings add up quickly.
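To see how quickly the savings add up, here is a back-of-the-envelope calculation using the 66% reduction HttpWatch measured. The 800-byte average request-header size is an assumption chosen for illustration, not a figure from their tests.

```python
# Rough savings estimate from HTTP/2 header compression.
avg_header_bytes = 800   # typical HTTP/1.1 request header size (assumed)
resources = 100          # a page with 100 resources
reduction = 0.66         # reduction measured by HttpWatch

saved = int(avg_header_bytes * resources * reduction)
print(f"{saved} bytes (~{saved / 1024:.0f} KB) saved on request headers alone")
```

On a page with hundreds of resources, tens of kilobytes of pure header overhead simply disappear before any content optimization is applied.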
MULTIPLEXING AND CONCURRENCY

One of the biggest inefficiencies of HTTP/1.1 was that only one request could be outstanding per TCP connection at a time. This led to multiple workarounds, from registry hacks that forced browsers to open more TCP connections to FEO techniques like domain sharding, concatenation, and image sprites. In the early days of HTTP, browsers opened only two connections per domain; when the maximum connection speed was 56K this seemed reasonable. As connection speeds increased along with the number of resources per page, this became a bottleneck, and domain sharding arose to work around it. Domain sharding is the process of splitting content across domains to “trick” the browser into opening additional connections to a web site. For example, if your domain is www.example.com, a browser would open two connections to download all content from that domain. With domain sharding you could create additional DNS entries and host content on www.example.com, images.example.com, and scripts.example.com; this way the browser went from opening two connections to six, making more effective use of available bandwidth. The downside is that the additional DNS lookups and TCP connections add time. As domain sharding became more popular, browsers quickly moved from allowing only 2 connections to 6 or more per domain, yet domain sharding persists, which can mean up to 18 connections to a web application. With HTTP/2, a single connection carries many concurrent requests and responses as interleaved streams, so these workarounds are no longer necessary.
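A minimal sketch of how domain sharding works in practice: assets are assigned to shard hostnames deterministically, so the same asset always maps to the same shard and stays cacheable. The hostnames reuse the example.com shards from the text; the hashing scheme is one common approach, not a standard.

```python
# Minimal sketch of domain sharding: map each asset path to a stable shard
# hostname so the browser opens connections to several "domains".
import hashlib

SHARDS = ["www.example.com", "images.example.com", "scripts.example.com"]

def shard_for(asset_path):
    # Hash the path so the mapping is stable across page views and servers
    digest = hashlib.md5(asset_path.encode()).digest()
    return SHARDS[digest[0] % len(SHARDS)]

for path in ["/img/logo.png", "/js/app.js", "/css/site.css"]:
    print(path, "->", shard_for(path))

# Under HTTP/2 all of these assets would simply be served from one hostname
# over a single multiplexed connection, with no extra DNS lookups.
```

The deterministic mapping matters: if an asset moved between shards on each page view, it would be re-downloaded under a new URL and defeat the browser cache.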
PRIORITIES AND DEPENDENCIES

HTTP/2 lets a client tell the server how important each resource is by assigning a weight to each stream and declaring dependencies between streams. Lazy loading is an FEO technique used to delay the loading of images that are at the bottom of the page or not in the viewport. Images are not loaded until a user scrolls to them or until a predefined time. The downside of this optimization is the additional JavaScript code necessary to implement it; each line of code adds weight to a page, which can impact overall performance. With HTTP/2, images below the fold can be given a lower priority and the additional lines of code for lazy loading can be removed, making your pages load even faster. Setting priorities and dependencies is not a requirement for upgrading an application to HTTP/2; they are optional, and they will require additional effort on the part of application developers to test and implement. If this is functionality you are looking to implement, check with your vendor, as some implementations may not support it in their initial HTTP/2 releases. As a result, I would not expect to see wide adoption of these features in the short term.

SERVER PUSH

Web site analytics can reveal trends: after landing on the home page users click through to the login page, or when viewing a photo album most users view the next image after the first. Knowing this, it is possible to send data that a user may want before they even ask for it. HTML5 defined the ability to prefetch content that the browser may need in the future. The catch is that the content is only downloaded when the browser is idle, and given the vagueness of the specification, browser implementations vary widely. Server push in HTTP/2 allows the server to proactively send resources it knows or predicts the client will need, without waiting for the client to request them, without waiting for the browser to be idle, and without adding additional page weight. As with priorities, I expect server push to arrive in many implementations at a future date; Nginx, for example, did not include it in its initial HTTP/2 release.
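The lazy-loading replacement described above can be modeled with a toy scheduler: each response stream gets a weight (RFC 7540 allows 1-256), and the server hands out bandwidth roughly in proportion to weight, so above-the-fold images finish first without any client-side JavaScript. The stream names, weights, and frame counts here are invented examples, and this is a simplified weighted round-robin, not a real server's scheduler.

```python
# Toy model of HTTP/2 stream prioritization: the server interleaves
# frame-sized chunks across streams in proportion to each stream's weight.

def schedule(streams, frames_per_round=8):
    """streams is a list of (name, weight, frames_remaining) tuples.
    Returns the order in which frames are emitted."""
    total = sum(w for _, w, _ in streams)
    streams = [[name, w, size] for name, w, size in streams]
    order = []
    while any(size > 0 for _, _, size in streams):
        for s in streams:
            # Each round, a stream may emit a share proportional to its weight
            share = max(1, round(frames_per_round * s[1] / total))
            while share > 0 and s[2] > 0:
                order.append(s[0])
                s[2] -= 1
                share -= 1
    return order

streams = [
    ("hero.jpg", 200, 4),    # above the fold: high weight, delivered first
    ("footer.jpg", 10, 4),   # below the fold: low weight, fills in later
]
order = schedule(streams)
print(order)  # all hero.jpg frames are emitted before footer.jpg finishes
```

With weights doing the sequencing on the server side, the page no longer needs scroll-listener JavaScript to defer below-the-fold images.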
WHERE DO I GO FROM HERE

HTTP/2 eliminates the need for many FEO techniques that have previously been used to speed up web pages. Below is a table highlighting which techniques will no longer be applicable and which should be removed, if implemented, due to potential negative consequences. Many web performance optimizations, such as image optimization, minification, caching, and the use of a CDN, are still relevant in an HTTP/2 world and will continue to help improve the performance of web applications. Take a look at what you have implemented and determine what changes need to be made to your applications, either by eliminating FEO techniques in use or by adding new optimizations.
For additional information on HTTP/2 vs. HTTP/1.1, check out Ilya Grigorik’s HTTP/2 anti-patterns presentation.