A (somewhat) brief history of the performance landscape

I’d like to enlist your help. As I’ve mentioned, last week I led a session on web performance automation for the members of the NY Web Performance Meetup group. For the session, I created a set of slides that outline my theory of how the front-end performance landscape has evolved over the past 15 years. Now I want to solicit your feedback and ask you to help me fill in the gaps.

Evolution: From delivery to transformation

Most companies know that if site speed is an issue for them, the problem isn’t infrastructure, and throwing more bandwidth and servers at the problem isn’t the solution. As I understand the current solution landscape, the web performance problem can be approached in two ways:

1. Delivery
Delivery-based solutions are focused on getting the data from the server to the browser more quickly. This is a $4.2 billion/year market, encompassing CDNs, network devices/accelerators, and others:

  • CDNs
    Pros: Make sites faster by shortening round trips; easy to deploy
    Cons: Expensive; don’t take advantage of acceleration opportunities such as reducing the number of round trips or optimizing pages for specific browsers
  • Network devices/accelerators (e.g. load balancers)
    Pros: Proven technology; easy to implement and deploy
    Cons: Don’t address performance problems that occur at the browser level; very hard to configure, which is why many sites that use them never enable even the basic features of compression and keep-alive (see the sketch after this list)
  • Other (TCP, DNS, etc.)
    Other delivery players exist, such as DNS and TCP optimization solutions, but they are at the fringes of this market, and I consider them features rather than unique market segments when it comes to performance.
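
To make the compression and keep-alive point concrete, here’s a minimal sketch (Python, standard library only; the hostname is a placeholder, not a real endpoint) that checks whether a given server actually responds with those two basics:

    # Hypothetical quick check: does this server serve gzip compression
    # and keep the connection alive? Both are cheap wins many sites miss.
    import http.client

    def check_basics(host, path="/"):
        conn = http.client.HTTPConnection(host, timeout=10)
        conn.request("GET", path, headers={"Accept-Encoding": "gzip",
                                           "Connection": "keep-alive"})
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection could be reused
        print(host,
              "| Content-Encoding:", resp.getheader("Content-Encoding", "(none)"),
              "| Connection:", resp.getheader("Connection", "(not sent)"))
        conn.close()

    check_basics("www.example.com")  # placeholder host

If the response comes back without gzip, the hardware in front of the server is leaving an easy win on the table.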

Here’s the diagram I’ve created to show the breakdown of delivery-based solutions and the major players in this space:

[Diagram: the breakdown of delivery-based solutions, including companies like F5, Citrix, Akamai, Limelight, Cotendo and CDNetworks]

2. Transformation
Transformation-based solutions analyze each page of a site from the browser’s perspective and rewrite it so that it is delivered to the browser as efficiently as possible. Thanks to teams at Yahoo and Google, there is an emerging set of best practices that serves as a guideline for this recoding.

Note that transformation is a complement to, not a replacement for, delivery-based solutions.
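
To give a flavor of what transformation looks like mechanically, here is a deliberately toy sketch (my own illustration, not any vendor’s method): it applies one of the simplest possible rewrites, collapsing whitespace between HTML tags so fewer bytes cross the wire. Real products layer many rewrites on top of this, such as script concatenation, minification, and image recompression:

    # Toy "transformation": shrink a page by collapsing whitespace
    # between tags. A simplification -- a real rewriter must special-case
    # <pre>, <textarea>, and whitespace-sensitive inline markup.
    import re

    def collapse_inter_tag_whitespace(html):
        return re.sub(r">\s+<", "><", html)

    page = "<html>\n  <body>\n    <p>Hello</p>\n  </body>\n</html>"
    print(collapse_inter_tag_whitespace(page))
    # -> <html><body><p>Hello</p></body></html>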

It is difficult to segment this emerging market, as very few players are actively involved in it. I have chosen to segment it by how transformation is delivered (via server, network, or cloud), as this seems to be the clearest dividing line between the various players.

  • Server: In this category I put all of the tools that sit on the server inside the datacenter: the pure-play server plug-ins as well as the virtual machines. I see a further distinction in this market between platform-specific products (i.e. those that work only on Apache or IIS) and solutions that work across all platforms.
  • Network: In this category I have placed all of the physical hardware devices that do transformation. You will see an eclectic mix of new and old, with 10+-year-old code bases from F5 and Cisco mixed in with modern transformation products.
  • Cloud: In this category I put all of the solutions you can subscribe to. This is a very small category. I really hesitated to include Akamai, as they do almost no transformation today, but they do parse HTML for the pre-fetching feature, which gets objects to the edge faster. (I also didn’t want to have a category of one.)

This is a first stab, and I’m not convinced I have it right. However, I am excited to put something down on virtual paper, so that in three years I can look back, see how far our industry has evolved, and realize how naive I was.

[Diagram: server-, network- and cloud-based solution providers, including Strangeloop, Aptimize, Acceloweb, and Webo]

Web performance timeline: Any trends here?

After organizing the solution providers in both the delivery and transformation camps, I thought it would be interesting to put the key players in front-end performance on a timeline and see if any patterns emerged:

[Timeline: includes Gomez, Akamai, Strangeloop, SPDY, and Velocity]

As you can see, in addition to solution providers, this timeline also shows when new browsers appeared on the market, as well as the appearance of widely embraced performance tools and reference materials. It is a brain dump, but I tried to capture the key elements that come to mind when I think about front-end performance.

This historical bird’s eye view corroborates my delivery-to-transformation theory of performance evolution:

  • The early web was all about the basics: seeing content (i.e. browsers) and getting content to modems (gzip and other server-side tricks).
  • The exuberance of the late ’90s was made possible by huge investments in basic infrastructure and foundational datacenter technology. In our world, the key developments were the first load balancers (F5/Netscaler), the introduction of Akamai, and the development of measurement tools such as Gomez and Keynote, which set the standard for web performance measurement.
  • The late ’90s was a hotbed for innovation and produced the first interesting cloud play for dynamic content (Netli) and the first real transformation play (Pivia, which was subsequently bought by Swan Labs and then swallowed by F5; this 10-year-old technology is now branded as the F5 Web Accelerator).
  • 2000-2006 was a tough time for the front-end performance market. We did see some incredible innovation in related markets, such as branch-office acceleration (i.e. technology that speeds up Outlook and Office between branch offices). The one key innovator in my eyes was Fineground, which blazed a trail in transformation but sold to Cisco and was subsequently killed.
  • With the recovery of the web economy came greater investment in new tools and research. In 2006, I co-founded Strangeloop and we filed our first patent on the technology that formed the basis for the set of solutions now known as Site Optimizer.
  • Shortly afterward, O’Reilly published Steve Souders’ book High Performance Web Sites. On its heels came a number of developer resources and diagnostic tools, such as WebPagetest and Browserscope, as well as the Velocity conference, which quickly became an unofficial hub of the performance community.
  • In more recent times, our industry has matured, with more entrants into the transformation space and legitimization of the core premise through seminal moments like Google’s inclusion of page speed as a ranking factor in its search algorithm.

Your thoughts?

This is just my wide-angle take on the front-end web performance landscape. I’m very interested to hear yours. Is my classification scheme accurate? Have I left out any major developments or solution providers? Are there any gaps that need to be filled? Trends I’ve missed?

And what about the future of solution delivery? Given the trajectory we’re on, where do you see our industry going in the next few years?
