In a lot of projects I've worked on, JS & CSS inclusions were a mess: too many files, no coherence, poor cache usage, etc. In this article, I'll present some concepts and solutions to solve these problems and optimize the loading time of your pages.

These concepts and solutions are valid for both JS and CSS. So I'll use the words client-side resources for files and client-side scripts for pieces of code, and it will always be about JS and CSS. I won't talk about compression or minification of resources: we know they should be compressed, regardless of which CMS or tools we are using. I will talk about unification options and choices that impact performance.

Assume we develop a site with ten different pages. Some client-side scripts are common (used by more than one page). In contrast, there are also some heavy scripts that are specific to a single page. Finally, some pages don't need any scripts at all.

Now the questions are:

  • How many client-side resources (how many files) will we have;
  • Which resource will contain which scripts;
  • And from where they will be loaded.

The unique file approach

We unify all the client-side scripts into one single resource, a “complete-website.js” or “styles.css”. This reduces the number of requests. This resource is put into caches after a visit to any page of our site, so every visit to other pages benefits from the cache.
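Concretely, every page layout then includes the same two resources. A minimal sketch, in the Twig style used later in this article, with the file names chosen above:

<link rel="stylesheet" href="{{ asset('styles.css') }}">
<script src="{{ asset('complete-website.js') }}"></script>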

But even a simple call to one page will load a heavy file of which maybe only 5% is used. The cache may be empty when we visit a page, and we'll load awesome UI stuff such as carousels, swiping and parallax for nothing. Do we really want that?

Now if we modify a small script used by one page, we have to rebuild our big resource and invalidate every cache that stores it. Modifications to small scripts happen often, meaning frequent cache invalidations.
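A common way to trigger that invalidation is to version the resource URL, so each rebuild produces a new URI. A hypothetical sketch, assuming a build_version variable exposed to our templates:

<script src="{{ asset('complete-website.js') }}?v={{ build_version }}"></script>

Every release bumps build_version, and every client re-downloads the whole bundle, even if only one small script changed.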

Many feature-related files approach

So what if each page has its own resource? We'll have small files, and we'll load only the code we need. A client-side resource contains the code for the features a page provides.

But chances are we'll want to reuse our code on other pages, and I'd rather work on IE than duplicate code. So let's say instead: “a client-side resource contains the code of a feature”! We'll have small files, and we'll load only the code we need. It's more modular and easier to maintain.

If we have to modify one of these resources, we just do it and invalidate caches for one specific URI. That means good cache usage.

But if we visit, for example, a homepage with many client-side features, we'll have to load many files. Only the code we need, yes, but with many requests.
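For instance, a homepage under this approach could look like the following (the feature file names are hypothetical):

<script src="{{ asset('features/carousel.js') }}"></script>
<script src="{{ asset('features/swipe.js') }}"></script>
<script src="{{ asset('features/parallax.js') }}"></script>
<script src="{{ asset('features/autocomplete.js') }}"></script>

One small file per feature, but four requests where the unique file approach needed only one.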

So what's the best approach?

Round-trip time (RTT) is known to be more of a bottleneck than bandwidth limitations or heavy files. So the real problems are a large number of files and poor cache usage.

As Andrew B. King says in his book, the first tip is to limit the number of requests.

And as Ilya Grigorik demonstrates in this post, we first have to:

  • Reduce RTT latency by using caches (especially geographically distributed caches such as Akamai's);
  • Reduce the number of RTTs simply by reducing the number of requests.

So for good cache usage in a project that is modified frequently, we'd choose the “many feature-related files” approach. And for fewer requests, we'd choose the “unique file” approach.

Find the golden mean

If the site is not huge and the client-side scripts are not frequently updated, don't overthink it: choose the unique file approach. It will be cached, and we'll have only one request to load all the scripts.

But on a big site, we need some modularity. At least, we need to segment our client-side scripts into three categories:

  • A common client-side resource used by all pages; let's name it the “internal-lib”;
  • A few resources for very specific code, used by only a few pages: the “specific-libs”;
  • And the external libraries or frameworks we use (but never modify): the “external-libs”.

As we never modify the “external-libs”, their cache usage is good. And if we are using a CDN and have a bit of luck, the “external-libs” may already be cached before the client even visits our site. The “internal-lib” is unified, used everywhere, and stays cached as long as we don't modify it too frequently. And finally, the “specific-libs” are loaded only when needed. As far as I know, this is a very good compromise.
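For example, we can load an “external-lib” from a public CDN instead of our own server (the exact URL is illustrative); any other site using the same CDN URL may have already primed the client's cache for us:

<script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
<script src="{{ asset('internal-lib.js') }}"></script>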

Concretely

In a classic architecture, such as Twig & Symfony2, we include all our resources at the “layout” page level, precisely to manage coherence (our golden mean) in one single place. And of course, we do these inclusions in the head of the DOM for CSS and at the end of the body for our JS.

The problem is that, at this level, we may not know which specific code we have to include. For the “specific-libs”, we'd have to determine which templates will be rendered before we even reach those templates:

<script src="{{ asset('external-libs/jquery.js') }}"></script>
<script src="{{ asset('external-libs/select2.js') }}"></script>
<script src="{{ asset('internal-lib.js') }}"></script>
{% if isSpecificPageX %}
    <script src="{{ asset('external-libs/parallax.js') }}"></script>
    <script src="{{ asset('internal-libs/x-init.js') }}"></script>
{% endif %}

That's a bit tricky. I would say that every very specific page that needs specific scripts should use its own “layout” and redeclare the inclusions entirely. In Twig, we simply use blocks and template inheritance.

In your default Twig layout:

{% block javascript %}
    <script src="{{ asset('external-libs/jquery.js') }}"></script>
    <script src="{{ asset('external-libs/select2.js') }}"></script>
    <script src="{{ asset('internal-lib.js') }}"></script>
{% endblock %}

In your specific page:

{% extends "::default.html.twig" %}

{% block javascript %}
    {{ parent() }}
    <script src="{{ asset('external-libs/parallax.js') }}"></script>
    <script src="{{ asset('internal-libs/x-init.js') }}"></script>
{% endblock %}

After all

How do we ensure that resources are included in the right order (respecting the dependency chain)? Should it rely only on “layout” page coding and the developer's good will?

If we don't pay attention while the project grows and evolves, our “internal-lib” will bloat into a mess, our “specific-libs” will turn into de facto transversal libraries, and our “external-libs” will be loaded in the wrong order. I've even seen “external-libs” included multiple times, with different versions, on the same page!

These kinds of problems may require a large refactoring and be very difficult to resolve after a year of development. So when you add a page that needs some client-side scripts, ask yourself:

  • Is that feature already provided by the “internal-lib”?
  • If not, should I add it there?
  • Do I really need a new “external-lib”?
  • Can I replace an old “external-lib” with a new one that meets both the old and new needs?

Then, always include the “external-libs” first, then the “internal-lib”, and finally the “specific-libs”. If you have dependency problems with this order, it probably means that some code in your “specific-libs” should be moved into your “internal-lib”.
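In other words, a well-ordered inclusion for the specific page from our earlier example always looks like this:

<script src="{{ asset('external-libs/jquery.js') }}"></script>
<script src="{{ asset('external-libs/select2.js') }}"></script>
<script src="{{ asset('external-libs/parallax.js') }}"></script>
<script src="{{ asset('internal-lib.js') }}"></script>
<script src="{{ asset('specific-libs/x-init.js') }}"></script>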

Conclusion

Applying these practices won't guarantee a highly optimized, high-performance website, but it will certainly contribute to it.

  • It will allow you to control the browser cache efficiently, by setting cache instructions on each client-side resource.
  • It will reduce the weight of your pages and the number of RTTs, two very important factors in page performance.

I've just heard about the new HTTP/2 protocol, which tends to solve these performance problems itself, at least regarding RTTs. As far as I know, if you design your website for HTTP/2, you should instead reduce the amount of data transferred and keep a high granularity of resources.

In the next article, we'll adapt these concepts and solutions to a content-centric application and more specifically to Adobe Experience Manager (AEM).