Other Cool

Notes from The Startup Playbook

Know Yourself
Very few entrepreneurs go into a field where they don’t have an unfair advantage: deep knowledge of or passion for something. Breakout success is really difficult.

Focus on Biggest Idea
Great entrepreneurs have a great sense of a broad range of trends; this knowledge enables them to see big opportunities with the correct timing. When they see one big opportunity, they focus on it. Choosing among many options isn’t natural, and in fact it is often the hardest part, as there is an opportunity cost associated with every action. Keeping options open dilutes focus, prevents you from failing fast, and delays the inevitable.

Build Painkillers, Not Vitamins
If your service is akin to a vitamin, reevaluate your value proposition. Painkillers are businesses customers can’t do without: a necessity. As a market matures, side services like vitamins often have nothing left to give.

Be Ten Times Better
People won’t leave the competition if your product is merely equal or slightly better. There is a learning cost, and all sorts of other costs, associated with change; the change has to be worth the pain. This is the simplest and yet hardest part: you have to be ten times better in one important thing, and the whole company should be geared toward it. Only from this winning position can you leverage your way into other opportunities in the future.

Have Monopolistic Mindset
Mediocre goals result in mediocre plans and executions. Monopolistic thinking keeps the focus on the bigger picture and bigger goals.



Take Risks: life is short, and a life lived without adventure is a boring life.

Everyone is influenced by comparison. Follow your own path; your requirements might be different from others’.

Become strong as a person; focus on being productive
How confidently do you value your focus, effort, and time? If highly, then stop everything that wastes them. The physical and intellectual time recovered, if repurposed into useful things, will greatly improve you as a person. It could be as simple as outsourcing chores. Basically, you are only as valuable as the amount of value you can create, so increase the amount of value you can create.

Contempt kills companies; a culture that doesn’t celebrate risk suffers death.
Create a 7:1 rule, meaning succeed 7 times per 1 failure. This is unrealistic, but that’s the point: create a culture where failure is normal and born out of trying hard. It’s thus important to get rid of people, on the team and among investors, who don’t accept this view of failure, as they will lead the company to death through mediocrity.

It’s only when you stare death in the face that you mature as a startup, so solve a problem you are passionate about and that is big enough; otherwise it’s logical to quit in the face of the odds startups face.

Chris Anderson, head of TED

Passion is a proxy for potential: the passion of others can reveal untapped opportunities.
You can’t win until other people do work for you; ideally, people should love you and your product. Network effects can be great for this: the more people you get, the more you will get.
Spread via a niche while maintaining broad appeal: win over a small group of people, then they tell others.

Engage in the whole process and keep rearranging the parts until it “snaps”.
Stay in touch with audiences to avoid being surprised by shifts in technology and the like.
Failure drives determination: being prepared for failure shows your determination.
Don’t Make Big Decisions when Weak.
Define your brand in 8 words or less. (Business 2.0)
Don’t be afraid to give things away; if something can be shared online, it should be.
Build relationships through transparency.
Vision is a team thing.




Rapid Prototyping

Throw away prototypes

Prototype Fast

Start from High Level

I am nearly finished with the Udacity course on Rapid Prototyping. It was just as great, casual, and useful as I have come to expect from other Udacity courses.

The whole purpose of prototyping is to fail fast. It sucks when your site is just in beta and you get an email or some other feedback saying, “it’d be so great if I could just do this,” and you realize it’s a very logical complementary feature to have. But building it into the app now requires changing code on at least 3 levels (db, backend, css/js), reevaluating the design aesthetics, and reconsidering the functionality impact of this new feature as a whole. So even though the feature is very useful, should be a no-brainer to have in the app, and isn’t technically hard to implement, it is likely to be put off onto the todo list for version x.

A bug discovered in development/production costs many times more to fix than one caught in the prototype stage.

Another reason for rapid prototyping is that you don’t need to spend a long time shooting in the dark, trying to guess what the user would want or whether an idea is good, when you can just ask people. You will get richer insight, and get it fast, without exhausting your mental stamina. In rapid prototyping, quantity of iterations, however dumb, is favored over quality. It’s simply easier to see what’s working when you can actually see it than to sit and try to visualize a few scenarios.

Stages of Prototype


Low Fidelity -> Medium Fidelity -> High Fidelity -> Beta -> V1.

In a sense, the goal behind prototyping is testing the user experience, and it is a long, continuous process. A/B testing is one good example of it used in apps in production. However, in the early stages of a prototype, the user feedback you look for is geared toward critical aspects of app functionality and hardly at all toward font or color; in later stages, feedback is fine-tuned to things like animation, fonts, and so on.

Low Fidelity: test critical points of interaction, layout, and message; it’s less about aesthetics. Use wireframes if needed instead of full Photoshop mockups.

Medium Fidelity: test user flow, interaction between pages and UI elements, and navigation. The design should help navigation, but it doesn’t need to be eye candy or balanced. Interactive mockups are required.

Circles of Feedback:

There are about four circles of people you can get feedback from; each is good for a certain type of feedback.

  1. Friends & coworkers
  2. Experts, UX designers, Professor, etc
  3. Customers (who are paying)
  4. Customers’ customers (who use the product)

Not all people are good sources of advice, though some are more likely to give good advice than others. People who are too close, like your dad, are likely to say that anything you made looks awesome. Similarly, people who have worked on the idea/app are experts in it: they will not notice unintuitive parts, and they won’t miss features they have grown comfortable living without.

This extends to people who are too technical. In the low-fidelity stage, you want feedback on the basic structure of the app and whether its elements make sense. Technical people are more likely to jump into specific details such as design, which is not what you are testing right now, even if the advice is right: you haven’t decided on the design yet, other factors may influence it, those factors may not even have come up yet, and the user is certainly not aware of them.

Remember not to let suggestions blindly lead the design. It’s important to get answers to the questions you are testing for.

Interacting with the User

A video recording of the user is better than audio; audio is better than notes; notes are better than just conversation.

A video of a user getting excited while using your app is great for investors and presentation purposes as well.

You want users to think aloud. One way to get them started is to have them practice by telling you how many windows there are in their house.

Then ask them how they came up with that number: can they walk you through their thought process? Now do the same thing with the app testing.

It’s very important to let the user know that you are not testing them: they can’t make a wrong move, and being confused is completely OK. What you are testing is the app. If it confuses them, that’s great, since which parts are confusing is exactly what you wanted to find out.

Guide the user by asking them to do a certain task from a certain screen.

Help the user out if they get stuck on some part, and move them on to testing other parts; those need testing as well.

Ask questions (qualitative for low fidelity):

  • about completing a task
  • the purpose of UI elements
  • suggestions on critical things they were confused by
  • their thoughts on other ideas you have for UI elements
  • other interactions with the app
  • can the user easily learn this app?


Once we have feedback on the low-fidelity prototype, we can move on to the next stage if the answer to both questions is yes:

  • Did we get all questions answered?
  • Did the feedback say we are moving in the right direction?

Many apps go through 5 or more iterations.

Feedback from 5 users covers about 75% of problems.




Interesting Parts of Javascript.

JavaScript is a pretty logical language once you get to know all its parts. Time is the best teacher; some problems you have to experience to comprehend. Still, here are a few unintuitive things you may run into as a new developer, or even an existing one.

Objects are passed by Reference

In lots of languages, objects are passed by “reference” while primitive values like strings and integers are passed by value, and JavaScript behaves the same way. Being passed by value means a copy of the value is made, and modifying it won’t affect the original.

Demo: http://jsfiddle.net/ob6yLmof/

var o = { a: 1 };

function addOne(x) {
  x.a += 1; // modifies the same object the caller holds
}

console.log(o.a); // 1
addOne(o);
console.log(o.a); // 2 - the object was changed through the reference

var n = 1;

function increaseByOne(x) {
  x += 1; // modifies only the local copy
}

console.log(n); // 1
increaseByOne(n);
console.log(n); // 1 - still unchanged

Properties inherited can’t be directly changed

When a property is accessed on an object and it doesn’t exist on it, a lookup happens along the prototype chain, and if the property exists there, its value is returned. However, when assigning a property that doesn’t exist on the object itself, the prototype chain doesn’t matter: the property is created on the object itself, and the prototype is never touched.

var p = { x: 1 };

var a = {};
var b = {};

a.__proto__ = p;
b.__proto__ = p;


a.x = 5; // creates an own property x on a; p.x is untouched
console.log(a.x); //5

//but it didn't really change the inherited property

console.log(b.x); //1


Functions and variables are hoisted

In JavaScript, functions and variables are hoisted. Hoisting is JavaScript’s behavior of moving declarations to the top of a scope (the global scope or the current function scope).

play(); // works because the declaration of play is hoisted above this call
function play() {
   // code code code
}
However, only function declarations are hoisted, not function expressions (including anonymous functions assigned to variables).

play(); // raises a TypeError: play is hoisted but still undefined here
var play = function() {};

Hoisting of variables can create unexpected errors or lead to misunderstandings.

Because variable declarations (and declarations in general) are processed before any code is executed, declaring a variable anywhere in the code is equivalent to declaring it at the top. This also means that a variable can appear to be used before it’s declared. This behavior is called “hoisting”, as it appears that the variable declaration is moved to the top of the function or global code.

var myvar = 'Ike';

function logIt() {
  alert(myvar); // shows undefined, not 'Ike'
  var myvar = 'thunder';
}

logIt();


This will result in the alert showing undefined, and looking at the hoisted form of the code shows why. It is essentially converted to this:

var myvar = 'Ike';

function logIt() {
  var myvar;     // declaration hoisted to the top of the function
  alert(myvar);  // undefined
  myvar = 'thunder';
}

Only Function Definitions create scope

Demo: http://jsfiddle.net/gu2n6Lqt/

Nothing besides a function creates a new scope; in particular, curly brackets don’t (note: ES2015’s let and const are block scoped). This behavior is the opposite of some other languages, and combined with hoisting this misunderstanding can cause really unexpected bugs.

var n = 3;
console.log(i); // undefined, no error: i is hoisted out of the loop body
//console.log(a); // ReferenceError: a was never declared

while (n--) {
  var i = 3234023; // var ignores the block; i belongs to the enclosing scope
}

setInterval (setTimeout) doesn’t follow time precisely

function foo() {
  // something that blocks for 1 second
}
setInterval(foo, 100); // callbacks pile up; runs won't be 100ms apart

There are two things one needs to be aware of about timers in JavaScript.

A) They are not precise.

B) They don’t wait for the previous callback to finish.

JS is single threaded, and timers simulate multithreading using the event loop. The event loop works through queued function calls and then checks the timers; if a timer is due, its callback is called. If, however, the code before it took more time to execute, the callback simply runs late.

In short, setInterval/setTimeout can take more than a second even if you have set it to 1000ms. This has implications for gaming, web audio, and chat applications.

But the real problem is that if the interval duration is short enough, it can cause back-to-back calls, since the delay of one timer callback won’t postpone the next scheduled interval.
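One common way around the back-to-back problem is a recursive setTimeout instead of setInterval: the next run is scheduled only after the current one finishes, so slow callbacks can never overlap. A minimal sketch (the function and parameter names are illustrative):

```javascript
// Schedule the next run only after the current one completes.
function startPolling(work, delayMs, times, done) {
  var runs = 0;
  function tick() {
    work(); // even if this is slow, the next timer isn't pending yet
    runs += 1;
    if (runs < times) {
      setTimeout(tick, delayMs); // re-arm only now
    } else if (done) {
      done(runs);
    }
  }
  setTimeout(tick, delayMs);
}
```

Unlike setInterval, the gap between runs here is at least delayMs plus however long work() takes, trading precision for safety.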

Strings are immutable

Unlike in Ruby, strings in JavaScript can’t be modified. The only way to change something in a string is to create a new string.

This is not a big problem, but it is something to be aware of, since no errors are thrown.

var myString = "aaaaaaa";
myString[3] = 'c';
console.log(myString); // aaaaaaa

var str = 'hi i am umer';
var newStr = str.replace('umer','abk');
console.log(str); //hi i am umer
console.log(newStr); //hi i am abk

Semicolons aren’t really optional

Before executing code, the JS engine tries to auto-insert semicolons where it sees fit, which results in code such as this...

function a() {
  return
  {
    a: "hello"
  };
}

resulting in this

function a() {
  return; // <--- semicolon auto-inserted; the function returns undefined
  {
    a: "hello"
  };
}
...which completely changes the behavior. Omitting semicolons causes no problems 80% of the time, but because of the other 20% it’s easier to add semicolons everywhere than to try to memorize the rules the parser follows.

Always specify radix parameter in parseInt

parseInt’s 2nd argument specifies the radix for the number conversion. If no radix is specified, the result can be unexpected.

For example, in older JavaScript engines, if the string begins with a 0, the string is interpreted as an octal number:

parseInt("032") //returns 26
parseInt("032", 10) //returns 32

The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right).

‘this’ in JS.

There are only 4 ways to set this in JS.

  1. obj.func()
  2. new
  3. apply/call
  4. bind

using obj.func():

Inside the function, this refers to whatever is on the left of the dot at the call site.

function add1() {
  this.sum += 1; // this = whatever is left of the dot at the call site
}

var a = { sum: 0 };
var b = { sum: 0 };

a.add = add1;
b.add = add1;

a.add();
console.log(a.sum); // 1
console.log(b.sum); // 0


Using the new keyword, it goes something like this:

First the constructor function declaration.

function Person() {
   this.name = 'Umer';
}
then we do this:

var a = new Person();

This is what happens here:

1. First It creates a new empty object, {}.

2. Then it sets this new object’s internal __proto__ property to the constructor function’s Person.prototype object (every function object automatically has a prototype property).
{}.__proto__ = Person.prototype;

3. It calls the Person function (a constructor function) with this = the new object.


4. It returns the newly created object, unless the constructor function returns a non-primitive value; in that case, the non-primitive value is returned instead.
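The four steps above can be sketched as a plain function. This is a simplified model for illustration (it ignores edge cases such as constructors returning functions), and the name myNew is made up:

```javascript
// A simplified model of what `new Ctor(...)` does.
function myNew(Ctor) {
  var args = Array.prototype.slice.call(arguments, 1);
  var obj = Object.create(Ctor.prototype);   // steps 1 & 2
  var result = Ctor.apply(obj, args);        // step 3: this = obj
  // step 4: keep the constructor's return value only if it's an object
  return (result !== null && typeof result === 'object') ? result : obj;
}

function Person() {
  this.name = 'Umer';
}

var a = myNew(Person);
console.log(a.name);              // 'Umer'
console.log(a instanceof Person); // true
```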

apply vs call

These let you define the value of this when calling a function.

For Example

var o = {
    sum: 1,
    showSum: function() { console.log(this.sum); }
};

o.showSum(); // shows 1

// but what if you want to use this function with `this` set to
// something else, like another object?

var anotherObj = { sum: 9999999 };
o.showSum.call(anotherObj); // 9999999

apply and call do the same thing, but apply takes the parameters as an array, whereas call takes each parameter to be passed to the function as a separate argument.

theFunction.apply(valueForThis, arrayOfArgs)

theFunction.call(valueForThis, arg1, arg2, ...)

using the bind method:

bind, unlike call/apply, doesn’t call the function right away; instead it returns a copy of the function with its this already set.

For example:

var obj = {
    name: 'umer',
    sayIt: function() {
        console.log(this.name);
    }
};

var o2 = {
    name: 'test',
    sayIt: obj.sayIt.bind(obj) // this stays obj even when called via o2
};

o2.sayIt(); // 'umer', not 'test'
It helps a lot in Node.js and with browser events: you can pass a function around without worrying too much about the value of this.


Blackbox source files to make debugging easier.

Like everyone, you get bugs in your application code. You start debugging, but when you step through your code line by line, the debugger sometimes jumps into a source file that’s not your code, like jQuery. I’m sure you’ve experienced the annoyance of stepping through library code before getting back to your own application code.

A feature called blackboxing, available in Chrome and Firefox, makes it much easier to debug files and increases productivity. Blackboxing is what it sounds like: it abstracts away the inner workings of code you shouldn’t need to spend time on. It gives you a way to tell the debugger that you don’t want to step through certain code, which means the debugger is not going to stop at every function call in that JavaScript file while debugging.

This simplifies the process greatly. It’s something I think every JS developer should know.

How to Blackbox

There are essentially two ways as of 10/18/2015.

Settings panel

Open the DevTools Settings and, under Sources, click Manage framework blackboxing.
Here you can set defaults for common JavaScript libraries and frameworks. This is the ideal way, in my opinion.

Context menus

However, the easiest way is to find the source file for a library and mark it as blackboxed.

You can use the context menu when working in the Sources panel: right-click in the editor when viewing a file, or right-click on a file in the file navigator, and choose Blackbox Script.




Patience on Web: How to Make a Website Faster!


Patience is a virtue!

Unless it costs you billions. It’s been well documented and understood that long load times, a sluggish UI, and an unresponsive app are the best way to lose users/buyers/customers. Some years ago Amazon calculated that a page load slowdown of just one second could cost it $1.6 billion in sales per year. In general, if an eCommerce site is making $100,000 per day, a 1 second delay can cause a loss of $20,000. Users’ expectations are only going to go up; fast load times, smooth UX, and intelligent interactions aren’t an afterthought for serious businesses whose bottom line relies on technology. In fact, reliability and performance can be distinguishing features for startups, not to mention that search engines now take a site’s loading time into account in search ranking. Overall, a good experience improves user satisfaction.

“Every millisecond matters.”
Arvind Jain, a Google engineer

Improving Performance and Load Times

Measure first, optimize second; start with the biggest solvable bottleneck.

Outline :

  • Images
  • Svg
  • Spritesheet
  • Minification (css,js,html)
  • Http Compression (gzip)
  • Caching On Client
  • Caching On Server
  • Cdn
  • Ajax
  • New Protocols: Http2, Websocket
  • Blocking stylesheets (link media print)
  • Blocking javascript, Async, Defer
  • Streaming response on Server
  • Less specific CSS rules are faster
  • Redirects
  • Server Side Rendering of View
  • Service Worker & Offline
  • Streaming Api

First, the easiest things you can do are:

Install Pagespeed module: https://developers.google.com/speed/pagespeed/module/

Run your site through Pagespeed Insights: https://developers.google.com/speed/pagespeed/insights/


Use Correct Format

Use jpg over png where possible, mainly when transparency isn’t required. And use svg over all other formats when possible, which is usually only feasible for simpler graphics.

Compress Images automatically and manually

Use build tools to minify images, like gulp-imagemin/grunt-contrib-imagemin together with gulp-imageoptim. For large images, the designer should compress them manually using a better compression algorithm while keeping an eye on quality.

There are also a couple of standalone tools, paid and free (CLI, online, GUI, and Photoshop plugins), all of which can help further and perhaps better.

more information: http://addyosmani.com/blog/image-optimization-tools/
more information: https://youtu.be/pNKnhBIVj4w?t=170
more information: https://www.udacity.com/course/viewer#!/c-ud892/l-5332430837/m-5325220785
more information: http://jamiemason.github.io/ImageOptim-CLI/comparison/jpeg/jpegmini-and-imageoptim/desc/

Serve Images Responsively and Generate Different sizes of images for different resolutions and screen widths

Use grunt-responsive-images (generates images of varying sizes) with imager.js (lazy loads the appropriate image: the minimum size and resolution needed).
Check out the img tag’s srcset attribute and the picture element: http://alistapart.com/article/responsive-images-in-practice
Though browser support is still patchy and it is complex, it lets you define various image formats and different sources for different sizes and resolutions.
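A minimal sketch of both approaches (file names and breakpoints are illustrative):

```html
<!-- srcset: the browser picks the smallest adequate file -->
<img src="photo-400.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Photo">

<!-- picture: different sources for different formats -->
<picture>
  <source type="image/webp" srcset="photo.webp">
  <img src="photo.jpg" alt="Photo">
</picture>
```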


Detect and Use Webp

On the server: it’s becoming more common for web clients to send an “Accept” request header, indicating which content formats they are willing to accept in response. If a browser indicates in advance that it will “accept” the image/webp format, the web server knows it can safely send WebP images, greatly simplifying content negotiation.
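The server-side check boils down to inspecting the Accept header. A tiny sketch (the helper name is made up; a real server would also vary caching on the Accept header):

```javascript
// Decide whether to serve .webp based on the Accept request header.
function prefersWebp(acceptHeader) {
  return typeof acceptHeader === 'string' &&
         acceptHeader.indexOf('image/webp') !== -1;
}

console.log(prefersWebp('image/webp,image/apng,*/*')); // true
console.log(prefersWebp('image/png,*/*'));             // false
```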


On the client: use Modernizr to load the correct image format. http://www.stucox.com/blog/using-webp-with-modernizr/


Serve a low resolution image, then load the high resolution one in the background.
If speed trumps quality in your case, load a lower quality image by default, then load a second, higher quality image using JavaScript and swap the src of the low quality image for the high quality one. The only downside is that more bandwidth gets used, so this is only recommended for desktop users.

Enable client side caching
Set max-age to a high number on static resources using server side configuration, and whenever a file changes, update its name. The easiest way to do this is gulp-rev: it attaches a file hash to the filename, so if a file changes, its name also changes, which invalidates the cache.
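With nginx, for example, the configuration might look something like this (the /static/ path is an assumption; adjust to your layout):

```nginx
# long-lived cache for fingerprinted static assets (path is illustrative)
location /static/ {
    expires 1y;
    add_header Cache-Control "public";
}
```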

Use progressive jpg
Save images as progressive jpg, which is not the default in Photoshop and other programs. Progressive jpgs take slightly longer to load overall, but the whole image becomes visible sooner: instead of being loaded pixel row by pixel row, it’s loaded layer by layer.

Inline small images
Small images can be converted to data URIs and inlined right into the css, sparing an extra http request.




Svg

Use build tools to minify svg files, like gulp-svgmin or grunt-contrib-svgmin.

Inline svg files (note: inlining disables caching, so test and compare the benefits).

Manually optimize: sometimes it’s possible to reduce the number of points in an svg without affecting quality, which can be done with tools like https://github.com/svg/svgo-gui




Spritesheet

A spritesheet is basically all the pictures of a site combined into one big image; the pieces are then referenced using the background-image property in css and clipped. This reduces 13 requests for 13 assets to 1 request for 13 assets.

There are many tools that assist in the process.

Minification (css,js,html)

Inline critical css and js. Minify and concatenate the rest of the js and css into one big file each.

Minification simply removes all whitespace and shortens variable names. Check out https://github.com/gmarty/grunt-closure-compiler

For css, minifiers can remove unused css rules, remove whitespace, and merge duplicate rules. Check out https://github.com/ben-eb/gulp-uncss


Http Compression (gzip)

Whether you are using nginx or apache, both make it easy to enable gzip for all resources, and it helps reduce file size. Network speed is slower than the computing speed of today’s devices, so the trade-off is usually worth it, though there is a breaking point.
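In nginx, for instance, it’s a few directives (the values here are common starting points, not recommendations):

```nginx
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;   # skip tiny files, where gzip can backfire
gzip_comp_level 5;
```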

However, measure and test: gzip can sometimes result in larger sizes for some files, especially small ones, due to how gzip works.

Pre-compress files: nginx understandably uses the lowest compression ratio for per-request gzipping. Check out https://www.npmjs.com/package/gulp-zopfli

The Cloudflare cdn automatically gzips the files it caches.

Caching On Client

Enable caching of resources by setting an expiration date on all of them. Good practice is to set a long expiration date and then change the filename whenever an update is made. Setting an expiration date on files tells the browser how long they are expected not to change; the browser saves them locally, and next time, instead of hitting the server, it just serves them from the cache.

However, page load performance shouldn’t be based completely on client side caching. The page needs to be fast as is: caches often aren’t reliable and are easily invalidated, and are therefore a bad footing to place a performance strategy on.


Caching On Server

On the server side, implement a thorough caching strategy. One of the best ways to do this is to use memcached.

Cache static pages, or pages that change slowly. Caching is faster than hitting the database for two reasons: the cache is in memory, whereas the database stores data on a hard drive; and the cache often stores the end result, whereas the database often has to ‘calculate’ the final data.
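The usual shape of this is cache-aside: check the cache, and on a miss compute the value and store it with a time-to-live. A toy sketch with a plain Map standing in for memcached (names are illustrative):

```javascript
// Cache-aside: serve from cache if fresh, otherwise compute and store.
var cache = new Map();

function getCached(key, ttlMs, compute) {
  var hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value;                 // cache hit: skip the "database"
  }
  var value = compute();              // e.g. an expensive db query
  cache.set(key, { value: value, expires: Date.now() + ttlMs });
  return value;
}
```

With memcached the pattern is the same, with get/set calls over the network instead of a Map.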

See the Udacity course on web development.

Domain Sharding

Browsers limit how many simultaneous connections they can have to one host. Allocate subdomains to certain resources and requests won’t be backlogged waiting for some other resource to free up a connection.

Another benefit is that cookies are sent along with every http request, and there is no point in sending cookie data for static assets. So the domain static.example.com can be cookie free, whereas www.example.com can have cookies. If, however, cookies have been set for example.com, they will be sent with static.example.com as well; in that scenario it’s best to just buy another domain and allocate it to cookie-free resources.

However, there is a dns lookup cost associated with too many domains, so measure and test.


Cdn

Use a cdn for all static resources; cdns usually have domain sharding built in by default. Cdn servers are physically closer to the user, so delivery time is lower, and cdns cache results and serve them faster. As with everything, test and measure.



Ajax

Load the essential resources (html, css, js) first, then post-load everything else using javascript. Make sure to enable caching of ajax responses by setting an Expires date.

Preloading fetches assets for upcoming pages once everything for the current page has loaded; this puts content in the browser’s cache if it wasn’t there already.

Configure ETags

Remove ETags from http response headers. They lead to the browser interpreting the same resource as different because the ETag is generated differently on each server, as is the case with CDNs and proxies.


Iframes slow down things

If possible, don’t use iframes; they just slow everything down, and there is no way around that.



Delegate Events

It’s better to have one element listening for the click event and then determine what to do based on the event target, than to have 10 buttons each listening for the click event. Too many event listeners clog the js event loop unnecessarily.
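Delegation can be sketched as a plain dispatch function; here the event object is simplified for illustration, and in the browser you would attach the returned listener to the parent element:

```javascript
// One listener on a parent routes clicks by the target's id,
// instead of attaching a listener per button.
function makeDelegate(handlers) {
  return function (event) {
    var handler = handlers[event.target.id];
    if (handler) handler(event);
  };
}

var onClick = makeDelegate({
  save:   function () { console.log('saving'); },
  cancel: function () { console.log('cancelling'); }
});

// In the browser: parentElement.addEventListener('click', onClick);
onClick({ target: { id: 'save' } }); // logs 'saving'
```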



New Protocols: Http2, Websocket

Blocking stylesheets (link media print)


Normal Critical Path: Get html → Parse/Construct Dom tree → Get Css & JS → Parse/Construct CSSOM tree  → Run JS  → Merge & Render CSSOM and whatever amount of DOM is present → Paint.

Put media queries on link tags; this tells the browser not to block on stylesheets that don’t apply immediately. Stylesheet downloading is already async natively (JS downloading, in contrast, is synchronous and blocks the parser).

<link rel="stylesheet" type="text/css" href="print.css" media="print">

Inline the critical CSS in the head and load the rest of the CSS using javascript, which can be done by hiding the main div and then showing it once the CSS has loaded.

Just moving the stylesheet link tag down in the body won’t help since, unlike the DOM, the CSSOM isn’t built incrementally: all the css is downloaded and parsed, then applied.

JS runs after the CSSOM is done, and JS blocks DOM parsing until it has finished. Large CSS will delay JS fetching/execution, which delays DOM parsing, which delays paint. So if a CSS file is big and hasn’t been loaded and parsed, it will stop the DOM parser when the parser encounters a script tag.

Script tags themselves, however, aren’t the same: they don’t block parsing of the markup above them, but they do block rendering. Putting them at the bottom of the body lets the browser construct the DOM without waiting for all the JS files to download, but their execution time still delays the rendering and painting of the page.

more information: https://www.youtube.com/watch?v=hW4FDYeONdg

Blocking javascript, Async, Defer

Normal Critical Path: Get html → Parse/Construct Dom tree → Get Css & JS → Parse/Construct CSSOM tree  → Run JS  → Merge & Render CSSOM and DOM → Paint

Here’s what happens when a browser loads a website:

  1. Fetch the HTML page (e.g. index.html)
  2. Begin parsing the HTML
  3. The parser encounters a <script> tag referencing an external script file.
  4. The browser requests the script file. Meanwhile, the parser blocks and stops parsing the other HTML on your page.
  5. After some time the script is downloaded and subsequently executed.
  6. The parser continues parsing the rest of the HTML document.

Step 4 causes a bad user experience: your website basically stops loading until all scripts have downloaded. If there’s one thing users hate, it’s waiting for a website to load.

Why does this even happen?

Any script can insert its own HTML via document.write() or other DOM manipulations. This implies that the parser has to wait until the script has been downloaded & executed before it can safely parse the rest of the document. After all, the script could have inserted its own HTML in the document.

Add the async attribute to any script tag that refers to JavaScript that isn’t critically important; async prevents parser blocking. Putting script tags at the bottom of the body alone isn’t enough, since that delays the start of their download until the very end of the html file.

The defer attribute is the same as async, except that the execution order of the script tags is preserved.
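In markup (filenames illustrative):

```html
<!-- downloads in parallel, runs whenever ready; order not guaranteed -->
<script src="analytics.js" async></script>

<!-- downloads in parallel, runs after parsing, in document order -->
<script src="app.js" defer></script>
<script src="widgets.js" defer></script>
```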

more information: http://stackoverflow.com/questions/436411/where-is-the-best-place-to-put-script-tags-in-html-markup

Streaming response on Server

To start getting data to the user as soon as possible, streaming is an easy solution. Even if the db is taking its time to respond, you can probably start sending the header and navigation related html.

It’s quite easy in nodejs: you just write or pipe whenever you get data.

Or use http2’s multiple data frames.
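A sketch of that idea, written so it works with any write/end-shaped response object (the page content and function names are made up):

```javascript
// Flush the static shell immediately, then append the slow part when ready.
function renderPage(res, fetchBody) {
  res.write('<html><head><title>Page</title></head><body><nav>menu</nav>');
  fetchBody(function (bodyHtml) {   // e.g. a slow database query
    res.write(bodyHtml);
    res.end('</body></html>');
  });
}
```

In a real node http server, res would be the http.ServerResponse passed to your request handler, and fetchBody would wrap the db call.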

Less specific CSS rules are faster

Though this could be seen as a micro optimization, it’s something to keep in mind nonetheless; effort shouldn’t be spent on it unless it’s really easy to do, or testing shows the CSS files are the bottleneck in the critical rendering path.

The basic idea is that the more specific a css selector is, the more time it takes the browser to apply that style to the DOM.

div p { ... }  /* slower than */
p { ... }

The reason is that for every node the browser comes across in the DOM tree, it has to check other DOM nodes to determine whether the constraints hold.


Redirects

There are three types of redirects: DNS, Html, and Javascript. We are talking about Html and Javascript redirects here. These redirects increase page load time, as the browser has to make a new request, which involves a DNS lookup, TCP handshake, and TLS negotiation; so it’s even more costly for https sites. The worst case scenario might look like:

http://example.com → https://example.com → https://www.example.com → https://m.example.com

Solutions Include: 

Use responsive design, so the mobile site and desktop site are the same site.

Use adaptive design: serve a custom site to mobile users by sniffing http headers.

Server Side Rendering of View

Sending a script to the client that in turn requests more data and then renders it accumulates more request overhead overall. Rendering the content on the server often reduces load time by cutting out the extra requests.

However, it needs to be measured: rendering on the server can increase the time to the first html, where before the first html was loading faster. So even though the site takes less time overall, it might appear to take longer, since the first content appears later.

more information: https://youtu.be/d5_6yHixpsQ?t=223


Streaming Api

I need to do more research on this, but the main idea is that your client side makes an ajax request and gets the data back in a streaming fashion.



Web Service Worker and Offline viewing

It’s a new improvement over AppCache. It acts as a cache between the browser and the server which you can control using javascript.

need to do more research..




PreResolve DNS

This is new: by adding these in the head tag, you tell the browser to start DNS lookups for the listed domains. Then, when actual resources are requested from one of these domains, the dns lookup won’t be needed, since the server’s ip address will already be cached.

    <link rel="dns-prefetch" href="//www.domain1.com">
    <link rel="dns-prefetch" href="//www.domain2.com">























Use Spritesheets for faster page load times!

The page load time of a site is one of the things that distinguishes a well run, professional website from an amateur or failing one. It’s thus also a way for developers to showcase their experience, by creating a fast loading, quality site.

One of the ways to instantly improve site performance, beside caching, proper compression of images, and using a CDN (content delivery network), is to reduce the number of http requests made. Even milliseconds matter and they add up: every individual request carries a lot of overhead needed to establish it, including dns lookup, protocol handshakes, header exchanges, and so on.


What is a spritesheet?

The spritesheet is a perfect way to reduce the number of requests made for images. The term is borrowed from the gaming industry, where game developers would put all the game graphics into one big picture and then use it by copying and cropping to show only specific parts. Additionally, if timed properly, spritesheets can be (and are) used for animation.
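On the web, the clipping is done in css (the file name and offsets below are illustrative):

```css
/* every icon shares one downloaded image; the offset picks the sprite */
.icon {
  background-image: url("sprites.png");
  width: 32px;
  height: 32px;
  display: inline-block;
}
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -32px 0; }
```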

Some Examples of Spritesheets

Early Games’ Examples

This allowed game developers not only to reduce file size and io operations, but also to animate more easily. For example, take this image:

I have made a jsfiddle that uses this image and iterates over its frames, making it look like an animation.



Work Flow

First create all assets as you normally would; then, once you are done creating and compressing images, replace all your image tags with divs and set the image as each div’s background. Usually there are task runners that take care of this.

These task runners combine all images into one image file and generate the relevant css classes. The process is similar to how javascript files are merged and minified for production.

You can do all of it manually, but I wouldn’t advise it at all.

Disadvantages & Alternatives

You can’t use the img tag, which could cause conflicts. Alternatively, you could use data URIs for images instead of a spritesheet.

Increased complexity of the app.