TL;DR
This lesson is about diagnosis, not optimization.
You will get to know the tools for synthetic measurements, learn how to interpret their results, and understand why “PageSpeed points” can lie. You will also learn to separate backend measurements from frontend measurements - this distinction is absolutely key.
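The backend/frontend split can be read straight from the browser's Navigation Timing API. Below is a minimal sketch: the field names follow the standard `PerformanceNavigationTiming` interface, but the sample numbers are purely illustrative, and in a real page you would pass `performance.getEntriesByType('navigation')[0]` instead of a plain object.

```javascript
// Split a navigation timing entry into a backend share and a frontend share.
// In a browser: splitTiming(performance.getEntriesByType('navigation')[0]);
// here we use a plain object with the same field names (all values in ms).
function splitTiming(nav) {
  // Backend: everything up to the first response byte
  // (DNS, TCP, TLS, and server think time - i.e. TTFB).
  const backendMs = nav.responseStart - nav.startTime;
  // Frontend: downloading, parsing and rendering until the load event ends.
  const frontendMs = nav.loadEventEnd - nav.responseStart;
  return { backendMs, frontendMs };
}

// Illustrative numbers: a slow server (800 ms TTFB) but a light frontend.
const sample = { startTime: 0, responseStart: 800, loadEventEnd: 2300 };
console.log(splitTiming(sample)); // { backendMs: 800, frontendMs: 1500 }
```

If `backendMs` dominates, no amount of frontend optimization will move the needle - which is exactly why the two must be measured separately.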
Why do we need synthetic measurements at all?
In Lesson 1, you learned about RUM (Real User Monitoring) data - real data collected from actual users.
RUM results tell us what happened on our site, but they won’t always tell us why. Unless we have a very large dataset, CrUX isn’t particularly detailed, especially regarding specific elements of your store. CrUX data is aggregated and anonymized, which makes it excellent for trends, but weak for root-cause analysis.
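CrUX exposes its aggregated field data through a public API. The sketch below mimics the shape of a CrUX API record to show what "aggregated and anonymized" means in practice: you get percentiles per origin, not per user or per element. The object shape follows the CrUX API response format, but the numbers are made up and the endpoint/API-key handling is omitted.

```javascript
// A stand-in for a CrUX API response: one origin, one metric, one percentile.
// Real responses also carry histogram buckets; values here are invented.
const cruxRecord = {
  record: {
    key: { origin: 'https://example.com' },
    metrics: {
      largest_contentful_paint: {
        percentiles: { p75: 2400 } // milliseconds, aggregated over 28 days
      }
    }
  }
};

// Pull the 75th-percentile value of a metric out of a CrUX-style record.
function p75(record, metric) {
  return record.record.metrics[metric].percentiles.p75;
}

console.log(p75(cruxRecord, 'largest_contentful_paint')); // 2400
```

Note what is missing: no URLs of slow pages, no slow elements, no sessions - hence "excellent for trends, weak for root-cause analysis."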
And this is where the second category of measurements comes in: Synthetic Speed Tests (Lab Tests).
These are measurements performed in controlled environments.
They are not based on real user behavior - lab tests simulate it in a controlled, artificial environment, which is ideal for diagnosing problems. You won’t always be able to simulate exactly the same scenarios as in a real store (e.g., sessions with cookies or visits with marketing parameters - we’ll come back to this later!).
Popular tools include:
- PageSpeed Insights
- Lighthouse
- WebPageTest
You can also use tools built into your browser: the Console, Network tab, and Performance tab.
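Much of what the Network tab shows is also available programmatically through the Resource Timing API, which is handy for scripting your own lab checks. The sketch below totals transferred bytes per resource type; in a browser you would feed it `performance.getEntriesByType('resource')`, while here the entries are stand-in objects with the same field names.

```javascript
// Group resource timing entries by initiator type and sum transferred bytes.
// Entry fields mirror PerformanceResourceTiming (initiatorType, transferSize).
function bytesByType(entries) {
  const totals = {};
  for (const e of entries) {
    totals[e.initiatorType] = (totals[e.initiatorType] || 0) + e.transferSize;
  }
  return totals;
}

// Stand-in entries; in a browser: bytesByType(performance.getEntriesByType('resource'))
const entries = [
  { initiatorType: 'script', transferSize: 120000 },
  { initiatorType: 'img', transferSize: 450000 },
  { initiatorType: 'script', transferSize: 30000 },
];
console.log(bytesByType(entries)); // { script: 150000, img: 450000 }
```

A summary like this quickly shows whether scripts or images dominate the page weight - a first clue before diving into individual requests.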
A lab-based synthetic test gives you a complete overview of the requests a page makes and how the browser rendered it. By studying the details, we can then ask the question:
What can I fix in the code to make it faster?
However, the starting point before running synthetic tests should always be your RUM results.