Google Search Console's Speed Report

Have any of you guys looked closely at the new (“experimental”) speed report in Google Search Console? It’s a bit terrifying…

I’ve been developing some new sites (for years now, but finally starting to at least look somewhat complete), and pretty much all the pages on those sites are considered “slow” for mobile users, but the same exact page served to desktop users is considered to have a “moderate” speed.
[Attached image: "Mobile vs Desktop" (Screen Shot 2019-12-01 at 8.25.12 AM.jpg)]

That spurred me to take a closer look at how quickly my pages were being served. I found some slow SQL queries, and since SQL views seem to slow things down significantly, I'm now going through my code and rewriting things to execute the SQL directly rather than through a view.

Backing up a bit… the data comes from actual users. Chrome reports real-world timings back to Google (the Chrome User Experience Report), so Google has access to real user experience data. They use two measures of speed:

  • FID (First Input Delay) – the delay between the user's first interaction with the page and the browser being able to respond to it
      • Fast = under 100 ms
      • Moderate = 100 to 300 ms
      • Slow = over 300 ms
  • FCP (First Contentful Paint) – how quickly the first content of the page renders. For FCP:
      • Fast = under 1 second
      • Moderate = 1 to 3 seconds
      • Slow = over 3 seconds

They then classify the page by the worse of those two measures, so a page that renders quickly can still be rated "slow" if the other metric is bad. I just don't understand that logic. Isn't FCP the ultimate standard? The user mostly cares about how quickly they see something (FCP), not about everything else happening behind the scenes.

The other thing that's completely frustrating is that the standard is exactly the same on desktop and mobile. I'm serving the exact same page from the exact same server, yet the reports show huge differences for the same page. Here's the desktop report…
[Attached image: Screen Shot 2019-12-01 at 8.19.42 AM.jpg (desktop report)]

    Notice that /blog takes 151ms to serve, and the home page takes 125ms to serve. Now here’s the mobile report…
[Attached image: Screen Shot 2019-12-01 at 8.18.33 AM.jpg (mobile report)]

    The mobile report says /blog takes 618ms to serve (4.1x longer) and the home page 362ms (2.9x longer). The blog page is the exact same page, while the home page does have less content on it (so a smaller page).

Before this report came out, I would have thought serving a page in 151 ms was pretty fast. But no, it's not fast enough. And now I'm being judged on the quality of my users' mobile networks. All I can hope is that every other site out there has the same problem. But indexing switched to "mobile first" a while ago, so it's the mobile numbers that matter most. And I'm just flat-out slow on mobile, even though I'm returning pages in what I think of as a reasonably fast time.

    I know Bjorn is going to chime in here saying he prerenders all his pages, so his pages are all fast. That might work on some of my pages. It’s just really frustrating to have to do that, and it doesn’t work in a lot of cases. I’m already precalculating/caching some of the more complex queries. Guess I need to do even more of that.

Oh, and then there's data that just makes no sense. Moving away from my new sites to my forum site (which runs on IP.Board), I see this…
[Attached image: Screen Shot 2019-12-01 at 8.57.53 AM.jpg (forum site report)]

So it would seem that mobile was slow and desktop was mostly "moderate". Then there was an update and desktop got slow as well. Then another update, and mobile went from slow to mostly moderate with no change to desktop. Then yet another update fixed what they had messed up, and desktop went back to mostly moderate. I suppose that's all possible – IP.Board is pretty advanced, and I can see it serving very different things to mobile and desktop. It's just a little scary that required security updates to third-party software can change your speed ratings so drastically.

    I’m just curious to hear everyone’s thoughts/experiences with this…


Well, I've tried it on one of my sites. I really don't know how this test works, but the results were identical to yours. I worked on caching, and the result was a 2-point increase in GTmetrix, to 98 points, but in Google Search Console my site is still shown as moderate to slow. Caching all content, including database results, probably isn't a bad idea for a small site, but if you have 100,000 pages, for example, it seems impossible.

Bjorn – it's comforting to see that you're not greener than I am. Given that you serve a lot of static pages, you've always had very fast sites. If you can't achieve mostly green, then very few sites can.

My strategy at this point is to focus on desktop speeds, since those are easier to improve. What's discouraging is that I make changes that show great speed improvements in unit testing, yet they don't seem to make a dent in Google's graphs.

I also want to check whether any blocking JavaScript loads, etc. are delaying page rendering.

As for the big switches in the graphs, I doubt they're due to cleaning up non-existent pages. That would lower one of the lines (or several), not make them switch. Say the non-existent pages were on the green line: the drop in that line could be explained by cleaning them up, but that doesn't explain the rise in the yellow line. What you see are the pages people actually visit, and they probably weren't visiting the non-existent ones, so cleaning those up shouldn't have had much of any effect.

Oh, one thing I read said that redirects count against page speed. So if you have any cases where you redirect to a canonical URL, it's better to serve the page at the requested URL with a canonical tag (if possible).
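For anyone unfamiliar with the tag mentioned above: it's actually a <link> element in the page head, not a meta tag. A minimal sketch (the URL is a placeholder, not one of the sites in this thread):

```html
<!-- Served at whatever URL was requested, this tells crawlers which URL
     is the canonical one, without the cost of an HTTP redirect.
     https://example.com/blog is an illustrative placeholder. -->
<link rel="canonical" href="https://example.com/blog">
```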

    Ah, I just realized why their scoring on mobile is so tough… They’ve basically set up a scenario where the only way to score fast on mobile is to use AMP pages. Thing is, there are some serious limitations to AMP pages. Most banner ad serving systems don’t work on AMP because of the Javascript limitations. But if you think of them as landing pages that draw people further into your site, they have a purpose.

    Well, my efforts to speed things up are showing results. I’m down to zero slow pages on desktop on the biggest of my new sites…
[Attached image: Screen Shot 2019-12-04 at 7.45.52 AM.jpg (desktop: zero slow pages)]

So that's a start. But I also noticed, when I ran a PageSpeed Insights report on a particular URL, that my FID is "fast" 80% of the time (which surprises me, actually)…

[Attached image: Screen Shot 2019-12-04 at 7.47.26 AM.jpg (PageSpeed Insights field data)]

That's before any work was done. Now I just need to get the FCP numbers up (or down, as the case may be)… I'd like at least some fast pages; currently I have none at all. But there are a lot more SQL queries to optimize. I'm just glad I figured out what my issue was – fixing all the queries takes time.


    Just a note really, the slowest part for me has always been the database (SQL).

    I’ve been working heavily with databases for 28 years, but most of those years were working with an obscure, French database that (being French) thought differently than SQL. I knew some SQL, but not much. Then a few years ago I made the jump to MySQL (and PHP) and it’s been an interesting learning process. Some things work exactly like I expected them to, other things are completely different. So with that said, here are a few things I’ve learned recently…

Never use SQL views to serve public-facing pages. When I first saw them, I thought SQL views were great. They seemed elegant, and I assumed they were a best practice, since you could define things in one place and have multiple consumers hitting the same business logic. But MySQL / MariaDB often can't merge a view into the outer query; if there's any complexity, it materializes the view into a temporary table, and things get 10 times slower than a straight SQL SELECT. I now think SQL views only make sense in a corporate environment where users with basic SQL skills need to do complex things. Views dumb things down to the point where the user can get what they want, and it doesn't matter if they wait a few seconds for the result.
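A minimal sketch of the pattern described above (the table and column names are invented for illustration):

```sql
-- A view containing a GROUP BY can't use MySQL/MariaDB's MERGE algorithm,
-- so the server materializes the whole view into a temporary table on
-- every query that touches it.
CREATE VIEW star_scene_counts AS
  SELECT star_id, COUNT(*) AS scene_count
  FROM scenes
  GROUP BY star_id;

-- Filtering through the view builds the full temp table first, then filters:
SELECT * FROM star_scene_counts WHERE scene_count > 20;

-- The equivalent direct query gives the optimizer the whole picture at once:
SELECT star_id, COUNT(*) AS scene_count
FROM scenes
GROUP BY star_id
HAVING scene_count > 20;
```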

Indexing is critical. If something is slow, look at the indexes involved in your JOINs and WHERE clauses. Indexes on numeric data are generally faster than indexes on textual data. Create compound indexes if needed – and note that a compound index on (a, b) also serves queries that filter on a alone (the leftmost prefix), so a separate single-column index on a is often redundant.
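For example (hypothetical table, names invented for illustration):

```sql
CREATE TABLE scenes (
  scene_id    INT UNSIGNED NOT NULL PRIMARY KEY,
  star_id     INT UNSIGNED NOT NULL,
  released_at DATE         NOT NULL,
  title       VARCHAR(255) NOT NULL
);

-- One compound index serves both of these shapes:
--   WHERE star_id = ?                      (leftmost-prefix match)
--   WHERE star_id = ? AND released_at > ?  (full compound match)
CREATE INDEX idx_star_released ON scenes (star_id, released_at);
```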

    Large, seldom-used text/binary data should be in a separate table. Consider the amount of data that has to be loaded when you select a record. If you do SELECT * FROM… you’ll be loading whatever is in the row, even if you don’t need it. If it’s a large amount of data, it can slow things down. So if there are columns with large amounts of data that you don’t need very often, put them in a separate table with the same primary key and then do a LEFT JOIN to load the data when you actually need it.
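A sketch of that split, assuming invented table names:

```sql
-- Hot table stays small, so full rows load quickly on list pages.
CREATE TABLE articles (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,
  title      VARCHAR(255) NOT NULL
);

-- Big, rarely-needed column lives in a sibling table with the SAME primary key.
CREATE TABLE article_bodies (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body       MEDIUMTEXT   NOT NULL
);

-- Detail pages join the body in only when it's actually needed:
SELECT a.article_id, a.title, b.body
FROM articles a
LEFT JOIN article_bodies b ON b.article_id = a.article_id
WHERE a.article_id = 42;
```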

    Watch out for “copying to tmp table”. Watch the connections to your SQL server in real-time with something like MySQL Workbench and when you see “copying to tmp table” that’s a good indicator that a slow query is happening. Then look at those queries more closely. Apparently there’s also a “slow query log” that MySQL can generate, if you want it to.
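The slow query log mentioned above can be switched on at runtime; the threshold here is illustrative, and the log file location varies by install:

```sql
-- Log any query that runs longer than 1 second.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- Optionally, also log queries that do full scans with no usable index:
SET GLOBAL log_queries_not_using_indexes = 'ON';
```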

    Use EXPLAIN & ANALYZE to diagnose problems. This doesn’t always help, but sometimes it does. And realize that ANALYZE FORMAT=JSON (MariaDB syntax) returns a lot more info than just ANALYZE. That will tell you which part of a complex query is taking a lot of time. Then dig into what could be going wrong with that part of the query.
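For instance, on MariaDB (table name invented for illustration):

```sql
-- ANALYZE actually executes the query; FORMAT=JSON adds per-step details
-- such as actual row counts and time spent in each part of the plan.
ANALYZE FORMAT=JSON
SELECT star_id, COUNT(*) AS scene_count
FROM scenes
GROUP BY star_id
HAVING scene_count > 20;

-- MySQL 8.0's rough equivalent is a different statement:
-- EXPLAIN ANALYZE SELECT ... ;
```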

Cache results when possible. I have Memcache on my server (or is it Memcached? I can never remember). Anyway, most of my SQL queries are cached, and it makes a huge difference. For example, I just ran a fairly complex page with caching turned off and it took 336 ms to generate the HTML; with caching on, it took 78.6 ms. How caching is implemented depends on the framework you're using. I write all my PHP with the Fat-Free Framework, and I just pass the acceptable cache length as a parameter when I make a SQL call – the caching happens without any effort on my part.
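In Fat-Free Framework, that cache length is the third argument to exec(). A sketch of the pattern, with placeholder credentials and an invented table:

```php
<?php
// Configure a cache backend; F3 routes query caching through it.
$f3 = \Base::instance();
$f3->set('CACHE', 'memcached=localhost:11211');

// Placeholder DSN/credentials for illustration.
$db = new \DB\SQL('mysql:host=localhost;dbname=mysite', 'user', 'pass');

// Third argument = TTL in seconds: identical queries within the next
// 10 minutes are answered from cache instead of hitting MySQL.
$rows = $db->exec(
    'SELECT star_id, COUNT(*) AS scene_count
       FROM scenes GROUP BY star_id HAVING scene_count > 20',
    null,
    600
);
```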

One other thing about speed… Make sure your database tables are using InnoDB, then give a lot of RAM to the InnoDB buffer pool. The usual advice is that around 70% of the machine's RAM can go to the buffer pool, and it should be gigabytes of RAM. I have 3 GB allocated and wish I could do more, but it seems to be enough for now. You should also change the default buffer pool configuration: to avoid contention you want just a few pool instances, each with a lot of RAM. I think the default is 8 instances; change that to 4. What all of this does is let MySQL avoid hitting disk – the data will probably already be available in memory.
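Those settings live in my.cnf (or mariadb.cnf); the values below mirror the post and are illustrative, not recommendations for any particular machine:

```ini
[mysqld]
# Size the buffer pool to hold the working set (post suggests ~70% of RAM).
innodb_buffer_pool_size      = 3G
# Fewer, larger instances (the post suggests 4 instead of the default 8).
innodb_buffer_pool_instances = 4
```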

    Then use something like MySQL Workbench to make sure everything is good. “Key Efficiency” should be 100% (or close to it). Ditto for “InnoDB Buffer Usage” (mine seems pegged at 97.8%). If they’re not near 100%, then something is wrong.

    And speaking of hitting disk… Databases should always be on SSDs, never on HDDs. The same is true for PHP files and other small files.

A wonderful explanation of what needs to be done. Thank you, Jay. Recently I migrated huge databases to InnoDB, and the results were great. At the moment I'm experimenting with the LiteSpeed server; there are some limitations, but it's much faster than Apache, for example. In combination with InnoDB, it gives very good results.

    And a few other things you can do to make your SQL queries faster…

If you don't need the data, don't load it. I was converting some SQL views into regular SQL, and there was a bunch of stuff in the views I didn't really need – including JOINs that then necessitated a GROUP BY, etc. Getting rid of everything I didn't need made things much faster. But you don't need to go overboard: once the row is loaded, a few extra columns won't make much of a difference.
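A before/after sketch of that trimming (table and column names invented for illustration):

```sql
-- Before: a view-style query dragging along a JOIN + GROUP BY that this
-- particular page never needed.
SELECT s.star_id, s.name, COUNT(sc.scene_id) AS scene_count
FROM stars s
LEFT JOIN scenes sc ON sc.star_id = s.star_id
GROUP BY s.star_id, s.name;

-- After: if the page only lists names, skip the JOIN and the aggregation.
SELECT star_id, name FROM stars;
```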

Help MySQL get to what you want quickly. I was just working on something where I wanted to find the porn stars who had done the most bareback scenes. My initial query had no WHERE, just an ORDER BY on the number of bareback scenes. That was somewhat slow. Then I added WHERE scenesbareback > 0, which made it faster. Then I changed it to WHERE scenesbareback > 20, and it got substantially faster again. So if you can add conditions that help MySQL narrow things down quickly, it will return results faster.
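The progression described above looks roughly like this (scenesbareback is from the post; the table and other column names are invented, and the speedup assumes an index on scenesbareback):

```sql
-- Slow-ish: sorts every row, including the many with zero matching scenes.
SELECT star_id, name, scenesbareback
FROM stars
ORDER BY scenesbareback DESC
LIMIT 50;

-- Faster: the WHERE clause lets MySQL discard most rows before sorting,
-- especially when scenesbareback is indexed.
SELECT star_id, name, scenesbareback
FROM stars
WHERE scenesbareback > 20
ORDER BY scenesbareback DESC
LIMIT 50;
```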