Indexing your PWA (Discoverability & SEO) – Progressive Web App Training

SARAH CLARK: Once you have a progressive web app, do you know how to give it a good search ranking? Hi, I’m Sarah Clark, and I’m here to lead you into the world of discoverability and search engine optimization for PWAs. [MUSIC PLAYING]

Every search engine has a different way of ranking pages, but they all depend on a web crawler to gather information. And when you build a JavaScript-driven site, the crawler might not be able to find everything. You might need to give it a little help. While every search engine has its own way of crawling, there are two fairly obvious rules. First, if the crawler can’t see it, it’s not going to be indexed. Second, everything needs its own URL.

There may be a trivial solution for your site. If customers always search for a landing page or other static content, let those pages be static content. This won’t index client-rendered content, but that may be exactly what you want.

This does raise an interesting distinction. A PWA does not have to be a single-page app. You could add a service worker to every page in a website, or multi-page app. As long as those pages share the same origin and fall within the worker’s scope, they will share a service worker.
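As a sketch of that multi-page setup, the registration below assumes a worker script at /sw.js (a hypothetical path) registered with a scope of '/', so every page on the origin shares the one worker:

```javascript
// Minimal sketch: one service worker shared by every page of a
// multi-page app. The path /sw.js and scope '/' are assumptions;
// adjust them to where your worker script actually lives.
function registerSharedWorker() {
  // Feature-detect first, so older browsers simply skip registration.
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null;
  }
  // Any page under '/' on this origin shares this one registration.
  return navigator.serviceWorker.register('/sw.js', { scope: '/' });
}
```

Including the same registration call on each page is safe: the browser treats repeat registrations of the same script and scope as updates to one registration, not as new workers.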
Another option is to server-render the dynamic content, and then let the client take over rendering. This lets any crawler see and index all of your content. You can use these solutions with any crawler, since there is no JavaScript involved. And if you want your app to be indexed everywhere, you’ll have to render it on the server. You can write code that renders on the client or as server-side JavaScript; this is called isomorphic JavaScript. But that assumes you’re using Node or another JavaScript server.

If you want an easy test, you can run Lighthouse. It includes some basic SEO discoverability tests, which check your pages as if they were visited by an HTML-only crawler. Each test has instructions for fixing or improving shortcomings.

OK, so the universal answer is not to depend on JavaScript. But Google’s crawler can run JavaScript, so it can index client-rendered sites, as long as you follow some rules. There are about a dozen rules, but the top five will take you most of the way. We’ve already covered the first rule: make your content crawlable. That means rendering it so the crawler can find it. If you’re writing a single-page app, the top five rules become these top five tips.

Many developers provide navigation links with a hash for the URL and handle navigation with a click listener instead. Your links should point to actual paths in your app to trigger view changes. You also need to avoid URL fragments, the part of the URL that begins with a hash sign. These break many tools and libraries, and are now deprecated for navigation. We used to recommend hash-bang prefixes for crawling Ajax-powered sites as a way to change URLs without reloading the page. But now, you should use the History API instead.
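To make that concrete, here is a minimal sketch of History API navigation; navigateTo and renderRoute are made-up names standing in for whatever your router actually provides:

```javascript
// Minimal sketch: client-side navigation with real paths instead of
// hash fragments. navigateTo and renderRoute are hypothetical names,
// not part of any particular router.
function navigateTo(path, renderRoute) {
  // In a browser, update the address bar without a full page reload.
  if (typeof history !== 'undefined' && typeof history.pushState === 'function') {
    history.pushState({}, '', path);
  }
  // Render the view for the new path either way.
  return renderRoute(path);
}
```

The matching markup keeps a real href (for example, an anchor pointing at the actual path /products/42), so a crawler that never runs your click listener still discovers the URL.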
The next rule is to use canonical URLs for duplicate content. For example, AMP pages normally have a server-rendered page and a client-rendered AMP page. The client-rendered page has a link back to the server-rendered page using rel=canonical. The crawler will then index the canonical, server-rendered page. Some developers even shadow their client-rendered pages with server-rendered pages, and use the canonical link to point back to the server. This makes more of the app discoverable.
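One way to sketch that canonical link, with example.com standing in as an invented URL: in a server-rendered template you would simply write the link tag into the head, but a client-rendered page can build it like this:

```javascript
// Minimal sketch: a client-rendered page declaring its server-rendered
// duplicate as the canonical URL. The URL passed in is a made-up example.
function canonicalLink(serverUrl) {
  if (typeof document !== 'undefined') {
    // In a browser, create the element so it can be appended to <head>.
    const link = document.createElement('link');
    link.rel = 'canonical';
    link.href = serverUrl;
    return link;
  }
  // Outside a browser, return the markup the page head should contain.
  return '<link rel="canonical" href="' + serverUrl + '">';
}
```

Where possible, prefer emitting the tag in the server-rendered HTML itself: a canonical link added by script is only visible to crawlers that run JavaScript.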
Tip number four also gives you great accessibility: use native HTML elements whenever possible. Crawlers know what to do with an actual button, but won’t recognize a div of class button in the same way.

Finally, use progressive enhancement. Use polyfills where it makes sense to support older browsers. You never know which version of a browser a particular crawler uses, so play it safe.
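As one sketch of that tip, the loader below feature-detects IntersectionObserver before lazy-loading images and falls back to eager loading; the data-src convention here is an assumption for the example, not something prescribed above:

```javascript
// Minimal sketch of progressive enhancement: lazy-load images only when
// IntersectionObserver exists, otherwise load them all eagerly so an
// older browser (or a crawler's engine) still sees every image.
function loadImages(images) {
  if (typeof IntersectionObserver === 'undefined') {
    // Fallback path: copy data-src into src right away.
    images.forEach((img) => { img.src = img.dataset.src; });
    return 'eager';
  }
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  images.forEach((img) => observer.observe(img));
  return 'lazy';
}
```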
Some simple changes can improve your data quality and give users much better results. One is to use structured data annotations, such as the schema.org vocabulary. There are predefined schemas for common areas, such as e-commerce, scheduling, and job postings. Search engines can use the schema annotations to parse your data accurately.
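For instance, an e-commerce page might describe a product with schema.org markup serialized as JSON-LD. The product details below are invented; in the page itself, the resulting JSON goes inside a script tag of type application/ld+json:

```javascript
// Minimal sketch: building schema.org Product markup as JSON-LD.
// The name and price arguments are made-up example values.
function productJsonLd(name, price) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'Product',
    name: name,
    offers: {
      '@type': 'Offer',
      price: price,
      priceCurrency: 'USD',
    },
  });
}
```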
The same logic applies to the Open Graph protocol, which allows any web page to become a rich object in a social graph. Finally, Twitter Cards provide a rich media card that displays when anyone links to your site from Twitter.

It’s important to test your work, and to work iteratively, so that you can see the effects of each change. Testing on multiple browsers is not only a best practice for everyday development; it also ensures your site renders correctly for multiple crawlers. Testing with the Google Search Console will crawl your site and show you the results. And you should always pay attention to loading performance. Use tools such as PageSpeed Insights or WebPageTest to measure the loading performance of your site. Remember, about 40% of consumers will leave a page that takes longer than three seconds to load.

Of course, the most important rule is to treat client-side rendering as a progressive enhancement. If you test on a range of browsers, you’re probably fine. If you want to be certain, you can use the Fetch as Google tool in the Search Console. If that went by a little fast, see the Google Webmaster Central blog for the details on how to make your PWA search-ready. Then come back here, and I’ll tell you how to measure user engagement in your PWAs. Thanks for watching. [MUSIC PLAYING]

Danny Hutson
