Search Engines Have Learned
Loading Content Via Load Instead of User Events
These events are fired by the browser as soon as a site's DOM tree has been loaded. Search engines like Google execute load events during crawling, which means a site's content is usually indexed only after the load events have run.
User events, however, are not triggered. Thus, any content changes triggered by click or touch events, for instance, will not be considered during indexing.
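The difference can be sketched in a few lines. This is only an illustration: the endpoints `/api/article` and `/api/more` and the element IDs are hypothetical placeholders, not part of any real site.

```javascript
// Sketch: content fetched during page load vs. content behind a click.
// Endpoints and element IDs are hypothetical.
function renderInto(container, html) {
  container.innerHTML = html;
}

if (typeof document !== 'undefined') {
  // Crawlers like Googlebot execute this handler, so this
  // content ends up in the index.
  document.addEventListener('DOMContentLoaded', () => {
    fetch('/api/article')
      .then((res) => res.text())
      .then((html) => renderInto(document.querySelector('#content'), html));
  });

  // Crawlers do not click: content loaded here stays invisible to indexing.
  const more = document.querySelector('#more');
  if (more) {
    more.addEventListener('click', () => {
      fetch('/api/more')
        .then((res) => res.text())
        .then((html) => renderInto(document.querySelector('#extra'), html));
    });
  }
}
```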
Push-States and URLs
For Google to index a site, the site always has to be accessible via a URL. This is another reason click events cannot be considered: the content they display is triggered by an individual user and is not tied to a URL of its own.
Since Google can't index URLs that exist exclusively via the push-state API, every URL created with "pushState()" also needs a "real", existing counterpart that the server can answer directly.
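A minimal sketch of that pairing, under the assumption that the server answers every client-side route with full HTML; the routes, the `toRealUrl` helper, and the `loadView` placeholder are illustrative, not part of any particular framework:

```javascript
// Convert an old hash-only route (e.g. '#!/products/42') into a
// crawlable path ('/products/42') that the server can also serve.
function toRealUrl(hashRoute) {
  return hashRoute.replace(/^#!?\/?/, '/');
}

// Placeholder: fetch and render the view for `path` client-side.
function loadView(path) {}

function navigateTo(path) {
  if (typeof history !== 'undefined' && typeof history.pushState === 'function') {
    // Update the address bar without a full reload ...
    history.pushState({ path }, '', path);
    // ... and render the view client-side.
    loadView(path);
  } else if (typeof window !== 'undefined') {
    // Fallback: a normal navigation to the same, real URL.
    window.location.href = path;
  }
}
```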
By the way, this is not only relevant for search engines, but for social networks as well: you can only share pages that have a "real" URL. Facebook and Twitter also need to extract content from a site, which only works if there's a URL.
This principle follows the approach that content has to be prepared in a way that makes it available regardless of browser or crawler.
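One common way to apply this principle is progressive enhancement: the server already delivers the essential content as plain HTML, and JavaScript only adds non-essential extras. A small sketch, assuming a hypothetical `/api/comments` endpoint and `#comments` container:

```javascript
// Sketch: the essential article text is already in the server-rendered
// HTML for every browser and crawler; JavaScript only enhances the page.
// The endpoint and element ID are hypothetical.
function renderComments(list) {
  return list.map((c) => `<p>${c.text}</p>`).join('');
}

if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    const target = document.querySelector('#comments');
    if (target) {
      fetch('/api/comments')
        .then((res) => res.json())
        .then((list) => {
          target.innerHTML = renderComments(list);
        });
    }
  });
}
```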
Here, you have to judge which effort is reasonable for your project.
Testing the Crawler's View
Google's "Search Console" helps you here. Under "Crawl", you'll find the "Fetch as Google" feature, which lets you display a website for mobile and desktop devices the way Google actually crawls it.