The retrieval and storage of the feeds happens irrespective of any session, and, as discussed previously, would be sped up greatly by differentiating the alive-times of each feed. It would be a good idea to set debug: true in the TwigFeeds config and enable Grav's debugger, as that adds timings to the debugger so you can see exactly how long retrieving the data takes.
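For reference, enabling that might look roughly like the following in the plugin's YAML config. This is a sketch; the file location and surrounding keys are assumptions based on Grav's usual plugin layout, so check your own twigfeeds.yaml for the exact names.

```yaml
# user/config/plugins/twigfeeds.yaml (sketch; adapt to your install)
enabled: true
debug: true   # adds feed-retrieval timings to Grav's debugger
```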
In the neighboring file log://twigfeeds.log you'll also find data on how long each feed took to retrieve and parse. 20+ seconds isn't horrible, but it shouldn't be imposed on the user. Because of how TwigFeeds checks for new data, you could probably run the wget or CLI call even more frequently, but as noted it's best kept roughly aligned with the caching done for each feed.
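If you schedule that ping with cron, a minimal crontab entry might look like this. The URL is hypothetical; point it at whatever page renders your fullfeed template, and pick an interval near your shortest per-feed cache time.

```
# Hypothetical crontab entry: warm the feed cache every 15 minutes
*/15 * * * * wget -q -O /dev/null https://example.com/fullfeed
```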
Rate-limiting is rare, but it sometimes happens because of poor server or feed configurations that don't make efficient use of caching on the feed source's end.
I know we've said before that the TwigFeeds refresh isn't dependent on client-side actions, yet I can't think of anything else. Pinging the fullfeed template that 'gets everything' has resolved the long load times. Everything is very fast now.
differentiating the alive-times of each feed
I changed the refresh intervals for the feeds several weeks ago, ranging from 1 hour to 12 hours depending on the source. But this makes no difference to the issue where the site is slow on the first visit after a period of no activity; only the ping makes a difference.
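Staggering like that is typically done per feed in the plugin config. The sketch below assumes cache_time is expressed in seconds and that each feed entry takes a source key; the feed names and URLs are purely illustrative.

```yaml
# Sketch of per-feed refresh staggering (keys and units are assumptions)
feeds:
  "Fast-moving source":
    source: https://example.com/news.rss    # hypothetical URL
    cache_time: 3600      # refresh at most hourly
  "Slow-moving source":
    source: https://example.org/blog.rss    # hypothetical URL
    cache_time: 43200     # refresh at most every 12 hours
```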
I'll look into it more, and also enable the debug setting and Grav's debugger to see if they tell me anything.
Yes, I've considered going to 30 minutes with the ping. I'll add 10 more feeds with different refresh times, and maybe do this.
That points to a somewhat different culprit: Grav's template rendering. Because the feeds are rendered by Twig, Grav has to recompile that code whenever the feeds change. I suspect the data retrieval, even from many feeds, won't take all that much time, as the manifest ensures it isn't done unnecessarily when each feed has a defined cache time to keep it alive.
At scale, there's a valid argument for lazy-loading and rendering the contents of each feed to defer work past the initial page load. That would be done through JavaScript rather than Twig. The data is already retrieved at the intervals defined in the TwigFeeds configuration and stored as JSON, so it's really just a case of using static_cache: true, supporting JSON as a page type, and writing some JS to do it.
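A minimal sketch of the JS side might look like this. It assumes a hypothetical route like /fullfeed.json that serves the cached JSON, and that each item carries title and link fields; the rendering logic is kept in a pure function so it is easy to test.

```javascript
// Build an HTML list from an array of feed items.
// Pure function: takes data in, returns a string, touches no globals.
function renderFeedItems(items, limit = 30) {
  return items
    .slice(0, limit)                       // cap entries per page, e.g. 30
    .map(item => `<li><a href="${item.link}">${item.title}</a></li>`)
    .join("\n");
}

// Usage sketch: fetch the JSON after the initial page load, then inject it.
// The route and the #feed-list container are assumptions for illustration.
// document.addEventListener("DOMContentLoaded", () => {
//   fetch("/fullfeed.json")
//     .then(res => res.json())
//     .then(data => {
//       document.querySelector("#feed-list").innerHTML =
//         renderFeedItems(data.items);
//     });
// });
```

Because the heavy part (retrieval) is already done on the server at the configured intervals, this only defers the rendering of the list itself, which is what the visitor actually waits on.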
Yes, I'd already thought it was about populating the front end, so you've explained this, thanks.
My fullfeed template uses a limit of 30 entries per page to limit single-page load stress, and I limit the number of pages being rendered; I think it's 6 at the moment. So the scale isn't massive.
It's really just a case of using static_cache: true, supporting JSON as a page type, and writing some JS to do it.
I'll look into what you say here, but it sounds like it might be above my technical ability. Maybe not, though! Any guidance or forum posts you know of that would help would be appreciated.
I'll report back on the 50-feed list with staggered refresh times and the ping interval increase to 30 minutes. Overall it's working very well currently.