Digital optimizer with a love for metal music 🤘 and a nerdy interest in #data + #webanalytics.
Working @ Transact Denmark.
I love SEO news - when reported honestly (sorry SEJ for being a pain)
I just discovered roughly 200K URLs with status 204 (no content) that aren't blocked in robots.txt.
Would a status 204 waste crawl budget or not?
I guess the question is: what IS a crawl (in terms of crawl budget)?
Is it the request, the response headers AND the body, OR just the request + response?
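For anyone who wants to sanity-check a find like this, here's a minimal sketch (standard-library Python; "urls.txt" and www.example.com are hypothetical stand-ins, not details from the post) that spot-checks a sample of URLs for the 204 response and whether robots.txt covers them:

```python
import urllib.error
import urllib.request
import urllib.robotparser

# Hypothetical host; swap in the real robots.txt URL.
rp = urllib.robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()
# Note: urllib.robotparser follows the original robots.txt spec (no wildcard
# support), so treat its answer as a rough check only.

with open("urls.txt") as fh:  # hypothetical file, one URL per line
    for url in (line.strip() for line in fh if line.strip()):
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code
        # Even with an empty body, a 204 still costs the crawler a request and
        # a response, so an unblocked URL is still a fetch.
        allowed = rp.can_fetch("Googlebot", url)
        print(status, "crawlable" if allowed else "blocked", url)
```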
Gbot reports a bug that was introduced after the Gbot crawl.
Does anyone (perhaps @johnmu.com) know why GSC lists a page as last crawled May 21 at 21:28 - with the user-declared canonical showing a bug that was introduced in a midnight release between the 21st and the 22nd?
:: Is Google showing Favouritism? ::
After years (and years) of complaints,
of people showing G examples of weak, bad, spammy, unhelpful, unsatisfactory content,
ranking on "brand sites" (particularly Large/Enterprise Publishers) ...
... G made the #SRA.
>>>
X: x.com/darth_na/sta...
Do you know how Gbot would interpret this in robots.txt:
/oplevelser/*$
GSC says it's crawlable when inspecting: www.dailys.dk/oplevelser/m...
technicalseo.com/tools/robots... + Screaming Frog say it's not, due to robots.txt.
Is *$ an invalid combo that makes Gbot ignore that line?
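My reading of the robots.txt spec (a sketch only, not Google's actual parser): * matches any run of characters and a trailing $ anchors the end of the URL, so /oplevelser/*$ should behave like /oplevelser/* and block everything under /oplevelser/. A quick way to test that reading (the example paths are hypothetical):

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    """Rough Googlebot-style matching: '*' = any run of characters,
    a trailing '$' = end-of-URL anchor. Not Google's real parser."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# Under this reading the rule blocks everything below /oplevelser/ ...
print(rule_matches("/oplevelser/*$", "/oplevelser/some-page"))  # True -> disallowed
print(rule_matches("/oplevelser/*$", "/oplevelser/"))           # True -> disallowed
# ... and leaves other paths alone.
print(rule_matches("/oplevelser/*$", "/andre-sider/"))          # False -> allowed
```

Under that reading *$ isn't invalid, just redundant - but again, that's a sketch of the spec, not a claim about what Gbot or GSC actually does.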
I wrote an article showing you how to fix it: www.linkedin.com/pulse/scamme...
BUT we saw animal tracks in the sand by the harbour and there was a strong smell of animal urine 50 metres away.
The prints were roughly the size of a 2- or 5-krone coin.
Can you tell whether the tracks could match a raccoon dog (Mårhund)?