
From what I’ve read and seen, CloudFront does not consistently identify itself in requests, so you can’t reliably block it by user agent. But you can work around this by overriding robots.txt at the CloudFront distribution.

1) Create a new S3 bucket containing a single file: robots.txt. This will be the robots.txt served on your CloudFront domain.

2) In the AWS Console, open your distribution’s settings and click Create Origin. Add the new bucket as an origin.

3) Under Behaviors, click Create Behavior with Path Pattern: robots.txt and Origin: (your new bucket).

4) Give the robots.txt behavior a higher precedence (a lower number) than the default behavior so it matches first.

5) Go to Invalidations and invalidate /robots.txt so any cached copy is refreshed.
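If you prefer the AWS CLI, steps 1 and 5 can be sketched roughly as follows. The bucket name and distribution ID are placeholders, and the robots.txt contents are just an example; the origin and behavior (steps 2–4) are easiest to configure in the console as described above.

```shell
# Sketch only — "my-robots-bucket" and "E1234EXAMPLE" are placeholder names.

# Step 1: create the single-file bucket and upload a robots.txt
# that will be served on the CloudFront domain.
aws s3 mb s3://my-robots-bucket
printf 'User-agent: *\nDisallow: /\n' > robots.txt
aws s3 cp robots.txt s3://my-robots-bucket/robots.txt --content-type text/plain

# Step 5: after wiring up the origin and behavior in the console,
# flush any cached copy of robots.txt from the distribution.
aws cloudfront create-invalidation \
    --distribution-id E1234EXAMPLE \
    --paths "/robots.txt"
```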

Now domainname.cloudfront.net/robots.txt will be served from the bucket, while everything else is still served from your origin. You can allow or disallow crawling at either level independently.
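For example (illustrative contents only — “yourdomain.com” is a placeholder), you could block crawlers on the CloudFront domain while leaving your main domain open. The file in the bucket applies to domainname.cloudfront.net; your origin’s own robots.txt applies to your domain:

```
# robots.txt in the S3 bucket — served at domainname.cloudfront.net/robots.txt
User-agent: *
Disallow: /

# robots.txt at your origin — served at yourdomain.com/robots.txt
User-agent: *
Disallow:
```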

Another domain or subdomain would also work in place of a bucket, but a single-file bucket is the least trouble to set up.

