Tell HN: A case of negative SEO I caught on my service and how I dealt with it
278 points by santah on Feb 11, 2021 | 88 comments
Recently, my service https://next-episode.net experienced a huge drop in Google rankings. As I've been running it for more than 15 years, this is far from the first time this has happened. Usually I've been able to attribute big fluctuations (positive or negative) either to something I did, a Google algo change, or some external factor.

For example, about 2 years ago, something similar happened. While digging through my Search Console I discovered that Russian websites had generated thousands of links pointing to a page on Next Episode, with pornographic keywords used as link anchors. This was so effective that they managed to get those keywords to the top of the "Top linking text" report in Google Search Console - naturally (most likely) resulting in a drop in rankings for the regular keywords and the domain in general.

About a week ago, while trying to investigate the current drop in rankings and browsing through my "Latest links" external links export from Google Search Console, I noticed something funny. There were thousands of links in there (from 3 domains) following the same structure as on Next Episode: domain/show-name, domain/show-name/browse, domain/show-name/season-1, etc.

Following these links revealed something even funnier: all of them displayed content directly from my site! Not even scraped/cached content - they were dynamically pulling content from my server and displaying it on their domain. Even the search worked, the news archive and the top charts. Here is a list of those domains as an image: https://i.imgur.com/PjNKh0b.png. I've since blocked their access, so opening any of them will not show my website right now, but here is how it looked: https://i.imgur.com/HBiL3yh.png

Now, my first thought was that those were maybe scraping the content as part of a link farm (to spam with ads?), but I also wanted to know more. I experimented with Google searches that included pages from my website, like "Hot Shows - Next Episode" and ones with very specific news post subjects like "Streaming Services Availability added to Episodes and Movies" (posted in September last year). Imagine my surprise when I discovered that not only were the domains above indexed by Google (and listed in the search results), but there were 4-5 more domains that did the same thing, and some of them even outranked mine!

Here is a full list of domains that I discovered by searching for my news posts subjects: https://i.imgur.com/dAm1CzI.png. If you Google for site:domain.com you'll see some of them have thousands of pages indexed by Google. Trying out more keyword searches, I was also able to discover these domains: https://i.imgur.com/s5YjJWK.png (as they've cached the content, they still work). Those all seem to be part of the same operation, but they serve a different purpose - they have only scraped the home page of Next Episode and all their links point to inside pages on the other domains. I suspect this is to generate incoming links to the other domains and give them some credibility.

As with the adult-keyword link anchors mentioned above, I suspect this whole thing is a negative SEO campaign - I don't see any other reason for it to be happening, and it seems to be achieving its goal. Once I'd found out all I could about the domains involved, I took some action:

1) disavowed all those domains through the Google disavow tool

2) investigated whether I could redirect their pages to mine (as they were dynamically pulling the content, I could change it to whatever I wanted). I managed to make it work through JavaScript (though interestingly, it had to be obfuscated, as they were doing some sanitizing when pulling my content and replacing strings like "window.location.href" with "window.loc1ion.href" - a sketch of the idea follows this list), but in the end I decided against it and:

3) I blocked their IPs through CloudFlare (all Russian IPs). An interesting thing here is that once I blocked an IP, the domain would somehow automatically switch to another IP to pull my content from, but once I'd blocked 10 or 15 of them, they seemed to run out of IPs and now they stay blocked.
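
For the curious, here is a minimal sketch of that redirect trick (hedged: the real sanitizer may do more than the literal string replaces I observed, and they also rewrite any URL containing my domain, so everything here is assembled at runtime):

    (function () {
      // Assumption: the proxy only does literal string replaces like
      // "window.location.href" -> "window.loc1ion.href" and rewrites URLs
      // containing the real domain, so neither appears as a literal here.
      const home = "https://" + ["next", "episode"].join("-") + ".net";
      if (location.hostname.indexOf("next" + "-episode") !== -1) return; // already on the real site
      (window as any)["loc" + "ation"]["hr" + "ef"] = home + location.pathname;
    })();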

I looked for a way to report those domains to Google, but as of today, I've not found a place to do it. Does anybody know? Today, about a week after I blocked the domains that pulled content from my site, they still have thousands of my pages indexed in Google and are ranking better than me in some search results. I'm guessing that, with time, Google will catch on to the fact that they don't show any content anymore and will delist those pages.

This whole thing was very new to me so I hope it'll raise awareness that this is going on and maybe help someone else catch it happening to their website. I'd appreciate any feedback on this and I'm around if you have any questions. It would also be interesting to hear about anyone's related experiences. Cheers!



I'm sorry to say, but the neg SEO didn't drop your rankings; it was down to the Google algorithm update [1]. Check the screenshot from Ahrefs [2]: your traffic drops on the 3rd of December, which is when the update went live. [1] https://moz.com/blog/googles-december-2020-core-update [2] https://i.imgur.com/DBkdUEk.png

Google's algorithm is smart enough to recognise Neg SEO attacks. Sure, five years ago you could buy a blast of spammy links using Xrumer or GSA with some viagra anchor text and boom - your competition is gone.

From a quick glance, most of your pages have pretty thin content, and I assume it's pulled from an API, so none of it is unique. If there's one thing I would do, it's build some content on those pages. A great tool to analyse and develop SEO-friendly content is SurferSEO - highly recommend it.

I'm surprised your forum doesn't rank as well as your main site, as it looks fairly active. However, I'm not sure how PunBB does SEO-wise.


>Google's algorithm is smart enough to recognise Neg SEO attacks. Sure, five years ago you could buy a blast of spammy links using Xrumer or GSA with some viagra anchor text and boom - your competition is gone.

I distinctly remember 8 years ago dealing with negative SEO and reading the same thing everywhere while researching it. "Negative SEO used to work in the olden days of the internet, but now in the modern era Google is all over it."

I wonder if in 5 years people will be admitting that negative SEO worked 5 years ago.


I had a ten year old site virtually de-indexed as a result of someone plagiarising one single page. It wasn't even an attack as such, just someone being too lazy to at least re-word the article.

This was within the last 3 years. Until then, I too believed Google were 'smart enough' to recognise such things.


> From a quick glance, most of your pages have pretty thin content, and I assume it's pulled from an API, so none of it is unique. If there's one thing I would do, it's build some content on those pages.

Your advice will help OP, but it's sad to see it on here.

Adding content to sites that don't need it ruins the experience. Don't build for the bot, build for humans.

SEO is a cancer on the web. A few days ago Google directed me to a recipe for a particular type of bread. I swear I had to scroll through the author's entire life story... how their grandmother handed down this recipe from her grandmother's mother, how it feels to make your own bread, how to save your sanity with bread, how best to store bread.

The recipe at the very bottom of the page could have fit neatly above the fold.

Here are two options for OP, either of which will improve his website, unlike your suggestion: https://i.imgur.com/9VlBguW.png


This, 100%, with looking for recipes. I understand that they need a lot of "rich content" in order to improve SEO rankings, but when it takes a good 20 seconds of scrolling on a phone, past 3 inline ads, just to get to the recipe, it's beyond frustrating.


Here's a recipe search site I built not too long ago that shows you just the recipe: https://recipe-search.typesense.org/


My understanding with recipe sites is the copy isn't really for SEO as much as it is to have something you can copyright, because recipes in isolation can't be.


But anyone can still legally copy the recipe (the ingredients and how to combine them) and ignore the superfluous content.


Hey there and thanks for looking into it.

I'm aware of the algo update that happened in December (and that it correlated very closely with my drop in rankings).

However, in my experience - even though jumps or drops in ranking are almost always triggered by such updates - there is a good reason for it to happen.

But you're right, what's happening here may also have had no effect whatsoever.

In any case - I found it because I dug in to try and figure out what was going on, and I can't explain what I found in any other way.

Even if it didn't affect ranking - does it look like negative SEO to you or do you think something else is going on here?


This December update was quite significant; I know a few people's sites tanked as well, so you're not alone.

I still think the main problem for you is thin content. Google doesn't seem to be as strict about duplicate content as it is about thin content. But if you're taking the movie/episode descriptions from TMDb, there are most likely thousands of other sites doing it too.

Google could have put you in a thin content category [1]. You may have got a notification in Search Console (but that's unlikely - I'm not a fan of the new Search Console, since the data in there is so vague it's not very helpful).

Another thing is that when I look in Ahrefs for the pages with the most links [2], they are forum profiles that have been hit with spammy Xrumer/GSA links. They're referred to as profile links. They are pretty useless, but people still build them as there is literally $0 cost to do so.

If I were to offer my advice, my first step would be to build some content. First, add some content to the homepage (if you get SurferSEO, it will tell you how many words to write based on your competition), and then, if you're willing, build in a blog (you seem to be a developer). You could write some roundup articles that are SEO focused and then link to the relevant show/movie, which would also help your internal linking.

If you wanted to explore building backlinks (depending on whether you want to go down a whitehat or greyhat road), you can look into paid guest posts or niche edits.

[1] https://www.vertical-leap.uk/blog/what-is-thin-content-why-d... [2] https://i.imgur.com/xZ8VEJQ.png


I'll check out what SurferSEO is suggesting, thanks for the recommendation, though generally - I try to avoid tailoring my content for bots and optimal SEO.

Good catch with the spammy forum profiles - I moderate the forum quite heavily, so there is no spam there, but I never thought to monitor inactive profiles. I'll clean them up and will see what I can do to try and prevent those from popping up in future.

Thanks


Like the poster above -- I don't think it's negative SEO either.

I think it's just an opportunistic person who saw an easy way to rank their own stuff by taking advantage of your hard work.

Like you said -- they ranked higher in some cases (without doing any work besides copying you).


But what are they ranking higher?

Only the domains that display the stolen content, with no ads or anything.

What is in it for them?

It doesn't make much sense to do it other than if you're trying to affect someone else's ranking by causing them to be penalized for duplicate content. No?


If you look through their source code there are ad tags.

view-source:https://[redacted].com/titansgrave--the-ashes-of-valkana/sea...

Unless you have a competitor who really hates you over some personal vendetta, it's exceptionally rare to see this level of effort for the purpose of negative SEO in 2021.

Like I said, it just looks like a spammer going for low-hanging fruit, throwing everything at the wall and seeing what sticks.


Not sure what you mean.

The ad code there is mine, and the only thing they added is some JavaScript code for a Russian web page counter ...

Can you please remove the link to the offending site so as not to give them any link juice?


I think the OP is saying it's a spammer running experiments to see what works, with the intention of maybe one day doing something.

I'm not sure that is a correct assessment, just (hopefully) clarifying the OP's intent.


It looks like my post is too old at this point to allow me to edit it. If DanG or any mods are around, I would appreciate it if you could remove the link in my previous post.


Yup. Edited now.


"Google's algorithm is smart enough to recognise Neg SEO attacks"

This is mysterious to me - the idea that bad links hurt your site unless someone else bought them. Google's smart and all, but how would it know the difference?


John Mueller even said himself: https://twitter.com/johnmu/status/913222203071594496

This is not an isolated thing; negative SEO attacks happen on a daily basis, mostly intentional, but I bet you there are some accidental ones too.

If neg SEO still worked efficiently, then SEOs would be focusing on spamming their competition instead of investing in quality content and backlinks.

As in any case like this: disavow the links if you need to, then move on and continue to build content Google likes, with quality backlinks.


Until recently, Google had a form where you could report scraped pages that outranked your original content.

https://docs.google.com/forms/d/1Pw1KVOVRyr4a7ezj_6SHghnX1Y6...

The fact they removed it tells me they're confident enough in their algorithms nowadays, but that doesn't mean they're perfect.

I can see my own content being outranked by the scraping domains in this very case.

Just google "Streaming Services Availability added to Episodes and Movies" and you'll see.

If they can outrank me, I don't see how this won't affect me negatively, with my content being considered duplicate ...


> The fact they removed it tells me they're confident enough in their algorithms nowadays

Or it could mean something completely different.


This is it - and the January update, and then the end-of-January update too. All of which have wiped out a lot of traffic for others out there, all with solid sites.

Right now, getting links indexed is a 15+ day affair for some; some have no luck at all. From the search results I've been getting, it's almost like nothing makes sense anymore. Pure garbage content is at the top or, worst of all, content from 3+ years ago - and content freshness is supposedly a key ranking factor.


Btw, forgot to comment on your remark about the forum.

The thing is, I have "nofollow"-ed all links to the forum from the main site. Honestly, at this point - I don't even remember why I did it, it must've been close to 10 years ago.

Now this makes me wonder whether I should remove those nofollows ...


It's fine to leave them. The people who create these forum profile links will be doing it manually. Whether they're nofollow or dofollow, people will still try to build these links.

Nofollow is another thing that isn't as important as it used to be. Sure, dofollow does give you more link juice, but nofollows are still good. If 99% of the links to your site are dofollow, it's going to look unnatural, so a good mix of dofollow and nofollow always helps.


May I just say kudos, sir, for dealing with this situation with such aplomb. It is easy to imagine an alternative response, with far more anger and less curiosity. You are like a doctor looking at a disease: "Ah, look at this awful thing happening, how interesting!"

Also, given the way they were using your site, effectively reverse-proxying you and adding ads, it implies that you have access, in your server logs at least, to all of their traffic! And that might give you insight into their motivations, and maybe other elements of their operations. I mean, it sounds like a reasonably clever, small scale scam operation in Russia; but if they proved out the technique with your niche site, then they can easily duplicate with other sites, in which case it is effectively a new kind of malware that has to be solved by Google!

Last but not least, I wanted to encourage you, and others, to consider whether this kind of attack would work in a decentralized world, what search looks like in that world, and therefore how this kind of attack might be mitigated.


Thanks for the kind words :) It is indeed both terrifying and super exciting and interesting to figure out what is going on (and how to stop it).


Update: After a week of doing nothing - they finally noticed their thing is blocked and sprang into action.

Apparently, they've expanded the pool of IPs they pull data from, and now it seems to be endless (so some of the scraping domains actually work now).

I'm investigating what I can do about it. I'd appreciate any advice!


Update 2: After banning close to 2k individual IPs, it looks like I got it under control, for now.

I wonder how banning so many IPs affects CloudFlare performance and if I should optimize it to block whole IP ranges instead ...
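If I do go the range route, collapsing the list is simple enough. A rough sketch (the /24 granularity and the threshold are arbitrary assumptions):

    // Collapse individually banned IPv4s into whole /24 blocks once a
    // subnet accumulates enough offenders, so fewer rules are needed.
    function collapseBans(ips: string[], threshold = 5): string[] {
      const bySubnet = new Map<string, string[]>();
      for (const ip of ips) {
        const subnet = ip.split(".").slice(0, 3).join(".");
        const members = bySubnet.get(subnet) ?? [];
        members.push(ip);
        bySubnet.set(subnet, members);
      }
      const rules: string[] = [];
      for (const [subnet, members] of bySubnet) {
        if (members.length >= threshold) rules.push(subnet + ".0/24"); // ban the whole block
        else rules.push(...members);
      }
      return rules;
    }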


Update 3: After about a day - they expanded their pool of IPs to more than 2k new ones.

I blocked what I could and then I put the whole country (Russia) behind a CloudFlare JavaScript challenge (so that any legit traffic that wanted to pass it could).

Everything stayed blocked for about a day and now it seems they gave up:

- all domains that pulled dynamic content from my site now show some other content and do not try to scrape Next Episode

- all static domains that delivered the cached version of the Next Episode homepage now deliver some other website(s)

- CloudFlare shows no further traffic on the firewall rules I've set (so the bots from those IPs are, for now, gone)

As this post is relatively old now (more than 3 days), I doubt many people will go back and check for new developments, but I wanted to give this proper closure and let you know of the apparent happy ending.


Hey, I read it! Thanks for the update! Now I see why some people block countries or IP ranges in their sites.


Are the IPs from the same ASN block? Cloudflare should show you that in the Firewall events page. If they're using IPs from a single VPN/server company then you might be able to just block the ASN.

You might also be able to find common user agent headers through the CF firewall page and block them based on their UA. That won't work if their scraper tool is randomizing the UA string, but quite a few of them don't.
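
If you do end up blocking by ASN, it can be automated too. A rough sketch against CloudFlare's firewall rules API (zone ID, token and ASNs are placeholders - treat the exact request shape as an assumption and check the API docs):

    const ZONE = "your-zone-id";    // placeholder
    const TOKEN = "your-api-token"; // placeholder

    // Create a single "block" firewall rule matching a set of ASNs.
    async function blockAsns(asns: number[]): Promise<void> {
      const expression = `(ip.geoip.asnum in {${asns.join(" ")}})`;
      const res = await fetch(
        `https://api.cloudflare.com/client/v4/zones/${ZONE}/firewall/rules`,
        {
          method: "POST",
          headers: {
            Authorization: `Bearer ${TOKEN}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify([
            { action: "block", filter: { expression }, description: "scraper ASNs" },
          ]),
        },
      );
      if (!res.ok) throw new Error(`CloudFlare API error: ${res.status}`);
    }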


Including all IPs I blocked today, they're spread between 5 different ASNs. I may resort to blocking them eventually, but for now - individually blocking the IPs (even in the thousands as it is) - seems to be working well enough.

As for the user agent - they're using a very common, real browser user agent that's impossible to distinguish from legit users.


If you can look closer at the HTTP requests, it may still be distinct. For example, the header order may not match any legitimate browser, or some other header doesn't make sense in the context of that UA.
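
A rough sketch of the idea in Node/TypeScript (the "expected" prefix below is a made-up example - you'd want to capture real traffic from the browser the scraper claims to be, and profile that):

    import http from "node:http";

    // Node preserves the wire order of headers in req.rawHeaders as
    // [name, value, name, value, ...], so the order can be inspected.
    const EXPECTED_PREFIX = ["host", "connection", "user-agent", "accept"]; // assumed profile

    function headerOrderLooksLegit(req: http.IncomingMessage): boolean {
      const names = req.rawHeaders
        .filter((_, i) => i % 2 === 0)
        .map((n) => n.toLowerCase());
      return EXPECTED_PREFIX.every((name, i) => names[i] === name);
    }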


Do you have many users in Russia? You could block the whole country ;-)

Russia has no GDPR or anything like it, so you could put a special key in a cookie. They probably don't process it, so subsequent requests without the cookie could be discarded.
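
A minimal sketch of the cookie check (the cookie name/value and the 403 are arbitrary choices; a real version should use a signed, expiring value):

    import http from "node:http";

    // First request gets a tiny page whose script sets a cookie and reloads;
    // clients that never send the cookie back get nothing useful.
    const CHALLENGE =
      '<script>document.cookie = "chk=1; path=/"; location.reload();</script>';

    http
      .createServer((req, res) => {
        if (!(req.headers.cookie ?? "").includes("chk=1")) {
          res.writeHead(403, { "Content-Type": "text/html" });
          res.end(CHALLENGE);
          return;
        }
        res.end("real content here"); // normal handling goes here
      })
      .listen(8080);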


> Russia has no GDPR or anything like it, so you could put a special key in a cookie.

It is entirely permissible under the GDPR to use cookies for security purposes.


Do they use a web browser for the scraping or simply an HTTP library?

You could look at the HTTP request headers and perhaps identify the scraper script. You could also put up a JavaScript challenge that has to be solved before pulling more data, and disable it for Google and Bing IPs, so it's more work for them to pull data, at least for a while.

Instead of simply blocking, you could detect them and serve some kind of HTTP slowloris response.


This is a good question I don't have an answer to.

I'll try and find out and also I'll have to learn exactly what "slowloris" is. It may be helpful indeed!


Slowloris is an old attack on HTTP. The idea is to send garbage HTTP headers as slowly as possible while keeping the TCP connection open - you send one letter every 5 seconds, for example. The HTTP stack on the other side stays busy waiting for headers and can't do anything else in the meantime.

It's usually targeted at web servers - before most HTTP servers were fixed, you could DoS a server with a tiny connection - but some HTTP clients can also be vulnerable.

But you may find better usage of your time than implementing this.
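
That said, a server-side tarpit in the same spirit is only a few lines. A minimal Node/TypeScript sketch (isFlagged() stands in for whatever blocklist you already keep):

    import http from "node:http";

    const isFlagged = (ip?: string): boolean => false; // placeholder blocklist check

    http
      .createServer((req, res) => {
        if (isFlagged(req.socket.remoteAddress)) {
          // Keep the connection open and drip one byte every few seconds,
          // tying up the scraper's connection slot.
          res.writeHead(200, { "Content-Type": "text/html" });
          const drip = setInterval(() => res.write("."), 5000);
          req.on("close", () => clearInterval(drip));
          return;
        }
        res.end("normal response");
      })
      .listen(8080);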


I had not heard about slowloris, thanks for the tip. I am always on the lookout for ways to make life harder for scrapers and scanners.

In this case though, to defeat scrapers, maybe create some link that only the scraper sees and leave a "gzip bomb" like the one described here https://blog.haschek.at/2017/how-to-defend-your-website-with... and see how their scraper handles that :-)

Personally, I just used an HTML fuzzer to generate 5 MiB of junk HTML and named it wp-login.php :-) And an SSH tarpit.
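
Serving a gzip bomb is also only a few lines. A sketch (the sizes and the isFlagged() check are placeholders):

    import http from "node:http";
    import zlib from "node:zlib";

    // ~100 MB of zeroes compresses to roughly 100 KB on the wire, but a
    // naive client inflates the full 100 MB in memory.
    const bomb = zlib.gzipSync(Buffer.alloc(100 * 1024 * 1024));
    const isFlagged = (ip?: string): boolean => false; // placeholder

    http
      .createServer((req, res) => {
        if (isFlagged(req.socket.remoteAddress)) {
          res.writeHead(200, {
            "Content-Type": "text/html",
            "Content-Encoding": "gzip",
          });
          res.end(bomb);
          return;
        }
        res.end("normal response");
      })
      .listen(8080);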


Set up a honeypot page to log the ‘user’s’ IP. Keep hitting it via their domain and you’ll build up a list of IPs to block?

As an aside, I’ve fought credential stuffers by returning real-looking but false data and initiating password resets... Start serving different data on each hit; you may need to be annoying enough that they give up.


A honeypot is exactly how I caught the IPs the first time around.

Problem is - right now I'm at over 250 (new) IPs and they keep piling up (their domains now rarely use an IP more than once).

I may have to block entire ranges of IPs or whole ASNs.


How about automatically honeypotting them? Add some code to your site that will IP ban a user that searches for some random string (and when I say random, I mean literally generate a random string - something no legit user would search for).

Then, set up a script on your laptop or whatever to search for this string on their domains every half hour or so.
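
The poller half could be as simple as this sketch (the mirror domains and the recordIssuedCanary() helper are hypothetical placeholders):

    import crypto from "node:crypto";

    // The real site bans any IP that searches for an issued canary string.
    declare function recordIssuedCanary(canary: string): Promise<void>; // hypothetical

    const MIRRORS = ["mirror-one.example", "mirror-two.example"]; // placeholders

    async function pokeMirrors(): Promise<void> {
      for (const domain of MIRRORS) {
        const canary = crypto.randomBytes(12).toString("hex"); // no legit user searches this
        await recordIssuedCanary(canary);
        await fetch(`https://${domain}/search?q=${canary}`).catch(() => {});
      }
    }

    setInterval(pokeMirrors, 30 * 60 * 1000); // every half hour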


It's basically what I've done, though I haven't automated it yet.

It even prepares the expression snippet for me to paste directly into a CloudFlare firewall rule.

That's how I quickly identified and banned almost 2000 different IPs.

If they continue to expand the IP pool I may need to automate it though.
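
For reference, the generated snippet is just CloudFlare's filter-expression syntax for a set of IPs - producing it is trivial:

    // e.g. toFirewallExpression(["203.0.113.7", "198.51.100.23"])
    //      => "(ip.src in {203.0.113.7 198.51.100.23})"
    function toFirewallExpression(ips: string[]): string {
      return `(ip.src in {${ips.join(" ")}})`;
    }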


They must've seen this post.


Browsing through your SEO results, I also don't think the negative SEO is necessarily what did you in.

You have a very straightforward value prop: the "next episode" of some show. I think these sorts of optimized results are probably things that Google has been algorithmically adjusting for.

Looking at the ranked 1-3 terms you dropped for, it seems you dropped for some pretty big terms, even exact keyword terms.

You were #1 for "seal team next episode", but now you rank #3. #1 got replaced by CBS's page, which is arguably a better result.

"black clover new episode" also dropped from #1. Replaced by Wikipedia.

"the good place next episode" similar story.

I don't know what the best move is here. Algorithmic changes are really hard to combat without major changes and even then, you don't have a ton of room to wiggle with next-episode content.


Wow. That's absolutely horrible.

Looking at Google's search results, it's obvious that these tactics are rampant and really winning the war here.

We need a new search engine that cannot be gamed so easily. I know it's non-trivial, but the stakes are high, as is the reward for building one.

This is a real engineering challenge. I'm excited about the problem space and opportunity.


I think the real lesson is that under enough pressure every large system leaks. Anything that gatekeeps millions of real dollars (search engines, stock markets, Amazon reviews, insurance claims, etc.) will constantly be exploited and patched, by the nature of the thing. The only "solution" is to decrease the pressure, by, say, fragmenting the market into 20+ search engines, so that SEO people can't realistically optimize for all of them at once.

Some smaller-scope things can be made completely watertight, for example mathematically proven cryptography, but even that often leaks under government pressure.


"a new search engine that cannot be gamed"

So you mean a search engine that's 100% human curated? Or rather a directory - it wouldn't really be a search engine.

Any algorithmic signal can be gamed. Although I'd be curious to hear how I'm wrong about that.


Am I crazy to think a "human curated X" is less impossible than we think it is, even at scale?

Imagine if you could upvote/downvote Google search results, and got rewarded for being "right" or something...


Totally, it would be curated by people who’ve read the results and thought it had its place on the board. We could call it "ReadIT".


You joke, but searching with “site:reddit.com” is half the battle to getting good results on Google. See also: stackoverflow, HN, various forums, basically any user generated + user voted content.

A user-voted search engine is a good idea, I think.


Totally agree, point is they already exist in some form.


Am I crazy to think a "human curated X" is less impossible than we think it is, even at scale?

Once upon a time, back in 1994, a startup used this idea as their first product. It was very popular, very useful. It was called Jerry and David's Guide to the World Wide Web.

But, as the WWW grew insanely fast in the 1990s, the human approach couldn't keep up with the algorithms. Eventually it was abandoned.

https://en.wikipedia.org/wiki/Yahoo!_Directory


The issue with "human curated X" isn't the X-factor, but the "human" one. Even an upvoting system like you're suggesting can be abused by bad actors who want to bury legitimate results in favor of seedier ones. At the end of the day, your best bet is to take a combination approach: using RSS for news/reading is a pretty good upgrade, and switching to a decentralized search engine like searx will at least provide a benevolent result.


I agree there has to be a way, but it'll have to be cleverer than what you suggested. App stores already rank apps based on user reviews, and reddit ranks posts based on user voting. Both are manipulated to hell, and I've read reports of third world sweatshops dedicated to churning out those reviews. Since the farmed reviews are written by humans, they don't get filtered by AI reading the text or watching people's mouse movements or whatever.


The word "right" in my comment is doing a lot of work, heh...


It only fails on large subreddits. The key is scale. Reddit still has the best content on the web despite a few scale problems.


Why wouldn't it be just as easy to game a human?


This used to exist and was called DMOZ. The search engines used to give more credit to dmoz sites at the time as well. There weren't enough moderators and gatekeeping was a big issue. I wonder if it could work today.


That also was abused - there were DMOZ category editors who took $ to add sites.


Humans are also readily gamed. See, for example, Troy.


And Reddit with the top 100 subs moderated by the same ~dozen people who are almost certainly making a living out of it from back channel advertising/marketing/bribery.

And Wikipedia, where power-tripping super-editors can control whatever they want.


Is this a reference to the Iliad/Aeneid/Odyssey and how humans were gamed by gods into fighting each other pointlessly for ten years? Or is there something else called Troy?


I have to assume this was specifically a reference to the Trojan Horse, although I like your interpretation better.


Yahoo was originally almost entirely human curated. I remember having to pay around $500 to get a business listing for a mortgage broker added.


I don't understand the implicit assumption that a new search engine that reaches Google's scale will be more adept at curtailing abuse.


Even though Google’s not bulletproof, I don’t think any search engine that indexes literally every page could be created to block all abuse.


Made an account here just to make this comment: You're going to want to send DMCA notices to both the registrar(s) AND Google.

1. Compile a list of domains and sitemaps that are 100% stealing and mirroring your content.

2. Go to Google's DMCA request page: https://www.google.com/webmasters/tools/legal-removal-reques...

3. Fill out all relevant data, and submit the offending domains and URLs.

Wait a few days, and you'll be happy to see that those pages are blocked from Google entirely. Not many people know what to do when Google DMCA's them, so it could solve your problem permanently (or you can automate it).

Regarding physically blocking them from scraping your site, you've got a few options. Put Cloudflare up if it isn't already. They've got at least one anti-scraping application (Scrape Shield) that may help.

Another thing you can do is automate the scraping of their websites using distinct query parameters and try to exhaust their list of proxies by automatically logging and filtering them. This might be a fruitless endeavor if they're using rotating residential proxies though.

Hope this helps, and good luck!


Thanks for the suggestions.

I've added filing a DMCA removal request to Google to the list of things to do if this continues ...

The rest of what you mentioned has been discussed in prior comments and is indeed helpful in mitigating this.


Sucks to see this. I think I even mentioned your site on here just this past week.

Didn't think I'd see the author, but since you're here: thanks, this has been my go-to over the years.


Ey thanks for taking the time to write this, very helpful during this stressful time :)


It's an interesting story. I wonder if you could turn the automation trick around on them.

Would you be able to make them do the same negative SEOing but to their own site?

Fill their site with unrelated garbage and internal links with undesirable anchor text.

* unblock their IPs
* create content that links back to their site with the undesirable keywords
* only show this content to them and not to regular visitors
* don't let them grab much / any legitimate content


I think I can do this easily yes.

I can either fill their content with pornographic stuff or simply redirect it to adult websites.

I think if that happens - Google will de-rank and de-list them super quickly.

I chose not to because I'm not sure it wouldn't affect real people ...


I'd use a Lorem Ipsum text generator, or just the same paragraph(s) regardless of page, which would give them lots of duplicate content.


Try to insert a canonical link tag (<link rel="canonical" href="https://next-episode.net/...">) pointing back to your site.


They rewrite all URLs pointing to Next Episode to their domain ...


Earlier today in another thread we were joking about GaaS (Goatse as a Service) but now maybe I think that's not so crazy an idea after all.

https://news.ycombinator.com/item?id=26104087

Your YC application practically writes itself.


Also, another update has been noticed for V-day celebrations: https://www.seroundtable.com/google-search-ranking-algorithm...


This is really frustrating; thanks for sharing. Google has had decades to figure out a way to detect duplicated content and spammy sites with structures like random-spam-keyword.spam-site.xyz/more-spam-words.html, and the problem seems to get worse every year.


I feel you. First, how bad that feels, and second, the amount of time you need to spend fighting these things :/


Yeah, it's frustrating though ...


Of course Russians

That happens when it is legal to hack/steal/cause damage to people from other countries


Is all of your content pulled via JavaScript? Could a server-side language prerendering the content be part of your solution? You could still use JavaScript for everything else, just not the content.


DMCA the registrar.


Was/is your canonical set correctly?


I believe so, yes.

Do you see anything wrong?


Sorry, I didn't even think to look myself before asking. A quick look at the source seems OK, but you might want to run it through the URL Inspection ("view as Google") tool in GSC. My question was instinctive, because I've heard that incorrect canonical settings leave you open to this sort of attack. Good luck either way - ranking is hard enough without dealing with these issues.



