
Duplicate Content For Dummies

Duplicate content is content that appears on the Internet in more than one place (URL). This is a problem because when more than one identical piece of content exists on the Internet, it is difficult for search engines to decide which version is most relevant to a given search query. To provide the best search experience, search engines will rarely show multiple duplicate pieces of content and are thus forced to choose the version most likely to be the original (or best).

Three of the biggest issues with duplicate content include:
1. Search engines don’t know which version(s) to include/exclude from their indices
2. Search engines don’t know whether to direct the link metrics (trust, authority, anchor text, link juice, etc.) to one page, or keep it separated between multiple versions
3. Search engines don’t know which version(s) to rank for query results
When duplicate content is present, site owners can suffer ranking and traffic losses, and search engines end up providing less relevant results.
SEO Best Practice
________________________________________
Whenever content on a site can be found at multiple URLs, it should be canonicalized for search engines. This can be accomplished using a 301 redirect to the correct URL, using the rel=canonical tag, or in some cases using the parameter handling tool in Google Webmaster Tools.
301 Redirect
In many cases the best way to combat duplicate content is to set up a 301 redirect from the “duplicate” page to the original content page. When multiple pages with the potential to rank well are combined into a single page, they no longer compete with one another and instead create a stronger relevancy and popularity signal overall. This positively impacts their ability to rank well in the search engines.
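As an illustration only, assuming an Apache server and placeholder file names (neither of which comes from the article), the redirect can be set up in an .htaccess file along these lines:

# .htaccess (Apache): permanently redirect the duplicate URL to the original content page
Redirect 301 /duplicate-page.html http://www.example.com/original-page.html

Anyone requesting /duplicate-page.html, including a search engine bot, then receives a 301 (Moved Permanently) response pointing to the original page, which is what consolidates the ranking signals onto a single URL.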

rel=“canonical”
Another option for dealing with duplicate content is to utilize the rel=canonical tag. The rel=canonical tag passes the same amount of link juice (ranking power) as a 301 redirect, and often takes much less development time to implement.
The tag goes in the HTML head of a web page. This markup isn’t new; like nofollow, it simply uses a new rel parameter. An example appears below. The tag tells Bing and Google that the given page should be treated as though it were a copy of the URL www.example.com/canonical-version-of-page/ and that all of the link and content metrics the engines apply should actually be credited toward that URL.
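A minimal sketch of the tag, placed in the <head> of each duplicate page and pointing at the URL from the example above (the http:// prefix is assumed here):

<!-- tells search engines which URL should receive the ranking credit -->
<link rel="canonical" href="http://www.example.com/canonical-version-of-page/" />

Every duplicate variation of the page carries the same tag, so the link and content metrics consolidate on the one href URL.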
noindex, follow
The meta robots tag with the values “noindex, follow” can be implemented on pages that shouldn’t be included in a search engine’s index. This allows search engine bots to crawl the links on the specified page while keeping the page itself out of their index. This works particularly well with pagination issues.
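A minimal sketch of the tag as it would sit in the <head> of a page that should stay out of the index:

<!-- keep this page out of the index, but keep following its links -->
<meta name="robots" content="noindex, follow" />

The noindex value keeps the page itself out of the search engines’ indices, while follow lets the bots keep crawling (and passing value through) the links on that page.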
Parameter Handling in Google Webmaster Tools
Google Webmaster Tools allows you to set the preferred domain of your site and to handle various URL parameters differently. The main drawback to these methods is that they only work for Google; any change you make here will not affect Bing or any other search engine’s settings.
Set Preferred Domain
This should be set for all sites. It is a simple way to tell Google whether a given site should be shown with or without a www in the search engine result pages.
Additional Methods for Removing Duplicate Content
1. Maintain consistency when linking internally throughout a website (see the short illustration after this list). For example, if a webmaster determines that the canonical version of a domain is www.example.com/, then all internal links should go to http://www.example.com/example.html rather than http://example.com/example.html (note the absence of www in the second URL).
2. When syndicating content, make sure the syndicating website adds a link back to the original content. See Dealing With Duplicate Content for more information.
3. Minimize similar content. Rather than having one page about raincoats for boys (for example) and another page about raincoats for girls that share 95% of the same content, consider expanding those pages to include distinct, relevant content for each URL. Alternatively, a webmaster could combine the two pages into a single page that is highly relevant for children’s raincoats.
4. Remove duplicate content from the search engines’ indices by noindexing with meta robots, or through removal via Webmaster Tools (Google and Bing).
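As a small illustration of point 1, using the placeholder domain from above: once www.example.com/ is chosen as the canonical version, every internal link should spell out that form.

<!-- consistent: uses the canonical www version of the domain -->
<a href="http://www.example.com/example.html">Example page</a>

<!-- inconsistent: drops the www and creates a second URL for the same content -->
<a href="http://example.com/example.html">Example page</a>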

http://www.seomoz.org/learn-seo/duplicate-content

Content is King

This is the kind of all-or-nothing catastrophizing that does confuse a lot of folks. Somebody says, “Getting thousands of links from low-quality ‘directories’ is probably not going to help you these days,” and all of a sudden the Chicken Littles come streaming out of the woodwork declaring “All links are worthless!”

“Content is king”
As Qwerty said, nobody here is saying you shouldn’t promote your website. And as Jill said, nobody here is saying “content is king” (at least, not in the sense that all you have to do is create awesome content and everything else will fall into your lap automatically).

Good content has always been an important component of real SEO as we advocate here. Crappy, keyword-stuffed, “spun” articles that were written strictly to attract search engine spiders have never been a good idea in our book. Nothing there has changed in the advice we’ve been giving since I joined this forum in 2003.

“Backlinking is the devil”
Likewise, we’ve always been in favor of getting good links from high-quality pages. Nothing lately has changed that, either. There’s nothing wrong with back links. There’s not even anything necessarily wrong with reciprocal links, as long as there’s a good reason for the two pages to link to each other.

It should be noted, though, that getting higher rankings is not a good reason to trade links. “Your page offers good information that my visitors would find useful” is a good reason to link to another page. I’d argue it’s the only reason you should link to a page. If it happens that the site owner at the other end thinks the same thing about your page and chooses to link back to it, that’s not a problem. Never has been, and I doubt it ever will be.

“Don’t do any SEO”
Puh-leeze. None of the moderators or administrators here has ever said that. What they have said is that you don’t want to do crappy, obvious SEO. You know: the home page that declares, “We have a great Denver area web design company that offers excellent Denver web design services to customers all over Denver who are seeking locally-created Denver web designs.” With a zillion backlinks from a zillion crappy “directories” and comment spam on a zillion splogs and poorly moderated forums, all with the anchor text “Denver web design.”

That would be a footprint as big as the iceberg that sank the Titanic.

But, you know, that was never SEO in the first place. At least not as we’ve ever defined SEO here. The “O” of “SEO” stands for “optimization.” That is, making something the best it can be. And that kind of crap, which has been passing for SEO in some circles, was never about making things even marginally better, much less the best they can be.

The best SEO doesn’t look like SEO. That’s what Chrishirst was talking about. There is nothing wrong with optimizing a page (in fact, that’s what you should do), but when it becomes obvious not only that the page has been “optimized” but exactly what phrase(s) it’s been “optimized” for, then the line has been crossed.

“God forbid you ever buy an exact match domain(!)”
And finally, if an exact match domain makes sense for your business, go for it. But you need to keep in mind that the folks at Google have already said they’re specifically targeting exact match domains, so when you buy one you need to keep your expectations reasonable. You’re not going to shoot to the top of the rankings and stay there just because you have an exact match domain. In fact, in light of what Google has said, you may even be making your job more difficult by using a keyworded domain rather than going for something brandable.

We wouldn’t be doing a responsible job here if we didn’t point this out to people. Sadly, there are a lot of folks out there who are still operating under the idea that all they need to do is have a keyword in their domain name and they’ll automatically rank well for that keyword. And a lot of them never consider things like: what happens if you change your business model so the keyword in your domain no longer applies? Are you prepared to start all over from scratch, or would it be better for your business to have a brandable domain so you can change your business focus at any time without launching a whole new domain?

So we try to educate them, bring them up to speed, and make sure they know what they’re potentially getting themselves into.

But if you understand all the potential ramifications — not just SEO-related, but business evolution, branding and marketing implications as well — and you’re ready to put in the work it takes to promote it, then by all means buy whatever domain you want.

Reality Check
Paint-by-numbers “SEO” doesn’t work as well today as it used to. Does it still work sometimes? Absolutely. Are the folks at Google and Bing and Baidu and elsewhere working to ensure it works less and less well in the future? From what I’ve seen, I believe so. Is there a future in the kind of algo-chasing, rules-based, formulaic “optimization tactics” such as have been promoted in some venues (not here) in the past? Not if you’re interested in building solid traffic and conversions that will keep your business humming on into the future. Does that mean people shouldn’t do SEO at all? No, just that they shouldn’t do crappy so-called “SEO.” Which is exactly the same thing we’ve been saying here since 2003.

Does your list of statements bear even a passing resemblance to anything the moderators or administrators have said here, ever? Not even close.