Private Label Rights (PLR) article sites have cropped up all over the internet marketing community. You may already be a member of one or many.
But what are they really worth?
Is buying the rights to hundreds of articles that hundreds of other people also own really the way to go for search engine traffic in 2008 and beyond?
I’d like to address that here first.
I’m just stating the obvious, but since every PLR member receives the exact same articles as every other member, the articles are not original.
That means you cannot submit these articles to any high-quality article directory in the hopes of gaining traffic, PageRank, or backlinks (unless, of course, you’re the first one to submit a particular article; the rest will likely be rejected).
So what many folks do is put these articles on their site. It’s likely that half the other members of the club are going to do the same. That’s easily a few hundred people with the same articles on their site.
As I’m sure you’ve heard rumored before, Google has the technology to detect that this content is not unique.
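Nobody outside Google knows exactly how that detection works, but one well-known technique from the search literature is “shingling”: break each page into overlapping runs of words and measure how much two pages overlap. Here’s a minimal sketch in Python; the shingle size, the texts, and the scoring are my own illustrative choices, not anything Google has confirmed.

```python
# Illustrative near-duplicate check using word "shingles".
# The shingle size and texts are made-up example values;
# Google's actual technique and parameters are not public.

def shingles(text, size=3):
    """Break text into overlapping runs of `size` consecutive words."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(text_a, text_b):
    """Jaccard overlap: shared shingles divided by distinct shingles."""
    a, b = shingles(text_a), shingles(text_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

original = "safe dog food is easy to find if you know where to look"
verbatim = "safe dog food is easy to find if you know where to look"
rewrite = "choosing healthy meals for your pet takes a little research"

print(similarity(original, verbatim))  # 1.0, an exact copy
print(similarity(original, rewrite))   # 0.0, genuinely different text
```

A word-for-word copy scores a perfect 1.0, and hundreds of identical PLR articles are exactly that.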
And that leads us to the infamous “duplicate content filter” that causes rounds of arguments in forums and search engine seminars.
Expert #1 says a duplicate content filter exists, while expert #2 says it does not.
So which is it?
Does it exist?
Let’s step back and think about this rationally for a minute.
I’d like to give you an example.
If I’m creating a site on weight loss and I want to rank for the words “weight loss in santa monica”, which is going to get higher rankings in Google? Look over the 2 methods listed below and take your pick:
Method #1: Write my own unique article dealing with weight loss in santa monica (or have one ghostwritten). Use the phrase “weight loss in santa monica” in my headline (using title tags), put the phrase in my text a few times, maybe mix the words up just a little (“santa monica weight loss”), and use words related to weight loss, like dieting.
Method #2: Join a PLR article membership and take an article about “weight loss in santa monica” (for this example let’s just assume there is one) and place it on my website with title tags and other good SEO practices like those listed above.
For this example we’ll assume that I use the same bag of link building tricks on both sites, so neither really has much of an advantage over the other.
The only real difference between the two methods is that one uses an article unique to my site and the other uses the exact same article found on hundreds of other sites.
So what happens? Who wins the higher search engine ranking?
I’ll get back to that in a minute. I promise.
Let me ask you this: do you know what happens when you type a keyword phrase into Google? The first thing we need to touch on is…
How pages get into Google’s index at all
Google crawls and indexes the pages on the web by looking at the pages already in its database and searching for links to new pages. When a new page is found and downloaded, it is considered “fetched”.
All the fetched pages are assigned a number and queued for indexing (without indexing, or some sort of categorization, search would not be possible).
In order to index a page, Google looks for specific words on the pages. In this way, the pages can be categorized. At this point, Google has the start of an index.
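To make that concrete, here’s a toy version of such a word-to-page index (an “inverted index”) in Python. The page names and text are invented for illustration; a real index also stores word positions, frequencies, and far more.

```python
# Toy inverted index: map each word to the set of pages containing it.
# The page URLs and text below are made-up examples.

pages = {
    "page1.html": "weight loss tips for santa monica residents",
    "page2.html": "safe dog food for your special friend",
    "page3.html": "santa monica dog parks and dog food stores",
}

index = {}
for url, text in pages.items():
    for word in set(text.lower().split()):
        index.setdefault(word, set()).add(url)

print(index["dog"])  # the two pages that mention "dog"
```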
Of course, it’s a lot more complicated than this, but I want you to understand the basic process.
When someone types in a search term, Google must find the set of pages containing that particular word in its index and rank those pages in order of relevance. For searches with more than one word, Google finds the pages containing each word separately and then lists only the pages that contain all the words together.
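Continuing the toy index from the sketch above, a multi-word search boils down to intersecting the page sets for each word:

```python
# Multi-word search on the toy index: keep only the pages that
# appear in every query word's set.

def search(index, query):
    word_sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search(index, "dog food"))      # pages with both "dog" and "food"
print(search(index, "santa monica"))  # pages with both "santa" and "monica"
```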
Once all the pages related to a particular search are found, Google is able to rank them. PageRank and other off-page factors play a big part in that, alongside on-page optimization. If two pages have the same on-page optimization, the slimmed-down explanation is that Google will pick the page with the most trusted incoming links.
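The PageRank formula itself was published by Brin and Page back in 1998, so the basic calculation can be sketched; just remember that Google’s live rankings blend in many other signals, and the link graph below is invented for illustration.

```python
# Minimal PageRank per the published 1998 formula. Each page's score
# is a base amount plus a damped share of the scores of the pages
# linking to it. (Dangling pages simply leak rank in this version.)

def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}"""
    all_pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(all_pages) for p in all_pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(all_pages) for p in all_pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# a.html and b.html could hold identical articles, but a.html has
# three incoming links to b.html's one, so a.html ranks higher.
links = {
    "hub.html":   ["a.html", "b.html"],
    "blog.html":  ["a.html"],
    "forum.html": ["a.html"],
    "a.html":     [],
    "b.html":     [],
}

for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")
```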
Google puts the pages in order of relevance, pulling a snippet from each page to serve as its description and organizing everything into a search results page.
So in a nutshell, that’s how Google generates the list of sites related to your keyword search.
Obviously, the real system is far more sophisticated than this, but that’s basically how it works. Your pages are fetched, indexed, and stored away by Google until a surfer types in a keyword or phrase Google thinks your site is about.
Let’s move on to how duplicate content fits into all this…
Whether you believe me or not, the fact remains that, for a search engine, showing duplicate results wastes the searcher’s time.
The search engines don’t want duplicate content in their listings. Why would they?
Are they going to remain the #1 search engine for long if a surfer types in “safe dog food” and is shown 400 different places where they can read the exact same article entitled, “Safe Dog Food for Your Special Friend”?
You can bet money that Google wants to remain #1. Yahoo and MSN both want to become #1. The search engines no one’s ever heard of want to be known. So they’re all interested in the same thing.
Think it’s just my crazy theory?
Here’s a sentence taken from the “Rebirth of Internet Marketing” (page 26), written by John Reese…
“The search engines realize that if they can’t greatly reduce the number of zero-value sites in their index their business is going to be in deep trouble – because their users will go elsewhere to search for that value.”
–John Reese “Rebirth of Internet Marketing”
Do you really think sites with tons of articles that are already all over the internet are thought of as anything other than zero-value?