The Biggest Problem Of Google Search Post Panda Algorithmic Update


It has been nearly one month since Google rolled out the famous Panda update, which is geared towards returning high-quality sites in search results instead of content farms, scraper sites and sites that don’t produce original content.

Since the day Google’s Panda update went global, a lot of websites have seen an overwhelming change in their organic traffic. Many high-traffic websites practically disappeared from Google search while Google’s own properties saw an increase in search traffic. Searchmetrics has the complete details about the early losers and winners of Google’s Panda update.

Recently, Google gave some additional guidance to webmasters affected by the Panda update on how they can focus on delivering the best possible user experience. Google clearly says that the Panda algorithmic update incorporates user feedback as a strong signal, so website owners should focus on making their content or service as user friendly as possible.

Some Questions To Ask and Ponder Upon

The webmaster tools blog post lists all the questions site owners should ponder; I will highlight some specific questions from that article here:

  • Would you trust the information presented in this article? As a user, trust comes from looking at the cover and the symbols associated with it. If I have never heard of a website before, it has an ugly design, and it bombards me with popups and subscription boxes every second, it is most likely that I won’t trust the content or the service provided on that website. There is also a good chance that I will tell my friends not to visit that website again. Inference: blog design matters. So does usability.
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend? Google wants to gauge the feel-good factor of a particular page by judging whether users would want to share the content with their friends or family members. I think this is doubtful, because not all users can judge the usefulness of every page at first glance. Example: consider a set of users searching for information about Mother’s Day. All they want is to read quotations and facts about it, and if you take them to this Wikipedia page, chances are that many won’t read it top to bottom. They might instead prefer this page, which has a list of quotations about Mother’s Day. The fact is that not all users can judge the usefulness of a webpage, which is why we need search engines to return the best possible page. Inference: no doubt Google is taking social sharing as one of its ranking signals, which can be gamed thanks to retweet clubs, spamming each other’s Facebook fan pages, paying someone for 100 stumbles; the list is endless.
  • Was the article edited well, or does it appear sloppy or hastily produced? Google wants to know the grammatical level of a webpage and how well written it is. This is obvious, because search engines don’t want to return a page that is sloppy. But there is no guarantee that a badly written page does not contain useful information. I know a couple of friends who don’t have good writing skills, but they are really good at their subject. They don’t know how to write like a seasoned blogger, who on the other hand knows how to collect information from different sources, mask it and produce a blog post.
  • Does this article have an excessive amount of ads that distract from or interfere with the main content?

There are more questions in the blog post, and the probable answers vary from one webmaster to another. However, the biggest consequence of Google’s Panda update remains unaddressed: scraper sites outranking the sources of the original content they have copied.

Google’s Algorithm Doesn’t Make Exceptions

An algorithm works the same for everyone. Whether you are a spammer or a genuine source, an algorithm doesn’t care; it will continue to work the way it is designed, and this is exactly where the problem begins.

There is no way a machine or a computer program can accurately determine whether John or Harry wrote a piece of content. Before the Panda update was rolled out, Google did a good job of keeping the scrapers away and showing the original article on top.

Here are a few examples which show that Google’s so-called algorithm is not able to differentiate between the real source and the copycats.

Example #1: Matt Cutts’ Personal Blog

Performing a search for this string (not an exact match) from Matt’s post on overdoing URL removals shows the following result:


So what do we have here?

1. Matt’s blog post is nowhere to be found on the first page.

2. The same scraper site holds the first three results on the SERP, which violates Google’s own stated practice of showing content from different domains in search results. There is more: Google thinks this piece of content is exclusively genuine and unique to the spam site, so it shows the little suggestion “Read more content from this source.” Ridiculous!

3. Google also offers a translated version of another spammer site that has copied the original article from top to bottom.

Now consider the following facts:

  • Matt’s blog is highly informative and considered an authority site on the subject.
  • Matt’s blog is a trusted source among users.
  • There are ZERO advertisements on the blog.
  • It has good-quality backlinks, high domain age, good social influence and a decent design.
  • Google pagerank: 7.

So why is it that Matt’s page is not shown at all in the SERPs?

Example #2: Search Engine Land

Performing a search for this string (not an exact match) from Greg’s post at SearchEngineLand shows the following results:


The same thing holds true for SearchEngineLand’s article. The original article is nowhere to be found on the first page, while Google thinks it is fine to return even the auto-generated RSS feed on the scraper site. And why is it that Googlebot fails to read the title “Latest News”?

For the record, SearchEngineLand has Google PR 7 and is a high-quality site with genuine content and reports on search analysis. In fact, it is one of the oldest sites to break developments and news about search engines; in this case, the latter are not giving due credit to the former.

Example #3: TechCrunch

Performing a search for this string (not an exact match) from Alexia’s post on TechCrunch shows the following result:


TechCrunch’s original article is nowhere to be seen on the first page, while the first result takes you to an ad-laden page with no content.

Note: on performing the above example searches, you might see different results on the search result pages. The ranking of a particular phrase can change any second, and it also depends upon your geographical location, among other factors. If you are seeing different results from those shown in the screenshots above, you might want to check the following video, where I have performed the above example searches one by one:

Now, looking at the above examples, we come back to the same old refrain: “Physician, heal thyself.”

The sole purpose of Google’s new algorithm was to weed out content farms, scraper sites and sites that blindly copy articles, in whole or in part, from the original source. But the results tell a different story: there are numerous occasions when the original content is nowhere to be found in the search results. And we are not the only ones saying this; the folks at SEOmoz and Ubergizmo have produced their reports here and here.

Judging the quality of content comes later; first you have to find out who the real source is, and Google is failing terribly here.

If you are a webmaster and find that scrapers are outranking you for content you have written, I am afraid there is not much you can do. It is the algorithm that detects and differentiates between the source and the scraper, and if it is failing to do its job, nothing is in your hands.

You can file DMCA complaints and take down the scraper sites one by one, but this is impractical for sites with thousands of pages.
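If you do go down that road, triaging which pages are worth a DMCA notice can at least be partly automated. Here is a minimal sketch in Python using the standard library’s difflib; the page texts below are hypothetical, and in practice you would fetch the suspect page and strip its HTML first:

```python
import difflib

def overlap_ratio(original_text: str, suspect_text: str) -> float:
    """Word-level similarity between two pages, from 0.0 (nothing shared) to 1.0 (identical)."""
    matcher = difflib.SequenceMatcher(None, original_text.split(), suspect_text.split())
    return matcher.ratio()

# Hypothetical page texts; real use would compare your post against fetched suspect pages.
my_post = "Google rolled out the Panda update to reward sites with original content."
scraper = "Google rolled out the Panda update to reward sites with original content."
unrelated = "A quick guide to growing tomatoes and basil on a small balcony."

print(overlap_ratio(my_post, scraper))    # identical text scores 1.0
print(overlap_ratio(my_post, unrelated))  # near zero for unrelated text

# Flag pages that copied more than half of your wording as DMCA candidates.
for name, page in [("scraper", scraper), ("unrelated", unrelated)]:
    if overlap_ratio(my_post, page) > 0.5:
        print(f"{name}: possible scrape, consider a DMCA notice")
```

This only shortlists candidates; you would still check each flagged page by hand before filing anything.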

Come on, Google! We have circled back to the point where we were before. Meaningful results are what Google search is known for, and everybody used to appreciate that. You have changed the rules of the game, but please don’t take away the real players and let robots dominate the search results.

Wake up!

Google Search Result Pages Soon To Get a Design Overhaul?

Looks like the Google search team is very busy these days, rolling out updates one after another.

Just yesterday, they added a small tweak to AdSense ads, showing the advertiser URL on mouse hover. Then there is the famous and much-talked-about Panda algorithmic tweak, which is geared towards removing low-quality sites from search results. What’s coming next? You guessed it right: a redesign of Google search result pages is on its way.

One of my Twitter friends saw a totally revamped Google search interface this morning, with newer fonts and colors.


Observations from the above screenshot:

1. The font and color of links in Google search result pages will match the fonts and colors of Google’s own links in the top bar.
2. The basic elements are all the same as before, no major changes as such with the position of search box, sidebar elements, search tools etc.
3. Google wants to increase the whitespace between the target links, as clearly illustrated by the dotted border.
4. The font color of the description snippet has changed from black to light grey. The color and boldness of the URL remain the same.

The newer design of Google search result pages might be part of a usability test, so only a few people are seeing it, at random. Personally, I didn’t like the dotted separators placed between the links, but the newer fonts and colors appear more soothing and fresh.

Now if only they could redesign their Panda algorithm the same way: clean away the junk and add a separator between the scrapers and the legitimate sources. More updates will be posted here as and when we discover them. Stay tuned!

Google Adsense Ads Now Show The URL Of The Advertiser Website On Mouse Hover

I don’t know whether it’s just me or everyone, but I am observing a new update in how the advertiser URLs of Google AdSense ads are displayed. If you hover your mouse over an AdSense advertisement, chances are that you will see the URL of the target site, as shown in the following screenshot:


When you hover your mouse cursor over an image, the alternate text of the image is shown (if any). This new feature of showing AdSense ad URLs works exactly the same, but the only catch is that the URL is shown only for text ads. Ideally, it should have been exactly the opposite.

This is because users can already see the advertiser URL when they are viewing a text ad. Typically, a text ad contains a title link and some description text, followed by the complete URL of the target site whose ad is being shown. There are situations when the webmaster of a site may want to block a specific AdSense advertisement, so knowing the URL of a text-based ad is not a problem at all (both for the user and for the webmaster of the site in question).

But think of image ads: there is no way to determine the advertiser URL just by looking at the AdSense advertisement block.


So a brief URL preview of image or rich media ads was required, and Google has implemented just the opposite, showing the URL preview for text-based advertisements at the moment.

Important note to AdSense publishers: as far as the policy guidelines go, AdSense publishers are not permitted to click their own ads for the sake of determining the URL of the advertiser. Nor should you ask anyone else to find out the URL of a specific advertisement for you. If you can determine the URL from the mouse-hover text, that’s fine. Otherwise, leave it as it is.

Forum discussion at Webmaster World via Seroundtable

Likester Is More Than The Digg For Facebook Likes

Do you hit the “Like” button on a website when you find something really interesting? It’s the easiest way to share content with your Facebook friends: hit that magic button once and a link is posted on your Facebook profile. But there are a couple of problems with Facebook Likes compared with Twitter retweets, both of which are considered a vote or thumbs-up for content you have enjoyed reading.

First, Facebook Likes cannot be searched on the web. You can only find what your friends are liking, and even that is not very convenient in the first place. Unlike Tweetmeme, there is no open directory for Facebook Likes where you can find the hottest links shared on Facebook or perform a simple keyword search.

The second disadvantage of Facebook Likes is that there is no way to know current trends and rising likes. Twitter has a trending topics section, so you can quickly find the topics that are receiving a lot of Twitter love at this moment. But trends for Facebook Likes are yet to be surfaced.

Likester wants to solve both problems, and then some.

The site lets you quickly find the topics or pages which get the maximum number of Facebook Likes. With Likester, you can find what your Facebook friends are liking, which pages they are reading, which fan pages they have liked earlier and so on. Then there is a global Facebook Like map where you can filter Facebook Likes within a particular geographic location.

After connecting the app with your Facebook account, you will be taken to the main interface of Likester with five main panels: Your Likes, Friends, Trending, Like List and Like Idol.


The “Like List” page contains a list of webpages which have been recently liked by your Facebook friends. You can scroll through the list and check which of these pages have received the maximum likes; this is easier than scrolling through your Facebook timeline from top to bottom.

On the right side, there is another column where you can find out which Facebook fan pages are receiving the maximum votes from your Facebook circle. Note that Likester only works when you’re signed in to your Facebook account, unlike Tweetmeme, which works universally.

Another neat thing about Likester is that you can filter topics by interest. Though the category list isn’t very elaborate, you can filter celebrities, actors, sportsmen, places, businesses and applications from the Like feed.

However, what I liked best about Likester is the ability to search for specific topics and find out which webpages are getting the maximum Facebook Likes. As an example, I searched for WordPress and was shown the following result:


The first three results were Facebook fan pages, each with more than 20K likes!

All in all, Likester can be called a Digg for Facebook Likes, where users can find the currently trending pages getting the maximum number of Facebook likes from all over the globe. There is a similar tool worth checking out, but Likester is probably better because of its search box, which lets you filter specific topics.

Julian Assange: Facebook Is A Spying Machine

Facebook helps you connect and share with the people in your life

But what about the people outside your life?

People whom you don’t know, and would not want to know in the near future anyway, include governments, political peers and other legal eyes prying into your data. You have phone numbers, addresses, contacts, photos, videos, relatives, conversations and everything else on Facebook.

Sure, it’s not public and is stored behind a password, but we have seen reports of how Facebook and other social networking sites can be closely monitored by state governments. Thanks to the instantaneous nature of digital data, the police and legal authorities can prevent crimes or accidents, but there is always another side to the coin.

In a recent interview with Russia Today, WikiLeaks founder Julian Assange said that Facebook is the most dangerous spying machine ever invented. The interview is as follows:

Here is a text transcript of Julian’s message:

Interviewer: What do you think of sites like Facebook and Twitter? What role have these sites played in the revolutions in the Middle East? How easy would you say it is to manipulate new media like that?

Julian Assange: Facebook in particular is the most dangerous spying machine ever invented. Here we have the world’s most comprehensive database about people: relationships, names, addresses, locations and, most importantly, their communications with each other. All this information is accessible to US intelligence.

Facebook, Google, Yahoo, all these major US organizations have built-in interfaces designed specifically for US intelligence. They have developed an interface which US intelligence can use. Now, is it the case that Facebook is actually run by US intelligence? No, it’s not like that.

It’s simply that US intelligence is able to bring legal and political pressure to bear on them when they need information. And it’s costly for Facebook to hand out records one by one, so they have automated the process.

Everyone should understand that when they add their friends on Facebook, they are doing free work for United States intelligence agencies and building the database for them.

Assange’s views can’t be completely refuted, and considering the recent iPhone and Android location-tracking news, it’s no wonder that more and more organizations want the golden hen: user data. Yes, there are lawsuits being filed, but at the end of the day your location and other information rest with the company that sold you a simple gadget, an iPhone or an Android.

Facebook’s Answer: We Don’t Respond To Pressure, But To Legal Process

Reacting to Julian’s allegations, a Facebook spokesperson said:

We don’t respond to pressure, we respond to compulsory legal process. There has never been a time we have been pressured to turn over data — we fight every time we believe the legal process is insufficient. The legal standards for compelling a company to turn over data are determined by the laws of the country, and we respect that standard.

The question is: does US intelligence have access to user data through Facebook? Facebook isn’t saying, but a spokesperson said that the social site has a dedicated team of CIPP-certified professionals who manage requests from law enforcement. There is a public form which can be used for legal purposes, and Facebook officials are most likely to give a nod if the situation is truly legal in nature.

I know some folks will laugh: “Look who is speaking of spying and espionage.” Ironic!

But a guy who can reveal governments’ secret diplomatic documents must have some proof to back up his statement.

And as of now, Julian plays Black in this game of chess.

Use Your Android Device As A Side Monitor With A Windows Or Mac Desktop

Dual monitor setups are ideal for multitasking. You can move application windows from one monitor to the other, keep an eye on programs like Tweetdeck on the secondary monitor while doing your primary work on the main one. Multiple monitors surely consume a fair amount of desk space but they are actually worth it.

In case you don’t have an extra monitor at home but do have an Android tablet or smartphone, you’re in luck. iDisplay is a simple application for Android devices which lets you use your Android device as an extra side monitor with a PC or Mac desktop. You will need a common Wi-Fi network at your home or office to use it.

Here are the step-by-step instructions that need to be followed:

1. On your Windows or Mac desktop, fire up the browser and download the iDisplay desktop client. During installation, Windows Firewall will block some features of the iDisplay server; click “Allow Access” to grant all the necessary permissions.


2. On your Android device, go to the Android Market and install the iDisplay client for Android. The application isn’t free, but you can use the trial version of the app for one week.

3. Make sure both the computer and your Android device are connected to the same Wi-Fi network.

4. Now run the iDisplay desktop client on your Windows or Mac computer and launch the iDisplay application on your Android device.

5. On the first run, the Android app will automatically detect your desktop computer, as shown below:


6. Tap on the name of the computer and you will see a notification on the desktop computer, as shown below:


7. Click “Allow Access” and it’s done. You will see the following notification window on your Android device:


8. Click “OK” and you will see a preview of your desktop screen on the Android device. Drag any program or application window towards the edge of the desktop monitor and you can slide the window onto your Android device. Here is a view of Windows Explorer when viewed from my Android phone:


Using program or application windows on the Android screen is a bit tricky, but once you get used to it, you will love playing with them. There are two ways to control an application or Explorer window on the Android screen: either move the mouse cursor from your desktop computer to the Android screen, or use Android’s own touch features for minimizing, maximizing and moving program windows from one position to another.

And here is a broader view of my laptop running Windows 7 and the Android phone acting as an extended side monitor using iDisplay (I regret the poor image quality; blame my retro mobile camera).


Note that I can move multiple browser windows from the desktop computer to the Android device. The only downside is that a window shrinks correspondingly when you slide it onto your Android device. While using a smartphone as a side monitor is not that useful due to its small screen size, folks who have an Android tablet can make good use of iDisplay. [ via ]

Give this a try and let us know your thoughts in the comments section.

Completely Get Rid Of Facebook Questions In Firefox And Google Chrome

The Questions feature in Facebook may be useful to some folks, but I don’t find it useful, for a couple of reasons.

First, the questions are more geared towards fun and virality rather than usefulness. I agree that the nature of questions depends on the people you’re connected to but I have hardly seen anyone praising the usefulness of Facebook questions.

Second, whenever there is a world event like the royal wedding or the launch of a new gadget, my timeline suddenly gets filled with dozens of irrelevant questions in which I have no interest at all. Worst of all, the same question appears multiple times in the timeline whenever there is a new answer or someone posts a comment on the question.

And then there are the notifications: “Mr. X has answered Mr. Y’s question.” After a while, this gets really annoying, and the sad part is that Facebook does not let you block Facebook Questions the way you can block a particular application, game, invite or user.

Here is what the FAQ page reads:

As with other Facebook applications like Photos and Events, there is no way to turn off Questions.

If you’re fed up with the spammy nature of Facebook Questions and want to turn off the clutter, try the Hide Facebook Questions extension for Google Chrome. The extension hides every trace of the Questions feature, so your news feed doesn’t get cluttered with polls. Once installed, you will need to refresh Google Chrome, and the Questions feature will vanish from your timeline.

And so will the notifications that used to come when one of your friends answered a question asked by another Facebook friend of yours.



Firefox fans can try the more advanced FB Purity add-on, which lets you fix some other annoyances apart from hiding Facebook Questions in Firefox. For example, FB Purity allows you to use the older Facebook commenting system, where pressing Enter or Return adds a new line to your comment and pressing the “Comment” button submits the comment.

Both browser extensions work out of the box: there are no options to configure and nothing to tweak.

Earth Day Awareness With Today’s Google Doodle

Google Doodles are a great way to learn about international events, famous inventions, important dates and what not. Over the years, Google doodles have always added a fun element to search, and today’s Google doodle is no exception.

Inspired by Earth Day 2011, today’s Google doodle is all about the environment, animals, nature and going green.

Google's Earth day doodle

As always, Google wants to create awareness for Earth day by educating users about the global event. For over 40 years, Earth Day (April 22) has inspired and mobilized individuals and organizations worldwide to demonstrate their commitment to environmental protection and sustainability.

The doodle is animated, and there are a couple of hidden animals which are revealed only when you hover the mouse cursor over specific areas of the doodle.

There is a lion, a pair of penguins, a koala and a frog, but the scariest of them all are the two pandas sitting beside the bamboo trees. Is this the same Panda which changed the fate of a lot of webmasters with its April 12th global update? Not to forget, there is a small panda cub waiting, so an update to the Panda algorithm might be a work in progress.

Jokes aside, the philosophy of today’s Google doodle goes hand in hand with that of Earth Day 2011, a pledge campaign aiming to get a billion people from around the world to pledge their allegiance to the environment.

One question for our readers: how many animals can you spot in today’s Google doodle? Let’s see who can crack the right answer first in the comments.

Google Toolbar 7 Adds Instant Search To Internet Explorer 9

Some good news for Internet Explorer users.

Google has recently released a new version of Google Toolbar for Internet Explorer and Firefox. Google Toolbar 7 aims to make your web browsing faster, simpler and instant.

The newer UI of Google Toolbar 7 for Internet Explorer and Firefox is sleek and hides all the options under a drop-down menu, as shown in the following screenshot:



Enabling Instant Search In Google Toolbar 7

After you have downloaded and installed Google Toolbar 7, you will first be asked to choose your default search provider. You may choose either Bing or Google as your default search engine; hit “OK” and restart the browser for the changes to take effect.


To enable Instant search in Internet Explorer 9, go to the toolbar options panel by clicking the tiny wrench icon at the top right of Google Toolbar and choose “Enable Instant for faster searching and browsing”.


Once you have turned on Instant search for Internet Explorer 9, you can preview search results by typing keywords in the Google search box of Google Toolbar.

Since I am a regular Google Chrome user, I confused this with the address bar of Internet Explorer, only to find out that I have to type the search words in the Google search box of Google Toolbar (and not in the address bar of Internet Explorer 9). You can also press Alt+G to get to the Toolbar search box more quickly.

Here is what the Instant search interface of Google Toolbar looks like:


To clear your search terms, hit the Escape key on your keyboard and the search box will be highlighted, waiting for you to type the new keywords you want to search for.

With this new update, Google wants to push its Instant search features deeper and let IE users get the feel of “Instant” search without having to use Google Chrome at all.

Privacy Options

Many features in Google Toolbar send anonymous usage information to Google in order to improve your browsing experience.

If you’re concerned about the privacy of your system and don’t want to share anonymous usage data with Google, you can turn off specific features from the privacy preferences panel:


Other features are much the same as before; it’s just that Google has revamped the overall look and feel of Google Toolbar by removing the unnecessary UI clutter of buttons and icons from the toolbar panel. To customize which buttons and options appear in the toolbar panel, click the wrench icon, go to “Custom buttons” and choose the buttons you want to see in the toolbar area.

Overall, the new Google Toolbar adds a hint of Google Chrome to Internet Explorer 9. While I continue to use Google Chrome as my default browser, those who want Google’s Instant search feature on Internet Explorer can try the improved Google Toolbar 7 here. The following video gives a short introduction to what Google Toolbar 7 is all about:

You might want to check our ultimate list of Internet Explorer 9 tips and tricks.

Trying To Recover From Google’s Panda Algorithm? Some Mistakes You Should Avoid

If you have a couple of MFA (made-for-AdSense) sites and you specialize in auto-blogging software for producing content, please skip this article.

So here we are: Google’s new wild animal (read: Panda) is out of its cage, and there has been a lot of speculation about the effects it has brought. As before, some webmasters are on the winning side while others lost a significant proportion of traffic and rankings. Google did announce the new algorithm in their official blog post, explaining how it is geared towards improving user interaction and the quality of sites in Google search results.



Back in December 2010, we told you how spam sites were polluting Google search results: typically irrelevant content, little information, scraping, aggregating stuff from other sources and gaming search to gain rankings in the SERPs.

Facing an immense amount of pressure from a wide variety of sources, Google had to do something about search spam.

They introduced a Chrome extension which allowed users to blocklist specific websites from search results, and then released another feature which allowed users to blocklist sites directly from the search result pages. No doubt this move was deployed to observe user behavior for specific queries/sites and cross-check whether the results produced by the upcoming algorithm went hand in hand with the obtained data.

Tip: you can read our earlier tutorial to check whether your site is a victim of Google’s Farmer update.

After Google’s Farmer Update

There are two possible scenarios: either you’re on the winning side or you’re on the losing one.

I am not going to discuss the details of why a specific site was penalized, good content, trustworthy links, or the factors influencing the sudden downfall or rise of specific sites. If you’re a blogger or web publisher, chances are that you have already done your homework and know all the basic stuff.

Instead, I want to shed some light on things you should not do while trying to recover from Google’s Farmer or Panda algorithm.

Some possibly wrong assumptions:

1. Google’s Farmer Algorithm Is Incorrect

Just because you’re on the losing side does not necessarily mean that the entire algorithm is wrong. Google deployed this algorithm after months of testing and collecting user feedback; why do you think the same engine that sent you thousands of visitors every single day would turn its back on you all of a sudden?

2. It’s Not Just Me, Thousands Are Saying The Same

Yeah right.

Care to row the boat to the opposite bank of the river? You will find people shouting, “Google’s Panda algorithm is wonderful, we are getting three times more traffic than before… thanks, Google!”

3. I think I will register a new domain name and 301 redirect all the pages to my new domain. Someone told me that Google has penalized specific domain names and blacklisted all of them.

This is a crazy idea and should be avoided at all costs.

Do you think the search bots are so foolish that they won’t recognize the new domain as related to the older one?

Let me assure you that the domain name is hardly a factor; it’s the content, links, reputation, user friendliness and overall reach that count.

4. Some scraper has copied my article and is ranking ahead of me. Doesn’t that sound absurd?

This is a real problem, and I have to say that Google is losing its edge here.

First, ensure that your website is not sending “content farm” signals to Google. Many webmasters either don’t use canonical URLs in their theme or have the same content accessible via different URLs, which confuses the bots over and over again.
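If your theme doesn’t already output one, a canonical tag in the page head is the standard way to tell Google which URL is the preferred version of a piece of content; the domain and path in this snippet are hypothetical examples:

```html
<!-- Placed inside the <head> of the post template, so every variant URL
     (print views, tracking-parameter URLs, paginated comments) points
     search engines back at the one preferred address. -->
<link rel="canonical" href="http://www.example.com/2011/04/my-post/" />
```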

The only other thing you can do here is file a DMCA notice and have the scraped content taken down yourself. This is indeed very difficult for large sites with thousands of pages, but you have to keep an eye on the scrapers and take them down if they are ranking ahead of you in the SERPs.

5. Google says that low-content pages are no good for users. Hence, I will delete all those blog posts that are less than 200 words; I don’t want low-quality pages to hurt my high-traffic pages that have valuable content.

Okay, this one is a bit tricky, but first you need to define a few things:

1. What is good content?

2. Does word count play any role in deciding the usefulness of a page?

There is hardly a proper definition of what good content is. It varies from one perspective to another, and every website has its own way to benchmark the quality, usefulness and overall usability of the content it creates.

And word count is not a reliable measure either.

If one page has 2,000 words on some topic and another page has only 300, that does not automatically guarantee that the former is richer in content.

Enter word padding, keyword stuffing, sentence repetition, citation, irrelevant opinions and user comments.

What I am trying to convey is that the same information, masked under the label of a 1,000-word article, could often have been said in far fewer words.

But what’s the harm in deleting low-content pages? I hardly get any traffic to those blog posts and I think they are hurting my pillar content.

Yes, but at the same time you will lose indexed pages and the PageRank that used to flow through them. I agree that your pillar content is the cannon, but at the end of the day it needs those tiny matchsticks to fire a shell.

Removing all the low-content pages will also produce a good number of 404s, which might break your site architecture, and you will lose the Google juice flowing through those pages. Don’t make this mistake right now; you can always hammer down the tree later.

Instead, apply “noindex, follow” to the archive pages of your blog, which are nothing but collections of content that actually resides on your single post pages.
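In a WordPress-style theme this usually means outputting a robots meta tag from the header template, conditionally on archive pages; the snippet below is a sketch of the tag itself:

```html
<!-- Output only on archive, tag and category pages: "noindex" keeps the
     duplicate listings out of Google's index, while "follow" lets the
     bots keep crawling the links through to the single post pages. -->
<meta name="robots" content="noindex,follow" />
```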

6. I have linked to a lot of external sites in the past. Since the Farmer algorithm is live for everyone, I must remove all those links, as they are draining the PageRank out of my blog.

If you have sold text link ads or linked to untrusted sites that don’t produce original content, you might want to remove those links. But don’t overdo it and start removing every link indiscriminately.

Remember: linking to external sites does not necessarily reduce your site’s PageRank or authority, nor does it drain the Google juice from your pages in that sense.

7. I didn’t pay enough attention to link building earlier on. I will contact the SEO guy and buy 50 dofollow text links from authority pages to recover my lost traffic.

I doubt it will work, and I won’t recommend it either.

The search bots can determine whether a link is natural or forced, so you might get an initial thrust, but believe me: if your main product isn’t right, every other effort will fail.

Things I Would Recommend Doing

I would recommend performing a routine check on the technical aspects first.

1. Log in to your Google Webmaster Tools reports and fix the crawl errors. 301-redirect the bad links to the actual pages and let the Google juice flow smoothly across your site.

2. Use Google’s URL removal tool to remove the previously crawled 404s.

3. Check your robots.txt file and look for unnecessary directories deeper in your site that you forgot to exclude.

4. Check your code, not just the theme files but also the corresponding output in the browser.

5. Apply “noindex, follow” to the tag, category and archive pages of your site.

6. Be patient, and don’t do anything silly just because some problogger wrote a possible fix on his MMO website.

7. There is gold in the archives. Log in to your web analytics program and find the pages whose traffic has dropped considerably. Compare these pages with past data and try to find a pattern.

8. Remember that this is an algorithm, and it works the same way mathematics does: you can score 100 out of 100 in the next exam even if your current score is only 40.
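For the redirect and robots.txt checks in steps 1 and 3, here is a minimal sketch; the domain, paths and directories are hypothetical examples, not taken from any real site:

```apache
# .htaccess on an Apache server: permanently redirect a crawl-error URL
# reported in Webmaster Tools to the live page, so the 301 passes link
# equity instead of serving a 404.
Redirect 301 /old-broken-post/ http://www.example.com/new-post/
```

```
# robots.txt at the site root: keep crawlers out of deep utility
# directories that hold no original content.
User-agent: *
Disallow: /wp-admin/
Disallow: /search/
```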

At the end of the day, it is you who has to find what’s wrong with your site and fix the problems. My saying that this works and that doesn’t is just a light in the darkness. You have to analyze your current situation and make decisions. Some of these decisions will look hard and unjustified in others’ eyes, but remember that no one else can judge your site and its internal behavior as well as you can.