foaf+ssl in Mozilla’s Fennec works!


At yesterday’s Bar Camp in La Cantine I discovered that Mozilla’s Fennec browser for mobile phones can be run on OSX (download 1.0 alpha 1 here). So I tried it out immediately to see how much of the foaf+ssl login would work with it. The answer is all of it, with just a few easy-to-fix user-experience issues. I really am looking forward to trying it on the Nokia N810 Internet Tablet for real.

Anyway here are quick snapshots of the user experience.

Getting a certificate

First of all, the best news is that the <keygen> tag, now documented in HTML5, works in Fennec. This means that one can get a client certificate in one click without going through the complex dance I described in “howto get a foaf+ssl certificate to your iPhone“.

This is how easy it can be. Go to

After filling out the form, you can create yourself an account on

To make your WebID useful, all you need to do is click on the “Claim account with SSL certificate” button — which could certainly be phrased better — on the account creation successful page:

Once clicked, your browser will start calculating a new public/private key pair, send the public key to the server, which will turn it into a certificate and send that back to your browser, which will then add it to the keychain! All you will see of this whole transaction is:
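For the technically curious, the server’s side of this exchange can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the code of any actual foaf+ssl service: it uses the Python `cryptography` library, a made-up WebID (`https://example.org/joe#me`), and it generates the key pair locally, whereas in reality the public key arrives from the browser’s <keygen> form.

```python
# Sketch: how a foaf+ssl server might turn a browser's public key into a
# certificate whose subjectAltName carries the user's WebID.
# Assumes the third-party `cryptography` library; the WebID is hypothetical.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

WEBID = "https://example.org/joe#me"  # hypothetical WebID

# In reality the public key comes from the browser's <keygen> form;
# here we generate one locally so the sketch is self-contained.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Joe")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: foaf+ssl does not need a trusted CA
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    # The WebID travels in the certificate as a subjectAltName URI entry:
    .add_extension(
        x509.SubjectAlternativeName([x509.UniformResourceIdentifier(WEBID)]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# A relying site later reads the WebID back out of the client certificate:
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
webids = san.value.get_values_for_type(x509.UniformResourceIdentifier)
print(webids[0])
```

The point is that the WebID sits inside the certificate’s subjectAltName, so a relying site can read it straight off the client certificate during the TLS handshake and then dereference the foaf file to verify the public key.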

The Fennec message here is a bit misleading: you should not in fact need to keep a backup copy of your certificate. foaf+ssl certificates are very cheap to produce. And without a link to the keychain from the popup, most users won’t know what is being talked about, or how to keep a backup. On a cell phone they may well wonder where to put the backup anyway. So in this case the message is wrong, and not that helpful. Much better would be a popup saying: “Your certificate has been installed. Would you like to see it?” Or something like that. Most people won’t care.

Using the certificate

You can then test the foaf+ssl certificate on any number of sites. The site, for example, has a login button that, when clicked, gets the browser to ask the user to choose a certificate. And this is where the user interface choices made by the Mozilla team are simply embarrassing. Not unusable, just really bad.

No user ever cares about these details! It is confusing. Do you think users have issues with URLs? Well, what do you think they are going to make of old, outdated Distinguished Names?

Just compare this with the User Experience on the iPhone

Quite a few bug/enhancement reports have been filed on this issue on the Mozilla site. See for example Bug 396441 – Improve SSL client-authentication UI, and my other enhancement requests.

Still, this user interface issue should be really easy to fix, as it is just a question of making things simpler, i.e., of reducing the complexity of their code. And clearly on a cell phone that should be a priority.

Another issue I can see with the Fennec demo browser is that I could not find a way to remove certificates. That would be quite important functionality too.

But in any case, Fennec is currently the easiest cell phone browser on which to use foaf+ssl – and one of the rare ones, if not the only one, where it works correctly! So kudos for that! Fennec and the Nokia N810 are the place to look for what a secure life without passwords, without user names, and with a global distributed social network can look like on a mobile platform.

September 30th 2009 Mobile, security

Another great Bar Camp in La Cantine


Today, or rather yesterday (Tuesday), I was at a Bar Camp on cloud computing, social networks, the Open Stack and geolocation at the very friendly La Cantine, organized by Silicon Sentier in Paris.

La Cantine is a great place to meet lots of people anytime. You can just stop by and drink some coffee while hacking a project on the web. But today with guests from Google, Mozilla, Sun (me and some others) and a very enthusiastic and technical audience the place was full of energy.

As it was a Bar Camp, the timetable organized itself. A track on social networks appeared, and so of course I presented foaf+ssl as I had done 10 days before at the Social Web Bar Camp, except that this time we had to do it without a projector, as we, 20 or so people, were gathered around the bar. So for those who were there and would really like a better overview of what this enables, I recommend the following links:

  • The second video of the blog “FrOSCon: the Free and Open Source Conference in Sankt Augustin, Germany“, (best viewed in Firefox 3.5 at present)
  • The essential web site, where one can put together a foaf file in a few clicks and, on browsers other than Internet Explorer, get a certificate in one click (Firefox and Opera are recommended)
  • The foaf+ssl wiki which contains the links to all the papers and howtos, including the essential mailing list.

This was also the occasion to meet a lot of very knowledgeable people from Google, such as Patrick Chanezon, and from the European Mozilla team, such as Tristan Nitot. Sadly, I was so busy answering questions that I missed quite a lot of the other talks. But I did make a lot of good contacts that I will now be following up on.

September 30th 2009 Uncategorized

You Shouldn’t Use YouTube For Building YouLinks


As a link builder I’m not enamoured with YouTube and do not recommend using it as a primary way to build links or as an integral part of your SEO program. 


  • Videos on YouTube live on YouTube, so any optimization effort you implement helps YouTube, not your website/pages.
  • YouTube contributes to the pinking of the ’Net/Web (it uses nofollow), so any link you insert to guide people back to your site passes no link popularity.
  • While traffic from YouTube can be beneficial, you have to optimize the content on YouTube like any other content in order for people to find it. This is time better spent elsewhere.
  • Efforts to make a video go viral begin with the webmaster, not YouTube.
  • It’s doubtful you’ll build a brand following on YouTube unless the public is already aware of your brand.
  • By and large, people look for information on a search engine first; they don’t search on YouTube for a place to buy baseball cards. There is a reason Google has become a verb and YouTube a pastime.

But the number one reason?

  • YouTube results bump web pages down in the general search results, and web pages make sales; videos don’t!

Want to see what I mean? Look here, here, and here and notice how the videos are all ranking in the top five but the sites they represent don’t. Yes, the exposure is nice, but where is the opportunity to make a sale? Throw in local search results and images showing up, and it can take a while to get to a static search result. If your goal is to make your website an authority in your industry/niche, you should house and promote the videos on your site, not on YouTube. This will help with algorithmic authority, branding and traffic.

So is using YouTube to build SEO links a wasted effort? Pretty much, which is why I don’t recommend using it to increase your link popularity. But I wouldn’t totally discount using the number two search engine on the Net to build awareness. Consider doing this:

  • Make shorter versions of your videos and post them on YouTube; the longer video stays on your site.
  • Create those shorter versions as teasers and as a lead-in to promotions/information on your site.
  • Be sure the start and ending frames of the video include the URL of your website.
  • Optimize your YouTube listing with your keywords.
  • Be the first one to leave a comment/review under your video; include the URL of your website and explain that a longer, more detailed version of the video exists on your website.
  • Encourage everyone you know to drop a comment/review on the video (reviews help push your video to the top for your keywords).
  • Create a video area on your site just as you would a media room and promote it to the media, your customers, vendors, etc.
  • Make the videos on your website available through Creative Commons; make full descriptions embedded with keyword-rich links part of your attribution.

You need to decide what’s best for your site, and if having YouTube videos come up in the SERPs for your keywords is your goal, more power to you. But if you’re in business to make a profit and plan to use video to attract links, know that the links you point at YouTube will have little to no effect on your overall rankings.

Use YouTube or any image/audio hosting site wisely and it can be your greatest ally, not a ranking enemy.

(photo taken from Zazzle.  Buy a tee shirt!)

September 29th 2009 YouTube

Chris Pratley on Channel 9


We have gotten a lot of feedback that you would like to hear more about what is happening behind the scenes of Office Labs.
Over the past months we have been busy at work on additional projects, some of which we hope to share with you in the next several months. In the meantime, we thought you might enjoy this Channel 9 interview with the head of Office Labs, Chris Pratley. Channel 9 is a site that hosts interviews with the people behind Microsoft products.
Hope you enjoy it and let us know what you think.
Published: 10/2/2009 3:18 PM
September 26th 2009 Uncategorized

Google Penalty for No rel=nofollow on Affiliate Links


Got affiliate links? Not using rel=nofollow? Then Google may give you a penalty so that you no longer rank. Use rel=nofollow on affiliate links!
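For anyone unsure what this looks like in practice, here is a minimal, hypothetical sketch of retrofitting rel="nofollow" onto affiliate links. The affiliate host names are invented, and the regex approach is for illustration only; on a real site you would fix the templates or use a proper HTML parser.

```python
import re

# Hypothetical affiliate hosts -- substitute your own networks' domains.
AFFILIATE_HOSTS = ("affiliate.example.com", "partner.example.net")

def nofollow_affiliate_links(html: str) -> str:
    """Add rel="nofollow" to <a> tags pointing at known affiliate hosts.

    Regex-based for brevity; production code should rewrite templates
    or use a real HTML parser instead.
    """
    def fix(match: re.Match) -> str:
        tag = match.group(0)
        if not any(host in tag for host in AFFILIATE_HOSTS):
            return tag  # not an affiliate link: leave it alone
        if "rel=" in tag:
            return tag  # already carries a rel attribute
        return tag[:-1] + ' rel="nofollow">'

    return re.sub(r"<a\b[^>]*>", fix, html)

html = '<p><a href="https://affiliate.example.com/buy?id=7">Buy now</a></p>'
print(nofollow_affiliate_links(html))
```

Only anchors pointing at the listed hosts are touched; ordinary links pass through unchanged.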

September 25th 2009 Uncategorized

Why Doesn’t Google Have a Dictionary? Still Link to


Why does Google still use for word definitions as opposed to establishing a dictionary/reference function of its own?

September 22nd 2009 Uncategorized

Social Web Bar Camp in Paris


social web bar camp program drawn up on the blackboard

After flying in from Berlin on Friday and celebrating the Jewish new year late into the night with Ori Pekelman, I woke up earlyish on Saturday to go to the Social Web Bar Camp organized in and by La Cantine, the very friendly Parisian conference, community, meeting space for creative people in the digital age.

At 10am the conference started and people slowly arrived for the freely available espresso coffee and pastries. The conference was free too, being sponsored by the member organizations of La Cantine. At 10:20am, as the coffee had worked itself into the 60 or more attendees, Ori started the workshop (picture) by having everybody introduce themselves briefly by name and three tags. The Bar Camp rules of the game were then explained:

  • Everybody is a participant
  • You make the event
  • Feel free to move between sessions if you feel you are not getting what you were looking for at one of them
  • Write up your interests on the blackboard; this will be used to create the timetable.

So the sessions were put together on the spot there and then.

Of course I put up a session on foaf+ssl and Distributed Social Networks on the blackboard, for the session starting at 11am.

After a last coffee, a little over 20 people gathered in the room. I connected the laptop to the projector, introduced myself and the W3C Social Web XG, before starting the presentation (slides in pdf) which I have been giving in various universities and hacker spaces around Europe for the past 5 months. (see the FrOSCon video for example)

picture of the discussion in the foaf+ssl session

A round table discussion of this size has a very different dynamic to conference presentations. It is a lot more free flowing, and people can ask questions, which they did as I went through the presentation, leading to lively discussions on security, identity and web architecture. At times it seemed in danger of veering off into wide philosophical discussions, but somehow we always got back to the topic, helped by the real implementations of foaf+ssl that are now available. We did in fact manage to finish covering the subject by 12:30, including an excursion into the very real business opportunities this enables.

From the twitter posts (tagged #swcp) and the invitations to follow up with other French public and private institutions that I got over the course of the day, I can only say that this conference was a great success. I could not have started my 1 month stay in Paris in a better way. I will clearly be very busy during the coming month, before my return to Berlin.

Thanks to Huges M for the photos. More of his pictures are available on his flickr account under the #swcp tag.

Further pointers

September 20th 2009 security

Google Stepping Up to Counter Bing’s Growth?


Have you noticed the increase in Google’s output? Is this because of Bing? Glad we’ve got such an interesting search battle ahead!

September 19th 2009 Uncategorized

Chris Silver Smith Interviewed by Eric Enge


Published: September 17, 2009

Chris Silver Smith is the Director of Optimization Strategies at KeyRelevance. Chris has an extensive background in search engine optimization and Internet application development, and he is a regular speaker at Search Marketing Expo, Search Engine Strategies, American Marketing Association seminars, and other technology and marketing conferences.

Chris previously worked as Lead Strategist at Netconcepts where he provided search marketing consulting and product development for their GravityStream automated optimization software. Prior to that, Chris served as Head of the Technology Department for Verizon’s sites, leading teams which focused upon advertising applications, taxonomic development, usability, user interface design, and more. Chris worked at Superpages for over a decade and his projects included work in: R&D, map-based search, Campus Area Yellow Pages, weather forecasting systems, ecards, XML APIs, RSS feeds, mobile applications, city guides, and more.

While at, Chris founded their extensive SEO program, initiating research on increasing search engine referral traffic naturalistically as far back as 1997, and he was later honored with the corporation’s highest award for this work in 2004, the Verizon Individual Excellence Award, for increasing site visits and associated click-through revenues by many millions of dollars. In 2006, Chris went on to found and chair the Idearc SEO Council, pulling together individuals from across the organization who worked on elements of natural search optimization.

Interview Transcript

Eric Enge: What are the most important things for people to be concerned about when they want to rank well in local search?

Chris Silver Smith: In my opinion, local search optimization is based upon a foundation of regular search engine optimization, but there are a number of additional factors beyond what we see in regular SEO, so there is a little more complexity here. Some of the same classic SEO ranking factors feed into local search rankings as well, including well-formed titles and good H1 tags on pages.

There is also another way to rank well within local search: even a business that doesn’t have a website can rank, because it may have listings within business directories whose listings get indexed and ranked by various local search engines. (For more on this, see Chris’s article on how Google Maps may have switched from ranking sites to ranking businesses, independent of website.)

Eric Enge: That’s one of the big problems in local search. There is very little control over the way that data is maintained and propagated in both online and offline directory listings. It seems almost like a mass of random Brownian activity at times.

Chris Silver Smith: That’s a really good description of it. All these different little pieces of information are in a way feeding off of and influencing one another, and then showing up in different places within search results. All of them may have something to do with the business that you are particularly interested in.

Eric Enge: Right. There are actually a lot of things that can go wrong: someone may incorrectly type a phone number or website address, or the business may change locations or possibly go out of business. These are just a few examples of things that make the data go bad.

Chris Silver Smith: Correct. There are many factors, and you’ve already touched on a few of them. Another example is that there may be multiple ways to write a street address. I’ve seen this happen before in big cities and small towns. In big cities like New York City, there could be an east version and a west version of a street or an avenue, for example. Local businessmen could be writing their street address in the way locals refer to it, and may not include the East or West.

Google, and other mapping systems, may pinpoint the address in the wrong spot along the street. That has impacted me personally, and consumers are impacted by it very heavily in general, as it affects their trust of online directories, online mapping systems and local search systems. It could also affect someone’s overall assessment of a business: if they go to an address and the business is not there, they get irritated. They are not going to have a very high tolerance for this, and even if they eventually find the business, their opinion of it has probably suffered.

The difficult thing about this is that it may be completely unrelated to anything the business did, and it can affect a customer’s appreciation and review of a business. It can impact customer satisfaction before they’ve ever even arrived at the business. The mapping issue has been one of the biggest problems with local search since the inception of the Internet. There are a whole lot of reasons why businesses may be incorrectly located on maps, and it’s the area Superpages received more complaints about than anything else over the years. Of course, we didn’t even make our maps in-house; we used a few different external mapping systems, and each of them had its own problems and issues.

All of those factors feed into the process of getting listed and located correctly, and they affect the local search engine and directory’s ability to canonicalize listings and all of the different specks of a business’s information within one particular listing. I refer to the Brownian specks of information floating around as a sort of “constellation of local search information sources”. In a recent article, I wrote that if the search engine successfully associates all those diverse specks of info with each other, then they can help to build a particular business’s rankings within the search results.

Eric Enge: One issue I think most people don’t understand is that if a search engine doesn’t have confidence that a business is at a particular location, its enthusiasm for ranking that business highly for related searches is going to drop.

Chris Silver Smith: That’s true.

Eric Enge: This could be potentially harmful for a business if they go in and correct their listing at Superpages, for example. It’s great that it’s correct there, but it could still be wrong at, or at thousands of other websites. The business could then have one place where their listing is correct and a bunch of other places where it’s wrong, and what’s worse is that they are all different. So if the search engine sees a bunch of data and it’s not really confident that the data is accurate, then that can be a ranking issue.

Chris Silver Smith: I can describe how that works to some degree. Search engines, local directories and online yellow pages use a variety of methods to try to associate businesses’ information that they get from multiple sources all with the same listing. There are a few different things that they do, including comparing the business name, street address and phone number.

The phone number has in the past typically been considered something that doesn’t vary as much as some of the other information from all the different business sources. As a result, it may be used for associating those various pieces of information all within one listing, but there are cases where people start adding on phone numbers, and then these directories don’t know which is the primary number for a particular listing.

In addition, there are businesses using various tracking phone numbers to determine how many calls they get from the various types of promotional work and advertising they do. They might use a different phone number in their newspaper ad, in each of the different yellow pages print directories and on their different ads online, so they might have a whole series of different phone numbers showing up across these different mediums.

If the phone number is different, it may result in the search engines having difficulty associating the same business’s information if some of the other pieces of information are not identical or very similar, such as the business name and the street address. There are additional problems with that, however, because there are many variations in the way people cite street addresses, as I mentioned before.

Google may think that a street address is “Highway 1” for example, whereas the more common name locally may be “Main Street” or some other alternate name. I’ve seen cases where streets have three or four different possible names or spellings, and an address could have multiple different businesses all at the same street number, like in a shopping center, a large office building or a shopping mall. This is how it starts getting more and more difficult for the search engine to associate these pieces of information all with one listing.

There are multiple different ways that directories cite the business name as well. Doctors, for example, often have their last name listed first in directories. There are also other cases where people use variations in business names, and they all may be valid, just different and used for different reasons. So if a business has different listings in InfoUSA and in Superpages, then associating those pieces of information gets very difficult. And if a business has one listing associated with one website and another associated with an alternate website, this is not going to give them a good chance to rank well on the search results. The same is true for reviews and ratings. If three or four different online business directories have reviews and ratings for a business, they can’t get collected together under a single unified rating when Google pulls all those pieces of information together. In this situation that particular business is not going to have the best chance to rank highly.
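To make the matching problem concrete, here is a toy sketch (in Python, with invented data) of how a directory might guess whether two listings describe the same business: normalize the phone number first, then require a fuzzy name match as a second signal. No real local search engine’s logic is implied; the fields, threshold and example businesses are all assumptions.

```python
import re
from difflib import SequenceMatcher

def normalize_phone(phone: str) -> str:
    """Strip everything but digits so formatting variants compare equal."""
    return re.sub(r"\D", "", phone)

def name_similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two business names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_business(l1: dict, l2: dict, threshold: float = 0.6) -> bool:
    """Guess whether two directory listings describe the same business.

    A phone match is the strongest signal, but a fuzzy name match is
    required too, since tracking numbers and added-on lines mean phones
    alone are not reliable.
    """
    if normalize_phone(l1["phone"]) != normalize_phone(l2["phone"]):
        return False
    return name_similarity(l1["name"], l2["name"]) >= threshold

# Two hypothetical listings for the same plumber, cited differently:
a = {"name": "Smith & Sons Plumbing", "phone": "(212) 555-0134"}
b = {"name": "Smith and Sons Plumbing Inc.", "phone": "212-555-0134"}
print(same_business(a, b))  # → True
```

Real aggregators use far richer signals (street-address canonicalization, geocoding, category data), but the shape of the problem is the same: reconcile noisy variants into one canonical listing.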

Eric Enge: I interviewed Pankaj Mathur of InfoUSA a few months back, and he told me that InfoUSA has 14,000,000 different businesses listed in its data, and Google has four or five times this many, almost one business for every five people in the United States.

The reason Google’s number is so much higher is because it gets its data by crawling the whole web, whereas InfoUSA does work to verify and confirm every listing it has.

Chris Silver Smith: There are a couple of reasons why InfoUSA might show fewer listings. One of them could be that Google has a variety of different information sources and is having problems collapsing those listings together. The problem is that Google is taking information from sources that are not as high quality as InfoUSA, which is one of the very few business listing aggregators that actually calls every single business in its directory once a year to verify its information. One problem is that businesses fail all the time, and they are not very good about notifying directories that they are no longer in business. Those dead, old listings get left in directories everywhere because there are no good processes to get them deleted comprehensively. Unfortunately for Google, this is one case where “having more” search results is actually an indication of inferior quality.

Eric Enge: We have established that if a listing aggregator isn’t managing and monitoring its data, it is eventually going to go wrong, so what’s the best way to then clean it up?

Chris Silver Smith: There are a few different ways, and unfortunately there is no universal way that is going to work for everyone. One of the best ways is to try to clean it up with a main data aggregator such as InfoUSA or Acxiom. They provide their data to many of the most important places, including Google, Superpages, and many other directories. If the business goes to those main data aggregators and tries to get its listing information updated, then that’s great, but it can be challenging because those companies are not set up to deal with a lot of small businesses. They rely on getting information back from some of their data partners, including, and Google.

The best thing to do is have a shotgun approach, where they make a list of all the top directories and then go and check their information in each directory periodically to see how they are showing up.

I’ve had clients who were not very careful about this, who would check only one listing that looked right and was ranking well within that directory. If they had looked more closely, however, they would have found a handful of other listings associated with it that were also showing up.

One of the ways to solve that problem would be to go in and search by phone number if reverse search is offered. They have to ask themselves if there is possibly some other listing showing up under their phone number.

I even found one egregious case where one of my client’s listings had been hijacked by a competitor who added their own URL into the listing in an attempt to steal their referrals!

Eric Enge: Right, they are using the phone number to hijack the listing.

Chris Silver Smith: Yes, exactly.

Eric Enge: That gets back to what you said before about how most people assume a company’s phone number is the item least likely to change.

Chris Silver Smith: Right.

Eric Enge: Right. So, working with InfoUSA, Acxiom, and also working directly with the major yellow pages sites as well, would be the basic recipe.

Chris Silver Smith: That’s correct. All of those data sources are really good places to search for local listings, and I have seen variations on how they’ve operated over time. Localeze also could be a very good partner if a business doesn’t have a lot of time and it wants to pay someone to go out and try to get their information updated in all those different locations. They are a little odd in my view because they straddle the line between being an information provider and an advertising publisher for businesses.

They are selling on both ends of the equation, the information to business directories and search engines, and advertising to end-users.

Eric Enge: Of course companies like InfoUSA and Acxiom are originally direct marketing companies, so they sell lists for people to mail, email, or call.

Chris Silver Smith: Yes, that is correct, but they are not really selling advertising to small businesses, at least I don’t think that they are.

Eric Enge: Can you talk a little bit about links and web references in the context of improving ranking?

Chris Silver Smith: Google is using as many ranking factors as possible in an effort to broaden its sources and stop people from exploiting the local search results. Google wants to give fair ranking status to all businesses and to provide high quality information to its end-users. They don’t want to be exploited through more simplistic ranking routines, as we’ve seen in the past, so there is no overemphasis on any one ranking factor, like inbound links.

Inbound links used to be the web references or citations method of choice for Google, and it was the foundation of its PageRank linking algorithm, but it recently broadened to a larger spectrum of different ranking factors that could play into and help influence what should rank best. In terms of ranking businesses, the old ranking methods used by business directories were proximity and alphabetical order, and of course there were people who ranked businesses based on how much money they spent on advertising as well.

Now, I am talking a little bit about online yellow pages with the evolution of local search, because online yellow pages were the only sources of local search before local search engines and map-based search engines were developed. Those two ranking criteria, proximity and alphabetical order, were the original dominant ranking methods. Google changed that paradigm and tried to make keyword relevancy the higher ranking factor, which was a real interesting development.

They have since broadened beyond mere keyword relevance to also taking other factors like ratings and popularity of a business into account. Popularity is a very vague notion and is very difficult to define or quantify, but Google uses a number of different ranking methods to try and do this as effectively as possible. One of those is to measure how many times a business is referred to by people, so a business that is talked about or referred-to regularly would rank higher than a business that is not.

They may be looking at a whole lot of different sources of information for these types of citations, which could include how many times a particular business is mentioned within blog posts, within microblogging platforms like Twitter or Facebook, or within news stories. There have even been some people who claim that Gmail could factor into this, based on how many times businesses are mentioned within emails. I think that could be a really compelling signal for Google to use. When we talk about citations or web references online, there are a number of different types of references.

Links used to be the basis of Google’s PageRank algorithm, and they continue to be a factor in the ranking of businesses in local search results and of regular web pages in web search. There are additional types of references that Google is now allowing to influence rankings within Google Maps, including how many times a phone number of a business is referred to in all those various sources on web pages, blog postings and news stories.

It also possibly considers how many times an identifiable business name or URL is mentioned in relation to a local area. There are many news feeds and newspaper sites that have a policy of not linking, but they will occasionally mention a URL in plain text within a news story.

Another factor that influences rankings within Google Maps is how many times the address itself is mentioned in that web space. I mentioned this in an article I wrote on the topic of using reverse search for local search optimization.

If there are two different businesses across town from each other, but one is in a more highly popular location, then perhaps that business should be ranked more highly. This is especially applicable in tourist hotspots, like next to Grauman’s Chinese Theater in Los Angeles or in a shopping center that has various other businesses within. Those businesses could be cited as more popular because the street address of that shopping center is referred to over and over again in many different media sources.

So even though people might not mention a restaurant by name, for example, it might have a better chance of getting ranked higher because it’s in a popular location. If there is a particularly popular shopping district, even mentioning it and associating it with the business could be very worthwhile. There are a handful of different types of web references, and they all seem to be getting weight within Google Maps.

Eric Enge: There is a fairly strong consensus that these types of web references are a factor in local search. Any reason why they can’t use that kind of signal in global, regular search?

Chris Silver Smith: They certainly could be, particularly if those local search ranking factors could easily be feeding back into regular web search. If a business is ranking well within local search, why wouldn’t the same types of ranking factors feed back out into the regular web keyword search results? Some of those references could be feeding back into the overall keyword rankings for pages. A business’s phone number could be associated with its website, and perhaps that could feed into the overall rankings, which are spread out through the pages of that particular business’s website.

Eric Enge: In simplistic terms, a newspaper article could mention a URL and not implement it as a link. That’s a very simple case of a web reference that could easily be associated as a vote by a search engine.

Chris Silver Smith: That’s right.
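The unlinked mention Eric describes is straightforward to detect programmatically. Here is a minimal sketch of how a crawler might separate plain-text URL mentions from actual hyperlinks; the class name and the deliberately simplified URL pattern are illustrative inventions, not anything Google has published.

```python
import re
from html.parser import HTMLParser

# Simplified on purpose: real crawlers handle far more TLDs and URL forms.
URL_PATTERN = re.compile(r"(?:https?://)?(?:www\.)?[\w-]+\.(?:com|org|net)\b", re.I)

class MentionFinder(HTMLParser):
    """Collects URL-like strings that appear as plain text,
    i.e. outside of any <a> tag."""
    def __init__(self):
        super().__init__()
        self.in_anchor = 0
        self.linked = []    # URLs wrapped in a hyperlink
        self.unlinked = []  # plain-text mentions (the "web references")

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_anchor += 1
            href = dict(attrs).get("href")
            if href:
                self.linked.append(href)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_anchor:
            self.in_anchor -= 1

    def handle_data(self, data):
        if not self.in_anchor:
            self.unlinked.extend(URL_PATTERN.findall(data))

html_doc = (
    "<p>Local diner reviewed at <a href='http://example.com'>example.com</a>. "
    "More details at stonetemple.com, which we do not link to.</p>"
)
finder = MentionFinder()
finder.feed(html_doc)
print(finder.unlinked)  # only the mention that was never hyperlinked
```

The point of the separation is that a search engine could count the `unlinked` mentions as a weaker form of endorsement, alongside the conventional links.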

Eric Enge: One might also have high-authority sites that implement links but use nofollow, which is supposed to stop link juice from being passed. If the search engine knows from context that nofollow is simply a blanket policy on that site, and still wants to treat the link as an endorsement, it can.

Chris Silver Smith: That’s exactly right. Many of us in the SEO profession have the sense that even nofollowed links might have some level of value. I think that both nofollowed links and mentions of URLs within text that are not hyperlinked could be considered by Google to some degree. Google probably counts those two types of references at a lower rate, and it probably rates links from different sources at varying levels of value.

Those nofollowed links could still feed in and give a fractional amount of PageRank transfer compared to a pure link from the same site. Wikipedia is probably the biggest example of this at this point. If Google didn’t pay attention to links within Wikipedia pages, then I don’t believe we would see those links and their anchor keywords listed within Google Webmaster Tools, but that’s actually what we do see in practice.

If someone has a valuable link that they’ve added to Wikipedia for a valid reason, they could then see that link appearing within the metrics that are shown in Google Webmaster Tools. I don’t think Google would be listing these if that wasn’t something they are going to use in some way, so I believe they have some influence. Google probably just weights them a little bit less heavily.
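The idea of "fractional" value across different reference types can be sketched as a simple weighted sum. To be clear, every number below is a hypothetical assumption for illustration; Google publishes none of these weights.

```python
# Hypothetical weights -- invented for illustration, not known Google values.
WEIGHTS = {
    "followed_link": 1.0,   # a plain, followed hyperlink
    "nofollow_link": 0.3,   # assumed fractional value of a nofollowed link
    "url_mention": 0.2,     # URL in plain text, not hyperlinked
    "phone_mention": 0.1,   # business phone number cited on a page
}

def reference_score(counts):
    """Combine raw citation counts into a single score using
    the assumed per-type weights above."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

business = {"followed_link": 4, "nofollow_link": 10,
            "url_mention": 5, "phone_mention": 8}
print(reference_score(business))  # 4*1.0 + 10*0.3 + 5*0.2 + 8*0.1
```

The design point is simply that lower-trust reference types contribute something rather than nothing, which matches the intuition Chris describes.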

Eric Enge: Even if the links aren’t nofollowed, Google certainly knows when a link is coming from a blog or from a comment from a user. These are things that they think about all the time. To bring it back around to local search, can you talk a little bit about how social media plays into this situation? How can people use social media to help with their web reference campaign?

Chris Silver Smith: The social media services that are out there are the closest thing Google has to word-of-mouth endorsements, which is the gold standard that Google would ideally like to use as a relative ranking signal. Since they obviously can’t hear what we are all saying to each other all the time, the next best thing currently is all the different types of social media. We believe that Google will see these as signals for relative popularity.

Google was interested in this with blogs, in terms of “burstiness”. They might look very suspiciously at a site that suddenly got thousands of inbound links. If all those inbound links came from credible sources, however, that is considered to be normal bursty behavior for something that just abruptly appeared on the scene and became popular overnight.

Social media is a good source for this, and we can see that Google is interested in it because many people believe their recent search engine development, the “Caffeine” test platform, was geared in large part to try to absorb highly bursty messaging from sites like Twitter. They are trying to absorb that content and get the information to feed into the system, not just so those little pieces of information can be available, but also so they can influence the rankings of other pages.
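The "burstiness" check Chris mentions can be approximated with a simple anomaly test: flag a site whose latest inbound-link count sits far above its historical norm, then decide (as Chris notes) whether the sources are credible. The three-standard-deviation threshold below is an illustrative choice, not a known Google value.

```python
from statistics import mean, pstdev

def is_bursty(weekly_new_links, threshold=3.0):
    """Flag the most recent week if its new-link count sits more than
    `threshold` standard deviations above the historical average.
    The max(sd, 1.0) floor keeps a flat history from dividing into noise."""
    history, latest = weekly_new_links[:-1], weekly_new_links[-1]
    avg, sd = mean(history), pstdev(history)
    return latest > avg + threshold * max(sd, 1.0)

# Steady growth, then a sudden spike of inbound links:
print(is_bursty([12, 15, 11, 14, 13, 950]))  # True
```

A spike that passes this test isn't automatically spam; the follow-up question in the passage above is whether the burst came from credible sources.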

Eric Enge: It’s great stuff and a great opportunity for people. I think we are still at a stage where there is a lot of bias introduced by the people who use various social media platforms, which are not yet used by a large percentage of the people who use the web. Still, it’s a signal, and you can count on the search engines using those types of signals. As those platforms grow and get broader, the engines are likely to give them more and more weight.

Chris Silver Smith: I think that’s right. They see a future within all sorts of social media, even if it’s a little bit undefined right now.

Eric Enge: Thanks Chris!

Chris Silver Smith: Thank you Eric!



About the Author

Eric Enge is the President of Stone Temple Consulting. Eric is also a founder in Moving Traffic Incorporated, the publisher of Custom Search Guide, a directory of Google Custom Search Engines, and City Town Info, a site that provides information on 20,000 US Cities and Towns.

Stone Temple Consulting (STC) offers search engine optimization and search engine marketing services, and its web site can be found at:

For more information on Web Marketing Services, contact us at:

Stone Temple Consulting
(508) 485-7751 (phone)
(603) 676-0378 (fax)

September 18th 2009 News

Who Controls Link Building Success (LinkMoses Resurrected 6)


If you are a seeker of link building services, and are evaluating companies or persons to help with your linking strategies or link building, it’s logical and reasonable to have some questions you want answered.

One of the most telling questions of all is this one:

“How many links can you get for us and how much will it cost?”

If you asked me this question, the best answer is the most honest answer, and here it is.

For the highest-quality content seeking links, once you have identified the highest-caliber and most credible targets, it is never me, the link builder, who gets you the link, and it is never me who controls anchor text or any other HTML-based editorial choices. It is your content that dictates the ultimate result, not me or any other merit-based link builder.

Put another way, the higher the quality of the target site, the more likely it is the editor/owner is a “curator” of links, passionately picky about what does and does not get on their pages, links, text, anchors, and otherwise. Thus it is not the link builder who controls the success or failure of that process. Even fantastic content doesn’t assure links will be granted, or granted in the manner you, as the link builder, wish they were. To try and hold a link builder accountable for editorial decisions made on high merit sites they do not control is, frankly, silly.

Example? Sure.

Let’s say I want a link on this page. For this client. PageRank 8, blah, blah, blah. It wasn’t me that got this link. It was the quality of the site I was seeking a link for. All I did was match content of merit with a link curator of merit.

Yes, I agree it’s easy to do this when the content is that strong. But again, this is the exact point of merit-based linking. And it’s why the engines give them such weight. And once again, notice, no anchor text. It isn’t needed, isn’t used, and to ask for it from this particular target site is like asking the Soup Nazi from Seinfeld for an extra roll with your Crab Bisque.

Just shut up and be happy a link is there at all.

At best, the link builder’s role with merit- or citation-based link building is to have the skills to identify the right targets and editorial contacts at those targets, make a brief and polite content introduction, and then leave.

NOTE: To ask a question, click the “comments” link below, or email your question to eric at eric ward dot com

The post Who Controls Link Building Success (LinkMoses Resurrected 6) appeared first on

September 16th 2009 Uncategorized