SEO

Web 3.0 is Official With Schema.org

The three main search engines have finally agreed on a standard for structured data and how to mark it up. Schema.org has been released, documenting how publishers can standardize the structure of their content.

The potential effect on users is huge: machines get a massive boost in intelligence simply because we feed them knowledge, instead of waiting for them to become so intelligent that they can understand things the way a human being can.

None of this is actually new, and there is no technology breakthrough. It is simply a human system that we agree upon: we feed the computers with data and the "meaning" of that data. Hence the name, the semantic web.

Imagine how much processing and statistical analysis a computer or a search engine needs to do in order to understand when to treat "apple" as a fruit and when to treat it as a computer company. Or something more dramatic like "banana republic": two seemingly unrelated words, each from a different field, that together mean something else entirely.

What will happen now is that when someone writes an article, they will mark it up with the relevant meta tags, so that the computer knows how to treat that string of letters. Readers will see the same content, but the computer will see structured data.
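As a minimal sketch, here is what such markup could look like using schema.org's microdata vocabulary. The property names follow the published Article type, but the headline, author, and body are invented for illustration:

```html
<!-- Sketch of a marked-up article using schema.org microdata.
     The headline, author, and body text are invented; the
     itemtype and itemprop names follow schema.org's Article type. -->
<div itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="name">Apple Unveils a New Laptop</h1>
  By <span itemprop="author">Jane Doe</span>
  <div itemprop="articleBody">
    The company announced its thinnest machine yet...
  </div>
</div>
```

A reader sees an ordinary headline and byline; a crawler sees an Article whose "Apple" is clearly the company, not the fruit.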

The effect can easily be seen in search engine results, and this is where search engines graduate to the next level of usefulness and become, as Bing claimed to want to be, decision engines.

They will give us meaningful information about our search queries so we can better decide where to go.

A big category of content is recipes:

Structured Data Search Results

Traditionally, search engines would try to figure out which part of the page was most relevant to your search query, and then show the closest thing in the result snippets. This required the search engine to understand what you meant by the query and provide something useful accordingly. With semantic markup, and especially if the user includes a helpful word like "recipe" in the query, the search engine gets tremendous help and can serve relevant results in two ways:
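As a sketch, here is what recipe markup could look like with schema.org's Recipe type, exposing attributes like preparation time and calories. The recipe and its values are invented; the property names follow schema.org's published vocabulary:

```html
<!-- Hypothetical recipe markup using schema.org microdata.
     The recipe name and values are invented for illustration. -->
<div itemscope itemtype="http://schema.org/Recipe">
  <h1 itemprop="name">Classic Banana Bread</h1>
  <!-- durations use ISO 8601 format: PT15M = 15 minutes -->
  Prep time: <time itemprop="prepTime" datetime="PT15M">15 minutes</time>
  <div itemprop="nutrition" itemscope
       itemtype="http://schema.org/NutritionInformation">
    <span itemprop="calories">240 calories</span> per serving
  </div>
</div>
```

The preparation time and calorie count are exactly the kind of attributes a search engine can then surface directly in its snippets.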

  1. Displaying relevant information: this is usually presented in gray, right under the headline or title of each result. Because the search engine has figured out which pages you might be interested in, it shows information from each page to help you decide which result is best for you. In the recipe case above, you probably want to know the calorie count and preparation time to pick the best option.
  2. Special search options: since each type of content has its own set of attributes and uses, we need a different set of tools to refine results for each type of information. In a different example, after Google figured out that "laptop" is probably a shopping query, it gave me shopping-specific options: it asked for my location so it could show results "nearby," and it presented a whole set of attributes I could use to filter for the right laptop.

Structured Data Search Results

The really important thing here is that the search engine doesn't need deep knowledge of a laptop's different options, or to figure out how to display them. That would require a lot of processing and intelligence.

The content publishers (the laptop sellers, in this case) already did the homework for Google: when they published their products, they semantically tagged each attribute. Now all the search engine has to do is pick up those attributes and display them to the user in a structured way, as filters that are genuinely useful.
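A seller's product page might tag those attributes roughly like this, using schema.org's Product and Offer types (the laptop name, brand, and price are invented for illustration):

```html
<!-- Hypothetical product markup using schema.org microdata.
     Product name, brand, and price are invented. -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">UltraBook 13</span>
  by <span itemprop="brand">ExampleBrand</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="priceCurrency">USD</span>
    <span itemprop="price">999.00</span>
  </div>
</div>
```

The search engine can read the brand and price directly and offer them as filters, without guessing at the page's structure.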

Instead of going to ten different pages, going back, refining your query, and hunting for the best way to phrase it, you can do much of the filtering and choosing before you even visit a page, which dramatically increases your chances of finding what you are looking for.

This is a crucial step in how we find information.

MediaME Forum 2010 - Amman

It was great being in Amman for this forum, catching up with some friends and meeting new people. This is my presentation and the video.

Web 3.0 and Search Engine Optimization

The coolest potential applications of Web 3.0 will arrive when our machines start talking to each other intelligently, making decisions on our behalf, and suggesting meaningful things based on past data and our preferences. But one of the first steps to get there is simply structuring data in a way computers can deal with immediately, instead of having to extract meaning and patterns from raw text.

Semantic search engines extract meaning by "reading" the text and inferring that France is a country, Nescafe is a coffee brand, and the Dalai Lama is a person. This is great, but it requires a lot of computing power, and it struggles with different kinds of text and with the different meanings the same word can have in different contexts.

The simple way to help search engines "understand" content is to extract those entities ourselves and hand them to the search engine.

Structured data simply means that certain "entities" are tagged in a way that describes them as the entities they are. For example, instead of writing "I live in Dubai, United Arab Emirates," you can tag the same sentence so that "Dubai" is a city, and not just the letters D-U-B-A-I, as follows:

I live in
<div class="adr">
  <span class="locality">Dubai,</span>
  <span class="country-name">United Arab Emirates</span>
</div>

The user will still read the same sentence, but search engines and other sites that work with structured data will find your content much more easily, because your entities are identified. Moreover, you can easily export your reviews, products, and any other information to sites that aggregate that kind of data.

For example, if your site offers product reviews and they are tagged properly, other shopping sites and shopping engines will be able to extract the relevant data from you, making your products available to their users without much effort on your part.
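Sticking with the microformat style used in the Dubai example above, a product review might be tagged with the hReview draft format. The class names follow the hReview spec; the product, rating, and wording are invented:

```html
<!-- Hypothetical review using the hReview microformat draft.
     The product name, rating, and text are invented. -->
<div class="hreview">
  <span class="item"><span class="fn">UltraBook 13</span></span>
  Rating: <span class="rating">4.5</span> out of 5
  <p class="description">Light, fast, and the battery lasts all day.</p>
</div>
```

A shopping engine that understands hReview can now pull the item name and rating straight out of your page.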

This is clearly going to become an essential part of search engine optimization, and like anything else in technology, it will only pick up when a large enough number of websites adopt it. Then we will witness a transformation of our web experience.

SEO for Images and Videos

Search engines can easily read and process text, although not yet in natural language. They "love" text because it helps them make sense of the pages they crawl. The challenge with images and videos is that search engines cannot "read" them, yet.
They rely on the tags, descriptions, and the title of the page to categorize and index images and videos.
A good way to help search engines understand these media is to encourage users to describe them, instead of just commenting on them. The typical comments on a video are "nice," "cool," "very bad," and so on: nothing that describes the image or video itself.
Implementing this would give you two main advantages:

1. Better indexing for your pages: since your page will be full of relevant keywords that actually describe your content, search engines will find it much more easily. This is especially beneficial for sites with huge amounts of content, as it lets them leverage a very long tail of keywords. Someone searching for "dog with green shirt" is far more likely to land on your page if a user described it that way, and they will be happy to have found exactly what they were looking for.

2. Rich user experience: having a synopsis of a video, or a creative description of an image, helps viewers recognize things they would not have noticed otherwise. A new angle or a special detail gives the content a new perspective. This is especially helpful with videos: you often wait several minutes hoping a video is what you are looking for, only to discover it is not. This way, users know what to expect before they watch.
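On the markup side, the standard place for descriptive text is the image's alt attribute, and a page could then surface the best user description as ordinary, crawlable text. A sketch, where the file name, class name, and descriptions are all invented:

```html
<!-- Illustrative only: a descriptive alt attribute plus a
     top-voted user description exposed as plain text. -->
<img src="dog-green-shirt.jpg"
     alt="A small dog wearing a green shirt at the beach">
<div class="top-description">
  <p>A terrier in a bright green shirt chasing a ball along the shore.</p>
</div>
```

Both the alt text and the description live in the page as real text, so they feed directly into the long-tail keyword effect described above.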

Implementation

It is not as easy as it sounds (I have yet to test it and see for myself), especially if your users are already used to just dropping simple comments.

A good idea might be to introduce descriptions alongside comments (and eventually replace them). Users who don't want to write descriptions can vote on each one, so the best description rises to the top and is the first thing new visitors see.

The level of interaction with the content goes up when people try to describe it instead of just venting their impressions. Fewer people will participate, because it is naturally harder to describe than to comment, but I think it is a risk worth taking to improve the way people use your service and to differentiate it from competitors.


How a Bit More Content Means Much More Website Traffic

Let's explore whether we can get more website traffic by adding more content to our website. I'll first try to make it scientific, and later we'll tackle the issue from a human perspective.

Create in your imagination a world with the following characteristics:

  • The internet has only two sites: 123marketing.com and ABCmarketing.com.
  • Search engines have indexed two hundred pages in total, split in half between these two sites.
  • Each page talks about one of the only two topics: marketing or advertising.
  • All pages are ranked equally on search engines, meaning they have the same degree of optimization, and therefore, they are all equally likely to show up as the first search result for their respective keywords.
  • 123 has 80 pages about marketing and 20 about advertising. On the other hand, ABC has 20 pages about marketing and 80 pages about advertising.

The table below hopefully simplifies the whole story:

                   Marketing    Advertising    Total
123marketing           80            20          100
ABCmarketing           20            80          100
Total                 100           100          200

As we assumed above, all pages are ranked and optimized equally, so they are all equally likely to show up in search results. Assuming the first result always gets the click, for the query "marketing" 123marketing.com has an 80% chance of getting the visitor, while ABCmarketing.com has a 20% chance. The same applies to the query "advertising," with the chances reversed.

We can now easily see that for every 1,000 searches for "marketing," 123marketing.com can expect around 800 visits while ABCmarketing.com gets only about 200. This means that, with all other factors held equal, the more pages you have about a certain topic or keyword, the more likely you are to get traffic from search engines.

Another way of looking at it: every page that contains the keywords you write about is competing with you for the same visitor. The more pages you have about a certain topic (the bigger your market share of that little universe), the easier it is for you to win visitors.

That was simple math. Now let's take a human look at the issue, with a more realistic approach where not all pages are ranked equally and many webmasters are trying to manipulate the search results. We will also assume that an intelligent human being is searching, and that she knows what she needs: really useful information that will help her in her work or life in general.

First of all, there are many variations of one keyword, and it can be combined with many other words to form phrases. People will not only search for "marketing"; they will search for "internet marketing," "advertising and marketing," "radio marketing," and so on.

If you have 100 different articles about marketing, you can satisfy 100 different searchers, each in a different way, because each probably has a particular aspect of marketing in mind. Remember: the longer the search phrase, the better the searcher knows what they are looking for, and the happier they will be to find an article about exactly that niche of the subject.

What about the smart SEOs who can get to the top of the results for a hot keyword? They will surely get tons of traffic!
But if they lead users to a site that contains only 3-4 pages about a given query, those users will finish reading, still be hungry for more, and never come back. They will keep searching until they find a site that really has enough content to cater to a huge number of search queries, and a huge number of people.
Why does a bit more content mean a lot more traffic?

Simply because pages contain more than one keyword, and people search for keywords and combinations of them.
You will get traffic from people searching for the article's main topic, and from all the related topics in that article.
One last reminder: search engines are becoming much more efficient and accurate at directing people to the best available site. So, if you are like me, by the time you figure out how to manipulate your way to the top of the search results, that "algorithm" will have improved and you will be thrown out of the arena.
My simple advice: just try to be useful, write about it, and they will come.