Reason 1: Stand Out From The Competition
According to PQ Media estimates, a considerable amount of energy and money will be spent on content marketing worldwide in 2019 (PQ Media, Global Content Marketing Forecast, 2015).
However, while content marketing is already booming in the United States and the United Kingdom, other markets – such as France, Spain or Italy – are only starting to show interest in the approach.
Therefore, if you plan to develop in one of these countries, it might be a good opportunity to gain a decisive lead over your competitors and build an audience.
Reason 2: Create Value
While online advertising diverts users’ attention (Seth Godin aptly coined the term “interruption marketing”) and its performance is increasingly challenged (by the wide adoption of ad blockers, the growth of ad fraud, and dismal display click-through rates), content marketing intends to bring valuable information to users along their customer journey.
The purpose of the inbound approach in content marketing is to let users decide the type and the context of their relationship with a brand.
Content marketing is different from other methods in that it aims to serve the audience before serving the brand. Through content, a brand must inform, entertain, or teach: it must bring value to the users.
At a time when innovation and information are accelerating at a rapid pace, it is not enough to just sell one’s products and services; they need to be given meaning and a long-term purpose. The value of an offer, the history and commitment of a brand, and the service provided to users must be consistently told and demonstrated to be truly perceived.
“The only way to circumvent the bullshit detector is to not bullshit.”
Shane Smith, Vice Media Founder
In this new paradigm, traffic becomes audience, and branded content’s intent is to capture users upstream of the customer journey and accompany them through unique content experiences. Content strategy must anchor the brand’s voice and tone, and create a consistent experience while generating valuable organic traffic.
Reason 3: Build Visibility And Grow Traffic Over The Long Term
With the generalization of ad blockers (since last year, they can also filter advertising from your Facebook feed) and mobile users less and less tolerant of unwanted advertising intrusions, the inbound approach and content marketing have a bright future ahead.
Moreover, unlike paid channels (display and programmatic formats, search advertising or social), where traffic stops as soon as the budget does, content marketing develops traffic that is bound to last for months if not years.
To achieve this, one of the main goals assigned to content is to drive traffic from organic channels: search engines, social networks, and referral websites.
Reason 4: Adapt To Mobile And Video Growth
Since 2015, searches made on Google from mobile phones have outnumbered desktop searches. This shift in behaviour also generates a growing demand for mobile video: a 2018 Cisco study estimates that by 2022, video consumption on smartphones will represent 77% of global data traffic.
Beyond TV campaigns and corporate films, brands are expanding their video content to emerge on video platforms: YouTube, often considered the second-largest search engine in the world, represents a largely underused source of organic traffic that cannot be ignored much longer.
Search Quality Raters are users who evaluate the quality of the Google algorithm’s search results. Behind the magic of automation, users’ questions and intentions and the search results they receive are analysed and evaluated, tightly circumscribed by guidelines provided by Google. Their feedback is used by engineers to improve the quality of the search results served to everyday users. However, SEO professionals worry about the impact of these teams’ work on URL rankings. Despite claims to the contrary by Google managers, many remain sceptical about how fully the collected data is used.
The humans behind the algorithm
An unknown profession, but not secret
Since the early 2000s, people have been working on and analysing the results of Google’s algorithm. Today, there are approximately 10,000 of them in the world. They are average people, users of search engines like everyone else. They applied for a part-time job offer at a third-party company such as Lionbridge or Leapforce and had to pass two tests in order to be selected: one tested their reasoning through questions, and the other consisted of near-real-life exercises. At home, they spend between 10 and 20 hours per week (paid between $12 and $15 per hour) studying and giving feedback on past search results.
The analysed results are mainly organic – texts, images, videos and news results (sometimes paid ad results as well). Each day, they are offered different tasks to evaluate search results. They can, for example, test a given URL and assess its relevance to a query on desktop or mobile. They also make side-by-side comparisons of organic results for the same search and select the results that best match the query.
The companies provide them with information such as the language of the search, the location, and sometimes a query map (a record of previously searched queries) to better understand the intention of the user. Their purpose: to put themselves in the shoes of any user and determine whether the results are relevant to the intent behind the search.
A closely monitored job
Each task has an estimated completion time, and the agencies time the Search Quality Raters during their tasks to judge their efficiency. For example, evaluating the quality of a URL is estimated at 1 minute and 48 seconds. To ensure that the analysis is done without bias and with care, the same tasks are assigned to several Search Quality Raters. If their results diverge, they are asked to reach an agreement together; in case of persistent disagreement, a moderator decides.
The Guidelines: Quality Made in Google
To frame the evaluation of search result quality as well as possible, Google provides guidelines (via the third-party companies). In 2015, after many leaks, Google finally decided to publish them officially.
Google regularly updates them according to the algorithm’s new objectives. The last official publication dates back to July 20, 2018, and is 164 pages long.
In the guidelines, Google explains to its Search Quality Raters how to evaluate the quality of the pages in its search engine. To do so, three ratings must be carried out.
The objective is to verify that the result matches the query and the user’s intention. For this, Google identifies four kinds of queries: those aiming to inquire (know), to act (do), to reach a specific site (website), and to visit a place (visit-in-person). The Search Quality Rater evaluates whether the result meets the need by placing a cursor on a scale from FailsM (Fails to Meet the Needs) to FullyM (Fully Meets the Needs). Some queries can be a mixture of several types.
A Search Quality Rater may decide not to assign a rating to content and to “flag” it in certain cases: if the material is pornographic, presented in a language different from that of the query, does not load, or contains upsetting or offensive content.
The E-A-T acronym stands for Expertise, Authoritativeness, Trustworthiness. The Search Quality Raters assess the level of expertise of the content by verifying that the author of the main content has enough personal experience for it to be considered relevant.
They then assess the authority of the main content, the site, and the author. A Search Quality Rater must find evidence of their reputation, along with recommendations from entities whose authority is already clearly established.
Finally, Trustworthiness is the confidence that the user can have towards the site. It is established with the main content, the website and the author.
This evaluation is in no way related to the query. Through these criteria, Google emphasizes the benefit that the content brings to users. As it says on the Google Blog: “We built Google for the users, not for websites”. Through this rating, Google is fighting back against the rise of fake news.
The Overall page quality rating
This rating is based on the query and the user’s intent. It includes five criteria: the purpose of the page, the E-A-T rating, the quality of the main content, the information found about the website and its author, and the reputation of the website and the author.
The YMYL pages
Some pages are rated more strictly than others: the Your Money or Your Life (YMYL) page category, created by Google, groups pages containing medical, financial, legal, news, and public/official information, as well as pages used for shopping or financial transactions. Their content can have a significant impact on the lives of the users reading them, which is why they must contain high-quality information.
A quarter of the guidelines is dedicated to mobile queries and the assessment of their content, especially for “visit-in-person” queries. Both the main content and the quality of the pages’ mobile optimisation play a full part in this.
Grey Areas around the ratings
The impact on the SERP ranking
Many experts have expressed concerns about the role of Search Quality Raters in the Search Engine Results Page (SERP). Can the evaluation of URL quality and the feedback from Search Quality Raters cause a downgrade? Is the collected data reused beyond refining the algorithm? In response, Matt Cutts, then head of the webspam team at Google, said the feedback would only be used to refine the algorithm; the webspam and quality rater teams have two separate goals and are not connected.
Indeed, the process starts by evaluating the quality of sites. Then, when engineers change the algorithm, Search Quality Raters assess the difference in quality during side-by-side evaluations, without knowing which side contains the product of the algorithm change and which is the old version. Engineers modify and improve the algorithm based on this feedback, and can then run a live test on a small percentage of users who are not Search Quality Raters.
However, even if in the short term the ranking of a page judged by Google to be of poor quality is not altered, we can imagine that it will be in the long term. If a page presents characteristics considered to be of bad quality, being rated as such by a Search Quality Rater will not directly impact its ranking; on the other hand, the engineers will make sure that only high-quality results appear among the top results as the algorithm changes.
The Search Quality Evaluator Guidelines as SEO bedtime reading
The ratings of Search Quality Raters are therefore essential. Unfortunately, Google does not communicate them to site owners, but the guidelines framing them are public, which is why the Search Quality Evaluator Guidelines are an essential document for evaluating one’s own content. By doing our own assessment, we are more than likely to find areas for improvement. Moreover, as SEO is an ongoing effort, this evaluation should be renewed regularly, and especially whenever the guidelines are reworked.
- An Interview With A Google Search Quality Rater – Search Engine Land – January 20, 2012
- Search Quality Evaluator Guidelines – Google – July 20, 2018
- We built Google for users, not websites – Google Europe Blog – September 6, 2014
- How does Google use human raters in web search? – YouTube | Google Webmasters | Matt Cutts – May 1, 2012
Over the past years, we have seen the growth of chat apps, social media, and business tools for sharing content and communicating better.
We have also noticed general annoyance at the number of emails received each day, in both professional and personal contexts, with commercial communications.
Despite all of this, email is still thriving. In fact, according to a Statista report, 269 billion emails were sent and received each day in 2017. This figure should reach 333 billion by 2022, that is to say approximately a 23% increase in barely five years.
I have worked for several months in CRM marketing, especially emailing, and I am currently working in a French data marketing agency. What I learnt is that the way brands use emailing is indeed totally out of date. Email marketing is not over yet though, and I am truly convinced that machine learning will very soon transform it.
In my view, machine learning will have an impact on four elements that are the pillars of email marketing. It should help marketers send communications to the right recipients, at the right time. Most of all, artificial intelligence will provide a better knowledge of these recipients, who are actual human beings: email marketers will be able to meet the needs of each individual, case by case. According to a PwC and L’Usine Digitale survey*, half of the 240 leaders interviewed exploit less than 25% of the data they collect and analyse… A reality that might change thanks to machine learning.
A better segmentation to communicate to the right recipients
The current strategy for most brands is to target people based on personal information such as their age, the geographic area where they live, or their purchase history. The future of targeting is, in my opinion, based on the analysis of their behaviour. Machine learning algorithms will make it possible to create newly qualified segments that receive customized communications according to their behaviour patterns.
One very interesting new tool for improving targeting is Tinyclues. This solution helps brands and retailers with huge customer databases sort through that amount of data. Artificial intelligence is able to predict who will be most likely to open, click, and buy a product or service. To make these predictions, Tinyclues uses implicit customer data, such as the domain name of an email address, the purchase history, or the link the customer clicked on. The algorithm then finds correlations among billions of mostly unstructured data points and learns from them in order to propose a solution.
As an illustration, this short video explains what Tinyclues does:
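To make the idea of behavioural targeting concrete, here is a minimal, hypothetical sketch of propensity scoring in Python. All names and data are invented, and a real tool like Tinyclues builds far richer models at a much larger scale; this simply shows the principle of learning open rates from past campaign behaviour and ranking recipients by predicted propensity:

```python
from collections import defaultdict

def train_propensity(history):
    """Learn per-feature open rates from past campaign results.

    history: list of (features, opened) pairs, where features is a
    dict such as {"domain": "gmail.com", "last_click": "shoes"}.
    Returns a scoring function that averages the observed open
    rates of a recipient's known feature values.
    """
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [opens, total]
    for features, opened in history:
        for key, value in features.items():
            stats = counts[(key, value)]
            stats[1] += 1
            if opened:
                stats[0] += 1

    def score(features):
        rates = [counts[(k, v)][0] / counts[(k, v)][1]
                 for k, v in features.items() if counts[(k, v)][1] > 0]
        return sum(rates) / len(rates) if rates else 0.0

    return score

# Toy campaign history: who opened the last email (invented data).
history = [
    ({"domain": "gmail.com", "last_click": "shoes"}, True),
    ({"domain": "gmail.com", "last_click": "shoes"}, True),
    ({"domain": "corp.com", "last_click": "none"}, False),
    ({"domain": "corp.com", "last_click": "shoes"}, True),
]
score = train_propensity(history)
# Rank recipients by predicted likelihood of opening the next email.
ranked = sorted(
    [{"domain": "gmail.com", "last_click": "shoes"},
     {"domain": "corp.com", "last_click": "none"}],
    key=score, reverse=True)
```

The point is that none of the features here are declarative (age, address); they are traces of behaviour, which is exactly the shift described above.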
Content: better knowledge of how to talk to customers
With machine learning solutions, A/B tests on subject lines, body copy, and images will no longer be needed. The artificial intelligence tool will be able to determine which content will perform best in terms of open, click, and conversion rates.
Phrasee explains in the video below how its algorithm makes it possible to generate subject lines:
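The move away from fixed A/B tests can be illustrated with a multi-armed bandit. The sketch below is my own illustration, not Phrasee’s actual algorithm: it uses Thompson sampling to keep choosing between candidate subject lines, so traffic automatically drifts toward the best performer while the others are still occasionally explored:

```python
import random

def pick_subject(stats):
    """Thompson sampling over candidate subject lines.

    stats maps each subject line to (opens, sends). For each line we
    draw a plausible open rate from a Beta(opens+1, sends-opens+1)
    distribution and send to the line with the best draw.
    """
    draws = {
        subject: random.betavariate(opens + 1, sends - opens + 1)
        for subject, (opens, sends) in stats.items()
    }
    return max(draws, key=draws.get)

# Hypothetical running tallies: (opens, sends) per candidate subject.
stats = {
    "Last chance: -20% today": (120, 1000),
    "Your picks are waiting": (180, 1000),
    "We miss you!": (40, 1000),
}
choice = pick_subject(stats)
```

Unlike a classic A/B test, there is no fixed test phase: every send updates the tallies, and the allocation adapts continuously.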
The right timing to send communications
One of the most frequently asked questions in email marketing is “when should I send this email to my customers?”. In my opinion, the answer depends on the sector and the typology of clients. However, if a brand sends too many emails, recipients are more likely to unsubscribe. Conversely, if a brand does not send enough emails, competitors will take its place.
Machine learning will solve both the frequency and the timing problems by analysing customers’ activity history. It will make it possible to determine habits, time zones, and downtimes in order to adapt to each person individually, according to their preferences.
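A crude version of this idea can be sketched in a few lines of Python (an invented example, far simpler than a production send-time optimiser, which would also model time zones, frequency caps, and day-of-week effects): pick, for each recipient, the hour at which they have historically opened emails most often.

```python
from collections import Counter
from datetime import datetime

def best_send_hour(open_times, default_hour=10):
    """Return the hour of day at which this recipient most often opens.

    open_times: list of datetimes of past email opens. With no history
    yet, fall back to a global default hour.
    """
    if not open_times:
        return default_hour
    hours = Counter(t.hour for t in open_times)
    return hours.most_common(1)[0][0]

# Invented open history for one recipient.
opens = [
    datetime(2019, 3, 4, 7, 55),
    datetime(2019, 3, 6, 8, 10),
    datetime(2019, 3, 11, 8, 42),
    datetime(2019, 3, 13, 19, 5),
]
hour = best_send_hour(opens)  # this reader mostly opens around 8 a.m.
```

Even this toy version captures the core shift: the send time becomes a per-recipient decision derived from behaviour, not a single campaign-wide choice.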
Personalize the content
Improving the content can go further than finding the right subject line or image. In order to maximize the results of a commercial email, artificial intelligence will help marketers determine which type of promotion will perform best for each individual (full-price product, new products, discounts, free products, free shipping…). The probability of purchase will be significantly increased. Both companies and customers are winners: companies because they will sell more, and customers because they will receive communications corresponding to their needs or wishes.
In conclusion, it is true that people are receiving too many emails; commercial pressure is a reality. In order to differentiate themselves, brands need to go further than the first step of personalization (like putting the first name in the subject line). With this objective, AI will help marketers sort through the available data to determine the best message, delivered at the best time, and including the right offer for each individual.
Therefore, the next challenge for companies is to hire machine learning talent to implement these new AI tools. It will probably be harder for small brands: according to a PwC and L’Usine Digitale survey*, 44% of companies with fewer than 500 employees are not considering integrating AI into their projects. For companies that are already using AI, the human factor is the first obstacle to the development of AI tools: 56% of the companies interviewed cite a lack of knowledge and 49% a lack of training.
This might in the end build a gap with huge companies that have the means to attract and retain highly qualified talent.
Racing video games have always been fifteen years ahead of the industry they portray. Now comes the time to bridge the gap.
Buying a car is not fun.
Well, not for anyone who isn’t into watching countless hours of car-related videos and doesn’t actually roll down his window when he gets passed by a Porsche (which happens often).
For those of you who are not into cars but need one nonetheless, you’re in for an ordeal, much like swimming in a sea full of hungry sharks. Why?