Is the Internet threatened by the rising seas?

In the collective imagination, the internet lives in the cloud, when it is actually far more tangible than we like to think. It spreads across the globe through an underground network that grows with the demand for internet access. And this network is in danger: the rise in sea levels caused by global warming threatens the parts of it that sit along the coast. The damage caused by this rising water could seriously affect our modern lifestyle.

A tangible and massive internet network

The internet is described by the Cambridge Dictionary as “the large system of connected computers around the world that allows people to share information and communicate with each other”. It has three main components: end-user equipment, data centres and the internet network.

This network is itself composed of several elements such as fibre-optic cables, hardware servers, data-transfer stations and power stations. These elements, all interconnected, weave a web that carries information from one end of the world to the other, and its total length is difficult to estimate. In 2014, there were 285 submarine communication cables, totalling about 550,000 miles. The size of the terrestrial network is even harder to gauge, as it grows with demand and newly installed cables intermix with the old ones.

In the United States, most internet infrastructure is estimated to have been built in the 1990s and 2000s. At that time, the development of the network followed the growth of major American cities. Today, operators tend to install network extensions alongside other infrastructure such as roads, railways or power lines. Throughout history and across the world, cities and megacities have developed along coastlines: port cities synonymous with wealth, opportunity and business. These attractive and often densely populated cities now face a new danger: the flooding of their internet network.

The rising seas gaining internet ground

Paul Barford, a computer scientist, and his student Ramakrishnan Durairajan undertook a mapping of US internet infrastructure. Because the infrastructure is privately owned by operators, its locations are kept largely secret to avoid deliberate damage. In mapping the network, they observed that it becomes denser in areas of high population, which are often coastal cities.

They presented their findings to Carole Barford, a climate scientist, and together they became aware of the risk that part of the network could flood. They decided to overlay their map with the sea-level-rise projections published by the National Oceanic and Atmospheric Administration (NOAA). From this, they estimated that by 2033 about 4,000 miles of cable and 1,100 traffic hubs would be underwater in the US. For New York City, about 20% of the internet network would be submerged.
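To make the method concrete, here is a minimal sketch of that kind of overlay, written in Python. The cable segments, elevations and projected rise below are entirely hypothetical; the researchers worked from real infrastructure maps and NOAA projections.

    # Illustrative only: count how many miles of buried cable would sit below a
    # projected sea level, given segment elevations (all numbers are made up).
    segments = [
        # (name, length in miles, elevation above current sea level in feet)
        ("harbour-ring", 120, 2.0),
        ("coastal-trunk", 300, 5.5),
        ("inland-backbone", 900, 40.0),
    ]

    projected_rise_ft = 6.0  # a NOAA-style projection for the chosen horizon

    flooded = sum(length for _, length, elev in segments if elev <= projected_rise_ft)
    total = sum(length for _, length, _ in segments)
    print(f"{flooded} of {total} miles below the projected level ({flooded / total:.0%})")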

We should not underestimate the repercussions such flooding would have on our current lifestyle. Many services depend on the internet, such as traffic lights, medical monitoring and cash dispensers. Cities have already suffered blackouts due to flooding: in 2012, during Hurricane Sandy, 10% of New York City was left without electricity.

The problem is that the terrestrial network is designed to be water-resistant, but not to operate underwater.

Unlike submarine cables, cables buried in the ground are protected mainly by plastic. They are not adequately protected against flooding or frost. And with parts of the network now decades old, they may be even more fragile than the newer extensions.

The three scientists presented their study of US territory at the Applied Networking Workshop in Montreal on July 16, 2018. “The 15-year predictions are really kind of locked in,” said Carole Barford: nobody can change what is already coming. The cities most affected are New York, Miami and Seattle.

Saving the Internet … from itself?

“If we want to be able to function like we expect every day, we’re going to have to spend money and make allowances and plans to accommodate what’s coming,” said Carole Barford. “Most of the damage that’s going to be in the next 100 years will be done sooner than later … That surprised us. The expectation was 50 years to plan for it. We do not have 50 years,” added Paul Barford.

So, what are the solutions to prevent the network from being submerged?

The first would be to locate all the infrastructure that makes up the internet network. Despite the risk of deliberate damage, it is necessary to identify the infrastructure that will be underwater within a few years. The study predicts that about 4,000 miles of cable and 1,100 traffic hubs will eventually be submerged, but that estimate covers only the networks the researchers knew about. The study must also be extended to every continent and country: since rising sea levels are a global effect of climate change, many coastal cities are likely to be affected.

To limit the impact of rising water on the internet, operators can envisage different solutions: strengthening the current network, moving it further inland, or routing traffic around submerged areas. However, these solutions are neither perfect nor permanent. Strengthening infrastructure will only work for so long. Avoiding submerged areas will affect the quality of the network and could cause latency. Moving existing infrastructure or building new infrastructure will require significant financial investment that could be passed on to the end user.
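The latency cost of routing around submerged areas can be illustrated with a toy shortest-path computation. Everything below is hypothetical: the city names, link latencies and the choice of which node floods are invented for the sketch, and the routing is plain Dijkstra rather than any real operator’s protocol.

    import heapq

    # Toy network: nodes are cities, edge weights are link latencies in ms.
    links = {
        "Seattle": {"Portland": 4, "SaltLake": 12},
        "Portland": {"Seattle": 4, "Sacramento": 9},
        "Sacramento": {"Portland": 9, "SaltLake": 10, "LosAngeles": 6},
        "SaltLake": {"Seattle": 12, "Sacramento": 10},
        "LosAngeles": {"Sacramento": 6},
    }

    def latency(graph, src, dst):
        """Dijkstra's algorithm: total latency of the best path, or None."""
        dist, heap = {src: 0}, [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                return d
            for nxt, w in graph.get(node, {}).items():
                if d + w < dist.get(nxt, float("inf")):
                    dist[nxt] = d + w
                    heapq.heappush(heap, (d + w, nxt))
        return None

    print("Before flooding:", latency(links, "Seattle", "LosAngeles"), "ms")

    # Suppose the coastal hub "Portland" is submerged: remove it and reroute.
    dry = {n: {m: w for m, w in nbrs.items() if m != "Portland"}
           for n, nbrs in links.items() if n != "Portland"}
    print("After flooding: ", latency(dry, "Seattle", "LosAngeles"), "ms")

In this toy example the rerouted path costs 28 ms instead of 19 ms, which is exactly the latency penalty the paragraph above alludes to.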

Our internet use appears to be in danger, but does it contribute to its own destruction? The internet is not as green as it seems. We power data centres, one of its main components, with unsustainable energy sources, creating carbon emissions. Forbes estimated that the carbon footprint of data centres alone is equivalent to that of the global aviation industry, or 2% of global emissions. The carbon dioxide emitted because of our growing use of the internet is one of the causes of melting ice caps and rising sea levels.

Wouldn’t it be ironic if our growing internet addiction was its own worst enemy?

The invisible pollution of the internet

What if the internet became the primary cause of global warming? Ian Bitterlin, a data centre expert, estimates that by 2030 the internet will consume 20% of the world’s electricity. Today, most of the energy the internet consumes is not of green origin; it generates an ever-increasing carbon footprint and contributes to global warming. Facing social pressure and increasingly frequent investigations by independent organisations, large companies are now embarking on a race for a green internet.

The internet’s appetite for energy

A power-hungry global network

To determine the energy consumption of the internet, one must first ask what the internet is. According to the Cambridge Dictionary, the internet is “the large system of connected computers around the world that allows people to share information and communicate with each other”. A study conducted by Ericsson and TeliaSonera determined that the three most energy-hungry components of this “large system” are end-user equipment, data centres and networks.

The end-user equipment

According to a 2017 study from the Centre for the Digital Future in the United States, Americans spend an average of one full day per week connected to the internet. A study from Statista indicates that teenagers are even more exposed: they spend about four hours a day online, a little over a full day per week. These numbers are further evidence of the constant connectivity we experience daily. To stay connected, we use devices that we regularly recharge, thus consuming energy.

The data centres

Data centres are also very greedy. A data centre is a place where large numbers of computers can be kept safely, according to the Cambridge Dictionary. Each click, each message sent, each video watched calls on these computer farms. They use electricity to operate, but above all to keep cool: cooling alone accounts for 40 to 50% of the electricity consumed. McKinsey & Company estimate that only 6% to 12% of the power is used for actual computation; much of the rest keeps servers idling so that a surge in activity cannot crash operations.
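As a rough back-of-the-envelope sketch of those figures (the 10 MW facility size is hypothetical, and the shares are simply the mid-points of the ranges quoted above):

    facility_draw_mw = 10.0   # hypothetical data centre drawing 10 MW

    cooling_share = 0.45      # "40 to 50%" of electricity goes to cooling
    compute_share = 0.09      # McKinsey: only "6% to 12%" does actual computation

    print(f"Cooling:      {facility_draw_mw * cooling_share:.1f} MW")
    print(f"Computation:  {facility_draw_mw * compute_share:.1f} MW")
    print(f"Idle & other: {facility_draw_mw * (1 - cooling_share - compute_share):.1f} MW")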

To illustrate the amount of energy consumed by a data centre, Peter Gross, an engineer and designer of power systems for data centres, said: “A single data centre can take more power than a medium-size town”. In France, data centres consume more electricity than the city of Lyon (French Electricity Union, 2015). Globally, data centres account for up to 3% of energy consumption, The Independent wrote in 2016.

The internet network

The networks that give access to the internet, such as DSL, cable modem and fibre, are also developing rapidly, and they too run on energy.

 

To determine how consumption is shared between these three major components, the ACEEE assessed in 2012 that downloading one gigabyte of data consumes 5.12 kWh of power: 48% in data centres, 38% in end-user equipment, and 14% in internet networks.
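A short worked example makes those shares concrete (the 30 GB monthly figure at the end is an arbitrary illustration, not part of the ACEEE study):

    # Breakdown of the 2012 ACEEE estimate quoted above.
    kwh_per_gb = 5.12
    shares = {"data centres": 0.48, "end-user equipment": 0.38, "internet networks": 0.14}

    for component, share in shares.items():
        print(f"{component:>20}: {kwh_per_gb * share:.2f} kWh per GB")

    # A hypothetical 30 GB of monthly traffic would then account for roughly:
    print(f"Total for 30 GB: {30 * kwh_per_gb:.0f} kWh")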

 

Uncertainty around the exact global consumption

Determining the global energy consumption of the internet is complicated. The Centre for Energy-Efficient Telecommunications (CEET) attempted it and estimated that in 2013 the internet accounted for 1.5% to 2% of the world’s total energy consumption. If the internet were a country, it would be the fifth-largest energy consumer in the world. In 2014, Jon Koomey, a Stanford University professor known for describing Koomey’s law, estimated this consumption at around 10%. However, in 2017 Greenpeace estimated it at the lower rate of 7%.

A few reasons explain this wide spread. The main one is that when end-user equipment consumes energy, that energy is not necessarily used to connect to the internet; a laptop can be used offline to play video games. Allocating the share of electricity actually used for the internet connection is therefore very complicated, and some experts prefer not to count these devices at all so as not to distort the numbers. Moreover, experts expect this power consumption to double every four years: The Guardian predicted that by 2020 the internet would reach 12% of global energy consumption.
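To see what “doubling every four years” implies, here is a tiny compounding sketch starting from the 2013 CEET estimate of roughly 2%. It illustrates the growth claim only; the published forecasts cited above use their own models and arrive at different figures.

    # Compound the "doubles every four years" claim from a 2% baseline in 2013.
    share_2013 = 0.02
    for year in (2013, 2017, 2021, 2025):
        doublings = (year - 2013) / 4
        print(f"{year}: ~{share_2013 * 2 ** doublings:.0%} of global energy consumption")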

With great power comes great sustainable responsibility

The dark side of the power

The problem lies in tracking what kind of energy the internet is actually running on. As Gary Cook, a senior policy analyst at Greenpeace, said: “How we power our digital infrastructure is rapidly becoming critical to whether we will be able to arrest climate change in time. […] If the sector simply grew on its current path without any thought as to where its energy came from, it would become a major contributor to climate change far beyond what it already is.” Indeed, in 2016 The Independent wrote that the carbon footprint of data centres worldwide was equivalent to that of the global aviation industry, or up to 2% of global CO2 emissions.

Some organisations have therefore investigated what share of renewable energy data centres actually run on. The Environmental Leader estimated that in 2015 Google and Amazon powered their data centres with at least 30% fossil energy. In 2015, Lux Research benchmarked Google-owned data centres and found that four out of seven depended on coal power. In 2012, Greenpeace released the report “How Clean is your Cloud?”, assessing how environmentally responsible (or not) some companies were in operating their clouds and data centres.

The Green Power Race

These studies have triggered a race for green power among large companies’ data centres. Google, Apple, Facebook and Amazon now power their data centres with 100% renewable energy or are working towards that objective. Amazon, for example, claims to have powered its servers with at least 50% renewable energy since 2018, although Greenpeace recently disputed this figure, estimating it at only 12%. Greenpeace also points out that the change driven by these big Western companies is not enough: sizeable Chinese web companies such as Baidu and Tencent show very little transparency, communicating little about their energy consumption or their use of green energy, and they have limited access to renewables because of monopoly utilities. And while the GAFA are under the spotlight, medium-sized and small data centres remain off the radar.

Nonetheless, the International Energy Agency (IEA) announced that despite a roughly 30% increase in data centre workloads expected by 2020, their electricity use would rise by only about 3%: data centres are becoming more and more energy efficient.

 

The internet remains our most important source of information and has also made it possible to create less polluting solutions. Reading an email is more eco-friendly than printing it on paper. Using an app to find a parking space is more environmentally friendly than driving around in circles looking for one. If you find yourself worrying about this invisible pollution we generate daily, rest easy in the knowledge that the internet also contains plenty of tips for reducing its own electricity consumption.

 

Is Google’s Algorithm more human than you think?

Search Quality Raters: these users evaluate the quality of the results returned by Google’s algorithm. Behind the magic of automation, users’ queries, intentions and search results are analysed and rated, tightly framed by guidelines provided by Google. Engineers use this feedback to improve the quality of the results served to everyday users. However, SEO professionals worry about the impact of these teams’ work on URL rankings, and despite claims to the contrary by Google managers, many remain sceptical about how all of the collected data is really used.

The humans behind the algorithm

A little-known profession, but not a secret one

Since the early 2000s, people have been employed to analyse the results of Google’s algorithm. Today, there are approximately 10,000 of them around the world. They are ordinary people, users of search engines like everyone else. They applied for a part-time job with a third-party company such as Lionbridge or Leapforce and had to pass two tests to be selected: one testing their reasoning through questions, the other made up of near-real-life exercises. Working from home, they spend between 10 and 20 hours per week (paid $12-15 per hour) reviewing and giving feedback on searches that have already taken place.

“In-our-shoes” analyses

The results analysed are mainly organic (text, images, videos and news), and sometimes paid ad results as well. Each day, raters are offered different tasks for evaluating search results. They can, for example, test a given URL and assess its relevance to a query on desktop or mobile. They also make side-by-side comparisons of the organic results of the same search, selecting the results that best match the query.

The companies provide them with information such as the language of the search, the location and sometimes a map of queries (showing queries previously searched) to help them understand the user’s intention. Their purpose: to put themselves in the shoes of an ordinary user and determine whether the results are relevant to the intent behind the search.

 

A closely monitored job

Each task has an estimated completion time, and the agencies time Search Quality Raters on their tasks to judge their effectiveness. Evaluating the quality of a URL, for example, is estimated at 1 minute and 48 seconds. To ensure that the analysis is done diligently and without bias, the same tasks are assigned to several Search Quality Raters. If their results diverge, they are asked to reach an agreement; in case of persistent disagreement, a moderator decides.

 

The Guidelines: Quality Made in Google

To frame the evaluation of search result quality, Google transmits guidelines via the third-party companies. In 2015, after many leaks, Google finally decided to publish them officially.

Google updates them regularly to reflect the algorithm’s new objectives. The most recent official version dates from July 20, 2018 and is 164 pages long.

In the guidelines, Google explains to its Search Quality Raters how to evaluate the quality of the pages returned by its search engine. To do so, raters carry out three ratings.

Needs Met

The objective is to verify that the result matches the query and the intention of the user. For this, Google identifies four kinds of queries: those that aim to find information (Know), to act (Do), to reach a specific site (Website) or to visit a place (Visit-in-Person). The Search Quality Rater evaluates whether the result meets the user’s needs by placing a cursor on a scale running from FailsM (Fails to Meet) to FullyM (Fully Meets). Some queries can be a mixture of several types.

Scale of the Needs Met Rating

A Search Quality Rater may decide not to assign a rating to a result and to “flag” it instead in certain cases: if the material is pornographic, presented in a language different from that of the query, does not load, or contains upsetting and/or offensive content.
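The vocabulary above can be summarised in a minimal sketch. The Python types are purely our own illustration of the guidelines’ terminology, not Google’s internal tooling, and only the two endpoint grades named above are shown.

    from enum import Enum

    class QueryIntent(Enum):
        KNOW = "know"                        # find information
        DO = "do"                            # accomplish an action
        WEBSITE = "website"                  # reach a specific site
        VISIT_IN_PERSON = "visit-in-person"  # visit a place

    class NeedsMet(Enum):
        FAILS_M = 0   # "Fails to Meet" -- bottom of the slider
        FULLY_M = 4   # "Fully Meets"   -- top of the slider (intermediate grades omitted)

    # Conditions under which a rater may flag a result instead of rating it.
    FLAGS = {"pornographic", "foreign language", "did not load", "upsetting/offensive"}

    # Example: a mixed-intent query rated at the top of the scale.
    task = {
        "query": "pizzeria near me",
        "intents": {QueryIntent.VISIT_IN_PERSON, QueryIntent.KNOW},
        "needs_met": NeedsMet.FULLY_M,
        "flags": set(),
    }
    print(task["needs_met"].name, sorted(i.value for i in task["intents"]))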

 

The E-A-T

The E-A-T acronym stands for Expertise, Authoritativeness and Trustworthiness. Search Quality Raters assess the level of expertise of the content by verifying that the author of the main content has enough personal experience for it to be considered relevant.

They then assess the authoritativeness of the main content, the site and the author. A Search Quality Rater must find evidence of their reputation and recommendations from entities whose authority is already clearly established.

Finally, trustworthiness is the confidence a user can place in the site. It is assessed for the main content, the website and the author.

This evaluation is in no way related to the query. Through these criteria, Google emphasises the benefit that the content brings to users; as the Google blog puts it, “We built Google for the users, not for websites”. Through this rating, Google is also fighting back against the rise of fake news.

We built Google for the users, not for websites – The Google Blog

The Overall page quality rating

 

This rating is based on the query and the intent of the user. It combines five criteria: the purpose of the page, the E-A-T rating, the quality of the main content, the information found about the website and the author, and their reputation.

Scale of the Overall Page Quality Rating

The YMYL pages

Some pages are rated more strictly than others. The Your Money or Your Life (YMYL) category, created by Google, groups pages containing medical, financial, legal, news and public/official information, as well as pages used for shopping or financial transactions. Their content can have a significant impact on the lives of the users who read them, which is why they must contain high-quality information.

A quarter of the guidelines is dedicated to mobile queries and the assessment of their results, especially for “visit-in-person” queries. Both the main content and the quality of a page’s mobile optimisation play a full part in this.

Grey Areas around the ratings

The impact on the SERP ranking

Many experts have expressed concerns about the role of Search Quality Raters in the Search Engine Results Page (SERP). Can the evaluation of URL quality and the feedback from Search Quality Raters cause a downgrade? Is the collected data reused beyond refining the algorithm? In response, Matt Cutts, the head of the webspam team at Google, said the feedback is only used to refine the algorithm: the webspam and quality rater teams have two separate goals and are not connected.

 

The process would first evaluate the quality of sites. Then, when engineers change the algorithm, Search Quality Raters assess the difference in quality through side-by-side evaluations, without knowing which side contains the output of the modified algorithm and which is the old version. Engineers modify and improve the algorithm based on this feedback, and can then run a live test on a small percentage of users who are not Search Quality Raters.
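As an illustration only, a blind side-by-side round might be tallied along these lines; the votes and the hidden mapping between sides and algorithm versions are invented for the sketch and say nothing about Google’s real tooling.

    # Raters state which side they prefer without knowing what each side is.
    votes = ["A", "B", "B", "A", "B", "B", "B", "A", "B", "B"]
    hidden_mapping = {"A": "current algorithm", "B": "candidate change"}

    tally = {side: votes.count(side) for side in ("A", "B")}
    winner = max(tally, key=tally.get)
    print(f"Preferred side: {winner} ({hidden_mapping[winner]}), "
          f"{tally[winner]}/{len(votes)} comparisons")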

In the short term, the ranking of a page judged to be of poor quality by a Search Quality Rater is not altered: the rating itself has no direct impact on its position. In the long term, however, we can expect it to matter. With each change to the algorithm, engineers will make sure that only high-quality results appear among the top results, so a page exhibiting the characteristics flagged as low quality is likely to see its ranking suffer eventually.

The Search Quality Evaluator Guidelines as SEO bedtime reading  

The ratings given by Search Quality Raters are therefore important. Unfortunately, Google does not communicate them to content authors, but the guidelines that frame them are public, which is why the Search Quality Evaluator Guidelines are an essential document for evaluating one’s own content. By carrying out our own assessment, we are more than likely to find areas for improvement. And since SEO is an ongoing effort, this evaluation should be repeated regularly, especially whenever the guidelines are reworked.

 

 

The limpid and floating “Privacy by design” concept

The European General Data Protection Regulation (GDPR) has been in force since 25 May 2018 and applies to organisations across the world. In a data-driven society where analysing and understanding data is a competitive advantage for companies, GDPR serves as a legal safeguard to protect the privacy of all European citizens.

The “Privacy by Design” framework is one of the key concepts of this regulation. It was developed in the 1990s by Ann Cavoukian, former Information and Privacy Commissioner of Ontario (Canada), who proposed an almost medical model that favours preventing privacy “diseases” over curing them.

Five years after Edward Snowden’s disclosures about the NSA’s surveillance programmes, companies are more than willing to embrace this concept to regain customer trust. But is the concept of “Privacy by Design” as limpid as it seems?

“Privacy by Design”, 7 principles

The “Privacy by Design” framework is introduced in article 25 of GDPR: companies should design every project in such a way that personal data privacy is ensured. If a project is designed with privacy in mind, the risk attached to any personal data (such as a data breach) becomes very low. To make its scope as clear as possible, the concept relies on 7 principles:

Proactive, not Reactive; Preventative, not Remedial

By anticipating, companies should be able to ensure the highest level of privacy for every action that will collect, process or destroy personal data. In this way, they will also ensure a high level of security.

Privacy as the Default

Individuals are automatically protected. They do not have to ask or carry out any action to ensure they and their personal data are private and protected.

Privacy Embedded into Design

A product should be designed to respect the privacy of personal data that it will process. Ways of ensuring privacy for personal data are fully integrated at the beginning of the creation process for a new project, product or service.

Full Functionality — Positive-Sum, not Zero-Sum

The goal is to build a balanced relationship where users and companies benefit from the situation (win-win model). It is possible to create this situation with a high level of privacy and security where no parties will suffer any loss.

End-to-End Security — Lifecycle Protection

Personal data should be highly protected during its entire life cycle. Each action that collects, processes and even destroys the data should ensure the highest level of security for individuals.

Visibility and Transparency

A user should be able to verify their data, how it is stored, processed and secured.  Thanks to this, trust between the user and the company should be strengthened.

Respect for User Privacy

In a user-centric approach, the companies’ first concern should be to protect the users’ personal data as much as possible.

 

All these principles should be applied by companies according to their purposes for processing personal data.

GDPR briefly presents some measures that can help businesses implement the “Privacy by Design” concept. Here are some examples:

  • Data Minimisation (article 5): collecting only the data that is actually needed
  • Pseudonymisation (article 25): replacing the identifying fields of collected personal data so that an outsider cannot identify the user (see the sketch after this list)
  • GDPR also establishes specific time limits for retaining personal data, depending on its type
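As a minimal sketch of pseudonymisation, assuming a keyed-hash approach: identifying fields are replaced by tokens so that the record on its own no longer identifies the person, while the secret key, stored separately, allows authorised re-linking. The field names and the key below are hypothetical.

    import hashlib
    import hmac

    SECRET_KEY = b"kept-separately-from-the-dataset"   # hypothetical key

    def pseudonymise(value: str) -> str:
        """Replace an identifying value with a stable, non-reversible token."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    record = {"name": "Jane Dupont", "email": "jane@example.com", "purchase": "train ticket"}
    safe_record = {
        "name": pseudonymise(record["name"]),
        "email": pseudonymise(record["email"]),
        "purchase": record["purchase"],   # non-identifying field kept as-is
    }
    print(safe_record)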

 

A floating implementation

 

Nevertheless, the instructions presented in GDPR are not sufficiently detailed and cannot simply be applied as they stand. Even if companies apply these measures, that alone is not enough to consider a project compliant.

The concept of privacy by design is not a checklist that can be ticked quickly and easily. There is no handbook or detailed process to follow.

For R. Jason Cronk, Author of “Strategic Privacy by Design” and Privacy and Trust Consultant, there is an explanation behind this vagueness: “Unfortunately, part of the strength of her 7 Foundational Principles of Privacy by Design are also their weakness. She (editor’s note: Ann Cavoukian)  purposefully made them robust and flexible to allow organizations to find their own methods to achieve them. However, privacy by design has remained frustratingly vague – its flexibility might be a virtue in some respects, but it is a curse in other respects.”

 

A case-by-case application

Privacy by design is a concept that must be applied case by case: organisations should study and apply the measures needed to comply according to their own use of personal data. In this case-by-case application, companies can sometimes feel overwhelmed. They may turn to a qualified third party if they have the financial means, rely on their own research, or join associations (such as the AFCDP in France) where they can share experience and practices with other companies. In France, the CNIL provides a guide to lead SMEs towards GDPR compliance.

The concept therefore remains vague and difficult for companies to apply. But those that can work with a qualified third party, or that already have the structure needed to apply it properly, hold an undeniable advantage.

 

The DPO, the weakest link?

The challenge is also human. Applying this concept during the creation of a project that processes personal data requires an organisational effort at every level: “privacy by design” should be the first thought, not an afterthought, for every department involved, each at its own level. Data Protection Officers (DPOs) or relays are designated in key departments; their role is to verify and advise the company on how to collect, process and store personal data in compliance with GDPR. Compliance is an ongoing process throughout a project’s life cycle, and the DPO follows the evolution of both the project and the legislation. The designated DPOs must, above all, be motivated: they oversee the application of GDPR within their department and its relays.

If one of the DPOs or relays does not feel sufficiently concerned, the privacy-by-design process is weakened. When a relay does not apply it properly at their level, there is a real risk that some data will not be processed in accordance with GDPR.

One of the DPO’s main tasks is to advise their company. To do this well, the DPO should develop and grow a legal culture around the regulations in force. A DPO should be curious and interested in the subject; if they do not care enough about their responsibilities, the company will suffer from that lack of knowledge.

 

Implementation and awareness are key

 

“Privacy by Design” may be easy to understand, but companies that try to apply it can feel like they are walking on eggshells. Because its application is still at an experimental stage, it remains hard to know where to begin; over time, however, best practices will emerge from this experience and lead to simpler implementation.

Raising awareness is also necessary and essential for an ideal application. Malakoff Médéric’s DPO, Johanna Carvais-Palut, explains that in her company DPOs receive training from the CNIL, get a monthly newsletter on legal developments and take part in monthly meetings.
Today, “Privacy by Design” is essential to ensuring the privacy of all individuals, but it is up to companies to make it happen with the resources they gather.

 

 
