USA: viewers 65 and over are increasing their TV time while the youngest (12-24) have cut theirs in half (Source: Nielsen)

The new habits of American TV viewers

Business: The publication of "The Nielsen Total Audience Report – Q2 2017" offers an opportunity to measure how American households behave in front of the television and how far their behavior is shifting over the course of the digital revolution.

source: http://www.zdnet.fr/blogs/digital-home-revolution/les-nouvelles-habitudes-des-telespectateurs-americains-39860196.htm

The main interest of analyzing the Nielsen report is seeing how Americans' behavior evolves from quarter to quarter under the pressure of the digital revolution and of the new audiovisual players. These figures cannot be transposed as-is to France, but they nonetheless show how the television market is reacting to de-linearization.

 
TV audiences flatten, mobile consumption explodes
Year over year, comparing second quarters, TV viewing among Americans aged 18 and over fell from 4h09 to 3h55 per day, a decline of 14 minutes. Over the same period, time-shifted TV viewing gained 2 minutes in a year, reaching 32 minutes per day. But the strongest growth came from smartphone and tablet apps: 1h09 in 2015, 1h43 in 2016 and 2h27 in the second quarter of 2017.

Young viewers have halved their TV time in 7 years

Analysts very often downplay the decline in TV audiences by pointing to seasonal effects, major sporting events or major social occasions. But the audience figures Nielsen has published over a long period leave no doubt: in 7 years, the time Americans spend in front of the television has fallen by 16% on average. The picture varies sharply by age, however: the oldest viewers (over 65) are increasing their TV time while the youngest (12-24) have cut theirs in half.
The recomposition of TV time
This evolution is mainly explained by two combined effects: the "cord-cutting" phenomenon, i.e. households cancelling their subscriptions to traditional pay-TV services, and the very strong growth of SVOD, which keeps gaining market share, along with OTT offers, including skinny bundles, which sell low-priced TV packages outside the major distributors (internet service providers and cable operators). It has been known and said for several years: television no longer has a monopoly on video. It must learn to share. De-linearized viewing is now part of every American's "TV time."
Moreover, as Nielsen explains, viewers have access to a very large quantity of on-demand video content through multichannel providers. And as the distribution possibilities of these new services widen, viewers gain new ways to watch them on their TV sets via connected devices.
More screens for more on-demand viewing
The report also tells us that 58.7% of households equipped with a TV set (69.5 million homes) owned at least one other device capable of streaming video to a television. These OTT (Over The Top) devices include smart TVs, game consoles, and the various sticks and boxes such as Roku, Fire TV, Apple TV and Chromecast. Penetration of these devices grew by 12% in one year. 6.5 million households have access to all three types of device; 31 million households own two devices capable of streaming content to a television.

SVOD, now unavoidable

Finally, Nielsen confirms the US SVOD figures: 59% of households subscribe to an SVOD service, 6 points more than in 2016. SVOD is now an integral part of the American TV offering and keeps growing.

The new era of television is well and truly here. New players, new behaviors, new technologies, new alliances. Now we wait for the internet giants to confirm their appetite for this market.

IKEA Place: the app that lets you visualize your IKEA purchases at home

Visualizing your IKEA sofa in your living room, without buying the sofa, thanks to augmented reality: that is what the "IKEA Place" app makes possible.

source: http://www.mysweetimmo.com/2017/11/19/ikea-place-lappli-permet-de-visualiser-vos-achats-ikea-a-maison/

 

If you have ever felt like crying at the mere thought of a trip to IKEA, picture the obstacle course you have to complete to reach the sofa section, only to realize, standing in front of the sofa of your dreams, that you forgot to measure your living room. You are in for a quick round trip home, in (almost) perfect cheer.

That nightmare is over, because IKEA has now thought of everything. Since September 29, a new application has been available on iPhone and Android: "IKEA Place". The concept is simple: using your smartphone's camera, the app scans your room, then places the object of your desires wherever you want it. You can thus check whether the color of that stool matches your wallpaper, or whether that table is too dark for your veranda.

 

The Swedish firm has announced that more than 2,000 items are already available in the app. Thanks to augmented reality, you can move in to examine a piece of furniture up close and walk around it. Note, however, that for now it is impossible to order in one click from the app. In any case, judging by social media, this initiative from the furniture giant seems to be winning people over.

Voice-Tech Is Here — What’s Next For Businesses And Consumers? The goal of perceptual user interfaces is to become as natural and inconspicuous as humanly possible. (Havas XVIII)

Voice-Tech Is Here — What’s Next For Businesses And Consumers?

Source: https://18pillowtalks.io/voice-tech-is-here-whats-next-for-businesses-and-consumers-abc8a0417a4

The goal of perceptual user interfaces is to become as natural and inconspicuous as humanly possible.

Image: popular smart assistants currently on the market.

Voice-controlled smart assistants have been around for several years. Apple launched Siri on the iPhone 4S back in 2011, Google (now Alphabet) released Google Now in 2012 on the Galaxy Nexus, and Amazon's Alexa was first introduced in 2014. Even before these advancements, we can all remember asking customer service recordings to speak to a representative or to hear the next showtime for a movie. Speech recognition technology has been in use for many years, but it was not until 2016/17 that it became fully domesticated.

Image: smart speakers from Google, Amazon, and Apple.

Smart devices, such as the Amazon Echo units, Google Home units, Apple Homepod, and even Samsung’s upcoming Bixby speaker can control everything in your home. Sales for all of these products jumped to monstrous numbers in the past two years. Studies estimate that more than 15 million Amazon Echo units have already been sold in the U.S. and Amazon is working to integrate Alexa into just about every other smart device on the market. According to VoiceLabs, Google Home sales quadrupled in 2016 from the previous year and 5 million devices have been sold to date. Because of these numbers, some sources claim that voice technology is the next frontier.

The thing is… voice tech is already here. Sure, it needs a lot more work until the interactions become natural or even remotely similar to those in Her or Blade Runner, but with recent developments like Hinglish support, digital assistants like Hikari Azuma, and the work of Hiroshi Ishiguro, this idea is closer to reality than science fiction. The fact is that voice technology has already invaded tens of millions of homes and has been in use for several years. The question now becomes: what comes after voice tech? We can already control the devices in our lives with touch and speech, but what will be the new way to control the home when voice becomes old hat in a few months? Judging from the upcoming startups and consumer goods on the market, the answer is: perceptual user interfaces.

Perceptual User Interfaces

The most natural interactions we have are with other people. Whether we are communicating by voice, using gestures to explain our ideas, shifting our gaze to where our conversation partner is pointing, or smiling at someone we recognize, we perform these actions best with other humans. When these intuitive ways of communicating are applied to human-computer interactions, they are called perceptual user interfaces. We have already tackled voice communication with computers. The next perceptual user interfaces coming to the table involve gesture control, eye tracking, and facial recognition.

One company making headway with perceptual user interfaces is Oblong Industries. With a unique history spanning MIT research and Hollywood filmmaking, the company has the perfect background for turning fantasy into reality. Like the futuristic workspaces it designed for film, Oblong has created a perceptual user interface where digital displays are not limited to the screen, and people can interact with them through a mix of voice, touch, and gesture. CEO John Underkoffler describes this new interface as having no interface, because humans can interact with their digital objects as they would with physical objects in the real world. Thus, the goal of perceptual user interfaces is to become as natural and inconspicuous as humanly possible.

Gesture Control — “Talk to the hand.”

How many times have you seen a movie character (especially superheroes) throw, grab, or alter a digital object in the physical world? A good example of this is Tom Cruise manipulating the pre-cog video files in Minority Report. Though this film came out 15 years ago, we do not generally control the devices in our lives like this. Yet. True, there were devices like the Xbox Kinect (RIP) which used motion capture technology to let users play a few games without controllers, but that’s not total home integration. Imagine swiping away a video with the wave of your hand or pointing at the dishwasher and having it start.

It's worth pointing out that these developments are not new; in fact, they have been around for decades. The first gesture-interface device was developed by MIT researchers in 1977, in the form of a wired glove. Obviously, this was not for consumer consumption, and it took several decades for gesture control to become readily available to the general public. And when products like this finally did become available, they were not widely adopted. For example, in 2013, Google acquired a gesture-based startup called Flutter. With Flutter integration, users could play, rewind, and otherwise control music tracks or videos with their hands. The problem was that users had to memorize a list of awkward hand shapes just to manage a few control options. It was not worth it just to play music, and no one used it. Most importantly, it was not intuitive.

Since then, the technology for gesture control has only gotten better and will soon become mainstream. Two companies, in particular, have been making headlines with their more advanced gesture control. Thalmic Labs and Leap Motion have taken new approaches from the original MIT glove to make gesture control mainstream. Thalmic Labs is the company behind the Myo armband, a wearable that lets you control smart devices with a literal wave of your hand. Leap Motion, on the other hand, has incorporated gesture reading in their AR and VR games so you can use your hands in virtual reality as you do in your real reality. With this technology, brands can capture data about how people use their products and what they want their products to do in real time. This new data insight creates another dimension of connecting with consumers as well as more information to understand their behavior.

Eye Tracking — “Look ma, no hands!”

In the same way speech is being used to control devices, the way people use their eyes can be harnessed for technological purposes. Consumers will soon be able to communicate intent simply by looking at an object. Not only is it a new way to interact with smart objects, but it is another method of data collection. Eye tracking will allow businesses to better measure visual attention, which in turn provides insights about consumer habits and what literally catches the consumer’s eye.

Two companies that are paving the way for this future are Tobii and FOVE. Tobii is an eye tracking device and software company and FOVE makes eye tracking VR headsets. While the gaming implications are apparent, there is also much potential for brands and consumers to harness this technology like with voice tech. In the same way you can ask Alexa to play a specific show or purchase items from stores, consumers will soon be able to use their gaze to select titles or add items to their cart without using their hands.

Similarly, businesses can use eye tracking to both understand consumer intent in real time and to inform UX design. Knowing where people look and what they see optimizes product and advertisement placement, packaging design, etc. The technology is also much more sensitive than traditional viewing tests because it can measure if people actually look at an ad when it’s presented to them, where they’re looking and at what features, and how much attention they’re actually giving to the ad.

Facial Recognition — “Read my lips.”

Except for those #blessed enough to have a perfect poker face, facial configurations tend to give away our true feelings. Conversation analysis has shown that facial cues are an integral part of natural human-to-human interaction. Even acting classes emphasize the importance of using the correct facial expression at the correct time, further showing how essential facial configuration is to natural communication. Thus, it makes sense that facial recognition technology is one of the next iterations of human-computer interaction.

Like the other perceptual user interfaces, facial recognition technology had been around for several decades before it was integrated into consumer products. Especially since the iPhone X announcement, facial recognition has been a big topic as an alternative to passcodes and for security purposes, but a less talked-about (and possibly bigger) application is customer personalization. An example of this is Uniqul, a company that has designed a facial recognition platform used mainly for making payments; the use of this technology, however, can easily go beyond payments to additional customized experiences. For instance, targeted and hyper-targeted advertising are already common practice and can easily be extended with facial recognition technology for individualized targeting.

Even more exciting, facial recognition technology is being applied to detect more than just unique faces. In the same way that humans can generally understand the feelings of others based on facial features, facial recognition algorithms are also being taught to recognize and adapt to different emotions. This technology is aptly named emotion recognition tech, and is being incorporated into platforms, particularly for advanced targeting. The smart assistant Jibo boasts facial and emotional recognition technology to adapt to different user preferences and provide personalized assistance based on the user’s emotional state. Makeup brand Benefit even launched the Brow Translator to use facial recognition technology to interpret emotions through eyebrow expressions and offer products and inspirational quotes based on the experience. The possibilities are endless and from a data perspective, the real-time insights of how people perceive a product are invaluable.

Conclusion

The purpose of UX Design is to make interfaces as natural as possible. What better way to model this intuitive interaction than to base it on how we already interact with each other in the real world? Voice tech was the first iteration of perceptual user interfaces, and it is being quickly followed by gesture control, eye tracking, and facial recognition technologies. A world fast approaches where our technology will know us so well that we won’t have to bother making decisions like what to watch or where to eat. Now we just need to wait until brainwave-reading robots become mainstream to achieve the ultimate user experience.


Artificial Intelligence & Marketing: A.I. is definitively moving the 4Ps (Kotler) to the S.A.V.E. model through a hyper-personalized relationship with the consumer

Yesterday I had the great pleasure of giving a lecture with Professor Hugues Bersini about A.I. and marketing…

Officially launched 50 years ago, AI is now present in marketing. We estimate that it will take about 10 years to reach the plateau of productivity, but tests are supposed to start now.

 

How A.I. Will Transform the Landscape of SEO? (source: Search Engine Journal)

source: https://www.searchenginejournal.com/search-ai-will-transform-landscape-seo/185531/


 

Artificial intelligence, or AI, has been a trending topic in the search engine optimization industry. It’s also widely misunderstood.

A lot of the widely-held beliefs about AI are based on speculation, surmised from patents and search engine behavior. Unfortunately, speculation frequently leads to fear — and fear sometimes empowers the industry’s con artists. I wonder how long it will be before we start seeing practitioners claiming to offer “AI-proof” SEO services.

The most important thing to understand about AI is that it is not a static formula to solve. It’s a constantly evolving system designed to identify, sort, and present the data that is most likely to meet the needs of users at that specific time, based on a multitude of variables that go far beyond just a simple keyword phrase.

AI is trained by using known data, such as:

  • content
  • links
  • user behavior
  • trust
  • citations
  • patterns

and then analyzing that data using user experience, big data, and machine learning to develop new ranking factors capable of producing the results most likely to meet user needs.


The Past

Search algorithms of the past were pretty unsophisticated. Drop a keyword phrase into your title, heading, and alt tags, then pepper it throughout your content, and you could almost be assured top ranking. That is, until your competitors did the same, and then it became a virtual arms race to see who could stuff a keyword phrase into a page as many times and in as many ways as possible.

SEO practitioners became creative at finding new ways to squeeze a few more instances of a keyword phrase into a page, even if it meant using ridiculous tactics that served no purpose other than to increase keyword density. They hid text by coloring it to blend into the background, positioning it off screen, or even using z-index to change the stack order of elements, along with a plethora of other equally sketchy methods.

Fortunately, it didn’t take search engines long to build effective countermeasures into their algorithms to defeat these rudimentary tactics. A more challenging obstacle, since Google’s algorithm relied heavily on links, was separating the legitimate editorial links from manipulative link spam.

After spending a few years battling both black hat SEO practitioners and honest but misinformed marketers, Google implemented a scorched earth policy with the release of Penguin 1.0, destroying thousands of legitimate businesses in the process.

The Present

As the algorithms evolved to measure less gameable ranking signals over the past several years, many of the industry’s bottom feeders were killed off. However, these new algorithms also made it necessary for legitimate digital marketers to dig deeper and put more effort into quality in terms of technical SEO, content development, and link building. All three components are still essential today to build an effective search engine optimization campaign.

Technical SEO has evolved from simple formulas, like keyword density, to an ongoing holistic effort to improve user experience while making it easier for search engines to understand what your content is about. Factors that indicate a positive user experience, like mobile responsiveness, page speed, and time on site play a significant role in technical SEO today.

Link building is no longer just a matter of volume. Penguin changed that. Today, only legitimate editorial links, which take significant time and effort to earn, will produce safe, long-term results. Link building techniques that fall outside of Google’s webmaster guidelines may produce short-term results, but will eventually earn you a nasty penalty, resulting in zero organic visibility. That’s an expensive risk in my book.

And while the tired phrase “content is king” still has meaning, a more accurate statement would be “engagement is king.” Simply writing a few hundred words sprinkled with a keyword phrase won’t produce results anymore. Search algorithms today are looking for high-quality content that engages users. If it’s not valuable to users, it generally won’t rank well in organic search.

The Future

Artificial intelligence will completely revolutionize search engine optimization. Instead of a static formula, it utilizes user experience, big data, and machine learning to produce results that meet user needs more precisely while learning and improving on the fly.

Buckle up, kids, this is going to be an interesting ride!

Keyword Phrases Are Dead

The days of developing keyword-centric content are, for the most part, long behind us. Google’s Knowledge Graph, based on latent semantic indexing, initially led the charge in this direction, but RankBrain turned it into an all-out blitzkrieg.

RankBrain is Google's machine-learning artificial intelligence system that helps process its search results. According to Greg Corrado, a senior research scientist at Google who is involved with RankBrain, it uses an entirely new way of processing queries.

RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.
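
To make the vector idea concrete, here is a toy sketch in Python — an illustration of the general technique only, not Google's actual system. The five words and their 4-dimensional embeddings are invented for this example; production systems learn embeddings with hundreds of dimensions from massive corpora. The point is that a never-before-seen query can be matched to a known one by comparing phrase vectors:

    # Toy illustration: matching an unseen query by vector similarity.
    # The embeddings below are invented; real systems learn them from data.
    import numpy as np

    EMBEDDINGS = {
        "cheap":   np.array([0.9, 0.1, 0.0, 0.2]),
        "budget":  np.array([0.8, 0.2, 0.1, 0.1]),
        "hotel":   np.array([0.1, 0.9, 0.3, 0.0]),
        "lodging": np.array([0.2, 0.8, 0.4, 0.1]),
        "paris":   np.array([0.0, 0.3, 0.9, 0.7]),
    }

    def phrase_vector(phrase):
        """Average the word vectors of a phrase; unknown words are skipped."""
        vectors = [EMBEDDINGS[w] for w in phrase.lower().split() if w in EMBEDDINGS]
        return np.mean(vectors, axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    known = "cheap hotel paris"
    unseen = "budget lodging paris"  # phrasing the system has never seen
    print(cosine(phrase_vector(known), phrase_vector(unseen)))
    # Close to 1.0: the two queries land near each other in vector space,
    # so content that answered the known query is a strong candidate here.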

Google tells us that RankBrain has quickly become the third most important factor in their overall algorithm for ranking webpages (right behind links and content). This is because unlike previous versions of their algorithm, it is very effective at analyzing a query and returning the most relevant content even when it doesn’t contain the keyword phrases used in the search.

This means that instead of repetitively trying to force a particular keyword phrase into your content, you can and should focus on writing naturally, as you would if organic search weren't a factor. Google can obviously rank a webpage based on the content it contains, but under the right circumstances, it can also rank a page based on information that isn't even on the page. Because of this, you should include related terms and concepts whenever you can do so naturally, because doing so adds value for users, which sends Google more positive ranking signals.

Search Intent

Today, if you want to become and remain competitive, you need to understand and plan for search intent. It’s not just what the visitor is looking for, but why are they looking for it? Google has been focusing on this for a while now, but you can expect it to become even more important as the role of artificial intelligence increases in newer iterations of their algorithm.

You need to think beyond the initial search term, and also think about what problem a visitor is most likely trying to solve.

There are four types of search queries with regard to search intent:

  • Navigational Queries are performed when a user is trying to find specific content on a specific URL. In some cases, the user may not know the exact domain.
  • Informational Queries cover a wide range of topics, but obtaining the information is the singular goal. No activity beyond clicking and reading is necessary.
  • Commercial Queries often include both queries from users with an immediate intent to purchase and queries from users gathering information to make a purchase later.
  • Transactional Queries could consist of activities like signing up for a newsletter, creating an account, or paying a bill.

What Comes Next?

You can’t stop thinking yet — you’ve only reached the tip of the iceberg.

You need to anticipate what a visitor might need after their original search intent has been satisfied. What questions might they have at that point?

If I search for "fbi agent from swordfish," I will get a lot of information on the 2001 action movie about a rogue counter-terrorism unit attempting to steal $400 million from old DEA dummy corporations.

Wikipedia holds the number one organic position, while IMDB holds numbers two and three. Obviously, both of these websites are highly authoritative due to the volume and quality of their content and links, but is that the only reason they consistently rank highly for these types of terms?

I don’t think so.

Think of all the different directions you could potentially go from a single page. Let’s use IMDB for this example. From the page about the movie Swordfish, which already has a tremendous amount of content on the page, we can also find links to extensive information about:

  • All of the actors, writers, directors, etc. involved in the movie
  • All of the other movies and television shows any of those people have worked in/on
  • Reviews of the movie from multiple third-party sources
  • Polls and message boards on topics relevant to the movie
  • Similar movies and television programs

Consider the fact that all of this information is interconnected, so almost any question that may cross your mind as a result of your initial search can probably be found without even leaving the IMDB website. I often find myself deep in a rabbit hole after visiting this website for that exact reason.

I’d say they’ve done a pretty good job of anticipating what a visitor might need after their original search intent has been satisfied, wouldn’t you?

IMDB operates on a massive scale in terms of content development, which is essential because they are also operating in a highly competitive market. But what do you think would happen if you applied their model, on a much smaller scale, to your own website?

I’m confident in saying that the AI gods would smile upon your website. (What would it look like when bits of data smile?)

Think about it like this — the goal of AI is to give the user a piece of content most likely to meet their needs, right? So which would make more sense:

  1. Send them to a particular page just because it lives on an authoritative domain,
  2. or send them to a particular page on a domain that also contains content that seems to answer the type of queries users tend to search for after finding the answer to their initial query?

I think the answer is clear.

To take advantage of this strategy, you need to segment your content into the phases of the typical buying decision, start at the beginning, and then develop content around all of the potential questions that may come up throughout the process.

The phases of your visitors’ buying decision are:

1.) Curiosity

At this stage, visitors may have stumbled across your paid advertisement, a link someone shared on social media, or an organic search result. They may not yet know anything about your company, your products or services, or even your industry, so their questions will revolve primarily around what your products or services do.

2.) Interest

Visitors at this stage may be interested in what you have to offer, but may be unaware of the value or whether it's a good fit for them. Their questions now will be focused mainly on how your products or services may be able to help them specifically. Information on specific features and benefits is especially valuable at this point.

3.) Buying Decision

By this point, visitors understand the value in your products, have likely even evaluated some of your competitors, and now they’re looking for information to help them make a final decision. A good approach here is to present information about what differentiates you from competitors, such as testimonials, guarantees and warranties, and value-added incentives.

Deeper Search Intent Variables

A search query phrased exactly the same way, but conducted under different circumstances, may indicate a different intent.

For example, if I search for “Thai food” from my desktop, mid-afternoon on a Friday, I might be looking for a nice place to take my wife to dinner later that night. In this case, I’m probably most interested in large pictures of the interior and the food to see if it’s appropriate for date night. On the other hand, if I conduct that same search from my iPhone while driving at 12:05 on a Tuesday, I’m probably most interested in driving directions and a summary of reviews for a place I can get in and out of fairly quickly.

Some of the factors that can act as variables in search intent:

Time, Day, Date, Etc.

I’ve already shared one example of how time can play a role in AI-powered search, but other factors, like the day of the week, date, month, or even year can play an equally important role.

The algorithm could apply extra weight to local events within a certain radius during the times they're being held, so if I searched for "things to do near me" from September through October, Google may display results for Halloween Horror Nights at Universal Studios, but not display results for them at other times when they don't have a special event going on.

Location

If you search for gas stations from a mobile device, it’s a safe assumption that you’re running low on gas, which is part of the reason results are ordered by distance from your current location.

But what might be some other examples where location can play a role in search intent?

I run a digital marketing agency in Tampa, Florida, so my site is significantly more likely to come up in the search results for visitors in the Tampa area compared to another competitively similar (age, trust, content, links, etc.) digital marketing agency in a different city. The inverse is also true.

Device (iPhone, Android, Alexa, Google Home)

If you ask Alexa or Google Home about a product, it’s generally going to assume you’re interested in purchasing it, or at least collecting information to make a buying decision at a later date.

At this point, the algorithm has one singular goal — to present you with the one product that you’re most likely to purchase.

Why only one instead of the traditional ten or so listings? Because while it’s easy to scan a search results page on your screen, it’s simply far too cumbersome to do that with voice search.

This means that if you aren’t using PPC, you probably won’t achieve any visibility for even moderately competitive searches because there’s only room for one position. Even with PPC, however, companies with lots of complaints, poor reviews, or a limited track record probably won’t make the cut in voice search because they are less likely to generate conversions. Google’s primary goal is to satisfy their customer — and that means helping them find what they want the first time around.

Voice Search

Artificial Intelligence coupled with voice search will dramatically change the landscape of search within the next few years.

Unlike traditional algorithms of the past, AI has the unique ability to improve its results on the fly, and voice search helps it do that more quickly and more accurately. These improvements are driven through a combination of user experience and big data.

User experience, as it relates to artificial intelligence in search, can be measured in a variety of ways, such as:

  • Does the user seem to find what they needed from the result provided, or do they quickly return to make another query?
  • If the user didn’t find what they needed from the result provided, did they ask for the next result, modify their initial query, or did they rephrase it entirely?
  • Does the user transition from voice to traditional search? Did they then seem to find what they needed at that point?
  • Is the user’s voice relaxed and at a normal conversational volume, or agitated and raised? Has it changed during the search?

Voice also introduces a new layer of complexity into the equation because search terms are phrased differently and are far more varied. For example, someone searching for my services using traditional text search might use a search phrase like “tampa web design” but when using voice search, they would likely use a more conversational search phrase like “which web design company in Tampa designs websites for contractors?”

While AI theoretically has evolved to the point where it understands that those queries are basically the same thing, it’s still a wise idea to engage in a little hand holding, especially in the beginning. This is usually a simple matter of proper copywriting.

The Death of Websites

As someone who earns a significant portion of my revenue designing websites, it pains me to say this, but websites, as we know them today, face an inevitable demise.

You might think that sounds crazy, but it was just a few years ago when most people thought catering to mobile traffic was crazy, and today, mobile accounts for more than half of all web traffic.

When I talk about the death of websites, I’m not saying that the need for a powerful digital presence will die. That will continue to become even more important as time goes on. I’m not even saying that you won’t be able to access information about a company using a traditional web browser. What I am saying, however, is that the traditional way of thinking about websites will die.

The idea of a piece of content living at a particular URL will be replaced by a more data-centric concept. Think less like HTML/CSS and more like schema, XML, or some other form of structured data yet to be invented. As AI evolves, is refined, and becomes the dominant component in search algorithms, search engines will simply extract the data most likely to meet users’ needs, and present it directly to them rather than giving them a list of potential matches.
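
As a minimal sketch of what that data-centric direction already looks like in practice, here is a schema.org description serialized as JSON-LD, built in Python. The business details are placeholders invented for illustration, not a real company:

    # Minimal sketch: describing page content as schema.org structured data
    # (JSON-LD) that a machine can extract directly. All values below are
    # placeholders.
    import json

    local_business = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Web Design",       # placeholder business name
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Tampa",
            "addressRegion": "FL",
        },
        "telephone": "+1-555-0100",         # placeholder phone number
        "openingHours": "Mo-Fr 09:00-17:00",
    }

    # Embedded in a page inside <script type="application/ld+json"> tags,
    # this lets a search engine lift the answer straight out of the data
    # rather than parsing it from HTML prose.
    print(json.dumps(local_business, indent=2))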

If you think this sounds far-fetched, consider that Google is already using a similar concept for Android Instant Apps, by requiring developers to build their apps modularly and host them on Google’s highly-optimized servers. Cindy Krum, one of the leading voices in all things mobile, explained the numerous advantages this offers to Google, marketers, and users on a recent episode of Webcology. It’s only logical to assume Google will soon implement some sort of requirement for data structure in websites as well, and make it a significant organic ranking factor, just as they’ve done with responsive design and page speed.

Once this evolution takes place, websites that don’t implement whatever new standard the search engines decide on will essentially become invisible. Implementation won’t be enough though, because instead of about ten results, the search engines will, in most cases, only return one. So like Highlander, in the end, there can be only one.

Big Data

People flock to data-driven marketing because it works. Large data sets enable search engines to spot patterns they couldn’t otherwise identify, and aside from Facebook, there is no company with more data than Google.

Powered by artificial intelligence, a search algorithm could utilize big data to identify and leverage trends and compare similar users using criteria like:

  • Geography
  • Profession
  • Education
  • Hobbies and interests
  • Medical and health conditions
  • Cultural beliefs
  • Search and browsing history
  • Age
  • Political affiliation
  • Social media activity
  • Reviews (on Google, as well as third-party websites, like Facebook, Amazon, Yelp, etc.)
  • Race
  • Connections to other users
  • Sexual orientation
  • Employer
  • Purchase history
  • Gender
  • Date, day, or time
  • Religious beliefs
  • Marital status

Big data is then used to rapidly identify patterns in user behavior, trends, and satisfaction in search results in order to provide better results for future searches in real time.

Self-Created Algorithms

One of the most exciting, and yet most concerning aspects of AI is the fact that it will use machine learning to develop new ranking factors entirely on its own.

According to Google, not only will their AI develop new ranking factors on its own — it will do so inside a proverbial black box. This presents a special problem for search engines and digital marketers alike, because if the engineers don’t know what their AI is using as ranking signals, how can they issue clear guidelines for marketers? And if marketers have no official guidelines, how can they follow them?

Dave Davies, co-host of the long-running SEO podcast Webcology, weighs in:

There’s an interesting phenomenon on the horizon and that’s crossing over the point where Google’s AI takes over and begins creating its own factors and ranking signals.

Historically it’s been a game of cat-and-mouse between those at Google who develop their ranking algorithms and SEOs who seek to understand them and optimize for them. This game was based on a core principle that the algorithm itself was a knowable thing; a very complex formula that applied weights to various attributes, and that could be reverse-engineered with enough time. Of course, no one had that time between updates but the core principle was that it could be and that at least the basic signals and approximate weights could be understood and optimized around.

The interesting thing about an AI-dominant environment is that even the engineers of the AI itself can never fully compute how the machine got to the specific conclusion it did in a specific instance and if they can’t, marketers and SEOs certainly won’t be able to. And that’s when the AI, which itself was designed by humans, takes it one step further. This is being worked on presently, and any ability to reverse engineer even a core understanding of the algorithm will be significantly reduced from what SEOs have worked with historically. Not only will the weights be highly variable and in constant flux but the signals being weighed will be added and removed on the fly and generated outside of any human input, meaning there will be tests and signals unlike any we have seen in the past and beyond what we may even be able to predict.

This factor alone will be a complete game changer for the SEO industry. I wouldn’t be the slightest bit surprised to see Google’s interaction with the industry disappear entirely, to be replaced with a boilerplate “we don’t know what factors go into the algorithm, just make great content and you should be fine” type of response.

Combatting Black Hat SEO

Contrary to popular belief, black hat SEO practitioners tend to be a rather brilliant crowd. They understand search engine optimization on an advanced level, have the insight to identify opportunities others can’t see, and possess the skills to exploit those opportunities at scale. To top it all off, they’re always looking for new ways to outsmart the search engines.

This makes them a significant and formidable enemy for search engines, but AI has the potential to give search engines a massive strategic advantage in this long-running battle.

MIT's Computer Science and Artificial Intelligence Lab recently created an algorithm that can predict how humans will behave in certain situations; it was trained by "watching" 600 hours of TV shows pulled from clips on YouTube, including The Office, The Big Bang Theory, and Desperate Housewives.

After training the algorithm, researchers presented it with a series of new videos and froze the clips just before an action was about to take place. They found that the algorithm was able to correctly predict what would happen next 43% of the time.

While it's not quite IBM's Blue CRUSH (Crime Reduction Utilizing Statistical History), when you consider the massive and constantly growing pool of data Google has on black hat tactics, it's easy to see how this concept could become Google's "precrime" response to black hat SEO.

AI could be trained on previous black hat techniques and monitor new techniques that show up, but the potential goes a lot further than that. It can also identify patterns, and then use those patterns to predict future techniques people may attempt. Over time, as the algorithm learns more about human behavior related to exploiting black hat SEO, it will become particularly effective at eliminating it, which could lead to some scary unintended consequences…

Unintended Consequences

AI can behave brilliantly at some times while behaving more like a drunk toddler on a sugar high at other times.

The results have the potential to be catastrophic, but don't take my word for it. Known for their brilliance, especially with regard to technology, Elon Musk and Stephen Hawking have said that AI is like "summoning the demon." Even Google's own engineers have addressed these concerns by building a kill switch into their own systems.

Researchers at Google’s DeepMind team developed artificial intelligence that can learn to play classic Atari games like Space Invaders and Pong. This AI doesn’t need to be taught the rules before playing because it’s equipped to remember and learn from previous attempts to play the game and improve over time. When coupled with the goal of maximizing its score, this AI produced an unexpected outcome.

For example, in the game Seaquest, the AI figured out that it could prevent its submarine from running out of oxygen and stay alive forever by keeping it near the water’s surface.

So how could this type of unintended consequence play out in search?

Let’s start with what we already know. Search engines want to provide their users with a positive user experience, and one of the factors they see as an indication of a positive user experience is how long a visitor stays on a website.

It doesn’t take much effort to imagine how this could go wrong. Great information will obviously keep visitors on a page longer, but so could:

  • Slow page speed
  • Broken or ineffective navigation
  • Non-functioning elements
  • Text size/color issues
  • Obtrusive pop ups
  • Disabling the back button

So without the context that seems like common sense to you, AI could incorrectly interpret the effects of poor user experience as a positive ranking signal.

Let’s look at another potential scenario where AI thinks it’s providing the best user experience.

Imagine that you have a page with amazing content, and based on multiple signals, Google’s algorithm is confident that it serves visitors’ needs perfectly so it ranks that page highly for relevant terms. You later decide to add an opt-in form that users must fill out in order to access that content.

The bounce rate now goes up dramatically, and along with several other signals, AI makes the incorrect interpretation and looks for a way to provide the best content — even if it’s no longer visible to search engines.

Since the AI has determined that your content is the best result for a particular search query, and Google already has the old page in its cache, it decides to simply send searchers to an older cached version of your page hosted on their own servers. They could even mask the URL for Chrome users so they appear to be on your website, rather than theirs.

Offensive Capabilities

Taking the concept of unintended consequences a step further — what happens when AI, which has the singular goal of presenting the best search results possible, decides to strike offensively against websites that it deems to be using “inappropriate” techniques in an effort to protect the quality of its search results?

These offensive strikes could range from something as mild as a penalty demoting ranking for a particular set of keyword phrases, to something more vicious, such as systematically scrubbing all organic search results (online reviews, press releases, write ups about the company, etc.) related to that website that the algorithm deems to be violating its guidelines.

This may sound ludicrous, but it’s well within the realm of likely scenarios, and while it’s not quite as ominous as Skynet becoming self aware, the adverse impact on digital marketing has the potential to be massive.

Experienced SEO professionals have already seen firsthand that Google's engineers are not the slightest bit squeamish about destroying innocent business owners. Penguin is a good example of how far they're willing to go, but bad as that was, imagine the collateral damage of an AI algorithm with zero empathy for website owners.

Censorship

Once AI has created its own guidelines, it has essentially made a determination of right and wrong. From there, the next logical progression is to make a determination of right and wrong on other topics as well, like political viewpoints, social issues, and the morality of a product or service.

Researchers from the Entertainment Research Lab at the Georgia Institute of Technology have already taught AI right and wrong by telling it stories selected by humans that demonstrate normal or acceptable behavior. This technique then assigns rewards, basically the robot version of a gold star, when the AI makes decisions that align with the positive behavior exhibited in the stories.

Just as search engines today will penalize a website for using techniques they disapprove of, search engines of the future may penalize websites for promoting ideas they deem inappropriate.

Conclusion

Artificial intelligence will become a disruptive force in the SEO industry and there are bound to be some unexpected outcomes, especially during the early stages. In the long run though, I believe it will have a positive impact on users, search engines, and even digital marketers.

Success in an organic search powered by AI ultimately comes down to delivering a positive user experience. That means producing amazing content that meets their needs, and making it as easy as possible to access on any type of device.


The ROI of recommendation engines for marketing (Source: Martech Today)

Recommendation engines are a powerful tool for Amazon, Netflix and more. Columnist Daniel Faggella takes a look at the benefits of recommendation engines and explains why marketers should be paying attention.

source: https://martechtoday.com/roi-recommendation-engines-marketing-205787

Netflix’s long list of suggested movies and TV shows is a fantastic example of a personalized user experience. In fact, about 70 percent of everything users watch is a personalized recommendation, according to the company.

Getting to that point hasn’t been easy, and improving on its recommendation system is an ongoing process. Netflix has spent well over a decade developing and refining its recommendations.

In 2006, it launched the Netflix Prize to search for machine learning experts who could improve its previous algorithm. A team of algorithmic scientists bested the company’s algorithm by 10 percent — a small percentage, you may think, but it was convincing enough for the company to expect huge improvements in the future. The team’s efforts earned them a $1 million prize.

Recommendation engines help marketers and organizations deliver recommendations tailored to a user's past online activity and behavior, drawing on in-depth knowledge built from big data analysis.

In this article, I’ll explore how companies can increase their ROI by fruitfully leveraging personalization and recommendations. I’ll break down the potential business benefits of recommendation engines into three categories based on my company’s analysis of dozens of recommendation engine use cases.

Improving with use (positive feedback loop)

The goal of the Netflix Prize was to improve member retention — or in other words, make its service more “sticky” in the long run. But the company wasn’t looking for a one-time improvement.

The promise of recommendation engines is to build a self-improving system, one that — given a sufficient stream of data — can better satisfy users over time. As Netflix’s Carlos Gomez-Uribe and Neil Hunt explained in a published paper (PDF):

If we create a more compelling service by offering better personalized recommendations, we induce members who were on the fence to stay longer, and improve retention.

By offering a list of what may likely appeal to its members through hyper-specific classification, Netflix has narrowed down the myriad on-demand video streaming options. And measuring results through trends in viewership and churn allows the company to refine those results more and more every day.

Aside from adding new members, for the company to sustain its revenue growth, it must be able to retain its existing subscribers. In other words, the smaller the churn, the higher the monthly revenue.

Netflix has very little time to convince users to browse the app and select a video; according to the company, users lose interest after 60 to 90 seconds, having reviewed only around 20 titles.

Gomez-Uribe and Hunt said that applying a more sophisticated recommendation system and a personalized user experience saves the company $1 billion per year that would otherwise be lost to service cancellations.

A look into Netflix’s revenue figures reveals that in the second quarter of 2017, the company had 32.3 percent year-over-year growth, brought about by adding 5.2 million subscribers to its existing 99 million members in the previous quarter.

With more data to train its algorithms on (read: more active users interacting with the Netflix platform every day), Netflix has the advantage of improving its offering faster and faster for existing users. This “winner take all” dynamic, according to the AI investor Gary Swart of Polaris Partners, is one of the reasons why “improvement with use” is so important.

Is there any website that can give you answers to your questions as well as Google? Probably not. It has massive market share and is likely to improve faster than any competitor working from less data. Can any online shopping experience recommend better related products than Amazon? Probably not.

You get the point: Recommendation engines in all niches have the possibility of creating the same kind of extreme differentiation.

Improving cart value (profit)

With $37.9 billion in revenue in the second quarter, Amazon continues its glorious dominance over online retail. Its item-to-item collaborative filtering algorithm, introduced in 1998, presents recommendations to customers based on product lines and subject areas. The system matches a customer's purchased and rated items against similar products, which are then selected to become part of that customer's recommendations.
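
For intuition, here is a minimal sketch of the general item-to-item collaborative filtering technique — not Amazon's production code, and the tiny purchase matrix is invented. Two items count as similar when the same customers tend to buy both:

    # Item-to-item collaborative filtering sketch on an invented purchase
    # matrix: rows are users, columns are items, 1 means "user bought item".
    import numpy as np

    purchases = np.array([
        [1, 1, 0, 0],  # user A bought items 0 and 1
        [1, 1, 1, 0],  # user B bought items 0, 1 and 2
        [0, 0, 1, 1],  # user C bought items 2 and 3
    ])

    def item_similarity(matrix):
        """Cosine similarity between every pair of item columns."""
        norms = np.linalg.norm(matrix, axis=0)
        return (matrix.T @ matrix) / np.outer(norms, norms)

    sim = item_similarity(purchases)

    def recommend(item, top_n=2):
        """Items most similar to the given one, excluding the item itself."""
        ranked = np.argsort(sim[item])[::-1]
        return [i for i in ranked if i != item][:top_n]

    print(recommend(0))  # [1, 2]: item 1 ranks first, since the customers
                         # who bought item 0 also bought item 1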

While on some websites, recommendations exist in only one aspect of the customer journey (for example, a list of “related items” on the site’s checkout page), Amazon has integrated many recommendation “entry points” into its online experience to maximize cart value.

For example, users can click on the "Your Recommendations" link to display a page of categorized products that might be of interest, or consult the section showing items similar to products they have previously viewed.

Users can also check out bundled goods under the "Frequently Bought Together" section, which allows Amazon to cut delivery costs when items are shipped together. None of this would be possible without the billions of data points — from purchase history to abandoned carts — that the online retailer analyzes.

McKinsey estimated that 35 percent of consumer purchases on Amazon come from product recommendations, although the e-commerce giant itself has never revealed its own estimates. In 2016, it released its open-source artificial intelligence (AI) framework, DSSTNE (pronounced "destiny"), for free to encourage the development of artificial intelligence apps.

Alibaba Group, another e-commerce behemoth, continues to dominate China’s e-commerce scene through its Tmall and Taobao platforms. Customers are presented with product recommendations based not just on their past transactions, but also on browsing history, product feedback, bookmarks, geographic location and other online behavior data.

The company uses AI to offer product recommendations to new users who have no previous transaction data. Its technology can receive data points from a user’s product purchases elsewhere and use it to match items in their pool, Wei Hu, director of data technology at Alibaba’s Merchant Service Business Unit, explained in a statement.

During the 11.11 Shopping Festival in 2016, a 24-hour online shopping event in China, Alibaba used AI to display product recommendations on shoppers' personalized pages. Participating merchants customized their storefronts based on their target customers' data. Alibaba told Inside Retail Asia that it generated 6.7 billion personalized shopping pages with a 20 percent conversion rate improvement from the event.

Improving engagement and delight (retention)

YouTube’s recommendation engine has been powered by Google Brain since 2015, and Google considers it to be one of the most sophisticated recommendation systems to date.

Like Netflix, YouTube had plenty of experiments and redesigns before employing an algorithm that searches for similarities among different types of video content. Johanna Wright, YouTube’s vice president of product management, told Wired that the company is now more confident that the recommended videos they’re showing have relevance to the viewer.

Some of the data points that it uses include a user’s viewing history, age of videos, search terms and location to decide which video will play next. Unlike Netflix, YouTube is free and relies entirely on engagement with its content (and exposure to ads) — rather than a subscription billing model.

Through recommendations, YouTube hopes to keep a viewer engaged on the site while the system matches ads based on historical data. YouTube earns revenues when a viewer watches more than 30 seconds of an ad, or when a user clicks an entity on the screen or an ad listed on the page.

Its parent company, Alphabet, lists YouTube as part of the entire Google websites platform, so no separate revenue figures are listed (total Google ad revenues as of June 30, 2017, amounted to more than $22 billion).

On the other hand, music streaming services such as Pandora and Sweden-based Spotify use recommendation engines as a way to improve engagement and retain subscribers. These internet radio companies earn revenue through paid subscription plans and advertising for non-subscribers.

Spotify’s Discover Weekly recommends songs based on a user’s tastes, listening habits, favorite artists, and even the features they use. This is made possible through collaborative filtering and natural language processing. Edward Newett, an engineering manager who contributed to the recommendation feature, explained to Wired:

By trying to mimic the behavior of all of our users when trying to put together their perfect mix, we can leverage Spotify’s 2 billion playlists, target individual tastes and come up with playlists that will be interesting.

Pandora’s own recommendation engine is powered by what it calls The Music Genome Project, which has 450 musical attributes. The algorithm analyzes songs from thousands of artists and programs its online radio stations based on users’ desires.
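
As a hedged sketch of that content-based approach — the attribute names, songs, and scores below are invented, where the real project scores up to 450 attributes per track — a station can simply queue the candidate songs whose attribute vectors sit closest to a song the listener already likes:

    # Content-based recommendation sketch in the spirit of a musical
    # "genome": each song is a vector of hand-scored attributes.
    # All songs, attributes, and scores here are invented.
    import numpy as np

    #                      tempo  distortion  vocal-centric
    SONGS = {
        "Song A": np.array([0.8,  0.9,        0.2]),
        "Song B": np.array([0.7,  0.8,        0.3]),
        "Song C": np.array([0.2,  0.1,        0.9]),
    }

    def queue_for(liked_title):
        """Order the remaining songs by attribute distance to a liked song."""
        liked = SONGS[liked_title]
        others = [t for t in SONGS if t != liked_title]
        return sorted(others, key=lambda t: np.linalg.norm(SONGS[t] - liked))

    print(queue_for("Song A"))  # ['Song B', 'Song C']: B is the closer match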

With these features from Spotify and Pandora, listeners get to hear songs they have never heard before but are most likely to enjoy, especially songs released by emerging independent artists. This is one way of retaining existing subscribers and letting non-members listen to more music with ads.

Spotify says on its website that as of July 2017, it has more than 60 million subscribers and 140 million active users spread across 61 countries who can choose from over 30 million songs. Meanwhile, Pandora’s revenue in the second quarter was $376.8 million.

The breadth of recommendation engines isn’t limited to these industries. Businesses in the food industry, sports, and even fashion are using artificial intelligence to identify possible options for customers. More and more, companies are attempting to improve their ROI using this technology in their marketing campaigns.