Opinion: “We have become used to talking about an ‘attention economy’, but perhaps we should think more of the ecology of attention” – 11 Aug 2022 | Mike Follett

Follett: we need to talk about the carbon cost of attention


We have become used to talking about an ‘attention economy’, but perhaps we should think more of the ecology of attention.

It’s hot. It’s too damn hot. It’s so hot that villages on the outskirts of London are burning up in wildfires. It’s so hot that even climate change sceptics like Professor Byron Sharp might be changing (or re-changing) their mind about the reality of climate collapse. It affects us all; we’re all implicated, and it’s all of our responsibilities to do something about it.

source: Follett: we need to talk about the carbon cost of attention – The Media Leader (the-media-leader.com)

The advertising industry is definitely part of the problem, but we can also definitely be part of the solution. And if we are clever, the solution we come up with may be better than what went before.

The first thing to do is admit that advertising is contributing to the climate crisis. I don’t mean that we’re responsible for creating unsustainable demand: our tardy, apish industry is better at directing – rather than manufacturing – desire. What I mean is that advertising itself produces a lot of CO₂ – in our offices, in our production practices, and crucially in our media buying.

The carbon cost of media buying is a novel idea, but pretty obvious when you think about it. As the good people at Scope3 have begun to point out, digital advertising is a significant polluter in itself: millions of phones receiving billions of ads after trillions of ad auctions every day use up a lot of electricity, which, in turn, means a lot of carbon dioxide pumped into the air.

All that energy for so little engagement

What’s especially tragic is that when the ads finally reach our devices we often ignore them. We incur a definite (carbon) cost but only achieve a potential attention gain. Lumen’s eye-tracking research has shown that, for some formats, as few as 9% of impressions that reach the screen end up being looked at. All that energy for so little engagement.

But there is hope. Not all ads get ignored: formats, publishers, and platforms vary widely in how well they turn the opportunity to see an ad into actual viewing. When this ‘attentive seconds per thousand’ data is combined with ‘cost per thousand’ numbers, buyers can compare the true ‘cost per thousand seconds of attention’ across media alternatives.

And this in turn can be linked to the carbon cost of the media employed, to create a new and potentially powerful means of assessing media: the ‘carbon cost of attention’.
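The arithmetic behind these combined metrics is straightforward. Here is a minimal sketch in Python; the publisher figures are invented for illustration, not real Lumen or Scope3 numbers:

```python
def cost_per_1000_attentive_seconds(cpm, attentive_seconds_per_1000):
    """Price paid per 1,000 seconds of actual attention.

    cpm: cost per 1,000 impressions
    attentive_seconds_per_1000: attentive seconds earned per 1,000 impressions
    """
    return cpm * 1000 / attentive_seconds_per_1000

def carbon_per_1000_attentive_seconds(g_co2_per_1000, attentive_seconds_per_1000):
    """Grams of CO2 emitted per 1,000 seconds of actual attention."""
    return g_co2_per_1000 * 1000 / attentive_seconds_per_1000

# Two hypothetical publishers selling the same format:
cheap_but_ignored = carbon_per_1000_attentive_seconds(500, 800)    # 625.0 gCO2
pricier_but_seen = carbon_per_1000_attentive_seconds(600, 2400)    # 250.0 gCO2
```

On these invented figures, the publisher with higher emissions per thousand impressions is still the greener buy per second of attention actually earned.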

Lumen has been working with Scope3 and Havas to bring this concept to life, launching our ‘carbon cost of attention’ tool at Cannes Lions earlier in the summer. We combine Lumen’s impression-based attention predictions with Scope3’s carbon cost predictions and the pricing information available to a major trading desk like Havas to understand the true financial and ecological cost of the attention that we’re buying as an industry.

Already, we are seeing considerable differences for ads of the same format across publishers:

In the bottom left-hand quadrant of the chart above, we have low attention/low emissions publishers: a sad state of affairs, but not a disaster for the advertiser or the planet. What we want to avoid is shown to the right, an ‘attention desert’: low attention, but high emissions, which is the worst of all worlds. Instead, we should aim for publishers who provide high attention with low emissions: an advertising ‘Garden of Eden’.

An ecology of attention

What puts some publishers in the ‘carbon cost of attention’ good books, and others on the naughty step? Well, there are a number of factors, but some of the biggest include:

1. Clutter: as the chart below shows, the more ads that are served simultaneously on a screen, the less attention each receives. Given that the carbon cost for each ad stays the same, the ‘carbon cost of attention’ therefore skyrockets on cluttered pages.

It’s as if people can’t see the wood for the trees. This is bad for the advertiser (because people aren’t attending to their message), bad for the reader (as they often feel overwhelmed by ads), and, in a bitter irony, bad for the trees.
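A toy calculation makes the point; the halving of attention with each doubling of ads is an assumed curve for illustration, not Lumen’s measured data:

```python
def carbon_cost_of_attention(g_co2_per_ad, attentive_seconds_per_ad):
    """Grams of CO2 per attentive second for one ad slot."""
    return g_co2_per_ad / attentive_seconds_per_ad

# Assume each doubling of ads on the page halves the attention each receives,
# while the carbon cost of serving each ad stays constant.
costs = {n: carbon_cost_of_attention(1.0, 2.0 / n) for n in (1, 2, 4, 8)}
# costs == {1: 0.5, 2: 1.0, 4: 2.0, 8: 4.0}  -> gCO2 per attentive second
```

Under that assumption, going from one ad to eight on the same page multiplies the carbon cost of each attentive second by eight.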

2. Scroll velocity: the slower people scroll the page, the more attention they give to the accompanying advertising.

Again working with Havas, and this time in partnership with Teads on the Project Trinity report, Lumen has found that attention to advertising is in part a function of how slowly people read a page. This in turn has a knock-on effect on the carbon cost of attention: ‘slow media’ leads to ‘sustainable attention’.

3. Streaming video: video advertising tends to get significantly more attention than static display advertising. But downloading a video to your phone can be fearsomely energy intensive, the increased carbon emissions outweighing the increased attention performance.

This is what is so exciting about streaming video services such as SeenThis, which allow advertisers to achieve all the attention benefits of video advertising at a fraction of the carbon cost.


We have become used to talking about an ‘attention economy’ – the cost of attention and the value of attention are now well-established concepts.

But perhaps we should think more of the ecology of attention: one that safeguards the interests of advertisers, publishers, consumers, and the planet.

How popular is ChatGPT? Slower growth than Pokémon GO (source: AI Impacts – Author: Rick Korzekwa)

Rick Korzekwa, March 3, 2023

A major theme in reporting on ChatGPT is the rapid growth of its user base. A commonly stated claim is that it broke records, with over 1 million users in less than a week and 100 million users in less than two months. It seems not to have broken the record, though I do think ChatGPT’s growth is an outlier.

source: How popular is ChatGPT? Part 2: slower growth than Pokémon GO – AI Impacts

Checking the claims

ChatGPT growth

From what I can tell, the only source for the claim that ChatGPT had 1 million users in less than a week is this tweet by Sam Altman, the CEO of OpenAI:

I don’t see any reason to strongly doubt this is accurate, but keep in mind it is an imprecise statement from a single person with an incentive to promote a product, so it could be wrong or misleading.

The claim that it reached 100 million users within two months has been reported by many news outlets, which all seem to bottom out in data from Similarweb. I was not able to find a detailed report, but it looks like they have more data behind a paywall. I think it’s reasonable to accept this claim for now, but, again, it might be different in some way from what the media is reporting.[1]

Setting records and growth of other apps

Claims of record setting

I saw people sharing graphs that showed the number of users over time for various apps and services. Here is a rather hyperbolic example:

That’s an impressive curve and it reflects a notable event. But it’s missing some important data and context.

The claim that this set a record seems to originate from a comment by an analyst at investment bank UBS, who said “We cannot remember an app scaling at this pace”, which strikes me as a reasonable, hedged thing to say. The stronger claim that it set an outright record seems to be misreporting.

Data on other apps

I found data on monthly users for all of these apps except Spotify.[2] I also searched lists of very popular apps for good leads on something with faster user growth. You can see the full set of data, with sources, here.[3] I give more details on the data and my methods in the appendix.

From what I can tell, that graph is reasonably accurate, but it’s missing Pokémon GO, which was substantially faster. It’s also missing the Android release of Instagram, which is arguably a new app release, and surpassed 1M within the first day. Here’s a table summarizing the numbers I was able to find, listed in chronological order:

| Service | Date launched | Days to 1M | Days to 10M | Days to 100M |
|---|---|---|---|---|
| Netflix subscribers (all) | 1997-08-29 | 3669 | 4185 | 7337 |
| Netflix subscribers (streaming) | 2007-01-15 | 1889 | 2351 | 3910 |
| Instagram (all) | 2010-10-06 | 61 | 362 | 854 |
| Instagram (Android) | 2012-04-03 | 1 | — | — |
| Pokemon Go (downloads) | 2016-07-05 | — | 7 | 27 |

Number of days to reach 1 million, 10 million, and 100 million users, for several apps. Some of the figures are exponentially interpolated, due to a lack of datapoints at the desired values.

It’s a little hard to compare early numbers for ChatGPT and Pokémon GO, since I couldn’t find the days to 1M for Pokémon GO or the days to 10M for ChatGPT, but it seems unlikely that ChatGPT was faster for either.


Scaling by population of Internet users

The total number of people with access to the Internet has been growing rapidly over the last few decades. Additionally, the growth of social networking sites makes it easier for people to share apps with each other. Both of these should make it easier for an app to spread. With that in mind, here’s a graph showing the fraction of all Internet users who are using each app over time (note the logarithmic vertical axis):

Fraction of Internet users who are monthly users of each service, over time. The vertical axis is on a log scale.

In general, it looks like these curves have initial slopes that are increasing with time, suggesting that how quickly an app can spread is influenced by more than just an increase in the number of people with access to the Internet. But Pokémon GO and ChatGPT just look like vertical lines of different heights, so here’s another graph, showing the (logarithmic) time since launch for each app:

Fraction of total global population with access to the Internet who are using the service vs days since the service launched. The number of users is set somewhat arbitrarily to 1 at t=1 minute

This shows pretty clearly that, while ChatGPT is an outlier, it was nonetheless substantially slower than Pokémon GO.[4]

Additional comparisons

One more comparison we can make is to other products and services with very fast user uptake, looking at how their reach has increased over time:

  1. YouTube views within 24 hours for newly posted videos gives us a reference point for how quickly a link to something on the Internet can spread and get engagement. The lower barrier to watching a video, compared to making an account for ChatGPT, might give videos an advantage. Additionally, there is presumably more than one view per person. I do not know how big this effect is, but it may be large.
  2. Pay-per-view sales for live events, in this case for combat sports, are a reference point for something that people are willing to pay for to use at home in a short timeframe. The payment is a higher barrier than making an account, but marketing and sales can happen ahead of time.
  3. Video game sales within 24 hours, in some cases digital downloads, are similar to pay-per-view, but seem more directly comparable to a service on a website. I would guess that video games benefit from a longer period of marketing and pre-sales than PPV, but I’m not sure.

Here is a graph of records for these things over time, with data taken from Wikipedia[5], which is included in the data spreadsheet. Each dot is a separate video, PPV event, or game, and I’m only including those that set 24-hour records:

Records for most sales, views, and users within the first 24 hours for video games, PPV bouts, YouTube videos, and apps, plus a few points for users during first week for apps (shown as blue diamonds). Each data point represents one event, game, video, or app. Only those setting records in their particular category are included.

It would appear that very popular apps are not as popular as very popular video games or videos. I don’t see a strong conclusion to be drawn from this, but I do think it is helpful context.

Additional considerations

I suspect the marketing advantage for Pokémon GO and other videogames is substantial. I do not remember seeing ads for Pokémon GO before its release, but I did a brief search for news articles about it before it was released and found lots of hype going back months. I did not find any news articles mentioning ChatGPT before launch. This does not change the overall conclusion, that the claim about ChatGPT setting an outright record is false, but it should change how we think about it. 

That ChatGPT was able to beat out most other services without any marketing seems like a big deal. I think it’s hard to sell people on what’s cool about it without lots of user engagement, but the next generation of AI products might not need that, now that people are aware of how far the technology has come. Given this (and the hype around Bing Chat and Bard), I would weakly predict that marketing will play a larger role in future releases.

Appendix – methods and caveats

Most of the numbers I found were for monthly users or, in some cases, monthly active users. I wasn’t always sure what the difference was between these two things. In some cases, all I was able to find was monthly app downloads or annual downloads, both of which I would naively expect to be strictly larger than monthly users. But those download figures reflected longer-term growth anyway, so they shouldn’t affect the conclusions.

Some of the numbers for days to particular user milestones were interpolated, assuming exponential growth. By and large, I do not think this affects the overall story too much, but if you need to know precise numbers, you should check my interpolations or find more direct measurements. None of the numbers is extrapolated.
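For readers who want to check, the interpolation can be sketched as follows (my own reconstruction of the method described, not the author’s actual code):

```python
import math

def days_to_milestone(t1, users1, t2, users2, milestone):
    """Estimate the day a user count crossed `milestone`, assuming
    exponential growth between observations (t1, users1) and (t2, users2),
    with t measured in days since launch."""
    # users(t) = users1 * exp(r * (t - t1)); solve users(t) = milestone for t
    r = math.log(users2 / users1) / (t2 - t1)
    return t1 + math.log(milestone / users1) / r

# e.g. 0.5M users observed on day 30 and 5M on day 90 implies
# the 1M mark was crossed around day 48
estimate = days_to_milestone(30, 0.5e6, 90, 5e6, 1e6)
```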

When searching for data, I tried to use either official sources like SEC filings and company announcements, or measurements from third-party services that seem reputable and have paying customers. But sometimes those were hard to find and I had to use less reliable sources like news reports with dubious citations or studies with incomplete data.

I did not approach this with the intent to produce very reliable data in a very careful way. Overall, this took about 1-2 researcher-days of effort. Given this, it seems likely I made some mistakes, but hopefully not any that undermine the conclusions.

Thanks to Jeffrey Heninger and Harlan Stewart for their help with research on this. Thanks to the two of them and Daniel Filan for helpful comments.

  1. I also found some claims that the 100M number was inferred from some other figure, like total site visits, and that it might be an overestimate. I haven’t actually seen any sources doing this, so I’m sticking with the original number for now.
  2. I skipped Spotify because at first glance it seemed not to be unusually fast, its data didn’t seem very easy to find, and I thought the other apps were sufficient to put things in context.
  3. Be warned that, at the time of this writing, the Google sheet is a bit of a mess and the sources are not cited in the most user-friendly way. If you’d like to use the data and you’re having trouble, please don’t hesitate to ask for a cleaner version of it.
  4. This is still the case if we do not divide by the number of Internet users, which increased by less than a factor of two between 2016 and 2022.
  5. The relevant Wikipedia pages are:


Faced with the major economic, energy, and climate challenges confronting its clients, Havas is offering an exclusive business-steering solution that integrates carbon trajectory.

M4 (Meaningful Marketing Mix Modeling) was born from the observation that corporate marketing and communications departments face increasingly contradictory demands on their marketing-communications investment decisions. They can currently evaluate their business performance, but find themselves far less equipped when it comes to factoring their environmental impact into that evaluation, for lack of a reliable solution.


M4 was developed by CSA Data Consulting, the Group’s data consultancy specializing in predictive platforms for simulating business performance. To do so, it combined its leading expertise in econometric modeling applied to marketing with its expertise in carbon-impact calculation. The solution incorporates carbon standards and the methodology of Havas Impact Carbone, the in-house tool for measuring the carbon impact of cross-media campaigns, in use at Havas Media for the past three years.

The solution thus makes it possible to develop predictive scenarios and simulate performance upstream, in order to make the best decisions over the short, medium, and long term.

“Havas’s mission is to make a difference, in the service of brands, businesses, and society as a whole. To achieve this, meaning must be at the heart of everything we do as a communications group and of everything we offer our clients,” says Raphaël de Andréis, Chairman & CEO of Havas Village France. “Faced with the challenge of sustainable development, our goal is to encourage our industry to practice responsible communication and to raise the profession’s standards. M4 fits perfectly with this ambition to be a privileged and meaningful partner in supporting our clients through the challenges of tomorrow.”

“With our partner Havas, we have developed this new advanced modeling solution, which lets us combine the Brand, Business AND Carbon impact of Decathlon’s media and digital campaigns,” shares Ruoyang Fumery, Performance Leader at Decathlon. “M4 allows us to build predictive scenarios and to simulate results upstream so as to make the best decisions on short-, medium-, and long-term performance, in line with our brand’s communication strategy.”

Thierry Fontaine-Kessar, Director of CSA Data Consulting, concludes: “With the launch of M4, we are giving our clients an exclusive steering solution, unique on the market, to maximize the brand and business performance of their communications while keeping their carbon impact under control.”

M4 is now available in France and will be rolled out internationally very soon.

About Havas
Founded in Paris in 1835, Havas is one of the world’s largest communications groups, with more than 22,000 employees in over 100 countries, united around a single purpose: Make a meaningful difference, for brands, for businesses, and for society as a whole. Havas has developed a fully integrated agency model, with 70 Havas Villages around the world bringing together every communications discipline. The teams of its three divisions, Creative, Media, and Health & You, work with agility and in close synergy to offer clients tailor-made, innovative solutions and to support them in their transformation. Every day, the group works to build a culture in which all of its talents can thrive and grow. Since 2017, Havas has been fully integrated into the Vivendi group, a global leader in media, entertainment, and communications.

More information at www.havasgroup.com

About CSA (Consumer Science and Analytics)
A pioneer in data intelligence since 1983 and in marketing mix modeling since 2004, in France and internationally, Consumer Science & Analytics comprises CSA Research, a leading marketing and opinion research institute; CSA Data Consulting, a specialist in steering media-marketing effectiveness through predictive modeling; and France Pub, the reference in geomarketing. CSA, led by Yves del Frate, has been part of Havas since 2015.

Generation Z. Not Dazed. Not Confused. (Lizzie Nolan & Lindsey Partos)

Numbering 2.5 billion, Gen Z overtook Millennials in 2019 to become the largest generation on earth. Aged between 10 and 26 years old, they are humanity’s first generation born into the digital age, and they have the tools, tenacity, and self-awareness to drive change and shape a better future.

In a new white paper, Lizzie Nolan, EVP Managing Director Strategy & Insights, and Lindsey Partos, Strategy & Editorial Director, explore vital insights on how this nuanced and connected generation is influencing the future of media. We invite you to read “Gen Z. Not Dazed. Not Confused” to learn about Gen Z and the meaningful media that matters to them.

#MeaningfulMedia #HavasProud

Digital channels account for 35% of advertising investment in Belgium (UMA/UBA barometer)

As last year, UMA and UBA have joined forces to produce a benchmark of net digital advertising investment in Belgium:

  • For the agency data (UMA members + 4 specialist agencies), the 2022 figures include the market shares of the various media categories, offline included, expressed against net investment. For the first time over a full year, UMA has also broken down digital investment between local and international players.
  • The 2022 Benchmark also includes the results of a survey of UBA (United Brands Association) members on the value of the digital investments they made outside the agencies’ perimeter, whether handled in-house or via an intermediary located outside Belgium.

It is important to note that the UMA & associated agencies data and the UBA data have different statuses. The agency source is exhaustive, but limited to the perimeter of the participants. The advertiser source rests on the non-extrapolated declarations of a large panel of respondents, but it does not cover all “direct” digital purchases. Some 70 of the association’s more than 350 members responded, which is more than significant. But the reality of direct investment by Belgian advertisers is certainly higher.

Digital channels account for 35.1% of advertising investment in Belgium

Overall, within the UMA perimeter, digital represents 33% of total media investment in 2022. This ratio rises to 35% once the UBA data is included. On the UMA side, this share is up sharply from 32% in 2021, and the combined UMA-UBA figure is up one percentage point on 2021 (34% last year).

| MEDIA GROUPS (UMA + UBA) | % segm | % total | % segm | % total |
|---|---|---|---|---|
| OUT OF HOME | 13.6% | 8.8% | 12.0% | 7.9% |
| TOTAL OFFLINE | 100% | 64.9% | 100% | 65.7% |

Television retains the largest market share (35.6%)

The breakdown of net investment shows that television remains the leading medium in our country: at more than 35% of the all-media total, television is still Belgium’s biggest advertising medium, followed very closely by digital as a whole, at 35% of the total.

Radio emerges from the benchmark as the second individual channel, with a share close to 15%. At 9%, paid social ranks third among individual channels, followed closely by out of home, half a percentage point behind. Also worth noting: advertising investment that benefits publishers’ digital editions is counted as digital, and therefore not as press, television, or radio, to name only those.

Remarkable shifts between channels: social has become the leading digital touchpoint

In 2021, the UBA data (investments made directly by advertisers) accounted for 10% of the digital total. That proportion rose by one point in 2022, to 11%. With significant changes within it: SEA fell from nearly 21% of the total to 16.5%, while paid social climbed from 9% to almost 15%.

The share of display and video in the UBA data also grew in 2022, probably reflecting a diversification of the approaches advertisers handle in-house. Within the UMA data, the most remarkable change is in the “other channels” category, which rose from under 7% to 10% and will probably require a finer analysis of the digital touchpoints grouped under this heading in the future. This rise of the “other” channels proportionally weakens all the rest, except video, which rose from under 21% to 22% of the total in the UMA universe.

[Table: breakdown of digital investment by year and source across Display, Social, SEA, Video, and Other, shown as % vertical and % horizontal]

Breakdown of investment between national and international players

Like the UMA benchmark for the first half of 2022, this UBA/UMA benchmark also includes, for UMA, the breakdown of digital investment between local and international players. For the latter, the questionnaire sent to the participating agencies referred to them as “GAFAM”. Aggregating the responses by digital channel points to a share of around 60% for international players across the Belgian digital advertising market as it appears in the UMA universe (which is not exhaustive). The local players’ share of digital is therefore slightly above 40%. That local share exceeds GAFAM’s in every channel where there is genuine competition.

If all offline investment is considered to go to Belgian players, the “local” share of the total media market comes to 81%, with the remaining 19% invested via GAFAM.

| | Local | International (GAFAM) |
|---|---|---|
| SEA | 0.0% | 100.0% |
| SOCIAL | 0.0% | 100.0% |
| TOTAL MEDIA MARKET H1 2022 | 80.7% | 19.3% |

The study’s respondents, i.e. UMA and the agencies participating in this benchmark, together with those of the UBA panel, were aggregated into the final report by an external consultant under the strictest confidentiality. Each agency filled in a table covering the 44 product sectors defined by UMA (a re-cut of a Nielsen Ad Intel segmentation), declaring the total investments of the advertisers and brands in those groups across the 5 categories of digital formats studied.

Agencies and subsidiaries contributing to the figures in the UMA total:

GroupM (Mindshare, Wavemaker, Maxus, Kinetic), Mediabrands (Initiative, UM, Reprise, Rapport), Space, Dentsu Belgique, OmnicomMediaGroup (OMD, PHD Media, Semetis), Havas Media, Publicis Groupe (Zenith, Blue449), Serviceplan (Mediaplus, Mediascale), Zigt.

Partner agencies in the production of this report:

AdSomeNoise, blue2purple, Pivott, Ogilvy | Social.Lab

Digital attribution is dead! Les Binet tells us why marketers need econometrics in 2023 (source: The Drum)

The Drum columnist Samuel Scott recently fell down a rabbit hole and into the world of econometrics with effectiveness expert Les Binet. Here he explains what marketers need to know as digital attribution degrades.


Les Binet, group head of effectiveness at Adam&EveDDB

What is better – information that is cheap and wrong or information that is expensive and accurate? Soon, marketers may have to decide.

source: The Drum | Digital Attribution Is Dead! Les Binet Tells Us Why Marketers Need Econometrics In 2023

Some months ago at my day job as head of marketing at the IT mapping software company Faddom, we saw a decline in organic search engine traffic. I could not figure out why. There was no evidence of a Google penalty, and it wasn’t an SEO issue.

So I decided to look into some statistical correlations. On a hunch, I made a list of many potential variables that might have affected the traffic – no matter how far-fetched – and plotted their changes against the changes in organic traffic and the numbers of Google searches and clicks for our brand name over the same several months.

One theory was that a decrease in our Google Ads spend might be a secret search engine ranking factor. Another was that spending less on Google Ads resulted in fewer people seeing our brand name, becoming interested in us and then searching for the company. But we found that there was an 11% correlation between Google Ads spend and people searching for our brand name and clicking to our website. There was a 6% correlation between Google Ads spend and total organic website traffic. So those were clearly not the issues.

Instead, there was a 96% correlation between the number of cold sales emails that our business development team would send out and the number of Google searches for our brand name and resulting website clicks. Put simply, we inferred that people would receive an email, wonder who Faddom is, search Google for the name and visit our website.

Now here’s the issue. Google Analytics logged those visits as organic search engine traffic because they did indeed come from the search engine’s unpaid results. But in such specific cases, the cold emails – not SEO – should get the credit. The decline in organic traffic had almost surely come from the team temporarily sending fewer emails.

What can readers learn from my day job headaches?

First, marketers should remember that so-called ’outbound’ and ’interruptive’ marcom can be extremely effective – no matter what nonsense HubSpot has told the industry for the past 15 years to sell its own ’inbound’ marketing software. Second, I remembered the story as yet another example of how analytics dashboards, digital attribution and the entire online world can be misleading at best or completely wrong at worst.

In fact, it will likely be one of the biggest problems that marketers face in 2023, as we see the planned death of third-party cookies, 43% of people now using adblockers that also stop scripts such as Google Analytics from running, and the iOS 14 update that stopped ad tracking on Apple devices.

Econometrics – also known as marketing mix modeling (MMM) – might be a solution. After all, the attribution-based online marketing world that many have known for the past two decades is rapidly disappearing.
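To show what the econometric approach looks like in miniature, here is a deliberately simplified marketing mix model fit on synthetic data (real MMMs add adstock, saturation curves, and seasonality; all numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations

# Synthetic weekly spend on three channels
tv = rng.uniform(0, 100, n)
search = rng.uniform(0, 50, n)
social = rng.uniform(0, 30, n)

# Generate sales from known "true" incremental effects plus baseline and noise
sales = 200 + 1.5 * tv + 0.8 * search + 0.2 * social + rng.normal(0, 10, n)

# Ordinary least squares recovers the incremental effect of each channel:
# how much sales move when spend on that channel is dialed up or down
X = np.column_stack([np.ones(n), tv, search, social])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
estimates = dict(zip(["baseline", "tv", "search", "social"], coef))
```

With enough data, the estimated coefficients land close to the effects used to generate the sales series, which is exactly the “incremental effect” question attribution cannot answer.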

The problems with digital attribution

Les Binet, group head of effectiveness at Adam&EveDDB, told me that attribution modeling began around 1900 with direct response ads in print media that had coupons. Different printing presses in the same city could run ads with different coupons to see which ones worked better.

Starting in the early 2000s, the nascent world of online advertising quickly became addicted to attribution with Cocaine Bear levels of enthusiasm. For example, at AdExchanger’s industry preview this month, Lending Tree senior vice-president of growth marketing Joshua Palau said that “all media should be performance media.”

An entire new generation of ’digital’ marketers now thinks only about what is stupidly called ’performance advertising’ because it is deceptively simple. You put an ad on Twitter. You see how many people click to visit your website and buy. You attribute those purchases to Twitter. Easy.

But it’s actually not that easy. Even before the ongoing death of ad tracking today, attribution modeling on its own has always been deeply flawed.

“If you say this ad generates a million in sales, the true answer could be anything between [zero] to a million,” Binet, whose original training was as an econometrician, told me. “It looks very scientific, it looks very precise, and it’s extremely unreliable.”

As an example, here is a slide that Binet gave me on last-touch attribution.

Here are some of the reasons why attribution is so unreliable.

The Fallacy of Immediacy

Marketers often assume that an effective ad will convince someone to buy or become a lead immediately. But many ad-driven purchases occur long after the advertisement appeared – and especially long after the ability to track the sale with digital attribution has disappeared.

Just remember one of Binet’s classic charts from his famed work with Peter Field on The Long and Short of It.

At Faddom, we recently spent a portion of our ad spend on Reddit. After two months, we received fewer customer leads than expected. A common assumption would be that the campaign results were poor. But what if hundreds or thousands of people saw the ads and made a mental note to check us out in a year because our industry has long sales cycles?

With only digital attribution, it is impossible to know. And it’d also be impossible to know the original sources of those hypothetical purchases in 2024.

The Fallacy of Last-Touch Attribution

Binet likened this to a store measuring the number of people who enter through each door and how much they buy. He gave an example where the west door supposedly resulted in 25% of sales.

“It’s clear that it’s not just the door that generates the sales,” he said. “If you shut the west door, [digital attribution modeling] would say that you immediately lose a quarter of your business. But that’s not true. What would happen is that they’d walk around to the south door, the north door or the east door. If you’ve got a healthy business and people really want to come in, they’ll find a way.”
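The door analogy maps directly onto how last-touch models allocate credit. Here is a minimal, hypothetical sketch (the journey data is invented for illustration, not taken from Binet's slide) of the mechanics:

```python
# Hypothetical illustration (invented journey data): last-touch attribution
# hands 100% of the credit for each sale to the final touchpoint, hiding the
# brand channels that created the demand earlier in the journey.

journeys = [
    ["tv", "social", "search"],   # buyer saw TV and social ads, then searched
    ["tv", "search"],
    ["social", "search"],
    ["tv", "social", "direct"],
]

def last_touch_credit(journeys):
    """Assign each sale entirely to the last touchpoint before purchase."""
    credit = {}
    for path in journeys:
        credit[path[-1]] = credit.get(path[-1], 0) + 1
    return credit

print(last_touch_credit(journeys))
# {'search': 3, 'direct': 1} -- TV and social get zero credit,
# even though they appear in every single journey.
```

Like the west door, "search" looks responsible for most of the sales; shut it down and buyers would simply arrive through another door.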

The Fallacy of First-Touch Attribution

Many companies assign sales and leads to the supposed first touch out of a desire to have a simple way to show revenues, expenses and overall returns from each activity. But it also happens to be completely inaccurate.

“The idea that one channel can be given ‘credit’ for a given lead or sale is nearly always nonsense,” Binet said. “Each sale is usually the combined result of multiple channels working together, often over a period of months or years. Instead, the question marketers need to ask is: What is the incremental effect of each channel? If I dial spend on this activity up or down, how much will my sales rise or fall?”

He added: “First-touch attribution is just as bogus as last-touch attribution. For a start, digital data rarely goes back far enough to identify the first exposure. Digital data trails usually last days or weeks, but advertising effects can last for years. Think about all those TV ads you remember from childhood. A sale is rarely caused by the first exposure, or the last one, but the combination of all previous exposures.”

The Fallacy of Multi-Touch Attribution

This often shows only the method of a purchase, not what convinced a person to buy. If I have already decided to buy a product, then I might, say, search Google or Facebook for the item.

But it would not necessarily mean that Google or Facebook influenced my decision to purchase it in the first place – even though digital attribution would make it seem so. To use some popular modern buzzwords, attribution modeling often identifies the channels where demand fulfillment occurred but not where demand creation happened.

Binet likened this to someone deciding to buy something and then traveling along subway lines and streets to get to the store. Attribution often focuses on the subway lines and streets.

“The attribution modeling bogusly attributes the sales to the few factors along the customer journey at the last bit of sale, and it ignores the many factors that led up to it,” Binet said.

The Fallacy of Technology

‘Digital’ is not an advertising strategy or tactic. It is a type of technology based on binary code. And digital attribution is biased in favor of channels that use digital attribution technology.

Say a person sees your company’s booth at a conference but does not approach. Or they watch a YouTube video but do not click. Six months later, they remember you and decide to find you and make a purchase. Digital attribution would show nothing because there would be no trail.

“What we’ve always known is that the direct attribution method is flawed,” Binet said. “It doesn’t take much thought to realize that it’s wrong.”

One of the biggest problems in advertising today is an overreliance on short-term direct response and an under-reliance on long-term brand advertising – particularly in the B2B world.

“Attribution modeling overestimates the ROI from direct response communications and underestimates the ROI from brand communications,” Binet said. “If you just follow the attribution data, you end up just doing short-term stuff. You never build a brand, you don’t grow the customer base, you don’t grow the base level of demand. And it’s a recipe for disaster in business.”

But another big problem is that much of the attribution on which online direct response is based is wrong.

Digital attribution is inaccurate

In late 2022, Chris Walker, chief executive at the Boston marketing agency Refine Labs, posted on LinkedIn a comparison of what customers said versus what software-based attribution stated.

“Attribution software inaccurately overweights search (paid and organic) and direct traffic,” he wrote. “Not because those channels drove the results, but because that’s the path buyers take when they’re ready to buy.”

Everyone today is obsessed with being so-called ‘data-driven’. But a lot of data is simply bad information. Good research, common sense and gut instinct are often more accurate.

Eric Stockton, senior vice-president of demand generation at cloud desktop provider Evolve IP, perfectly summarized the situation late last year on LinkedIn: “In an ironic twist of marketing fate, the channels that aren’t the easiest to measure are often the best contributors to pipeline and revenue … correlation of buyer behavior [is better than] direct attribution by channel.”

In January 2023, Paul Arpikari, chief commercial officer at econometrics platform Sellforte, posted on LinkedIn that branded search and “performance” channels in general are overestimated in sales attributions. Online video is underestimated due to not having clicks.

Binet said that one person at a conference told him this: “Finally, we’re getting these digital nerds to understand why what they’ve been doing has been wrong all these years.”

So, if econometrics may be a good replacement for digital attribution, what exactly is it?

How to use econometrics in marketing

Back to Binet’s history. With the birth of radio, cinema and TV broadcast media, using methods of attribution similar to those in print ads was impossible.

So advertisers started to use econometric modeling: advanced statistical analysis of large sets of aggregate data. In essence, it means building a model from historical numbers or industry benchmarks that captures the relationships between variables and their long-term correlations, so it can show the effects of changing one or more of them.

In the story at the beginning of this column, I told how I used a very basic form of statistical correlation to discover the effect of changing the number of cold sales emails on organic search engine traffic. Marketers who use proper econometrics can go further and see the results of raising or lowering prices, changing ad budgets on one channel or another, opening stores on days with different weather and more.

Essentially, econometric models take every single thing that might affect sales – from advertising campaigns to the weather to pricing to overall economic conditions – and narrow the factors down from hundreds to the dozen or so that matter most.

Then, the model creates an equation that describes the relationship between those factors and sales. Marketers can then run the equation to see that if they do X, sales will change to Y. Governments use econometrics to measure things such as the effects of tax rate changes on GDP. Marketers can use it to determine the best marketing mix – because marketing, of course, covers a lot more than advertising and communications.
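To make the mechanics concrete, here is a deliberately tiny, hypothetical sketch: fit a linear equation linking sales to a couple of drivers by ordinary least squares, then read off what changing one driver would do. The data is invented and noise-free purely for illustration; real econometric work involves many more variables, years of data and proper statistical validation:

```python
# Illustrative sketch only (invented, noise-free data): a tiny econometric
# model fit by ordinary least squares via the normal equations.

data = [  # weekly observations: (ad_spend, price, sales)
    (10, 5.0, 80), (12, 5.0, 96), (8, 5.5, 59),
    (15, 4.5, 125), (11, 5.0, 88), (9, 5.5, 67),
]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Normal equations X'X b = X'y for: sales = b0 + b1*ad_spend + b2*price
X = [[1.0, ad, price] for ad, price, _ in data]
y = [sales for _, _, sales in data]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
b0, b1, b2 = solve(XtX, Xty)

print(f"sales = {b0:.1f} + {b1:.1f}*ad_spend {b2:+.1f}*price")
# The fitted equation answers "what if" questions: each extra unit of
# ad spend adds about b1 sales; a price cut of 0.5 adds about 0.5*|b2|.
```

The point is not the arithmetic but the output: an equation you can run forwards, which is exactly what attribution data cannot give you.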

“Attribution – quickly and cheaply – will give you an answer that is precise and wrong. Econometrics – slowly, laboriously and expensively – will give you an answer that is right,” Binet said. “Attribution modeling will tell you that things that are unprofitable actually made money. It can be incredibly misleading. And it will tell you that things that really are profitable aren’t.”

The argument against econometrics

No theory or model is without critics – especially in marketing. For econometrics, one prominent skeptic is Byron Sharp, director of the Ehrenberg-Bass Institute for Marketing Science in Australia.

“The people proposing econometrics as a solution [to the problems of attribution modeling] all seem to be people who sell econometrics,” he told me. “There is a marketing law: ‘wherever there is demand there will be supply.’ The supply doesn’t have to work, it just has to convince the buyer that it works … and marketers are not known for their mathematical knowledge, so [they] are easily fooled.”

He added: “Using econometrics as a solution to the problems of attribution modeling is like cutting off a finger to distract yourself from having a sore foot.”

Sharp gave me a 2018 paper that he co-wrote for the International Journal of Market Research with John Dawes, Rachel Kennedy and Kesten Green. I have made it available for download in PDF format here.

From the abstract: “The contribution of regression analysis (econometrics) to advertising and media decision-making is questioned and found wanting. Econometrics cannot be expected to estimate valid and reliable forecasting models unless it is based on extensive experimental data on important variables, across varied conditions.”

But if there is ever a widespread push to adopt econometric modeling in marketing, Sharp will likely not be the only opponent.

One basic rule in business is to maximize revenue and minimize expenses. As a result, it is extremely difficult to get companies to pay more for something that they have always gotten for cheap. For 20 years, marketers have had access to free attribution-based analytics data through platforms such as Google Analytics. Now, try telling your boss that you want to increase your analytics spend from zero to five figures or more next year to build an econometrics model.

Second, most econometrics models need at least a couple of years of historical data on hundreds of variables. Large, established brands certainly have that. But small businesses and new high-tech startups, for example, do not.

Third, it can take months to build such a model. Try telling your boss that they will not see the results of the massive increase in analytics spend for half a year or more.

The solution might be to ask your company whether it wants information that is cheap and wrong or expensive and accurate – especially when it wants to see the true results of long-term brand advertising campaigns. But I am still somewhat skeptical. A perfect metaphor for the popularity of the last-touch attribution fallacy: many companies always throw parties to celebrate sales teams closing deals but barely acknowledge the work of marketing departments.

7th of March 2023 – Double Date at Havas Village Bruxelles: Sustainable & Profitable Brands for IHECS students + Employer Branding By BMMA

In the centre of Havas Village #Brussels, the Havas Belgium Café is a place where our (future) #talents, #partners, #clients and friends can meet, #share and #learn.

The Virtuous Struggle for Sustainabilty Leads to Meaningful (and Prof… (slideshare.net)

This was particularly the case on March 7th 2023. In the morning, we had the chance to share our vision based on “#meaningfulness” with 40 #students from IHECS – Haute Ecole Galilée.


Thank you for your intelligent, sensitive and critical participation. Your questions were of high quality and echoed our approach to the virtuous circle that must lead us to build #sustainable and #profitable #brands.

At the end of the day, we hosted the first of three “Employer Branding” sessions proposed by BMMA – Belgian Management and Marketing Association. A particularly interesting moment, reminding us of the crucial importance of the #human being in the #company. The next two sessions (21 and 28 March) are still available. Huge congrats to Derek d’Ursel & Johan Claes, our speakers of the day. Next speakers are Michael Liekens & Marc Soumillion.

Interested in the next sessions? https://lnkd.in/ectRZ_dr

Thank you Cédric Cauderlier, Thierry Antoine

Angelique De Clercq Gemoets Sabrina Christian de La Villehuchet Joelle Liberman Alain Mayné Sophie Pochet Laureen Donadieu Marc Dewulf Alessandro Pepi

Chris Burggraeve 29 Sep 2022: “You need purpose and pricing power, not one or the other” in Marketing Week

Source: You need purpose and pricing power, not one or the other (marketingweek.com)

To really convince financial stakeholders, marketers need to urgently upgrade their marketing effectiveness language.

For about five years, until about a year ago, ‘purpose thinking’ was one of the hottest thought leadership concepts, promoted at every marketing and management seminar. Today, it is at risk of being simply forgotten about, or at least “postponed until after the recession”, as you sometimes hear in boardrooms.

Back in February, I predicted inflation would separate the strong brands from the weak. Paraphrasing the famous Warren Buffett quote, I argued that while purpose thinking gets you in the right water, pricing power is your swimsuit – to avoid being caught naked when the tide goes out.

Pricing power is a powerful concept that is still not as well understood by marketers as it should be. As Buffett has defined it for decades, pricing power is “the ability to systematically raise prices without curtailing demand or losing share to a competitor”. It is his number-one criterion for investing in companies.

Marketing Week’s recent Language of Effectiveness survey indicated again that the idea of pricing power is still largely absent in how marketers communicate with their CEO and boards on the ‘why’ of marketing investment.

As a former global CMO, as a current investor and board member of public and private companies, and as a board advisor who has been deeply immersed in these discussions for a decade, I recommend pricing power as the best way to explain marketing effectiveness to internal and external financial stakeholders. Companies that do so have reaped the benefits.

I never advise senior marketers to forget about their own well-honed functional language (serving unmet or underserved needs and wants, creating demand, etc), nor to deny the potential use of the range of effectiveness metrics that go with it. However, I do implore them to actively address a hard truth once and for all: most marketing language is perceived as voodoo for many internal and external finance stakeholders.

They are only interested in the outcome of the marketing work, in other words its impact on the P&L (top- and bottom-line growth) and balance sheet (creating intangible asset value); not in the dizzying array of input or output metrics with which marketers bamboozle them.

Pricing power is the new financial literacy language marketers need to adopt fast, at least if they want to be taken seriously by their largely underserved financial stakeholders. The more CMOs, CFOs and CEOs can align on how to create and measure sustainable pricing power, the more they will close what academics have come to call ‘the (stubborn) managerial marketing-finance gap’. As a result, more CMOs will get a seat at the exco table, and more may access board positions over time. Research from the Darden School of Business at the University of Virginia has shown fewer than 2.6% of boards have marketers on them. Why do you think this is? These are, in the end, the people signing off on the CEO’s strategy and budget proposals.

Pricing is tangible. Pricing power is hard data that trumps the fuzzy ROI discussion and any other marketing metric to convey the impact of marketing. It is the language of money that the CFO, CEO and board want to understand. Pricing drives top line. Investing in brand health today means pricing power tomorrow.

The best marketing companies are the ones that never waver on brand building. They invest consistently, both in good times and in recessionary times. The key benefit lies in boring consistency, not in flashy brand relaunches that make for good media stories (and make or break marketing careers). They systematically build sustainable pricing power over years, and they monetise this effort on a continuous basis. The recent inflation spike becomes a moment they diligently prepared for. Not a crisis. Inflation is an acid test for a company’s true marketing capability excellence.

Purpose and ESG under pressure

The tide has indeed gone out, and plenty of brand owners do stand naked. With rampant inflation, senior leadership teams are scrambling to protect margins. The focus is on survival, regardless of the recent sensational announcements by purpose-driven icons like Patagonia’s owner and CEO Yvon Chouinard.

He is heralded by many disciples as a true champion of the cause to reinvent capitalism. However, the hard reality is that, in many boardrooms and senior leadership committees, any thinking about purpose is simply being pushed onto the backburner.

Many – still not all, surprisingly – boards and CEOs now realise they should have been smarter before in terms of brand building. One can only hope that the hard lesson will be immediately applied moving forward. Yet I am not holding my breath. The proof will be in the ongoing 2022 budget rejigging, and in the 2023 planning rounds.

Managerial instinct will be to ‘focus on the basics’, a veiled proxy for the short term, and on saving careers. As Bertolt Brecht wrote: “Erst kommt das Fressen, dann die Moral.” First comes food, then comes morality. It is a very understandable human reflex in a crisis, albeit unlikely to guide us through the dilemma at hand.

Equally, ESG (environmental, social and governance) ratings, often used by stakeholders as a proxy for how purpose-driven a company might be, are under fire, their credibility increasingly compromised. There are three key reasons behind it.

Firstly, ESG ratings are still misunderstood by managers, boards and investors. The rating is marketed to them as a predictive proxy for the ethics and morals underpinning a company’s future growth; an assessment of how a company might translate often-lofty purpose statements into numbers Wall Street could understand and act on.

Naturally, boards came to see it as a proxy for doing good, as opposed to what it truly is: a rating measuring the impact of a specific sub-set of risks to the company’s long-term value creation potential.

As an example, an ESG rating does not measure the impact on the planet of how hard a company works to help save it. On the contrary, it measures the possible current and future impact on the P&L and balance sheet of climate change. Just imagine the exposure to a lack of water for many industries in the summer we just lived through.

For most CFOs, though, the key benefit of fighting for a high ESG rating was always crystal clear: the lower the ESG risk, the lower the interest on any bank loans. Not unlike getting a better credit rating. They don’t mind that banks claim to lend to companies that do good, perpetuating the overall misconception of what an ESG rating really measures.

The second issue for ESG is the absence of clear standards of how and what it exactly measures. ESG ratings started to be offered as a service by the likes of MSCI, Bloomberg and Thomson Reuters around the year 2000. According to ERM Group’s SustainAbility Institute, by 2018 there were already over 600 different ESG ratings marketed. However, unlike with credit ratings, academic research (such as from MIT in 2019) has consistently shown a very high statistical dispersion among different providers’ ratings of the same companies.

In simple terms, the same company would get a similar credit rating from each of the few credit rating agencies, as they all apply relatively similar algorithms. In ESG land, however, companies are likely to get very different ratings depending on who they got them from, as the respective methodologies differ widely. The range is so wide that ratings may even be completely opposite. For example, my former company AB InBev is rated mid/low on ESG by S&P Global, mid/high by Refinitiv and Sustainalytics, and very high by MSCI. The range of ratings on a company like Volkswagen is even wider. Who should investors believe? Who will stakeholders (want to) believe?

The third key issue with ESG ratings is their actual tangible impact, for investors (in terms of returns) as well as for society at large (through lowering carbon emissions or reducing income inequality, for example).

Bloomberg Intelligence estimates that ESG-rated funds will represent $50tn by 2025, or one third of all assets under management. How do these funds and their investments perform so far? A recent Wall Street Journal article highlighted academic research (from UCLA, NYU Stern, etc) comparing value creation by firms with high and low ESG scores. It proved that a high score is not necessarily a predictor of higher growth and profits, contrary to advertising claims made by the ESG rating agencies and by the funds touting them.

Moreover, it said: “Over the past five years, global ESG rated funds have underperformed the broader market by more than 250 basis points per year, an average 6.3% return compared with an 8.9% return.”

Many of us might consider a 6.3% return as not too shabby at all. The so-called impact investors, especially, who elected to invest in these ESG-rated funds, might live with that. Behavioural finance theory would suggest they might see the forgone extra return as acceptable, if it did any good for mankind.

Alas, no. More academic research (Utah, Miami and Hong Kong) found that at least so far there is “no evidence that socially responsible investment funds improve corporate behaviour”. The WSJ article concluded with a series of possible improvements to make ESG more useful and credible again.

But is all the above reason enough to jettison all the good that purpose thinking and the still maturing ESG rating frameworks have brought us over the last decades? We are at risk of throwing away the proverbial baby with the bathwater. The pendulum may be swinging too much away from stakeholder-centric purpose, and back to shareholder short-termism.

The ‘Tyranny of the OR’

The strongest brands have always been, and will always be, the ones that create sustainable pricing power. Sustainability built on purpose. As I argued in my February Marketing Week article, the two are inextricably linked. Where purpose and pricing power are concerned, the decision is ‘and’, not ‘or’. Allow me to deepen the argument here.

Management literature and practice always remind us that strategy is about making hard choices. It is an ‘or’, not an ‘and’; a deliberate choice between a set of options to reach an objective or goal. In a way, strategic decisions are not unlike marriage decisions. Some of you will correctly argue that many marriages historically were arranged purely for strategic, non-love reasons. These days some still are. Luckily, however, in most cases both sides have been able to pick the true love of their life, and they (try to) stick to that choice to be happy ever after. Marriage is in essence an ‘or’ decision, at least in principle.

In his 1994 bestseller Built to Last, management guru Jim Collins introduced the concept of the “Tyranny of the OR,” which “pushes people to believe that things must be either A OR B, but not both”. Instead of feeling oppressed by this, he argued that highly visionary companies liberate themselves with the “Genius of the AND”, the ability to embrace extremes of dimensions at the same time.

Effective brand management is such an eternal high-wire act between extremes: building long-term brand assets as cost-efficiently as possible, and intelligently monetizing that asset value short-term. What Americans call ‘to walk and chew gum’. Untapped equity reserves in the consumer or B2B customer mind should get translated ASAP into real P&L added value, by bringing real market prices over time as close as possible to their willingness to pay (WTP). By leveraging earned pricing power.

Of course, if a company was only focused on the short term in the past, and it has not built or nurtured any/enough WTP in the mind of consumers or customers, that is the first bullet to bite. The hard decision is to invest more to create WTP. There is no panacea. There will be short-term margin pain. Company leadership will pay for the sins of its past.

Wall Street calls this over-earning: creating performance that was better than it deserved to be. Note: short-term investors happily took that result and ran with it. But the music always stops one day. Inflation made the music stop.

Because of inflation, boards and executive leadership should by now have understood that (painful) reinvestment, with a long-range plan and new narrative to stakeholders, is really the only correct and viable option. If they insist on short-term only, as per the natural reflex mentioned above, value investors should take their money out now.

Both willingness to pay and willingness to sell drive margins

There is another, often overlooked, ‘hidden turbo’ side to pricing power that CMOs might want to make their CFOs and CEOs aware of, visualised in the simple yet powerful concept of the ‘value stick’, created by Harvard professor Felix Oberholzer-Gee in his 2021 book Better, Simpler Strategy.

Creating WTP is the top-line concept marketers rightfully (and uniquely) should continue to focus on. However, now look at the bottom of the value stick. Companies can also grow margin by lowering costs. Not just by smartly cutting costs in the classic sense, but by elegantly leveraging pricing power to further lower what Oberholzer-Gee calls ‘willingness to sell’ (WTS) – the lowest price a company can accept for a product or service.

Intuitively, one can see how a company with aspirational, strong brands – ie brands with sustainable pricing power – gains a number of hidden WTS benefits that can put a turbo on the full margin, including (but not limited to) the below:

  • Employees: may join and stay longer at lower cost
  • Debt: banks may offer lower interest cost for lower risk
  • Equity: investors may pay more for your shares
  • Suppliers: may accept better payment terms to get you as a client
  • Customers: may accept you as a reference supplier at lower cost
  • Regulators: may accept lower risk-mitigation cost
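The value-stick arithmetic itself is simple. A hypothetical sketch (all numbers invented) of how WTP, price, cost and WTS split the value a firm creates, following Oberholzer-Gee's framing:

```python
# Value-stick sketch (all numbers invented) after Oberholzer-Gee: the value
# a firm creates is the gap between willingness to pay (WTP) and willingness
# to sell (WTS); price and cost decide who captures which slice.

wtp, price, cost, wts = 100.0, 70.0, 40.0, 25.0

customer_delight = wtp - price    # value kept by the buyer
firm_margin = price - cost        # value kept by the firm
supplier_surplus = cost - wts     # value kept by suppliers, employees, lenders

# The three slices always sum to the total value created.
assert customer_delight + firm_margin + supplier_surplus == wtp - wts

# Brand building raises WTP; an aspirational brand also lowers WTS (cheaper
# talent, debt, supply). Either move widens the stick the firm can capture:
stronger_brand_value = (wtp + 10) - (wts - 5)
print(firm_margin, wtp - wts, stronger_brand_value)  # 30.0 75.0 90.0
```

The ‘hidden turbo’ is visible in the last line: lowering WTS enlarges total value just as surely as raising WTP, without touching the price.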

There is one more interesting dimension to this hidden turbo. Mark Ritson has rightfully questioned the one-dimensional worship of purpose in various Marketing Week articles since last year, highlighting recently that much of the self-serving research on the impact of purpose needs to be taken with a grain of salt. Still, Oberholzer-Gee’s research suggests it pays to be stakeholder-focused.

His book suggests the potential WTP is higher for a stakeholder-focused corporation, while its potential WTS can be significantly lower. Hence, the margin potential of a company focused on building brands with sustainable pricing power is higher than that of a company only focused on short-term shareholder value.

Close your eyes and picture a company you personally would never work for, versus a purpose-driven company you love (maybe Patagonia?). How much would each have to pay you to earn your soul? Which company benefits from the lower WTS? It makes intuitive sense. It all goes back to the sustainability of your brand.

Think also about the ethics of pricing power. It is not because you can increase prices, or can lower WTS, that you should. Energy monopolies are now accused of abusing their monopoly pricing power. Their shareholders will love the gains and may not care. But the board should, as in the end reputations matter. Governments rightfully will come and tax the super-earnings (or at least say they will). Greece has already shown its EU colleagues it is possible.

Amazon Prime increased its yearly subscription in the US by 17%, from $119 to $139 in the last year, compared to about 7-10% inflation. Given its pricing power, the brand probably can. It worked hard for that privilege. The brand invests a lot in superior customer service (CX) and has earned our trust.

The strategic question for the Amazon leadership is about stakeholder versus shareholder. Consumer empathy in tough times. How much does Amazon, a self-proclaimed (and proven) CX champion, really feel the pain of its shoppers?

What if the board and CEO had said: “We know you are experiencing tough times, so we’ll postpone our price increase?” What would it have done for the reputation and long-term gains in WTP and WTS, especially for a company that needs to attract so much talent to keep growing?

The ‘Genius of the AND’

While many strategic choices mean ‘or’, in this case the hard choice is ‘and’. Being purpose-driven remains the necessary condition for success, while investing in building pricing power is the sufficient one. Marketers need to invest much more time with their underserved internal and external finance stakeholders. It is all about language. Main Street needs to learn to speak better Wall Street, and vice versa. Ignoring each other is bad for both sides.

The value stick may be a valuable practical tool to help you visualise the idea of pricing power inside your company. However, nobody gets their margins handed to them just like that. Creating higher WTP or lower WTS is the first hard choice to make. Monetising it requires daily operational excellence. Companies need to negotiate hard and smart to get this done. A very focused organisation is necessary to translate potential WTP and WTS into actual margin. That requires a very close collaboration between marketing, finance, and sales.

It is all about balanced growth. It is about purpose and pricing power. Sustainable pricing power.

Chris Burggraeve is the founder of Vicomte, former global CMO of AB InBev, and former president of the World Federation of Advertisers. Find more information on his books on the topic of marketing-finance collaboration on Linkedin or www.vicomte.com

For 71% of marketers, it is still difficult to deliver their content to the right audience in the right context. Reflections on the opportunities of contextual targeting and the creation of a virtuous, high-quality advertising ecosystem at the MediaSpecs seminar, 28.02.23

How can an agency improve the quality of the content it presents to communities? What is the best way to target the right audience? Can we create a media experience rather than build a media plan? These are the questions Hugues Rey, CEO of Havas Belgium and president of the UMA, addressed in his introduction to the Media & Communities seminar on 28.02.2023 at UGC Mechelen, on the occasion of MediaSpecs’ 15th anniversary.

“71% of marketers find it difficult to deliver their content to the right audience in the right context.” Media that have embraced the community approach are one answer to this problem, aiming to provide better content in more relevant contexts.

In the advertising context, the notion of community is close to that of a target group, in that it brings together individuals with shared interests. The advertising world has half a century of audience-targeting experience behind it. While our ability to define these audiences has improved greatly, actually reaching them remains a challenge. The task is to find good, high-quality partners with communities suited to our communication.

source: https://www.mediaspecs.be/fr/hugues-rey-mieux-diffuser-cest-moins-diffuser/

"Let's ask ourselves why people consume media. It is meaningful in relation to their interests. Let's return to the contextual." – Hugues Rey

Hugues Rey proposes refocusing on our roots and returning to the essence of targeting: context. "It may look like a step 50 years backwards, but context is essential to being showcased in the right way. We can go much further this way than with the behavioural approach, which is becoming limiting." He adds: "32% of consumers who see content in a relevant advertising context will tend to act more positively and in line with the advertising message they have been served."

Of course, the other approaches, though imperfect, are not obsolete. Everything is complementary: socio-demographic measures, cookies (despite the potential biases they carry by focusing on intent), enriched targets or personas, and of course context. New technologies also have a role to play; AI is a particularly productive tool for defining an audience.

"To distribute better is to distribute less" – Hugues Rey

"The industry needs talent more than ever, and that talent must be put to the best possible use, whether producing content of optimal quality or, in agencies, thinking and designing. And the best teams are made up of people who love media."

In the end, it is simple: "With the best possible ingredients, we can make the best possible offers for our clients."

This is how good context and good targeting come together in a virtuous circle that benefits all parties: the advertiser can count on superior results, the media producing the content deliver messages that are more relevant in context, and the consumer feels more engaged by the advertising messages. Less advertising, but of higher quality and better distributed!

Integrated Communication Process Acronym: BASTARD. Brief, Analyse, Strategy, Touchpoints, Advertising (Content), ROI, Debrief… And Repeat!

A methodology for building an omnichannel, creative, touchpoint-based communication plan by integrating, among other things: briefing, brand and consumer analysis, the consumer decision journey, business, marketing, and communication objectives, targeting, the ecosystem of sales and communication channels, phasing, creative options and content types, media buying and planning, ROI, predictive and prescriptive modelling, and data exploitation and reporting.

In summary: a process that covers the tasks and roles of both the advertiser's marketers and their digital, media, creative, and consulting agencies.
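The acronym describes an iterative loop rather than a one-off checklist. A minimal sketch of that cycle (the step names come from the acronym; the function and variable names are illustrative, not from the course material):

```python
# The seven BASTARD stages, in order, ending with "...and repeat!"
BASTARD_STEPS = [
    "Brief",
    "Analyse",
    "Strategy",
    "Touchpoints",
    "Advertising (Content)",
    "ROI",
    "Debrief",
]

def plan_cycles(repeats=2):
    """Yield (cycle_number, step) pairs, repeating the full loop."""
    for cycle in range(1, repeats + 1):
        for step in BASTARD_STEPS:
            yield cycle, step

for cycle, step in plan_cycles(repeats=1):
    print(f"Cycle {cycle}: {step}")
```

The point of modelling it as a loop is that each Debrief feeds the next Brief: learnings from ROI measurement and reporting become inputs to the following planning round.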

This is the outline of the methodology of the course I teach at the Brussels School of Economics and Management: Integrated Communication (GEST S508) – https://www.solvay.edu/.

I also teach a master's programme in communication and media planning in Saigon every year on the same basis – https://solvay-mba.edu.vn/mmcom/