Opinion: “We have become used to talking about an ‘attention economy’, but perhaps we should think more of the ecology of attention” – 11 Aug 2022 | Mike Follett

Follett: we need to talk about the carbon cost of attention


We have become used to talking about an ‘attention economy’, but perhaps we should think more of the ecology of attention.

It’s hot. It’s too damn hot. It’s so hot that villages on the outskirts of London are burning up in wildfires. It’s so hot that even climate change sceptics like Professor Byron Sharp might be changing (or re-changing) their minds about the reality of climate collapse. It affects us all; we’re all implicated, and it’s the responsibility of all of us to do something about it.

source: Follett: we need to talk about the carbon cost of attention – The Media Leader (the-media-leader.com)

The advertising industry is definitely part of the problem, but we can also definitely be part of the solution. And if we are clever, the solution we come up with may be better than what went before.

The first thing to do is admit that advertising is contributing to the climate crisis. I don’t mean that we’re responsible for creating unsustainable demand: our tardy, apish industry is better at directing—rather than manufacturing—desire. What I mean is that advertising itself produces a lot of CO₂—in our offices, in our production practices, and crucially in our media buying.

The carbon cost of media buying is a novel idea, but pretty obvious when you think about it. As the good people at Scope3 have begun to point out, digital advertising is a significant polluter in itself: millions of phones receiving billions of ads after trillions of ad auctions every day use up a lot of electricity, which, in turn, results in a lot of carbon dioxide being pumped into the air.

All that energy for so little engagement

What’s especially tragic is that when the ads finally reach our devices we often ignore them. We incur a definite (carbon) cost but only achieve a potential attention gain. Lumen’s eye-tracking research has shown that, for some formats, as few as 9% of impressions that reach the screen end up being looked at. All that energy for so little engagement.

But there is hope. Not all ads get ignored: some formats, publishers, and platforms are much better than others at turning the opportunity to see an ad into actual viewing. When this ‘attentive seconds per thousand’ data is combined with ‘cost per thousand’ numbers, buyers can compare the true ‘cost per thousand seconds of attention’ across media alternatives.

And this in turn can be linked to the carbon cost of the media employed, to create a new and potentially powerful means of assessing media: the ‘carbon cost of attention’.
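As a back-of-the-envelope sketch, the arithmetic behind these combined metrics can be written out as follows. All figures and helper names are invented for illustration; Lumen’s and Scope3’s actual models are far richer:

```python
# Hypothetical illustration of the 'cost per thousand seconds of attention'
# idea described above. All numbers are invented for the example.

def cost_per_thousand_attentive_seconds(cpm, attentive_seconds_per_thousand):
    """Money needed to buy 1,000 seconds of actual viewing.

    cpm: price per 1,000 impressions
    attentive_seconds_per_thousand: seconds of viewing generated per
        1,000 impressions served (an eye-tracking estimate)
    """
    return cpm * 1000 / attentive_seconds_per_thousand

def carbon_per_thousand_attentive_seconds(g_co2_per_thousand_impressions,
                                          attentive_seconds_per_thousand):
    """Same idea, with grams of CO2 in place of money."""
    return g_co2_per_thousand_impressions * 1000 / attentive_seconds_per_thousand

# Two made-up publishers with the same CPM but different attention yields:
print(cost_per_thousand_attentive_seconds(5.0, 2000))     # → 2.5
print(cost_per_thousand_attentive_seconds(5.0, 500))      # → 10.0
print(carbon_per_thousand_attentive_seconds(200.0, 500))  # → 400.0
```

The same division explains the clutter effect discussed further on: if attention per impression halves while the carbon per impression stays fixed, the carbon cost of attention doubles.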

Lumen has been working with Scope3 and Havas to bring this concept to life, launching our ‘carbon cost of attention’ tool at Cannes Lions earlier in the summer. We combine Lumen’s impression-based attention predictions with Scope3’s carbon cost predictions and the pricing information available to a major trading desk like Havas to understand the true financial and ecological cost of the attention that we’re buying as an industry.

Already, we are seeing considerable differences for ads of the same format across publishers:

In the bottom left-hand quadrant of the chart above, we have low attention/low emissions publishers: a sad state of affairs, but not a disaster for the advertiser or the planet. What we want to avoid is shown to the right, an ‘attention desert’: low attention, but high emissions, which is the worst of all worlds. Instead, we should aim for publishers who provide high attention with low emissions: an advertising ‘Garden of Eden’.

An ecology of attention

What puts some publishers in the ‘carbon cost of attention’ good books, and others on the naughty step? Well, there are a number of factors, but some of the biggest include:

1. Clutter: as the chart below shows, the more ads that are served simultaneously on a screen, the less attention each receives. Given that the carbon cost for each ad stays the same, the ‘carbon cost of attention’ therefore skyrockets on cluttered pages.

It’s as if people can’t see the wood for the trees. This is bad for the advertiser (because people aren’t attending to their message), bad for the reader (as they often feel overwhelmed by ads), and, in a bitter irony, bad for the trees.

2. Scroll velocity: the slower people scroll the page, the more attention they give to the accompanying advertising.

Again working with Havas, and this time in partnership with Teads on the Project Trinity report, Lumen has found that attention to advertising is in part a function of how slowly people read a page. This in turn has a knock-on effect on the carbon cost of attention: ‘slow media’ leads to ‘sustainable attention’.

3. Streaming video: video advertising tends to get significantly more attention than static display advertising. But downloading a video to your phone can be fearsomely energy intensive, the increased carbon emissions outweighing the increased attention performance.

This is what is so exciting about streaming video services such as SeenThis, which allow advertisers to achieve all the attention benefits of video advertising at a fraction of the carbon cost.


We have become used to talking about an ‘attention economy’ – the cost of attention and the value of attention are now well-established concepts.

But perhaps we should think more of the ecology of attention: one that safeguards the interests of advertisers, publishers, consumers, and the planet.

How popular is ChatGPT? Slower growth than Pokémon GO (source: AI Impacts – Author: Rick Korzekwa)

Rick Korzekwa, March 3, 2023

A major theme in reporting on ChatGPT is the rapid growth of its user base. A commonly stated claim is that it broke records, with over 1 million users in less than a week and 100 million users in less than two months. It seems not to have broken the record, though I do think ChatGPT’s growth is an outlier.

source: How popular is ChatGPT? Part 2: slower growth than Pokémon GO – AI Impacts

Checking the claims

ChatGPT growth

From what I can tell, the only source for the claim that ChatGPT had 1 million users in less than a week is this tweet by Sam Altman, the CEO of OpenAI:

I don’t see any reason to strongly doubt this is accurate, but keep in mind it is an imprecise statement from a single person with an incentive to promote a product, so it could be wrong or misleading.

The claim that it reached 100 million users within two months has been reported by many news outlets, which all seem to bottom out in data from Similarweb. I was not able to find a detailed report, but it looks like they have more data behind a paywall. I think it’s reasonable to accept this claim for now, but, again, it might be different in some way from what the media is reporting.1

Setting records and growth of other apps

Claims of record setting

I saw people sharing graphs that showed the number of users over time for various apps and services. Here is a rather hyperbolic example:

That’s an impressive curve and it reflects a notable event. But it’s missing some important data and context.

The claim that this set a record seems to originate from a comment by an analyst at investment bank UBS, who said “We cannot remember an app scaling at this pace”, which strikes me as a reasonable, hedged thing to say. The stronger claim that it set an outright record seems to be misreporting.

Data on other apps

I found data on monthly users for all of these apps except Spotify.2 I also searched lists of very popular apps for good leads on something with faster user growth. You can see the full set of data, with sources, here.3 I give more details on the data and my methods in the appendix.

From what I can tell, that graph is reasonably accurate, but it’s missing Pokémon GO, which was substantially faster. It’s also missing the Android release of Instagram, which is arguably a new app release, and surpassed 1M within the first day. Here’s a table summarizing the numbers I was able to find, listed in chronological order:

Service                           Date launched   Days to 1M   Days to 10M   Days to 100M
Netflix subscribers (all)         1997-08-29      3669         4185          7337
Netflix subscribers (streaming)   2007-01-15      1889         2351          3910
Instagram (all)                   2010-10-01      61           362           854
Instagram (Android)               2012-04-03      1            –             –
Pokémon GO (downloads)            2016-07-05      –            7             27

Number of days to reach 1 million, 10 million, and 100 million users, for several apps. Some of the figures are exponentially interpolated, due to a lack of datapoints at the desired values.

It’s a little hard to compare early numbers for ChatGPT and Pokémon GO, since I couldn’t find the days to 1M for Pokémon GO or the days to 10M for ChatGPT, but it seems unlikely that ChatGPT was faster for either.


Scaling by population of Internet users

The total number of people with access to the Internet has been growing rapidly over the last few decades. Additionally, the growth of social networking sites makes it easier for people to share apps with each other. Both of these should make it easier for an app to spread. With that in mind, here’s a graph showing the fraction of all Internet users who are using each app over time (note the logarithmic vertical axis):

Number of monthly users over time for several applications. The vertical axis is on a log scale.
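The normalisation behind this graph is straightforward division; a minimal sketch, using rough illustrative totals for global Internet users rather than the post’s actual dataset:

```python
# Divide each app's user count by the number of global Internet users at the
# relevant date. The totals below are approximate illustrative values.

internet_users = {2016: 3.4e9, 2022: 5.3e9}  # rough global totals

def fraction_of_internet_users(users, year):
    return users / internet_users[year]

print(f"{fraction_of_internet_users(45e6, 2016):.3f}")   # a Pokémon GO-scale figure
print(f"{fraction_of_internet_users(100e6, 2022):.3f}")  # a ChatGPT-scale figure
```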

In general, it looks like these curves have initial slopes that are increasing with time, suggesting that how quickly an app can spread is influenced by more than just an increase in the number of people with access to the Internet. But Pokémon GO and ChatGPT just look like vertical lines of different heights, so here’s another graph, showing the (logarithmic) time since launch for each app:

Fraction of total global population with access to the Internet who are using the service vs days since the service launched. The number of users is set somewhat arbitrarily to 1 at t=1 minute

This shows pretty clearly that, while ChatGPT is an outlier, it was nonetheless substantially slower than Pokémon GO.4

Additional comparisons

One more comparison we can make is to other products and services that see very fast uptake with users, and how their reach increases over time:

  1. YouTube views within 24 hours for newly posted videos gives us a reference point for how quickly a link to something on the Internet can spread and get engagement. The lower barrier to watching a video, compared to making an account for ChatGPT, might give videos an advantage. Additionally, there is presumably more than one view per person. I do not know how big this effect is, but it may be large.
  2. Pay-per-view sales for live events, in this case for combat sports, are a reference point for something that people are willing to pay for to use at home in a short timeframe. The payment is a higher barrier than making an account, but marketing and sales can happen ahead of time.
  3. Video game sales within 24 hours, in some cases digital downloads, are similar to pay-per-view, but seem more directly comparable to a service on a website. I would guess that video games benefit from a longer period of marketing and pre-sales than PPV, but I’m not sure.

Here is a graph of records for these things over time, with data taken from Wikipedia,5 which is included in the data spreadsheet. Each dot is a separate video, PPV event, or game, and I’m only including those that set 24 hour records:

Records for most sales, views, and users within the first 24 hours for video games, PPV bouts, YouTube videos, and apps, plus a few points for users during first week for apps (shown as blue diamonds). Each data point represents one event, game, video, or app. Only those setting records in their particular category are included.

It would appear that very popular apps are not as popular as very popular video games or videos. I don’t see a strong conclusion to be drawn from this, but I do think it is helpful context.

Additional considerations

I suspect the marketing advantage for Pokémon GO and other videogames is substantial. I do not remember seeing ads for Pokémon GO before its release, but I did a brief search for news articles about it before it was released and found lots of hype going back months. I did not find any news articles mentioning ChatGPT before launch. This does not change the overall conclusion, that the claim about ChatGPT setting an outright record is false, but it should change how we think about it. 

That ChatGPT was able to beat out most other services without any marketing seems like a big deal. I think it’s hard to sell people on what’s cool about it without lots of user engagement, but the next generation of AI products might not need that, now that people are aware of how far the technology has come. Given this (and the hype around Bing Chat and Bard), I would weakly predict that marketing will play a larger role in future releases.

Appendix – methods and caveats

Most of the numbers I found were for monthly users or, in some cases, monthly active users. I wasn’t always sure what the difference was between these two things. In some cases, all I was able to find was monthly app downloads or annual downloads, both of which I would naively expect to be strictly larger than monthly users. But the annual user numbers reflected longer-term growth anyway, so they shouldn’t affect the conclusions.

Some of the numbers for days to particular user milestones were interpolated, assuming exponential growth. By and large, I do not think this affects the overall story too much, but if you need to know precise numbers, you should check my interpolations or find more direct measurements. None of the numbers is extrapolated.
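For concreteness, the interpolation described here can be sketched like this (the data points are invented, and the function simply applies the exponential-growth assumption stated above):

```python
import math

def interpolate_milestone_day(t1, u1, t2, u2, milestone):
    """Estimate the day (since launch) a user milestone was crossed,
    assuming exponential growth u(t) = u1 * exp(r * (t - t1)) fitted
    through the two observed points (t1, u1) and (t2, u2)."""
    r = math.log(u2 / u1) / (t2 - t1)         # implied growth rate
    return t1 + math.log(milestone / u1) / r  # solve u(t) = milestone

# e.g. 2M users observed on day 10 and 50M on day 50:
# when was the 10M milestone crossed?
print(interpolate_milestone_day(10, 2e6, 50, 50e6, 10e6))  # → 30.0
```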

When searching for data, I tried to use either official sources like SEC filings and company announcements, or measurements from third-party services that seem reputable and have paying customers. But sometimes those were hard to find and I had to use less reliable sources like news reports with dubious citations or studies with incomplete data.

I did not approach this with the intent to produce very reliable data in a very careful way. Overall, this took about 1-2 researcher-days of effort. Given this, it seems likely I made some mistakes, but hopefully not any that undermine the conclusions.

Thanks to Jeffrey Heninger and Harlan Stewart for their help with research on this. Thanks to the two of them and Daniel Filan for helpful comments.

  1. I also found some claims that the 100M number was inferred from some other figure, like total site visits, and that it might be an overestimate. I haven’t actually seen any sources doing this, so I’m sticking with the original number for now.
  2. I skipped Spotify because at first glance its growth seemed not to be unusually fast, the data didn’t seem very easy to find, and I thought the other apps were sufficient to put things in context.
  3. Be warned that, at the time of this writing, the Google sheet is a bit of a mess and the sources are not cited in the most user-friendly way. If you’d like to use the data and you’re having trouble, please don’t hesitate to ask for a cleaner version of it.
  4. This is still the case if we do not divide by the number of Internet users, which increased by less than a factor of two between 2016 and 2022.
  5. The relevant Wikipedia pages are:

Generation Z. Not Dazed. Not Confused. (Lizzie Nolan & Lindsey Partos)

Numbering 2.5 billion, Gen Z overtook Millennials in 2019 to become the largest generation on earth. Aged between 10 and 26, they are humanity’s first generation born into the digital age, and have the tools, tenacity, and self-awareness to drive change and shape a better future.

In a new white paper, Lizzie Nolan, EVP Managing Director Strategy & Insights, and Lindsey Partos, Strategy & Editorial Director, explore vital insights on how this nuanced and connected generation is influencing the future of media. We invite you to read “Gen Z. Not Dazed. Not Confused” to learn about Gen Z and the meaningful media that matters to them.

#MeaningfulMedia #HavasProud

Digital channels account for 35% of advertising investment in Belgium (UMA/UBA Barometer)

As last year, the UMA and the UBA have joined forces to produce a benchmark of net digital investment in Belgium:

  • For the agency data (UMA members + 4 specialist agencies), the 2022 information includes the market shares of the different media categories, including offline, expressed against net investment. For the first time over a full year, the UMA has also broken down digital investment between local and international players.
  • The 2022 Benchmark information includes the results of a survey of UBA (United Brands Association) members on the value of the digital investments they made outside the agencies’ perimeter, either in-house or via an intermediary located outside Belgian territory.

It is important to note that the UMA & associated agencies data and the UBA data have a different status. The agency source is exhaustive, but limited to the perimeter of the participants. The advertiser source rests on the non-extrapolated declarations of a large panel of respondents, but it does not cover all ‘direct’ digital purchases. Some 70 of the association’s more than 350 members responded, which is more than significant. But the true level of direct investment by Belgian advertisers is certainly higher.

Digital channels account for 35.1% of advertising investment in Belgium

Overall, within the UMA perimeter, digital accounted for 33% of total media investment in 2022. This ratio rises to 35% when the UBA data is included. On the UMA side, this share is up sharply on the 32% of 2021, and the combined UMA-UBA figure is one percentage point higher than in 2021 (34% last year).

MEDIAGROUPS        UMA                   UMA + UBA
                   % segm    % total    % segm    % total
OUT OF HOME        13.6%     8.8%       12.0%     7.9%
TOTAL OFFLINE      100%      64.9%      100%      65.7%

Television keeps the largest market share (35.6%)

The breakdown of net investment shows that television remains the leading medium in our country: with more than 35% of the all-media total, television is still the main advertising medium in Belgium, followed very closely by digital as a whole, at 35% of the total.

Radio emerges from the benchmark as the second individual channel, with a share close to 15%. At 9%, ‘paid social’ ranks third among individual channels, followed closely by out of home, half a percentage point behind. Also worth noting: advertising investment that goes to publishers’ digital editions is counted as digital, and therefore not as press, television or radio, to name but those.

Remarkable channel shifts: ‘Social’ has become the ‘leading digital touchpoint’

In 2021, the UBA data (investments made directly by advertisers) accounted for 10% of the digital total. This proportion rose by one point in 2022, to 11%. And with significant shifts: SEA fell from almost 21% of the total to 16.5%, while ‘paid social’ climbed from 9% to nearly 15%.

The share of display and video in the UBA data also grew in 2022, probably reflecting a diversification of the approaches advertisers are bringing in-house. Within the UMA data, the most striking development is that of the ‘other channels’, which rose from under 7% to 10% and will probably call for a finer analysis of the digital touchpoints grouped under this heading in future. This rise of the ‘other’ channels proportionally weakens all the rest, except video, which climbed from under 21% to 22% of the total in the UMA universe.

[Table: breakdown of digital investment by year and source across Display, Social, SEA, Video and Other formats, shown as vertical (% V) and horizontal (% H) percentages.]

Breakdown of investment between national and international players

Like the UMA benchmark for the first half of 2022, this UBA/UMA benchmark also includes, for the UMA, the breakdown of digital investment between local and international players. For the latter, the questionnaire sent to the participating agencies referred to them as ‘GAFAM’. Aggregating the responses by digital channel leads us to conclude that international players hold a share of around 60% of the entire digital advertising market in Belgium as it appears in the UMA universe (which is not exhaustive). The share of local players in digital is therefore slightly above 40%. This local share exceeds that of the GAFAM in every channel where there is actual competition.

If all offline investment is considered to go to Belgian players, the ‘local’ share of the total media market comes to 81%, with the remaining 19% invested via the GAFAM.

[Table extract, share of local vs international (GAFAM) players by channel: SEA 100.0% international; Social 100.0% international; total media market H1 2022: 80.7% local, 19.3% international.]

The respondents to the study (the UMA and the agencies participating in this benchmark, as well as the UBA panel) were aggregated in the final report by an external consultant, in the strictest confidentiality. Each agency filled in a table covering the 44 product sectors defined by the UMA (a re-cut of a Nielsen Ad Intel segmentation), declaring the total investments of the advertisers and brands in those groups across the 5 categories of digital formats studied.

Agencies and subsidiaries contributing to the figures included in the UMA total:

GroupM (Mindshare, Wavemaker, Maxus, Kinetic), Mediabrands (Initiative, UM, Reprise, Rapport), Space, Dentsu Belgique, OmnicomMediaGroup (OMD, PHD Media, Semetis), Havas Media, Publicis Groupe (Zenith, Blue449), Serviceplan (Mediaplus, Mediascale), Zigt.

Partner agencies in the production of this report:

AdSomeNoise, blue2purple, Pivott, Ogilvy | Social.Lab

Digital attribution is dead! Les Binet tells us why marketers need econometrics in 2023 (source: The Drum)

The Drum columnist Samuel Scott recently fell down a rabbit hole and into the world of econometrics with effectiveness expert Les Binet. Here he explains what marketers need to know as digital attribution degrades.

Les Binet

Les Binet, group head of effectiveness at Adam&EveDDB

What is better – information that is cheap and wrong or information that is expensive and accurate? Soon, marketers may have to decide.

source: The Drum | Digital Attribution Is Dead! Les Binet Tells Us Why Marketers Need Econometrics In 2023

Some months ago at my day job as head of marketing at the IT mapping software company Faddom, we saw a decline in organic search engine traffic. I could not figure out why. There was no evidence of a Google penalty, it wasn’t an SEO issue.

So I decided to look into some statistical correlations. On a hunch, I made a list of many potential variables that might have affected the traffic – no matter how far-fetched – and plotted their changes against the changes in organic traffic and the numbers of Google searches and clicks for our brand name over the same several months.

One theory was that a decrease in our Google Ads spend might be a secret search engine ranking factor. Another was that spending less on Google Ads resulted in fewer people seeing our brand name, becoming interested in us and then searching for the company. But we found that there was an 11% correlation between Google Ads spend and people searching for our brand name and clicking to our website. There was a 6% correlation between Google Ads spend and total organic website traffic. So those were clearly not the issues.

Instead, there was a 96% correlation between the number of cold sales emails that our business development team would send out and the number of Google searches for our brand name and resulting website clicks. Put simply, we inferred that people would receive an email, wonder who Faddom is, search Google for the name and visit our website.

Now here’s the issue. Google Analytics logged those visits as organic search engine traffic because they did indeed come from the search engine’s unpaid results. But in such specific cases, the cold emails – not SEO – should get the credit. The decline in organic traffic had almost surely come from the team temporarily sending fewer emails.

What can readers learn from my day job headaches?

First, marketers should remember that so-called ‘outbound’ and ‘interruptive’ marcom can be extremely effective – no matter what nonsense HubSpot has told the industry for the past 15 years to sell its own ‘inbound’ marketing software. Second, I remembered the story as yet another example of how analytics dashboards, digital attribution and the entire online world can be misleading at best or completely wrong at worst.

In fact, it will likely be one of the biggest problems that marketers will face in 2023 as we see the planned death of third-party cookies, 43% of people now using adblockers that also stop scripts such as Google Analytics from running and the iOS 14 update that stopped ad tracking on Apple devices.

Econometrics – also known as marketing mix modeling (MMM) – might be a solution. After all, the attribution-based online marketing world that many have known for the past two decades is rapidly disappearing.

The problems with digital attribution

Les Binet, group head of effectiveness at Adam&EveDDB, told me that attribution modeling began around 1900 with direct response ads in print media that had coupons. Different printing presses in the same city could run ads with different coupons to see which ones worked better.

Starting in the early 2000s, the nascent world of online advertising quickly became addicted to attribution with Cocaine Bear levels of enthusiasm. For example, at AdExchanger’s industry preview this month, Lending Tree senior vice-president of growth marketing Joshua Palau said that “all media should be performance media.”

An entire new generation of ‘digital’ marketers now thinks only about what is stupidly called ‘performance advertising’ because it is deceptively simple. You put an ad on Twitter. You see how many people click to visit your website and buy. You attribute those purchases to Twitter. Easy.

But it’s actually not that easy. Even before the ongoing death of ad tracking today, attribution modeling on its own has always been deeply flawed.

“If you say this ad generates a million in sales, the true answer could be anything between [zero] to a million,” Binet, whose original training was as an econometrician, told me. “It looks very scientific, it looks very precise, and it’s extremely unreliable.”

As an example, here is a slide that Binet gave me on last-touch attribution.

Here are some of the reasons why attribution is so unreliable.

The Fallacy of Immediacy

Marketers often assume that an effective ad will convince someone to buy or become a lead immediately. But many ad-driven purchases occur long after the advertisement appeared – and especially long after the ability to track the sale with digital attribution has disappeared.

Just remember one of Binet’s classic charts from his famed work with Peter Field on The Long and Short of It.

At Faddom, we recently spent a portion of our ad spend on Reddit. After two months, we received fewer customer leads than expected. A common assumption would be that the campaign results were poor. But what if hundreds or thousands of people saw the ads and made a mental note to check us out in a year because our industry has long sales cycles?

With only digital attribution, it is impossible to know. And it’d also be impossible to know the original sources of those hypothetical purchases in 2024.

The Fallacy of Last-Touch Attribution

Binet likened this to a store measuring the number of people who enter through each door and how much they buy. He gave an example where the west door supposedly resulted in 25% of sales.

“It’s clear that it’s not just the door that generates the sales,” he said. “If you shut the west door, [digital attribution modeling] would say that you immediately lose a quarter of your business. But that’s not true. What would happen is that they’d walk around to the south door, the north door or the east door. If you’ve got a healthy business and people really want to come in, they’ll find a way.”

The Fallacy of First-Touch Attribution

Many companies assign sales and leads to the supposed first touch out of a desire to have a simple way to show revenues, expenses and overall returns from each activity. But it also happens to be completely inaccurate.

“The idea that one channel can be given ‘credit’ for a given lead or sale is nearly always nonsense,” Binet said. “Each sale is usually the combined result of multiple channels working together, often over a period of months or years. Instead, the question marketers need to ask is: What is the incremental effect of each channel? If I dial spend on this activity up or down, how much will my sales rise or fall?”

He added: “First-touch attribution is just as bogus as last-touch attribution. For a start, digital data rarely goes back far enough to identify the first exposure. Digital data trails usually last days or weeks, but advertising effects can last for years. Think about all those TV ads you remember from childhood. A sale is rarely caused by the first exposure, or the last one, but the combination of all previous exposures.”

The Fallacy of Multi-Touch Attribution

This often shows only the method of a purchase, not what convinced a person to buy. If I have already decided to buy a product, then I might, say, search Google or Facebook for the item.

But it would not necessarily mean that Google or Facebook influenced my decision to purchase it in the first place – even though digital attribution would make it seem so. To use some popular modern buzzwords, attribution modeling often identifies the channels where demand fulfillment occurred but not where demand creation happened.

Binet likened this to someone deciding to buy something and then traveling along subway lines and streets to get to the store. Attribution often focuses on the subway lines and streets.

“The attribution modeling bogusly attributes the sales to the few factors along the customer journey at the last bit of sale, and it ignores the many factors that led up to it,” Binet said.

The Fallacy of Technology

‘Digital’ is not an advertising strategy or tactic. It is a type of technology based on binary code. And digital attribution is biased in favor of channels that use digital attribution technology.

Say a person sees your company’s booth at a conference but does not approach. Or they watch a YouTube video but do not click. Six months later, they remember you and decide to find you and make a purchase. Digital attribution would show nothing because there would be no trail.

“What we’ve always known is that the direct attribution method is flawed,” Binet said. “It doesn’t take much thought to realize that it’s wrong.”

One of the biggest problems in advertising today is an overreliance on short-term direct response and an under-reliance on long-term brand advertising – particularly in the B2B world.

“Attribution modeling overestimates the ROI from direct response communications and underestimates the ROI from brand communications,” Binet said. “If you just follow the attribution data, you end up just doing short-term stuff. You never build a brand, you don’t grow the customer base, you don’t grow the base level of demand. And it’s a recipe for disaster in business.”

But another big problem is that much of the attribution on which online direct response is based is wrong.

Digital attribution is inaccurate

In late 2022, Chris Walker, chief executive at the Boston marketing agency Refine Labs, posted on LinkedIn a comparison of what customers said versus what software-based attribution stated.

“Attribution software inaccurately overweights search (paid and organic) and direct traffic,” he wrote. “Not because those channels drove the results, but because that’s the path buyers take when they’re ready to buy.”

Everyone today is obsessed with being so-called ’data-driven.’ But a lot of data is simply bad information. Good research, common sense and gut instinct are often more accurate.

Eric Stockton, senior vice-president of demand generation at cloud desktop provider Evolve IP, perfectly summarized the situation late last year on LinkedIn: “In an ironic twist of marketing fate, the channels that aren’t the easiest to measure are often the best contributors to pipeline and revenue … correlation of buyer behavior [is better than] direct attribution by channel.”

In January 2023, Paul Arpikari, chief commercial officer at econometrics platform Sellforte, posted on LinkedIn that branded search and “performance” channels in general are overestimated in sales attributions, while online video is underestimated because it generates no clicks.

Binet said that one person at a conference told him this: “Finally, we’re getting these digital nerds to understand why what they’ve been doing has been wrong all these years.”

So, if econometrics may be a good replacement for digital attribution, what exactly is it?

How to use econometrics in marketing

Back to Binet’s history. With the birth of radio, cinema and TV broadcast media, using methods of attribution similar to those in print ads was impossible.

So advertisers started to use econometric modeling: advanced statistical analysis of sets of mass aggregate data. Essentially, it means building a model from a large body of historical numbers or industry benchmarks that identifies relationships and long-term correlations, and can then show the effects of changing one or more variables.

In the story at the beginning of this column, I told how I used a very basic form of statistical correlation to discover the effect of changing the number of cold sales emails on organic search engine traffic. Marketers who use proper econometrics can go further and see the results of raising or lowering prices, changing ad budgets on one channel or another, opening stores on days with different weather and more.

Essentially, econometric models take every single thing that might affect sales – from advertising campaigns to the weather to pricing to overall economic conditions – and narrow the factors down from hundreds to the dozen or so that matter most.

Then, the model creates an equation that describes the relationship between those factors and sales. Marketers can then run the equation to see that if they do X, sales will change to Y. Governments use econometrics to measure things such as the effects of tax rate changes on GDP. Marketers can use it to determine the best marketing mix – because marketing, of course, covers a lot more than advertising and communications.
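In code, “creating an equation” from historical data is just fitting a regression. Here is a minimal sketch on synthetic weekly data – the variables, coefficients and noise are all invented, and a real marketing-mix model would add adstock, saturation and seasonality terms:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 104  # two years of weekly observations

# Hypothetical drivers of sales.
tv = rng.uniform(0, 100, n)        # weekly TV spend
search = rng.uniform(0, 100, n)    # weekly search spend
price = rng.uniform(8, 12, n)      # average selling price
weather = rng.normal(0, 1, n)      # weather index

# The "true" data-generating process - unknown to the analyst in practice.
sales = (500 + 2.0 * tv + 0.8 * search - 30 * price + 10 * weather
         + rng.normal(0, 20, n))

# Econometric model: ordinary least squares over the candidate factors.
X = np.column_stack([np.ones(n), tv, search, price, weather])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# The fitted equation answers "if we do X, sales change to Y":
print(f"estimated effect of +10 TV spend: {coef[1] * 10:+.0f} sales/week")
print(f"estimated effect of +1 on price:  {coef[3]:+.1f} sales/week")
```

With enough clean history the fitted coefficients recover the underlying effects, which is exactly the “what if?” capability the modellers sell.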

“Attribution – quickly and cheaply – will give you an answer that is precise and wrong. Econometrics – slowly, laboriously and expensively – will give you an answer that is right,” Binet said. “Attribution modeling will tell you that things that are unprofitable actually made money. It can be incredibly misleading. And it will tell you that things that really are profitable aren’t.”

The argument against econometrics

No theory or model is without critics – especially in marketing. For econometrics, one prominent skeptic is Byron Sharp, director of the Ehrenberg-Bass Institute for Marketing Science in Australia.

“The people proposing econometrics as a solution [to the problems of attribution modeling] all seem to be people who sell econometrics,” he told me. “There is a marketing law: ‘wherever there is demand there will be supply.’ The supply doesn’t have to work, it just has to convince the buyer that it works … and marketers are not known for their mathematical knowledge, so [they] are easily fooled.”

He added: “Using econometrics as a solution to the problems of attribution modeling is like cutting off a finger to distract yourself from having a sore foot.”

Sharp gave me a 2018 paper that he co-wrote for the International Journal of Market Research with John Dawes, Rachel Kennedy and Kesten Green. I have made it available for download in PDF format here.

From the abstract: “The contribution of regression analysis (econometrics) to advertising and media decision-making is questioned and found wanting. Econometrics cannot be expected to estimate valid and reliable forecasting models unless it is based on extensive experimental data on important variables, across varied conditions.”

But if there is ever a widespread push to adopt econometric modeling in marketing, Sharp will likely not be the only opponent.

One basic rule in business is to maximize revenue and minimize expenses. As a result, it is extremely difficult to get companies to pay more for something that they have always gotten for cheap. For 20 years, marketers have had access to free attribution-based analytics data through platforms such as Google Analytics. Now, try telling your boss that you want to increase your analytics spend from zero to five figures or more next year to build an econometrics model.

Second, most econometrics models need at least a couple of years of historical data on hundreds of variables. Large, established brands certainly have that. But small businesses and new high-tech startups, for example, do not.

Third, it can take months to build such a model. Try telling your boss that they will not see the results of the massive increase in analytics spend for half a year or more.

The solution might be to ask your company whether it wants information that is cheap and wrong or expensive and accurate – especially when it wants to see the true results of long-term brand advertising campaigns. But I am still somewhat skeptical. A perfect illustration of how entrenched the last-touch attribution fallacy is: many companies throw parties to celebrate sales teams closing deals but barely acknowledge the work of marketing departments.

Chris Burggraeve, 29 Sep 2022: “You need purpose and pricing power, not one or the other” in Marketing Week

Source: You need purpose and pricing power, not one or the other (marketingweek.com)

To really convince financial stakeholders, marketers need to urgently upgrade their marketing effectiveness language.

For about five years, until about a year ago, ‘purpose thinking’ was one of the hottest thought leadership concepts, promoted at every marketing and management seminar. Today, it is at risk of being simply forgotten about, or at least “postponed until after the recession”, as you sometimes hear in boardrooms.

Back in February, I predicted inflation would separate the strong brands from the weak. Paraphrasing the famous Warren Buffett quote, I argued that while purpose thinking gets you in the right water, pricing power is your swimsuit – to avoid being caught naked when the tide goes out.

Pricing power is a powerful concept that is still not as well understood by marketers as it should be. As Buffett has defined it for decades, pricing power is “the ability to systematically raise prices without curtailing demand or losing share to a competitor”. It is his number-one criterion for investing in companies.

Marketing Week’s recent Language of Effectiveness survey indicated again that the idea of pricing power is still largely absent in how marketers communicate with their CEO and boards on the ‘why’ of marketing investment.

As a former global CMO, as a current investor and board member of public and private companies, and as a board advisor who has been deeply immersed in these discussions for a decade, I recommend pricing power as the best way to explain marketing effectiveness to internal and external financial stakeholders. Companies that do so have reaped the benefits.

I never advise senior marketers to forget about their own well-honed functional language (serving unmet or underserved needs and wants, creating demand, etc), nor to deny the potential use of the range of effectiveness metrics that go with it. However, I do implore them to actively address a hard truth once and for all: most marketing language is perceived as voodoo by many internal and external finance stakeholders.

They are only interested in the outcome of the marketing work, in other words its impact on the P&L (top- and bottom-line growth) and balance sheet (creating intangible asset value); not in the dizzying array of input or output metrics with which marketers bamboozle them.

Pricing power is the new financial literacy language marketers need to adopt fast, at least if they want to be taken seriously by their largely underserved financial stakeholders. The more CMOs, CFOs and CEOs can align on how to create and measure sustainable pricing power, the more they will close what academics have come to call ‘the (stubborn) managerial marketing-finance gap’. As a result, more CMOs will get a seat at the exco table, and more may move into board positions over time. Research from the Darden School of Business at the University of Virginia has shown that less than 2.6% of boards have marketers on them. Why do you think this is? These are, in the end, the people signing off on the CEO’s strategy and budget proposals.


Pricing is tangible. Pricing power is hard data that trumps the fuzzy ROI discussion and any other marketing metric to convey the impact of marketing. It is the language of money that the CFO, CEO and board want to understand. Pricing drives top line. Investing in brand health today means pricing power tomorrow.

The best marketing companies are the ones that never waver on brand building. They invest consistently, both in good times and in recessionary times. The key benefit lies in boring consistency, not in flashy brand relaunches that make for good media stories (and make or break marketing careers). They systematically build sustainable pricing power over years, and they monetise this effort on a continuous basis. The recent inflation spike becomes a moment they diligently prepared for. Not a crisis. Inflation is an acid test for a company’s true marketing capability excellence.


Purpose and ESG under pressure

The tide has indeed gone out, and plenty of brand owners do stand naked. With rampant inflation, senior leadership teams are scrambling to protect margins. The focus is on survival, regardless of the recent sensational announcements by purpose-driven icons like Patagonia’s owner and CEO Yvon Chouinard.

He is heralded by many disciples as a true champion of the cause to reinvent capitalism. However, the hard reality is that, in many boardrooms and senior leadership committees, any thinking about purpose is simply being pushed onto the backburner.

Many – still not all, surprisingly – boards and CEOs now realise they should have been smarter before in terms of brand building. One can only hope that the hard lesson will be immediately applied moving forward. Yet I am not holding my breath. The proof will be in the ongoing 2022 budget rejigging, and in the 2023 planning rounds.

Managerial instinct will be to ‘focus on the basics’, a veiled proxy for the short term, and on saving careers. As the German playwright Bertolt Brecht put it: “Erst kommt das Fressen, dann die Moral.” First comes food, then comes morality. It is a very understandable human reflex in a crisis, albeit unlikely to guide us through the dilemma at hand.

Equally, ESG (environmental, social and governance) ratings, often used by stakeholders as a proxy for how purpose-driven a company might be, are under fire, their credibility increasingly compromised. There are three key reasons for this.

Firstly, ESG ratings are still misunderstood by managers, boards and investors. The rating is marketed to them as a predictive proxy for the ethics and morals underpinning a company’s future growth; an assessment of how a company might translate often-lofty purpose statements into numbers Wall Street could understand and act on.

Naturally, boards came to see it as a proxy for doing good, as opposed to what it truly is: a rating measuring the impact of a specific sub-set of risks to the company’s long-term value creation potential.


As an example, an ESG rating does not measure a company’s impact on the planet, however hard the company works to help save it. On the contrary, it measures the possible current and future impact of climate change on the company’s P&L and balance sheet. Just imagine the exposure many industries had to water shortages in the summer we have just lived through.

For most CFOs, though, the key benefit of fighting for a high ESG rating was always crystal clear: the lower the ESG risk, the lower the interest on any bank loans. Not unlike getting a better credit rating. They don’t mind that banks claim to lend to companies that do good, perpetuating the overall misconception of what an ESG rating really measures.

The second issue for ESG is the absence of clear standards for what exactly it measures and how. ESG ratings started to be offered as a service by the likes of MSCI, Bloomberg and Thomson Reuters around the year 2000. According to ERM Group’s SustainAbility Institute, by 2018 there were already over 600 different ESG ratings marketed. However, unlike with credit ratings, academic research (such as from MIT in 2019) has consistently shown a very high statistical dispersion among different providers’ ratings of the same companies.

In simple terms, the same company will get a similar credit rating from each of the few credit rating agencies, as they all apply relatively similar algorithms. In ESG land, however, companies are likely to get very different ratings depending on who they got them from, as the respective methodologies differ widely. The range is so wide that ratings may even be completely opposite. For example, my former company AB InBev is rated mid/low on ESG by S&P Global, mid/high by Refinitiv and Sustainalytics, and very high by MSCI. The range of ratings on a company like Volkswagen is even wider. Who should investors believe? Who will stakeholders (want to) believe?

The third key issue with ESG ratings is their actual tangible impact, for investors (in terms of returns) as well as for society at large (through lowering carbon emissions or reducing income inequality, for example).

Bloomberg Intelligence estimates that ESG-rated funds will represent $50tn by 2025, or one third of all assets under management. How do these funds and their investments perform so far? A recent Wall Street Journal article highlighted academic research (from UCLA, NYU Stern, etc) comparing value creation by firms with high and low ESG scores. It proved that a high score is not necessarily a predictor of higher growth and profits, contrary to advertising claims made by the ESG rating agencies and by the funds touting them.

Moreover, it said: “Over the past five years, global ESG rated funds have underperformed the broader market by more than 250 basis points per year, an average 6.3% return compared with an 8.9% return.”

Many of us might consider a 6.3% return as not too shabby at all. The so-called impact investors, especially, who elected to invest in these ESG-rated funds, might live with that. Behavioural finance theory would suggest they might see the forgone extra return as acceptable, if it did any good for mankind.

Alas, no. More academic research (Utah, Miami and Hong Kong) found that at least so far there is “no evidence that socially responsible investment funds improve corporate behaviour”. The WSJ article concluded with a series of possible improvements to make ESG more useful and credible again.

But is all the above reason enough to jettison all the good that purpose thinking and the still maturing ESG rating frameworks have brought us over the last decades? We are at risk of throwing away the proverbial baby with the bathwater. The pendulum may be swinging too much away from stakeholder-centric purpose, and back to shareholder short-termism.

The ‘Tyranny of the OR’

The strongest brands have always been, and will always be, the ones that create sustainable pricing power. Sustainability built on purpose. As I argued in my February Marketing Week article, the two are inextricably linked. Where purpose and pricing power are concerned, the decision is ‘and’, not ‘or’. Allow me to deepen the argument here.

Management literature and practice always remind us that strategy is about making hard choices. It is an ‘or’, not an ‘and’; a deliberate choice between a set of options to reach an objective or goal. In a way, strategic decisions are not unlike marriage decisions. Some of you will correctly argue that many marriages historically were arranged purely for strategic non-love reasons. These days some still are. However, luckily, in most cases both sides have been able to pick the true love of their life, and they (try to) stick to that choice to be happy forever after. Marriage is in essence an ‘or’ decision, at least in principle.

In his 1994 bestseller Built to Last, management guru Jim Collins introduced the concept of the “Tyranny of the OR,” which “pushes people to believe that things must be either A OR B, but not both”. Instead of feeling oppressed by this, he argued that highly visionary companies liberate themselves with the “Genius of the AND”, the ability to embrace extremes of dimensions at the same time.

Effective brand management is such an eternal high-wire act between extremes: building long-term brand assets as cost-efficiently as possible, and intelligently monetizing that asset value short-term. What Americans call ‘to walk and chew gum’. Untapped equity reserves in the consumer or B2B customer mind should get translated ASAP into real P&L added value, by bringing real market prices over time as close as possible to their willingness to pay (WTP). By leveraging earned pricing power.


Of course, if a company was only focused on the short term in the past, and it has not built or nurtured any/enough WTP in the mind of consumers or customers, that is the first bullet to bite. The hard decision is to invest more to create WTP. There is no panacea. There will be short-term margin pain. Company leadership will pay for the sins of its past.

Wall Street calls this over-earning: creating performance that was better than it deserved to be. Note: short-term investors happily took that result and ran with it. But the music always stops one day. Inflation made the music stop.

Because of inflation, boards and executive leadership should by now have understood that (painful) reinvestment, with a long-range plan and new narrative to stakeholders, is really the only correct and viable option. If they insist on short-term only, as per the natural reflex mentioned above, value investors should take their money out now.

Both willingness to pay and willingness to sell drive margins

There is another, often overlooked, ‘hidden turbo’ side to pricing power that CMOs might want to make their CFOs and CEOs aware of, visualised in the simple yet powerful concept of the ‘value stick’, created by Harvard professor Felix Oberholzer-Gee in his 2021 book Better, Simpler Strategy.

Creating WTP is the top-line concept marketers rightfully (and uniquely) should continue to focus on. However, now look at the bottom of the value stick. Companies can also grow margin by lowering costs. Not just by smartly cutting costs in the classic sense, but by elegantly leveraging pricing power to further lower what Oberholzer-Gee calls ‘willingness to sell’ (WTS) – the lowest price a company can accept for a product or service.
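The arithmetic of the value stick is simple enough to write down. A toy example with invented figures, following the WTP/WTS definitions above:

```python
# Value stick with hypothetical numbers (all figures invented).
wtp = 120    # willingness to pay: the most a customer would pay
price = 90   # actual market price
cost = 60    # the firm's unit cost
wts = 40     # willingness to sell: the least suppliers/employees would accept

customer_delight = wtp - price     # value kept by the customer
firm_margin = price - cost         # value kept by the firm
supplier_surplus = cost - wts      # value shared with suppliers
total_value = wtp - wts            # total value the firm creates

print(customer_delight, firm_margin, supplier_surplus, total_value)
# Raising WTP (brand building) or lowering WTS (cheaper talent, debt,
# supply) both stretch the stick - and with it, the margin potential.
```

The point of the stick: margin can grow from either end, without touching the price at all.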

Intuitively, one can see how a company with aspirational, strong brands – ie brands with sustainable pricing power – gains a number of hidden WTS benefits that can put a turbo on the full margin, including (but not limited to) the below:

  • Employees: may join and stay longer at lower cost
  • Debt: banks may offer lower interest cost for lower risk
  • Equity: investors may pay more for your shares
  • Suppliers: may accept better payment terms to get you as a client
  • Customers: may accept you as a reference supplier at lower cost
  • Regulators: may accept lower risk-mitigation cost

There is one more interesting dimension to this hidden turbo. Mark Ritson has rightfully questioned the one-dimensional worship of purpose in various Marketing Week articles since last year, highlighting recently that much of the self-serving research on the impact of purpose needs to be taken with a grain of salt. Still, Oberholzer-Gee’s research suggests it pays to be stakeholder-focused.

His book suggests the potential WTP is higher for a stakeholder-focused corporation, while its potential WTS can be significantly lower. Hence, the margin potential of a company focused on building brands with sustainable pricing power is higher than that of a company only focused on short-term shareholder value.

Close your eyes and picture a company you personally would never work for, versus a purpose-driven company you love (maybe Patagonia?). How much would each have to pay you to earn your soul? Which company benefits from the lower WTS? It makes intuitive sense. It all goes back to the sustainability of your brand.


Think also about the ethics of pricing power. It is not because you can increase prices, or can lower WTS, that you should. Energy monopolies are now accused of abusing their monopoly pricing power. Their shareholders will love the gains and may not care. But the board should, as in the end reputations matter. Governments rightfully will come and tax the super-earnings (or at least say they will). Greece has already shown its EU colleagues it is possible.

Amazon Prime increased its yearly subscription in the US by 17%, from $119 to $139 in the last year, compared to about 7-10% inflation. Given its pricing power, the brand probably can. It worked hard for that privilege. The brand invests a lot in superior customer service (CX) and has earned our trust.

The strategic question for the Amazon leadership is about stakeholder versus shareholder. Consumer empathy in tough times. How much does Amazon, a self-proclaimed (and proven) CX champion, really feel the pain of its shoppers?

What if the board and CEO had said: “We know you are experiencing tough times, so we’ll postpone our price increase?” What would it have done for the reputation and long-term gains in WTP and WTS, especially for a company that needs to attract so much talent to keep growing?

The ‘Genius of the AND’

While many strategic choices mean ‘or’, in this case the hard choice is ‘and’. Being purpose-driven remains the necessary condition for success, while investing in building pricing power is the sufficient one. Marketers need to invest much more time with their underserved internal and external finance stakeholders. It is all about language. Main Street needs to learn to speak better Wall Street, and vice versa. Ignoring each other is bad for both sides.

The value stick may be a valuable practical tool to help you visualise the idea of pricing power inside your company. However, nobody gets their margins handed to them just like that. Creating higher WTP or lower WTS is the first hard choice to make. Monetising it requires daily operational excellence. Companies need to negotiate hard and smart to get this done. A very focused organisation is necessary to translate potential WTP and WTS into actual margin. That requires a very close collaboration between marketing, finance, and sales.

It is all about balanced growth. It is about purpose and pricing power. Sustainable pricing power.

Chris Burggraeve is the founder of Vicomte, former global CMO of AB InBev, and former president of the World Federation of Advertisers. Find more information on his books on the topic of marketing-finance collaboration on LinkedIn or www.vicomte.com

Digital twins: when innovation sees double (source: lehub.laposte.fr)

Published 23/02/2023

Some sectors, such as the automotive industry, already cannot do without them. Digital twins are finding new applications every day. As replicas of the human body, of factories, stores and even cities, they are helping to accelerate innovation.

source: https://lehub.laposte.fr/dossier/jumeau-numerique-quand-innovation-voit-double

What if, tomorrow, your digital twin took your place on the operating table, so that the surgeon could make sure the procedure he is about to perform on you will have the intended effects? Science fiction, you think… Not quite any more. “At Boston Children’s Hospital, surgeon-engineer David Hoganson treats children with congenital heart defects. Before operating, he creates a digital twin of their heart and tests different ways of intervening. He can thus determine the best approach to save these children,” explains Claire Biot, Vice-President for the Life Sciences and Healthcare Industry at Dassault Systèmes, in the interview she gave us.

Healthcare is one of the most promising fields for digital twins, with a very wide variety of applications. “One in two drugs is now designed with one of our digital twin solutions,” Claire Biot notes. What is a digital twin? It is not just a digital mock-up of an element of the physical world. It is an exact, dynamic reproduction of how that element works, powered by data that feeds it and lets it simulate future behaviour. In drug development, for example, digital twin technology makes it possible to run virtual clinical trials by comparing the effects of a new treatment with data obtained in earlier trials. “By drawing on a library of virtual patients, we can halve the duration of a clinical trial, which dramatically accelerates market access for therapeutic innovation,” says Claire Biot.

Accelerating innovation is not the only advantage of digital twins. They also foster better collaboration between the various players. “To successfully build a digital twin, you have to get disciplines that are sometimes very far apart to work together around a shared representation,” the Dassault Systèmes vice-president continues. “These different teams have to pool their knowledge and know-how, which breaks down silos.”

The automotive industry can no longer do without it!

Healthcare is far from the only sector transformed by the rise of the digital twin. “The automotive industry can no longer do without it,” the Renault group acknowledges on its website. “It has revolutionised the working methods of everyone involved in designing, manufacturing and repairing a vehicle.” The technology can shave up to a year off the design of a car. Engineers use the digital twin “to test, for example, the aerodynamics of the body, engine performance, transmission management and even the efficiency of the air conditioning through virtual simulation,” Renault explains.

Thanks to the digital double, crash testing has been completely rethought. Like clinical trials in healthcare, crash tests are now virtual first. The behaviour of a future car is tested on digital roads through driving scenarios. “All these simulations are carried out far faster, and in far greater numbers, than physical tests ever could be.” Physical prototypes remain essential, but there are far fewer of them, and they are built from vehicle versions that have already been validated digitally.

Once the car is finalised and production is under way, the digital twin does not disappear. Renault notes, however, that the logic is reversed: “Before the vehicle is manufactured, the digital twin ‘feeds’ the physical one. Once it leaves the factory, it is the other way round.” Logistics, sales and after-sales teams feed data back to the digital twin based on customer feedback, real-world use of the model and any breakdowns that are reported, with the aim of delivering updates and further improving subsequent versions.

Digital twins to optimise the supply chain

“In the future, I believe no major decision will be taken without first testing it in a tool of this kind, to see the possible impact on a range of indicators: costs, productivity, CO2 emissions, and so on,” Michel Morvan, co-founder and executive chairman of Cosmo Tech, which builds digital twin software for industry, predicts on the Voxlog website. He cites one of his clients, Michelin. The tyre maker wanted to rethink its production operations in China around several goals: cutting CO2 emissions, reducing costs, optimising distribution, and building resilience to fluctuating demand. “The digital twin simulated their current business, and we then stress-tested it with nearly 80,000 reorganisation simulations, something they had never been able to do before,” Morvan recounts. “That allowed us to find a way to optimise their supply chain, increasing their profits by 5% and cutting their transport and customs costs by 60%. The solution we found was very different from anything Michelin’s own specialists had imagined; it would have been very hard for them to come up with it without the tool.”
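The kind of scenario sweep described above can be sketched in a few lines: a simple cost function stands in for the digital twin, and candidate reorganisations are evaluated against it exhaustively. All cost coefficients below are invented for illustration.

```python
# A toy "digital twin" of a supply chain: total cost as a function of how
# volume is split between two plants and two transport routes.
def simulate_cost(plant_a_share: float, rail_share: float) -> float:
    production = 100 * plant_a_share**2 + 80 * (1 - plant_a_share) ** 2
    transport = 50 * rail_share + 70 * (1 - rail_share)
    return production + transport

# Scenario sweep: evaluate thousands of candidate reorganisations against
# the twin and keep the cheapest - far more than a planner could test by hand.
candidates = [(a / 100, r / 100) for a in range(101) for r in range(101)]
best = min(candidates, key=lambda s: simulate_cost(*s))
print("best plan:", best, "cost:", round(simulate_cost(*best), 1))
```

The optimum found this way need not resemble any plan a specialist would have proposed – which is exactly Morvan’s point about Michelin.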

Using a digital twin to optimise the supply chain is, of course, a major investment. “The implementation cost is significant, and the question of ROI is central. Deployment requires large-scale data collection, which is expensive. But the ability to better understand the impact of supply delays, a major issue right now, is changing minds quickly,” observes Guillaume Lechevallier, head of the Transport & Industry practice at digital transformation specialist Mc2i, in Ecommercemag.fr.

Today’s highly uncertain environment, with component shortages and lengthening lead times, is indeed working in favour of digital twins in industry. “In times of crisis, we see that management teams suffer from not having enough visibility into the decisions taken for their supply chain. They also lack the agility to react quickly without going through long cycles of calculation, data consolidation and back-and-forth,” says Fabienne Cetre, sales director at Kinaxis, a supply chain management software vendor. A digital twin lets them test several scenarios and measure their consequences in real time.

Lowe’s creates virtual doubles of its stores

In the United States, the home improvement chain Lowe’s (comparable to Leroy Merlin) is experimenting with another use of the technology. It has created digital twins of two of its stores, the Chain Store Age website explains. Wearing an augmented reality headset, employees see this virtual replica overlaid on the physical store. They can thus spot gaps between what the store should look like and what it actually looks like: are products shelved in the right place? They can also communicate with management by attaching virtual Post-it notes to the digital twin, to suggest an improvement or flag a problem. Nor do they need to climb a ladder to check which product is hiding at the top of a shelf: with their augmented reality headset, they get an X-ray view of the store and can pull up information on every item stocked there. Lowe’s also uses digital twin technology to improve the design of its stores before they open.

A decision-support tool for the smart city

A human organ, an object, a factory, a store, an entire supply chain… Digital twins can also be applied at an even larger scale: a whole city. Shanghai and Singapore both have digital twins that help improve the design and operation of buildings and transport systems. But there is no need to travel that far to find digital replicas of cities. In France, Angers, Rennes and Aulnay-sous-Bois use them as decision-support tools.

"This tool improves land-use planning and helps manage flood risk, for example," argues a representative of the consortium (Ineo, Engie, La Poste, Vyv and Suez) in charge of the Angers Loire Métropole smart-territory project. "To put it simply, the digital twin can simulate situations that will not occur before 2030, 2040 or 2050. What impact will a new development, a building or a school, for example, have on its surroundings? Is there a risk of creating a new heat island? If a flood occurs, which districts and which public-facing facilities will be affected, and to what extent?"

In Aulnay-sous-Bois, the digital twin is dedicated to mobility. Its aim is to analyse users' behaviour and choices and to quantify CO2 emissions in order to guide the local authority's strategies, explains Insight Signals, which is developing it: "Very diverse scenarios and projects can be tested, such as the introduction of a Low Emission Zone or the impact of telework policies differentiated by sector. The effect of a new public transport line on traffic flows and CO2 emissions across an area can also be modelled."
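The kind of scenario comparison described here can be sketched very simply: aggregate CO2 from commuting trips under a baseline and under a policy scenario. This is not Insight Signals' model; the emission factors, distances, and trip counts below are all made-up assumptions.

```python
# Toy mobility-twin sketch: how a policy scenario (here, telework cutting
# car trips by 30%) shifts aggregate daily CO2. All numbers are invented.
EMISSION_G_PER_KM = {"car": 180, "bus": 80, "bike": 0}  # assumed factors

def daily_co2_kg(trips):
    """trips: list of (mode, km, trip_count) -> total CO2 in kg per day."""
    return sum(EMISSION_G_PER_KM[mode] * km * n for mode, km, n in trips) / 1000

baseline = [("car", 12, 10_000), ("bus", 12, 3_000)]
telework = [("car", 12, 7_000), ("bus", 12, 3_000)]  # 30% fewer car trips

print(daily_co2_kg(baseline), daily_co2_kg(telework))
```

A real urban twin would replace the flat emission factors with calibrated traffic and behaviour models, but the principle, comparing scenario outputs before acting, is the one the article describes.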

An original initiative has just begun in Mulhouse. The city is developing a digital twin with EDF to support the transformation of an industrial site into a fully fledged neighbourhood. The city sees it as a collaborative platform for pooling data from the neighbourhood's stakeholders in order to offer new services to residents. "One of the goals is to avoid multiplying equipment that is used for very little time but has a full-time environmental impact. That could mean meeting rooms shared between companies. The same principle applies to individuals, with objects we could own collectively. The digital twin could locate shared equipment and objects and make their collective use easier," projects Jean-Philippe Bouillé, Deputy Mayor of Mulhouse in charge of urban planning, in the interview he gave us.

Tomorrow, a replica of the Earth

Next step: a digital twin of the Earth? Here again, this is no longer science fiction. Two projects have been launched to build a virtual double of our planet. One is American, led by the National Oceanic and Atmospheric Administration (NOAA). The other is European, funded by the European Commission under the name DestinE, for Destination Earth. The goal is the same: to better understand the impacts of climate change. "Thanks to DestinE's unprecedented observation and simulation capabilities, we will be better prepared to respond to major natural disasters, adapt to climate change and predict socio-economic impact," says the European Commission. By the end of 2024, DestinE will have enough data to create a first digital twin focused on natural hazards such as floods and droughts, as well as earthquakes, volcanic eruptions and tsunamis. The full digital replica of the Earth is planned for 2030. That is already tomorrow.

In an advert for Workday, Osbourne, Idol, Jett, Stanley and Gary Clark Jr. shared their definition of what a rock star is and pleaded with "corporate types" to stop using the label to describe themselves.

Workday Big Game Spot: Rock Star – YouTube

Ozzy Osbourne, Billy Idol, Joan Jett, KISS' Paul Stanley and more have come together to define what a rock star is in a new Super Bowl advert.

The Philadelphia Eagles and the Kansas City Chiefs face off in the big game in Glendale, Arizona, on February 12.


“Do you know what it takes to be a rock star?” Jett asked before she and her fellow musicians explained further. “I’ve trashed hotel rooms in 43 countries,” Idol said, while Osbourne added: “I’ve done my share of bad things – also your share of bad things.”

“Hey corporate types, would you stop calling each other rock stars?” Stanley opened the advert. “Rock stars, please.”

Rihanna is this year’s Super Bowl Halftime Show performer. Speaking at an Apple Music press conference in the days before the big event, she said: “The Super Bowl is one of the biggest stages in the world, it’s an entertainer’s dream to be on a stage like that. But it’s nerve-racking. You want to get it right. You know, everybody’s watching. And they’re rooting for you. And I want to get it right.”

She also revealed that she had already changed the setlist for the show 39 times. “Some songs we have to lose because of that, and that’s going to be OK,” she explained. “We did a pretty good job at narrowing it down. There’s probably been about 39 versions of the setlist right now. We’re on our 39th. Every little change counts.”

Chris Stapleton kicked off the big night by performing the US national anthem. The country star’s rendition of ‘The Star-Spangled Banner’ brought Eagles coach Nick Sirianni to tears.

Another star-studded Super Bowl advert saw Cardi B and Offset launch their own McDonald’s meal, following the likes of BTS and Travis Scott in the collaboration. Dave Grohl also delivered a salute to Canada in a new advert for whiskey brand Crown Royal.

Meanwhile, U2 are also reportedly set to announce a new Las Vegas residency during the game. The Irish band teased the announcement earlier tonight with a clip shared on social media with the hashtag #U2SPHERE.

Congrats Kansas City Chiefs (Super Bowl LVII winners) – What were the best TV commercials? (source: as.com)

AS USA picks five stand-out ads that aired during Sunday’s Super Bowl LVII clash between the Kansas City Chiefs and the Philadelphia Eagles.

William Allen

Update: February 12th, 2023 23:26 EST

Having shelled out a reported $7m per 30-second spot, the advertisers that bought Super Bowl LVII airtime also spared little expense in the creation of commercials destined to be seen by a nine-figure audience.

source: Super Bowl 2023: what were the best TV commercials? – AS USA

As ever, TV viewers were treated to an array of star-studded ads. These are five of our favorites:

PopCorners: Breaking Good

With the slogan "break into something good", PopCorners reunited Breaking Bad duo Walter White and Jesse Pinkman to flog their snacks.

Pepsi: Great Acting or Great Taste?

Ben Stiller got booted in the face and Derek Zoolander put in a welcome appearance in this chortle-worthy Pepsi ad.

Dunkin’ ‘Drive-Thru’

Ben Affleck manned the Dunkin’ Donuts drive-thru counter – and incurred the wrath of wife Jennifer López.

Squarespace: the Singularity

Adam Driver went full meta in this commercial for Squarespace: “a website that makes websites”.

T-Mobile: Bradley Cooper and His Mom

Bradley Cooper and his mother giggled their way through this ad for telecommunications company T-Mobile.


Originally published by PRNewswire, Jan. 17, 2023.

Boston While Black today welcomed Boston-based advertising agency Arnold Worldwide and the Havas Boston Village, which includes Havas Health & You and Havas Media, as corporate partners. This partnership will provide Boston While Black (BWB) memberships to the group’s Black employees, connecting them to an expansive network of Black professionals. The collaboration also supports Boston While Black through marketing services and resources to help advance their ongoing efforts in the Greater Boston community.

“We are excited to welcome Arnold Worldwide and the Havas Boston Village to our community. Together we will take significant strides towards fulfilling the Boston While Black mission of shaping a city where Black people want to live and work,” said Sheena Collier, Founder and CEO of Boston While Black. Collier added, “Partnering with a company who specializes in storytelling is important to us as we work together to rewrite the narrative in Boston.”

“We are thrilled to partner with Boston While Black as we continue to deepen our culture of inclusion and belonging. This partnership is about providing our Black employees with support, resources, and a strong sense of community that must exist both inside and outside of our walls,” said George Sargent, CEO of Arnold. “We have a responsibility to foster a culture of inclusion and belonging for our people, and we’re committed to supporting organizations like Boston While Black that exist to help us do just that.”

As part of the partnership, all Black employees of the Havas Boston Village will have access to Boston While Black’s portfolio of resources that includes networking opportunities, social events, wellness initiatives, professional workshops, and more. They will also get early access for tickets to BWB’s 2nd Annual How to Boston While Black Summit, taking place April 13–24, 2023.

“We are excited to invest in this partnership and are inspired by the work Sheena and her team are doing to support Black professionals in our community both personally and professionally,” said Sargent.

The Havas Boston Village is committed to supporting Boston While Black in helping Black individuals engage and connect in Greater Boston and find their tribe. This partnership aligns with Havas’ ongoing Commit to Change pledge. Boston While Black will be instrumental in building a more diverse, inclusive, and just Havas to provide a meaningful experience for all.

About Arnold:

Arnold is an independent-minded integrated advertising agency that makes it Safe to be Brave. We transform brands into household names and grow businesses by delivering breakthrough, culturally connected work. Arnold is headquartered in Boston and is part of the Havas Group. Learn more on arn.com, or follow us on LinkedIn, Twitter, and Instagram.

For more information about Arnold Worldwide, visit www.arn.com.

About Havas Health & You:

Havas Health & You unites Havas Life, Health4Brands (H4B), Lynx, Red Havas and Havas Health Plus, all wholly owned health and wellness communications networks, with the consumer health businesses and practices of Havas Creative Group. The network’s approach is centered around Human Purpose and has the talent, tenacity, and technology that health companies, brands and people need to thrive in today’s world.

For more information, go to www.HavasHealthandYou.com.

About Havas Media:

Havas Media Group (HMG) is the media experience agency. HMG delivers this brand promise through the Mx System, where meaningful media helps build more meaningful brands. HMG is part of the Havas Group, owned by Vivendi, one of the world’s largest integrated content, media, and communications groups. HMG also consists of two global media networks: Havas Media and Arena Media. The media experience agencies are home to specialists across 150 countries worldwide, with 68 Villages. Clients include Unilever, Sanofi, JDE, Hyundai Kia, Puma, TripAdvisor, Michelin, Telefónica, Swarovski, Reckitt Benckiser, among many others.

For more information, visit the website or follow Havas Media Group on Twitter @HavasMedia, LinkedIn @Havas Media Group, Facebook @HavasMediaGroup or Instagram @havas.

About Boston While Black

Boston While Black is the first membership network for Boston-based Black professionals, entrepreneurs, and students who are seeking connection and community. Through programs, events, and a digital community, they connect members who are active in their local neighborhoods, working at the most innovative companies, building the businesses of the future, attending the area’s universities, and shaping the culture of the region. Boston While Black is creating a city where Black people want to live and work because they have the spaces, tools, and relationships to find their tribe, grow their network, navigate the city, and have fun.

The Boston While Black community is powered by the experiences of its creator, Sheena Collier, who moved here fifteen years ago to attend Harvard and had her own challenges navigating Boston, particularly as a Black woman.

Learn more at www.bostonwhileblack.com.