“Amazon will accelerate the convergence of retail and advertising”

Guillaume Planet, VP media & digital marketing at Groupe SEB and former media agency director (Havas, Fullsix, Dentsu Aegis…), shares his analysis of the upheaval that Amazon's expanding advertising business is causing in the advertising sector.

 

By disclosing, in its successive financial communications since early 2018, advertising sales figures showing very strong, not to say stunning, growth (roughly $4.2 billion over the first six months of the year), Amazon has made official its arrival among the major players in the advertising market. The research firm eMarketer even forecasts that the group will become the third-largest online advertising player in the United States as early as 2018, ahead of Microsoft and Oath.

source: https://www.mindnews.fr/article/13318/amazon-va-favoriser-le-rapprochement-du-retail-et-de-la-publicite/

And this is only the beginning, because the bridges between its online retail business and its advertising business offer it enormous prospects.

Looking at the retail sector specifically, one can expect Amazon's moves to inspire other players, so shrewd is this evolution of the model. It rests on several levers:

1 – A competitive advantage

Amazon holds a double competitive advantage over other sellers of advertising space: massive ownership of transactional data, combined with a position as customer rather than supplier vis-à-vis the brands that buy that advertising space.

2 – A virtuous circle of purchases, data, and advertising

Amazon enjoys a virtuous circle: advertising investments funded by brands drive highly qualified traffic thanks to data; that traffic feeds the retailer's core sales business, which in turn generates even more data that feeds back into the advertising business.

3 – A margin lever

Above all, the advertising business, especially given the assets described above, offers Amazon prospects of high profitability, whereas the trading business is inherently low-margin.

 

What impact on the advertising market?

 

Amazon's growth has a threefold impact on the advertising sector:

1 – The platforms are following the same path

The other aspiring platform players will have to move closer to the retail world. Google has made this a priority, as its recent partnerships with Walmart and Carrefour and its investment in JD.com show. Tencent is doing the same, and Facebook is very likely interested as well, as the marketplace currently being tested on its platform suggests.

The opportunities for these players are indeed great in terms of highly relevant data to feed the effectiveness of the solutions they offer. Retail also opens access to other types of brand marketing budgets: the famous "BTL" (below-the-line) budgets dedicated to points of sale, which are often larger than advertising budgets.

2 – Opportunities for new players

These market developments in Amazon's wake create opportunities for a new type of player: pure players in retail advertising, such as Criteo.

3 – Traditional media pushed further to the margins

Above all, Amazon is pushing the historical players in advertising sales (I mean traditional media here) a little further toward a marginal role in this business model. Already shaken by Google and Facebook, they face a growing number of competitors better equipped to benefit from the transformation of the advertising sector: rich in highly relevant data, mature in digital and data expertise, owners of sophisticated tech infrastructure, and financially very powerful.

Faced with this new reality, media companies are increasingly approaching the subject in the right way. After a period of denial and demonization of the GAFA companies, they are now investigating new business models and rethinking their relationship with Google and Facebook, which should be treated as partners that can help them engage their audiences as effectively as possible.

 

How should large retailers react?

 

Retailers are reacting in different ways. Surprisingly, they tend to move straight into the arms of their new competitors: Walmart and Carrefour partner with Google, Carrefour with Tencent, Auchan with Alibaba, Monoprix with Amazon… Their objective is to learn through these partnerships, but the risks are obviously significant.

What are those risks? That Monoprix, by joining Amazon's marketplace, loses access to the data that drives retail's new virtuous model. That Carrefour and Walmart hand Google, a potential future competitor (at least on the advertising side), the opportunity to build its experience curve in the retail world. And that Auchan takes the risk of giving Alibaba the keys to understanding new target markets.

Partnering with players that are mature in digital and data but less threatening (such as Criteo on the advertising side) would probably be a less risky way to learn the new rules of this sector.


Havas Media Belgium strengthens its team – 13 new talents in 6 months: experience and youth, a winning formula!


In addition to setting up an intensive training program, Havas Media has also recruited numerous talents who now strengthen all of the agency's entities.

At the start of the year, the agency already welcomed experienced profiles such as Marc Dewulf (ex-Dentsu Aegis) as COO overseeing all areas of expertise; Ruben Ceuppens (ex-Social Lab) as Head of Socialyse; and Arnaud Destrée (ex-GroupM) as Head of Programmatic.

Since May, no fewer than 10 new talents have joined the agency!
Patricia Lo Presti (ex-UM) joins the commercial team (client service) as Account Director, along with Maurine Piette (ex-MediaCom) as Account Manager and Josiane Uwimana as Account Executive.

 

Caroline Grangé (ex-IPM) has taken charge of Publishing (print and digital), supported by Aurélie Renquet (ex-IPM) and Séfana Zoufir, respectively Publishing Account Manager and Publishing Account Executive.

 

Sandra Ruiz-Pelaez (ex-Havas Media Barcelona, ex-OMD España) brings her international experience as a Performance Expert.
Gaetan Ickx, who holds a PhD in biomedical sciences from UCL (CSA, Data Analyst), Céline Denoiseux (Broadcast, Account Executive) and Julien Droulans (Operations Coordinator) round out the recruitment of these first eight months.
Hugues Rey, CEO Havas Media Group: “We have been operating in a stable environment for many months now, which translates into better-targeted recruitment; we are investing more in finding and training the right profiles, people who bring real added value to the organization. We are delighted to welcome talents from diverse backgrounds who enrich our teams with their experience.”

Artificial intelligence and machine learning: What are the opportunities for search marketers? (Author: Albert Gouyet)

Did you know that by 2020 the digital universe will consist of 44 zettabytes of data (source: IDC), while the human brain can only hold the equivalent of about 1 million gigabytes of memory?

Source: https://searchenginewatch.com/2018/01/02/artificial-intelligence-and-machine-learning-what-are-the-opportunities-for-search-marketers/

The explosion of big data has meant that humans simply have too much data to understand and handle daily.

For search, content, and digital marketers to make the most of the valuable insights that data can provide, it is essential to utilize artificial intelligence (AI) applications, machine learning algorithms, and deep learning to move the needle of marketing performance in 2018.

In this article, I will explain the advancements and differences between artificial intelligence (AI), machine learning and deep learning while sharing some tips on how SEO, content and digital marketers can make the most of the insights – especially from deep learning – that these technologies bring to the search marketing table.

I studied artificial intelligence in college and took a job in the field after graduating. It was an exciting time, but looking back now, our programming capabilities were rudimentary. More than intelligence, it was algorithms and rules doing their best to mimic how intelligence solves problems, with best-guess recommendations.

Fast forward to today and things have evolved significantly.

The Big Bang: The big data explosion and the birth of AI

Since 1956, AI pioneers have been dreaming of a world where complex machines possess the same characteristics as human intelligence.

In 1996, the industry reached a major milestone when IBM's Deep Blue computer defeated a chess grandmaster by considering 200 million chessboard positions a second to find optimal moves.

Between 2000 and 2017, there were many developments that enabled great leaps forward. Most important were the geometric increases in the amount of data collected, stored, and made retrievable. That mountain of data, which came to be known as big data, ushered in the advent of AI.

And it keeps growing exponentially: in 2016 IBM estimated that 90% of the world’s data had been generated over the last few years.

When thinking about AI, machine learning, and deep learning, I find it helps to simplify and visualize how the three categories work and relate to each other; this framework also holds from a chronological, subset-development, and size perspective.

Artificial intelligence is the science of making machines do things that require human intelligence. It is human intelligence in machine form: computer programs make data-based decisions and perform tasks normally performed by humans.

Machine learning takes artificial intelligence a step further: algorithms are programmed to learn and improve from data without having to be explicitly reprogrammed by humans.

Machine learning can be applied to many different problems and data sets. Google’s RankBrain algorithm is a great example of machine learning that evaluates the intent and context of each search query, rather than just delivering results based on programmed rules about keyword matching and other factors.
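To make the contrast with rule-based programming concrete, here is a minimal, hypothetical sketch (not Google's RankBrain) of a model that learns query intent from labeled examples rather than from hand-written keyword rules. It assumes scikit-learn is installed, and the tiny training set is purely illustrative:

```python
# Hypothetical sketch: learn query intent from examples, not keyword rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "buy running shoes online",   # transactional
    "cheap flights to paris",     # transactional
    "how to tie running shoes",   # informational
    "what is machine learning",   # informational
]
intents = ["transactional", "transactional", "informational", "informational"]

# The pipeline learns word-weight patterns from the labeled examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(queries, intents)

print(model.predict(["where to buy trail shoes"]))  # likely ['transactional']
```

Adding more labeled queries improves the model without anyone rewriting matching rules, which is the essence of machine learning.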

Deep learning is a more specialized algorithmic approach within machine learning that exposes data to multi-layered neural networks (loosely modeled on the human brain) so that the technology trains itself to perform tasks such as speech and image recognition.

Massive data sets are combined with pattern-recognition capabilities to automatically make decisions, find patterns, and emulate previous decisions. This is where self-learning comes from: the machine gets better as it is supplied with more data.

Driverless cars, Netflix movie recommendations, and IBM's Watson are all great examples of deep learning applications that break tasks down to make machine actions and assists possible.
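As a rough illustration of the self-training idea (a toy model, not how Watson or a driverless car actually works), the following sketch trains a small neural network on scikit-learn's bundled handwritten-digit images; no rules about stroke shapes are ever coded:

```python
# Toy example: a small neural network teaches itself digit recognition.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units; the network learns features from the pixels.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"held-out accuracy: {net.score(X_test, y_test):.2f}")  # typically ~0.97
```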

Organic search, content and digital performance: Challenge and opportunity

Organic search (SEO) drives 51% of all website traffic, so in this section it is only natural to explain the key benefits that deep learning brings to SEO and digital marketers.

Organic search is a data-intensive business. Companies value and want their content to be visible on thousands or even millions of keywords in one to dozens of languages. Search best practices involve about 20 elements of on-page and off-page tactics. The SERPs themselves now come in more than 15 layout varieties.

Organic search is your market-wide voice of the customer, telling you what customers want at scale. However, marketers are faced with the challenge of making sense of so much data, having limited resources to mine insights and then actually act on the right and relevant insight for their business.

Succeeding in highly demanding markets against many competing brands now requires the expertise of an experienced data analyst, and this is where machine learning and deep learning layers help by recommending content optimizations.

Connecting the dots with deep learning: Data and machine learning

The size of organic search data and the number of potential patterns within it make it a perfect candidate for deep learning applications. Unlike simple machine learning, deep learning works best when it can analyze a massive amount of relevant data over long periods of time.

Deep learning's ability to identify and prioritize material changes in interests and consumption behavior lets organic search marketers gain a competitive advantage, stay at the forefront of their industry, and produce the material people need before their competitors do, boosting their reputation.

In this way, marketers can begin to understand the strategies put forth by their competitors. They will see how well they perform compared to others in their industry and can then adjust their strategies to address the strengths or weaknesses that they find.

  • The insights derived from deep learning technologies blend the best of search marketing and content marketing practices to power the development, activation, and automated optimization of smart content, content that is self-aware and self-adjusting, improving content discovery and engagement across all digital marketing channels.
  • Intent data offers in-the-moment context on where customers want to go and what they want to know, do, or buy. Organic search data is the critical raw material that helps you discover consumer patterns, new market opportunities, and competitive threats.
  • Deep learning is particularly important in search, where data is copious and incredibly dynamic. Identifying patterns in data in real-time makes deep learning your best first defense in understanding customer, competitor, or market changes – so that you can immediately turn these insights into a plan to win.

To propel content and organic search success in 2018, marketers should let the machines do more of the legwork, providing the insights and recommendations that free marketers to focus on creating smart content.

Below are just a few examples of the benefits for the organic search marketer:

Site analysis

Pinpoint and fix the critical site errors that drive the greatest benefits to a brand's bottom line. Deep learning technology can ingest website data, detect anomalies, and tie site errors to estimated marketing impact so that marketers can prioritize fixes for maximum results.

Without a deep learning application to help you, you might be staring at a long list of potential fixes that typically get postponed indefinitely.
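A hedged sketch of that prioritization logic, with invented numbers: flag the days whose error counts deviate sharply from the norm, then rank them by estimated traffic impact rather than raw error count:

```python
import statistics

# Invented daily figures: 404 errors and estimated organic visits lost.
daily_errors = [12, 14, 11, 13, 96, 12, 15, 88, 13, 12]
est_lost_visits = [30, 35, 28, 33, 2400, 31, 37, 2100, 34, 29]

mean = statistics.mean(daily_errors)
stdev = statistics.stdev(daily_errors)

# Flag days more than 1.5 standard deviations from the mean.
anomalies = [
    (day, errs, lost)
    for day, (errs, lost) in enumerate(zip(daily_errors, est_lost_visits))
    if abs(errs - mean) > 1.5 * stdev
]

# Prioritize by estimated marketing impact, not raw error count.
for day, errs, lost in sorted(anomalies, key=lambda a: -a[2]):
    print(f"day {day}: {errs} errors, ~{lost} lost visits -> fix first")
```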

Competitive strategy

Identifying patterns in real time makes deep learning a brand's best first defense in understanding customer, competitor, or market changes, so that marketers can immediately turn these insights into a plan to win.

Content discovery

Surface high-value topics that target different content strategies, such as stopping competitive threats or capitalizing on local demand.

Deep learning technology can be used to assess the ROI of new content items and prioritize their development by unveiling insights such as topic opportunity, consumer intent, characteristics of top competing content, and recommendations for improving content performance.

Content development

Score the quality and relevance of each piece of content produced. Deep learning technology can help save time with automated tasks of content production, such as header tags, cross-linking, copy optimization, image editing, highly optimized CTAs that drive performance, and embedded performance tracking of website traffic and conversion.
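As a rough illustration of content scoring (real platforms use far richer signals than this), the sketch below rates two hypothetical drafts against a target topic using TF-IDF cosine similarity; it assumes scikit-learn is installed:

```python
# Rough sketch: score drafts against a target topic with TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "deep learning for organic search optimization"
drafts = [
    "How deep learning models improve organic search rankings",
    "Ten summer recipes for the whole family",
]

tfidf = TfidfVectorizer().fit_transform([topic] + drafts)
scores = cosine_similarity(tfidf[0], tfidf[1:])[0]

for draft, score in zip(drafts, scores):
    print(f"{score:.2f}  {draft}")  # the on-topic draft scores higher
```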

Content activation

Deep learning technology can help ensure that each piece of content is optimized for organic performance and customer experience, such as schema for structure, AMP for better mobile experiences, and Open Graph for Facebook. Technology can also help marketers amplify their content in social networks for greater visibility.

Automation

Automation helps marketers do more with less and execute more quickly. It allows marketers to manage routine tasks with little effort, so that they can focus on high-impact activities and accomplish organic business goals at scale.

Note: To make the most of the insights and recommendations from deep learning, marketers need to take action and make the relevant changes to web page content to keep website visitors engaged and, ultimately, converting.

Additionally, because the search landscape changes so frequently, deep learning fuels the development of smart content and can be used to automatically adjust to changes in content formats and standards.

Deep learning in action

An example of deep learning in organic search is BrightEdge DataMind (disclosure: BrightEdge is my employer). DataMind is like a virtual team of data scientists built into the platform, combining massive volumes of data with immediate, actionable insights to inform marketing decisions.

In this case the deep learning engine analyzes huge, complex, and dynamic data sets (from multiple sources that include 1st and 3rd party data) to determine patterns and derive the insights marketers need. Deep learning is used to detect anomalies in a site’s performance and interpret the reasons, such as industry trends, while making recommendations about how to proceed.

Conclusion

Think of deep learning applications as your own personal data scientist, there to help and assist rather than replace. Adopting AI, machine learning, and now deep learning technologies allows for faster decisions and more accurate, smarter insights.

Brands compete in the content battleground to ensure their content is optimized and found, engages audiences and ultimately drives conversions and digital revenue. When armed with these insights from deep learning, marketers get a new competitive weapon and a massive competitive edge.

The Future of Data Analytics for Retail

Source: The Future of Data Analytics for Retail

Late last year, Amazon premiered a system that may well be the future of shopping. Nicknamed Amazon Go, it looks just like a regular brick-and-mortar store, except there are no lines, no self-checkout machines, and no cashiers. The items you buy are tracked by sensors, your account is charged through your mobile Amazon Go app, and you can just walk out of the store whenever you please.

Amazon Go is a revolutionary spin on retail, commerce, and the experience of going to a store. What’s really special about Amazon Go, however, is what it represents in terms of data.

All across the retail universe, the rapidly widening Internet of Things is becoming equipped for high-frequency event analytics. Across the board, that means faster decision-making, more helpful data, and smarter, more cost-efficient businesses.


Retail and event-driven analytics

The “event-driven” company, according to VC Tom Tunguz, is one that consumes events as they occur, in real-time, from whatever data sources are available.

Rather than record data manually—making all your data liable to corruption—event-driven companies have set up the pipelines they need to always be collecting up-to-date, quality information.
(Diagram: the stages of an event-driven SaaS company. Source: Tom Tunguz)

The first stage in this process—“events occur”—is the most important one to consider in the retail context.

On a website, those events are fairly easy to understand. They might be clicks, button presses, or scrolling behavior. We've been trained to think about the web in terms of events, but not so with brick and mortar. And yet, the number of events that could conceivably be collected as data from a single retail experience is tremendous.

When people enter the store, what items they pick up, which they take with them and which they put down, what order they shop in, even how they navigate the store down to the most infinitesimal of details—all of this is information that could help companies increase revenues, lower costs, and build more efficient businesses. That’s also just the front-end of the retail experience.
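To picture what such front-of-store events might look like as data, here is a hypothetical sketch; the event names and fields are invented for illustration, not an Amazon Go schema:

```python
# Hypothetical in-store event records; field names are invented.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoreEvent:
    shopper_id: str
    event_type: str            # e.g. "enter", "pick_up", "put_back", "exit"
    sku: Optional[str] = None

events = [
    StoreEvent("s1", "enter"),
    StoreEvent("s1", "pick_up", sku="dress-42"),
    StoreEvent("s1", "put_back", sku="dress-42"),  # a hesitation signal
    StoreEvent("s1", "pick_up", sku="socks-07"),
    StoreEvent("s1", "exit"),
]

# Items picked up and then put back are candidates for follow-up marketing.
picked = Counter(e.sku for e in events if e.event_type == "pick_up")
returned = Counter(e.sku for e in events if e.event_type == "put_back")
print(dict(picked & returned))  # {'dress-42': 1}
```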

The new retail nervous system

The Internet of Things has spread rapidly up and down the production supply chain, laying the foundation for the future of retail.

RFID chips on products allow companies to track their inventory with an unprecedented degree of precision, even as their shipments rattle around in shipping containers, cargo ships move in and out of port, and trucks travel across the country.

Companies like Flexport make it possible to manage and visualize those complex supply chains, many of which were barely even digitized years ago. Others help optimize last-mile delivery, manage the capacities of warehouses, and plan out routes for truck drivers bringing goods to market.

In stores, the same tags that help track goods as they move around the world can be used to optimize pricing given alterations in local conditions or sudden surges of demand.

This network of physical/digital infrastructure is just the substratum, however, of the true analytics-enabled future of retail.

When data analytics meets retail

Event data is the foundation of all behavioral analytics.

When you’re tracking every discrete click, scroll, or other web action, you can start to look for patterns in the data that you’re collecting. You can see which pieces of content on your blog engage the most users, which version of your checkout flow is the best for conversions, and so on.
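For instance, the checkout-flow comparison could be a simple two-variant test; the sketch below uses invented counts and a chi-square test from scipy to ask whether the observed uplift is likely real:

```python
# Invented counts for two checkout-flow variants over 10,000 visits each.
from scipy.stats import chi2_contingency

#            [converted, did not convert]
variant_a = [320, 9680]   # 3.2% conversion
variant_b = [410, 9590]   # 4.1% conversion

chi2, p_value, _, _ = chi2_contingency([variant_a, variant_b])
print(f"p = {p_value:.4f}")  # a small p suggests the uplift is not chance
```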

There’s already technology out there to help investors like those at CircleUp analyze data around small businesses and predict those that will succeed based on a large corpus of historical data.

With the infrastructure of the Internet of Things in place, the same kind of analysis becomes possible on a physical scale. You can start to find patterns in what people buy, when people order, and how to build a more efficient goods-delivery system.

The possibilities are extensive and powerful. In Amazon’s concept store, you can easily imagine sensors that take notice whenever your gaze rests on a particular item for longer than usual, or when you pick something up only to put it back down afterwards.

The decision to not purchase an item would be just as important for Amazon’s recommendation engine as a confirmed sale—that data could even be fed back to the supplier for their marketing team to analyze the lost sale. Visual recognition systems could be used to show you an ad in the evening for that dress you were eyeing at the store in the afternoon.

That’s just scratching the surface of an extensive universe of possibilities. Already today, IoT-enabled retail is allowing companies to:

  • identify fraud before anyone from Loss Prevention even notices it’s happening
  • systematically reduce shrinkage by analyzing exactly where it’s coming from
  • give estimated delivery times in windows as small as 10 minutes

A few years ago, Amazon patented the idea of “anticipatory shipping”—moving goods around based on their predictive analysis of likely consumer behavior. Because of your history, in other words, Amazon could predict that you were about to order a pack of toilet paper—and make sure it was in stock at the closest distributor well before you even clicked on the order button.
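A toy sketch of that predictive idea, with invented order histories: forecast near-term regional demand from recent weeks and pre-position stock wherever the forecast exceeds local inventory:

```python
# Invented order histories: forecast demand and pre-position stock.
weekly_orders = {                    # toilet-paper orders, last 4 weeks
    "seattle":  [120, 130, 125, 140],
    "portland": [60, 58, 90, 118],   # demand trending up
}
local_stock = {"seattle": 150, "portland": 70}

for region, history in weekly_orders.items():
    forecast = sum(history[-2:]) / 2  # naive average of the two latest weeks
    shortfall = forecast - local_stock[region]
    if shortfall > 0:
        print(f"pre-ship ~{round(shortfall)} units to {region}")
# -> pre-ship ~34 units to portland
```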

In the retail world of the future, innovations like these won’t be cutting-edge. In the age of data analytics, they’ll be little more than table stakes.

The data analytics long tail

The free flow of event data in retail depends on the proliferation of data sources. The more sources of data that can be cross-referenced, the more patterns that can be found and the more intelligence that can be produced.

Fortunately, the retail space is in a great position for data sources. There are not only a massive number of in-store sources of data, from sensors to registers to RFID tags, but there are complementary online sources as well.

(Diagram: in-store and online big data sources. Source: IBM)

For businesses that exist only as brick and mortar, the proliferation of IoT components and data analysis will mean a massive step forward in terms of business intelligence.

For those that are both brick and mortar stores and online, the confluence of the IoT and traditional behavioral analytics will mean an unprecedented wealth of data and an unprecedented set of options for customer engagement.

For those of us who have thrown in our lot with data, it is an exciting and fascinating time to be around.

Data science and our magical mind: Scott Mongeau at TEDxRSM

 

During TEDx at the Rotterdam School of Management, Scott Mongeau, analytics manager at Deloitte, gave an interesting talk about how we can be tricked by analysis done with big data. He explains that the models used in big data analytics should be driven by the data and not the other way around. Essentially, all models are wrong, but some are useful; data analytics can help solve some of the problems we currently face, but it is important to have better diagnostics and semantics.

This will ensure that the models used are sound and can better describe to computers what we are trying to achieve. He says we should not become so focused on the engineering side of big data analytics that we forget the social context, which must also be understood in order to create true insights.

Human factors limit smart cities more than technology – (RW)

While the latest smart gizmo tends to grab headlines, industry experts are urging urban leaders to focus more on the smart city challenges facing their citizens than on the technology. That was the message from attendees at the recent VERGE 16 conference in Santa Clara, Calif., where leaders in the smart cities space gathered.

A key sentiment that emerged from the conference was that leaders in government and industry need to stay focused on the larger smart city picture and not get caught up in the latest gee-whiz technology.

Specifically, there needs to be greater focus on meshing emerging tech with the current political and economic systems that affect citizens.

“The technology solutions are there,” said Kiran Jain, Chief Resilience Officer for the City of Oakland. “What we’re really looking at are governance issues.”

The proliferation of new smart city platforms and equipment is driven partly by the increasing ease with which they can be integrated into city infrastructure.

However, government leaders are being urged to develop technology strategies around citizens' needs first, rather than prioritizing the technology and figuring out the public benefits later.

“We just put out an RFP last week that had the words ‘user-centric design,’” said Jain.

Cities need to evaluate their strategies

The shift from technology-centric strategies to user-centric mindsets also requires a realistic assessment of which populations of the city are actually benefiting from these innovations.

Specifically, local leaders must recognize that many smart city innovations mostly benefit the better-off segments of society. Meanwhile, citizens struggling with poverty may see little benefit from technology that makes the morning commute more pleasant.

“A lot of our focus has been on moving the top 20% of the market,” said Kimberly Lewis, senior vice president of the U.S. Green Building Council. “We thought the trickle-down effects would really begin to affect low- and moderate-income communities.”

She says key challenges are being exacerbated by assumptions that any smart city technological advancement automatically creates mass impact on the entire city population. However, it’s becoming clear that smart city technology is not a magic wand that can be waved to eliminate persistent challenges faced by poorer citizens.

For example, the community solar concept is beginning to gain traction in various markets, depending on the resources of those who wish to invest. This raises the issue of how to improve access to financing for communities that lack the resources to develop solar projects.

Companies currently underutilize most of the IoT data: Creating a successful Internet of Things data marketplace | McKinsey & Company

Source: Creating a successful Internet of Things data marketplace | McKinsey & Company

By Johannes Deichmann, Kersten Heineke, Thomas Reinbacher, and Dominik Wee

Monetizing the flood of information generated by the Internet of Things requires a well-executed strategy that creates value.

The Internet of Things (IoT) will turn the current rush of industrial data into a rogue wave of truly colossal proportions, threatening to overwhelm even the best-prepared company. As the gigabytes, terabytes, and petabytes of unstructured information pile up, most organizations lack actionable methods to tap into, monetize, and strategically exploit this potentially enormous new value. McKinsey research reveals that companies currently underutilize most of the IoT data they collect. For instance, one oil rig with 30,000 sensors examines only 1 percent of the data collected because it uses the information primarily to detect and control anomalies, ignoring its greatest value, which involves supporting optimization and prediction activities. One effective way to put IoT data to work and cash in on the growing digital bounty involves offering the information on data marketplaces to third parties.

How a digital marketplace creates value

Digital marketplaces are platforms that connect providers and consumers of data sets and data streams, ensuring high quality, consistency, and security. The data suppliers authorize the marketplace to license their information on their behalf following defined terms and conditions. Consumers can play a dual role by providing data back to the marketplace (Exhibit 1).

Aggregated data can be an incentive for providers to share information.

Third parties can offer value-added solutions on top of the data the marketplace offers. For example, real-time analytics can make consumer insights more actionable and timelier than ever before. The marketplace also has an exchange platform as a technical base for the exchange of data and services, including platform-as-a-service offers. Six key enablers of the data marketplace can help companies put their data to work more effectively (a sketch of a marketplace listing follows the list):

  • Building an ecosystem. By assembling multitudes of third-party participants, companies can increase the relevance of their own digital platforms.
  • Opening up new monetization opportunities. Today’s interconnected and digitized world increases the value of high-quality data assets while creating innovative revenue streams. One digital marketplace, for example, adds value to Europe’s electric-automobile market by providing information and transactional gateways for businesses such as charging-infrastructure providers, mobility-service players, and vehicle manufacturers. Charging-station operators, for example, are free to determine their own pricing structures based on data available about customer habits and market trends.
  • Enabling crowdsourcing. Data marketplaces make it possible to share and monetize different types of information to create incremental value. Combining information with analytical models and structures to generate incentives for data suppliers encourages more participants to deliver data to the platform.
  • Supporting interoperability. Data marketplaces can define metaformats and abstractions that support cross-device and cross-industry use cases.
  • Creating a central point of “discoverability.” Marketplaces offer customers a central platform and point of access to satisfy their data needs.
  • Achieving consistent data quality. Service-level agreements can ensure that marketplaces deliver data of consistently high quality.
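As promised above, here is a hypothetical sketch of the metadata a marketplace listing might carry so that data sets are discoverable, interoperable, and licensable under defined terms; every field name is invented for illustration, none comes from the article:

```python
# Hypothetical marketplace listing metadata; all field names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataListing:
    dataset_id: str
    provider: str
    category: str            # centrally defined data category
    schema_format: str       # shared metaformat for interoperability
    license_terms: str       # licensing conditions set by the supplier
    price_per_record: float
    quality_sla: str         # service-level promise on data quality
    tags: List[str] = field(default_factory=list)

listing = DataListing(
    dataset_id="ev-charging-usage-eu-001",
    provider="charging-network-operator",
    category="mobility",
    schema_format="json-schema/v1",
    license_terms="non-exclusive, analytics use only",
    price_per_record=0.002,
    quality_sla="hourly refresh, 98% sensor coverage",
    tags=["ev", "charging", "europe"],
)
print(listing.dataset_id, listing.category, listing.price_per_record)
```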

Designing a data-sharing platform

As they consider the process of setting up a data marketplace, company leaders need to work through a number of critical questions. An enterprise might ponder the following issues as it clarifies its data-market strategy:

What is the data marketplace’s scope? In most cases, a data marketplace begins when companies set up a central exchange for data within their own organizations. Later, they determine which categories of information within that internal exchange are appropriate (from a security and a profitability perspective) and then allow other players outside their organization (and perhaps outside their industry) to access that data.

How is the marketplace best structured? To foster a dynamic ecosystem, the data marketplace needs to assume a neutral position regarding participants. The legal/tax entity that the marketplace becomes and the structures that govern and finance it are key to this neutrality. Among the guiding principles that players follow in setting up data marketplaces are that a) the marketplace must finance itself through transaction-related fees and commissions, and b) neutrality must extend to future participants that provide or receive data or services, offering indiscriminate access to all interested players under fair terms and conditions. And while the data marketplace will support the creation and definition of data licenses, the data suppliers must nevertheless take responsibility for enforcing and legally auditing them. With respect to the marketplace’s governance, two business models are leading the way. Data marketplaces tend to be either independent platforms or limited ownership hybrids. Under the former model, data sets are bought and sold, while fully owned data-as-a-service providers sell primary data in specific segments or with services and solution wraps. Under the latter, the marketplace collects and aggregates data from multiple publishers or data owners and then sells the data.

Who are the data marketplace’s customers? Once the marketplace is commercially viable, customers will include all types of data providers, and the marketplace system should actively source new kinds of data to become more attractive. The key providers of data will be the companies that capture it, own it, and authorize its sharing. At some point, however, application developers will offer infrastructure and support services that further increase the value of the data by offering a relevant analysis of it and facilitating its delivery.

What are the marketplace’s overall terms and conditions, and data categories? During the marketplace’s technical setup phase, data suppliers define their licensing conditions independently, and the platform provides benchmarks for licensing conditions. The overall terms and conditions of the marketplace apply to all traded data. In the subsequent commercialization phase, the marketplace relies on centrally defined data categories and related licensing agreements as expressed in its general terms and conditions. This strategy enables players to license crowdsourced data independently of specific suppliers.

How does the marketplace relate to other licensing models? When dealing with proprietary data, suppliers usually hold certain information apart and do not share it in the marketplace. However, data suppliers that also offer services can make use of their proprietary data to create services they can trade on the marketplace. For other licensed data, information suppliers can freely create licensing agreements that extend beyond the marketplace—for example, with their strategic partners. Both data amount and type, along with the scope of licenses for using the information, can vary from that of marketplace-supplied data. Likewise, suppliers can also impose separate licensing arrangements for data already traded in the marketplace if buyers intend to use it under different conditions.

What are the role and value-creation potential of the marketplace company or participating data brokers? The potential value of the data will differ depending on whether the marketplace is in the technical start-up phase or has achieved full commercialization (Exhibit 2). In the former, the marketplace acts as a data normalizer, defining standard data models, formats, and attributes for all of the traded information. It syntactically verifies all incoming data compared with the defined standard and continuously manages and extends the data inventory. Once the marketplace enters the commercial stage, it becomes a data aggregator. At this point, in addition to normalizing data and verifying incoming information, it aggregates data and organizes it into logical bundles. For instance, it will enable users to combine data for a given region and offer it to service providers.

Depending on the role of the marketplace, depth of value added will vary.

Choosing a monetization model

While traditional licensing will provide marketplace revenue streams, participants can also develop transactional models to monetize data and services, with on-demand approaches constituting the preferred approach. With traditional licensing, companies can pursue either perpetual or one-off deals and collect customer fees using several approaches. For example, they can sign contracts with fixed fees and run times, renegotiate expired contracts, or earn revenues at the time of sale (this final approach typically provides less stability in revenue forecasting). At the transactional level, the two primary alternatives are on-demand and subscription services. With on-demand services, customers either pay as they go or choose volume pricing and pay charges based on metrics such as usage volume, the number of incidents, or hardware-related fees. Subscriptions can involve flat fees—typically applied on a monthly or yearly basis—or free/premium (“freemium”) offers, which provide the basics free of charge while offering additional features for a flat fee.
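A toy comparison of those transactional models, with every rate and tier boundary invented, makes the trade-offs concrete:

```python
# Toy pricing models; every rate and tier boundary here is invented.
def pay_as_you_go(records: int, rate: float = 0.01) -> float:
    return records * rate

def volume_priced(records: int) -> float:
    # cheaper marginal rate once usage passes a tier boundary
    if records <= 100_000:
        return records * 0.01
    return 100_000 * 0.01 + (records - 100_000) * 0.004

def subscription(months: int, flat_fee: float = 900.0) -> float:
    return months * flat_fee

usage = 250_000  # records consumed in one month
print(f"pay-as-you-go: ${pay_as_you_go(usage):,.0f}")  # $2,500
print(f"volume priced: ${volume_priced(usage):,.0f}")  # $1,600
print(f"subscription:  ${subscription(1):,.0f}")       # $900
```

Under heavy usage the subscription is cheapest for the buyer, which is why marketplaces often steer large consumers toward flat fees while monetizing occasional users on demand.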

Another monetization option is the “give and take” model, which offers incentives to data providers to share their information. The incentive can be monetary or take the form of something like highly relevant, aggregated data as an enticement to share. The marketplace then aggregates and anonymizes the data and offers it along with associated data-focused services to customers.

One give-and-take example is an Internet-based service that offers geolocated real-time aircraft flight information. The service reportedly has one of the largest online aviation databases, covering hundreds of thousands of aircraft and flights as well as large numbers of airports and airlines. Data suppliers receive free radio equipment that collects and transmits aircraft data and a free business-level membership to the service worth $500 a year for as long as they transmit data. In another case, a large European credit bureau offers credit-rating information for consumers and corporations. Data suppliers provide information that includes banking activities, credit and leasing agreements, and payment defaults. In return, they receive credit-ranking data for individuals or businesses. Yet another give-and-take marketplace focuses on data and performance analytics on mobile-operator network coverage. It trades apps and coverage information to data suppliers in exchange for crowdsourced data that can generate mobile-network coverage maps and reveal a mobile operator’s performance by region and technology (for example, 3G or 4G networks).

Assessing the competition

A wide variety of traditional commercial data services currently exists, although these services are largely in silos that focus on specific topics, such as healthcare, finance, retail, or marketing. This balkanization provides an opportunity for new, more holistic data-business models. One advantage of the current ubiquity of data providers is that most companies are already familiar with dealing with them. In fact, some sources estimate that 70 percent of large organizations already purchase external data, and all of them are likely to do so by the end of the decade. The value potential inherent in data marketplaces is attracting key players from a variety of advanced industries. A number of aerospace companies, for example, offer systems that provide guidance to customers in areas such as maintenance and troubleshooting. Similar efforts are also under way in the agricultural and mining-equipment industries, among others.


The IoT’s big data promises to help companies understand customer needs, market dynamics, and strategic issues with unmatched precision. But in pursuing this goal, organizations will amass previously unimaginable quantities of information. The data marketplace offers them an innovative way to turn some of that data into cash and reap the benefits that will accrue from building a self-reinforcing ecosystem, enabling crowdsourcing, supporting interoperability, satisfying customer data needs, and improving data quality.

About the author(s)

Johannes Deichmann is a consultant in McKinsey’s Stuttgart office, Kersten Heineke is an associate partner in the Frankfurt office, and Thomas Reinbacher is a consultant in the Munich office, where Dominik Wee is a partner.

The authors wish to thank Mark Patel for his contributions to this article.