The Current Applications Of Artificial Intelligence In Mobile Advertising (source: Forbes)

The concept of self-programming computers was closer to science fiction than reality just ten years ago. Today, we feel comfortable conversing with smart personal assistants like Siri and keep wondering just how Spotify guessed what we like.

It’s not just the mobile apps that are becoming more “intelligent”. The advertising that encourages us to interact with and install those apps has reached a whole new level of quality as well. Thanks to advances in machine learning (ML), the baseline technology for AI, the mobile advertising industry is now undergoing a significant transformation.

source: https://www.forbes.com/sites/andrewarnold/2018/12/24/the-current-applications-of-artificial-intelligence-in-mobile-advertising/#2c8fa7f91821

AI can reduce mobile advertising fraud

In 2018, mobile ad fraud rates doubled compared to the previous year. To tap into marketers’ expanding ad budgets, hackers have added a host of new tricks to their playbook. According to Adjust data, the following mobile ad threats have prevailed:

SDK spoofing represented 37% of ad fraud. In SDK spoofing, malicious code injected into one (compromised) app simulates ad clicks, installs and other fake engagement, and sends faulty signals to an attribution provider on behalf of the “victim” app. Such attacks can make a significant dent in an advertiser’s budget by forcing them to pay for installs that never actually took place.

Click injections accounted for 27% of attacks. Cybercriminals trigger clicks before an app installation is complete and, as a result, receive credit for those installs. Again, these can drain your ad budgets and dilute your ROI numbers.


Faked installs and click spam accounted for 20% and 16% of fraudulent activities respectively. E-commerce apps have been in the fraud limelight this year, with nearly two-fifths of all app installs being marked as “fake” or “spam”, followed closely by games and travel apps. Forrester further reports that 69% of marketers whose monthly digital advertising budgets run above $1 million admit that at least 20% of those budgets are drained by fraud on the mobile web.

If the issue is so big, why is no one tackling it? Well, detecting ad fraud is a complex process that requires 24/7 monitoring and analysis of incoming data. And that’s where AI comes to the fore. Intelligent algorithms can operationalize large volumes of data faster and more accurately than any human analyst, spot abnormalities and trigger alerts for further investigation. What’s more promising is that, with advances in deep learning, the new generation of AI-powered fraud systems will also become capable of self-tuning their performance over time, learning to predict, detect and mitigate emerging threats.
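To make the idea concrete, here is a minimal, hypothetical sketch of statistical anomaly detection, nothing like a production fraud system, that flags installs whose click-to-install delay deviates sharply from the norm (click injection typically produces implausibly short delays):

```python
# Hypothetical illustration: flag suspicious installs by click-to-install delay.
# Click injection typically yields near-zero delays between the reported
# "click" and the finished install, so extreme outliers are a red flag.
from statistics import mean, stdev

def flag_anomalies(delays_seconds, z_threshold=2.0):
    """Return indices of installs whose delay is a statistical outlier."""
    mu = mean(delays_seconds)
    sigma = stdev(delays_seconds)
    flagged = []
    for i, delay in enumerate(delays_seconds):
        z = (delay - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flagged.append(i)
    return flagged

# Organic installs usually take minutes; the 2-second one looks injected.
delays = [310, 295, 420, 380, 350, 2, 330, 405]
print(flag_anomalies(delays))
```

A real system would combine many more signals (device fingerprints, IP reputation, conversion patterns) and learned models rather than a single z-score, but the principle is the same: model the normal distribution of behavior and surface the outliers for investigation.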

AI brings increased efficiency and higher ROI for real-time ad bidding

One of the biggest selling points of “AI revolution” across multiple industries is the promise to automate and eliminate low-value business processes. Mobile advertising is no exception. Juniper Research predicts that by 2021, machine learning algorithms that increase efficiency across real-time bidding networks will drive an additional $42 billion in annual spend.

Again, thanks to robust analytical capabilities, ML algorithms can create the perfect recipe for your ad, displaying it at the right time to the right people. Google has already been experimenting with various optimizations for mobile search ads, and the results so far are rather promising. Macy’s, for instance, has been leveraging inventory ads, displaying them to customers who recently looked up its products and are now in close geographic proximity to the store holding the goods they viewed a few hours ago.

AdTiming has been helping marketers refine their approach to in-app advertising. By crunching data from over 1,000 marketers, the startup has developed its own recipe for the best ad placements. “Prescriptive analytics will tell our users when is the best time to run their ads; what messaging to use and how frequently the ad needs to be displayed in order to meet their ROI while maintaining the set budget,” said Leo Yang, CEO of AdTiming.

Just how competitive can AI-powered real-time ad bidding be? A recent experiment conducted by a group of scientists on Taobao, China’s largest e-commerce platform, suggests that algorithms perform far better than humans.

For comparison:

  • Manual bidding campaigns brought in 100% ROI with 99.52% of budget spent.
  • Algorithmic bidding generated 340% ROI with 99.51% of budget spent.

It’s clear who’s the winner here.
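As a quick sanity check on those figures, ROI can be read as profit over spend. The sketch below uses a hypothetical $10,000 budget (the budget size is an assumption; only the percentages come from the experiment):

```python
def roi_percent(revenue, spend):
    """ROI as a percentage: profit relative to what was spent."""
    return (revenue - spend) / spend * 100

budget = 10_000                  # hypothetical budget, not from the study
manual_spend = budget * 0.9952   # 99.52% of budget spent
algo_spend = budget * 0.9951     # 99.51% of budget spent

# 100% ROI implies revenue of 2x spend; 340% ROI implies revenue of 4.4x spend.
print(roi_percent(manual_spend * 2.0, manual_spend))
print(roi_percent(algo_spend * 4.4, algo_spend))
```

In other words, with essentially identical spend, the algorithmic campaign returned more than three times the profit of the manual one.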

AI enables advanced customer segmentation and ad targeting

Algorithms are better suited to detecting patterns than the human eye, especially when dealing with large volumes of data. They can effectively group and cluster that data to create rich user profiles for individual customers, based on their past interactions with your brand, their demographic data and their online browsing behaviors.

This means that you are no longer targeting a broad demographic of “women (aged 25-35), based in the US”. You become able to pursue more niche audiences exhibiting rather specific behaviors, e.g. regularly engaging with hair care products in the luxury segment on social media. This insight can be further applied by an AI system entering an RTB auction to predict when your ad should be displayed in front of a consumer matching your profile and when it’s worth a pass.
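As an illustration of the clustering idea, and emphatically not any ad platform’s actual pipeline, the toy sketch below groups users by two made-up behavioral features with a bare-bones k-means:

```python
# Toy sketch of behavioral clustering (hypothetical feature values, not any
# real ad platform's pipeline): group users with a bare-bones k-means.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster tuples of numeric features into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (squared distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# (sessions per week, luxury hair-care engagements per month) for six users
users = [(2, 0), (3, 1), (2, 1), (14, 9), (15, 8), (13, 10)]
centers, clusters = kmeans(users, k=2)
print(sorted(centers))  # one "casual" centroid, one niche high-engagement one
```

Real segmentation pipelines use far richer feature sets and more robust algorithms, but the mechanism is the same: similar behavioral profiles end up in the same cluster, and each cluster becomes a targetable audience.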

The best part is that AI-powered advertising is no longer cost-prohibitive for smaller companies. With new solutions entering the market, it would be interesting to observe how the face of mobile advertising will change in 2019 and onward.

Gartner Hype Cycle: AI, propelled into everyone’s hands

In 10 years, artificial intelligence will be everywhere, and no longer only in the hands of professionals, according to Gartner. The democratization of AI is indeed one of the five major technology predictions identified by the research and advisory firm in the 2018 edition of its famous “Hype Cycle” curve. The curve maps emerging technologies and identifies their position on the hype cycle.

Based on an analysis of 2,000 technologies, subsequently grouped into 35 emerging fields, this year’s report identifies five technology trends in particular that will blur the boundaries between humans and machines.

Credit: Gartner (August 2018)

AI, propelled into everyone’s hands

Regarding artificial intelligence, “AI technologies will be virtually everywhere over the next 10 years,” Gartner predicts. “These technologies, which allow early adopters to adapt to new situations and solve new problems, will be made available to the masses, democratized. Trends such as cloud computing, the maker community and open source will propel AI into everyone’s hands.”

A future that will be made possible by the following technologies: AI Platform-as-a-Service (AI PaaS), Artificial General Intelligence, autonomous driving, autonomous mobile robots, AI-powered conversational platforms, deep neural networks, flying autonomous vehicles, smart robots and virtual assistants.

New opportunities tied to digitalized ecosystems

Among the four other major trends that will blur the boundaries between humans and machines, Gartner also identified “digitalized-ecosystem technologies, which are making their way quickly to the Hype Cycle,” explains Mike J. Walker, research vice president. “Blockchain and IoT platforms have now reached their peak, and we believe they will reach maturity in the next five to ten years.”

The report thus predicts that the shift from a compartmentalized technical infrastructure model toward an ecosystem of platforms lays the groundwork for the emergence of new business models that will form “a bridge between humans and technology.”

“Do-it-yourself” biohacking

“Over the next decade, humanity will begin its ‘transhuman’ era: biology can then be hacked depending on lifestyle, interests and health needs,” Gartner explains. Society will then have to ask itself which applications it is willing to accept and reflect on the ethical issues at stake.

For the firm, this trend spans four areas: augmentation technologies, nutrigenomics, experimental biology, and “grinder biohacking”, a movement whose members seek to enhance their physical capabilities with “home-made” cybernetic implants.

Transparent and immersive experiences for smarter living

The study also cites the category of transparent and immersive experiences. “Technology will continue to become more human-centric, to the point of fostering transparency between people, businesses and things.” For Gartner, this will notably allow people to work and live more intelligently. This evolution will be made possible by the following technologies: 4D printing, the connected home, Edge AI, self-healing system technology, silicon anode batteries, smart dust, the smart workplace and volumetric displays.

Ubiquitous infrastructure

The fifth and final trend is ubiquitous infrastructure. As Gartner explains, “technologies supporting a ubiquitous infrastructure are on track to reach the peak and advance quickly through the Hype Cycle. 5G and deep neural network ASICs (application-specific integrated circuits), in particular, are expected to reach the plateau within the next two to five years.”

The History of Artificial Intelligence (by Rockwell Anyoha) (Source: Harvard)

by Rockwell Anyoha

Source: http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

August 28, 2017

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949 computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept as well as advocacy from high profile people were needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term which he coined at the very event. Sadly, the conference fell short of McCarthy’s expectations; people came and went as they pleased, and they failed to agree on standard methods for the field. Despite this, everyone whole-heartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Figure 2: AI timeline

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.”  As patience dwindled so did the funding, and research came to a slow roll for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques which allowed computers to learn using experience. Edward Feigenbaum, meanwhile, introduced expert systems which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of their Fifth Generation Computer Project (FGCP). From 1982 to 1990, they invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward, but in the direction of the spoken-language interpretation endeavor. It seemed that there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds

We haven’t gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem. Moore’s Law, which estimates that the memory and speed of computers doubles every year, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.

Artificial Intelligence is Everywhere

We now live in the age of “big data,” an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s law is slowing down a tad, but the increase in data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore’s Law.

The Future

So what is in store for the future? In the immediate future, AI language is looking like the next big thing. In fact, it’s already underway. I can’t remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages being translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this would be accomplished in the next 50 years. Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes (but better even before the time comes), we will need to have a serious conversation about machine policy and ethics (ironically both fundamentally human subjects), but for now, we’ll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf

Is It Time for Your Organization to Invest in AI? (Source: InformationWeek)

After decades of AI being viewed as a “future” concept, it may be time for companies to invest real dollars in real applications.

source: https://www.informationweek.com/big-data/ai-machine-learning/is-it-time-for-your-organization-to-invest-in-ai/d/d-id/1330683

A recent Accenture survey found that 85% of business executives plan to invest heavily in AI-related technologies over the next three years. Most investments, according to the report, will be in major business processes, underpinning a company’s finance and accounting, marketing, procurement, and customer relations activities.

Could it be possible that, after years of forecasts and speculation, not to mention an endless number of science fiction stories, movies, and TV shows, AI is finally ready to become an indispensable real-world technology?

Ruchir Puri, an IBM fellow and chief architect of IBM Watson, certainly thinks so. “There are many opportunities for AI across front, middle, and back office processes, throughout lines of business and within various verticals,” Puri noted. “AI capabilities, such as conversation, vision and language technologies, can be used to solve a range of practical enterprise problems, boost productivity and foster new discoveries across any area it is applied to.”


Getting down to business

Across the high-tech industry, there’s a strong feeling that AI is about to cross the chasm and become a business-transforming technology, observed Jean-Luc Chatelain, CTO at Accenture Applied Intelligence. “There’s already evidence of the impact AI has and even stronger evidence of what it will have in areas like healthcare and life sciences, where it could facilitate breast cancer detection or personalized drug development.”

Vasant Dhar, a professor at New York University’s Stern School of Business, stated that AI can be put to work wherever data exists. “It could be for customer service, marketing, planning, gathering new customers or assets,” he explained.


According to an IBM study, 80% of the world’s data is not on the Web, but sitting unused inside businesses. “Today, most organizations only have a capability to explore a tiny fraction of this ‘dark’ data,” Puri said. AI is the key to unlocking that hidden resource.

Dhar noted that AI offers enterprises three basic benefits: the ability to learn continually from data, the ability to improve decision making and make it more consistent, and improving operational efficiency and cutting costs.

AI can play a role in reducing costs while enhancing speed, accuracy, availability, and auditability, suggested Nichole Jordan, national managing partner of markets, clients, and industry for audit, tax, and advisory firm Grant Thornton. “Any process involving structured digital data and business rules will benefit,” she stated. “AI can also assist with front-office functions — think customer interactions via chatbots and mobile messaging that are able to address common customer/client questions.”


Said Tabet, lead technologist for AI strategy at Dell EMC, sees a particularly bright future for AI in security applications. “AI-based pattern recognition technologies have been employed within various IT and cyber security applications for several years to proactively manage system performance or to block security threats,” he noted. “These capabilities will only become more widespread.”

AI also promises to improve the efficiency of various basic and repetitive tasks, according to Dennis Bonilla, executive dean of the University of Phoenix’s College of Information Systems & Technology. “This allows companies to allocate more resources to people and projects that require a higher level of creativity that computers can’t yet achieve,” he observed. Bonilla expects a rapid acceleration in businesses that use AI to handle rote tasks.


“We are seeing AI become increasingly more available to businesses as new platforms are created and people and organizations have more access to the necessary technology to create and utilize AI,” he said.

While AI can automate an entire range of routine functions, the technology can also help workers become more productive and efficient, augmenting human skills in terms of strength or dexterity. “Initial AI projects should be deployed in areas that augment human performance and help people with their jobs, hence demonstrating the value of AI while also easing employees’ fears of replacement,” recommended Joshua Feast, CEO of Cogito, an MIT spinoff that creates AI and emotional intelligence software.

Looking farther down the road, Chatelain believes that AI will lead to the creation of a new class of intelligent products “such as self-repairing software and autonomous vehicles, as well as ‘living services’ that learn from the user and adapt to their preferences and needs.”


Assessing benefits

“AI is no longer a ‘nice to have’ technology; it is becoming a crucial part of an organization’s arsenal of tools,” Puri said. “Enterprises deploying AI technologies at mass scale will gain dramatic increases in productivity, allowing employees to handle more complex, creative and higher-impact tasks, opening entirely new avenues of exploration and discovery.”

According to Jordan, business process improvements can be measured through several indicators, such as reduced headcount costs and enhanced speed, accuracy, quality, repeatability, availability, auditability and productivity. “Robots don’t take vacation, don’t get sick, and don’t take breaks,” she quipped.

“The improvements that result from the implementation of AI should be measured by benchmarking current performance and then comparing metrics after the AI has been deployed,” Feast suggested.


For example, when deploying AI within sales and service call centers to measure customer perception and improve the AI technology’s speaking behavior, common metrics such as handle time, first-call resolution, customer satisfaction and employee satisfaction can be measured before and after the AI technology has been applied. “This comparison will allow businesses to see the specific impact of AI on their business results,” Feast said.
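The before-and-after comparison Feast describes can be as simple as a per-metric delta. The numbers below are purely illustrative, not from any real deployment:

```python
# Purely illustrative numbers: hypothetical call-center metrics measured
# before and after an AI deployment, compared metric by metric.
before = {"handle_time_sec": 420, "first_call_resolution": 0.62,
          "customer_satisfaction": 0.71}
after = {"handle_time_sec": 365, "first_call_resolution": 0.70,
         "customer_satisfaction": 0.78}

def percent_change(before, after):
    """Percent change per metric (a negative handle time is an improvement)."""
    return {k: round((after[k] - before[k]) / before[k] * 100, 1) for k in before}

print(percent_change(before, after))
```

Tracking the same metrics on an untouched control group alongside the AI-assisted group makes the comparison far more trustworthy, since seasonal and staffing effects would otherwise be attributed to the AI.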

Bonilla noted that businesses must also evaluate the impact AI has on customer-facing products and services. “Is the fidelity of your product jeopardized in any way?” he asked. “AI chatbots, for example, have come a long way, but nothing beats a human touch when it comes to customer service.”

Getting started

The first step for businesses planning to get started with AI is to look across their organization for areas where the technology can make the greatest impact. “Once companies have identified these areas, they can work with prominent academic institutions, leading technology vendors and industry analysts to gain better insight into available AI applications that can be deployed in a controlled setting where they can also be easily measured,” Feast explained.

Businesses should take a proactive approach to AI by inventorying current services to see how the technology might be able to enhance them, Jordan said. “They should also take stock of which internal manual, repetitive processes could be assisted or enhanced by AI in order to better and more quickly serve their clients,” she added.


To fully leverage AI’s potential, enterprise leaders must be able to clearly articulate their goals and expectations, and then prepare themselves with the right tools, data, and talent. “Separately, it is helpful to check if anyone in the organization is already using AI in some capacity before diving in, as collaboration across silos and business units can save time,” Puri elaborated.

It takes imagination to identify unique business processes to which AI capabilities can be applied, as well as expertise to actually implement a new AI-based innovation. “Organizations can start small by first examining the various workflows around non-critical operations that could be made more efficient if automated using AI,” Tabet said. It’s also a good idea to check for instances where partial automation is already happening. “Could AI help take that process from partial to full automation?” he asked. “What could the gains be to go all the way?”

To avoid becoming bogged down by AI’s inherent complexity, many organizations opt to tap into AI-as-a-service, using productized off-the-shelf APIs and AI applications in areas such as image recognition and natural language processing. “Pilot these proven practical applications to demonstrate the potential of AI, and be ready to fail fast and move on,” Chatelain said.

AI’s inevitability

Bonilla sees virtually no downside to investing in AI, but plenty of potential. “At the end of the day, businesses will fall behind the competition if they don’t keep up with technology—that has always been true,” he observed.


Skeptics are quick to point out that AI will eliminate traditional jobs, yet that doesn’t tell the whole story. “AI will also create jobs that don’t exist yet, but that’s an abstract concept for businesses, education providers, and policymakers alike,” Bonilla said. “The challenge is working collaboratively to ensure workers are prepared to fill these jobs when they’re created.”

Trust in AI will always be an issue, even when things are going well. “Part of that is human nature, part of it is still the newness and novelty of AI-based technologies in our lives and workplaces,” Tabet said. Successful experimentation and trials in non-critical business areas can help win over skeptics. “Continued education with workshops and hands-on proof of concepts and demos will help reinforce and illustrate the company’s strategy for AI,” he added.

As AI acceptance builds, IT and business leaders will need to shake off their natural resistance to change. “They need to become agents of the change that AI will bring,” Chatelain said, noting that AI is evolving rapidly and that enterprises need to pay close attention to technology and market developments. “I have rarely seen [technical] papers being published as fast as on AI topics, globally,” he noted. “Staying informed and close to academia and the latest AI research will certainly pay off.”

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic …

Artificial Intelligence & Marketing: A.I. Is definitively moving the 4P’s (Kotler) to the S.A.V.E. model through a Hyper Personalized Relation with the consumer

In November 2017, I had the great pleasure of giving a lecture with Professor Hugues Bersini about A.I. and Marketing…

Started (officially) 50 years ago, AI is now present in marketing. We estimate that it will take about 10 years for it to reach the plateau of productivity. But tests are supposed to start now.