Artificial intelligence: only 9% of American companies have reportedly adopted it

A study published in July shows that the use of artificial intelligence in business is still far from widespread in the United States.

source: https://siecledigital.fr/2020/08/13/intelligence-artificielle-seules-9-des-entreprises-americaines-lauraient-adoptee/

By Thibault Minondo – @ThibaultMinondo
Published August 13, 2020 at 9:01 a.m.

Image: Unsplash

The U.S. Census Bureau, an agency that publishes market studies based on data analysis, released a report on July 16 on the use of artificial intelligence in business. The survey, conducted at the end of 2018 among 583,000 American companies, points to very marginal adoption.

Artificial intelligence, still in early deployment

Machine learning, for example, is deployed in only 2.8% of the companies surveyed. Even adding the other branches of artificial intelligence covered in the study (voice recognition, autonomous vehicles, machine vision, robotics, RFID, augmented reality, etc.), fewer than one American company in ten (8.9%) uses at least one of them.

A surprising figure, but one that needs to be put in perspective. First of all, the study dates from the end of 2018. With the exponential acceleration of AI usage over the past six months, there is no doubt that the survey results are a snapshot out of step with the reality of the post-Covid-19 world. As we recently reported, AI-based technologies are expected to find their way into 80% of companies by 2022.

So how can such a large gap be explained between the state of play at the end of 2018 and the projection four years later? "We are only at the very beginning of AI adoption. People shouldn't think the machine learning revolution is running out of steam or is a thing of the past. There is a tidal wave ahead of us," says Erik Brynjolfsson, director of the Stanford Digital Economy Lab and co-author of the U.S. Census Bureau survey.

An adoption wave that is anticipated, then, but still premature judging by the survey's figures, which are well below those published around the same period by two other surveys, from McKinsey and PwC. The first, released in November 2018, put at 30% the share of executives exploiting artificial intelligence in one form or another. The second, from PwC, appeared at the end of 2018 and showed that one in five executives planned to launch an AI-related technology in 2019.

Screenshot: Wired

Large companies, leaders in AI adoption

Erik Brynjolfsson explains that, unlike the studies of its peers, the U.S. Census Bureau's is meant to be more representative of the American economic fabric, because it is not focused on the Fortune 500. That methodology translates into a two-sided reality among companies: nearly 25% of those with more than 250 employees have invested in some form of artificial intelligence, while only 7.7% of companies with fewer than 10 employees have done the same.

"Large companies are adopting," says Brynjolfsson, "but most U.S. businesses – Joe's pizzeria, the dry cleaner, the small manufacturing company – aren't there yet." For large companies, this standard-bearer role in the adoption of these new technologies will be decisive in the economic rebound: since they carry a larger share of economic activity, it will be vital to see these leaders lead the way in the technological transition.

While putting artificial-intelligence components in place may seem slower to get going at the market's heavyweights, the demand for AI skills shows that the transition is under way. At Google, downloads of TensorFlow, its framework for building AI programs, exceeded 10 million in May 2020 alone.

On the training side, Microsoft has teamed up with France's OpenClassrooms to create an "AI Engineer" program intended to recruit and train 1,000 machine learning and artificial intelligence engineers. "We are joining forces to close the digital skills gap by bringing Microsoft's expertise and content in AI, cloud computing, machine learning and data science to students of all ages and backgrounds via OpenClassrooms' high-quality, interactive online teaching platform," says Ed Steidl, director of workforce partnerships at Microsoft.

Projects incorporating AI are multiplying

Initiatives showcasing artificial intelligence are also starting to emerge, and new playing fields are opening up, not always where you would expect. Last April, we covered Intel's CORail project: tasked with collecting data on coral reefs affected by global warming, the solution is based entirely on artificial intelligence. Last month, we presented Tuna Scope, an application dedicated to assessing the quality of a tuna from a single photograph; built on machine learning, the app is already used by a Japanese restaurant chain. On the beverage side, there is AB InBev: using data collected at a New Jersey brewery, the company developed an AI algorithm to predict potential problems in the filtration process used to remove impurities from beer.

The "tidal wave" Erik Brynjolfsson describes is therefore still at an early stage. The subject of artificial intelligence is nonetheless spreading at an exponential pace, and the multiplication of initiatives will be worth watching in the months and years to come.

10 USEFUL Artificial Intelligence & Machine Learning Slides

1. Evolution of Analytics

AISOMA – Evolution of Analytics

Analytics is the discovery, interpretation, and communication of meaningful patterns in data, and the process of applying those patterns towards effective decision making. In other words, analytics can be understood as the connective tissue between data and effective decision making within an organization. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.

Organizations may apply analytics to business data to describe, predict, and improve business performance. Specifically, areas within analytics include predictive analytics, prescriptive analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big Data Analytics, retail analytics, supply chain analytics, store assortment and stock-keeping unit optimization, marketing optimization and marketing mix modeling, web analytics, call analytics, speech analytics, sales force sizing and optimization, price and promotion modeling, predictive science, credit risk analysis, and fraud analytics. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics.
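The distinction between descriptive and predictive analytics can be made concrete with a small sketch. The sales figures and the least-squares forecast below are illustrative assumptions, not data from any study cited here:

```python
# Descriptive vs. predictive analytics on a toy monthly-sales series.

def describe(series):
    """Descriptive analytics: summarize what already happened."""
    return {"total": sum(series), "mean": sum(series) / len(series)}

def predict_next(series):
    """Predictive analytics: fit a least-squares line and extrapolate one step."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # forecast for the next period

monthly_sales = [100, 110, 125, 130, 145]
print(describe(monthly_sales))      # → {'total': 610, 'mean': 122.0}
print(predict_next(monthly_sales))  # → 155.0
```

Prescriptive analytics would go one step further, recommending an action (e.g. stock levels) based on that forecast.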

2. Future of Data Science

AISOMA – Future of Data Science

Sebastian Raschka, a researcher in applied machine learning and deep learning at Michigan State University, thinks that the future of data science is not machines taking over from humans, but rather human data professionals embracing open-source technologies.

It is commonly understood that future data science projects, thanks to advanced tools, will scale to new heights, where more human experts will be required to handle highly complex tasks efficiently. However, according to the McKinsey Global Institute (MGI), the next decade will see a sharp shortage of around 250,000 data scientists in the U.S. alone. The question is whether machines can ever enable seamless collaboration between technologies, tools, processes, and end users. Automated tools and assistants can help the human mind accomplish tasks more quickly and accurately, but machines cannot be expected to substitute for human thinking: the core of problem-solving is intellectual reasoning, which no machine, however sophisticated, can replicate. (further information)


Artificial Intelligence Quote

3. Machine Learning Workflow

AISOMA – Machine Learning Workflow

Check out the Google Machine Learning Glossary

4. Deep Learning Workflow

AISOMA – Deep Learning Workflow

Check out the Google Machine Learning Glossary

Artificial Intelligence is a Tsunami

5. Deep Learning Continuous Integration and Delivery

AISOMA – Deep Learning CI and CD

More info: link

6. Anatomy of a Chatbot

AISOMA – Anatomy of a Chatbot

More info: How Businesses can Benefit from Chatbots

7. Five Ethical Challenges of AI

AISOMA – 5 Ethical Challenges of AI

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). (more info)

Artificial Intelligence quote

8. NLP / NLU Technology Stack

AISOMA – NLP Technology Stack

Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. (more info)
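As a minimal illustration of "processing and analyzing natural language data," the sketch below normalizes raw text into word tokens and counts term frequencies, one of the earliest steps in most NLP pipelines. Real systems rely on dedicated libraries such as spaCy or NLTK; this version uses only the Python standard library and is purely schematic:

```python
# Tokenization and a basic bag-of-words frequency count.
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def term_frequencies(text):
    """Count how often each token occurs -- a bag-of-words view of the text."""
    return Counter(tokenize(text))

freqs = term_frequencies("Computers process language; language is data.")
print(freqs.most_common(1))  # → [('language', 2)]
```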

9. Condition Monitoring / Predictive Maintenance Solution Architecture

AISOMA – Predictive Maintenance Solution Architecture

More Info: Smart Predictive Maintenance: The Key to Industry 4.0

10. Artificial Intelligence in Marketing

AISOMA – AI in Marketing

The Current Applications Of Artificial Intelligence In Mobile Advertising (source: Forbes)

The concept of self-programming computers was closer to science fiction than reality just ten years ago. Today, we feel comfortable conversing with smart personal assistants like Siri and wonder just how Spotify guessed what we like.

It's not just the mobile apps that are becoming more "intelligent". Advertising that encourages us to interact with and install those apps has reached a whole new level of quality as well. Thanks to advances in machine learning (ML), the baseline technology for AI, the mobile advertising industry is now undergoing a significant transformation.

source: https://www.forbes.com/sites/andrewarnold/2018/12/24/the-current-applications-of-artificial-intelligence-in-mobile-advertising/#2c8fa7f91821

AI can reduce mobile advertising fraud

In 2018, mobile ad fraud rates doubled compared to the previous year. To tap into marketers' expanding ad budgets, hackers have added a host of new tricks to their playbook. According to Adjust data, the following mobile ad threats have prevailed:

SDK spoofing represented 37% of ad fraud. In SDK spoofing, malicious code injected into one app simulates ad clicks, installs, and other fake engagement, and sends faulty signals to an attribution provider on behalf of the "victim" app. Such attacks can make a significant dent in an advertiser's budget by forcing them to pay for installs that never actually took place.

Click injections accounted for 27% of attacks. Cybercriminals trigger clicks before an app installation is complete and, as a result, receive credit for those installs. Again, these can drain your ad budgets and dilute your ROI numbers.
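Because injected clicks land implausibly close to the moment an install completes, one common defensive heuristic is to flag installs with a very short click-to-install time (CTIT). The sketch below illustrates that idea under assumed timestamps and an assumed threshold; it is not Adjust's actual detection method:

```python
# Flag installs whose click-to-install time is suspiciously short.
def flag_click_injection(events, min_ctit_seconds=10):
    """Return the ids of events with CTIT below the threshold."""
    suspicious = []
    for e in events:
        ctit = e["install_ts"] - e["click_ts"]  # click-to-install time, seconds
        if ctit < min_ctit_seconds:
            suspicious.append(e["id"])
    return suspicious

events = [
    {"id": "a", "click_ts": 1000, "install_ts": 1003},  # 3 s: likely injected
    {"id": "b", "click_ts": 1000, "install_ts": 1090},  # 90 s: plausible
]
print(flag_click_injection(events))  # → ['a']
```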


Faked installs and click spam accounted for 20% and 16% of fraudulent activities respectively. E-commerce apps have been in the fraud limelight this year, with nearly two-fifths of all app installs being marked as “fake” or “spam”, followed closely by games and travel apps. Forrester further reports that 69% of marketers whose monthly digital advertising budgets run above $1 million admit that at least 20% of those budgets are drained by fraud on the mobile web.

If the issue is so big, why is no one tackling it? Well, detecting ad fraud is a complex process that requires 24/7 monitoring and analysis of incoming data, and that's where AI comes to the fore. Intelligent algorithms can operationalize large volumes of data far faster and more accurately than any human analyst, spot abnormalities, and trigger alerts for further investigation. More promising still, with advances in deep learning, new-generation AI-powered fraud systems will also become capable of self-tuning their performance over time, learning to predict, detect, and mitigate emerging threats.
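A minimal sketch of "spot abnormalities and trigger alerts": a z-score detector over a stream of hourly install counts. Production fraud systems use far richer features and learned models; the data and threshold here are assumptions for demonstration only:

```python
# Flag hours whose install count deviates sharply from the mean.
from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Return indices whose value deviates more than `threshold` std devs."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

hourly_installs = [120, 115, 130, 125, 118, 122, 900, 121]  # one sudden burst
print(anomalies(hourly_installs))  # → [6]
```

An alert on index 6 would then be handed to a human analyst (or a downstream model) for investigation.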

AI brings increased efficiency and higher ROI for real-time ad bidding

One of the biggest selling points of “AI revolution” across multiple industries is the promise to automate and eliminate low-value business processes. Mobile advertising is no exception. Juniper Research predicts that by 2021, machine learning algorithms that increase efficiency across real-time bidding networks will drive an additional $42 billion in annual spend.

Again, thanks to robust analytical capabilities, ML algorithms can create the perfect recipe for your ad, displaying it at the right time to the right people. Google has already been experimenting with various optimizations for mobile search ads, and the results so far are promising. Macy's, for instance, has been leveraging inventory ads, displaying them to customers who recently looked up its products and are now in close geographic proximity to the store holding the goods they viewed a few hours ago.

AdTiming has been helping marketers refine their approach to in-app advertising. By crunching data from over 1,000 marketers, the startup has developed its recipe for the best ad placements. "Prescriptive analytics will tell our users when is the best time to run their ads, what messaging to use, and how frequently the ad needs to be displayed in order to meet their ROI while maintaining the set budget," said Leo Yang, CEO of AdTiming.

Just how competitive can AI-powered real-time ad bidding be? A recent experiment conducted by a group of scientists on Taobao – China's largest e-commerce platform – shows that algorithms perform far better than humans.

For comparison:

  • Manual bidding campaigns brought in 100% ROI with 99.52% of budget spent.
  • Algorithmic bidding generated 340% ROI with 99.51% of budget spent.

It’s clear who’s the winner here.

AI enables advanced customer segmentation and ad targeting

Algorithms are better suited than the human eye for detecting patterns, especially when set to work on large volumes of data. They can effectively group and cluster that data to create rich user profiles for individual customers, based on their past interactions with your brand, their demographic data, and their online browsing behaviors.

This means you are no longer targeting a broad demographic of "women (aged 25-35), based in the US". You become capable of pursuing more niche audiences exhibiting rather specific behaviors, e.g. regularly engaging with luxury hair care products on social media. An AI system can then apply this insight when entering an RTB auction, predicting when your ad should be displayed to a consumer matching your profile and when it's worth a pass.
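The grouping-and-clustering step can be sketched with a tiny 2-means clustering over two assumed engagement features. Real segmentation systems work on far richer profiles; the users and features below are invented for illustration:

```python
# Tiny 2-means clustering over (engagements/week, sessions/week) pairs.
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(pts):
    """Mean point of a non-empty cluster."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans2(points, iters=10):
    """2-means clustering, seeded with the first and last points."""
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            idx = 0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
            clusters[idx].append(p)
        centers = [centroid(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Hypothetical users: (luxury hair-care engagements/week, sessions/week)
users = [(9, 14), (8, 12), (10, 15), (1, 3), (0, 2), (2, 4)]
heavy, light = kmeans2(users)
print(heavy)  # → [(9, 14), (8, 12), (10, 15)]
print(light)  # → [(1, 3), (0, 2), (2, 4)]
```

Each resulting cluster (here, heavy vs. light engagers) can then be targeted as its own niche audience in an RTB auction.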

The best part is that AI-powered advertising is no longer cost-prohibitive for smaller companies. With new solutions entering the market, it would be interesting to observe how the face of mobile advertising will change in 2019 and onward.

Gartner Hype Cycle: AI propelled into everyone's hands

In 10 years, artificial intelligence will be everywhere, and no longer only in the hands of professionals, according to Gartner. The democratization of AI is one of the five main technology predictions identified by the research and advisory firm in the 2018 edition of its famous "Hype Cycle" curve, which plots emerging technologies and identifies their position on the cycle.

Based on an analysis of 2,000 technologies, then grouped into 35 emerging fields, this year's report identifies five technology trends in particular that will blur the boundaries between humans and machines.

Credit: Gartner (August 2018)

AI, propelled into everyone's hands

Regarding artificial intelligence, "AI technologies will be virtually everywhere over the next 10 years," Gartner predicts. "These technologies, which allow early adopters to adapt to new situations and solve new problems, will be made available to the masses – democratized. Trends such as cloud computing, the maker community and open source will eventually propel AI into everyone's hands."

A future made possible by the following technologies: AI Platform-as-a-Service (PaaS), artificial general intelligence, autonomous driving, autonomous mobile robots, AI-based conversational platforms, deep neural networks, flying autonomous vehicles, smart robots, and virtual assistants.

New opportunities tied to digitalized ecosystems

Among the four other main trends that will blur the boundaries between humans and machines, Gartner also identified "digitalized-ecosystem technologies, which are making their way quickly onto the Hype Cycle," explains Mike J. Walker, research vice president. "Blockchain and IoT platforms have now crossed the peak, and we believe they will reach maturity in the next five to ten years."

The report thus predicts that the shift from a compartmentalized technical infrastructure model to an ecosystem of platforms lays the foundations for new business models that will form "a bridge between humans and technology."

"Do-it-yourself" biohacking

"Over the next decade, humanity will begin its 'transhuman' era: biology will then be hackable according to lifestyle, interests and health needs," explains Gartner. Society will then have to ask which applications it is willing to accept, and think through the ethical issues involved.

For the firm, this trend spans four areas: augmentation technologies, nutrigenomics, experimental biology, and "grinder biohacking," a movement whose members seek to enhance their physical capabilities with "homemade" cybernetic implants.

Transparent, immersive experiences for smarter living

The study also cites the category of transparent and immersive experiences. "Technology will continue to become more human-centric, to the point of fostering transparency between people, businesses and things." For Gartner, this will in particular allow people to work and live more intelligently. This evolution will be enabled by the following technologies: 4D printing, the connected home, edge AI, self-healing system technology, silicon anode batteries, smart dust, the smart workspace, and volumetric displays.

Ubiquitous infrastructure

The fifth and final trend is ubiquitous infrastructure. As Gartner explains, "technologies supporting ubiquitous infrastructure are on track to reach the peak and move rapidly through the Hype Cycle. 5G and deep neural network application-specific integrated circuits (ASICs), in particular, are expected to reach the plateau within the next two to five years."

The History of Artificial Intelligence (by Rockwell Anyoha) (Source: Harvard)

by Rockwell Anyoha

Source: http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

August 28, 2017

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can't machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949 computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept as well as advocacy from high profile people were needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist, a program designed to mimic the problem-solving skills of a human, funded by the Research and Development (RAND) Corporation. It's considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term which he coined at the very event. Sadly, the conference fell short of McCarthy's expectations; people came and went as they pleased, and there was a failure to agree on standard methods for the field. Despite this, everyone whole-heartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing. Optimism was high and expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Figure 2: AI timeline

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn't store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that "computers were still millions of times too weak to exhibit intelligence." As patience dwindled, so did the funding, and research came to a slow roll for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized "deep learning" techniques which allowed computers to learn using experience. On the other hand, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.
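The expert-system loop described above (capture an expert's responses as if-then rules, then let non-experts query them) can be sketched in a few lines. The rules below are invented for illustration and come from no real system:

```python
# Minimal rule-based expert system: (conditions, advice) pairs.
RULES = [
    ({"fever", "cough"}, "Possible flu: recommend rest and fluids."),
    ({"sneezing", "itchy_eyes"}, "Possible allergy: recommend antihistamine."),
]

def advise(symptoms):
    """Fire the first rule whose conditions are all present."""
    for conditions, advice in RULES:
        if conditions <= set(symptoms):  # every condition is satisfied
            return advice
    return "No matching rule: refer to a human expert."

print(advise(["fever", "cough", "headache"]))  # → Possible flu: recommend rest and fluids.
```

Real expert systems such as Feigenbaum-era shells added thousands of rules, chained inferences, and certainty factors, but the knowledge-as-rules idea is the same.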

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Gary Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of the spoken-language interpretation endeavor. It seemed that there wasn't a problem machines couldn't handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds

We haven't gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem. Moore's Law, which estimates that the memory and speed of computers doubles every year, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Gary Kasparov in 1997, and how Google's AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore's Law to catch up again.
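The doubling estimate quoted above compounds dramatically; a one-line calculation shows what doubling every year for 30 years implies:

```python
# Compound growth under a doubling-per-year estimate over 30 years.
growth = 2 ** 30
print(f"{growth:,}")  # → 1,073,741,824 (roughly a billionfold increase)
```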

Artificial Intelligence is Everywhere

We now live in the age of "big data," an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We've seen that even if algorithms don't improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore's Law is slowing down a tad, but the increase in data certainly hasn't lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore's Law.

The Future

So what is in store for the future? In the immediate future, AI language looks like the next big thing. In fact, it's already underway. I can't remember the last time I called a company and directly spoke with a human; these days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this would be accomplished in the next 50 years. Even if the capability is there, ethics would serve as a strong barrier against fruition. When that time comes (but better even before it comes), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects), but for now, we'll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf

Is It Time for Your Organization to Invest in AI? (Source: InformationWeek)

After decades of AI being viewed as a “future” concept it may be time for companies to invest real dollars for real applications.

source: https://www.informationweek.com/big-data/ai-machine-learning/is-it-time-for-your-organization-to-invest-in-ai/d/d-id/1330683

A recent Accenture survey found that 85% of business executives plan to invest heavily in AI-related technologies over the next three years. Most investments, according to the report, will be in major business processes, underpinning a company’s finance and accounting, marketing, procurement, and customer relations activities.

Could it be possible that, after years of forecasts and speculation, not to mention an endless number of science fiction stories, movies, and TV shows, AI is finally ready to become an indispensable real-world technology?

Ruchir Puri, an IBM fellow and chief architect of IBM Watson, certainly thinks so. “There are many opportunities for AI across front, middle, and back office processes, throughout lines of business and within various verticals,” Puri noted. “AI capabilities, such as conversation, vision and language technologies, can be used to solve a range of practical enterprise problems, boost productivity and foster new discoveries across any area it is applied to.”

Image: Pixabay

Getting down to business

Across the high-tech industry, there’s a strong feeling that AI is about to cross the chasm and become a business-transforming technology, observed Jean-Luc Chatelain, CTO at Accenture Applied Intelligence. “There’s already evidence of the impact AI has and even stronger evidence of what it will have in areas like healthcare and life sciences, where it could facilitate breast cancer detection or personalized drug development.”

Vasant Dhar, a professor at New York University’s Stern School of Business, stated that AI can be put to work wherever data exists. “It could be for customer service, marketing, planning, gathering new customers or assets,” he explained.

Vasant Dhar

According to an IBM study, 80% of the world’s data is not on the Web, but sitting unused inside businesses. “Today, most organizations only have a capability to explore a tiny fraction of this ‘dark’ data,” Puri said. AI is the key to unlocking that hidden resource.

Dhar noted that AI offers enterprises three basic benefits: the ability to learn continually from data, the ability to improve decision making and make it more consistent, and improving operational efficiency and cutting costs.

AI can play a role in reducing costs while enhancing speed, accuracy, availability, and auditability, suggested Nichole Jordan, national managing partner of markets, clients, and industry for audit, tax, and advisory firm Grant Thornton. “Any process involving structured digital data and business rules will benefit,” she stated. “AI can also assist with front-office functions — think customer interactions via chatbots and mobile messaging that are able to address common customer/client questions.”

Nichole Jordan

Said Tabet, lead technologist for AI strategy at Dell EMC, sees a particularly bright future for AI in security applications. “AI-based pattern recognition technologies have been employed within various IT and cyber security applications for several years to proactively manage system performance or to block security threats,” he noted. “These capabilities will only become more widespread.”

AI also promises to improve the efficiency of various basic and repetitive tasks, according to Dennis Bonilla, executive dean of the University of Phoenix’s College of Information Systems & Technology. “This allows companies to allocate more resources to people and projects that require a higher level of creativity that computers can’t yet achieve,” he observed. Bonilla expects a rapid acceleration in businesses that use AI to handle rote tasks.

Dennis Bonilla

“We are seeing AI become increasingly more available to businesses as new platforms are created and people and organizations have more access to the necessary technology to create and utilize AI,” he said.

While AI can automate an entire range of routine functions, the technology can also help workers become more productive and efficient, augmenting human skills in terms of strength or dexterity. “Initial AI projects should be deployed in areas that augment human performance and help people with their jobs, hence demonstrating the value of AI while also easing employees’ fears of replacement,” recommended Joshua Feast, CEO of Cogito, an MIT spinoff that creates AI and emotional intelligence software.

Looking farther down the road, Chatelain believes that AI will lead to the creation of a new class of intelligent products “such as self-repairing software and autonomous vehicles, as well as ‘living services’ that learn from the user and adapt to their preferences and needs.”

Jean-Luc Chatelain

Assessing benefits

“AI is no longer a ‘nice to have’ technology; it is becoming a crucial part of an organization’s arsenal of tools,” Puri said. “Enterprises deploying AI technologies at mass scale will gain dramatic increases in productivity, allowing employees to handle more complex, creative and higher-impact tasks, opening entirely new avenues of exploration and discovery.”

According to Jordan, business process improvements can be measured through several indicators, such as reduced headcount costs and enhanced speed, accuracy, quality, repeatability, availability, auditability and productivity. “Robots don’t take vacation, don’t get sick, and don’t take breaks,” she quipped.

“The improvements that result from the implementation of AI should be measured by benchmarking current performance and then comparing metrics after the AI has been deployed,” Feast suggested.

Joshua Feast

For example, when deploying AI within sales and service call centers to measure customer perception and improve the technology’s speaking behavior, common metrics such as handle time, first-call resolution, customer satisfaction, and employee satisfaction can be measured before and after the AI technology has been applied. “This comparison will allow businesses to see the specific impact of AI on their business results,” Feast said.
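The before/after comparison Feast describes can be sketched in a few lines of Python. The metric names and values below are hypothetical, chosen only to show the benchmarking mechanics:

```python
# Hypothetical pre- and post-deployment call-center metrics (illustrative values).
before = {"avg_handle_time_sec": 420, "first_call_resolution": 0.68, "csat": 0.74}
after = {"avg_handle_time_sec": 380, "first_call_resolution": 0.75, "csat": 0.79}


def pct_change(old, new):
    """Percent change relative to the pre-deployment baseline."""
    return round(100 * (new - old) / old, 1)


# Percent change per metric; a negative handle-time change is an improvement.
impact = {name: pct_change(before[name], after[name]) for name in before}
```

With these sample numbers, handle time drops about 9.5% while first-call resolution and satisfaction rise, which is exactly the kind of side-by-side view the benchmarking approach is meant to produce.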

Bonilla noted that businesses must also evaluate the impact AI has on customer-facing products and services. “Is the fidelity of your product jeopardized in any way?” he asked. “AI chatbots, for example, have come a long way, but nothing beats a human touch when it comes to customer service.”

Getting started

The first step for businesses planning to get started with AI is to look across their organization for areas where the technology can make the greatest impact. “Once companies have identified these areas, they can work with prominent academic institutions, leading technology vendors and industry analysts to gain better insight into available AI applications that can be deployed in a controlled setting where they can also be easily measured,” Feast explained.

Businesses should take a proactive approach to AI by inventorying current services to see how the technology might be able to enhance them, Jordan said. “They should also take stock of which internal manual, repetitive processes could be assisted or enhanced by AI in order to better and more quickly serve their clients,” she added.

Ruchir Puri

To fully leverage AI’s potential, enterprise leaders must be able to clearly articulate their goals and expectations, and then prepare themselves with the right tools, data, and talent. “Separately, it is helpful to check if anyone in the organization is already using AI in some capacity before diving in, as collaboration across silos and business units can save time,” Puri elaborated.

It takes imagination to identify unique business processes to which AI capabilities can be applied, as well as expertise to actually implement a new AI-based innovation. “Organizations can start small by first examining the various workflows around non-critical operations that could be made more efficient if automated using AI,” Tabet said. It’s also a good idea to check for instances where partial automation is already happening. “Could AI help take that process from partial to full automation?” he asked. “What could the gains be to go all the way?”

To avoid becoming bogged down by AI’s inherent complexity, many organizations opt to tap into AI-as-a-service, using productized off-the-shelf APIs and AI applications in areas such as image recognition and natural language processing. “Pilot these proven practical applications to demonstrate the potential of AI, and be ready to fail fast and move on,” Chatelain said.

AI’s inevitability

Bonilla sees virtually no downside to investing in AI, but plenty of potential. “At the end of the day, businesses will fall behind the competition if they don’t keep up with technology—that has always been true,” he observed.

Said Tabet

Skeptics are quick to point out that AI will eliminate traditional jobs, yet that doesn’t tell the whole story. “AI will also create jobs that don’t exist yet, but that’s an abstract concept for businesses, education providers, and policymakers alike,” Bonilla said. “The challenge is working collaboratively to ensure workers are prepared to fill these jobs when they’re created.”

Trust in AI will always be an issue, even when things are going well. “Part of that is human nature, part of it is still the newness and novelty of AI-based technologies in our lives and workplaces,” Tabet said. Successful experimentation and trials in non-critical business areas can help win over skeptics. “Continued education with workshops and hands-on proof of concepts and demos will help reinforce and illustrate the company’s strategy for AI,” he added.

As AI acceptance builds, IT and business leaders will need to shake off their natural resistance to change. “They need to become agents of the change that AI will bring,” Chatelain said, noting that AI is evolving rapidly and that enterprises need to pay close attention to technology and market developments. “I have rarely seen [technical] papers being published as fast as on AI topics, globally,” he noted. “Staying informed and close to academia and the latest AI research will certainly pay off.”

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic …

Artificial Intelligence & Marketing: A.I. is definitively moving the 4P’s (Kotler) to the S.A.V.E. model through a hyper-personalized relationship with the consumer

In November 2017, I had the great pleasure of giving a lecture with Professor Hugues Bersini about A.I. and marketing…

Started (officially) 50 years ago, AI is now present in marketing. We estimate that it will take about ten years to reach the plateau of productivity, but tests are supposed to start now.


AI marketing and the journey through the uncanny valley

Restoring the brand/consumer relationship in the age of aggressive personalization

“Things usually get worse before they get better.”

“It’s always darkest before the dawn.”

Whether it’s the valley of the shadow of death in the 23rd Psalm or the Dangerous Trench on the way to Shell City in the SpongeBob movie, we’re used to the feeling that things are getting worse, even though we know we’re headed in the right direction.

This experience can be represented by a U-shaped curve, literally forming the shape of a valley between two peaks. In technical terms, the curve represents a nonlinear relationship between two variables. A specific example is the uncanny valley — the hypothesis of the unease, frustration, or even revulsion we feel as something approaches the behavior and appearance of a human without getting all the way there. In this case, the two variables are the humanlike nature of the object and the emotional response to it. This can be experienced with robots and AI assistants, and with 3D animation. Perhaps you know someone who gets illogically angry when Siri or Alexa fails to understand their commands, or maybe you get uncomfortable watching humanoid robots or CGI-animated humans in TV and movies.
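The nonlinear relationship described above can be sketched as a toy function: affinity rises with human-likeness, then dips sharply in the “almost human” region before recovering. The curve shape and constants below are illustrative assumptions, not an empirical model of the uncanny valley:

```python
import math


def affinity(humanlikeness):
    """Toy uncanny-valley curve for humanlikeness in [0, 1].

    Emotional response rises roughly linearly with human-likeness, but a
    narrow dip centered near 0.85 models the sudden drop in empathy when
    something is almost, but not quite, human. Illustrative only.
    """
    rising = humanlikeness  # baseline: more humanlike, more affinity
    dip = 0.8 * math.exp(-((humanlikeness - 0.85) ** 2) / 0.005)  # the valley
    return rising - dip
```

Evaluating the function at a few points reproduces the U shape: a clearly artificial agent (0.5) scores moderately, an almost-human one (0.85) scores worst, and a fully convincing one (1.0) scores best.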

Although 78 percent of marketers are adopting or expanding artificial intelligence marketing in 2018, marketers are also uneasy about the uncanny valley. They are concerned that by implementing AI marketing, they will lose control of the customer experience, possibly bewildering or even revolting their customers. While this is a reasonable concern, it could prove to be an unfounded and risky position — because marketers have already forced their customers into the uncanny valley through the use of marketing automation and aggressive personalization. And to quote another truism, when you’re going through hell, keep going. Because you don’t want to stay there.

Your customers are already in the uncanny valley

Could it be true that we’re already subjecting our customers to experiences that create bewilderment and revulsion? You don’t have to have 3D avatars or robots in your customer experience to create these eerie, negative feelings in your customers. The uncanny valley is represented by a sudden decrease in empathy when a human-like being ceases in some way to be human. Here are some specific examples to indicate that your customers may already be in the uncanny valley:

Broken context

Example: An AI assistant or chatbot initially passes for human but fails to understand the context of a question that would be simple for a human to understand, revealing that it is not human. Here is just one of many anecdotes from Reddit:

In just seconds, this user went from loving their Echo to figuratively (literally?) flipping the table in frustration.

Not quite lookalikes

Example: A cursory read of our example user’s Facebook history could tell you that he is a foodie, a vegetarian and a fan of subscription boxes. Recently, this user got targeted by a new artisanal food subscription service that was relevant in many respects, except for the fact that they exclusively offer cured meats. It’s reasonable in some respects that an artisanal cured meat subscription service would target him. Except that as a vegetarian, this user found their ad bewildering and invasive, causing him to lose interest and scroll quickly past. In his scrolling fervor, he accidentally registered a click on the ad, leading to weeks of cured meats in his feed.

Bad timing

Another example from that same user: Currently, his bank is aggressively targeting him with a competitive mortgage offer. Three weeks ago, there were credit, fund consolidation and other signals that he was preparing for a home purchase. At that point, it was stone cold silence from the bank. But now that he has signed a mortgage with another bank and closed escrow, he is getting targeted after the fact with an offer that he would have considered three weeks ago. Now, it’s just aggravating.

Three ways to ascend from the uncanny valley

Ascending from the uncanny valley is possible, but it takes buy-in from executives and a concerted effort by the entire marketing organization. Fortunately, Artificial Intelligence Marketing (AIM) provides a new approach for interacting with customers, allowing for consistent relevant experiences across all channels and continuous optimization at scale. Marketers shouldn’t fear the uncanny valley. They should focus on crossing through to the other side. Here’s how:

1. Keep context: Match your level of sophistication across channels

Ideally, your website, app and chatbot work together to provide integrated, personalized service. Your customers should be able to access the same contextual features whether in the mobile app, at the brick-and-mortar store or when chatting with Alexa. If a user clicks through to your site from a specific offer email, that offer should automatically persist on the website. While your customers often encounter the same creative elements across your app, social, display, email and website, they’re disappointed when experiences are disjointed and out-of-context.
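One minimal way to make an email offer persist onto the website, as described above, is a shared customer-context store that every channel reads from. The store, function names, and fields below are hypothetical, just to show the pattern:

```python
# In-memory stand-in for a shared customer-context service; a real system
# would use a customer-data platform or session store shared across channels.
context_store = {}


def record_email_click(customer_id, offer_id):
    """Email channel: remember which offer the customer clicked through from."""
    context_store[customer_id] = {"active_offer": offer_id, "source": "email"}


def render_website_banner(customer_id):
    """Web channel: read the same context so the offer carries over."""
    ctx = context_store.get(customer_id)
    if ctx and ctx.get("active_offer"):
        return f"Offer {ctx['active_offer']} applied"
    return "Default banner"
```

The point of the design is that neither channel owns the context: both write to and read from the same record, so the experience stays consistent wherever the customer shows up next.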

Unfortunately, many brand experiences can only be delivered to users in a single channel due to the inherent limitations of current marketing clouds, making a promise of sophisticated interaction that can’t be delivered in other channels.

2. Reduce uncertainty: Use visual cues to signal behavior and ability

If you do have varying levels of sophistication for some of your communication channels, you can give your customer cues to set appropriate expectations. If you have created a bot or an app with sophisticated abilities, imbue it with human personality. In contrast, a limited chatbot doesn’t need a name, a highly humanized voice or an avatar. And if your mobile app focuses on a subset of features, be clear about what they are in the app name and description.

3. Responsiveness: Reduce the lag between insight and action

If you gain insight about an individual customer, how quickly can you adjust your interactions to be responsively relevant? A human conversation involves both parties responding in real time to conscious and subconscious cues. If your campaigns and audience segments are static, or if your channels are siloed, it can take too long to move at the speed of the customer. However, with AI marketing that has dynamic decisioning at its core, new data and behavioral signals can immediately be acted upon without human intervention. The result is a more responsive customer interaction that adapts as your customer evolves.
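Dynamic decisioning of this kind is commonly built on multi-armed bandits, which keep exploring message variants while shifting traffic toward whatever is currently working. Below is a minimal epsilon-greedy sketch; the class, names, and parameters are illustrative, not any vendor’s API:

```python
import random


class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit for choosing among message variants."""

    def __init__(self, arms, epsilon=0.1, seed=42):
        self.arms = list(arms)
        self.epsilon = epsilon  # fraction of decisions spent exploring
        self.counts = {arm: 0 for arm in self.arms}
        self.values = {arm: 0.0 for arm in self.arms}  # running mean reward
        self.rng = random.Random(seed)

    def select(self):
        """Pick a random arm with probability epsilon, else the current best."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda arm: self.values[arm])

    def update(self, arm, reward):
        """Fold a new observed reward (e.g. click = 1.0) into the arm's mean."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Because each new behavioral signal immediately updates the arm’s estimate, the next decision already reflects it, which is the “speed of the customer” property the paragraph describes; no batch recalculation of segments is needed.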

To get to the other side

Helping your customers ascend out of the uncanny valley can seem like a monumental task, but with AI marketing, it is now feasible. Consumer brands that make the leap and move away from the rules will be the first to reap the benefits of consistent, cross-channel interactions that are optimized at scale by AI.

ABOUT THE AUTHOR

Amplero is an Artificial Intelligence Marketing (AIM) company that enables enterprise B2C marketers to optimize customer lifetime value at a scale that is not humanly possible. Unlike most approaches, which bolt AI onto legacy martech stacks, Amplero’s AIM Platform was built with AI at the core, using machine learning and multi-armed bandit experimentation to dynamically test thousands of permutations to automatically optimize every customer interaction and maximize customer lifetime value and loyalty. Using Amplero, marketers in telecom, banking, gaming and consumer tech have garnered 1-3% incremental growth in customer topline revenue and a 3-5x lift in retention rates. https://www.amplero.com/