Which classes will give me the skills that A.I. machines will not replicate, making me more distinctly human? A distinct personal voice, Presentation skills, Talent for creativity, Situational Awareness … (Source: DAVID BROOKS – NY Times) (Extract)

 

Which classes will give me the skills that machines will not replicate, making me more distinctly human?

Source: Opinion | In the Age of A.I., Major in Being Human – The New York Times (nytimes.com)

You probably want to avoid any class that teaches you to think in an impersonal, linear, generalized kind of way — the kind of thinking A.I. will crush you at. On the other hand, you probably want to gravitate toward any class, in the sciences or the humanities, that will help you develop the following distinctly human skills:

 


A distinct personal voice. A.I. often churns out the kind of impersonal bureaucratic prose that is found in corporate communications or academic journals. You’ll want to develop a voice as distinct as those of George Orwell, Joan Didion, Tom Wolfe and James Baldwin, so take classes in which you are reading distinctive and flamboyant voices so you can craft your own.

 

Presentation skills. “The prior generation of information technology favored the introverts, whereas the new A.I. bots are more likely to favor the extroverts,” the George Mason University economist Tyler Cowen writes. “You will need to be showing off all the time that you are more than ‘one of them.’” The ability to create and give a good speech, connect with an audience, and organize fun and productive gatherings seem like a suite of skills that A.I. will not replicate.

 

A childlike talent for creativity. “When you interact for a while with a system like GPT-3, you notice that it tends to veer from the banal to the completely nonsensical,” Alison Gopnik, famed for her studies on the minds of children, observes. “Somehow children find the creative sweet spot between the obvious and the crazy.” Children, she argues, don’t just imitate or passively absorb data; they explore, and they create innovative theories and imaginative stories to explain the world. You want to take classes — whether they are about coding or painting — that unleash your creativity, that give you a chance to exercise and hone your imaginative powers.

 

Unusual worldviews. A.I. can be just a text-prediction machine. A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. Stock your mind with worldviews from faraway times, unusual people and unfamiliar places: Epicureanism, Stoicism, Thomism, Taoism, etc. People with contrarian mentalities and idiosyncratic worldviews will be valuable in an age when conventional thinking is turbo powered.

 

Empathy. Machine thinking is great for understanding the behavioral patterns across populations. It is not great for understanding the unique individual right in front of you. If you want to be able to do this, good humanities classes are really useful. By studying literature, drama, biography and history, you learn about what goes on in the minds of other people. If you can understand another person’s perspective, you have a more valuable skill than the skill possessed by some machine vacuuming up vast masses of data about no one in particular.

Situational Awareness. A person with this skill has a feel for the unique contours of the situation she is in the middle of. She has an intuitive awareness of when to follow the rules and when to break the rules, a feel for the flow of events, a special sensitivity, not necessarily conscious, for how fast to move and what decisions to take that will prevent her from crashing on the rocks. This sensitivity flows from experience, historical knowledge, humility in the face of uncertainty, and having led a reflective and interesting life. It is a kind of knowledge held in the body as well as the brain.

 

The best teachers teach themselves. When I think back on my own best teachers, I generally don’t remember what was on the curriculum, but rather who they were. Whether the subject of the course was in the sciences or in the humanities, I remember how these teachers modeled a passion for knowledge, a funny and dynamic way of connecting with students. They also modeled a set of moral virtues — how to be rigorous with evidence, how to admit error, how to coach students as they make their own discoveries. I remember how I admired them and wanted to be like them. That’s a kind of knowledge you’ll never get from a bot.

 

And that’s my hope for the age of A.I. — that it forces us to more clearly distinguish the knowledge that is useful information from the humanistic knowledge that leaves people wiser and transformed.

 

Using AI to Predict Zwift Race Results: the ZRace App (Source: Zwiftinsider)

BY ERIC SCHLANGE

FEBRUARY 10, 2022

If you’re anything like me, there are two questions on your mind as you enter a bike race:

  1. How well will I do today? This is the personal expectation piece. Do I anticipate a shot at the podium? Or will I be getting dropped at some point for some reason, and this is more of a workout or team effort than a win attempt?
  2. Who are the strongest riders in this race? If I’m contesting the finish or working for a teammate, who are the key riders I need to be watching? If a weak rider attacks I don’t need to waste my watts, but if a strong one does, I may want to respond!

Back in early 2021, one Zwifter created ZRace – an app that answers both of these questions with impressive accuracy. The app predicts the finishing places of riders signed up for Zwift races, and according to its creator, the tool’s Top 5 prediction is quite accurate, with a 95% probability that 3 of its predicted top 5 athletes will indeed finish in the top 5.

How It Started

In early 2021, Bruno Gregory had already created racedata.bike, an app that analyzes and predicts races across all categories of cycling in the US. Then Covid happened, sparking Bruno’s interest in Zwift and his subsequent participation in Zwift races.

He quickly learned there was a wealth of Zwift racer data available: power, heart rate, weight, age, sex, historical results, and more. And he realized that, given this additional data, analysis and predictions could be made much more accurate than the initial version of his app.

The “Random Forest” decision tree algorithm is used in the machine learning which powers ZRace

I won’t go into detail how exactly ZRace calculates its predictions, because those details are above my pay grade. But it uses machine learning (a form of AI), and the more races that happen, the more accurate it gets. (To read how the project unfolded, including Bruno’s iterative approach to selecting the best predictive models, read his post on Medium.com.)
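ZRace’s exact features and model tuning aren’t published here, but the general idea (fitting a random-forest model to historical rider data and asking it for a finishing position) can be sketched in a few lines. The column names and numbers below are purely illustrative, not ZRace’s actual inputs:

```python
# Hypothetical sketch: predicting race finishing positions with a random forest.
# Feature names and data are illustrative only; ZRace's real features and tuning
# are not documented in this article.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Historical race results: one row per rider per race.
history = pd.DataFrame({
    "ftp_wkg":         [4.1, 3.6, 4.4, 3.2, 3.9, 4.0, 3.4, 4.6],
    "weight_kg":       [70, 82, 65, 90, 75, 68, 85, 63],
    "avg_past_rank":   [3, 12, 2, 20, 8, 5, 15, 1],
    "races_ridden":    [40, 15, 60, 8, 25, 33, 12, 80],
    "finish_position": [2, 14, 1, 22, 7, 4, 17, 1],
})

X = history.drop(columns="finish_position")
y = history["finish_position"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out races:", mean_absolute_error(y_test, model.predict(X_test)))

# Predict the finishing position of a newly signed-up rider.
new_rider = pd.DataFrame([{"ftp_wkg": 3.8, "weight_kg": 72,
                           "avg_past_rank": 9, "races_ridden": 20}])
print("Predicted finish:", model.predict(new_rider)[0])
```

In practice, the more historical races a model like this sees, the better its estimates become, which matches Bruno’s observation that ZRace gets more accurate as more races happen.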

What It Does

Bruno describes ZRace like this:

ZRace analyzes all athletes registered in a race and predicts possible winners. It also analyzes each category and presents the average power required for you to have a good result. In addition, athletes with specific profiles are identified, such as climber, sprinter, and time-trialist. This way, depending on the race’s course, it is possible to predict who will have a better result or even who you should keep an eye on for a certain part of the race.

Let’s dig into each of those features, which all live on the Race Predictor screen.

The Race Predictor

While many Zwifters simply visit ZwiftPower and sort the signup list by rank to find out who the top riders are, the ZRace Race Predictor is much more precise, using multiple variables plus a robust machine-learning algorithm to predict each rider’s finishing position.

From the ZRace.bike homepage, select any race. This will load up the Race Predictor for that event. In a multi-category event it defaults to showing the A category predictions, but you can select the category you’d like to view from the “Category” dropdown. Here’s the Race Predictor screen for an upcoming KISS Race:

Along the top you have a summary of each category’s signup list, including the number of riders signed up and the FTP average of the field.

You also have top riders selected by profile: a top sprinter, climber, and breakaway rider. Depending on the route profile and race situation, these would be good riders to watch.

List of Past Races

Curious how accurate ZRace’s predictions are? Click “Past Races on Zwift” on the left, choose a race, then click Results to see actual results and ZRace’s prediction.

Predict Me

Click Predict Me, select your race category, and enter your Zwift ID. The app will predict your result in the next hour’s Zwift races. (Not sure how to find your Zwift ID?) Here’s what it predicted for me, entering the B category:

Race Statistics

This portion of the ZRace app is quite interesting. It displays stats for:

  • Most popular days of the week and time to race
  • Winners by country
  • Most popular race events
  • Most popular race routes
  • Toughest races (based on power numbers)
  • Winner profile of men and women in all categories
  • Winners Ranking (top-ranked riders in each category based on ZRace’s algorithm)

A Few Gotchas

ZRace only lists ZwiftPower-registered riders, so it’s possible you could enter a race and get beaten by someone who hasn’t signed up for ZwiftPower. Then it’s up to you to wrestle with that age-old Zwifter question… if they aren’t on ZwiftPower, did they actually win?

The ZRace algorithm works well for iTT races and standard scratch races, but doesn’t work for handicap (chase) races. It also can’t predict the winner of a points race with intermediate point segments, since it is only predicting the finish order.

The system can take a bit of time to make its prediction, because it has to process rider data when you view an event. Be patient, it’s worth it!

Artificial intelligence: only 9% of American companies appear to have adopted it

A study published in July shows that the use of artificial intelligence in business is still far from widespread in the United States.

source: https://siecledigital.fr/2020/08/13/intelligence-artificielle-seules-9-des-entreprises-americaines-lauraient-adoptee/

By Thibault Minondo – @ThibaultMinondo
Published August 13, 2020, at 9:01 a.m.


On July 16, the US Census Bureau, an agency that produces market studies based on data analysis, published a report on the use of artificial intelligence in business. The survey, conducted at the end of 2018 among 583,000 American companies, points to very marginal adoption.

Artificial intelligence, still in an early deployment phase

Machine learning, for example, is deployed at only 2.8% of the companies surveyed. If we add the other branches of artificial intelligence covered by the study (voice recognition, autonomous vehicles, machine vision, robotics, RFID, augmented reality, etc.), fewer than one American company in ten (8.9%) uses at least one of them.

The figure is surprising, but it needs to be put in perspective. First of all, the study dates from the end of 2018. With the exponential acceleration of AI usage over the past six months, the survey results are no doubt a snapshot out of step with the reality of the post-Covid-19 world. As we reported recently, AI-based technologies are expected to find their way into 80% of companies by 2022.

So how do we explain such a large gap between the state of affairs at the end of 2018 and the projection four years later? “We are only at the very beginning of AI adoption. People should not think that the machine learning revolution is running out of steam or is a thing of the past. There is a tidal wave ahead of us,” says Erik Brynjolfsson, director of the Stanford Digital Economy Lab and co-author of the US Census Bureau survey.

A wave of adoption is anticipated, then, but it is still premature if we go by the survey’s figures, which are well below those published at the same time by two other surveys run by McKinsey and PwC. The first, released in November 2018, put the share of executives using artificial intelligence in one form or another at 30%. The second, from PwC, appeared in late 2018 and showed that one executive in five was planning to launch an AI-related technology in 2019.

Study infographic – screenshot: Wired

Large companies lead the way on AI adoption

Erik Brynjolfsson explains that, unlike his peers’ studies, the US Census Bureau’s is meant to be more representative of the American economic fabric, since it is not focused on the Fortune 500. That methodology translates into a two-speed reality among companies: nearly 25% of those with more than 250 employees have invested in some form of artificial intelligence, while only 7.7% of companies with fewer than 10 employees have done the same.

“Large companies are adopting,” Brynjolfsson says, “but most American businesses – Joe’s pizzeria, the dry cleaner, the small manufacturing company – aren’t there yet.” For large companies, this standard-bearer role in the adoption of these new technologies will be decisive for the economic rebound: since they carry a larger share of economic activity, it will be vital to see these leaders show the way in the technological transition.

While putting artificial intelligence components in place may seem slower to get going at the market’s heavyweights, the demand for AI skills shows that the transition is under way. At Google, downloads of TensorFlow, its framework for building AI programs, topped 10 million in May 2020 alone.

On the front of training for tomorrow’s skills, Microsoft has partnered with the French company OpenClassrooms to create an “AI Engineer” program, which aims to recruit and train 1,000 machine learning and artificial intelligence engineers. “We are joining forces to close the digital skills gap by offering Microsoft’s expertise and content in AI, cloud computing, machine learning and data science to students of all ages and backgrounds through OpenClassrooms’ high-quality, interactive online learning platform,” says Ed Steidl, Director of Workforce Partnerships at Microsoft.

Projects incorporating AI are multiplying

Initiatives built around artificial intelligence are also beginning to appear, and new playing fields are emerging, not always where you would expect them. Last April, we covered Intel’s project, dubbed CORail, which collects data on coral reefs affected by global warming and relies entirely on artificial intelligence. Last month, we presented Tuna Scope, an application dedicated to assessing the quality of a tuna from a single photograph; built on machine learning, the app is already used by a Japanese restaurant chain. On the beverage side, there is AB InBev: using data collected at a brewery in New Jersey, the company developed an AI algorithm to anticipate potential problems in the filtration process used to remove impurities from beer.

The “tidal wave” Erik Brynjolfsson describes is therefore still at an early stage. The subject of artificial intelligence is nonetheless becoming mainstream at an exponential pace, and it will be worth keeping an eye on the multiplication of initiatives in the months and years to come.

10 USEFUL Artificial Intelligence & Machine Learning Slides

1. Evolution of Analytics

AISOMA – Evolution of Analytics

Analytics is the discovery, interpretation, and communication of meaningful patterns in data; and the process of applying those patterns towards effective decision making. In other words, analytics can be understood as the connective tissue between data and effective decision making, within an organization. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming and operations research to quantify performance.

Organizations may apply analytics to business data to describe, predict, and improve business performance. Specifically, areas within analytics include predictive analytics, prescriptive analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big Data Analytics, retail analytics, supply chain analytics, store assortment and stock-keeping unit optimization, marketing optimization and marketing mix modeling, web analytics, call analytics, speech analytics, sales force sizing and optimization, price and promotion modeling, predictive science, credit risk analysis, and fraud analytics. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics.
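The slide itself is an image, but the distinction it draws between stages of analytics is easy to make concrete. The sketch below, which assumes pandas and scikit-learn and uses made-up sales figures, runs a descriptive step (summarizing what happened) and then a simple predictive step (forecasting what might happen next):

```python
# Illustrative only: a tiny descriptive-analytics pass over hypothetical sales data,
# followed by a one-line predictive step. None of this comes from the slide itself.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.DataFrame({
    "week":     [1, 2, 3, 4, 5, 6],
    "ad_spend": [10, 12, 9, 15, 14, 18],   # thousands of dollars
    "revenue":  [52, 58, 50, 66, 63, 75],  # thousands of dollars
})

# Descriptive analytics: what happened?
print(sales[["ad_spend", "revenue"]].describe())

# Predictive analytics: what is likely to happen next?
model = LinearRegression().fit(sales[["ad_spend"]], sales["revenue"])
forecast = model.predict(pd.DataFrame({"ad_spend": [20]}))
print("Forecast revenue at $20k ad spend:", forecast[0])
```

Prescriptive analytics would go one step further and recommend an action, for example the ad spend that maximizes expected return under a budget constraint.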

2. Future of Data Science

AISOMA – Future of Data Science

Sebastian Raschka, a researcher in applied machine learning and deep learning at Michigan State University, thinks that the future of data science is not machines taking over from humans, but rather human data professionals embracing open-source technologies.

It is commonly understood that future data science projects, thanks to advanced tools, will scale to new heights, where more human experts will be required to handle highly complex tasks efficiently. However, according to the McKinsey Global Institute (MGI), the next decade will see a sharp shortage of around 250,000 data scientists in the U.S. alone. The question is whether machines can ever enable seamless collaboration between technologies, tools, processes, and end users. Automated tools and assistants can help the human mind accomplish tasks more quickly and accurately, but machines cannot be expected to substitute for human thinking. The core of problem-solving is intellectual thinking, which no machine, however sophisticated, can replicate. (further information)



3. Machine Learning Workflow

AISOMA – Machine Learning Workflow

Check out the Google Machine Learning Glossary
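The workflow slide is likewise an image, but the standard supervised learning loop it refers to (collect data, split it, train a model, evaluate it, then predict) can be sketched generically with scikit-learn. This is an illustration of the usual steps, not AISOMA’s specific pipeline:

```python
# Generic supervised ML workflow: load data, split, train, evaluate, predict.
# Illustrates the usual steps only; not any particular pipeline from the slide.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                                 # 1. data collection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)   # 2. train/test split
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)           # 3. model training
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))       # 4. evaluation
print("Prediction for a new sample:", clf.predict(X_te[:1]))      # 5. inference
```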

4. Deep Learning Workflow

AISOMA – Deep Learning Workflow

Check out the Google Machine Learning Glossary

Artificial Intelligence is a Tsunami

5. Deep Learning Continuous Integration and Delivery

AISOMA – Deep Learning CI and CD

More info: link

6. Anatomy of a Chatbot

AISOMA – Anatomy of a Chatbot

More info: How Businesses can Benefit from Chatbots

7. Five ethical challenges of AI

AISOMA – 5 Ethical Challenges of AI

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). (more info)


8. NLP / NLU Technology Stack

AISOMA – NLP Technology Stack

Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. (more info)
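As a concrete, if toy, illustration of that definition, the snippet below turns a handful of made-up review texts into bag-of-words features and trains a small classifier; it stands in for the much larger NLP stacks the slide depicts:

```python
# Minimal NLP illustration: turning raw text into features and classifying it.
# The tiny training set is made up purely for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great service, very helpful", "loved the fast response",
         "terrible experience, never again", "slow and unhelpful support"]
labels = ["positive", "positive", "negative", "negative"]

nlp_model = make_pipeline(CountVectorizer(), MultinomialNB())
nlp_model.fit(texts, labels)

print(nlp_model.predict(["the support team was very fast and helpful"]))
```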

9. Condition Monitoring / Predictive Maintenance Solution Architecture

AISOMA – Predictive Maintenance Solution Architecture

More Info: Smart Predictive Maintenance: The Key to Industry 4.0

10. Artificial Intelligence in Marketing

AISOMA – AI in Marketing

The Current Applications Of Artificial Intelligence In Mobile Advertising (source: Forbes)

The concept of self-programming computers was closer to science fiction than reality just ten years ago. Today, we feel comfortable conversing with smart personal assistants like Siri and keep wondering just how Spotify guessed what we like.

It’s not just the mobile apps that are becoming more “intelligent”. The advertising that encourages us to interact with and install those apps has reached a new level of quality as well. Thanks to advances in machine learning (ML), the baseline technology for AI, the mobile advertising industry is now undergoing a significant transformation.

source: https://www.forbes.com/sites/andrewarnold/2018/12/24/the-current-applications-of-artificial-intelligence-in-mobile-advertising/#2c8fa7f91821

AI can reduce mobile advertising fraud

In 2018, mobile ad fraud rates doubled compared to the previous year. To tap into marketers’ expanding ad budgets, fraudsters have added a host of new tricks to their playbook. According to Adjust data, the following mobile ad threats have prevailed:

SDK spoofing represented 37% of ad fraud. In SDK spoofing, malicious code injected into one app simulates ad clicks, installs, and other fake engagement, and sends bogus signals to an attribution provider on behalf of the “victim” app. Such attacks can make a significant dent in an advertiser’s budget by forcing them to pay for installs that never actually took place.

Click injections accounted for 27% of attacks. Cybercriminals trigger clicks before the app installation is complete and receive credit for those installs as a result. Again, these can drain your ad budgets and dilute your ROI numbers.


Faked installs and click spam accounted for 20% and 16% of fraudulent activities respectively. E-commerce apps have been in the fraud limelight this year, with nearly two-fifths of all app installs being marked as “fake” or “spam”, followed closely by games and travel apps. Forrester further reports that 69% of marketers whose monthly digital advertising budgets run above $1 million admit that at least 20% of those budgets are drained by fraud on the mobile web.

If the issue is so big, why isn’t anyone tackling it? Well, detecting ad fraud is a complex process that requires 24/7 monitoring and analysis of incoming data. And that’s where AI comes to the fore. Intelligent algorithms can process large volumes of data faster and more accurately than any human analyst, spot abnormalities, and trigger alerts for further investigation. What’s more promising is that with advances in deep learning, the new generation of AI-powered fraud systems will also become capable of self-tuning their performance over time, learning to predict, detect and mitigate emerging threats.
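As a rough illustration of the “spot abnormalities and trigger alerts” idea, the sketch below flags unusual install traffic with an Isolation Forest. The features and threshold are invented; real fraud systems use far richer signals:

```python
# Illustration only: flagging anomalous install traffic with an Isolation Forest.
# Feature names and values are invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [seconds from click to install, clicks per device per hour]
traffic = np.array([
    [45, 2], [60, 1], [52, 3], [70, 2], [48, 1], [55, 2],   # typical behaviour
    [1, 400], [2, 380],                                      # suspiciously fast and noisy
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(traffic)
flags = detector.predict(traffic)   # -1 = anomaly, 1 = normal

for row, flag in zip(traffic, flags):
    if flag == -1:
        print("Review for possible fraud:", row)
```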

AI brings increased efficiency and higher ROI for real-time ad bidding

One of the biggest selling points of the “AI revolution” across multiple industries is the promise to automate and eliminate low-value business processes. Mobile advertising is no exception. Juniper Research predicts that by 2021, machine learning algorithms that increase efficiency across real-time bidding networks will drive an additional $42 billion in annual spend.

Again, thanks to their robust analytical capabilities, ML algorithms can create the perfect recipe for your ad, displaying it at the right time to the right people. Google has already been experimenting with various optimizations for mobile search ads, and the results so far are rather promising. Macy’s, for instance, has been leveraging inventory ads and displaying them to customers who recently looked up its products and are now in close geographic proximity to the store holding the goods they viewed a few hours earlier.

AdTiming has been helping marketers refine their approach to in-app advertising. By crunching data from over 1,000 marketers, the startup has developed its recipe for the best ad placements. “Prescriptive analytics will tell our users when is the best time to run their ads, what messaging to use and how frequently the ad needs to be displayed in order to meet their ROI while maintaining the set budget,” said Leo Yang, CEO of AdTiming.

Just how competitive can AI-powered real-time ad bidding be? A recent experiment conducted by a group of scientists on Taobao – China’s largest e-commerce platform – shows that algorithms perform far better than humans.

For comparison:

  • Manual bidding campaigns brought in 100% ROI with 99.52% of budget spent.
  • Algorithmic bidding generated 340% ROI with 99.51% of budget spent.

It’s clear who’s the winner here.

AI enables advanced customer segmentation and ad targeting

Algorithms are better suited to detecting patterns than the human eye, especially when set to work on large volumes of data. They can effectively group and cluster that data to create rich user profiles for individual customers – based on their past interactions with your brand, their demographic data and their online browsing behavior.

This means that you are no longer targeting a broad demographic of “women (aged 25-35), based in the US”. You become able to pursue more niche audiences exhibiting rather specific behaviors, e.g. regularly engaging with luxury hair care products on social media. This insight can then be applied by an AI system when entering an RTB auction to predict when your ad should be displayed in front of the consumer matching your profile and when it’s worth a pass.
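A minimal sketch of that segmentation idea, clustering a few hypothetical behavioral features with k-means, might look like this:

```python
# Sketch of audience segmentation by clustering; the behavioural features are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [sessions per week, luxury-product views, average basket value in $]
users = np.array([
    [2, 0, 25], [3, 1, 30], [1, 0, 20],        # casual shoppers
    [10, 8, 180], [12, 9, 210], [9, 7, 160],   # luxury-focused, highly engaged
])

scaled = StandardScaler().fit_transform(users)                      # normalize features
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(segments)   # e.g. [0 0 0 1 1 1] – ads can then be targeted per segment
```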

The best part is that AI-powered advertising is no longer cost-prohibitive for smaller companies. With new solutions entering the market, it would be interesting to observe how the face of mobile advertising will change in 2019 and onward.

Gartner Hype Cycle: AI, propelled into everyone’s hands

In 10 years, artificial intelligence will be everywhere, and no longer only in the hands of professionals, according to Gartner. The democratization of AI is one of the five main technology predictions identified by the research and advisory firm in the 2018 edition of its famous “Hype Cycle” curve, which maps emerging technologies and identifies their position on the hype cycle.

Based on an analysis of 2,000 technologies, grouped into 35 emerging areas, this year’s report identifies five technology trends in particular that will blur the boundaries between humans and machines.

Credit: Gartner (August 2018)

AI, propelled into everyone’s hands

Regarding artificial intelligence, “AI technologies will be virtually everywhere over the next 10 years,” Gartner predicts. “These technologies, which allow early adopters to adapt to new situations and solve new problems, will be made available to the masses – democratized. Trends such as cloud computing, the maker community and open source will propel AI into everyone’s hands.”

This future will be made possible by the following technologies: AI Platform-as-a-Service (PaaS), artificial general intelligence, autonomous driving, autonomous mobile robots, AI-based conversational platforms, deep neural networks, flying autonomous vehicles, smart robots and virtual assistants.

New opportunities tied to digitalized ecosystems

Among the four other major trends that will blur the boundaries between humans and machines, Gartner has also identified “digitalized ecosystem technologies making their way rapidly onto the Hype Cycle,” explains Mike J. Walker, research vice president. “Blockchain and IoT platforms have now reached the peak, and we believe they will reach maturity in the next five to ten years.”

The report thus predicts that the shift from a compartmentalized technical infrastructure model to an ecosystem of platforms lays the groundwork for new business models that will form “a bridge between humans and technology.”

Do-it-yourself biohacking

“Over the next decade, humanity will begin its ‘transhuman’ era: biology can then be hacked according to lifestyle, interests and health needs,” Gartner explains. Society will then have to ask itself which applications it is prepared to accept and reflect on the ethical issues involved.

For the firm, this trend covers four areas: augmentation technologies, nutrigenomics, experimental biology and “grinder biohacking,” a movement whose members seek to enhance their physical abilities with home-made cybernetic implants.

Transparent and immersive experiences for smarter living

The study also cites the category of transparent and immersive experiences. “Technology will continue to become more human-centric, to the point of fostering transparency between people, businesses and things.” For Gartner, this will notably allow people to work and live more intelligently. This evolution will be made possible by the following technologies: 4D printing, the connected home, edge AI, self-healing system technology, silicon anode batteries, smart dust, the smart workspace and volumetric displays.

Ubiquitous infrastructure

The fifth and final trend is ubiquitous infrastructure. As Gartner explains, “technologies supporting ubiquitous infrastructure are on track to reach the peak and advance quickly through the Hype Cycle. 5G and deep neural network application-specific integrated circuits (ASICs), in particular, are expected to reach the plateau within the next two to five years.”

The History of Artificial Intelligence (by Rockwell Anyoha) (Source: Harvard)

by Rockwell Anyoha

Source: http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

August 28, 2017

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949 computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept as well as advocacy from high profile people were needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a program designed to mimic the problem solving skills of a human and was funded by the Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956. In this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open ended discussion on artificial intelligence, the term which he coined at the very event. Sadly, the conference fell short of McCarthy’s expectations; people came and went as they pleased, and there was failure to agree on standard methods for the field. Despite this, everyone whole-heartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing. Optimism was high and expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Figure 2: AI timeline

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.”  As patience dwindled so did the funding, and research came to a slow roll for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit, and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques which allowed computers to learn using experience. On the other hand, Edward Feigenbaum introduced expert systems which mimicked the decision making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industries. The Japanese government heavily funded expert systems and other AI related endeavors as part of their Fifth Generation Computer Project (FGCP). From 1982 to 1990, they invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision making program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward, but in the direction of the spoken language interpretation endeavor. It seemed that there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds

We haven’t gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem. Moore’s Law, which estimates that the memory and speed of computers doubles every year, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.

Artificial Intelligence is Everywhere

We now live in the age of “big data,” an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s Law is slowing down a tad, but the increase in data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore’s Law.

The Future

So what is in store for the future? In the immediate future, AI language is looking like the next big thing. In fact, it’s already underway. I can’t remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this would be accomplished in the next 50 years. Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes (but better even before the time comes), we will need to have a serious conversation about machine policy and ethics (ironically both fundamentally human subjects), but for now, we’ll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf

Is It Time for Your Organization to Invest in AI? (Source: InformationWeek)

After decades of AI being viewed as a “future” concept, it may be time for companies to invest real dollars in real applications.

source: https://www.informationweek.com/big-data/ai-machine-learning/is-it-time-for-your-organization-to-invest-in-ai/d/d-id/1330683

A recent Accenture survey found that 85% of business executives plan to invest heavily in AI-related technologies over the next three years. Most investments, according to the report, will be in major business processes, underpinning a company’s finance and accounting, marketing, procurement, and customer relations activities.

Could it be possible that, after years of forecasts and speculation, not to mention an endless number of science fiction stories, movies, and TV shows, AI is finally ready to become an indispensable real-world technology?

Ruchir Puri, an IBM fellow and chief architect of IBM Watson, certainly thinks so. “There are many opportunities for AI across front, middle, and back office processes, throughout lines of business and within various verticals,” Puri noted. “AI capabilities, such as conversation, vision and language technologies, can be used to solve a range of practical enterprise problems, boost productivity and foster new discoveries across any area it is applied to.”


Getting down to business

Across the high-tech industry, there’s a strong feeling that AI is about to cross the chasm and become a business-transforming technology, observed Jean-Luc Chatelain, CTO at Accenture Applied Intelligence. “There’s already evidence of the impact AI has and even stronger evidence of what it will have in areas like healthcare and life sciences, where it could facilitate breast cancer detection or personalized drug development.”

Vasant Dhar, a professor at New York University’s Stern School of Business, stated that AI can be put to work wherever data exists. “It could be for customer service, marketing, planning, gathering new customers or assets,” he explained.


According to an IBM study, 80% of the world’s data is not on the Web, but sitting unused inside businesses. “Today, most organizations only have a capability to explore a tiny fraction of this ‘dark’ data,” Puri said. AI is the key to unlocking that hidden resource.

Dhar noted that AI offers enterprises three basic benefits: the ability to learn continually from data, the ability to improve decision making and make it more consistent, and the ability to improve operational efficiency and cut costs.

AI can play a role in reducing costs while enhancing speed, accuracy, availability, and auditability, suggested Nichole Jordan, national managing partner of markets, clients, and industry for audit, tax, and advisory firm Grant Thornton. “Any process involving structured digital data and business rules will benefit,” she stated. “AI can also assist with front-office functions — think customer interactions via chatbots and mobile messaging that are able to address common customer/client questions.”


Said Tabet, lead technologist for AI strategy at Dell EMC, sees a particularly bright future for AI in security applications. “AI-based pattern recognition technologies have been employed within various IT and cyber security applications for several years to proactively manage system performance or to block security threats,” he noted. “These capabilities will only become more widespread.”

AI also promises to improve the efficiency of various basic and repetitive tasks, according to Dennis Bonilla, executive dean of the University of Phoenix’s College of Information Systems & Technology. “This allows companies to allocate more resources to people and projects that require a higher level of creativity that computers can’t yet achieve,” he observed. Bonilla expects a rapid acceleration in businesses that use AI to handle rote tasks.


“We are seeing AI become increasingly more available to businesses as new platforms are created and people and organizations have more access to the necessary technology to create and utilize AI,” he said.

While AI can automate an entire range of routine functions, the technology can also help workers become more productive and efficient, augmenting human skills in terms of strength or dexterity. “Initial AI projects should be deployed in areas that augment human performance and help people with their jobs, hence demonstrating the value of AI while also easing employees’ fears of replacement,” recommended Joshua Feast, CEO of Cogito, an MIT spinoff that creates AI and emotional intelligence software.

Looking farther down the road, Chatelain believes that AI will lead to the creation of a new class of intelligent products “such as self-repairing software and autonomous vehicles, as well as ‘living services’ that learn from the user and adapt to their preferences and needs.”


Assessing benefits

“AI is no longer a ‘nice to have’ technology; it is becoming a crucial part of an organization’s arsenal of tools,” Puri said. “Enterprises deploying AI technologies at mass scale will gain dramatic increases in productivity, allowing employees to handle more complex, creative and higher-impact tasks, opening entirely new avenues of exploration and discovery.”

According to Jordan, business process improvements can be measured through several indicators, such as reduced headcount costs and enhanced speed, accuracy, quality, repeatability, availability, auditability and productivity. “Robots don’t take vacation, don’t get sick, and don’t take breaks,” she quipped.

“The improvements that result from the implementation of AI should be measured by benchmarking current performance and then comparing metrics after the AI has been deployed,” Feast suggested.


For example, when deploying AI within sales and service call centers to measure customer perception and improve the AI technology’s speaking behavior, common metrics such as handle time, first call resolution, customer satisfaction, and employee satisfaction can be measured before and after the AI technology has been applied. “This comparison will allow businesses to see the specific impact of AI on their business results,” Feast said.

Bonilla noted that businesses must also evaluate the impact AI has on customer-facing products and services. “Is the fidelity of your product jeopardized in any way?” he asked. “AI chatbots, for example, have come a long way, but nothing beats a human touch when it comes to customer service.”

Getting started

The first step for businesses planning to get started with AI is to look across their organization for areas where the technology can make the greatest impact. “Once companies have identified these areas, they can work with prominent academic institutions, leading technology vendors and industry analysts to gain better insight into available AI applications that can be deployed in a controlled setting where they can also be easily measured,” Feast explained.

Businesses should take a proactive approach to AI by inventorying current services to see how the technology might be able to enhance them, Jordan said. “They should also take stock of which internal manual, repetitive processes could be assisted or enhanced by AI in order to better and more quickly serve their clients,” she added.


To fully leverage AI’s potential, enterprise leaders must be able to clearly articulate their goals and expectations, and then prepare themselves with the right tools, data, and talent. “Separately, it is helpful to check if anyone in the organization is already using AI in some capacity before diving in, as collaboration across silos and business units can save time,” Puri elaborated.

It takes imagination to identify unique business processes to which AI capabilities can be applied, as well as expertise to actually implement a new AI-based innovation. “Organizations can start small by first examining the various workflows around non-critical operations that could be made more efficient if automated using AI,” Tabet said. It’s also a good idea to check for instances where partial automation is already happening. “Could AI help take that process from partial to full automation?” he asked. “What could the gains be to go all the way?”

To avoid becoming bogged down by AI’s inherent complexity, many organizations opt to tap into AI-as-a-service, using productized off-the-shelf APIs and AI applications in areas such as image recognition and natural language processing. “Pilot these proven practical applications to demonstrate the potential of AI, and be ready to fail fast and move on,” Chatelain said.

AI’s inevitability

Bonilla sees virtually no downside to investing in AI, but plenty of potential. “At the end of the day, businesses will fall behind the competition if they don’t keep up with technology—that has always been true,” he observed.


Skeptics are quick to point out that AI will eliminate traditional jobs, yet that doesn’t tell the whole story. “AI will also create jobs that don’t exist yet, but that’s an abstract concept for businesses, education providers, and policymakers alike,” Bonilla said. “The challenge is working collaboratively to ensure workers are prepared to fill these jobs when they’re created.”

Trust in AI will always be an issue, even when things are going well. “Part of that is human nature, part of it is still the newness and novelty of AI-based technologies in our lives and workplaces,” Tabet said. Successful experimentation and trials in non-critical business areas can help win over skeptics. “Continued education with workshops and hands-on proof of concepts and demos will help reinforce and illustrate the company’s strategy for AI,” he added.

As AI acceptance builds, IT and business leaders will need to shake off their natural resistance to change. “They need to become agents of the change that AI will bring,” Chatelain said, noting that AI is evolving rapidly and that enterprises need to pay close attention to technology and market developments. “I have rarely seen [technical] papers being published as fast as on AI topics, globally,” he noted. “Staying informed and close to academia and the latest AI research will certainly pay off.”

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic …