“Alexa, is there anything good happening on Prime Day this year?” “…Did you check eBay?”

Shop the Crash Sale starting 7/15.

#CrashingPrimeDay

eBay has thrown down the gauntlet to Amazon, announcing its very own “blockbuster” sales event to troll its online rival.

In an effort to cash in on the Amazon Prime Day hype, eBay has said it is set to launch a “Crash Sale”, a thinly veiled dig at Amazon’s propensity to crash during its Prime Day event, with last year’s outages thought to have cost it around $90 million.

eBay will offer discounts of up to 50 per cent on leading technology brands including Apple, Samsung, KitchenAid, Garmin and LG during Amazon’s 48-hour Prime Day event beginning on July 15.


eBay (@eBay): “We’re crashing the party with deals on things you actually want. Stay tuned. 🎉”


However, should Amazon crash during the event, eBay has promised to pile on more “too-good-to-be-true” deals to satisfy shoppers’ need for a tech bargain.

This forms part of a wider sales event for eBay, which will be offering deals of up to 85 per cent off certain items for three weeks starting July 1, including robot vacuums, stand mixers, smart home gear and more.


From July 1 to 7 this will take the form of “July 4th Savings”, followed by “Hot Deals for Hot Days” from July 8 to 22, which will include daily deals on tech, appliances and smart home devices.

“eBay is primed to deliver exactly what shoppers want during this year’s crash (sale),” said Jay Hanson, vice president of eBay Americas.

“July has become a massive shopping season, and our summer sales include blockbuster deals that will not disappoint.”


“Amazon will drive the convergence of retail and advertising”

Guillaume Planet, VP of media & digital marketing at Groupe SEB and a former media agency executive (Havas, Fullsix, Dentsu Aegis…), shares his analysis of the upheaval in the advertising sector caused by the expansion of Amazon’s advertising business.

 

By disclosing, in its successive financial communications since early 2018, advertising revenue figures that are growing very strongly, not to say stunningly (roughly $4.2 billion over the first six months of the year), Amazon has made official its arrival among the major players in the advertising market. The research firm eMarketer even forecasts that the group will become the third-largest online advertising player in the United States as early as 2018, ahead of Microsoft and Oath.

source: https://www.mindnews.fr/article/13318/amazon-va-favoriser-le-rapprochement-du-retail-et-de-la-publicite/

And this is only the beginning, because the bridges between its online retail business and its advertising business offer it enormous prospects.

Looking specifically at the retail sector, it is reasonable to expect Amazon’s moves to inspire imitators among other players, so clever is this evolution of the model. It rests on several levers:

1 – A competitive advantage

Amazon holds a double competitive advantage over other sellers of advertising space: massive ownership of transactional data, combined with being a customer, not a supplier, of the brands that buy that advertising space.

2 – A virtuous purchase-data-advertising cycle

Amazon benefits from a virtuous cycle: advertising spending funded by brands drives highly qualified traffic thanks to data; that traffic feeds the retailer’s core sales business, which in turn generates even more data to fuel the advertising arm.

3 – A margin lever

Above all, the advertising business, especially given the assets described above, offers Amazon the prospect of high profitability, whereas the retail trading business is inherently low-margin.

 

What impact on the advertising market?

 

Amazon’s growth has a threefold impact on the advertising sector:

1 – The other platforms are following the same path

The other emerging players will have to move closer to the retail world. Google has made this a priority, as shown by its recent partnerships with Walmart and Carrefour and its investment in JD.com. Tencent is doing the same, and Facebook is very likely looking at it too, judging by the marketplace currently being tested on its platform.

The opportunities for these players are indeed large: retail yields highly relevant data to feed the effectiveness of the solutions they offer. It also opens access to other types of brand marketing budgets: the famous “BTL” budgets dedicated to points of sale, which are often larger than advertising budgets.

2 – Opportunities for new players

These market developments in Amazon’s wake create opportunities for a new kind of player: pure players in retail advertising, such as Criteo.

3 – Traditional media pushed further to the margins

Above all, Amazon pushes the historical sellers of advertising space, by which I mean the traditional media, a little further toward a marginal role in this business model. Already shaken by Google and Facebook, they face a growing number of competitors that are better armed to profit from the transformation of the advertising sector: rich in highly relevant data, mature in digital and data expertise, owners of sophisticated technical infrastructure, and financially very powerful.

Faced with this new reality, the media are increasingly tackling the issue the right way. After a period of denial and of demonizing the GAFA companies, they are now increasingly exploring new business models and rethinking their relationship with Google and Facebook, which should be treated as partners that help them engage their audiences as effectively as possible.

 

How are the major retailers reacting?

 

Retailers are reacting in different ways. Surprisingly, they tend to ally themselves immediately with their new competitors: Walmart and Carrefour are partnering with Google, Carrefour with Tencent, Auchan with Alibaba, Monoprix with Amazon… Their goal is to learn through these partnerships, but the risks are obviously significant.

What are those risks? That Monoprix, by joining Amazon’s marketplace, loses access to data, the engine of retail’s new virtuous model. That Carrefour and Walmart give Google, a potential future competitor at least on the advertising side, the opportunity to build its experience curve in the retail world. And that Auchan risks handing Alibaba the keys to understanding new target markets.

Partnering with players that are mature in digital and data but less threatening (Criteo, for example, on the advertising side) would probably be a less risky way to learn the new rules of this sector.

Power of recommendation: More than 50% of Amazon shoppers aren’t willing to go beyond the second page when searching for a product

BY DEENA M. AMATO-MCCOY

Customers continue to visit Amazon to discover new products or brands, yet their decisions are seldom influenced by digital ads seen on the site.

source: https://www.chainstoreage.com/technology/study-majority-amazon-shoppers-not-influenced-digital-ads/

When customers visit Amazon, 65% said they don’t even notice the ads featured, while 25% find them “useful or relevant,” according to the “2018 Amazon Shopper Behavior Study: How Shoppers will Browse and Buy on Amazon,” a report from CPC Strategy.

According to the data, Amazon continues to improve its native advertising experience for shoppers, a move that ensures the company is helping consumers to find the right product, for the right price, at the right time. It also means additional digital ads are not paramount to driving sales.

For example, more than 50% of Amazon shoppers aren’t willing to go beyond the second page when searching for a product. Meanwhile, they are more open to trying new products, as 80% are open to “occasionally” or “frequently” trying new products or brands on Amazon. This is a huge jump from 50% last year. And customer reviews are not spurring this curiosity, as approximately 80% of these customers don’t entirely trust Amazon’s customer reviews.

Despite Amazon being their “go-to” shopping source, however, 74.8% of Amazon shoppers still price check on other sites.

When customers are ready to make a purchase with Amazon, more shoppers are open to using voice-enabled devices. In fact, 14.2% of Amazon customers made a purchase via a voice-enabled device in the last six months, and 61.3% of voice-enabled device owners have an Amazon Dot or Echo, the study said.

“We expected that some Amazon shoppers owned Amazon’s voice enabled devices, and had made purchases using Alexa, but we weren’t prepared to see numbers like this so early into the game,” said Nii Ahene, COO and cofounder of CPC Strategy. “The battle for ultimate marketplace dominance isn’t over, but Amazon is off to an early lead.”

Retail Revolution: Inside Amazon Go, a Store of the Future

The technology inside Amazon’s new convenience store, opening Monday in downtown Seattle, enables a shopping experience like no other — including no checkout lines.

Source: https://www.nytimes.com/2018/01/21/technology/inside-amazon-go-a-store-of-the-future.html?partner=IFTTT
[Image: A row of gates guards the entrance to Amazon Go. Credit: Kyle Johnson for The New York Times]

SEATTLE — The first clue that there’s something unusual about Amazon’s store of the future hits you right at the front door. It feels as if you are entering a subway station. A row of gates guards the entrance to the store, known as Amazon Go, allowing in only people with the store’s smartphone app.

Inside is an 1,800-square-foot mini-market packed with shelves of food that you can find in a lot of other convenience stores — soda, potato chips, ketchup. It also has some food usually found at Whole Foods, the supermarket chain that Amazon owns.

But the technology that is also inside, mostly tucked away out of sight, enables a shopping experience like no other. There are no cashiers or registers anywhere. Shoppers leave the store through those same gates, without pausing to pull out a credit card. Their Amazon account automatically gets charged for what they take out the door.

On Monday, the store will open to the public for the first time. Gianna Puerini, the executive in charge of Amazon Go, recently gave tours of the store, in downtown Seattle. This is a look at what shoppers will encounter.

[Image: There is no need for a shopping cart. Products can go straight into a shopping bag. Credit: Kyle Johnson for The New York Times]

There are no shopping carts or baskets inside Amazon Go. Since the checkout process is automated, what would be the point of them anyway? Instead, customers put items directly into the shopping bag they’ll walk out with.

Every time customers grab an item off a shelf, Amazon says the product is automatically put into the shopping cart of their online account. If customers put the item back on the shelf, Amazon removes it from their virtual basket.
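The mechanics described above amount to a simple state machine: picking an item up adds it to a virtual basket, putting it back removes it, and walking out charges the linked account. Below is a toy sketch of that behavior; the class, item names and prices are illustrative assumptions, since Amazon has not published how Go’s tracking actually works.

```python
# Toy model of the virtual-cart behavior described above (illustrative only).
class VirtualCart:
    def __init__(self):
        self.items = {}  # item name -> quantity currently held by the shopper

    def pick_up(self, item):
        # Grabbing an item off a shelf adds it to the virtual basket.
        self.items[item] = self.items.get(item, 0) + 1

    def put_back(self, item):
        # Returning an item to the shelf removes it again.
        if self.items.get(item, 0) > 0:
            self.items[item] -= 1
            if self.items[item] == 0:
                del self.items[item]

    def walk_out(self, prices):
        # On exit, the linked account is charged for whatever left the store.
        return sum(prices[item] * qty for item, qty in self.items.items())


cart = VirtualCart()
cart.pick_up("vanilla soda 4-pack")
cart.pick_up("trail mix")
cart.put_back("trail mix")
print(cart.walk_out({"vanilla soda 4-pack": 4.35, "trail mix": 6.99}))  # 4.35
```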


The only sign of the technology that makes this possible floats above the store shelves — arrays of small cameras, hundreds of them throughout the store. Amazon won’t say much about how the system works, other than to say it involves sophisticated computer vision and machine learning software. Translation: Amazon’s technology can see and identify every item in the store, without attaching a special chip to every can of soup and bag of trail mix.


There were a little over 3.5 million cashiers in the United States in 2016 — and some of their jobs may be in jeopardy if the technology behind Amazon Go eventually spreads. For now, Amazon says its technology simply changes the role of employees — the same way it describes the impact of automation on its warehouse workers.

“We’ve just put associates on different kinds of tasks where we think it adds to the customer experience,” Ms. Puerini said.

Those tasks include restocking shelves and helping customers troubleshoot any technical problems. Store employees mill about ready to help customers find items, and there is a kitchen next door with chefs preparing meals for sale in the store. Because there are no cashiers, an employee sits in the wine and beer section of the store, checking I.D.s before customers can take alcohol off the shelves.


Most people who spend any time in a supermarket understand how vexing the checkout process can be, with clogged lines for cashiers and customers who fumble with self-checkout kiosks.

At Amazon Go, checking out feels like — there’s no other way to put it — shoplifting. It is only a few minutes after walking out of the store, when Amazon sends an electronic receipt for purchases, that the feeling goes away.

Actual shoplifting is not easy at Amazon Go. With permission from Amazon, I tried to trick the store’s camera system by wrapping a shopping bag around a $4.35 four-pack of vanilla soda while it was still on a shelf, tucking it under my arm and walking out of the store. Amazon charged me for it.


A big unanswered question is where Amazon plans to take the technology. It won’t say whether it plans to open more Amazon Go stores, or leave this as a one-of-a-kind novelty. A more intriguing possibility is that it could use the technology inside Whole Foods stores, though Ms. Puerini said Amazon has “no plans” to do so.

There’s even speculation that Amazon could sell the system to other retailers, much as it sells its cloud computing services to other companies. For now, visitors to Amazon Go may want to watch their purchases: Without a register staring them in the face at checkout, it’s easy to overspend.



The 10 tech giants that have invested the most money in AI. Google is the biggest investor by billions.

  • Google has invested the most in artificial intelligence (AI) out of the tech giants, according to research from RS Components.
  • Since the first acquisition in 1998, tech giants have spent nearly $8.6 billion on AI startups.

Google has invested the most money in artificial intelligence (AI), according to research from RS Components. Tech giants have disclosed nearly $8.6 billion in acquisitions since 1998.

The company has spent nearly $3.9 billion in disclosed deals since 2006, with the bulk of that spent on its 2014 acquisition of Nest Labs for $3.2 billion. The Nest Labs purchase was the single largest disclosed investment on RS Components’ list, which includes 103 startup purchases across 15 tech giants.

Here are the top 10 tech companies based on how much they’ve spent acquiring AI startups where the price was disclosed.

1. Google – $3.9 billion

2. Amazon – $871 million

3. Apple – $786 million

4. Intel – $776 million

5. Microsoft – $690 million

6. Uber – $680 million

7. Twitter – $629 million

8. AOL – $191.7 million

9. Facebook – $60 million

10. Salesforce – $32.8 million

Google continued its domination in total number of acquired startups, investing in 29 since its first, Neven Vision, in 2006. Apple grabbed second with 14, and Microsoft was third with nine.

Microsoft was the first to invest in AI, spending $40 million on Firefly Network in 1998. Google was next to invest, but didn’t do so for another eight years.

Here are the eight single biggest disclosed investments in AI startups to date.

1. Nest Labs – $3.2 billion

2. Kiva Systems – $775 million

3. Otto – $680 million

4. DeepMind – $500 million

5. TellApart – $479 million

6. Movidius – $400 million

7. Nervana – $350 million

8. SwiftKey – $250 million

The pace and price of startup acquisitions are unlikely to drop as AI continues to grow as a technology.

Google launches a new directory to help you discover Assistant actions (Source: TechCrunch)


source: https://techcrunch.com/2018/01/08/google-launches-a-new-directory-to-help-you-find-assistant-action/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook

Google says you can now perform more than a million actions with the Google Assistant. Those range from looking up photos with Google Photos to starting a meditation session from Headspace. But one problem with voice assistants is that it’s very hard to discover which actions you actually can perform. For many users, that means they use their Google Home or Alexa to set a few timers and maybe play music, without ever realizing what else they can do.

To help its users a bit, Google is launching a new directory page for the Google Assistant today. This is part of a slew of Assistant-related announcements at CES today; while it’s probably not the most important (those smart displays sure look nice, after all), it’s nevertheless a useful new tool, especially for new users.

It’s been almost exactly a year since Google enabled third-party actions, and while Google can’t boast the same level of third-party support as Amazon, there’s clearly a lot of developer interest in building these actions. And to make talking about them a bit easier, Google is also now calling its first-party actions… wait for it… “actions.”


Why AI Is the ‘New Electricity’ (Source: Wharton)

Source: http://knowledge.wharton.upenn.edu/article/ai-new-electricity/

Just as electricity transformed the way industries functioned in the past century, artificial intelligence — the science of programming cognitive abilities into machines — has the power to substantially change society in the next 100 years. AI is being harnessed to enable such things as home robots, robo-taxis and mental health chatbots to make you feel better.

A startup is developing robots with AI that brings them closer to human level intelligence. Already, AI has been embedding itself in daily life — such as powering the brains of digital assistants Siri and Alexa. It lets consumers shop and search online more accurately and efficiently, among other tasks that people take for granted.

“AI is the new electricity,” said Andrew Ng, co-founder of Coursera and an adjunct Stanford professor who founded the Google Brain Deep Learning Project, in a keynote speech at the AI Frontiers conference held this past weekend in Silicon Valley. “About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform” every major sector in coming years. And even though there’s a perception that AI is a fairly new development, it has actually been around for decades, he said. But it is taking off now because of the ability to scale data and computation.

Ng said most of the value created through AI today has been through supervised learning, in which an input of X leads to Y. But there have been two major waves of progress: One wave leverages deep learning to enable such things as predicting whether a consumer will click on an online ad after the algorithm gets some information about him. The second wave came when the output no longer has to be a number or integer but things like speech recognition, a sentence structure in another language or audio. For example, in self-driving cars, the input of an image can lead to an output of the positions of other cars on the road.
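For readers unfamiliar with the “X leads to Y” framing, here is a minimal sketch of supervised learning applied to the ad-click example Ng mentions. The synthetic data, feature names and use of scikit-learn are my own illustrative assumptions, not anything from the talk.

```python
# Minimal supervised-learning sketch: learn a mapping from features X to label y (clicked or not).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # hypothetical features, e.g. [ad relevance, past CTR, time on site]
y = (X @ np.array([1.5, 0.8, 0.3]) + rng.normal(size=1000) > 0).astype(int)  # 1 = clicked

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # fit the X -> Y mapping
print("P(click) for one new user/ad pair:", model.predict_proba(X_test[:1])[0, 1])
```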

Indeed, deep learning — where a computer learns from datasets to perform functions, instead of just executing specific tasks it was programmed to do — was instrumental in achieving human parity in speech recognition, said Xuedong Huang, who led the team at Microsoft on the historic achievement in 2016 when their system booked a 5.9% error rate, the same as a human transcriptionist. “Thanks to deep learning, we were able to reach human parity after 20 years,” he said at the conference. The team has since lowered the error rate even more, to 5.1%.
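The 5.9% figure is a word error rate, conventionally computed as the word-level edit distance (substitutions, insertions and deletions) between the system transcript and a reference transcript, divided by the number of reference words. The sketch below shows that calculation; it is my own illustration, not Microsoft’s code.

```python
# Word error rate (WER) via word-level edit distance.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("thanks to deep learning we reached parity",
          "thanks to deep learning we reach parity"))  # 1 error / 7 words ~= 0.14
```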

“We have cheap motors, sensors, batteries, plastics and processors … why don’t we have Rosie?”–Dileep George

The Rise of Digital Assistants

Starting in 2010, the quality of speech recognition began to improve for the industry, eventually leading to the creation of Siri and Alexa. “Now, you almost take it for granted,” Ng said. That’s not all; speech is expected to replace touch-typing for input, said Ruhi Sarikaya, director of Amazon Alexa. The key to greater accuracy is to understand the context. For example, if a person asks Alexa what he should do for dinner, the digital assistant has to assess his intent. Is he asking Alexa to make a restaurant reservation, order food or find a recipe? If he asks Alexa to find ‘Hunger Games,’ does he want the music, video or audiobook?
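Intent disambiguation of this kind is, at its core, a text-classification problem: map an utterance to one of a few intents such as making a reservation, ordering food or finding a recipe. The toy sketch below illustrates the idea with a handful of made-up training phrases and a bag-of-words classifier; it is a simplification under those assumptions, not how Alexa actually works.

```python
# Toy intent classifier: utterance -> intent label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [
    ("book a table for two tonight", "restaurant_reservation"),
    ("reserve a spot at an italian place", "restaurant_reservation"),
    ("order a pizza for delivery", "order_food"),
    ("get me some sushi delivered", "order_food"),
    ("how do I cook chicken curry", "find_recipe"),
    ("show me a recipe for pasta", "find_recipe"),
]
texts, intents = zip(*examples)
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, intents)
print(clf.predict(["order some sushi for dinner"]))  # likely 'order_food' given the training phrases
```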

And what’s next for the digital assistant is an even more advanced undertaking — to understand “meaning beyond words,” said Dilek Hakkani-Tur, research scientist at Google. For example, if the user uses the words “later today,” it could mean 7 p.m. to 9 p.m. for dinner or 3 p.m. to 5 p.m. for meetings. This next level up also calls for more complex and lively conversations, multi-domain tasks and interactions beyond domain boundaries, she said. Moreover, Hakkani-Tur said, digital assistants should be able to do things such as easily read and summarize emails.

After speech, ‘computer vision’ — or the ability of computers to recognize images and categorize them — was the next to leap, speakers said. With many people uploading images and video, it became cumbersome to add metadata to all content as a way to categorize them. Facebook built an AI platform called Lumos to understand and categorize videos at scale, said Manohar Paluri, a research lead at the company. Facebook uses Lumos to collect data on, for example, fireworks images and videos. The platform can also use people’s poses to identify a video, such as categorizing people lounging around on couches as hanging out.

“Her job is to bring a spot of life to your home. She provides entertainment — she can play music, podcasts, audiobooks.”–Kaijen Hsiao

What’s critical is to ascertain the primary semantic content of the uploaded video, added Rahul Sukthankar, head of video understanding at Google. And to help the computer correctly identify what’s in the video — for example, whether professionals or amateurs are dancing — his team mines YouTube for similar content that AI can learn from, such as having a certain frame rate for non-professional content. Sukthankar adds that a promising direction for future research is to do computer training using videos. So if a robot is shown a video of a person pouring cereal into a bowl at multiple angles, it should learn by watching.

At Alibaba, AI is used to boost sales. For example, shoppers on its Taobao e-commerce site can upload a picture of a product they would like to buy, like a trendy handbag sported by a stranger on the street, and the website will come up with handbags for sale that come closest to the photo. Alibaba also uses augmented reality/virtual reality to let people see and shop from stores like Costco. On its Youku video site, which is similar to YouTube, Alibaba is working on a way to insert virtual 3D objects into people’s uploaded videos, as a way to increase revenue. That’s because many video sites struggle with profitability. “YouTube still loses money,” said Xiaofeng Ren, a chief scientist at Alibaba.
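Photo-based product search of this kind is typically built as nearest-neighbor retrieval over image embeddings: embed the query photo and every catalog image into vectors, then rank catalog items by similarity. The sketch below shows only that retrieval step; the embed() function and random vectors are placeholders for a real vision model, and nothing here reflects Taobao’s actual implementation.

```python
# Nearest-neighbor retrieval over image embeddings (placeholder embedding model).
import numpy as np

rng = np.random.default_rng(1)

def embed(image):
    # Placeholder: a real system would run an image-embedding model (e.g. a CNN) here.
    return rng.normal(size=128)

catalog = {f"handbag_{i}": embed(None) for i in range(100)}  # pre-embedded catalog items

def search(query_image, top_k=3):
    q = embed(query_image)
    def cosine(v):
        return float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
    # Return the catalog items whose embeddings are most similar to the query photo.
    return sorted(catalog, key=lambda name: cosine(catalog[name]), reverse=True)[:top_k]

print(search(query_image=None))  # three closest catalog items to the uploaded photo
```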

Rosie and the Home Robot

But with all the advances in AI, it’s still no match for the human brain. Vicarious is a startup that aims to close the gap by developing human level intelligence in robots. Co-founder Dileep George said that the components are there for smarter robots.  “We have cheap motors, sensors, batteries, plastics and processors … why don’t we have Rosie?” He was referring to the multipurpose robot maid in the 1960s space-age cartoon The Jetsons. George said the current level of AI is like what he calls the “old brain,” similar to the cognitive ability of rats. The “new brain” is more developed such as what’s seen in primates and whales.

George said the “old brain” AI gets confused when small inputs are changed. For example, a robot that can play a video game goes awry when the colors are made just 2% brighter. “AI today is not ready,” he said. Vicarious uses deep learning to get the robot closer to human cognitive ability. In the same test, a robot with Vicarious’s AI kept playing the game even though the brightness had changed. Another thing that confuses “old brain” AI is putting two objects together. People can see two things superimposed on each other, such as a coffee mug partly obscuring a vase in a photo, but robots mistake it for one unidentified object. Vicarious, which counts Facebook CEO Mark Zuckerberg as an investor, aims to solve such problems.

The intelligence inside Kuri, a robot companion and videographer meant for the home, is different. Kaijen Hsiao, chief technology officer of creator Mayfield Robotics, said there is a camera behind the robot’s left eye that gathers video in HD. Kuri has depth sensors to map the home and uses images to improve navigation. She also has pet and person detection features so she can smile or react when they are around. Kuri has place recognition as well, so she will remember she has been to a place before even if the lighting has changed, such as the kitchen during the day or night. Moment selection is another feature of the robot, which lets her recognize similar videos she records — such as dad playing with the baby in the living room — and eliminates redundant ones.

“Her job is to bring a spot of life to your home. She provides entertainment — she can play music, podcasts, audiobooks. You can check your home from anywhere,” Hsiao said. Kuri is the family’s videographer, going around the house recording so no one is left out. The robot will curate the videos and show the best ones. For this, Kuri uses vision and deep learning algorithms. “Her point is her personality … [as] an adorable companion,” Hsiao said. Kuri will hit the market in December at $799.

“About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform” every major sector in coming years.–Andrew Ng

Business Response to AI

The U.S. and China lead the world in investments in AI, according to James Manyika, chairman and director of the McKinsey Global Institute. Last year, AI investment in North America ranged from $15 billion to $23 billion, Asia (mainly China) was $8 billion to $12 billion, and Europe lagged at $3 billion to $4 billion. Tech giants are the primary investors in AI, pouring in between $20 billion and $30 billion, with another $6 billion to $9 billion from others, such as venture capitalists and private equity firms.

Where did they put their money? Machine learning took 56% of the investments, with computer vision second at 28%. Natural language garnered 7%, autonomous vehicles were at 6% and virtual assistants made up the rest. But despite the level of investment, actual business adoption of AI remains limited, even among firms that know its capabilities, Manyika said. Around 40% of firms are thinking about it, 40% experiment with it and only 20% actually adopt AI in a few areas.

The reason for such reticence is that 41% of companies surveyed are not convinced they can see a return on their investment, 30% said the business case isn’t quite there and the rest said they don’t have the skills to handle AI. However, McKinsey believes that AI can more than double the impact of other analytics and has the potential to materially raise corporate performance.

There are companies that get it. Among sectors leading in AI are telecom and tech companies, financial institutions and automakers. Manyika said these early adopters tend to be larger and digitally mature companies that incorporate AI into core activities, focus on growth and innovation over cost savings and enjoy the support of C-suite level executives. The slowest adopters are companies in health care, travel, professional services, education and construction. However, as AI becomes widespread, it’s a matter of time before firms get on board, experts said.