In November 2014, Amazon introduced its first voice virtual assistant, Alexa, to the world. The technology's name is said to have been inspired by the computer aboard the Starship Enterprise in Star Trek, reflecting CEO Jeff Bezos' ambition to create a conversational, intelligent assistant. However, a Fortune report claims that despite last year's tech demo showing a contextually aware Alexa, the assistant is nowhere near the artificial intelligence (AI) integration needed to make it smarter. A former Amazon employee who worked on Alexa AI also pointed to knowledge silos and a fragmented organizational structure as detrimental to Alexa's progress.
Former Amazon employee points out problems with improving Alexa
In a long post on X (formerly known as Twitter), Mihail Eric, who worked as a senior machine learning scientist on Alexa AI between 2019 and 2021, shared his experience at the company and the challenges he faced. He also explained why he believed the Alexa project was doomed.
Highlighting a “poor technical process” at the company, Eric said Amazon had a very fragmented organizational structure, which made it difficult to obtain the data needed to train large language models (LLMs). “It would take weeks to get access to internal data for analysis or experiments. The data was poorly labeled. The documentation either did not exist or was outdated,” he added.
He also said that different teams were working on identical topics, which created an unproductive atmosphere of internal competition. Furthermore, he found that managers were not interested in cooperating on projects that did not reward them.
In the post, Eric shared several instances where organizational structure and policies stood in the way of the development of “Amazon ChatGPT (well before ChatGPT was released).”
Amazon employees reportedly highlight Alexa’s problems
Fortune published a lengthy report quoting more than a dozen unnamed Amazon employees on the problems the company faces in integrating AI capabilities into the virtual assistant. One particular problem that surfaced was that Alexa's existing architecture makes it difficult to integrate a modern technology stack.
Supposedly, Alexa is trained to respond in “utterances,” which essentially means it’s built to respond to a user’s command and announce that it’s performing the requested command (or that it can’t understand the user). As a result, Alexa wasn’t programmed to talk back and forth.
The publication quoted a former Amazon machine learning scientist, who explained that this model also trained users to communicate with the virtual assistant in the most efficient way possible: a brief instruction to take an action. This created another problem. Despite hundreds of millions of users actively talking to Alexa every day, the resulting data is suited to speech training, not conversations, which reportedly left a huge data gap in the organization.
Furthermore, the report claims that Alexa is a cost center for Amazon, with the company losing billions every year because the technology cannot yet be monetized. Meanwhile, Amazon Web Services (AWS) has an AI assistant called Amazon Q that is offered to certain businesses as an add-on and generates money. Over the years, the Amazon Q division has seen more investment and even integration with Anthropic’s Claude AI model. However, Alexa’s AI team was not given access to Claude due to data privacy concerns.
When Fortune reached out to Amazon, a spokesperson reportedly denied the claims, saying the details provided by employees were dated and did not reflect the current state of the company's LLM development. While that might be true, the more conversational Alexa shown at a tech demo last year has yet to be released to the public.