Three days after Amazon announced its Q AI chatbot, some employees are sounding the alarm over accuracy and privacy concerns. Q “experiences severe hallucinations and leaks confidential data,” including the location of AWS data centers, internal discount programs and unreleased features, according to leaked documents obtained by Platformer.
One employee flagged the incident as “sev 2,” meaning one serious enough to warrant paging engineers at night and requiring them to work through the weekend to resolve it.
Q’s early woes come as Amazon works to counter the perception that Microsoft, Google and other tech companies have outpaced it in the race to build tools and infrastructure that take advantage of generative artificial intelligence. In September, the company announced it would invest up to $4 billion in the AI startup Anthropic. On Tuesday, at its annual Amazon Web Services developer conference, the company announced Q, arguably the most high-profile release among the new AI initiatives it unveiled this week.
In a statement, Amazon downplayed the significance of the employee discussions.
“Some employees share feedback through internal channels and ticketing systems, which is standard practice at Amazon,” a spokesperson said. “No security issues have been identified as a result of these comments. We appreciate all the feedback we’ve already received and will continue to refine Q as it moves from preview to general availability.”
After publication, the spokesperson sent another statement pushing back on the employees’ claims: “Amazon Q did not disclose confidential information.”
Q, which is now available in a free preview, was presented as a sort of enterprise-software version of ChatGPT. Initially, it will be able to answer developers’ questions about AWS, edit source code and cite sources, Amazon executives said on stage this week. It will compete with similar tools from Microsoft and Google, but it will be priced lower than its competitors, at least to start.
In unveiling Q, executives presented it as more secure than mainstream tools like ChatGPT.
Adam Selipsky, CEO of Amazon Web Services, told The New York Times that companies “had banned these AI assistants from the company due to security and privacy concerns.” In response, the Times reported, “Amazon built Q to be more secure and private than a consumer chatbot.”
An internal document on Q’s hallucinations and incorrect responses states that “Amazon Q may hallucinate and return harmful or inappropriate responses. For example, Amazon Q may return outdated security information that could put customer accounts at risk.” The risks described in the document are typical of large language models, all of which return incorrect or inappropriate responses at least some of the time.