Google said its Bard chatbot could summarize files from Gmail and Google Docs, but users showed it fabricating emails that were never sent. OpenAI announced its new Dall-E 3 image generator, but people on social media were quick to point out that the official demo footage was missing some requested details. And Amazon announced a new conversational mode for Alexa, but the device repeatedly got things wrong in a demo for The Washington Post, notably recommending a museum in the wrong part of the country.
Driven by a hypercompetitive race to dominate revolutionary “generative” AI technology capable of writing human-like text and producing lifelike images, tech giants are accelerating the rollout of their products to consumers. Getting more people to use them generates the data needed to improve them, which incentivizes making these tools available to as many people as possible. But many experts – and even technology executives themselves – have warned of the dangers of rushing out largely new and untested technologies.
“There’s a real sense of FOMO among big tech companies that want to do AI, and they don’t want to miss out on capturing an early audience,” said Steve Teixeira, Mozilla’s chief product officer and a former executive at Microsoft, Facebook and Twitter. “Everyone knows that these systems are not perfect.”
The companies say they have made it clear that their AI is a work in progress and have taken care to install guardrails to prevent the technology from making offensive or biased claims. Some executives, like Sam Altman, CEO of OpenAI, have argued it’s best to get people using AI tools now to see what kinds of risks they carry before the technology becomes more powerful.
The companies declined requests for additional comment.
But this rapid and imperfect rollout is at odds with months of warnings, including a one-sentence statement signed by hundreds of experts saying AI poses risks to humanity comparable to nuclear weapons and pandemics – and that companies should be more careful. Concerns range from the short term, such as AI infusing more gender and racial bias into important technologies, to longer-term fears of a sci-fi future in which AI surpasses human intelligence and begins to act autonomously.
Regulators have already taken note. Congress has held numerous hearings on AI, and multiple bills have been proposed, although little concrete action has been taken against the companies. Last week, executives including Tesla CEO Elon Musk and Meta CEO Mark Zuckerberg gathered to answer questions from legislators, who announced their intention to draft legislation to regulate the technology.
European Union lawmakers are moving forward with regulation that would ban certain uses of AI, such as predicting criminal behavior, and create strict rules for the rest of the industry. In the UK, the government is planning a major summit in November for AI and government leaders to discuss global cooperation.
But regulation must be balanced by giving companies space to invent beneficial technologies, British Finance Minister Jeremy Hunt said in an interview this week during a visit to San Francisco.
“Competitive tension drives most technological advances,” he said. “We need to be very smart about how we build a regulatory environment that allows for innovation while ensuring there are enough safeguards.”
(Amazon founder Jeff Bezos owns The Post. Interim CEO Patty Stonesifer serves on Amazon’s board of directors.)
Typically, companies from Apple to Microsoft use this season to unveil new devices just in time for the holiday shopping rush. This year, the focus is on AI.
Many people experienced the latest generation of AI firsthand when OpenAI launched ChatGPT last November. The technology behind chatbots and image generators is trained on billions of lines of text or images scraped from the open internet. ChatGPT’s ability to answer complex questions, pass professional exams, and have human conversations has sparked renewed interest, and top companies have been rushing to respond.
But there are still problems. Chatbots routinely make up false information and pass it off as real, a problem AI experts call “hallucination.” Image generators are improving rapidly, but there is still no consensus on how to prevent them from being used to create propaganda and disinformation, especially as the United States heads toward the 2024 elections.
Large quantities of copyrighted material were also used to train the bots, prompting a wave of lawsuits accusing tech companies of theft and raising questions about the legal underpinnings of generative AI. Just this week, some of the world’s best-known novelists banded together to sue OpenAI for using their work to train its AI tools.
Microsoft, a leader in the AI race, launched the first version of its Bing chatbot in February, touting it as a potential replacement for search engines because it could answer conversational questions on almost any topic. Almost immediately, the bot went off the rails, lashing out at users, telling people it could think and feel, and referring to itself by the alter ego “Sydney.” The company quickly dialed back the bot’s creativity, making it act more reserved, and limited the number of questions people could ask in a single session, since longer conversations, Microsoft said, allowed users to steer the bot in strange directions.
On Thursday, Microsoft outlined plans to put its AI “copilots,” which help users complete tasks in Microsoft Word and Excel, front and center in much of its software. Starting next week, computers running the latest Windows software will add a highly visible icon allowing users to ask Microsoft’s AI for help with tasks like troubleshooting an audio issue on their computer or summarizing a long article in Microsoft’s Edge web browser.
At the event, Microsoft CEO Satya Nadella compared the past 10 months since ChatGPT exploded into the public consciousness to previous technological revolutions, including the invention of personal computers, the Internet and smartphones.
“It’s a bit like the ’90s are back. It’s exciting to be in a place where we’re driving software innovation,” he said.
Meanwhile, OpenAI – which provides the foundation for much of this technology – has launched the latest version of its image generator, Dall-E 3. Instead of forcing users to become experts at writing complex prompts to get detailed images, the company integrated its chatbot technology into Dall-E 3, enabling it to better understand common conversational language and deliver what people ask for.
But in a live demo for The Post, one frame showed an image of two young people doing business in a steampunk-style town, generated from prompts specifying that one of the characters should be a grumpy old man. Another image posted on OpenAI’s Twitter account came from a prompt asking Dall-E to show potato kings sitting on potato thrones. It showed tiny smiling potatoes wearing crowns – but no thrones.
If consumers start using AI tools that don’t work, they could be turned off from the field altogether, said Jim Hare, vice president at technology research firm Gartner. “This could backfire.”
Google, which adopted a new slogan, “bold and responsible,” to represent its approach to AI, has integrated its Bard chatbot with a handful of its other major products, including Gmail, Google Docs, Google Flights and YouTube. Now users can ask the bot to search through their emails, summarize them, and extract the most important points.
But the tool makes a lot of mistakes, including inventing emails that didn’t exist and suggesting random marketing emails when asked for a summary of urgent and important messages, according to a Post analysis.
Jack Krawczyk, product manager for Bard, said the technology is still in its early stages and has seen major improvements in the six months since its launch. Google still places a label reading “Experiment” at the top of Bard’s home page and warns users that it might make mistakes.
Tech companies’ launch-first, fix-later approach carries real risks, said Teixeira, the Mozilla executive. Chatbots typically present their information in an authoritative style, making it harder for people to know that what they are being told might be wrong. And companies aren’t open enough about how they use the data people enter when interacting with the bots.
“There’s definitely not a sufficient level of transparency to tell me what’s going on with my stuff,” he said.
Amazon’s launch of its generative AI conversational chatbot feature for its Alexa home speakers came this week, months after its competitors’. Dave Limp, Amazon’s senior vice president of devices and services, said the new technology makes it possible to have a “near-human” conversation with Alexa.
But the company wouldn’t let journalists try it themselves, and during an on-stage demonstration, the chatbot punctuated its conversation with Limp with a few long, awkward pauses.
“It’s not the end game,” Limp said in an interview. The bot will continue to improve over time, he said.
Nonetheless, Amazon aims to make a version of chat mode available to all Amazon speaker users this year. In the United States, more than 70 million people use Alexa every month, according to Insider Intelligence.
Shira Ovide, Geoffrey A. Fowler, Nitasha Tiku and Christina Passariello contributed to this report.