WASHINGTON, Aug 11 (Reuters) - Many workers across the U.S. are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use.
Companies worldwide are considering how to best make use of ChatGPT, a chatbot programme that uses generative AI to hold conversations with users and answer myriad prompts. Security firms and companies have raised concerns, however, that it could result in intellectual property and strategy leaks.
Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarising documents and doing preliminary research.
Some 28% of respondents to the online poll on artificial intelligence (AI), conducted between July 11 and 17, said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
Some 10% of those polled said their bosses explicitly banned external AI tools, while about 25% did not know if their company permitted use of the technology.
ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.
Human reviewers from other companies may read any of the generated chats, and researchers have found that similar artificial intelligence models can reproduce data absorbed during training, creating a potential risk for proprietary information.
“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta (OKTA.O).
“For businesses this is critical, because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have run the risk through their usual assessment process,” King said.
OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.
BLANKET BANS
Some companies told Reuters they are embracing ChatGPT and similar platforms, while keeping security in mind.
“We’ve started testing and learning about how AI can enhance operational effectiveness,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within its firewall.
“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.
Tate & Lyle (TATE.L) Chief Financial Officer Dawn Allen, meanwhile, told Reuters that the global ingredients maker was trialing ChatGPT, having “found a way to use it in a safe way”.
“We’ve got different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”
Some employees say they cannot access the platform on their company computers at all.
“It’s completely banned on the office network, like it doesn’t work,” said a Procter & Gamble (PG.N) employee, who wished to remain anonymous because they were not authorized to speak to the press.
P&G declined to comment. Reuters was not able to independently confirm whether employees at P&G were unable to use ChatGPT.
Paul Lewis, chief information security officer at cyber security firm Nominet, said firms were right to be wary.
“Everybody gets the benefit of that increased capability, but the information isn’t completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to disclose information.
“A blanket ban isn’t warranted yet, but we need to tread carefully,” Lewis said.