A response from ChatGPT, an AI chatbot developed by OpenAI, can be seen on its website in this illustration photo taken February 9, 2023. REUTERS/Florence Lo/Illustration/File Photo
LONDON/WASHINGTON, Aug 11 (Reuters) – Many workers across the United States are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos survey found, despite fears that have prompted employers such as Microsoft and Google to restrict its use.
Businesses around the world are considering how best to use ChatGPT, a chatbot program that uses generative AI to hold conversations with users and respond to countless prompts. However, security firms and companies have raised concerns that it could result in leaks of intellectual property and strategy.
Anecdotal examples of people using ChatGPT to help with their day-to-day work include writing emails, summarizing documents, and doing preliminary research.
About 28% of respondents to the online poll on artificial intelligence (AI), conducted July 11-17, said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
About 10% of respondents said their managers explicitly banned external AI tools, while about 25% did not know if their company allowed the use of the technology.
ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.
Human reviewers from other companies may read any of the generated chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a potential risk to proprietary information.
“People don’t understand how data is used when they use generative AI services,” said Ben King, VP of customer trust at enterprise security firm Okta (OKTA.O).
“For businesses, this is critical, because users don’t have a contract with many AIs — because they’re a free service — so businesses won’t have run the risk through their normal assessment process,” King said.
OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post that assured corporate partners that their data would not be used to train the chatbot further, unless they gave express permission.
When people use Google’s Bard, it collects data such as text, location and other usage information. The company allows users to remove past activity from their accounts and request that content fed into the AI be deleted. Alphabet’s (GOOGL.O) Google declined to comment when asked for further details.
Microsoft (MSFT.O) did not immediately respond to a request for comment.
“INNOCUOUS TASKS”
A US-based Tinder employee said workers at the dating app used ChatGPT for “innocuous tasks” such as writing emails despite the company not officially allowing it.
“It’s regular emails. Very trivial, like making fun calendar invitations to team events, farewell emails when someone leaves … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak to reporters.
The employee said that Tinder has a “no ChatGPT rule” but that employees still use it in a “generic way that doesn’t reveal anything about us being on Tinder.”
Reuters could not independently confirm how Tinder employees used ChatGPT. Tinder said it provided “regular guidance to employees on security and data best practices”.
In May, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering that an employee had uploaded sensitive code to the platform.
“We are reviewing measures to create a safe environment for generative AI use that improves employee productivity and efficiency,” Samsung said in a statement on Aug. 3.
“However, until these measures are completed, we are temporarily restricting the use of generative AI through the company’s devices.”
Reuters reported in June that Alphabet had warned employees about how they use chatbots, including Google’s Bard, even as it promotes the program globally.
Google has said that although Bard can make unwanted code suggestions, it still helps programmers. It has also said it aims to be transparent about the limitations of its technology.
BLANKET BANS
Some companies told Reuters they are embracing ChatGPT and similar platforms, while keeping security in mind.
“We’ve started testing and learning about how AI can improve operational efficiency,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that the data stays inside the firewall.
“Internally, we have recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the efficiency and productivity of its teams.
Meanwhile, Tate & Lyle (TATE.L) Chief Financial Officer Dawn Allen told Reuters that the global ingredients maker was testing ChatGPT after “finding a way to use it safely”.
“We’ve had different teams decide how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to perform tasks more efficiently?”
Some employees say they cannot access the platform on their company computers at all.
“It’s completely banned on the office network, as if it doesn’t work,” said a Procter & Gamble (PG.N) employee, who wished to remain anonymous because they were not authorized to speak to the press.
P&G declined to comment. Reuters could not independently confirm whether P&G employees were unable to use ChatGPT.
Paul Lewis, chief information security officer at cyber security firm Nominet, said companies were right to be cautious.
“Everyone benefits from the increased capability, but the information is not completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to reveal information.
“A blanket ban is not warranted yet, but we have to tread carefully,” Lewis said.
Reporting by Richa Naidu, Martin Coulter and Jason Lange; Editing by Alexander Smith