Shortly after ChatGPT was released to the public, a number of corporate giants, from Apple to Verizon, made headlines by announcing bans on the use of the technology in the workplace. But a new survey reveals that these companies are far from outliers.
More than one in four companies has at some point banned the use of generative AI tools in the workplace, according to a new report from Cisco, which surveyed 2,600 privacy and security professionals last summer. 63% of respondents said they limit the data their employees can enter into these systems, and 61% said they restrict which generative AI tools employees at their companies can use.
At the heart of these restrictions is the concern that employees could accidentally leak private company data to a third party, such as OpenAI, which could then turn around and use that data to further train its AI models. In fact, 68% of respondents said they were concerned about exactly this type of data sharing. OpenAI offers companies a paid enterprise product that promises to keep business data private, but the free public version of ChatGPT and other generative AI tools like Google Bard offer far fewer guardrails.
Dev Stahlkopf, Cisco’s chief legal officer, says this can leave companies’ internal information vulnerable. “With the influx of AI use cases, an organization needs to consider the implications before making a tool available to employees,” Stahlkopf says, adding that Cisco conducts AI impact assessments for all new third-party AI products. “Each company needs to make its own assessment of risk and risk tolerance, and for some companies it may make sense to ban the use of these tools.”
Companies like Salesforce have tried to turn this uncertainty into a market opportunity, launching products that promise to keep sensitive data from being stored by the system and to screen model responses for toxicity. But it’s clear that the popularity of out-of-the-box tools like ChatGPT is already causing headaches for enterprise privacy experts. Despite the restrictions most companies have put in place, the survey found that 62% of respondents had entered information about internal processes into generative AI tools, 42% had entered non-public company information, and 38% had entered customer information.
But it’s not just employees leaking private data that businesses are worried about. According to the survey, the biggest concern among security and privacy professionals when it comes to generative AI is that AI companies will use publicly available data to train their models in ways that violate their businesses’ intellectual property rights. (And 58% of respondents see losing their jobs to AI as a risk.)
“Organizations believe that the return on privacy investment outweighs the expenses,” says Stahlkopf. “Organizations that treat privacy as a business imperative, not just a compliance practice, will benefit from this era of AI.”