Balancing productivity gains with ChatGPT security threats: a proactive approach
Powerful language models like ChatGPT bring significant productivity benefits to organisations, but their growing popularity also introduces security threats. In this blog post, I’ll explore the risks associated with ChatGPT usage, suggest actionable alternatives for addressing these concerns, and emphasise the importance of education and fostering a security-conscious culture.
Your organisation likely faces ChatGPT risks
If your organisation has more than a few employees, chances are that some of them are already using ChatGPT. Developed by OpenAI[1], ChatGPT is an impressive tool that employs large language models to produce human-like responses to prompts. Its capabilities range from research and email improvement to essay and blog writing (not this post!) and programming code. It offers fantastic productivity gains and serves as a great equaliser for non-native English speakers.
Drawing on our experience, such usage is clearly already occurring. Publicly, Samsung has warned its employees to refrain from sharing confidential information with ChatGPT after multiple incidents in which company secrets were leaked[2].
ChatGPT’s tempting benefits make it hard to resist
For example, marketing departments can use ChatGPT to create engaging content, office workers can ask for summaries of key concepts, and content creators can request writing improvement tips or even have ChatGPT do the editing itself. Despite these clear advantages, it’s crucial to recognise the security risks linked to ChatGPT, particularly when handling confidential information.
Humans will be human, despite good intentions
No large company can completely monitor all of its employees, and AI service providers are no exception. There are documented cases of employees who review machine-learning training data using it inappropriately[3][4]. Many emerging services inform users that humans may review the data they enter in order to improve the model’s output. In other words, someone might read what you tell an AI model. Below, I’ll discuss some options for addressing this issue.
Competitive pressures drive ChatGPT adoption
It’s likely that GPT-like services are already being widely used by students at schools and universities. Given the competitive nature of many workplaces, it’s reasonable to assume that some employees use ChatGPT to gain an edge over their peers. As a result, insider threats could emerge – leading to intentional or unintentional leaks of sensitive data.
Lower your risks
To minimise these risks, organisations can consider alternatives such as private, self-hosted models (for example, the open-source GPT-J) or cloud-hosted models with privacy guarantees. A minimal sketch of the self-hosted option follows.
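To illustrate what self-hosting can look like, here is a minimal sketch that loads GPT-J through the Hugging Face transformers library. The prompt and generation parameters are illustrative assumptions on my part, and running the 6-billion-parameter model requires substantial memory:

```python
# Minimal sketch: querying a self-hosted GPT-J model with Hugging Face
# transformers, so prompts never leave your own infrastructure.
# Assumes `pip install transformers torch` and enough RAM/VRAM for the
# 6B-parameter model (roughly 24 GB in full precision).
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

# An illustrative prompt; in practice this would come from your users.
prompt = "Summarise the main risks of pasting confidential data into external AI services:"
result = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7)

print(result[0]["generated_text"])
```

The trade-off is operational: you take on the hosting burden and accept lower output quality than ChatGPT in exchange for keeping your data in-house.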
Your chat input today could be someone else’s answer tomorrow
Another option is to contact OpenAI and request that your input data not be used for model training (OpenAI provides a form for this). Trusting organisations with your information can be reasonable, provided their privacy controls are satisfactory, but it’s essential to do your homework.
Be aware that confidential information entered into ChatGPT might end up training new models and accidentally leak into other users’ responses. Precautions such as limiting access to sensitive data and implementing robust data handling policies are therefore necessary; one possible technical control is sketched below.
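As one illustration, an organisation could screen prompts before they leave the network. The following Python sketch is hypothetical: the patterns and the redact() helper are my own illustrative assumptions, not a complete data loss prevention tool:

```python
# Hypothetical pre-submission filter: redact obvious sensitive patterns
# before a prompt is sent to any external AI service. The patterns below
# are illustrative examples, not an exhaustive or production-grade list.
import re

SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abc123def456ghi789"))
# -> Contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

A filter like this only catches what it knows to look for, which is why it should complement, not replace, the education and policy measures discussed next.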
Blocking ChatGPT isn’t the answer – foster a culture of security instead
Education and a security-aware culture are often more effective for positive cybersecurity outcomes than strict controls alone (I lectured on this at Imperial College London last Friday). Organisations can hold regular training sessions, develop internal guidelines or policies, and encourage open communication about security concerns. When employees understand the potential risks of ChatGPT usage, they can grasp the implications of their actions and contribute to a secure environment.
The takeaway message
While ChatGPT usage poses real security threats, it also provides substantial productivity gains. Proactive risk management, employee education, and a security-conscious culture are vital for striking the right balance between leveraging ChatGPT’s benefits and protecting your organisation and its stakeholders.
1. OpenAI (2022). ChatGPT: Optimizing Language Models for Dialogue. [online] OpenAI. Available at: https://openai.com/blog/chatgpt. [↩]
2. Madison, L. (2023). Samsung workers made a major error by using ChatGPT. [online] TechRadar. Available at: https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt [Published 5 April 2023; accessed 6 April 2023]. [↩]
3. Brodkin, J. (2023). Tesla workers shared images from car cameras, including ‘scenes of intimacy’. [online] Ars Technica. Available at: https://arstechnica.com/tech-policy/2023/04/tesla-workers-shared-images-from-car-cameras-including-scenes-of-intimacy/ [Published 6 April 2023; accessed 7 April 2023]. [↩]
4. Guo, E. (2022). A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? [online] MIT Technology Review. Available at: https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/ [Published 19 December 2022; accessed 7 April 2023]. [↩]