ChatGPT, a term that is no longer new to any of us. This powerful AI tool developed by OpenAI has gone viral ever since its launch in November 2022. It is an AI-powered language model trained to follow instructions and respond through back-and-forth conversation with users. People are using it to plan trips, ask for advice and write essays. There is no doubt that the power of ChatGPT is thrilling, but at the same time it shows how dangerous the tool can be.
As it grows, ChatGPT is also making its mark in the workplace, with employees asking it to help draft emails, PowerPoint decks and contracts. This rings several alarms, from confidentiality issues to the prospect of jobs slowly being replaced. Cyberhaven, a security company in the US, recently conducted research on ChatGPT usage among 1.6 million workers at companies across different industries. As of March 21, 2023, it found that 8.2% of employees had used ChatGPT for their daily work tasks and 3.1% had pasted company data into ChatGPT at least once. Between February 26 and March 4, employees pasted sensitive data into ChatGPT 199 times, client data 173 times and source code 159 times, among other incidents shown in the chart below.
ChatGPT is surely a fun and useful tool, as it can help you complete tasks in a matter of minutes. However, what people forget is that ChatGPT is a model trained with Reinforcement Learning from Human Feedback (RLHF), meaning conversations with users may be used to generate future answers. For example, if you list your company's goals and action plans in ChatGPT and ask it to create a presentation deck, a competitor could later ask ChatGPT about your company's plans and potentially receive an answer, resulting in a huge data leak.
The most alarming recent data leakage incident involves the electronics brand Samsung in South Korea. The company had recently permitted its engineers to use ChatGPT for their daily tasks, but within just a month three employees leaked confidential company information. In one case, an employee pasted the source code of a new program into ChatGPT to help identify faults; in another, an employee copied meeting notes into ChatGPT and asked it to generate a presentation deck. That sensitive information is now in OpenAI's hands, and Samsung appears to have no way of retrieving or deleting it, meaning that if the right questions are asked, the data could potentially be exposed to others.
This incident shows how dangerous ChatGPT can be and raises real concerns about data privacy. At this moment there are few rules and regulations protecting ChatGPT users, and it is still under debate which regulations the tool needs to comply with. Until then, companies can take the initiative and set up their own rules to stop employees from sharing information with ChatGPT, while everyone else should use it at their own risk, or simply avoid sharing personal and confidential information when seeking ChatGPT's help.
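For teams that want a concrete safeguard on top of policy, one option is to redact obvious secrets before a prompt ever leaves the company. The snippet below is only a minimal sketch of that idea; the patterns and the redact helper are our own illustrative assumptions, not part of any tool mentioned in this article, and a real data-loss-prevention product would detect far more than this.

```python
import re

# Illustrative patterns only; real data-loss-prevention tools use far richer detection.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),  # e.g. OpenAI-style keys
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder label."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Summarise this note: contact jane.doe@example.com, "
           "the test key is sk-abcdefghijklmnopqrstuv12.")
    print(redact(raw))
    # Prints: Summarise this note: contact [EMAIL REDACTED],
    # the test key is [API_KEY REDACTED].
```

In practice a filter like this would sit in front of whatever chat interface or API call employees use, and the pattern list would come from the company's own security policy rather than three hard-coded examples.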
On top of the cybersecurity tips above, our Modern+ and Safeti+ customers can check out more cybersecurity end-user awareness training videos, particularly those focusing on Cyber Security Policy and Sensitive Personal Information Protection, to learn more about how to protect yourself and your company. If you have not joined us yet, click here to learn more about the different services we provide.
References:
Cyberhaven; Business Today - Samsung Data Leakage in ChatGPT
As your trusted Cloud Solution & IT Service Provider, we empower your business to accomplish truly remarkable feats.