
Understanding the Potential Risks of Using ChatGPT and AI

The capabilities of artificial intelligence (AI) tools such as ChatGPT have prompted organizations worldwide (and threat actors) to adopt generative AI at an increasing pace. However, its use carries potential legal and business risks.


As generative AI is adopted by more organizations, its unchecked use can introduce a variety of unprecedented challenges. Any organization deploying, or planning to deploy, AI in the workplace needs to consider the potential legal and business risks of doing so.

Here’s what you need to know:

Data privacy

Generative AI and machine learning tools are trained on vast quantities of unfiltered data scraped from the internet, a practice that may violate major privacy laws—most notably the EU’s General Data Protection Regulation (GDPR).

GDPR stipulates that the collection of personal data must be limited in scope and serve a predefined purpose. However, given the vast number of AI models on the market, it is entirely possible that some violate GDPR or other laws through their data collection and training methods.

GDPR gives European citizens the right to request deletion of their data from an organization’s records—a provision commonly referred to as the “right to be forgotten.” Though there are limits to the scope of this right, it is unclear how it applies to AI models like ChatGPT. This uncertainty has led some organizations and governments to temporarily ban the use of generative AI, or to consider legislation limiting its development and applications.

Data scraping has also raised a host of concerns related to copyright infringement, plagiarism, and content ownership, with artists, publishers, and coders stating that their works have been used without their knowledge or permission. Numerous lawsuits have already emerged against AI firms over rights violations and lack of attribution. As such, any organization using AI in the workplace needs to ensure that it complies with relevant privacy regulations.

Incorrect information and misinformation

While there are substantial benefits to using chatbots and AI in business, organizations should recognize the legal risks that incorrect responses can pose. Generative AI delivers answers in an authoritative tone, but not every answer is accurate.

Generative AI systems have been known to “hallucinate,” or invent information that appears true but is incorrect. For example, ChatGPT has been found to make baseless claims against individuals, citing nonexistent news articles; in one case, it falsely stated that an Australian elected official had served prison time for bribery. As such, enterprises seeking to incorporate these technologies into daily workflows must consider how AI software could provide incorrect information to employees, customers, and other stakeholders.

Potential security risks of using AI

Organizations need to be cognizant of what kind of information they are supplying to AI vendors and their models. In the current state of machine learning, even AI researchers are sometimes unsure of how generative AI retains personal data. Therefore, depending on how AI is used in an organization, the compromise of that model, or malicious internal use, could be a major risk. One practical guardrail is to scrub sensitive data from prompts before they ever leave the organization’s boundary, as sketched below.
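The following is a minimal Python sketch of that idea: redacting a few common personal-data patterns from a prompt before it is sent to any external model. The pattern list and the redact function are illustrative assumptions, not a production data-loss-prevention (DLP) solution, and a real deployment would rely on a dedicated DLP tool with far broader coverage.

```python
import re

# Illustrative patterns only; a real DLP tool covers many more data types.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common personal-data patterns with placeholder tokens
    before the text is passed to an external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
    print(redact(prompt))
    # -> Summarize this ticket from [EMAIL REDACTED], SSN [SSN REDACTED].
```

Regex-based redaction like this only catches well-structured identifiers; free-text personal details, trade secrets, and source code require policy controls and human review in addition to automated filtering.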

Stay ahead of threats with Flashpoint

While the full extent of the risks posed by increasing organizational use of AI is still unclear, we do know that threat actors are actively discussing the capabilities of ChatGPT and other AI models. To stay ahead of them, organizations need to track the latest developments in the threat landscape.

Begin your free trial today.