What ‘they’ do, but ‘we’ don’t: OpenLM’s commitment to data security

A popular generative AI platform recently made headlines for an alarming reason: user conversations were unintentionally being indexed by search engines like Google. This incident, along with others, reveals a worrying trend among some software vendors who, in their race to innovate, often overlook fundamental data management standards, leaving their users vulnerable. 

The predatory approach: Where to be careful when using AI

When people interact with a generative AI tool, they often share personal, and sometimes very sensitive, information. The recent revelation highlighted a troubling reality: what you believe is a private conversation might not be. The default settings of some platforms, combined with users’ mistaken assumption of privacy, can make that data public.

This is not a one-off issue. These ‘problematic’ AI platforms reveal a pattern of behavior:

  • Data is used for training by default: Many platforms use your conversations to train their AI models unless you actively find and change the settings to opt out. This is a subtle but significant form of data collection that most users are unaware of.
  • Lack of confidentiality: The CEO of one such platform has even stated that user conversations are not protected by confidentiality laws. This means that if a conversation is required as evidence in a legal dispute, it could be produced in court, regardless of how sensitive it is.
  • Shadow AI risks: Within organizations, employees using these free, public platforms for work-related tasks can inadvertently leak confidential company information. A report by IBM found that one in five organizations surveyed had experienced a cyberattack because of security issues with ‘shadow AI,’ and those attacks cost an average of $670,000 more than breaches at firms with little or no shadow AI.

These practices, while sometimes explained in a lengthy privacy policy, demonstrate a disregard for user privacy by prioritizing model improvement over data protection.

A responsible approach to data protection: The value it brings

In stark contrast, responsible software solutions, whether AI-based or not, are built from the ground up with data protection at their core. OpenLM, for example, understands the crucial importance of a proactive, preventative approach to data privacy. The company’s philosophy is simple: what data it collects, how that data is processed, and how it is protected must align with the highest standards of data management.

Here’s how OpenLM demonstrates this commitment:

  • Strict adherence to GDPR: OpenLM is fully compliant with the General Data Protection Regulation (GDPR) and other international data privacy laws. This means they are legally and ethically obligated to ensure your data is processed lawfully, fairly, and transparently.
  • Data minimization: They adhere to the principle of data minimization, collecting only the information necessary for their specific, legitimate purposes.
  • Robust security and access controls: OpenLM employs strong security measures and a role-based security system to ensure that only authorized personnel have access to specific data (a minimal sketch of this pattern follows the list). This level of granular control is essential for preventing unauthorized data access and maintaining confidentiality.
  • Accountability and transparency: OpenLM provides clear and accessible privacy policies and ensures accountability for all data processing activities. They provide individuals with the right to access and correct their personal data and to have it deleted when no longer needed.
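
To make the role-based access pattern described above concrete, here is a minimal, generic sketch in Python. The role names, permission names, and function are hypothetical illustrations of deny-by-default access control, not OpenLM’s actual implementation.

```python
# A generic illustration of role-based access control (RBAC).
# Roles and permissions below are hypothetical, not OpenLM's real model.
from enum import Enum, auto


class Permission(Enum):
    VIEW_USAGE_REPORTS = auto()
    MANAGE_LICENSES = auto()
    EXPORT_PERSONAL_DATA = auto()


# Each role holds only the permissions it needs,
# echoing the data-minimization principle.
ROLE_PERMISSIONS = {
    "viewer": {Permission.VIEW_USAGE_REPORTS},
    "license_admin": {Permission.VIEW_USAGE_REPORTS, Permission.MANAGE_LICENSES},
    "privacy_officer": {Permission.EXPORT_PERSONAL_DATA},
}


def is_authorized(role: str, permission: Permission) -> bool:
    """Deny by default: grant access only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


if __name__ == "__main__":
    assert is_authorized("license_admin", Permission.MANAGE_LICENSES)
    assert not is_authorized("viewer", Permission.EXPORT_PERSONAL_DATA)
    print("RBAC checks passed")
```

The key design choice is that an unknown role or unlisted permission is refused automatically; access must be granted explicitly rather than revoked after the fact.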

So, make data protection your top priority when using any cloud solution, even a free one, be it generative AI or anything else. Start owning your data right away!

If you are looking for a modern software asset management solution that offers flexible deployment options and follows global data management standards, set up a discovery call with us.
