Big-tech companies have done a brilliant job of keeping their users on the edge of their seats at the privacy theatre. Every now and then, a company decides to update its policies – mostly privacy-related.
The names at the top of your mind – Google, Meta, or Zoom – have all at some point arbitrarily decided to shift their stance on data privacy, and generative AI has only added fuel to the fire.
These companies, running the tech caliphate, need user data more than ever because of the data-hungry AI models they are developing. The more data, the better the results (barring the trash, of course). Companies fanning the generative AI wave have been trying to obtain huge amounts of data by every possible means.
Some have entered into official partnerships: OpenAI has signed deals with Axel Springer and the Associated Press; Google has reportedly struck a deal with Reddit to use its content; and even Tumblr and WordPress are reportedly preparing to sell user data to OpenAI and Midjourney.
At the same time, the rest plan to get their hands on data by quietly tweaking their data policies.
Seven months ago, Google updated its privacy policy, hinting that it would mine publicly available data from the web to improve its AI models. More recently, the search giant was caught revising how Chrome describes its Incognito mode.
The move was a response to a lawsuit seeking $5 billion in damages, which accused Google of illegally tracking users' browsing activity even in Incognito mode and which the company has since agreed to settle.
Keep Users in the Loop
Companies having a sudden epiphany to update their privacy policies is nothing new. The underlying problem is the lack of disclosure to their users. For one, the terms and conditions are already written in convoluted, jargon-heavy language. On top of that, the lack of transparency from the company’s end makes things worse.
Protecting your data across this vast digital infrastructure is difficult, and these technology companies have clearly made little effort to help users keep up with their chameleon-like policies. Only after meticulously doing their homework can users determine what these businesses actually hold on them.
Considering that all the data about our lives sits on our smartphones and in the cloud, digital privacy must be at the forefront for both consumers and tech companies. Instead of walking a tightrope between collection and outright theft, companies should simply be more transparent.
In 2020, many tech companies, from Facebook and Google to smaller startups, overhauled their privacy policies when a sweeping new privacy law, the California Consumer Privacy Act, came into effect. Among other changes, users could now click a link on major companies’ sites that read “Do Not Sell My Personal Information”.
An occasional privacy policy update is natural, but an email from the company notifying users of the change is a must. The explanation needs to be simple enough for a layperson to understand why the update is required in the first place, and it should spell out how it affects them.
Do It for the Law
The generative AI landscape is still brewing hot, with more and bigger investments pouring in. At the same time, high-tech corporations are staring at a slew of upcoming regulations prompted by the rise of large language models and other AI systems, which have the potential to disrupt entire industries.
With industry insiders and experts across fields actively calling for regulation, ChatGPT and Co. have become a priority for lawmakers globally, from the EU to India. And as companies suddenly start updating their data collection policies again, new laws and regulations are likely on the way.
Complying with these laws is the only way companies can keep their tech from going haywire, because self-regulation is hard. With the technology being used by millions of people every day, it is the companies’ responsibility to build a privacy-literate, trustworthy relationship with their users.