
Google Advises Employees to Exercise Caution When Using Chatbots Amid Privacy Concerns

Google released Bard to the public despite its own wariness about the chatbot


Aditya Saikrishna

UNITED STATES: Alphabet Inc., the parent company of Google, is urging its employees to exercise caution when using chatbots, including its own program Bard, even as it promotes the software globally.

 The company has updated its long-standing policy to protect confidential information, instructing employees not to input sensitive materials into AI chatbots due to concerns about potential data leaks and unauthorized access.


Google’s cautious approach reflects a security standard increasingly adopted by tech companies. 

Many organizations, including Samsung, Amazon, and Deutsche Bank, are warning employees about the use of publicly available chat programs, and Apple is taking a similar approach.


The updated policy reflects Google’s desire to avoid any harm to its business from its competition with ChatGPT, a product developed by OpenAI and backed by Microsoft Corp. 

The rivalry involves significant investment as well as potential advertising and cloud revenue from new AI programs.


A survey conducted on Fishbowl, a professional networking site, revealed that approximately 43 percent of professionals, including those from prominent US companies, were already using ChatGPT or other AI tools as of January 2023. 

However, many did not disclose this usage to their superiors, underscoring the need for caution and transparency.

Google’s concerns about chatbot usage are not new. Before Bard’s launch, Google instructed staff testing the program not to feed it internal information. 

Despite these concerns, Bard has been made available in over 180 countries, supporting 40 languages. 

Google is actively engaged in discussions with Ireland’s Data Protection Commission and is addressing regulatory inquiries following concerns about the chatbot’s impact on privacy.

AI-powered chatbots have the potential to streamline various tasks, but they also pose risks, including the possibility that their output contains misinformation, sensitive data, or copyrighted material. 

In light of these privacy concerns, Google’s updated privacy notice explicitly advises users not to include confidential or sensitive information in their conversations with Bard.

Both Google and Microsoft have developed software solutions in response to these concerns. 

They offer conversational tools to their business customers that come at a higher price but avoid incorporating data into public AI models. Users also have the option to delete their conversation history in Bard.

