
Experts reveal key rules of information hygiene for working with chatbots

By John Newman

Feb 17, 2025

Artificial intelligence (AI) models are becoming ever "smarter", but experts warn careless users against placing excessive trust in chatbots.


Photo: t4.com.ua

As NBN reports, citing material from The Times of India, some types of confidential or personal information should never be shared with AI models such as ChatGPT, DeepSeek, Grok, or similar neural networks.

In particular, to prevent data "leakage", you should avoid requests containing:

  • your name, physical address, phone numbers, or email;
  • bank card numbers, account numbers, or insurance policies;
  • passwords for work accounts, e-mail, or social networks;
  • personal secrets, since everything entered into the chatbot is stored and can be used in the future;
  • medical history, diagnoses, and other medical information, as well as doctors' advice and recommendations;
  • explicit content, since "the Internet does not forget";
  • anything you would rather not make public, as messages can be passed on to third parties.
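For readers who send prompts to chatbots programmatically, the rules above can be partially automated. Below is a minimal sketch, not from the article, of a hypothetical `redact` helper that strips a few common PII patterns (emails, phone numbers, card-like digit runs) from a prompt before it leaves your machine; the regexes are illustrative and will not catch every format.

```python
import re

# Illustrative PII patterns (assumption: these cover only the most common
# formats; real-world redaction needs far more robust detection).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with a [REDACTED:<kind>] placeholder."""
    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{kind}]", prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com or +1 (555) 123-4567."))
# → Contact me at [REDACTED:email] or [REDACTED:phone].
```

Redacting locally, before the request is sent, matches the article's core point: once text reaches the chatbot, you no longer control where it is stored or reused.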

Our information portal previously reported that Bigme revealed the specifications of its new Android smartphone with an e-ink display.

nbnews.com.ua
