AI-generated passwords, YES or NO?
Password generation often seems like a simple task: you need a few letters, numbers, and symbols, and you’re done… you have a “secure” password. However, the reality is much more complex, and if you rely on an artificial intelligence model, such as ChatGPT, Gemini, or Claude, to create secure passwords, you are making a major security mistake. Recent studies show that LLMs (large language models) do not provide the strict randomness necessary to protect your online accounts.
1. These AI-generated passwords are not truly random
Many users have started asking chatbot applications to generate passwords for their personal accounts, from email to social networks or banking platforms. The problem is that AI models, including ChatGPT, Google Gemini, and Claude, do not create truly random passwords. Rather than drawing on cryptographic randomness, these models produce output shaped by recurring patterns in their training data, making the passwords far easier to guess than they appear at first glance.
For example, a recent analysis by the cybersecurity firm Irregular tested these systems by generating 50 passwords. The results showed only 23 unique passwords, with some repeated dozens of times.
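As a sketch of how such a uniqueness check works (the sample data below is invented for illustration, not Irregular's actual output), a few lines of Python can tally duplicates in a batch of generated passwords:

```python
from collections import Counter

def uniqueness_report(passwords):
    # Count how often each password appears and report the distinct total
    counts = Counter(passwords)
    return len(counts), counts.most_common(3)

# Hypothetical stand-in for 50 model outputs: heavy repetition plus a few one-offs
sample = ["P@ssw0rd123!"] * 20 + ["Xk9#mP2$vL5@"] * 15 + [f"pw{i}" for i in range(15)]
unique, top = uniqueness_report(sample)
print(unique)  # → 17 distinct strings out of 50
```

A truly random generator would make even a single repeat in 50 draws astronomically unlikely, so any duplicates at all are a red flag.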
2. “Complicated” passwords that are easy to crack
Although online password checkers may show a good score for these passwords, studies emphasize that apparent chaos does not mean real security. A string of 16 characters generated by AI may seem complex, but if it has a repetitive structure, an attacker can exploit this predictability to dramatically reduce the number of possible guesses. Security experts warn that pattern-based passwords can be cracked much faster than you think.
Entropy, measured in bits, quantifies how hard a string of characters is to guess; for AI-generated passwords it has been estimated at values far below the levels recommended for real security. One analysis found that AI passwords may carry only ~27 bits of entropy, compared to ~98 bits or more for truly random passwords produced by top password manager applications.
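These figures follow from the standard formula for a uniformly random string: entropy in bits = length × log2(alphabet size). A quick check in Python (the 94-symbol printable-ASCII alphabet is an assumption chosen for illustration, not a figure from the study):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a password drawn uniformly at random:
    # each character contributes log2(alphabet_size) bits
    return length * math.log2(alphabet_size)

# 16 characters drawn uniformly from 94 printable ASCII symbols
print(round(entropy_bits(94, 16), 1))  # → 104.9 bits

# Gap between a ~27-bit patterned password and a ~98-bit random one:
# the attacker's search space shrinks by a factor of 2^71
print(2 ** (98 - 27))
```

The gap is what matters: every bit lost halves the number of guesses an attacker needs, so dropping from ~98 to ~27 bits turns an infeasible search into a routine one.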
3. The danger is not just theoretical
In addition to being predictable, AI-generated passwords can end up in public repositories or source code (for example, when programmers leave them in .env files). Similarly, attackers can build lists of the patterns observed in passwords generated by AI models and use them later in automated attacks.
Moreover, if you have already used AI-generated passwords, experts recommend changing them immediately, especially for sensitive accounts, since the vulnerability is not merely theoretical but already observable in current practice.
4. Safer alternatives for account security
There are much safer methods for managing passwords:
- Password managers: specialized applications that generate truly random passwords using cryptographic algorithms and store them in a secure vault.
- Biometric authentication and/or passkeys: modern technologies that eliminate the need for traditional passwords for many services.
- Two-factor authentication (2FA): adds an extra layer of security, even if the password is compromised.
Instead of asking an AI model to create passwords, use dedicated tools that adhere to the strictest security standards and genuinely protect your accounts.
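For comparison, generating a truly random password requires nothing more than the operating system's cryptographic random source. In Python, the standard-library secrets module offers a minimal sketch of what password managers do internally:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    # secrets draws from the OS CSPRNG, unlike a language
    # model's pattern-biased token sampling
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

With 94 possible symbols per position, a 16-character password generated this way carries roughly 105 bits of entropy, well above the ~98-bit benchmark cited earlier.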
AI models are not designed to create truly secure passwords
In an era where artificial intelligence is part of our everyday digital lives, it may be tempting to use the same tools for seemingly simple tasks, such as password generation. However, the results are clear: AI models are not designed to create truly secure passwords. The lack of randomness, repeated patterns, and predictability of these character strings make them vulnerable to cyber attacks.
For real digital security, experts recommend relying on dedicated security solutions, such as a password manager, two-step authentication, or passkey technologies, and not on generalist AI.
Source: androidpolice.com