Security Through Obscurity: When is it Bad?

TL;DR

Security through obscurity – relying on secrecy to protect something – is generally a weak security measure. It’s okay as an extra layer, but never the main one. If someone figures out your secret (and they usually will), your system is compromised. Strong encryption and good access controls are far more reliable.

What is Security Through Obscurity?

Security through obscurity means hiding information about a system, rather than actually making it secure. Think of it like locking your diary with a simple code you think no one will guess. It might work for a while, but if someone finds the code (or figures it out), your secrets are exposed.

Why is it Usually Bad?

  1. It’s not real security: Obscurity doesn’t stop determined attackers. They can use reverse engineering, social engineering, or simply enough time and effort to uncover the hidden details.
  2. Secrets get out: Employees leave, documents are lost, code gets leaked… secrets rarely stay secret forever.
  3. False sense of security: Believing you’re secure because of obscurity can lead to neglecting proper security measures.
  4. Makes auditing harder: If the system’s workings aren’t well-documented (because they’re ‘secret’), it’s difficult for security professionals to assess its vulnerabilities.

When *Might* Obscurity Help?

Obscurity can be a useful additional layer of defence, but only when combined with strong, fundamental security practices.

  • Delaying attackers: If an attacker has to spend extra time figuring out basic system details, it gives you more time to respond.
  • Low-value targets: For systems that aren’t particularly attractive to attackers, obscurity might be enough to deter casual attempts.

Important: Never rely on obscurity as your primary security mechanism.

Examples

  1. Bad Example: A custom encryption algorithm that isn’t publicly reviewed. The secrecy is the only ‘security’. This will almost certainly be broken by someone who knows what they are doing.
    # Don't do this! A made-up, insecure example: 'key' just shifts each
    # character (a Caesar-style cipher), so secrecy of the scheme is the only protection.
    def encrypt(data, key):
      return ''.join(chr(ord(ch) + key) for ch in data)
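    To see why this fails, here is a short sketch (reusing the encrypt function above, with a made-up message and key) that recovers the ‘secret’ shift by brute force in a fraction of a second:

    # The hidden key is the only protection - an attacker just tries them all.
    secret_message = encrypt("attack at dawn", 7)   # 7 is the 'secret' key

    # Here we cheat by comparing against known plaintext; a real attacker
    # would score each candidate for readable text instead.
    for guess in range(1, 128):
      candidate = ''.join(chr(max(ord(c) - guess, 0)) for c in secret_message)
      if candidate == "attack at dawn":
        print("Recovered key:", guess)
        break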
    
  2. Better Example: Using standard encryption (like AES) and changing the default port for a service. The encryption is the real security; the port change just makes it slightly harder to find.

    For example, instead of SSH running on port 22, you run it on port 49152.
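    As a minimal sketch of the ‘real security’ part, here is the widely reviewed cryptography package’s Fernet interface (AES under the hood). The only secret is the key, not the algorithm:

    # Requires the 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # this key is the secret - manage it carefully
    cipher = Fernet(key)

    token = cipher.encrypt(b"attack at dawn")   # safe to store or transmit
    print(cipher.decrypt(token))                # b'attack at dawn'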

  3. Bad Example: Hiding the location of important log files. An attacker who gains access to the system will eventually find them.
    # Bad practice - hiding logs doesn't prevent access
    logs_directory = '/very/secret/log/location'
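    What actually protects the logs is access control on the directory, not a hidden path. A minimal sketch (the /var/log/myapp path is just a made-up example):

    import os
    import stat

    logs_directory = '/var/log/myapp'    # documented location, nothing secret about it
    os.makedirs(logs_directory, exist_ok=True)

    # Owner may read/write/traverse; group and others get nothing (mode 0o700).
    os.chmod(logs_directory, stat.S_IRWXU)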
    

How to Tell if You’re Relying Too Much on Obscurity

  • You can’t easily explain how your system works: If you need to keep details secret even from internal security teams, that’s a red flag.
  • Documentation is minimal: Lack of clear documentation suggests reliance on hidden knowledge.
  • Security relies on “trade secrets”: If the core security depends on the design itself staying secret (rather than on a key or credential), it’s obscurity.

    For example, a custom protocol that isn’t published.

What to Do Instead

  1. Use strong encryption: Protect sensitive data with well-established algorithms.
  2. Implement robust access controls: Limit who can access what, and use multi-factor authentication (a toy sketch follows this list).
  3. Keep software updated: Patch vulnerabilities promptly.
  4. Regular security audits: Have independent experts review your system for weaknesses.
  5. Document everything: Clear documentation is essential for understanding and maintaining security.

    A configuration management tool like Ansible or Puppet can help automate this: the playbooks and manifests themselves become version-controlled documentation of how your systems are configured and patched.
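    As a toy illustration of point 2, access decisions should come from an explicit, auditable policy rather than from knowledge an attacker might acquire. All names below are hypothetical:

    # Hypothetical role-based check: permissions are explicit and auditable,
    # and MFA is required regardless of role.
    ROLE_PERMISSIONS = {
      'admin': {'read_logs', 'rotate_keys'},
      'analyst': {'read_logs'},
    }

    def is_authorised(role, action, mfa_verified):
      return mfa_verified and action in ROLE_PERMISSIONS.get(role, set())

    print(is_authorised('analyst', 'read_logs', mfa_verified=True))    # True
    print(is_authorised('analyst', 'rotate_keys', mfa_verified=True))  # False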
