I generated a random 16-character (mixed upper/lowercase) subdomain and set up a virtual host for it in Apache, and within an hour I was seeing vulnerability scans against it.
How are folks digging this up? What’s the strategy to avoid this?
I am serving it all with a single wildcard SSL cert, if that’s relevant.
Thanks
Edit:
- I am using a single wildcard cert, with no subdomains attached/embedded/however those work
- I don’t have any subdomains registered with DNS.
- I attempted a zone transfer with dig axfr example.com @ns1.example.com, which returned "zone transfer DENIED"
Edit 2: I’m left wondering: is there an Apache endpoint that returns all configured virtual hosts?
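For the record, Apache has no built-in HTTP endpoint that lists vhosts, but mod_info (if loaded and publicly reachable) will happily dump the whole config, ServerNames included. Here's how I checked — the curl URL assumes mod_info's default /server-info path, and example.com is a placeholder:

```shell
# Dump the parsed vhost configuration from the CLI (no HTTP involved)
apachectl -S

# Is mod_info or mod_status loaded?
apachectl -M | grep -Ei 'info|status'

# If mod_info is loaded and not access-restricted, this leaks every vhost name:
curl -s http://example.com/server-info | grep -oi 'servername[^<]*'
```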
Edit 3: I’m going to go through this hardening guide and try again with a new random subdomain https://www.tecmint.com/apache-security-tips/
You say you have a wildcard cert, but just to make sure: I don’t suppose you’ve used ACME with Let’s Encrypt or some other publicly trusted CA to issue a cert including the affected name? If so, it will be public in the Certificate Transparency logs.
If not I’d do it again and closely log and monitor every packet leaving the box.
The random name is not in the public log. Someone else suggested that earlier. I checked crt.sh, and while my primary domain is there, the random one isn’t.
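For anyone repeating the check, crt.sh exposes a JSON endpoint; this sketch assumes jq is installed and uses example.com as a placeholder:

```shell
# Query crt.sh for every logged cert name under the domain.
# %25 is a URL-encoded '%', i.e. a wildcard match on *.example.com.
curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
  | jq -r '.[].name_value' \
  | sort -u
```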
My next suspicion, from what you’ve shared so far and apart from what others suggested, would be something outside the HTTP server loop entirely.
Have you used some free public DNS server and inadvertently queried it with the name, from a container or something? Developer tooling building an app with analytics enabled? Do any locally connected AI agents have access to it?
Yeah, this is interesting, I’ll dig more into this direction.
But the randomly generated subdomain has never seen a DNS registrar.
I do have *.mydomain.com registered though…hmmm
Do post again if you figure it out!
Will do!
Maybe using subfinder?
If there’s no DNS entry, do you mean you’re getting scans to your IP with these random subdomains in the Host headers? So someone would need both pieces of information? Curious.
Yes, exactly. Super weird, shouldn’t happen. I wonder if I have a compromised box somewhere…
If you have a browser with search suggestions enabled, everything you type in the URL bar gets sent to a search engine like Google to give you URL suggestions. I would not be surprised if Google uses this data to check what it knows about the domain you entered, and if it sees that it knows nothing, sends a bot to scan it and gather more information.
But in general, you can’t access a domain without using a browser, which might send what you type to some company’s backend, and voilà, you’ve leaked your data.
This is easily verified by creating another batch of subdomains and testing with a browser that doesn’t do tracking, like Waterfox.
What you can do is segregate networks.
If the browser runs in, say, a VM with only access to the intranet and no internet access at all, this risk is greatly reduced.
Have you sent the URL across any messaging services? Lots of them look up links you share to see if it’s malware (and maybe also to shovel into their AI). Even email services do this.
Nope, but that’s a good suggestion. I set this one up brand new for the experiment.
Crawlers typically crawl by IP.
Are you sure they’re not just using the IP?
You need to explicitly configure the server to drop the connection on an invalid domain.
I use a similar pattern and see zero crawls.
+1 for dropped connections on invalid domains. Or hell, redirect them to something stupid like ooo.eeeee.ooo just so you can check your redirect logs and see what kind of BS the bots are up to.
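For Apache, a minimal sketch of a catch-all default vhost, with placeholder cert paths: Apache routes a request with an unknown Host header to the first matching vhost, so loading a deny-all vhost first starves the scanners.

```apache
# Loaded first (e.g. 000-default.conf) so unknown Host headers land here.
<VirtualHost *:443>
    ServerName default.invalid
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/wildcard.crt
    SSLCertificateKeyFile /etc/ssl/private/wildcard.key
    # Deny everything; real subdomains get their own vhosts with
    # explicit ServerName directives.
    <Location "/">
        Require all denied
    </Location>
</VirtualHost>
```

Unlike nginx’s return 444, stock Apache can’t silently drop the connection, but a blanket 403 from the default vhost still keeps scanners away from the real sites.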
Is this at a webserver level?
It can be both server and DNS provider. For instance, Cloudflare allows you to set rules for what traffic is allowed, and you can set it to automatically drop traffic for everything except your specific subdomains. I also have mine set to ban an IP after 5 failed subdomain attempts. That alone does a lot of heavy lifting, because it ensures your server only gets hit with requests that have already figured out a working subdomain.
Personally, I see a lot of hacking attempts aimed at my main www subdomain, looking for WordPress. Luckily, I don’t run WordPress. But the bots are 100% out there, just casually scanning for WordPress vulnerabilities.
We’re always watching.
Did you yourself make a request to it, or did you just set it up and never check it? My horrifying guess is that if a request uses SNI, every server in the middle can read the subdomain, and some system in the internet routing path is untrustworthy.
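You can watch this on the wire yourself; in this sketch the interface, server name, and subdomain are all placeholders:

```shell
# SNI is sent unencrypted in the TLS ClientHello (unless ECH is in use),
# so anything on the path can read the hostname.
# Terminal 1: watch for the name on the wire
sudo tcpdump -i eth0 -A 'tcp port 443' | grep --line-buffered 'RaNd0mSubdomain'

# Terminal 2: a single TLS handshake carrying that SNI
openssl s_client -connect server.example.com:443 \
  -servername RaNd0mSubdomain.example.com </dev/null
```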
Previous experiments, yes, I sent a request. The random one, no.
If you do a port scan on your box, what services are running? Maybe something like email or diagnostics is exposed to the internet and announcing subdomains?
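Something like the following — the address is a placeholder, and the scan should run from a different network so you see what the internet sees:

```shell
# What is the box actually listening on? (run locally on the VM)
sudo ss -tlnp

# What does the internet see? (run from OUTSIDE your network;
# 203.0.113.10 is a placeholder documentation address)
nmap -sV -p- 203.0.113.10
```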
It’s literally just a VM hosting Apache and nothing else.
Maybe that particular subdomain is getting treated as the default virtual host by Apache? Are the other subdomains receiving scans too?
I don’t use Apache much, but NGINX sometimes surprises me with which server block it treats as the default when one isn’t explicitly defined.
You need to look at the DNS server used by whatever client is resolving that name. If it’s going to an external recursive resolver instead of using your own internal DNS server then you could be leaking lookups to the wider internet.
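A quick way to check is to watch the DNS traffic actually leaving the box; tcpdump needs root, and resolvectl assumes a systemd-resolved host:

```shell
# Watch every DNS query leaving the box; if the random subdomain shows up
# heading to a public resolver (8.8.8.8, 1.1.1.1, ...), that's the leak.
sudo tcpdump -ni any 'port 53 or port 853'

# On systemd-resolved hosts, confirm which upstream resolvers are configured
resolvectl status
```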
I can’t say I know the answer but a few ideas:
- did you access it with a browser? Maybe it snitches on you or some extension does?
- did you try to resolve it with a public DNS server at any point (are you sure nothing forwarded the request to one)?
You could try it again, create the domain in the config and then do absolutely nothing. Don’t try to confirm it works in any way. If you don’t see the same behaviour you can do one of the above and then the other and see when it kicks in. If it gets picked up without you doing anything…then pass!