How would you evaluate the security of a language's ecosystem?

I think one interesting point to consider, coming from a recent discussion, is how you would approach evaluating the security of a given language's ecosystem. What would be a proper methodology for comparing it with alternative options? Would you only look at the security history of the packages you care about, or would you consider studying a larger sample? Would you only care about the base system? What weight would you attach to the various factors when making the final evaluation?

It’s a good question. With the web there is so much potential to create vulnerabilities with every new development that security scanners which can run as part of the build process would be high on my list, like Sobelow for Phoenix, Brakeman for Ruby, etc.
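
As a rough sketch of what that wiring might look like in an Elixir/Phoenix project (the Sobelow version constraint and the `check.security` alias name are my own assumptions, not something from this thread):

```elixir
# mix.exs (sketch): add Sobelow as a dev/test dependency and give CI a
# single task to run. `mix sobelow --exit` can return a non-zero exit
# status when findings meet a confidence threshold, which fails the build.
defp deps do
  [
    {:sobelow, "~> 0.13", only: [:dev, :test], runtime: false}
  ]
end

defp aliases do
  [
    "check.security": ["sobelow --exit"]
  ]
end
```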

Beyond that, it seems like you’d have to look at published vulnerability listings for the language/framework (CVEs, etc.) and how long they remain open.
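
On the project side, checking your own dependency tree against published advisories can at least be automated. A hypothetical mix.exs snippet (the mix_audit version constraint is an assumption; `mix hex.audit` itself only flags retired packages):

```elixir
# mix.exs (sketch): mix_audit checks locked dependencies against known
# security advisories; `mix hex.audit` (built into Hex) lists retired packages.
defp deps do
  [
    {:mix_audit, "~> 2.1", only: [:dev, :test], runtime: false}
  ]
end

# Then run both checks in CI:
#   mix hex.audit
#   mix deps.audit
```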

So much depends on exposure and access.

I think the security of a language’s ecosystem is directly related to the security of the base components and how intuitive they are to use securely. Basically, if you frequently have to say “Yeah, well, if you know how to do it properly, it’s secure”, then the ecosystem isn’t all that secure. It’s why you can say C or (historically) PHP are unsafe languages. You can absolutely do great stuff with them, but it’s also easy to mess up in catastrophic ways.

The previous thread also touched on something somewhat tangential: the maturity of the ecosystem affects overall security. So, to use the previous example, there isn’t currently a vetted, go-to, secure authentication library. If you aren’t comfortable self-auditing or rolling your own auth, then maybe another language and framework is the more secure option. That holds true for a lot of libraries at the moment. This has more to do with the relative maturity of the ecosystem than with underlying security, but it is ultimately a valid consideration.

Very good point, but how could we account for the effects of popularity? I would guess that the adoption rate correlates with how much attention a given technology receives from various security groups.

I don’t think it’s a good thing to trust anything.
Maybe establishing a framework of trust is like establishing a pathway to get hacked.
No one is a thief until they decide to be one.
And then come things like Meltdown and Spectre, and I wonder why bother at all.

A very valid approach, but it is pretty much impossible for an internal team to vet everything in a stack, so it seems some heuristics would need to be developed.

This is definitely an issue, especially for smaller shops/projects.

Would be nice if there were some type of automated pen testing service out there that could integrate with TravisCI to hammer the popular open source auth libraries automatically and publicly.

I think there was a company in this field that was looking for an Elixir dev on the forum a while back; they could probably provide a good bit of advice on this.
Found it:

Maybe @QuinnWilton could provide some advice on this.

There are a few things that could probably be tested in an automated fashion (timeouts, session fixation, etc.), but there is just too much that can go wrong in an auth library to make it all that meaningful. Might be fun to try though :stuck_out_tongue:
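
To give a concrete feel for one of those automated checks, here is a hedged sketch of a session-fixation test as a Phoenix/ExUnit test. The app name, login route, credentials, and the `"_my_app_key"` session cookie key are all hypothetical; a real suite would use the target app’s own setup:

```elixir
# Sketch: assert that the session cookie is renewed on login.
defmodule MyAppWeb.SessionFixationTest do
  use MyAppWeb.ConnCase, async: true

  test "session cookie changes after login", %{conn: conn} do
    # Hit a page that touches the session to obtain a pre-login cookie.
    conn = get(conn, "/")
    pre_login = conn.resp_cookies["_my_app_key"]

    # Reuse the same cookies for the login request (hypothetical params).
    conn =
      conn
      |> recycle()
      |> post("/login", %{"email" => "user@example.com", "password" => "secret"})

    post_login = conn.resp_cookies["_my_app_key"]

    # If the cookie is unchanged across login, an attacker-supplied session id
    # would remain valid after authentication: classic session fixation.
    refute pre_login == post_login
  end
end
```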

Unfortunately, the only way to feel secure is to vet everything.
Then you can blame Intel when you get hacked.
Or maybe people could start writing their code in an extremely modular way, with good readability and each piece specific to a single bit of functionality, so it can be easily vetted when it is used?