An IT professional’s job is not one I would wish upon everyone. Not only do you have users with the daily “crisis” that gets tagged urgent, with a subject line in all caps, or a C-level executive who calls you on holiday to fix their tablet. Occasionally we also notice a new security patch that needs to be rolled out company-wide. OpenSSL is one that comes to mind. That might come from a news article, a conglomeration of RSS feeds, a PSA notification on /r/sysadmin, or one of the 50 mailing lists we are on. Most of the time, it’s a mailing list announcement. But we are only human. We miss one, two, or ten, because of a badly tagged message and an overly aggressive regex filter. So, in the end, you go ahead and patch/update your programs before you become the low-hanging fruit hackers love to attack. Hopefully before the latest ransomware somehow gets past everything on a flash drive and starts infecting everything in your business </rant>

Over the past couple of years #Heartbleed, #FREAK, #VENOM, and others have brought what we do into the public eye. People are starting to understand what IT professionals actually do all day, besides grinding our own beans in the kitchen at 9:45 am getting ready for that 10 am meeting. The question I have always asked myself is: how do others filter through mailing lists, RSS feeds, and the constant “spam” that is exploit-db to find relevant data? While it’s occasionally fun to know about the latest Xen vulnerability, it isn’t something I need to worry about right now, as I do not have any Xen boxes. Knowing about a Docker escape would be a lot more useful for me.

While I toy with the concept of mailing lists in this article, I am by no means dismissing any other solutions. There are plenty of other avenues to take regarding security notices, and whole companies have built systems around these processes.

Mailing Lists

A fake mailing list example that we frequently receive:

[SomeArbitraryCode] Update package X to our new release v0.4.2.424242. This release fixes security bug(s) #19242 and #19241

It was only when I became interested in coding, and started asking myself why and why not, that I found out about mailing lists and news services. No, I’m not talking about the multinational news service that delivers your paper to your house each morning. I’m talking about security mailing lists where announcements are made about the latest vulnerabilities and patches, all graded on the severity of the bug’s possible effect on the security of the package.

Mailing lists are a great forum that allows everyone to comment and reply on their specific interests, and to share ideas on fixing problems or developing new features. They are also, I think, one of the greatest ways to broadcast announcements, which is of course why companies usually have some version of all_staff@ for announcements, and why every shopping site wants you to sign up to their mailing list. So, I went and joined a couple of security mailing lists, then my favourite distro list, and a handful of other technical ones, until I came to a conclusion. The same conclusion everyone comes to when faced with large amounts of raw data: what now?

Do I get up each morning and comb through the night’s activities, reading each thread to see if a program I use or work with is affected? So how do I filter it? How can I get only the data that is “relevant” to me? This is the million-dollar problem that everyone has in one form or another. This is also where “machine learning” for categorisation has helped where simple regex and threshold rules cannot keep up.
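To make the regex-and-threshold baseline concrete, here is a minimal sketch of the simplest possible filter: keep only messages whose subject mentions software we actually run. The watchlist and the subject lines are made-up examples, not real advisories.

```python
import re

# Hypothetical watchlist of software we actually run.
WATCHLIST = ["docker", "openssl", "nginx"]
PATTERN = re.compile(r"\b(" + "|".join(WATCHLIST) + r")\b", re.IGNORECASE)

def is_relevant(subject):
    """Keep only messages that mention something in our stack."""
    return bool(PATTERN.search(subject))

subjects = [
    "[SECURITY] OpenSSL 1.0.2 patch release",
    "Xen XSA-123: hypervisor privilege escalation",
    "Docker container escape via runc",
]
relevant = [s for s in subjects if is_relevant(s)]
```

This is exactly the kind of filter that bites you: a badly worded subject line sails straight past the word-boundary match, which is why people reach for smarter classification.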

Problems with Security Updates (In General)

They are constant, and changing.

The bug in OpenSSL brought “security” and 0-days into the public eye. This is not a bad thing. I’m personally glad that it was found. Everyone needs to take their own online security into their own hands, not leave it to someone else. In the end, you are responsible for the data that you release on the internet. But I digress; that is a thought for other blog posts.

The OpenSSL bug was the first of many major security releases in recent years. With Xen and now Flash, and Flash and Flash and Flash and Flash and Flash and Flash and Flash and Flash and Flash and Flash. Do you get it now? Flash needs to die, and these are only from 2015; there were plenty more after these, and I am sure there will be many more after this blog post. Not just for Flash, but for every application.

Staying on top of every release is not only time-consuming, due to the amount of data you have to sift through, but can also be mentally taxing if you let it be. This is where applications really help. A few people use solutions like IFTTT, but I much prefer the non-give-me-access-to-all-your-data way. Or, at least, a way in which you can audit and understand where the data is going and who has access to it, without having to read an 8-page privacy policy that links to even more policies.

Choose your own solution

As I myself am only in the concept phase, my “solution” won’t necessarily fit your problems. For my solution I decided to tackle the problem of WordPress vulnerability detection. “But Tim, this is already a solved problem with plugins such as Wordfence or (insert plugin name here).” Yes, on a single WordPress site. When you get to 200+ WordPress sites, you need a solution that scales. Wordfence, as a single plugin without the premium features, does not.

Step 1: Goals

My goal is to have a system with one main function, plus a couple of optional extras:

  • Send an actionable notification based on a subset of rules, so we know about it before it can become a problem.
  • (Optional) Add the vuln to our own db/cache (so we don’t alert again)
  • (Optional) Dashboard the vuln list? Because dashboards make pretty pictures.

Step 2: Aggregate

Subscribe to any and every security list out there that relates to the systems you want to alert on. As I am choosing WordPress and WordPress plugin vulnerabilities, wp-vulndb is conveniently the best place to start aggregating.

Aggregation Services:

  • Feedly, a service to aggregate all your RSS feeds into one. (Doesn’t “alert”, and search is paid-only)
  • Tiny Tiny RSS, self-hosted RSS aggregation like Feedly, just not paid.
  • IFTTT: If This Then That. Requires a third party to have access to your account, but you get to create your own alerts. It’s very customisable, so I have been told.
  • Any email service

Some Feeds for your pleasure:

If you enjoy security feeds, or are hearing about them for the first time, here are some feeds I can recommend. There are of course hundreds more options, such as the Linux kernel, NANOG, and MailOp lists, but those are up to the reader to find and enjoy.
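If you’d rather aggregate feeds yourself than hand them to a third party, pulling titles out of an RSS feed is a few lines of standard-library Python. The feed snippet below is a made-up stand-in for a real security feed:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 snippet standing in for a real security feed.
FEED_XML = """<rss version="2.0"><channel>
  <title>Example Security Feed</title>
  <item><title>OpenSSL 3.0.8 - Security Fix</title>
        <link>https://example.com/1</link></item>
  <item><title>Docker - Container Escape</title>
        <link>https://example.com/2</link></item>
</channel></rss>"""

def feed_titles(xml_text):
    """Extract every item title from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.iter("item")]

titles = feed_titles(FEED_XML)
```

In practice you would fetch the XML over HTTP on a schedule and diff against what you have already seen; the parsing step stays this small.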

Step 3: Parsing & Classification of data

As anyone in the IT field knows, monitoring your services and notifying on them is easy. “If you are getting constant notifications, you’re doing it wrong” is an old mantra I picked up from one of my old bosses, and it still rings true today. Just because you have information doesn’t mean it is any good; what matters is how relevant that dataset is right now. That is what makes all the difference between alerts that you need to act on and alerts that are just notifications.

Classification can come from any source, whether it be some simple regex [Product_Name - Version] or something as complex as your own machine-learning algorithms. As our original goal is WordPress plugins, and we have a specific “WordPress Vulnerability” mailing list, we do not need to worry about classification; by default the messages are “classified” already. We just need to parse the data.

If I were to choose a more generic mailing list, parsing and classification would become harder. You would not only need to identify patterns, you would also need to identify what is critical and what is not. The scoring mechanism defined by CVSS is a good guide to how soon you should be rolling out fixes.
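CVSS v3 publishes a qualitative severity scale alongside the numeric base score (0.0 None, 0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical), so a triage rule can be a tiny helper:

```python
def cvss_severity(score):
    """Map a CVSS v3 base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"
```

Anything rating High or Critical for software you run would jump the queue; Low can wait for the normal patch cycle.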

Lucky for me, WPVulnDB’s mailing list includes all the relevant information in the subject line, in a predictable format:

WordPress <= 5.0 - Authenticated File Delete 
Wordfence <= 7.1.12 - Username Enumeration Prevention Bypass
Display Widgets 2.6.0- - Backdoored
Pinfinity Theme <= 1.9.2 - Reflected Cross-site Scripting (XSS)

So a very simple pseudocode parser would handle 3 types: WordPress core, themes, and plugins:

Wordpress Affected Versions - Description # Wordpress
Name Theme Affected Versions - Description # Themes
Name Affected Versions - Description # Plugins

As I already have a database of all active plugins and their specific versions, matching on “Plugin_Name” and “Version_Number” becomes quite easy.
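The pseudocode above might look like this in Python. The regex and the type-detection heuristics are my own assumptions, inferred only from the example subject lines shown, so expect to adjust them against the real feed:

```python
import re

# Assumed subject format: "<name> <affected versions> - <description>".
SUBJECT_RE = re.compile(
    r"^(?P<name>.+?)\s+(?P<versions><=\s*[\d.]+|[\d.]+-?)\s+-\s+(?P<desc>.+)$"
)

def parse_subject(subject):
    """Split a WPVulnDB-style subject into type, name, versions, description."""
    m = SUBJECT_RE.match(subject)
    if not m:
        return None  # Subject didn't fit the expected format
    name = m.group("name")
    if name.lower() == "wordpress":
        kind = "core"
    elif name.endswith(" Theme"):
        kind = "theme"
        name = name[: -len(" Theme")]
    else:
        kind = "plugin"
    return {"type": kind, "name": name,
            "versions": m.group("versions"), "desc": m.group("desc")}
```

For example, `parse_subject("Wordfence <= 7.1.12 - Username Enumeration Prevention Bypass")` yields a plugin record with name `Wordfence` and versions `<= 7.1.12`, ready to check against the plugin database.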

Step 4: Alerting

Upon every entry into some type of temporary cache database (Elasticsearch?), we check the vulnerability against our list of known current versions of plugins and our WordPress version. If there is a match, we send out an alert saying:

Object: <Name>
Version: <Affected Versions>
Description: <Description>

Affected Sites:
<List of all sites with plugin installed>
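A sketch of that matching-and-alerting step, assuming a hypothetical inventory that maps each site to the plugin versions it runs, and a simple “installed version is at or below the affected version” rule:

```python
def version_tuple(v):
    """Turn '7.1.12' into (7, 1, 12) for comparison."""
    return tuple(int(p) for p in v.strip("-").split("."))

def affected_sites(vuln, sites):
    """Return sites whose installed plugin version falls in the affected range."""
    hits = []
    for site, plugins in sites.items():
        installed = plugins.get(vuln["name"])
        if installed and version_tuple(installed) <= version_tuple(vuln["max_version"]):
            hits.append(site)
    return hits

def format_alert(vuln, hits):
    """Render the alert template from the post."""
    return (f"Object: {vuln['name']}\n"
            f"Version: <= {vuln['max_version']}\n"
            f"Description: {vuln['desc']}\n\n"
            "Affected Sites:\n" + "\n".join(hits))

# Hypothetical inventory of sites and their installed plugin versions.
sites = {
    "blog.example.com": {"Wordfence": "7.1.12"},
    "shop.example.com": {"Wordfence": "7.2.0"},
}
vuln = {"name": "Wordfence", "max_version": "7.1.12",
        "desc": "Username Enumeration Prevention Bypass"}
hits = affected_sites(vuln, sites)
```

Here only `blog.example.com` matches, since `7.2.0` is already past the affected range; real version strings (betas, four-part versions) would need a more careful comparison.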

Step 5: Actioning

While it might not look like much, we now have the template of a system in place which gives us the ability to take decisive action on all affected resources. This is still a manual process and requires that pesky human interaction. We don’t need that. In this day of automated testing solutions, let’s remove the human element and automate the updates.

Instead of alerting straight away we could:

  • Add all the affected sites to an “Update” queue
  • For each item in the queue we kick off a job to our CI/CD infrastructure where it:
    • Creates a new temporary site
    • Performs the relevant updates
    • Does an E2E (End to End) test on our website (You have those right?)
    • If everything passes, deploy to/on our production site
    • Re-test!
    • Remove the site from “Update” queue, and add to our “Successful” queue (or “Failure” queue)
  • Once there are no more items to grab from our Update queue, we can rephrase our alert:
Object: <Name>
Version: <Affected Versions>
Description: <Description>

List of Successful Updated Sites:
<List of "Successful" queue, <link-to-build>>

List of Failed Updated Sites:
<List of "Failed" queue, <link-to-builds>>
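The queue-driven flow above can be sketched as a simple loop. Everything here is a stand-in: `run_pipeline` represents the real CI/CD job (temporary site, update, E2E tests, deploy), and the site names are invented.

```python
from collections import deque

def run_pipeline(site):
    """Stand-in for the CI/CD job: clone, update, E2E test, deploy.
    Placeholder logic: pretend every site except 'legacy.example.com' passes."""
    return site != "legacy.example.com"

def process_updates(affected):
    """Drain the Update queue, sorting sites into Successful and Failed."""
    update_queue = deque(affected)
    successful, failed = [], []
    while update_queue:
        site = update_queue.popleft()
        (successful if run_pipeline(site) else failed).append(site)
    return successful, failed

ok, bad = process_updates(["blog.example.com", "legacy.example.com"])
```

The final alert is then built from the two result lists, so a human only looks at the Failed queue instead of every affected site.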


Now that we have completely scoped out how our system should work, next is implementation! Let me know your thoughts in the comments, or if someone else has already solved these issues.

Edit: Someone in the comments suggested a good existing solution: Taranis, which looks very promising.