Changing the Uncontrolled Release of Offensive Tools (OSTs)

Daniel Gordon
8 min read · Dec 25, 2019


Before I get started, I encourage everyone to read some of the more constructive Twitter threads, and more importantly the posts by Andrew Thompson, Joe Slowik, Action Dan/@1njection, and Marcus Hutchins, who have written very thoughtful blogs on this topic and are in many ways much smarter than I am. I’m a blue team threat intel person who couldn’t hack his way out of a wet paper bag, but I have some pretty significant experience and something to say on this, and Action Dan challenged me to come up with some sort of solution, so here goes.

Step 1: Problem definition

There are lots of problems in infosec. This is not even close to the worst, but it’s worth talking about.

The biggest source of disagreement, in my opinion, is different perceptions of what the problem might be or disagreement that there is a problem that needs to be fixed, so we’re starting with problem definition.

Step 1a: Establishing that uncontrolled OSTs have caused harm.

While the blame for an attack ultimately rests with the miscreant, organizations big, medium, and small regularly suffer breaches enabled in part or completely by open source tools, some examples here and here and here and here. Making discussion more complicated, a number of miscreants use stolen or leaked tools such as Cobalt Strike or the Eternal series. As if that weren’t enough of a problem, miscreants also leverage dual-use tools such as Netcat or Sysmon (really the whole Sysinternals suite). Further confusing things, some security practitioners have lumped exploit Proofs of Concept (PoCs) in with OSTs, and while PoCs do get abused in a similar way, significant awareness has already been raised about them, which is part of why I’m optimistic about non-PoC OSTs. I’m using APT examples, but plenty of non-APTs are on board this train as well, including folks with no training. All of these different types of tools have their own associated ethical considerations related to their creation and maintenance, but they all share one thing in common: they’re in the wild, and it’s literally impossible to change that. They will not be regulated, and suggesting that they could or should be in any western country (and a lot of non-western countries) is ridiculous.

How many breaches in general involve uncontrolled OSTs? Nobody I can find is gathering data on it and even if they were, the data wouldn’t really be that useful…lots of breaches don’t have forensics or logging to identify all the tools that were used and in lots of cases miscreants cover their tracks or are patient enough for the evidence to be gone. Even when tools are found, sometimes there is no data on what was necessary to conduct the attack and what was ancillary. All I can do is cite @pmelson that “hundreds of anecdotes count as data.”

Can we even objectively say that all compromises are bad things? Honestly, no, and it’ll depend on who you ask. There are organizations whose stock recovers just fine after a breach and are desperately in need of a kick to do the basics, and there are organizations who are doing objectively shady things. But, there are also more and more cases where small businesses, hospitals, schools, and local governments are being directly harmed by targeted cyber attacks in ways that are pretty terrible. These victims are very unlikely to be staffed by well-resourced defenders or have Red Teams, and are victimized because they often do not have the resources to accomplish security fundamentals like patching, asset management, account management, vulnerability management, or secure backup.

Lots of OffSec folks have pointed out that miscreants would just buy or build their own tools if there weren’t uncontrolled OSTs for them to lean on. More accurately, if some of the examples I mentioned were magically removed (and they never will be), miscreants would move to a variety of backup options, including some freely available alternatives. Since that will never happen, I’m not going to spend a lot of time on it, but the time spent by adversaries learning or improving their tools is time that they’re not pwning people. The more costs and effort adversaries have to exert in order to develop things, the less money they have for training or paying competent operators or other parts of their activity.

Step 1b: Establishing that uncontrolled OSTs have clear, measurable benefits.

OSTs are used for general Red Teaming including adversary emulation, for building and testing detections and mitigations, for vulnerability scanning, for forensics, for password recovery, for all the various good sides of dual-use tools, and for lots of other clever applications. These benefits exist, and literally nobody is claiming that they don’t. Most of the organizations that can take advantage of these applications have well-resourced defenders and/or Red Teams, and have hopefully already tackled the more cost-effective security fundamentals that can stop or limit most breaches. OSTs are sometimes published with the intent of getting a patch developed, or of forcing software to be rewritten more securely.

OSTs can support learning, though this argument for the value of OSTs has always bothered me because it’s presented as a counterargument to regulation of existing OSTs, which, as I’ve pointed out, is not a real thing that will ever happen. The existing uncontrolled OSTs will always exist, in one form or another, for new people joining the field or those who lack resources for training or tools. Yes, they may have to spin up a vulnerable VM to run the tools against but the tools and associated learning will not go away. Yes, this still creates a small barrier for some people who want to join the fight and that’s not a good thing.

Step 1c: Establishing that the harms outweigh the benefits.

This is a hard one for lots of reasons. Measuring success or failure in security is hard, measuring risk is hard, and measuring how changes to a network, system, or piece of software have altered risk is at least partly smoke and mirrors and guessing. There is no way to balance the items in 1a against the items in 1b with objective data. @hexacorn took a stab at identifying the cost to adversaries. The best I can say is that we know about a lot of small and/or vulnerable victims of targeted ransomware attacks, some of which appear to involve OSTs, and lots of APT intrusions that do the same.

Step 2: Solutions

This is aspirational, mostly I type information at people.

No solution will fix the problem entirely. The solutions I suggest, if somehow they get implemented, will likely introduce a bunch more problems that many people will complain about on Twitter forever. The solutions won’t be enforceable by law because the community is international, laws around computer crimes are badly written in general, and hell would freeze over before you’d be able to build a consensus around passing a law. Some of these are things that people releasing OSTs are already doing and that’s a good thing.

With that said, here are my suggested solutions such as they are:

A) If you’re going to release an offensive/dual-use tool, it’s better to release it as a binary than to share the source code. When you do, build in something that defenders can write detections for: a significant string that AV signatures could look for, something in the formatting or content of C2, an unusual mutex, a beacon pattern, or all of the above. Yes, miscreants can reverse a tool to remove the “flags,” but this will create a barrier/delay/cost, and not a small one, against script-kiddy abuse, and give more defenders a chance to leverage and learn from your tool before miscreants use it without fear of detection. Yes, some defenders and AV vendors will be lazy and just look for your “flag,” but many well-resourced defenders won’t stop there, and defenders without a lot of resources will never see your tool until it shows up on their network as the result of a compromise. Yes, I share everyone’s frustrations with AV and other vendors, and Jason Kikta is correct that they should be held accountable for failures.
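To make the “build in a flag” idea concrete, here is a minimal sketch. Every name in it (the tool marker, the mutex name, the beacon interval) is invented for illustration and does not come from any real tool; a real implementation would live in the tool’s source, with the mutex registered via the OS (e.g., CreateMutex on Windows):

```python
# Hypothetical sketch: deliberately detectable markers baked into a released tool.
# All names and values here are invented for illustration.

import hashlib

# 1. A distinctive static string that AV/YARA signatures can match on.
TOOL_MARKER = "OST-DEMO-TOOL-v1.0-DETECT-ME"

# 2. A predictable mutex-style name (on Windows this would be passed to
#    CreateMutex) so host-based tools can flag any process holding it.
MUTEX_NAME = "Global\\ost_demo_tool_" + hashlib.sha256(TOOL_MARKER.encode()).hexdigest()[:8]

# 3. A fixed, unusual beacon interval (seconds) so the C2 traffic stands out
#    in NetFlow/Zeek data instead of blending into normal jitter.
BEACON_INTERVAL = 1337

def build_beacon(payload: bytes) -> bytes:
    """Prefix every C2 beacon with the marker so network signatures are trivial."""
    return TOOL_MARKER.encode() + b"|" + payload
```

A competent adversary can strip all three markers after reversing the binary, which is exactly the point: the cost lands on the attacker rather than the defender.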

B) When you share a tool, please share signatures or detection mechanisms, ideally in common signature languages like Snort or YARA, or release an additional tool to help with mitigation. If you need help with this, let me know; I’ve seen other blue teamers volunteer their time and effort for this as well. There are tools to make this easier, including Snorpy, yarGen, and YaraGenerator. Yes, I know Snort as a language has a lot of problems, but it’s still an industry standard that’s usable by many less-resourced orgs.
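As a sketch of what “ship detections with the tool” might look like: a YARA-style rule for an imaginary tool marker, plus a naive pure-Python scanner approximating the rule’s condition. The rule name and marker strings are hypothetical, and a real release would ship an actual .yar file for the YARA engine rather than this stand-in:

```python
# Hypothetical example: the rule name and marker strings are invented.
# A real release would distribute a .yar file consumed by the YARA engine.

YARA_RULE = r"""
rule OST_Demo_Tool
{
    strings:
        $marker = "OST-DEMO-TOOL-v1.0-DETECT-ME"
        $mutex  = "ost_demo_tool_"
    condition:
        any of them
}
"""

# The same markers the rule above looks for, as raw bytes.
MARKERS = [b"OST-DEMO-TOOL-v1.0-DETECT-ME", b"ost_demo_tool_"]

def matches(sample: bytes) -> bool:
    """Naive stand-in for the rule's 'any of them' condition."""
    return any(marker in sample for marker in MARKERS)
```

Publishing the rule alongside the tool means even defenders who never touch the tool itself get coverage on day one.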

C) This one will be controversial, but charging even a small amount for an OST, while giving the associated defensive tool away for free, is a partial solution. Yes, a lot of adversaries have mechanisms for acquiring things illicitly, but this is still a small barrier for miscreants, and it creates a paper trail where some miscreants won’t want one. Yes, this creates some gatekeeping. Yes, this creates problems in non-western countries for people at risk from their government.

There is a challenge in threat intel: getting usable intel to the people who need it, who have the ability to use it, and who can be trusted not to disclose it, in time to help them, because releasing it publicly tips off the adversary, who can then make slight modifications to avoid detection. There is an analogy here for OST release.

D) I would encourage folks to release OSTs to people they trust and to orgs they know have the resources to actually do something with them. Yes, some adversaries have pen-testing front companies. Yes, some new OSTs will inevitably leak. Yes, this kind of sharing will require introverts to talk to other introverts. Yes, this kind of sharing will force the creation of new, and sometimes problematic, sharing mechanisms and communities. Yes, some adversaries will continue to share and maintain OSTs like the China Chopper webshell, and there’s nothing to be done about it.

Are those who release tools at fault for the compromises conducted with them? Typically there’s plenty of blame to go around for vendors, victim orgs, blue teams, and governments, and, I repeat, most of the blame rests with the miscreant that performed the attack. But uncontrolled release of OSTs without mitigations can still be a net negative.

If you’ve made it this far, thank you for reading, and sorry for the long-winded post. I don’t expect uncontrolled release of OSTs to stop, but I would like them to become similar to publicly dropped 0-days, which are less commonly dropped on social media or at Black Hat and more commonly shared responsibly…and then dropped a little bit later.
