A quick thread on intelligence analysis in the context of cyber threat intelligence. I see a number of CTI analysts get into near analysis-paralysis phases from overthinking their assessments or obsessing over whether they might be wrong. (1/x)

Consider this scenario. A CTI analyst identifies new intrusions and, based on the available collection and their expertise, notes that the victims are all banks. Their consumer wants to know when threats specifically target banks (not just that banks are victims).
The CTI analyst has enough, from their collection at this time and based on their expertise, to make an activity group (leveraging the Diamond Model in this example) that meets their consumer's requirement. So what's the problem?
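To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what "making an activity group" could look like as data. The DiamondEvent fields map to the four Diamond Model vertices; the grouping rule, field names, and example values are illustrative assumptions, not any standard or vendor implementation.

```python
# Illustrative sketch only: a minimal Diamond Model event and a naive
# activity-group test. The min_shared rule is an assumption for
# illustration, not standard tradecraft.
from dataclasses import dataclass

@dataclass(frozen=True)
class DiamondEvent:
    adversary: str       # e.g. tracked persona, or "unknown"
    capability: str      # e.g. malware family or TTP
    infrastructure: str  # e.g. C2 domain/IP
    victim: str          # e.g. sector of the victim org

def forms_activity_group(events, min_shared=2):
    """Naive rule: events cluster into an activity group when every pair
    shares at least `min_shared` of the four diamond features."""
    features = ("adversary", "capability", "infrastructure", "victim")
    def shared(a, b):
        return sum(getattr(a, f) == getattr(b, f) for f in features)
    return all(shared(a, b) >= min_shared
               for i, a in enumerate(events) for b in events[i + 1:])

intrusions = [
    DiamondEvent("unknown", "TrojanX", "evil-c2.example", "banking"),
    DiamondEvent("unknown", "TrojanX", "evil-c2.example", "banking"),
    DiamondEvent("unknown", "TrojanX", "other-c2.example", "banking"),
]
print(forms_activity_group(intrusions))  # True under this toy rule
```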
The CTI analyst begins to overthink it. "What if I had more collection? Would my analysis change? I really don't *know* they aren't also targeting mining companies in Australia, as I don't have collection there."
The analyst knows their analysis is going to be shared. Maybe even made public. "What if another team or professional intelligence firm has more collection and ends up noting that it isn't banking-specific at all? Banks are victims, not targets. Will my consumer distrust me later?"
It's a scenario I see often. Many of my #FOR578 students run into it. "What if I say something about ICS, what will Dragos say? What if I say something about this APT, what will FireEye say? What if I say something about $X, what will CrowdStrike say?"
All of our assessments are made using our expertise, at a point in time, with the available collection. One of the values of estimative language is that it explicitly accounts for the gaps. If you have significant collection gaps, bake that into the assessment, e.g. Low Confidence.
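One hypothetical way to bake the gaps into the product itself: derive the confidence label from collection coverage, so the estimate can't be stated without the gap. The function name, thresholds, and wording below are assumptions for illustration, not a published standard.

```python
# Illustrative sketch only: tie the estimative confidence level to how
# much of the relevant collection space you can actually see.
def confidence_label(sources_covered, sources_relevant):
    """Map collection coverage to an estimative confidence level.
    Thresholds are arbitrary placeholders for illustration."""
    coverage = sources_covered / sources_relevant
    if coverage >= 0.8:
        return "High Confidence"
    if coverage >= 0.5:
        return "Moderate Confidence"
    return "Low Confidence"

# e.g. visibility into 2 of 6 relevant sectors/regions
label = confidence_label(2, 6)
print(f"{label}: the activity targets banks, based on current collection.")
```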
Too many analysts and consumers look for facts. Intelligence is analysis. It's allowed to evolve and change as collection, time, or other considerations change. We're advising consumers to help them make better choices. Not to know the unknowable.
I would advise that analyst to make the group. Sure, they can always try to coordinate with others, share, and see if other groups/teams see what they see or can expose collection gaps. But that's not always necessary or possible.
One of my favorite articles is the Fifteen Axioms for Intelligence Analysts by Frank Watanabe. It's been in my CTI class for years now as the last slide of the course: https://t.co/45kzw5kLz7
He doesn't start off with anything about being wrong or making mistakes. He starts with "Believe in your own professional judgements." #2 is "Be aggressive, and do not fear being wrong." It's not until #3 that we hear "It is better to be mistaken than to be wrong."
Build confidence in yourself. Then build trust with your consumer: that you are going to deliver the best judgement based on the insights you have at the time, and that if it changes, you'll let them know and admit it. That's really hard to do, but it's vital.
Don't sit on your intelligence and fail to disseminate it, or overthink it to the point of never finishing your work, if it can be valuable to your consumer. It's always a point-in-time judgement based on what you know at that time. You'll never have enough to feel comfortable.
The balance is in not blasting your consumer with thinly supported guesses, though. Go through your processes. Use your team. "Aggressively pursue collection of information you need." But then make the call. If it was easy, it wouldn't be intelligence.
