archive.

I first registered the electricfork.com domain in 2002 after writing a janky Perl script to combine random words together – one of the first combinations it produced was “electricfork”. The name doesn’t have any particular meaning, and it is a coincidence that I began working in the electric sector a few years later. This site is updated rarely.
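For the curious, the word-mashing idea is simple enough to sketch. This is a rough reconstruction in Python rather than the original Perl; the word list and function name are invented for illustration:

```python
import random

# Hypothetical reconstruction of the word-mashing idea (the original was a
# Perl script; this word list is made up for illustration).
WORDS = ["electric", "fork", "copper", "lantern", "relay", "signal"]

def random_domain(words, count=2):
    """Glue together `count` distinct, randomly chosen words into a
    candidate .com domain name."""
    return "".join(random.sample(words, count)) + ".com"

print(random_domain(WORDS))  # e.g. "electricfork.com"
```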

From 2006 to 2012 I hosted a blog on this domain. It served as a sounding board for ideas and was, frankly, low quality and low volume – just 102 blog posts over those six years. That said, once or twice a year I’ll still get questions or references from something I wrote on it. Nostalgically, I dug up the top five most-viewed posts in the Wayback Machine and am reposting them here, with some annotations along the sidebar.

CIA Triad

March 1, 2010

Let’s start with a list:

“Our new company policy must protect Confidentiality, Integrity, and Availability”

“The goal of information security is the protection of the CIA Triad”

“Before we design this architecture, we need to assess the Risk of Availability, Integrity and Confidentiality”

Where did the concepts of the CIA trinity come from?  So far I’ve pinpointed Confidentiality being addressed by [LaPadula and Bell] in 1976 in their mandatory access control model for Honeywell Multics.  This, as you may have guessed, was to address the problem of disclosure of classified data on information systems.

Next, I found [Clark and Wilson]’s 1987 work on Integrity, recognizing that the commercial sector’s primary focus was on the integrity of the data on its information systems (think: accounting data).

Both of these derived from “multilevel security” (think: Orange Book, 1983) as an operating system design principle.  And the third leg that creates the triumvirate?  Availability.  I simply couldn’t find anything I could use as an authoritative source.  If I were to guess, the Morris Worm may have influenced Availability’s rise to the status it now holds.

So when did we accept the wisdom that CIA is the core of information security?  When did CIA become the language of risk?  When did we make the conscious decision to apply system design principles to complex systems of systems, policy, and more?  CIA is good as an anchor while architecting a system.

I’m hesitant to say CIA is good in wider contexts.  Indeed, I cringe when it’s used outside of system design principles.  It’s an oversimplification that risks creating blind spots in thought.  For instance, CIA does not address misuse of the system, especially when that misuse has no functional impact.  If a system has a loss of positive control (say, it’s part of a botnet) and begins sending spam at a rate of 10 messages/minute, does that impact CIA?  See [Tragedy of the Commons].

I’m also not convinced CIA can truly represent secure systems of systems (networks) in any meaningful (indeed, measurable) manner due to the asymmetric conditions.  Ignoring high complexity, the pace of change to networks is too rapid to create a secure state that can be enforced.  A simple addition of one device could completely unbalance any CIA which was perceived to be in place.

Annotations and commentary

I believe this was one or two years after I obtained my CISSP. It agitated me that the coursework offered no reference, source work, or justification for the CIA triad, nor any historical context for why it regarded CIA as a foundational component. It just is.

It still bugs me, actually. Apparently it bugs others as well: this is my top post, and my most linked and referenced one.



Annotations and commentary

I was reading 20-30 books in this timeframe with a lot of exploration of different topics and attempting to apply concepts into cybersecurity. It is a low effort post, but it made #2 in views.

Blackboard Security

January 18, 2011

Ronald Coase is a Nobel Prize-winning economist.  I’ve been reading a few of his papers.  As an introduction to a 1974 essay entitled “The Lighthouse in Economics”, he states:

“… They paint a picture of an ideal economic system, and then, comparing it with what they observe (or think they observe), they prescribe what is necessary to reach this ideal state without much consideration for how this could be done.  The analysis is carried out with great ingenuity but it floats in the air.  It is, as I have phrased it, blackboard economics.”

As a security professional, this should be given careful consideration.



Clausewitz and Defense in Depth

October 13, 2010

I want to introduce and examine Clausewitzian ideas of friction.

In an attempt to explain why the seemingly simple concepts of warfare are actually quite complex, Clausewitz (in 1832) suggested a mechanism called ‘friction’ to help distinguish ‘war on paper’ from ‘real war’ in a book titled “On War”.  This idea of friction attempts to account for external factors such as chance, weather, individual will, and opponent strength, and how such variables will swiftly throw any plans out the window.  In my words: complexities in the battlefield must never be assumed to be accounted for.  When I speak of external factors, it’s important to point out that your ‘external factors’ may overlap with the offense’s ‘internal factors’ and vice versa.

COL John Boyd recognized that Clausewitz did not take this idea far enough.  The commander has the ability to increase this friction for the enemy as well as reduce his own.  A great Boyd quote:

“The essence of winning and losing is in learning how to shape or influence events so that we not only magnify our spirit and strength but also influence potential adversaries as well as the uncommitted so that they are drawn toward our philosophy and are empathetic towards our success.” [source]

With that as a backdrop, let’s talk about “Defense in Depth”.  The current practice of vulnerability management is arguably thought of as a major component of “Defense in Depth”.  Its effectiveness (or lack thereof) is something I’ve been known to rant about at great length.  This idea of friction points out its weakness as a conventionally relied-upon tactic.  Vulnerability management focuses on removing one’s known weaknesses before they can cause harm.  Other conventional components of “defense in depth” include blocking, filtering, proxies, antivirus, authentication, access controls, etc.  I suggest this defense in depth methodology, either explicitly or implicitly, aims to create a moderate-to-high-deterrence environment.  That’s where things stop.

“Defense in depth” has become a standard argument in security architecture cost/complexity analysis, at the cost of not applying the concepts of Clausewitzian friction.  But this is just staging where the ‘battle’ will be held.  This is you, as commander, preparing for invasion by increasing friction for the enemy through closing doors and windows and up-righting walls and turrets (incidentally, it also adds a degree of friction for you: all this work takes valuable time and effort).  And that’s where you stop.  But we can’t stop there: we need to additionally throw up barriers and traps and make the ‘terrain’ as difficult an environment as possible for the opponent through the use of deception, feints, warning signals, intelligence, etc.

I believe a great and fundamental question that needs to be raised is: does your implementation of “Defense in Depth” increase friction for the offense while decreasing friction for the defense?

Also, you should follow the #mircon hashtag this week.  Lots of good tweets on interesting subjects which inspired me to finish this post.

Note:  The bad guys have learned these lessons already.  Indeed, their infrastructures are far more resilient and clever than the ones they are attacking.

One more implication: using standard best practices can harm you.  The procedures of (let’s say) patching or enforcing complex passwords create a certain degree of shared understanding between the defense and the offense of the tactics and procedures your organization will be adhering to (aka lowering the friction for both sides).  (Don’t even get me started on antivirus.)

Anyone use this approach?

Annotations and commentary

In the 2010 timeframe I was geeking out over John Boyd and similar authors, including Clausewitz. In my defense, I was studying these before they were fashionable in the cybersecurity community. Nowadays it’s tiresome how frequently OODA Loop references and Clausewitz quotes show up in presentations.

There are good lessons and concepts in these works, though. Much like this post, they can be contrived and overplayed, with the authors focusing on being clever and overly abstract rather than practical.



Annotations and commentary

This was referencing Richard Bejtlich’s taosecurity blog. I never told Richard directly, but his blog and first book were very influential in my early career.

This was the very early days of “APT” before the acronym was in the security zeitgeist (The APT1 report wouldn’t come out until February 2013, nearly two years later). It was also when I was still geeking out over OODA Loops and state policy and tradecraft.

One or two years later, Timothy Thomas very generously sent me several hardcopies of other books (Quantum Dragon, Recasting the Red Star, and Cyber Silhouettes). I unfortunately can’t find a current link to his 2004 paper “Russia’s Reflexive Control Theory and the Military”. I believe that to be most prescient in how disinformation could be used. I wish I had saved those PDFs!

A more recent book on some of the historical nation-state level attacks and doctrine is “A Fierce Domain: Conflict in Cyberspace” by Jason Healey.

Dragon Bytes Followup

August 29, 2011

Last year Richard posted a review of “Dragon Bytes” by Timothy L. Thomas. The book was no longer being published when Richard reviewed it, to the extent that he had to do a followup post answering questions on how to obtain a copy.

Fast forward nearly a year, and I was able to obtain my own copy through Amazon’s new/used program. I liked the book. I did a few searches and found several of his papers on the Defense Technical Information Center. Some of them are directly related to “Dragon Bytes”, while a few cover Russian theory and one covers Al Qaeda. They’re worth checking out if you have not had an opportunity to get a copy of the book due to its unavailability.

The Chinese Military’s Strategic Mindset

Google Confronts China’s “Three Warfares”

Russian and Chinese Information Warfare Theory and Practice

Russian Views on Information Based Warfare

Dialectical Versus Empirical Thinking: Ten Key Elements of the Russian Understanding of Information Operations

Al Qaeda and the Internet: The Danger of “Cyberplanning”

Also, I recommend “On China” by Henry Kissinger if this subject interests you.



Defensive Kill Chain

November 7, 2011

So I became aware of the intrusion kill chain in 2009 when Mike Cloppert referenced it in one of his presentations at the SANS Incident Detection Summit (I can’t find an agenda for this).  In 2010 he released a [formal paper] on the concept.  If you’re not familiar with the intrusion kill chain, please pause and read it.  It’s worth your time.  Don’t TL;DR me.

I recently used the kill chain as an example in a few presentations I gave.  That made me think a bit more about the kill chain concept.  Specifically I asked the question:  Does the defensive side have a kill chain?

Short answer?  No.

Long answer?  A kill chain relies on the fact that “any one deficiency will interrupt the entire process.”  Through an entirely inductive reasoning process I’ve identified five steps of defense that, if interrupted, will greatly disrupt the defensive process.  Unlike a kill chain, disrupting one phase will not necessarily interrupt the entire defensive process or posture.

So what’s my back-of-the-napkin defensive kill chain?  More precisely, what would a targeted attack focus on in order to disrupt a defensive operation?  First, the attacker will penetrate the operations security of the target’s defense.  This is done through a variety of means, including OSInt, HUMInt, etc.  Next, the attacker will find weaknesses in the orientation of the defensive operation.  This means taking advantage of both the defenders’ and the overall target’s mindset, expectations, and beliefs.  This includes social engineering.  It also includes understanding defensive operations’ shifts, holidays, and general abilities.  Next, the attacker will leverage this combined information and subvert the IT architecture.  This is exploitation, this is escalation, this is action.  This is done in tandem with subverting the security architecture.  This avoids detection and prevention measures; this defeats any defense-in-depth control which is not already inherently built into the IT architecture.  Finally, the attacker will defeat any responsive/reactive measures by the defensive operation.  This means working faster and better than the defensive team.

The short version of the Defensive Kill Chain: Operations Security -> Orientation -> IT Architecture -> Security Architecture -> Response Activities
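To make the contrast with an intrusion kill chain concrete, here is a minimal sketch in Python. The phase names come from the chain above, but the scoring scheme is my own illustrative assumption, not a real measurement: the point is only that breaking one defensive phase degrades the posture without zeroing it out, whereas a kill chain is all-or-nothing.

```python
# The five phases of the back-of-the-napkin defensive kill chain.
DEFENSIVE_CHAIN = [
    "Operations Security",
    "Orientation",
    "IT Architecture",
    "Security Architecture",
    "Response Activities",
]

def defensive_posture(disrupted):
    """Crude 0.0-1.0 posture score (invented for illustration): each
    disrupted phase weakens the defense, but no single disruption
    collapses the whole posture."""
    intact = [p for p in DEFENSIVE_CHAIN if p not in disrupted]
    return len(intact) / len(DEFENSIVE_CHAIN)

def kill_chain_succeeds(disrupted):
    """Contrast: in an intrusion kill chain, any one broken link
    stops the entire attack."""
    return len(disrupted) == 0
```

The asymmetry is the whole point: `kill_chain_succeeds` fails on any single disruption, while `defensive_posture` only degrades gracefully.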

I’m wishy-washy on this as an idea, but it’s a fun one that I may use and strengthen in the future.

Annotations and commentary

This was a very dumb post, in hindsight. But it made the top 5!