Table 3 Comparisons of the proposed Insider threat detection algorithms based on theoretical strengths

From: Detection and prediction of insider threats to cyber security: a systematic literature review and meta-analysis

| Criteria | Ambre et al. 2015 [15] | Mayhew et al. 2015 [29] | Ahmed et al. 2014 [14] | Zhang et al. 2014 [18] | Axelrad et al. 2013 [25] | Eldardiry et al. 2013 [22] | OCSVM: Parveen et al. 2013 [12] | GBAD: Parveen et al. 2013 [12] | Brdiczka et al. 2012 [24] | Chen et al. 2012 [13] | Raissi-Dehkordi et al. 2011 [21] | Eberle et al. 2009 [27] | Tang et al. 2009 [26] | Yu et al. 2006 [23] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unbounded patterns and time lags between activities^a | 1 | 0 | 0 | 1 | 1 | 0.5^k | 0 | 1 | 1 | 0 | 0.5^l | 1 | 1 | 0 |
| Data non-stationarity^b | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| Individuality^c | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| High dimensionality^d | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
| Interaction effects^e | 1 | 1 | 1 | 0.5^m | 1 | 0.5^n | 0^o | 0^o | 1 | 1 | 1 | 1 | 1 | 1 |
| Collusion attacks^f | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| False alarms^g | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| Class imbalance & undetected insider attacks^h | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| Uncertainty^i | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Number of free parameters^j | 0 | 0.14 | 0.14 | 0 | 0 | 0 | 0.25 | 0 | 0 | 1 | 0.25 | 0 | 0 | 0.14 |
| Total | 4 | 6.14 | 4.14 | 5.5 | 4 | 5 | 2.25 | 4 | 6 | 4 | 4.75 | 6 | 4 | 4.14 |

^a Criterion 1: Unbounded patterns and time lags between activities: if the proposed algorithm is based on hierarchical Bayesian networks (HBNs), then it scores 1 point; otherwise it scores zero
^b Criterion 2: Data non-stationarity: if the algorithm employs a CBM(s), then it scores 1 point; otherwise it scores zero
^c Criterion 3: Individuality: if the algorithm employs an IBM(s), then it scores 1 point; otherwise it scores zero
^d Criterion 4: High dimensionality: if the algorithm is based on an HBN(s), then it scores 1 point; otherwise it scores zero
^e Criterion 5: Interaction effects: if the algorithm is based on an HBN(s) or on a machine learning algorithm that uses all features simultaneously to predict the output, then it scores 1 point; otherwise it scores zero
^f Criterion 6: Collusion attacks: if the proposed algorithm employs a RUM, then it scores 1 point; otherwise it scores zero
^g Criterion 7: False alarms: if the insider threat detection algorithm employs an IM(s), then it scores 1 point; otherwise it scores zero
^h Criterion 8: Class imbalance and undetected insider threats: if the insider threat detection algorithm adopts the AD-based approach, then it scores 1 point; otherwise it scores zero
^i Criterion 9: Uncertainty: if the insider threat detection algorithm is based on a FIS, then it scores 1 point; otherwise it scores zero
^j Criterion 10: Number of free parameters in the model: each algorithm receives a score equal to the inverse of its number of free parameters. HBNs are assigned a score of zero because the HBN-learning problem is known to be NP-hard
^k The proposed algorithm is a hybrid of Markov networks and GMMs. Markov networks can handle unbounded patterns, but GMMs cannot, so the score is 0.5
^l The proposed algorithm consists of two types of models: individual behaviour models and resource usage models. All of the variables in the resource usage model are in the frequency domain, so that model is not affected by this challenge. However, the variables of the individual behaviour model are in the time domain and are susceptible to it
^m This algorithm is a hybrid of Naïve Bayes (NB) and HBNs. HBNs can capture feature interactions, but NB cannot, so the score is 0.5
^n This algorithm is a hybrid of GMMs and Markov networks. The GMM deals with a small subset of features at a time and therefore cannot address feature interactions, whereas the Markov network can. Hence the score is 0.5
^o GBAD and OCSVM are ensembles of classifiers whose individual outputs are combined to produce the final output. Hence these algorithms cannot address interaction effects
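
Footnote j defines criterion 10 as the inverse of the model's number of free parameters, and each entry in the Total row is simply the sum of the ten criterion scores for that algorithm. The short Python sketch below is not taken from any of the reviewed papers; it only illustrates this aggregation rule. The figure of 7 free parameters used in the example is an inference from the tabulated value 0.14 (approximately 1/7), not a number stated in the table.

```python
# Minimal sketch (illustration only) of the Table 3 scoring scheme:
# criteria 1-9 are scored 1, 0.5, or 0 per the footnotes above, and
# criterion 10 is the inverse of the number of free parameters
# (zero for HBN-based models, whose learning problem is NP-hard).
# The total is the sum of all ten criterion scores.

def criterion10_score(num_free_parameters=None, hbn=False):
    """Inverse of the number of free parameters; zero for HBN-based models."""
    if hbn or not num_free_parameters:
        return 0.0
    return 1.0 / num_free_parameters

def total_score(criteria_1_to_9, num_free_parameters=None, hbn=False):
    """Sum of the nine theoretical-strength scores plus the criterion-10 score."""
    return sum(criteria_1_to_9) + criterion10_score(num_free_parameters, hbn)

# Mayhew et al. 2015 [29]: criteria 1-9 scores taken from the table; the count
# of 7 free parameters is inferred from the tabulated 0.14 (approx. 1/7).
print(round(total_score([0, 1, 1, 0, 1, 1, 1, 1, 0], num_free_parameters=7), 2))  # 6.14
```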