Tests whether the Analysis of Competing Hypotheses reduces cognitive bias, and proposes a more effective approach.

Recent high-profile intelligence failures, from 9/11 to the 2003 Iraq war, show that cognitive bias in intelligence analysis can have catastrophic consequences. This book critiques the reliance of Western intelligence agencies on the Analysis of Competing Hypotheses (ACH), a method for intelligence analysis developed by the CIA in the 1990s. The author puts ACH to the test against two key cognitive biases in an experimental setting, drawing on unique empirical research facilitated by the UK's Professional Heads of Intelligence Analysis unit at the Cabinet Office, and finds that the theoretical basis of the ACH method is significantly flawed. Combining the insight of a practitioner with over eleven years of experience in intelligence with both philosophical theory and experimental research, the author proposes an alternative approach to mitigating cognitive bias, one that focuses on creating the optimum environment for analysis and challenges current leading theories.

Key features and benefits

- Reveals that a key element of current training provided to the UK and US intelligence communities (and likely to all Five Eyes and several European agencies) has no proven ability to mitigate cognitive biases
- Demonstrates that judging the credibility of information from human sources makes intelligence analysis more complex and cognitively demanding than non-intelligence analysis
- Explains the underlying causes of cognitive biases, based on meta-analyses of existing research
- Shows that identifying the ideal conditions for intelligence analysis is a more effective way of reducing the risk of cognitive bias than the use of ACH

Martha Whitesmith is a Visiting Research Fellow in the Department of War Studies at King's College London.