“From an analytical perspective, AI has a hard time interpreting intent,” adds Curwin. “Computer science is a valuable and important field, but social computing scientists are making the big leaps in enabling machines to interpret, understand and predict behavior.”
To “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something that AI can learn.”
Although machine learning and big data analytics can provide predictive analysis of what might happen or is likely to happen, they cannot explain to analysts how or why they arrived at those conclusions. The opacity of AI reasoning and the difficulty of verifying sources, which consist of extremely large data sets, can affect the actual or perceived robustness and transparency of those conclusions.
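To make the opacity problem concrete, the sketch below contrasts a stand-in for a black-box scorer, which returns only a number, with a transparent rule-based scorer that returns the same kind of number alongside the contributions that produced it. This is purely illustrative; the features, weights, and threat-scoring scenario are invented.

```python
import math

# Illustrative contrast between an opaque score and an auditable one.
# All feature names, weights, and the scenario itself are hypothetical.

WEIGHTS = {"travel_anomaly": 0.7, "network_overlap": 1.3, "financial_flag": 0.9}

def opaque_score(features: dict) -> float:
    """Stands in for a learned model: returns a score, but no rationale."""
    raw = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return round(1 / (1 + math.exp(-raw)), 3)  # squashed to 0..1

def transparent_score(features: dict) -> tuple[float, list[str]]:
    """Same inputs and arithmetic, but every contribution is recorded."""
    raw, reasons = 0.0, []
    for name, weight in WEIGHTS.items():
        value = features.get(name, 0.0)
        if value:
            contribution = weight * value
            raw += contribution
            reasons.append(f"{name}={value} contributed {contribution:+.2f}")
    return round(1 / (1 + math.exp(-raw)), 3), reasons

case = {"travel_anomaly": 1.0, "financial_flag": 0.5}
print(opaque_score(case))       # a bare number an analyst cannot interrogate
print(transparent_score(case))  # the same number, plus the reasons behind it
```

The point of the contrast is not the arithmetic, which is identical, but that only the second version leaves an analyst something to argue with.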
Transparency in reasoning and sourcing are requirements of the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also statutorily required, prompting calls within the US government to update such standards and laws in light of the growing prevalence of AI.
Machine learning and algorithms, when used for predictive judgments, are also considered by some intelligence practitioners to be more art than science. That is, they are prone to bias and noise, and they may rely on methodologies that are unsound and lead to errors similar to those found in the criminal forensic sciences and arts.
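The bias-and-noise distinction can be seen in a small simulation (all numbers below are invented): bias shows up as a systematic offset from the true value, while noise shows up as scatter among judgments of the same question.

```python
import random
import statistics

# Hypothetical illustration: 1,000 predictive judgments of an outcome whose
# true probability is 0.30. A noisy process scatters around the truth, a
# biased process is systematically off, and a process can suffer from both.
random.seed(7)
TRUE_PROBABILITY = 0.30

noisy_only   = [random.gauss(TRUE_PROBABILITY, 0.10) for _ in range(1000)]
biased_only  = [TRUE_PROBABILITY + 0.15 for _ in range(1000)]
biased_noisy = [random.gauss(TRUE_PROBABILITY + 0.15, 0.10) for _ in range(1000)]

for label, judgments in [("noise only", noisy_only),
                         ("bias only", biased_only),
                         ("bias + noise", biased_noisy)]:
    bias = statistics.mean(judgments) - TRUE_PROBABILITY  # systematic error
    noise = statistics.pstdev(judgments)                  # scatter across judgments
    print(f"{label:12s}  bias={bias:+.3f}  noise={noise:.3f}")
```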
“Algorithms are just a set of rules, and by definition they’re objective because they’re completely consistent,” says Welton Chang, co-founder and CEO of Pyrra Technologies. By that definition, objectivity means applying the same rules over and over again; differences in responses would be the evidence of subjectivity.
“It’s different when you consider the philosophy of science tradition,” Chang says. “In that tradition, what counts as subjective is one’s own perspective and bias. Objective truth derives from consistency and agreement with external observation. When you judge an algorithm solely on its outputs, not whether those outputs match reality, then you miss built-in biases.”
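Chang’s point can be restated as a toy example (the rule, records, and ground truth below are fabricated): a deterministic rule passes the consistency test on every run, yet comparing its outputs against reality exposes a built-in bias that consistency alone never reveals.

```python
# Hypothetical example: a perfectly consistent rule that is still biased.
# "Consistency" is checked by re-running the rule; objectivity in the
# philosophy-of-science sense is checked against ground truth.

def risk_rule(record: dict) -> bool:
    """Deterministic rule: flag anyone from region 'X' with more than 2 trips."""
    return record["region"] == "X" and record["trips"] > 2

records = [
    {"region": "X", "trips": 5, "actual_risk": False},
    {"region": "X", "trips": 3, "actual_risk": False},
    {"region": "Y", "trips": 9, "actual_risk": True},
    {"region": "Y", "trips": 1, "actual_risk": False},
]

# Consistency test: the rule never disagrees with itself across runs.
consistent = all(risk_rule(r) == risk_rule(r) for r in records)

# Ground-truth test: how often do the outputs actually match reality?
accuracy = sum(risk_rule(r) == r["actual_risk"] for r in records) / len(records)

print(f"consistent across runs: {consistent}")   # True -- "objective" by the first definition
print(f"agreement with reality: {accuracy:.0%}") # 25% -- the bias the outputs alone hide
```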
Because of the bias and noise that can lurk in massive data sets, especially in messier real-world applications, predictive analytics is sometimes described as “astrology for computer science.” But the same can be said of analysis performed by humans. The scholar Stephen Marrin writes that human intelligence analysis as a discipline is “merely artisanal masquerading as a profession.”
Analysts in the US intelligence community are trained to use Structured Analytic Techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs—which employ strategies that run the gamut from checklists to matrices that test assumptions or project alternative futures—externalize the thinking and reasoning used to support intelligence judgments, which is especially important given that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by people, have come under scrutiny from experts like Chang, specifically because of the lack of scientific testing that can demonstrate a given technique’s efficacy or logical validity.
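One of the better-known matrix-based SATs is Analysis of Competing Hypotheses, in which evidence is scored against rival explanations. The sketch below is a loose, purely illustrative rendering of that idea; the hypotheses, evidence items, and scores are invented.

```python
# A minimal, hypothetical sketch of a matrix-style structured analytic technique,
# loosely inspired by Analysis of Competing Hypotheses (ACH). The point is that
# the reasoning behind a judgment is written down where it can be audited.

hypotheses = {
    "H1": "the troop movement is a routine exercise",
    "H2": "the troop movement precedes an incursion",
}

# Scores: +1 consistent with the hypothesis, 0 neutral, -1 inconsistent.
evidence_matrix = {
    "units drew live ammunition":        {"H1": -1, "H2": +1},
    "exercise announced months earlier": {"H1": +1, "H2":  0},
    "field hospitals deployed forward":  {"H1": -1, "H2": +1},
}

# ACH-style reading: favor the hypothesis with the least inconsistent evidence,
# and keep the full matrix as a record of how the judgment was reached.
for h_id, description in hypotheses.items():
    inconsistent = sum(1 for scores in evidence_matrix.values() if scores[h_id] < 0)
    net = sum(scores[h_id] for scores in evidence_matrix.values())
    print(f"{h_id} ({description}): inconsistent items={inconsistent}, net score={net:+d}")
```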
As AI is increasingly expected to augment or automate intelligence community analysis, it has become urgent to develop and implement standards and methods that are both scientifically sound and ethical for law enforcement and national security applications. And while intelligence analysts struggle to reconcile the opacity of AI with the evidentiary standards and methods of argumentation required in law enforcement and intelligence contexts, the same struggle applies to understanding analysts’ own unconscious reasoning, which can lead to accurate or biased conclusions.