Cognitive Security


Description

Sara-Jayne Terp:

"The definition of Cognitive Security that I use in class is: “Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of “something is wrong on the internet”. Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective”.

The term Cognitive Security comes from two different places:

MLsec: “Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” (XTN). This is the MLsec definition: machine learning in information security, covering attack using ML, defence using ML, and attacks on the machine learning systems themselves. This is adversarial AI, and Andrade2019 is a good summary of the field.

Social engineering: “Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts – intentional and unintentional manipulations of and disruptions to cognition and sensemaking” (cogsec.org). This version of the term, coined by Rand Waltzman, is the social-engineering-at-scale definition: it is about manipulating individual beliefs, sense of belonging, and so on, and about the manipulation of human communities. This could be seen as adversarial cognition, and Waltzman2017 and the COGSEC.org website created after his testimony are good summaries of it.

These definitions aren’t as incompatible as they look: both are based on adversarial activities, and on defence against the manipulation of information, knowledge, and belief. But neither quite captures what’s going on today, where we’re seeing both humans and algorithms being manipulated to change the fates of individuals, communities, organisations, and countries. That said, as I write this, I can see that the second definition could include algorithms, if we allow “cognition and sensemaking” to cover algorithms too.

Both of these definitions are from the point of view of defence. That was a strong driver of our (the CredCo MisinfosecWG) own adoption of a term that included “security”, but it feels less appropriate when we’re modelling influence in information ecosystems. What we’re looking at seems more and more to resemble massively multiplayer games, in which each individual, community, organisation, country, etc. has its own goals, and may see even the most aggressive influence actions as part of defending its own realm. MLsec is helpful here, with its separation into the study of attacks using ML (machine learning algorithms), defence using ML, and attacks on the ML processes themselves (Bruce Schneier’s paper on common knowledge attacks against democracy fits that last category). It’s useful to be aware that your cognitive security defence moves might be viewed as someone else’s attack."

(https://medium.com/disarming-disinformation/cogsec101-week-1-history-7e7a51640f1f)
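
To make the MLsec taxonomy above concrete (defence using ML, and attacks on the ML system itself), here is a minimal illustrative sketch in Python using scikit-learn. Everything in it is hypothetical: the training phrases, labels, and evasion text are invented for demonstration, and real disinformation detectors and attacks are far more sophisticated.

 # Illustrative sketch only: a toy "defence using ML" detector and a
 # trivial "attack on the ML system" evasion. All data is invented.
 from sklearn.feature_extraction.text import TfidfVectorizer
 from sklearn.linear_model import LogisticRegression
 from sklearn.pipeline import make_pipeline
 
 # Defence using ML: train a tiny classifier to flag disinformation-like text.
 texts = [
     "miracle cure suppressed by doctors",       # disinformation-like
     "secret plot to rig the election exposed",  # disinformation-like
     "city council approves new park budget",    # ordinary news
     "local team wins the regional final",       # ordinary news
 ]
 labels = [1, 1, 0, 0]  # 1 = flag as suspect, 0 = pass
 
 detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
 detector.fit(texts, labels)
 
 # Attack on the ML system itself: reword the same claim and pad it with
 # benign-looking tokens so it no longer matches the learned features.
 original = "miracle cure suppressed by doctors"
 evasion = "m1racle remedy hidden from physicians, city council reports"
 
 print(detector.predict([original]))  # likely [1]: caught
 print(detector.predict([evasion]))   # likely [0]: slips through

The same framing extends to the remaining category, attacks using ML (for example, generating the misleading text in the first place); the point of the sketch is only that detection and evasion are two sides of the same adversarial loop.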