Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There's no actual threat analysis, only fear:
"The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me," said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. "It's too easy for someone to slip through. And I just don't want my family and my neighbors to be the collateral on that."
Local residents plan to continue to fight, said Ms. Gerstman, noting that her building's board decided against putting decorative planters at the building's entrance over fears that shards could injure people in the event of a blast.
"Knowing that, and then seeing the proposal for giant glass structures in front of my building - ding ding ding! -- what does a giant glass structure become in the event of an explosion?" she said.
In 2005, I coined the term "movie-plot threat" to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there's more work to be done.
Fascinating article about two psychologists who are studying interrogation techniques.
Now, two British researchers are quietly revolutionising the study and practice of interrogation. Earlier this year, in a meeting room at the University of Liverpool, I watched a video of the Diola interview alongside Laurence Alison, the university's chair of forensic psychology, and Emily Alison, a professional counsellor. My permission to view the tape was negotiated with the counter-terrorist police, who are understandably wary of allowing outsiders access to such material. Details of the interview have been changed to protect the identity of the officers involved, though the quotes are verbatim.
The Alisons, husband and wife, have done something no scholars of interrogation have been able to do before. Working in close cooperation with the police, who allowed them access to more than 1,000 hours of tapes, they have observed and analysed hundreds of real-world interviews with terrorists suspected of serious crimes. No researcher in the world has ever laid hands on such a haul of data before. Based on this research, they have constructed the world's first empirically grounded and comprehensive model of interrogation tactics.
The Alisons' findings are changing the way law enforcement and security agencies approach the delicate and vital task of gathering human intelligence. "I get very little, if any, pushback from practitioners when I present the Alisons' work," said Kleinman, who now teaches interrogation tactics to military and police officers. "Even those who don't have a clue about the scientific method, it just resonates with them." The Alisons have done more than strengthen the hand of advocates of non-coercive interviewing: they have provided an unprecedentedly authoritative account of what works and what does not, rooted in a profound understanding of human relations. That they have been able to do so is testament to a joint preoccupation with police interviews that stretches back more than 20 years.
Abstract: Frontline investigations with fighters against the Islamic State (ISIL or ISIS), combined with multiple online studies, address willingness to fight and die in intergroup conflict. The general focus is on non-utilitarian aspects of human conflict, which combatants themselves deem 'sacred' or 'spiritual', whether secular or religious. Here we investigate two key components of a theoretical framework we call 'the devoted actor' -- sacred values and identity fusion with a group -- to better understand people's willingness to make costly sacrifices. We reveal three crucial factors: commitment to non-negotiable sacred values and the groups that the actors are wholly fused with; readiness to forsake kin for those values; and perceived spiritual strength of ingroup versus foes as more important than relative material strength. We directly relate expressed willingness for action to behaviour as a check on claims that decisions in extreme conflicts are driven by cost-benefit calculations, which may help to inform policy decisions for the common defense.
Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.
"We've got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption," Turnbull told reporters.
"Where we can compel it, we will, but we will need the cooperation from the tech companies," he added.
Never mind that the law 1) would not achieve the desired results because all the smart "terrorists and drug traffickers and pedophile rings" will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that's all ground I've covered before.
Interesting article in Science discussing field research on how people are radicalized to become terrorists.
The potential for research that can overcome existing constraints can be seen in recent advances in understanding violent extremism and, partly, in interdiction and prevention. Most notable is waning interest in simplistic root-cause explanations of why individuals become violent extremists (e.g., poverty, lack of education, marginalization, foreign occupation, and religious fervor), which cannot accommodate the richness and diversity of situations that breed terrorism or support meaningful interventions. A more tractable line of inquiry is how people actually become involved in terror networks (e.g., how they radicalize and are recruited, move to action, or come to abandon cause and comrades).
Reports from The Soufan Group, the International Center for the Study of Radicalisation (King's College London), and the Combating Terrorism Center (U.S. Military Academy) indicate that approximately three-fourths of those who join the Islamic State or al-Qaeda do so in groups. These groups often involve preexisting social networks and typically cluster in particular towns and neighborhoods. This suggests that much recruitment does not need direct personal appeals by organization agents or individual exposure to social media (which would entail a more dispersed recruitment pattern). Fieldwork is needed to identify the specific conditions under which these processes play out. Natural growth models of terrorist networks then might be based on an epidemiology of radical ideas in host social networks, rather than built in the abstract and then fitted to data, and would allow for a public health, rather than strictly criminal, approach to violent extremism.
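The "epidemiology of radical ideas in host social networks" framing can be sketched as a toy contagion model. Everything below (the graph, the transmission probability, the node counts) is invented purely for illustration and is not fitted to any data:

```python
import random

# Toy sketch: an idea spreading through a preexisting social network,
# epidemic-style. Each "reached" node converts each unreached neighbor
# with some per-round probability -- mirroring the clustered,
# friend-to-friend recruitment pattern the article describes.

def spread(adjacency, seeds, p_transmit, rounds, rng):
    """Simulate simple contagion; return the set of nodes reached."""
    reached = set(seeds)
    for _ in range(rounds):
        newly = set()
        for node in reached:
            for nbr in adjacency[node]:
                if nbr not in reached and rng.random() < p_transmit:
                    newly.add(nbr)
        reached |= newly
    return reached

rng = random.Random(0)
n = 200
# A random sparse "friendship" graph -- a stand-in for a real social network.
adjacency = {i: set() for i in range(n)}
for _ in range(400):
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adjacency[a].add(b)
        adjacency[b].add(a)

reached = spread(adjacency, seeds={0}, p_transmit=0.3, rounds=10, rng=rng)
print(f"{len(reached)} of {n} nodes reached")
```

A public-health-style intervention would then be modeled as changing the network or the transmission probability, rather than as removing individual nodes after the fact.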
Such considerations have implications for countering terrorist recruitment. The present USG focus is on "counternarratives," intended as alternative to the "ideologies" held to motivate terrorists. This strategy treats ideas as disembodied from the human conditions in which they are embedded and given life as animators of social groups. In their stead, research and policy might better focus on personalized "counterengagement," addressing and harnessing the fellowship, passion, and purpose of people within specific social contexts, as ISIS and al-Qaeda often do. This focus stands in sharp contrast to reliance on negative mass messaging and sting operations to dissuade young people in doubt through entrapment and punishment (the most common practice used in U.S. law enforcement) rather than through positive persuasion and channeling into productive life paths. At the very least, we need field research in communities that is capable of capturing evidence to reveal which strategies are working, failing, or backfiring.
Good article that crunches the data and shows that the press's coverage of terrorism is disproportional to its comparative risk.
This isn't new. I've written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of lion attacks in our community, we infer that the risk is high. If we can't think of many lion attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It's not the media's fault. By definition, news is "something that hardly ever happens." But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
Our brains aren't very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
It's not just murders. It's flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.
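As a back-of-the-envelope illustration of that per-mile gap -- the figures below are rough orders of magnitude assumed purely for illustration, not official statistics:

```python
# Rough orders of magnitude only (illustrative assumptions, not official
# statistics): US driving has historically seen on the order of one
# fatality per 100 million vehicle-miles, while commercial aviation is
# several orders of magnitude safer per mile. The point is the ratio,
# not the exact figures.

DRIVING_DEATHS_PER_MILE = 1 / 100_000_000      # assumed
FLYING_DEATHS_PER_MILE = 1 / 10_000_000_000    # assumed

ratio = DRIVING_DEATHS_PER_MILE / FLYING_DEATHS_PER_MILE
print(f"Per mile traveled, driving is ~{ratio:.0f}x riskier than flying.")
```

Yet a single plane crash gets orders of magnitude more coverage than the car crashes that kill the same number of people over a few days -- which is exactly the availability heuristic at work.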
The first caveat is that he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year -- and we can't make any predictions about whether this year will be better or worse than last year in any individual measurement.
The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:
Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious... and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.
This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.
Pinker's trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker's trends are noisy, but over the long term they're strongly linear. Mine seem to be exponential.
When Pinker expresses optimism that the overall trends he identifies will continue into the future, he's making a bet. He's betting that his trend lines and my trend lines won't cross. That is, that our society's gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:
But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.
Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.
As it gets easier for one member of a group to destroy the entire group, and as the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then someone goes off and destroys us anyway?
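The group-size argument is just the arithmetic of independent trials. A minimal sketch, using an invented per-person probability:

```python
# Illustrative sketch (the probability below is an assumption, not data):
# if each of n individuals independently has a tiny per-year probability p
# of attempting mass destruction, the chance that at least one of them
# does is 1 - (1 - p)**n, which approaches certainty as n grows.

def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent actors acts."""
    return 1 - (1 - p) ** n

p = 1e-9  # purely hypothetical per-person probability
for n in (1_000_000, 100_000_000, 7_000_000_000):
    print(f"n = {n:>13,}: P(at least one) = {p_at_least_one(p, n):.4f}")
```

Even a one-in-a-billion individual probability becomes a near-certainty once the "group" is the whole planet -- which is why the claim doesn't depend on any particular person being likely to act.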
Clearly we're not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that's the thing about exponential curves: it's hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species, and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won't cross.
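The trend-crossing bet can be made concrete with a toy calculation. The constants below are purely illustrative; the structural point is that, whatever the constants, an exponential eventually overtakes a linear trend:

```python
# Hypothetical numbers only: a linearly improving "defense" trend vs. an
# exponentially growing "attack capability" trend. The question is never
# whether the lines cross, only when.

def crossing_year(defense_start, defense_slope, attack_start, attack_rate):
    """First year t >= 0 at which the exponential exceeds the linear trend."""
    t = 0
    while attack_start * (1 + attack_rate) ** t <= defense_start + defense_slope * t:
        t += 1
    return t

# Assumed parameters, chosen purely for illustration.
print(crossing_year(defense_start=100, defense_slope=2,
                    attack_start=1, attack_rate=0.05))
```

Changing the assumed growth rate or starting points moves the crossing year around, sometimes by a lot -- which is exactly why "hard to predict the future from the past" is the right way to describe the bet.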
Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA's PreCheck program:
The first vulnerability in the system is its enrollment process, which seeks to verify an applicant's identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA's system for checking airport workers' identities concluded that it was "not designed to provide reasonable assurance that only qualified applicants" got approved. It's not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck's front end.
The other step in PreCheck's "intelligence-driven, risk-based security strategy" is absurd on its face: The absence of negative information about a person doesn't mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior -- especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.
Imagine you're a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they'll buy round-trip tickets with credit cards and have a "normal" amount of luggage with them.
What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.
The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That's simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.
Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn't any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn't even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.
And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.
The truth is that whenever you create two paths through security -- a high-security path and a low-security path -- you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.
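The point about truly random selection can be sketched in a few lines; the passenger list, selection rate, and seed below are all invented for illustration:

```python
import random

# Sketch of the argument: with profile-based selection, an operative with
# a clean record is (by construction) almost never picked for extra
# screening; with uniform random selection at rate r, no identity --
# clean or not -- can push its chance below r. Numbers are illustrative.

def random_selection(passengers, rate, rng):
    """Pick each passenger for extra screening independently with probability rate."""
    return [p for p in passengers if rng.random() < rate]

rng = random.Random(42)  # fixed seed for reproducibility
passengers = [f"pax{i}" for i in range(10_000)]
selected = random_selection(passengers, rate=0.1, rng=rng)
print(f"{len(selected)} of {len(passengers)} selected (~10% expected)")
```

The design property that matters is that selection depends on nothing the adversary controls: there is no attribute to launder, no database entry to keep clean, and no low-security path to reliably buy a ticket into.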
To the extent that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, it is intelligence-driven. But using that standard for PreCheck is ridiculous, since those people already get extra screening or are on the no-fly list. The movie Patriots Day, out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects, and cleared them. Even they did not meet the "intelligence-driven" definition used in PreCheck.
The other problem with "intelligence-driven" in the PreCheck context is that intelligence actually tells us the opposite: specifically, that terrorists pick clean operatives. If the TSA used current intelligence to evaluate risk, it would not be out enrolling everybody it can, extending pre-9/11 levels of security to anyone not flagged by the security services.
Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they're in a companion blog post. Basically, he wants to screen PreCheck passengers more:
In the interests of space, I left out details of what I would suggest as short-and medium-term solutions. Here are a few ideas:
Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members and then using commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.
Deploy K-9 teams at PreCheck lanes.
Use behaviorally trained officers to interact with and check the credentials of PreCheck passengers.
Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.
Turn on the body scanners and keep them fully utilized.
Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.
Work with the airlines to keep the PreCheck experience positive.
Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.
These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that "terrorists pick clean operatives." That's exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it's an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else less. This is me in 2012: "I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security."