Dr. Arun Vishwanath studies the “people problem” of cybersecurity.
His research aims to improve individual, organizational, and national resilience to cyber attacks by addressing the weakest link in cybersecurity—all of us Internet users.
His particular interest is in understanding why people fall prey to social engineering attacks that come in through email and social media, and in how we can harness this understanding to secure cyberspace. He also examines how various groups—criminal syndicates, terrorist networks, hacktivists—utilize cyberspace to commit crimes, spread misinformation, recruit operatives, and radicalize others.
Dr. Vishwanath is an alumnus of the Berkman Klein Center at Harvard University. He serves as the CTO of Avant Research Group (ARG), a cybersecurity research and advisory firm, where he consults for major corporations and governments on issues ranging from cybersecurity to consumer protection. He also serves on a distinguished expert panel for the NSA’s Science of Security & Privacy directorate.
Dr. Vishwanath’s research on improving cyber resilience against online social engineering has been funded by the National Science Foundation. He has published close to 50 articles on technology users and cybersecurity, and his research has been presented to principals at national security and law enforcement agencies around the world. He has also presented his work at leading global security conferences, including multiple invited presentations at the US Senate/SSA and House, as well as four consecutive appearances at Black Hat.
Dr. Vishwanath was the first researcher to demonstrate the role of users’ cognitions—particularly how users cognitively process information and form cyber risk beliefs—in making them susceptible to social engineering. His work was the first to highlight the need for user responsibility, from developing cyber hygiene to building safer cyber habits, in protecting organizations from social attacks. His research was also the first to highlight the dangers of social media, from the use of fake profiles to the dissemination of deception, years before their impact was widely recognized.
Additionally, he was the first to demonstrate the threats from mobile-based social engineering attacks. While many researchers ignored these ideas at the time, the Verizon 2019 DBIR, to which he contributed a write-up, found unequivocal evidence in support of them.
Dr. Vishwanath also plays the role of a technologist, writing in the public interest to highlight problems in cybersecurity and propose solutions to them. Many of his original ideas have led to new products, processes, and policies.
For instance, starting in December 2014, in CNN and other outlets, Dr. Vishwanath called for the creation of a 911-type system for reporting cyber breaches. Today, organizations in the US and abroad are working to build such systems.
In February 2015, in another CNN opinion piece, he called for a 5-star rating system for new apps and technologies, similar to the 5-star system used to rate the crash protection of new cars. In 2019, Consumer Reports launched a system to do exactly this.
In November 2017, he called for an open-source breach reporting portal, where breach information would be stored and disseminated so that people and companies would know what information about them had been compromised. In 2018, Mozilla Corp. introduced Firefox Monitor, which is built to do this.
In January 2018, he wrote about how AI would detrimentally affect the American middle class, displacing truck drivers, retail workers, and even local news reporters, almost two years before presidential candidate Andrew Yang made it his campaign’s central issue.
Additionally, his research and views on the science of cybersecurity have been featured in Wired, USA Today, Politico, CNN, the Washington Post, Scientific American, and hundreds of other national and international news outlets.
How did you become one of the world’s leading experts on human cyber vulnerability and social engineering?
I am a social scientist by training, and my interest in cybersecurity came from my work in the psychology of technology adoption and utilization. I spent a decade studying how people came up with innovations—ideas, techniques, technology—and how people accepted, rejected, utilized, or misutilized them. While I was working in this area, the university where I was teaching received a spear-phishing attack, one of the early social engineering attacks of its kind, asking all email users to change their email logins and passwords. This occurred around 2009, when even the IT department didn’t care much about such attacks. The attack targeted everyone: faculty, staff, and students. It co-opted some of the same psychological processes I’d been examining in my work on technology adoption, the same processes that led people to use technology in certain ways.
The attack got me to recognize the potential of these attacks and to study them. Much of my work since then has focused on experimentally simulating different types of social engineering attacks and examining how and why they worked or didn’t. The most interesting part of this story is that not many others were working in this area. At least, no one in my field cared.
It was the heyday of Facebook, and everyone was focused on the promise of social media and its purported “cure” for all of society’s ills. My work, on the other hand, focused on how email, messaging, and even social media could be easily co-opted. There was no support whatsoever for my early work. The first conferences where I presented had few, if any, attendees. A colleague even asked me why I was wasting time studying something as minor as phishing, which, he reasoned, would soon be rendered obsolete by antivirus-type software.
Thankfully, I persevered and continued working in the area, examining different attacks, even ones that could come via dropped USB sticks, messaging services, and social media. Many of these studies have been published; others could not be, because of the novelty of what I was working on at the time.
Then in November 2014 came the infamous hack into Sony Pictures Entertainment (SPE) by North Korean-sponsored hackers. As you may recall, this was the first of its kind: a successful state-sponsored attack against an independent corporation. It was among the first large-scale destructive attacks, with data destroyed on SPE’s computing systems all over the world. At the same time, its internal communications, including some highly scandalous emails, were released to the media. While the media was busy covering the salacious information in those emails, I knew exactly what the North Koreans had done and how they had accomplished their attack: they had used social engineering. And it wasn’t the first time. They had done likewise in an earlier attack in South Korea, months before the SPE hack.
I also knew there was worse to come, now that hackers knew what was possible. And more did come. Attacks followed almost the very next month on various websites, over time leading to hacks into Ashley Madison, Apple, Target, Yahoo, Equifax, the Office of Personnel Management, and the DNC during the 2016 presidential elections. This trend continues, and some five years after SPE, we are no closer to stopping social engineering.
What publications, blogs, websites, and thought leaders do you follow to stay on top of the latest developments in your constantly evolving field of expertise?
One of the primary reasons we haven’t stopped social engineering is the field itself and how it looks at problems. Cybersecurity and IT are the domain of engineers, who view users as machine operators: people who can be trained and whose inputs are of little value. If you read only what people in the field write, you end up thinking no differently, and this is why we haven’t solved the problem.
Solving this requires thinking outside the proverbial box, and that means reading material outside the science of security. As Einstein said: “you cannot solve a problem using the same thought processes that created it.”
I read a lot of information security research as well as work from outside the field, even outside the social sciences; this helps inform my own research and writing. I also read historical works that provide context to the research of the time, something I think many people, including academics and practitioners, frequently miss. Academic researchers often study topics without considering the challenges in the field, while professionals in the field find theories and frameworks useless because they don’t readily solve a problem. Bridging the two requires both developing theories and understanding how they can be applied.
I spend a lot of time talking to scientists and researchers studying various topics, as well as to professionals and practitioners. The former is important for understanding different ways of thinking, while the latter helps me understand what to focus on. I then spend time thinking about topics, researching them, and writing about them. A lot of my writing never makes it to publication, but it is the process that eventually leads to better ideas. It is this long, lonely, intellectual walk of discovery that helps me clarify my thinking and come up with ideas.
Beyond publishing 50-plus peer-reviewed journal and research articles, I have authored dozens of opinion pieces, many of them invited, in leading media outlets (including CNN, the Washington Post, and Scientific American), and presented my work in leading venues: Black Hat four times, the US Senate three times, the US House, and a Congressional National Science Foundation (C-NSF) presentation representing my field of science.
My ideas have led to new companies, patents, tools, and technological innovations in cybersecurity.
Can you tell us about your journey from professor to technologist?
I was a tenured research professor at the University at Buffalo for close to two decades. In 2016, defamatory allegations were brought against me by a faculty member and graduate student. After an 11-day arbitration, I was cleared of any wrongdoing, completely exonerated, and fully reinstated. Upon the advice of counsel, I pursued a lawsuit against the University and the individuals involved.
Around this time, I became a Faculty Associate at Harvard University, and this opened my eyes to a whole new world of research and thinking. After that phenomenal experience, I decided to follow a different, more rewarding path and become a technologist.