This summary of A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back by security expert Bruce Schneier examines the systems around us through the lens of hacking.
Buy A Hacker’s Mind at: Amazon (affiliate link)
Estimated time: 25 mins
Key Takeaways from A Hacker’s Mind
- A hack is an activity allowed by the system that subverts the goal or intent of the system. Hacks exploit a vulnerability in the system to advance the hacker’s goal, rather than the system designer’s.
- The main ways we can deal with hacks are:
- Prevention. We can try to limit hacks by using principles of secure systems design.
- Patching. Once a vulnerability is discovered, we can patch it to stop the hack.
- Normalisation. Sometimes hacks get integrated into a system’s rules and become the new normal.
- All sorts of systems—including market, legal, political, and cognitive systems—have vulnerabilities that can be hacked. Hackers can also target vulnerabilities at different levels of a system to achieve their goals.
- Hacks can be a source of innovation. A system can evolve and become more resilient through hacking and patching.
- However, hacks also have downsides:
- Hacks are parasitical. A hack that is too successful can end up destroying the system it exploits.
- Hacks tend to benefit the rich and powerful, who have more resources to find and normalise hacks.
- Artificial intelligence will hack us at an unprecedented speed, scale, scope and level of sophistication. Our systems will need to become more flexible to respond.
Detailed Summary of A Hacker’s Mind
What is a hack?
All computer code contains bugs or mistakes. Most of them don’t matter. But if an attacker can deliberately trigger a bug to bring about some unintended effect, that bug becomes a “vulnerability”. Hackers can exploit a vulnerability to force the system to behave in some unintended way.
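To make this concrete, here’s a toy illustration (mine, not the book’s) of how an ordinary bug becomes a vulnerability. The designer of this little file server intended it to serve files from one folder, but nothing in the code enforces that intent:

```python
from pathlib import Path

def serve(filename: str) -> bytes:
    # Bug: the requested filename is joined to the folder without validation.
    return (Path("public") / filename).read_bytes()

# Intended use:
#   serve("index.html")            # reads public/index.html
# The hack (a classic "path traversal"): allowed by the system,
# unintended by its designer:
#   serve("../../../etc/passwd")   # walks right out of ./public
```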
Definition: Hack /hak/ (noun) –
- Something that a system allows but which is unintended and unanticipated by its designers.
- A clever, unintended exploitation of a system that (a) subverts the rules or norms of the system, (b) at the expense of someone else affected by the system.
Of course, what the “intent” of a system is can be a tricky question, especially for complex systems that have evolved over time.
Hacking is a useful lens to understand a broad array of systems. When most people look at a system, they focus on how it works. When a security expert looks at the same system, they focus on how it can be hacked. This lens can help us make systems more resilient.
Hacking is not the same as cheating
Cheating is when you do something explicitly against the rules. Hacks, on the other hand, can often be legal. A hack follows the letter of the rules but evades their spirit. Hackers outsmart the system and the system’s designers. Hacks are only illegal if there is some broader rule that forbids them.
Hacks are not necessarily bad
Hacks may or may not be moral, depending on whether the purpose of the system they’re subverting is itself moral. At its core, hacking is a way to exert power despite the rules.
Example: Kids are natural hackers
Children don’t understand intent and norms in the same way that adults do, so they are naturally keen to test the boundaries of systems. Many online games for children try to restrict their speech to prevent bullying and predators. Kids hack them with deliberate misspellings (e.g. “phuq”, “stew putt”) or by spreading information out over several messages. Successful hacks then get shared around.
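Here’s a minimal sketch (my illustration, not the book’s) of why these word-list filters are so easy to evade: the filter only matches exact spellings, so a creative respelling sails straight through.

```python
BLOCKED = {"stupid"}  # a hypothetical blocklist of banned words

def allowed(message: str) -> bool:
    # Naive filter: reject a message only if it contains a banned word verbatim.
    return not any(word in BLOCKED for word in message.lower().split())

print(allowed("you are stupid"))     # False -- caught by the filter
print(allowed("you are stew putt"))  # True  -- same meaning, not on the list
```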
When a school district blocked chat apps, students figured out that they could just chat using a shared Google Doc. (This same technique, “foldering”, was what the 9/11 terrorists used.)
During the COVID-19 pandemic, one student renamed himself “Reconnecting … ” and turned off his video so it looked like he was having connectivity problems.
Sometimes people resort to hacking because they feel the systems are too stacked against them. A hack might subvert the intent of one system or rule, without violating the greater social contract.
How to deal with hacks
The three main ways we deal with hacks are: preventing them in the first place; patching vulnerabilities when we find them; or normalising the hacks.
Prevention
Since hacks exploit vulnerabilities in systems, you can prevent hacks by minimising a system’s vulnerabilities before it is released. This might involve:
- Secure systems design. There are some design principles that can increase the difficulty of hacks and reduce their effectiveness (see the sketch after this list). These include: simplicity (more things can go wrong with a complex system); defence in depth (using multiple defences, like a deadbolt in addition to a lock); compartmentalisation (giving each account only the permissions it needs); and fail-safe (ensuring that when a system fails, it does so as safely as possible).
- Red-teaming. You can assign a team to try to hack your own system as if they were external hackers, and then shore up any vulnerabilities they find. Red-teaming could also work for legal and regulatory systems—e.g. governments could assign accountants and attorneys to look for loopholes in proposed laws while they are still in draft.
- Bug bounties. Companies sometimes pay rewards to people who discover vulnerabilities in their products. However, with computer systems, hackers can usually make a lot more money selling any vulnerabilities they find to criminals or governments.
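As a rough illustration of two of those design principles, here’s a sketch (the roles and permissions are invented for the example) showing compartmentalisation and fail-safe behaviour in a simple access check:

```python
# Compartmentalisation: each role gets only the permissions it needs.
PERMISSIONS = {
    "intern": {"read"},
    "admin":  {"read", "write", "deploy"},
}

def is_allowed(role: str, action: str) -> bool:
    try:
        return action in PERMISSIONS[role]
    except KeyError:
        # Fail-safe: an unknown role means we deny access, not grant it.
        return False

print(is_allowed("intern", "deploy"))  # False -- least privilege at work
print(is_allowed("ghost", "read"))     # False -- the check fails closed
```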
Systems will always have some vulnerabilities because there’s a trade-off between security and efficiency. Many practices that increase security are also annoying and expensive. If the expected costs of an attack are lower than the costs of preventing it, a business may just accept the risk. This is especially true when the costs are externalised (i.e. not borne entirely by whoever is responsible for the security).
The threat model can also change over time as the world changes. Hackers are innovative and will keep coming up with new techniques. A system might be designed for a point in time, but later “drift” into insecurity.
Example: The Internet was not designed to be secure
When the Internet was first designed in the 1970s and 1980s, it wasn’t used for anything important and only members of research institutions could access it. The original designers deliberately favoured simple protocol designs even though they weren’t secure.
Today, the Internet looks very different. The threat model has completely changed.
Patching
Patching is when you remove a vulnerability from a computer system and thereby neutralise a hack. This can be done quite quickly and effectively when the system is controlled by a single entity. (Schneier recommends enabling automatic updates on your phone and computer so that patches are promptly installed.)
In social systems, however, patching is not so efficient, as laws can take a long time to change. Moreover, while a computer system patch makes an exploit impossible, a social system patch just makes it illegal. So to stop the hack, detection and enforcement systems may also need to be updated.
With cognitive hacks, patching doesn’t really work. Some hacks may naturally lose effectiveness over time as people become used to them. The best security is probably just raising awareness of the vulnerability so that it’s harder to exploit. [Raising awareness could also backfire though, as it could make potential hackers aware of the vulnerability.]
Unfortunately, sometimes even a patch is not enough as the SolarWinds case shows:
Example: The SolarWinds hack
In 2020, a Russian government group hacked the update servers for SolarWinds, a network management software company. At the time, SolarWinds had over 300,000 customers, including multiple governments and most Fortune 500 companies.
The hackers breached the US State Department, Treasury Department, Department of Homeland Security, Los Alamos and Sandia National Laboratories, and the National Institutes of Health. They also breached networks in Canada, Mexico, Belgium, Spain, the UK, Israel, and the UAE.
After getting in, Russian agents established new means of access unrelated to the original vulnerability. Even though the SolarWinds vulnerability was patched, the affected networks may still be vulnerable in all sorts of ways we don’t understand. The only way to truly regain security would have been to throw away all those systems and start from scratch. No organization did that because of the cost involved.
Normalising
Hacks are novel. What counts as a hack will change over time as good hacks get shared around. Those that don’t get patched often become normalised and integrated into the system’s rules.
Example: Card counting becoming normalised
Card counting in blackjack used to be a hack, but there are now plenty of books on how to do it.
Casinos have responded in different ways. Some have made it harder by using more decks of cards and only dealing two-thirds of the way through the decks. Others have trained their staff to watch out for card-counters and to kick them out.
Still other casinos just accept it as a cost of doing business. Some even encourage it by advertising that they deal blackjack from a single deck! Since people tend to overestimate their skill, casinos make more money from the wannabe card-counters than they lose to the skilled ones.
For vulnerabilities in laws, changes can take years, if they happen at all. As more and more people come to depend on a hack, it becomes harder to patch, since those benefiting from it will try to preserve it. Loopholes sometimes even get approved retroactively if a powerful enough constituency gets behind them. Schneier notes that hacks of financial regulations are often normalised.
Banks and regulators play an unending game of cat and mouse. Regulators are tasked with limiting irresponsible, aggressive, corrupt behavior. Banks are motivated to make as much money as possible. Those two goals are opposed to each other, so banks hack the regulatory system wherever and whenever possible. Any bank that chooses not to do so will be overtaken by those that do.
Example: Hacking Heaven
Medieval Catholics believed that if you sinned, you could atone and receive forgiveness. One method of atonement was to donate money and receive an “indulgence”—a piece of paper that would relieve some punishment for a sin.
Indulgences were loosely regulated and virtually limitless. People got innovative. Middlemen popped up to broker them. One friar, Johann Tetzel, started selling indulgences for future sins (like a “get out of hell free” card), as well as indulgences for deceased friends or relatives. The Catholic Church made some attempts to clamp down on these practices, but it had become so dependent on the profits from indulgences that it struggled to do so.
In 1517, the practice of selling indulgences so disgusted Martin Luther that it led him to start the Protestant Reformation. (The Catholic Church did eventually ban the selling of indulgences in 1567—by then it had diversified its revenue sources.)
All sorts of systems can be hacked
As we’ve seen from some examples above, hacks are not limited to computer systems. Social systems can also be hacked—and at different levels. Almost all systems are part of some larger system, and larger systems can have many different subsystems. A hacker can therefore achieve their goals by targeting vulnerabilities at different levels.
When a vulnerability is at a lower level, it’s usually easier to patch. When a vulnerability targets a more general system—like our cognitive systems—patches may not be possible. It’s also harder to patch vulnerabilities in a social system where no single entity is in control. The reason society nevertheless works is because of norms and trust, not just the rules.
Market systems
Markets depend on self-interested buyers making intelligent decisions among competing sellers. For markets to function, three things are needed:
- Information. Buyers need information about products to make informed decisions. But businesses can hack this by obscuring information and making misleading claims.
- Choice. Buyers need multiple products and sellers to choose from. But businesses can hack this by forming monopolies and buying up competitors.
- Agency. Buyers need to be able to use their information and choice in their purchasing decisions. But businesses can hack this by increasing switching costs to “lock in” their customers—e.g. using proprietary file formats, or preventing customers from taking their data with them when they leave.
Laws that bolster information, choice and agency, such as by banning deceptive practices or anticompetitive behaviour, can therefore help markets work better. However, large businesses will often oppose regulations on the grounds that they “stifle innovation”.
Legal systems
As lawyers often say, “All contracts are incomplete.” Contracts, especially longer-term ones, can never specify everything that could possibly happen. The same also applies to other laws, such as legislation. No matter how hard we try to prevent them, some vulnerabilities will always remain because all rules contain some ambiguities and inconsistencies.
Examples where the legal system has been hacked include:
- Patent injunctions. Before 2006, patent injunctions in the US did not require much evidence. Large tech companies could use them to bully smaller competitors into stopping sales of their products or paying exorbitant fees.
- Luxury real estate. Many countries regulate real estate transactions less harshly than other financial transactions. Luxury real estate is therefore often used for money laundering.
- Work-to-rule tactics. This is where workers do everything formally, to the letter of the law and their job descriptions, but no more. It can be a very effective labour tactic.
- Tax avoidance. Tax laws are incredibly complex. Vulnerabilities often arise when different states’ or countries’ laws interact with each other. Large tech companies with a global reach are particularly well-suited to exploit these vulnerabilities.
- Uber and Airbnb. These two companies’ business models depend on evading regulations that apply to taxi services and hotels.
Many “disruptive” gig economy services would be completely unviable if forced to comply with the same regulations as the “normal” businesses they compete with. As a result, they—and their venture capital backers—are willing to spend unbelievable sums of money to combat these regulations.
Legal systems can still work despite their vulnerabilities because they are backed up by trust and relationships. Most people understand the spirit of the law and will abide by it. The courts also act as a backstop, stepping in when disputes arise.
Political systems
People who hack legal systems usually argue that it’s not their fault the law was poorly written. However, laws can be poorly written because of hacks to the broader political system itself.
Some hacks to the political system exploit the ways in which laws are passed:
- Last-minute changes to large bills. The CARES Act was a $2 trillion COVID-19 stimulus bill passed during the pandemic in 2020. In the final version of the 880-page bill, which was finalised less than an hour before the vote, Republicans snuck in a $17 billion per year tax break for real estate investors. In 2019, a committee recommended something like “tracked changes” that would clearly identify changes and make it harder to sneak in last-minute amendments. [Apparently this change has since been implemented, but is not yet publicly available.]
- Must-pass legislation. Some legislation (e.g. an appropriations bill or a bill responding to some disaster) has to get passed. Since presidents can’t veto individual line-items in a bill, legislators will often attach policy changes or “riders” to these bills. It’s often been proposed that bills should just deal with one subject at a time, but that proposal has never been passed.
- Title-only bills. A title-only bill is basically a placeholder, which lawmakers can introduce just in case they want to circumvent legislative deadlines in the future. In the final days of the 2019 legislative session, Democrats used a “title-only bill” to pass a bank tax in Washington State with minimal oversight.
Other hacks subvert the voting system:
- Gerrymandering. The legislators in charge of drawing up voting districts can benefit from how they draw them (a toy example follows this list). The obvious solution would be to remove this conflict of interest by tasking independent commissions with drawing up districts.
- Vote-splitting. Running or funding spoiler candidates with a similar name or policy profile to your opponent can split your opposition’s vote share. An extreme example comes from the 2014 Indian election, where five different candidates running for the same seat had the same name, but only one was an actual politician.
- Voter suppression. Under the Fifteenth Amendment to the US Constitution, it is illegal to deny citizens the right to vote based on race. States like Louisiana hacked this by selectively administering to black voters an “impossible” literacy test, one that even Harvard students couldn’t pass when given it in 2014. Although this hack was effectively banned after 1966, some states turned to other tactics. For example, Alabama has closed polling places and tried to close 31 DMV offices (where people can obtain voter IDs), mostly in rural and majority-black counties.
- Disinformation. Hackers can subvert democracy by distorting the information that voters rely on. In the 2016 US presidential election, fake news websites run from Macedonia had American-sounding domain names like BostonTribune.com or ABCNews.com.co, and gained a lot of traction on social media. As technology improves, risks of disinformation will grow. For example, AI videos of fake events are already being used politically in countries including Malaysia, Belgium, and the US.
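To see the mechanics of gerrymandering, here’s the toy example mentioned above (the numbers are invented): the same 500 voters drawn into two different five-district maps, with very different results.

```python
def seats(districts):
    """Count seats won, given (blue_votes, red_votes) per district."""
    blue = sum(1 for b, r in districts if b > r)
    return blue, len(districts) - blue

# Map 1: 300 Blue and 200 Red voters spread evenly across 5 districts.
even_map = [(60, 40)] * 5

# Map 2: same voters, but Red "packs" Blue into two conceded districts
# and "cracks" the remaining Blue voters across districts Red can win.
gerrymandered = [(100, 0), (100, 0), (34, 66), (33, 67), (33, 67)]

print(seats(even_map))       # (5, 0) -- Blue sweeps with its 60% majority
print(seats(gerrymandered))  # (2, 3) -- Red wins control with 40% of the vote
```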
Cognitive systems
Cognitive hacks might be the most general hacks of all. If you can hack people’s minds, you can hack any system that depends on humans making rational decisions.
Over twenty years ago, I wrote “Only amateurs attack machines; professionals target people.” It’s still true today, and much of it relies on hacking trust.
Our brains were optimised for living in small kinship groups, not for the environments we live in today. We have limited ability to process information and tend to believe what we hear, particularly if we aren’t paying close attention. As hacks become more prevalent and sophisticated, constant vigilance will be required to guard against them.
Example: Spear-phishing
Phishing is when you send fake emails to try to get someone to do something that will compromise their account. Spear-phishing is a more sophisticated—and more successful—version of this that personalises the emails for the target.
In March 2016, John Podesta, the chair of Hillary Clinton’s presidential campaign, received an email from a Russian intelligence agency that looked like it came from Google. Podesta fell for it, entering his password into the fake Google site. That gave Russian operatives access to over 20,000 of his emails.
Spear-phishing is often for financial gain. A common scam involves sending an email from a company executive’s account to a subordinate, asking them to transfer a large sum of money to a foreign bank account for some big deal. In 2019, Toyota lost $37 million to such a scam. Such scams can be highly sophisticated, with scammers investing months or even years to learn the background details needed to pull them off. Sometimes they’ll even follow up with a fake phone call to make it more convincing.
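One small technical defence against this kind of hack is checking where a link actually points rather than where it claims to point. A hedged sketch (the allowlist is hypothetical, and real email security involves far more than this):

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"google.com", "accounts.google.com"}  # hypothetical allowlist

def looks_legit(url: str) -> bool:
    # Compare the link's real hostname against domains we trust.
    host = urlparse(url).hostname or ""
    return host in TRUSTED_HOSTS or host.endswith(".google.com")

print(looks_legit("https://accounts.google.com/reset"))     # True
print(looks_legit("https://accounts-google.com.co/reset"))  # False -- lookalike
```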
We don’t have much control over what we pay attention to, either—much of our attentional system is hardwired into our brain. Our brains are not good at assessing probability and risk, and we naturally pay attention to vivid and unusual threats to our survival even if the actual risks are low.
Computers have also made it increasingly easy to hack our cognitive systems. Using our personal data, advertisers can find ways to influence us more effectively. Businesses can also use subversive design elements (“dark patterns”) to trick people into doing things they don’t want to do. And, like slot machines, online games and social media use variable rewards to keep us hooked on their platforms.
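That “variable rewards” mechanic is trivially easy to build. A toy sketch (the odds are made up): every action is an unpredictable gamble, which behavioural research has long found to be far more habit-forming than a predictable payout.

```python
import random

def open_loot_box(p_reward: float = 0.2) -> str:
    # Variable-ratio schedule: each pull is an independent gamble,
    # like a slot machine spin -- you never know which one pays off.
    return "rare item!" if random.random() < p_reward else "nothing"

print([open_loot_box() for _ in range(10)])  # mostly "nothing", a few hits
```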
Hacks can be a source of innovation and resilience
Since hacks are novel, they can be a source of innovation. Successful and popular hacks will end up changing the system in some way—the system either gets patched or evolves to accommodate the hack. Systems that are flexible enough to keep up with hackers are more resilient.
For example, legislation takes a long time to pass, so a system that relies too heavily on legislation to stop hacks will not be very resilient. Regulations and the common law, by contrast, are much more adaptive. So it may be better to lay down general rules in legislation and leave it up to regulators and judges to implement and interpret those rules as new circumstances arise.
Common law is nothing but a series of hacks and adjudications.
The downsides of hacks
Hacks are parasitical
In order for a hack to work, the system must exist. If a hack becomes too successful too quickly, it can destroy the system it relies on. Sometimes this is exactly what the hacker wants. Other times, however, hackers don’t want to break the system—they just want to hijack the system for their own goals.
Example: ATM hacking
Over decades, there’s been an arms race between ATM hackers and defenders. Hackers want to get the cash inside an ATM without having money deducted from their account (or without an account at all).
In 2011, an Australian bartender named Dan Saunders figured out how to get free money from ATMs and withdrew AU$1.6 million. He wasn’t even caught—he just eventually felt guilty about it, went to therapy, and publicly confessed.
But if any ATM hack got too successful, it would destroy the system as banks would stop using ATMs altogether.
Schneier believes hacking has grown in recent decades, and social trust has declined. If we don’t find a way to stop this, our social systems may eventually break down.
Societal systems rely on trust, and hacking undermines that. It might not matter at small scales, but when it’s pervasive, trust breaks down—and eventually society ceases to function as it should.
Hacks tend to benefit the rich and powerful
People often think of hacking as something that underdogs do to fight against “the system”. In reality, hacking tends to benefit the rich and powerful and reinforce existing power structures.
Rich and powerful people can hack more effectively because:
- They have the resources to hire people with the relevant expertise to find the hacks.
- Once a hack is found, they have more power to magnify its effects (e.g. a tax loophole is more valuable the more money you have to shield).
- Rich and powerful people are better able to use their political and legal power to get their hacks normalised and legalised. Hacks by less powerful people, on the other hand, are more likely to be patched and outlawed.
Artificial intelligence will hack us like we’ve never been hacked before
Systems can still work pretty well despite their vulnerabilities, because most people don’t try to hack them. However, artificial intelligence (AI) has the potential to hack us like we’ve never been hacked before. AI can exploit the vulnerabilities that already exist in our various systems, but at a speed, scale, scope and sophistication that will be unprecedented.
For example:
- Microtargeting. AI has the potential to personalise hacks to you and your preferences. As noted above, the most effective phishing attempts are personalised, and AI will be able to automate these. Researchers are also working on AIs that can detect our emotions by analysing our writing and monitoring our facial expressions or heart rates, which will likely make them better at persuasion.
- Public submissions. AIs can write letters and submissions to newspapers, elected officials, and government agencies. A 2019 study found that bot submissions on a Medicaid reform proposal sounded enough like real people with genuine concerns that official administrators were fooled. (The researcher later requested that the submissions be removed.)
- Social media bots. “AI Persona bots” can pose as individuals with distinct histories and personalities. Most of the time they’ll act like normal community members, posting about innocuous topics. But, every now and again, they’ll post something political. Thousands or millions of such bots can collectively shift the opinions of real people and change what we think “other people” think. During the 2016 US election, approximately 20% of all political tweets were posted by bots. For the 2016 UK Brexit vote, it was one-third.
- Recommendation algorithms. Though they weren’t programmed to do so, AI algorithms naturally push people towards extreme content because it increases user engagement (see the sketch below).
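A toy sketch of that last point (posts and scores invented): the ranking objective never mentions extremeness, but if extreme content reliably engages more, it floats to the top anyway, an unintended hack of the system’s stated purpose.

```python
# A recommender that ranks purely by predicted engagement.
posts = [
    {"title": "local news",   "engagement": 0.02},
    {"title": "cute animals", "engagement": 0.05},
    {"title": "outrage bait", "engagement": 0.30},  # assumed: outrage engages more
]

feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
print([p["title"] for p in feed])  # "outrage bait" comes first
```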
In the short term, any hacking we see is likely going to be from AI and humans working together. But advances in AI are coming fast and erratically, so a world filled with autonomous AI hackers is not too hard to imagine. If that happens, we might get hacked by an AI without any human intending it. This is because whenever we give a goal to an AI or human, the goal will always be underspecified. For example, if I ask you to get me a cup of coffee, you won’t rip the nearest cup out of someone else’s hands. That’s because humans understand context and usually act in good faith.
AIs, on the other hand, naturally think outside the box because they don’t have a conception of “the box”. And so they might end up hacking our systems without realising it—and without us realising it, either.
Example: Volkswagen cheating on emissions tests
Volkswagen cheated on emissions control tests for over 10 years. Its cars detected when they were undergoing an emissions control test and activated the car’s emissions control system only during that test. Since emissions control systems reduce performance in other ways, this hack allowed Volkswagen to deliver better performance while appearing to meet emission standards.
This wasn’t a case of AI, as humans had programmed the computer to behave that way. But you could easily imagine an AI coming up with the same hack and getting away with it. If too many companies did this, we’d quickly barrel over a 2°C global temperature increase.
Schneier offers a few high-level suggestions along the lines of greater regulation and more responsive governance systems, but he mostly points to other authors here.
The overarching solution here is for all of us as citizens to think more deliberately about the proper role of technology in our lives. … Computer systems affect more than computers, and when engineers make decisions about them, they are literally designing the world’s future.
Other Interesting Points
- Internet of Things (IoT) devices are full of vulnerabilities. Many of these cannot be patched at all because the code is embedded in hardware rather than software. The devices are often produced at low profit margins, and production lines get disbanded or the companies go out of business, so there’s often no one left to write a patch anyway.
- A hacker version of “Capture the Flag” is a mainstay at hacking conventions. Each team tries to exploit vulnerabilities in their opponents’ systems while finding and fixing any vulnerabilities in their own.
- In 2006, the US Army used a chatbot, SGT STAR, to persuade people to enlist.
My Review of A Hacker’s Mind
While I thought Schneier made a lot of good points, I found his examples a bit too one-sided for my liking. For example, he argues that banks being “too big to fail” is a hack, but there were lots of other factors involved in the 2008 Global Financial Crisis, and I’m not convinced that being “too big to fail” was really the main one. Similarly, Schneier suggests that people who oppose social welfare programs use administrative burdens as a hack to stop people claiming them. However, I recently finished Recoding America, which takes a closer look and argues that it’s more often a problem of miscommunication and a risk-averse culture where public servants are incentivised to follow processes over delivering outcomes. In general, I would’ve preferred fewer examples covered in greater depth—especially when it came to the broader societal or cognitive hacks.
The metaphor of “hacking” to describe the subversion of our political and cognitive systems was an intriguing one that sheds new light on some social problems. It underscores how amorphous things like norms and impersonal trust are crucial to the functioning of society, not just the written laws that are made explicit.
Upon finishing A Hacker’s Mind, I’m left with a dilemma. People need to trust their social systems in order for those systems to work. Yet it can’t be a blind trust where we assume everything will always work. Instead, trust has to be broadly shared (so that few people even try to hack the system), conditional (so we respond swiftly to the few hacks that get through), and well-targeted (so we don’t fall into “sloppy cynicism” and see everything as a hack). But what happens when hacking has reached the highest levels of our cognitive and political systems and cynicism is all around us? Is it possible to rebuild social trust once it’s been broken? I don’t know.
Let me know what you think of my summary of A Hacker’s Mind in the comments below!
Buy A Hacker’s Mind at: Amazon <– This is an affiliate link, which means I may earn a small commission if you make a purchase through it. Thanks for supporting the site! 🙂
If you enjoyed this summary of A Hacker’s Mind, you may also like:
- Book Summary: Thinking in Systems by Donella Meadows
- Book Summary: The Big Short by Michael Lewis
- Book Summary: How Democracies Die by Steven Levitsky and Daniel Ziblatt
2 thoughts on “Book Summary: A Hacker’s Mind by Bruce Schneier”
> The metaphor of “hacking” to describe the subversion of our political and cognitive systems was an intriguing one that sheds new light on some social problems. It underscores how amorphous things like norms and impersonal trust are crucial to the functioning of society, not just the written laws that are made explicit.
I found it interesting you say this, given that “hacking” originates from computer systems, which LACK the amorphousness and nuance of social systems. Maybe that is why hacking is so dangerous in these latter systems?
I think hacks of computer systems can be just as dangerous. Computer systems tend to be easier to patch than social systems, but even then it won’t necessarily undo the damage (e.g. SolarWinds).
I also liked the point about systems existing in larger systems, and hackers being able to target different levels. So computer systems can still indirectly rely on things like trust, as the spear-phishing examples show. And the line between a computer system and social system may not always be clear, particularly if we hand over more and more functions to computers.