Human factors and phishing
By the end of this module you will be able to:
- Classify a given attack as phishing, spear-phishing, whaling, vishing, smishing, or pretexting
- Identify which of Cialdini's principles of influence is being exploited in a social engineering scenario
- Evaluate a proposed security awareness training programme against evidence on what works
- Describe the characteristics of a reporting culture that makes security incidents easier to detect

Real-world incident · August 2019
$37 million transferred. No malware. No hacking. Just a believable email and a sense of urgency.
In August 2019, Toyota Boshoku Corporation, a major Toyota Motor Corporation supplier, lost approximately $37 million in a single bank transfer. An employee in a finance role received an email that appeared to come from a known business partner. The email requested an urgent update to bank account details for a supplier payment. The employee complied. The payment went to an account controlled by fraudsters.
No malware was used. No system was technically compromised. The attacker crafted a believable email, created urgency, and exploited the employee's trust in the apparent sender. This is BEC (Business Email Compromise), a subset of social engineering that the FBI's IC3 (Internet Crime Complaint Center) reported caused over $2.7 billion in losses in 2022 alone, making it one of the highest-revenue cybercrime categories globally.
Technical controls (MFA, firewalls, endpoint protection) protect what they can see. When an attacker convinces a legitimate user to act on their behalf, those controls are bypassed entirely. Understanding why humans are targeted, and what the evidence shows reduces the effectiveness of that targeting, is essential for building defences that extend beyond technology.
The employee followed the correct process for supplier payment updates. The attacker simply made themselves look like the legitimate counterparty in that process. Where, then, does the defence need to sit?
With the learning outcomes established, the module begins with a taxonomy of social engineering attacks.
7.1 Social engineering: the taxonomy
Social engineering is the manipulation of people rather than systems to gain unauthorised access, extract information, or trigger an action. Each attack type uses a different channel or targeting approach.
Phishing uses deceptive email campaigns sent to large numbers of recipients hoping a percentage will respond. The Toyota Boshoku attack was a targeted form called spear-phishing, where the attacker tailored the message to a specific individual using researched context (their role, their supplier relationships, the urgency of payment processing). Whaling is spear-phishing targeted at senior executives (CEOs, CFOs) who have authority to authorise large transfers or access sensitive systems.
Vishing (voice phishing) uses phone calls, often with spoofed caller ID, to impersonate authority figures such as bank fraud teams, IT support, or government agencies. The Twitter 2020 breach (Module 6) used vishing: attackers called Twitter support staff impersonating IT colleagues. Smishing uses SMS text messages with malicious links or fraudulent instructions.
Pretexting involves constructing a fabricated scenario (a pretext) to gain trust. An attacker who calls a company reception claiming to be a new IT contractor needing building access is using pretexting. Many sophisticated attacks combine multiple techniques: a smishing message creates urgency, a follow-up vishing call confirms it, and the target complies with a fraudulent request.
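The taxonomy above distinguishes attacks along two axes: delivery channel and targeting precision. A minimal sketch of that classification logic, purely for illustration (the table and function names are our own, not from any standard):

```python
# Illustrative mapping of Section 7.1's taxonomy. Each attack type is keyed
# by (channel, targeting); "any" means the channel alone determines the label.
TAXONOMY = {
    ("email", "mass"): "phishing",
    ("email", "individual"): "spear-phishing",
    ("email", "executive"): "whaling",
    ("voice", "any"): "vishing",
    ("sms", "any"): "smishing",
}

def classify(channel: str, targeting: str) -> str:
    """Return the attack type for a given channel and targeting precision."""
    for (ch, tgt), label in TAXONOMY.items():
        if ch == channel and tgt in ("any", targeting):
            return label
    # A fabricated scenario delivered in person (or via any other channel)
    # falls back to the broader category of pretexting.
    return "pretexting"

print(classify("email", "executive"))      # whaling
print(classify("in-person", "individual"))  # pretexting
```

Note that real attacks blur these boundaries: the combined smishing-plus-vishing campaign described above would match two labels, which is why the taxonomy is a vocabulary for analysis, not a detection rule.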
With this taxonomy in place, the next question is why these attacks work so reliably. Cialdini's principles of influence provide the answer.
7.2 Why people comply: Cialdini's principles of influence
Social engineering attacks are effective because they exploit psychological patterns that are not vulnerabilities in the usual sense: they are features of normal human social behaviour. Robert Cialdini's six principles of influence, documented in his 1984 book "Influence: The Psychology of Persuasion," provide a structured vocabulary for understanding why.
- Authority: people follow instructions from perceived authority figures. An email appearing to come from the CEO requesting an urgent fund transfer exploits authority.
- Urgency and scarcity: artificial time pressure prevents careful verification. "This must be processed today or the contract is lost."
- Social proof: people follow what others appear to be doing. "Everyone in your department has already completed this security update."
- Liking: people comply more readily with those they like or identify with. Attackers research social media to find shared interests, mutual connections, and familiar names.
- Reciprocity: people feel obligated to return favours. A helpdesk attacker who offers to help a user with a problem first may then ask for something in return.
- Commitment and consistency: once someone has agreed to a small request, they are more likely to agree to a larger one. "You said you wanted to keep your account secure? Then I just need you to verify this code."
“Social engineering exploits well-established cognitive biases. The most effective defenses combine technical controls that make compliance difficult with procedural controls that make verification easy.”
NIST SP 800-50 Rev.1, Building a Cybersecurity and Privacy Learning Program - Section 2.3, Human Factor Threats
NIST SP 800-50 Rev.1 is the US government's reference for designing security awareness training programmes. It explicitly acknowledges that cognitive biases are exploited by social engineering and that training alone is insufficient; technical and procedural controls must complement it.
“Social engineering attacks exploit human psychology rather than technical vulnerabilities. The most effective countermeasures combine technical controls (email authentication, URL filtering) with security awareness training that teaches recognition of manipulation techniques, not just identification of known attack signatures.”
SANS Security Awareness Report 2023: Managing Human Risk, Section 3: Training Effectiveness Metrics
Understanding why people comply raises the practical question this module turns to next: what does the evidence show actually works in awareness training?
7.3 What evidence shows works in awareness training
Security awareness training is the most common organisational response to human factor risk. The evidence on what actually works is more nuanced than simply delivering an annual e-learning module.
Proofpoint's State of the Phish 2023 report found that 84% of organisations experienced at least one successful phishing attack in 2022. Organisations with regular simulated phishing exercises (monthly or quarterly) reported lower click-through rates than those with annual training alone. Simulations work best when followed immediately by targeted feedback explaining what the simulated attack used and how to recognise it, rather than simply recording that the employee clicked.
Spaced learning (short, frequent sessions rather than one long annual module) shows better retention in academic research. Role-specific training that tailors scenarios to the actual threats relevant to a person's role (finance staff receive BEC scenarios; IT staff receive credential phishing scenarios) outperforms generic content.
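The core metric behind these comparisons is the simulated-phishing click-through rate. A minimal sketch of how a security team might compute it per cohort; the cohort figures below are chosen to mirror the Proofpoint 2023 numbers quoted later in this module (26% for annual training, 13% for monthly), and the data structure itself is our own illustration:

```python
def click_rate(clicked: int, delivered: int) -> float:
    """Click-through rate as a percentage of delivered simulations."""
    return 100 * clicked / delivered

# (clicked, delivered) per training cadence; illustrative counts only.
campaigns = {
    "annual-training cohort": (130, 500),
    "monthly-training cohort": (65, 500),
}

for cohort, (clicked, delivered) in campaigns.items():
    print(f"{cohort}: {click_rate(clicked, delivered):.0f}% click rate")
```

Tracking this rate per role and per campaign, rather than as a single organisational average, is what makes role-specific training measurable.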
Common misconception
“Annual security awareness training is sufficient to address human factor risk.”
Training addresses knowledge and intention, not behaviour under pressure. A finance employee who receives annual phishing training may still comply with an urgent, well-crafted BEC email under time pressure because the cognitive biases being exploited operate below conscious awareness. Process controls, such as mandatory verification of bank account changes via a pre-established phone number, and technical controls, such as email gateway filtering and MFA, must complement training. Neither training alone nor technology alone is sufficient.
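The process control described above, mandatory verification of bank account changes via a pre-established phone number, can be sketched as code. This is an illustration of the principle, not a real payment system; function and field names are our own:

```python
def apply_bank_change(request: dict, on_file_phone: str) -> bool:
    """Apply a supplier bank-detail change only after callback verification
    against the phone number already held on record."""
    # The attacker's email may helpfully supply its own "confirmation"
    # number; verification must never use contact details from the request.
    if request.get("callback_number") != on_file_phone:
        return False
    return request.get("callback_verified", False)

# A fraudulent request supplying its own confirmation number is rejected,
# regardless of how urgent or convincing the accompanying email was:
fraud = {"new_iban": "XX00FRAUD",
         "callback_number": "+44 7700 900999",  # attacker-controlled
         "callback_verified": True}
print(apply_bank_change(fraud, on_file_phone="+44 20 7946 0000"))  # False
```

The design choice matters: because the check uses only data the organisation already holds, it cannot be defeated by anything the attacker puts in the email.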
Common misconception
“Security awareness training eliminates the phishing risk for organisations that complete it annually.”
Annual security awareness training reduces click rates on simulated phishing exercises but does not eliminate them. Research from Proofpoint's 2023 State of the Phish report found that organisations with annual training had a 26% simulated phishing click rate; monthly training reduced this to 13%. Spear-phishing targeting specific individuals with personalised content bypasses generic awareness training. The most resilient defence layers are: technical controls (DMARC enforcement, sandboxed URL detonation, anti-impersonation policies) that block attacks before users see them; followed by training that reduces click rates on what gets through; followed by easy-to-use reporting mechanisms that enable rapid response when attacks succeed.
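One of the technical controls named above, DMARC enforcement, is driven by a DNS TXT record published at `_dmarc.<domain>`. As a hedged sketch, the record format can be parsed with nothing but the standard library (the record string below is an example, not any real domain's policy; actually retrieving a record would require a DNS lookup, which is out of scope here):

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
# p=reject instructs receiving mail servers to refuse messages that fail
# DMARC alignment, blocking domain-spoofed BEC mail before a user sees it.
print(policy["p"])  # reject
```

A policy of `p=reject` (rather than `p=none`, monitoring only) is what makes DMARC an enforcement control rather than a reporting one.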
A reporting culture is arguably the most underinvested human factor control. When employees fear punishment for reporting suspected incidents (or for clicking a phishing link), organisations lose their most valuable early detection signal. The Verizon Data Breach Investigations Report (DBIR) consistently shows that insider reporting catches incidents significantly faster than automated detection alone.
Building a reporting culture requires leadership modelling (senior staff report openly when they receive suspicious communications), blameless incident response for honest mistakes, and clear reporting channels that do not require navigating a complex helpdesk process. The goal is to reduce the time between an employee noticing something unusual and the security team knowing about it.
Scenario questions
An employee in accounts payable receives an email apparently from the CFO saying: 'I need you to process an urgent wire transfer today for an acquisition we can't discuss publicly yet. Do it before 5pm and don't mention it to anyone.' Which social engineering technique and Cialdini principle are most clearly being used?
An attacker calls a company's reception claiming to be a new IT contractor, mentions the name of the actual IT manager (found on LinkedIn), and asks for temporary building access while they 'set up the server room.' Which technique is this, and which Cialdini principle is most directly being exploited?
A CISO is proposing a security awareness programme that consists of one 30-minute e-learning module per year, with a pass mark of 80% on the final quiz. Based on evidence about what reduces phishing click rates, which critique is most valid?
A CFO receives an email appearing to come from the CEO's email address asking them to urgently transfer £95,000 to a new supplier account before end of business. The email uses the CEO's name, references a real ongoing project, and includes a mobile number to call for confirmation that connects to the attacker. The CFO is under time pressure. Which controls would most effectively prevent this Business Email Compromise attack?
Key takeaways
- Phishing, spear-phishing, whaling, vishing, smishing, and pretexting are distinct attack types differing in targeting precision and channel. BEC is among the highest-revenue cybercrime categories globally.
- Social engineering exploits Cialdini's principles of influence: authority, urgency, social proof, liking, reciprocity, and commitment. These are normal cognitive patterns, not individual weaknesses.
- Effective awareness training combines spaced learning with simulated phishing exercises, immediate feedback, and role-specific scenarios. Annual e-learning alone does not reduce phishing click rates significantly.
- A reporting culture provides the fastest early detection signal. Employees who fear punishment for reporting incidents remove a critical detection layer. Blameless reporting must be actively reinforced.
- Training alone is insufficient. Process controls (verification of payment changes via a pre-established channel) and technical controls (email filtering, MFA) must complement human factor training.
You now understand why humans are targeted and what evidence shows works in training. But when a breach does happen - and the Foundations modules have shown that breaches are inevitable - what legal obligations does data protection law impose? Module 8 covers UK GDPR, lawful bases for processing, data subject rights, and the 72-hour breach notification rule.
Standards and sources cited in this module
NIST SP 800-50 Rev.1, Building a Cybersecurity and Privacy Learning Program
Section 2.3, Human Factor Threats
US government reference for security awareness programme design. Establishes that training must be complemented by technical and procedural controls. Cited in Section 7.3.
NCSC UK, Phishing guidance (2023)
Full guidance document
UK government guidance on phishing attack types and defences. Referenced in Section 7.1.
Proofpoint State of the Phish 2023
Key findings: organisational phishing rates and training effectiveness
Primary industry data source for phishing prevalence and the relative effectiveness of simulated phishing exercises vs annual training. Cited in Section 7.3.
Cialdini, R. (1984). Influence: The Psychology of Persuasion
Chapters 1-7 (principles of influence)
Academic foundation for the influence principles exploited by social engineering. Cited in Section 7.2.
FBI IC3 Business Email Compromise Report 2022
BEC losses statistics
Source for the $2.7 billion BEC loss figure in 2022. Used in the opening case study context.
Module 7 of 25 · Cybersecurity Foundations

