“The government pretty much sucks at everything” — Ben Shapiro
The cyber security industry is under attack. However, instead of communist China being the threat actor, the Australian federal government is the threat actor. The federal government proposes to protect citizens’ data from China by acting like China: compelling companies that run critical infrastructure to grant the government access to their networks and systems, under certain circumstances.
The full strategy is here. Note that I haven’t commented on most of the strategy.
The Government’s Role in Cyber Security
(In short: The government should concentrate on protecting its own networks and systems, which it fails to do. Likewise, the government should share threat intelligence information with the industry. The government is unable and unwilling to balance its conflict of interest with regard to national security [and the resulting mass surveillance] and protecting individuals’ data. The government has no incentive to protect citizens’ data from itself, because it, too, wants access to the same data.)
The strategy increases the government’s role in the private cyber security industry. The government claims to be “working in partnership with business”. However, partnerships are voluntary, not mandated by force through legislation.
Government Involvement in Security Incidents
The federal government proposes to force companies categorised as critical infrastructure — a definition controlled by legislation — to allow the federal government to “assist” with security incidents.
“The nature of this assistance will depend on the circumstances, but could include expert advice, direct assistance or the use of classified tools”.
“Moreover, Hayek shows a technical issue — often argued by Milton Friedman — with government legislation: It’s rare that legislation is repealed (e.g., “hate speech” laws in the West), rare that nonsensical government programmes are closed down (e.g., Safe Schools in Victoria), and rare that governments decrease in size, power, and influence. For digital privacy, this natural cadence of ever increasing government power can only mean more legislation against citizens’ digital privacy and freedom; this cadence can never mean more digital privacy and freedom. This is the road to digital serfdom”.
Clearly there is a risk that the government could use such legislation to gain access to companies’ networks and IT systems, and hence to citizens’ data. We don’t know what such legislation would look like:
- Could the government force companies to hand over citizens’ data?
- Could the government force companies to weaken their security in line with the Assistance & Access Act in order to grant the government continuous access to the company’s network?
It’s obvious that the federal government has unique knowledge of nation state threat actors. However, interaction between companies and the government needs to be voluntary in order to prevent government overreach.
A Cyber Security Baseline Across the Economy
Who’s best placed to secure a company’s network and IT systems? The government? Or each individual company? What would a cyber security baseline even look like?
At the heart of the problem with enforcing a minimum cyber security baseline is Hayek’s knowledge problem:
“The knowledge of the circumstances of which we must make use never exists in concentrated or integrated form,” explains Hayek, “but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.”
In short: The government cannot understand how to secure a company’s networks and IT systems, because the knowledge to effectively secure these systems rests with separate individuals — cyber security staff, lawyers, vendors, external parties, etc. Multiply one company by all companies that may come under the new legislation, and one suddenly sees that the government can’t possibly deliver one cyber security baseline to rule them all.
Cyber security within organisations is top-down; that is, cyber security strategies are derived from business objectives and a company’s risk profile / risk appetite. Security controls — e.g., security training, password reset procedures, and antivirus; people, process, and technology — are implemented and monitored, but these security controls differ significantly across industries, differ significantly based on risk profile & risk appetite, and a million other reasons (e.g., affordability, cyber security maturity, number of staff to manage the security controls).
Take the example of a manufacturing company and a retailer. Manufacturers often value the integrity of data & processes over the confidentiality of data.
Manufacturers must ensure that production lines do repetitive tasks with a high level of accuracy. This means that Windows computers — interfaced with engineering equipment — on production lines often don’t run traditional antivirus, because new antivirus definitions could affect the ability of the Windows computer to execute repetitive tasks successfully. In such cases, other security controls such as file integrity monitoring (FIM) can be used. However, retailers — whose IT systems are likely to be cloud-based — value confidentiality over data integrity, and hence antivirus is a reasonable security control.
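To illustrate the FIM idea, here is a minimal sketch in Python of what a file integrity monitor does at its core: hash a set of files into a trusted baseline, then later re-hash them and flag anything added, modified, or removed. Real FIM products (which the text doesn’t name) add scheduling, tamper-proof baselines, and alerting; the function names below are my own illustration, not any vendor’s API.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large files don't exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root: Path) -> dict[str, str]:
    """Hash every file under `root` to form a trusted baseline."""
    return {str(p): file_hash(p) for p in root.rglob("*") if p.is_file()}

def detect_changes(root: Path, baseline: dict[str, str]) -> list[str]:
    """Compare the current state of `root` against the baseline."""
    current = build_baseline(root)
    changes = []
    for path, digest in current.items():
        if path not in baseline:
            changes.append(f"added: {path}")
        elif digest != baseline[path]:
            changes.append(f"modified: {path}")
    for path in baseline:
        if path not in current:
            changes.append(f"removed: {path}")
    return changes
```

Unlike antivirus, nothing here scans memory or intercepts execution, which is exactly why it suits a production-line machine: the monitoring can run out-of-band without risking the machine’s repetitive tasks.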
Those who have worked in both industries understand the difference, and this difference cannot be written up in an all-knowing standard. This is years of industry experience, not a static standard without nuance. Granted, the standard may not be as explicit in its required security controls (e.g., “endpoint protection” rather than “antivirus”). However, my point is that any top-down government standard cannot follow the necessary thought processes to reach an applicable security control.
Anyone in the industry understands that standards — such as PCI-DSS — lead to checkbox security: simply ticking boxes to ensure that a security control is implemented, even if that control provides little value compared to other controls that aren’t in the standard.
Likewise, larger companies already have security standards. These standards include security controls such as encryption, data exchange over public networks, and privileged access management.
The complexity of implementing new technology against such standards can be high. For example, no vendor’s product/service meets all standards, some standards simply cannot accommodate an ever changing technology landscape, and hence exemptions and tradeoffs are always needed. In short: No standard can cover a quickly changing technology landscape, the complexity of technology, and the human ability to make tradeoffs.
And don’t even get me started on the complexity of implementing security standards in legacy environments. It’s no wonder security vendors support the strategy; a cyber security baseline is revenue for them.
Possibly Regulating Cyber Security Professionals
Under the guise of “Australia needing more trusted and skilled cyber security professionals”, the strategy suggests an accreditation programme for cyber security professionals. (Here are the differences between certification, accreditation, and licensure.)
Why is this a bad idea?
Cyber security is not a regulated industry. We are not architects. We are not engineers. We are not accountants. We are not lawyers. We don’t build bridges. We don’t do your taxes. We don’t defend you in court.
The argument for accreditation — and indeed licensure — is that individuals cannot assess the quality of the services, and the risk of a doctor/lawyer/accountant making a mistake has real consequences for individuals. However, the cyber security industry is not used by individuals; the industry is used by companies/organisations. Even the arguments for accreditation from the ACCC don’t stack up for the cyber security industry.
The other argument for accreditation is public safety. However, this argument has always been in the realm of physical safety. Cyber security affecting physical safety applies to certain industries (e.g., energy, healthcare) but certainly not all industries.
Accreditation of the industry assumes that a professional body knows which skills are relevant to the industry. If you’ve ever seen the variance in cyber security job descriptions, you know why only a company/organisation can understand the skills required.
Accreditation is a barrier to entry to the industry. How are we meant to hire graduates? Work placements?
Accreditation introduces a professional organisation — over which the industry would have little control — producing a bureaucracy that will be slow to recognise changes in the industry.
Many cyber security certifications already exist, and these certifications are already used as a baseline for the industry. There is a long history of cyber security professionals arguing for and against these certifications. Many very capable cyber security professionals have no certifications, although certifications, in my opinion, are worthwhile under some circumstances. (I’m CISSP and SABSA Foundations certified.)
Most cyber security professionals come from other areas of IT — architecture, operations, development, etc. Providing a barrier to entry will only harm other IT professionals with skills to offer to the industry, especially those who ask for an internal transfer to a cyber security role and hence receive on-the-job training.
There is currently a massive shortage of cyber security skills in the industry. Introducing barriers to entry will only hurt the industry, and hence the industry does not need accreditation.
We need the government to get out of the way. Only we know how to secure our networks and systems, not the government.
Let us do our jobs. The government can’t even protect its own networks and IT systems, and hence its focus should be closer to home.
The industry moves far too quickly to be regulated. Such regulation would harm the hiring of skilled staff, endanger civil liberties, and ultimately fail to produce any workable cyber security baseline across multiple industries and companies with different risk profiles and appetites, and hence significantly differing security controls.