Update: I gave a talk at Secure360 2024 on Security Differently, which included this discussion of the CISO role and a promotional video about how the CISO should be like the CFO.
Lately I’ve been thinking about the role of the CISO and Security and how it compares to the CFO and Finance. It started with two simple questions: “Who is responsible for security?” and “Who is responsible for meeting your budget?”
I suspect that many people would answer the first question with “Security” or “the CISO” while few would say that Finance or the CFO are responsible for meeting the budget. Put more eloquently by my colleague Chris Brown,
“We don’t ask the CFO to make the company profitable, but we do ask the CISO to make the company secure.”
Why the difference? I believe that organizations understand that while Finance can set strategy and keep track of income and expenses, financial success is driven by everyone in the organization. We too need to recognize that it is the organization, not the security team, that creates security, or more directly: the CISO can’t make the company secure.
The need for change
While there is evidence that attitudes towards the cybersecurity team are changing, I believe recent regulatory actions will accelerate this change.
With the new SEC cybersecurity incident disclosure rule and action against the CISO of SolarWinds, I believe cybersecurity is having its Enron moment. Especially when the SolarWinds action was announced, I saw comments along the lines of “if the SEC can take action against a CISO for a breach, no one is safe,” which I think misses the point. Both the disclosure rule and the action against the SolarWinds CISO are measures to ensure proper reporting of security posture, much the same as how the Sarbanes-Oxley Act established rules for financial reporting with clear legal responsibilities for the CFO.
From the SEC press release: “In its filings with the SEC during this period, SolarWinds allegedly misled investors by disclosing only generic and hypothetical risks at a time when the company and Brown knew of specific deficiencies in SolarWinds’ cybersecurity practices as well as the increasingly elevated risks the company faced at the same time.”
These actions send a clear message to publicly traded companies and their CISOs: you must accurately report on cyber risks and controls. I sincerely hope and believe that this will help transform cybersecurity to a team sport at the CEO level. As the SEC complaint reveals, the company and CISO were incentivized to provide public assurances that security controls were effective despite internal discussions to the contrary, to avoid bad press and maintain investor confidence.
By taking this action, the SEC is creating a new incentive, and providing cover, for CISOs to accurately present the security posture of publicly traded companies, both to the executive leadership team and the public. It is notable that the complaint lists the SolarWinds Chief Executive Officer, Chief Financial Officer, Chief Technology Officer, and Chief Information Officer among the "other relevant persons and entities," and states that "Brown failed to ensure that other senior executives were sufficiently aware of, or understood, the severity of cybersecurity risks, failings, and issues that he and others knew about."
How to change (with metrics)
As I argued in Security Differently, part of the change is to shift from a top-down to a bottom-up approach, and acknowledge that everyone in the organization has a role to play in creating security. Like the CFO and Finance, the CISO and the Security organization can set strategy, but must also provide the organization and its departments with the equivalent of a budget - key metrics that provide useful and timely feedback on how security performance at all levels aligns to company goals.
Security metrics are hard. Entire books have been written about them, and there was once an active community dedicated solely to security metrics. And, as we know from safety, lagging indicators like security incidents aren't good metrics, while many leading indicators merely measure the work of the security team - audits and the like. Thankfully, research over the past 10 years gives us some useful candidates.
Many leading indicators can be found outside of security; the DORA research program, originally started by Nicole Forsgren, has found that security both influences and is influenced by the four measures of DevOps performance: deployment frequency, lead time for changes, time to restore service, and change failure rate. Work by Stephen Magill and Gene Kim found that "most [open source] projects stay secure by staying up to date [on dependencies]." And multiple reports published by Cyentia support the notion that measures of DevOps performance and proactive refreshes of technology improve cybersecurity. Traditional quality metrics, like version currency (N, N-1) and how quickly bugs are resolved, are good leading indicators of success, and easily understood by technology teams.
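As an illustration, the sketch below computes two of these leading indicators - dependency version currency and DORA change failure rate - from simple records. The data and field names are hypothetical; in practice they would come from your package manager and CI/CD system.

```python
from dataclasses import dataclass

# Hypothetical records; real data would come from dependency scans and deploy logs.
@dataclass
class Dependency:
    name: str
    versions_behind: int  # 0 = current (N), 1 = one release behind (N-1), ...

@dataclass
class Deployment:
    service: str
    caused_failure: bool  # did this change require remediation (rollback, hotfix)?

def version_currency(deps):
    """Share of dependencies at N or N-1 -- a leading indicator of currency."""
    current = sum(1 for d in deps if d.versions_behind <= 1)
    return current / len(deps)

def change_failure_rate(deploys):
    """One of the four DORA measures: fraction of changes causing a failure."""
    failures = sum(1 for d in deploys if d.caused_failure)
    return failures / len(deploys)

deps = [Dependency("libfoo", 0), Dependency("libbar", 1), Dependency("libbaz", 3)]
deploys = [Deployment("api", False), Deployment("api", True),
           Deployment("web", False), Deployment("web", False)]

print(f"Version currency (N/N-1): {version_currency(deps):.0%}")    # 67%
print(f"Change failure rate:      {change_failure_rate(deploys):.0%}")  # 25%
```

Both numbers are simple ratios that a technology team can track and act on without security expertise.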
Lagging indicators - like security incidents - can be reframed in terms of security performance. We have limited control over how often we're exposed to security threats, but much more control over how we detect, respond, and recover. The percentage of security incidents that were detected and contained before major impact is a potentially useful metric.
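Reframed this way, the metric is straightforward to compute. A minimal sketch, with hypothetical incident records - in practice the containment judgment would come from post-incident review:

```python
# Hypothetical incident log; "contained_before_impact" would be determined
# during post-incident review, not automatically.
incidents = [
    {"id": "INC-101", "contained_before_impact": True},
    {"id": "INC-102", "contained_before_impact": True},
    {"id": "INC-103", "contained_before_impact": False},
    {"id": "INC-104", "contained_before_impact": True},
]

def containment_rate(incidents):
    """Percentage of incidents detected and contained before major impact.

    Unlike a raw incident count, this measures performance: how well the
    organization detects, responds, and recovers.
    """
    contained = sum(1 for i in incidents if i["contained_before_impact"])
    return 100.0 * contained / len(incidents)

print(f"Containment rate: {containment_rate(incidents):.0f}%")  # 75%
```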
Finally, Cyber Risk Quantification (CRQ), itself over 10 years old, can help organizations make better decisions about security investments. While it can be labor intensive, estimating the risk reduction of a particular security investment in monetary terms is really the only way to fairly compare security spending against other possible projects.
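For instance, a back-of-the-envelope CRQ comparison might estimate annualized loss expectancy before and after an investment via Monte Carlo simulation. All figures below are hypothetical, and the uniform loss distribution is a simplification - real CRQ models typically use lognormal or similar heavy-tailed distributions.

```python
import random

random.seed(42)  # reproducible illustration

def annualized_loss(freq_per_year, loss_low, loss_high, trials=100_000):
    """Monte Carlo estimate of annualized loss expectancy (ALE).

    freq_per_year: expected number of loss events per year.
    loss_low/loss_high: per-event loss range (uniform, for simplicity).
    """
    mean_loss = sum(random.uniform(loss_low, loss_high) for _ in range(trials)) / trials
    return freq_per_year * mean_loss

# Hypothetical scenario: a control that cuts breach frequency from once
# every two years to once every ten years.
ale_before = annualized_loss(0.5, 100_000, 1_000_000)
ale_after = annualized_loss(0.1, 100_000, 1_000_000)
risk_reduction = ale_before - ale_after
investment_cost = 150_000  # annualized cost of the control (hypothetical)

print(f"Estimated annual risk reduction: ${risk_reduction:,.0f}")
print(f"Annualized investment cost:      ${investment_cost:,}")
```

Because the risk reduction is expressed in dollars, it can be ranked directly against other projects competing for the same budget.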
Change is hard
I firmly believe that security should be run more like finance. I must also acknowledge that this change is hard. There are more unsolved than solved problems, and security as a professional discipline is much younger than finance; after all, double-entry bookkeeping has been in use for over 500 years. We have much work to do, but would be well served by following the principles of Finance in Security.
Getting rid of deadlines improves safety and security
I was recently discussing the safety of Artificial Intelligence (AI) with a colleague. He sent me a link to an episode of the Medicine & Machine Learning podcast featuring Munjal Shah, the co-founder and CEO of Hippocratic AI. In our conversation, my colleague mentioned a quote from the podcast on deadlines:
“You can’t say you have a deadline and say you’re safety-first. One of those things is not true.”
This resonated with me for a specific reason. One of the most effective software engineering teams I've worked with was Core Engineering. As the name suggests, Core Engineering created "core" libraries and tools for other software engineering teams to use, like a standardized logging facility and components for authentication and authorization. (As at many large companies, there was value in creating components tailored to our environment.) This required writing code, along with high-quality documentation and examples to make the components accessible to a broad audience of developers with different skill levels and experience at the company.
The Core Engineering team was notable for a couple of key reasons. First, they were high performing in nearly all aspects of software development: their software was high quality, the few bugs (including security flaws) that were discovered were fixed quickly, they had high levels of automation including automated tests, they wrote excellent documentation, and they had effective leadership. Second, and more importantly, they typically did not have hard deadlines for their work. Because the software they built wasn't directly used by clients or internal teams, they had a high level of control over the timing of their releases.
Hard deadlines
Why was the lack of hard deadlines so important to their performance? It changed the incentives for the team, giving them the time and space to focus on building it right - put another way, on being safety-first.
Because the components the team built were used by many applications, a failure would have broad impact. Additionally, the team wasn't dependent on client or internal stakeholder funds. Taken together, this meant that the cost and risk of missing a date were low, while the impact of an outage or security flaw was much higher. Almost by accident, the work environment favored prioritizing work to promote confidentiality, integrity, and availability, and so the team did.
Practical Advice
So, to create a safety-first environment, you can’t have deadlines. Just get rid of them and all will be good, right? If it were that easy…
One of the things I’ve learned from safety is that there are always conflicts and trade-offs to manage. Getting rid of deadlines is just not practical; there will always be situations where there is a very real cost of not completing work on time, which in some cases can be quite large (compliance to a new government regulation comes to mind).
So what can be done? This is where safety and security professionals can help: by understanding and reducing goal conflicts, and by supporting decisions to prioritize safety or security ahead of deadlines when the risks are too great. Simply being mindful of the conflict helps manage it - leaders can create time and space for teams to prioritize safety and security as best they can, which may mean less under a deadline and more once the deadline has passed. Put another way, they can choose to take on security tech debt in the short term to meet an important client obligation, then pay it back by raising the priority of security once production pressures are reduced.
While this won’t always work, acknowledging that the conflict exists and that it will never be fully resolved is a good start.
For a while, I've been seeing evidence that cybersecurity, especially traditional security, has been stagnant: adding security controls hasn't appreciably improved outcomes, and we continue to struggle with basic problems like vulnerabilities. (As Cyentia discovered and reported in its Prioritization to Prediction series, organizations of all sizes fix only about 10% of their vulnerabilities in any given month.)
Many organizations have accumulated 20+ years of security policies, standards, and controls, without significantly removing rules that may no longer be needed, and organizations of all sizes continue to experience security breaches.
Safety faced a similar problem 10-15 years ago. Safety scientists and practitioners saw that safety outcomes were stagnant and looked for new approaches. One of these, Safety Differently, was created by the notable safety science academic Sidney Dekker in 2012. It was part of an emerging acknowledgement that the traditional method of avoiding accidents through policies, procedures, and controls was no longer driving improvements in safety.
Safety Differently argues that three main principles drive traditional thinking:
Workers are considered the cause of poor safety performance. Workers make mistakes, they violate rules, and they ultimately make safety numbers look bad. That is, workers represent a problem that an organization needs to solve.
Because of this, organizations intervene to try and influence workers’ behavior. Managers develop strict guidelines and tell workers what to do, because they cannot be trusted to operate safely alone.
Organizations measure their safety success through the absence of negative events.
Safety Differently advocates a switch from a top-down to a bottom-up approach, adopting new principles:
People are not the problem to control, they are the solution. Learn how your workers create success on a daily basis and harness their skills and competencies to build a safer workplace.
Rather than intervening in worker behavior, intervene in the conditions of their work. This involves collaborating with front-line staff and providing them with the right tools and environment to get the job done safely. The key here is intervening in workplace conditions rather than worker behavior.
Measure safety as the presence of positive capacities. If you want to stop things from going wrong, enhance the capacities that make things go right.
What does this have to do with cybersecurity? I believe that we’re seeing the same thing in security: historically, we’ve focused on constraining worker behavior to prevent cybersecurity breaches, and the limits of that approach are becoming increasingly clear. Adapting concepts from Safety Differently offers a solution, by supporting success and focusing on positive capacities: Security Differently.
Adopting Security Differently
In practical terms, what would adopting Security Differently look like? The Safety Differently Movie provides good insights into how this would apply to security and evidence of its effectiveness:
Most importantly, the organization’s top leadership must take responsibility for security. Since security performance can’t be separated from organizational performance, security can’t be “the CISO’s problem” or even “the CIO’s problem.” A key part of this shift is acknowledging that it is our workers - not our security team - that create security.
A clear shift in ownership of security performance to the Operations and Engineering teams. As I argued in a 2021 talk, many positive security outcomes are well within the capabilities of Technology organizations. One example is vulnerability management, which is solved through proactively updating software and refreshing technology - something all technology teams can do.
Likely, a much smaller security team. The head of health and safety at Origin noted that he cut the size of his team from 20-30 people to 5 after his team gave up safety performance management. This doesn't necessarily mean that security spending is significantly reduced, because ownership of security performance shifts to the rest of the organization.
A focus on positive measures of security performance (Security Metrics). A security team is still needed to measure security outcomes, like successfully defending against an attack, as well as measures that have been shown to contribute to success, like security updates and secure configuration.
A significant reduction of security policies and procedures, along with training on Security Differently concepts. The Australian grocer Woolworths found that the combination of eliminating national safety procedures and training in Safety Differently led to the best outcomes: fewer accidents in the store, along with the highest levels of safety ownership and engagement.
Asking, not telling. While ownership shifts to the Technology team, security expertise is still needed: to coach and support security performance - advising developers on how to fix security bugs - and to develop new capacities to address novel threats (the SolarWinds attack is an example). Simply asking teams "what do you need to be secure?" is a key part of improving their performance.
In an organization that has fully adopted Security Differently, top leadership (CEO/CIO) sets security goals, the Security team keeps score through evidence-based metrics aligned to those goals, provides expertise and support to achieve the goals, and develops or acquires new defenses when new threats emerge (in practice, this happens infrequently).
Importantly, investment in Security Differently is not a cost; rather, it is an investment in improved organizational performance. Changing the focus from preventing bad outcomes to creating positive outcomes and developing organizational capacities not only improves security, but also improves quality, engagement, and overall organizational performance. (And it reduces incident response costs!) Evidence of this effect can be found in the DORA research, which can be summarized as "performance begets performance": the technical capabilities of DevOps, including shifting left on security, improve software delivery performance and ultimately organizational performance.
Adopting Security Differently can improve both efficiency and outcomes, much like the traffic experiment from Safety Differently: when traffic engineers removed traffic controls from a key mixed-use intersection in Drachten, they forced people to take greater responsibility for safety. What looked riskier on the surface was in fact much safer, reducing annual accidents from 10 to 1, and it also eliminated gridlock.
In a future article I will continue to explore the idea of reimagining the role of security through related work in safety.