As I posted yesterday, I’m working through cataloging my past presentations, and I’m nearly done! Today I’m sharing a talk from SIRAcon 2022 that’s quite different from what I typically do, “Making R Work for You (With Automation)”.
Many SIRA members do data analysis as part of their work and present the results of that analysis at SIRAcon. But we rarely talk about the mechanics of our craft: how we actually go about doing data analysis. In 2019, however, Elliot Murphy gave a talk on exactly that, showing how to use Jupyter (Python) and R Notebooks for data analysis. His presentation inspired me to start working with R using R Notebooks, and I wanted to share what I’d learned, and built, to automate my workflow.
I think the talk went reasonably well, although it was hard to say for sure, as the conference was once again virtual that year. Unfortunately, some of the people I most wanted to reach couldn’t attend, so I didn’t get their feedback at the time - although one of them later watched the replay and shared that my approach was similar to his.
Aside from learning how to write better R code, I learned a couple of things from the experience (both doing it and talking about it):
Doing something brings deeper knowledge than reading about it. One of my goals with R was to learn good software engineering practices (documentation, testing, source code control, etc.), including DevOps practices (continuous integration and continuous delivery, CI/CD). While my experience was limited mainly to my own projects, I came away with a better and deeper understanding of what it’s like to write modern software.
If writing software were more physically demanding, we’d probably do a better job of creating tools and automation to help with the writing. As I noted in my talk, the carpenters who worked on our house spent a whole day setting up their environment to make it easier to move the materials they were removing to the dumpster, rather than trying to brute-force the work. Experience and the challenge of physical labor led them to an economy of movement.
A copy of my slides is here, and the visual notes from the talk are below!
As mentioned in my last post, I’ve been cataloging my past talks and am posting the “missing” ones here.
Back in 2018, I spoke at Secure360 on “Integrating Security into Emerging DevOps”. This was a brand new talk, based on my experiences from my first three years running Application Security at Express Scripts:
Imagine building a software security practice. Now imagine building a security practice while your organization is modernizing software engineering, shifting from Waterfall to modern Agile/CI/DevOps.
Teams are excited: Agile means more freedom, less bureaucracy, less security. Security rules are blockers; they prevent software from being written and deployed, and are problems to be removed. The security team resists, worrying that Agile will only mean security bugs get pushed into production faster.
In fact, modern software engineering and security are entirely compatible; the rigor and discipline that comes with DevOps supports strong security. The challenge is that security must evolve as the organization evolves, and must be part of the natural flow of how engineers develop software today.
This session will present solutions for building security into a modern software engineering organization that reduce friction, making the engineers happy, and reduce security issues, making the security team happy. By understanding the motivations and habits of software engineers, we can design security controls that satisfy both groups.
I’ve been meaning to post this talk for some time, as it was well received and a good case study on integrating security into a software engineering practice. Much like when Herbie Hancock hired funk musicians to play jazz on his fusion album Head Hunters, we hired people with a software engineering background to do security: our security engineers were developers, and our application security testing team had a QA background. In this way, we were able to extend the software engineering practice into security and avoid much of the conflict that can occur when staffing AppSec with traditional security professionals.
As I work through cataloging presentations I’ve done this week, I’ve come across a few that I haven’t yet posted here (or on https://transvasive.com). I’ll be posting them here over the next three days.
One of the “missing” talks was a short slide deck I put together as part of a “Papers We Love” discussion on Learning from Cyber Incidents: Adapting Aviation Safety Models to Cybersecurity, a paper published by a working group organized by Harvard’s Belfer Center to explore the concept of creating a “Cyber NTSB”.
I came across this paper having met one of the lead authors, Adam Shostack. Adam in particular has been interested in creating a “Cyber NTSB”, an idea we share, although I take a broader interest in adapting safety science to cybersecurity.
The paper is well written, and the workshop seemed well thought out: it included presentations from people actually working at the NTSB, grounding the discussion in work-as-done rather than work-as-imagined at the NTSB. It also included a session on cross-domain learning led by the psychologist and safety scientist David Woods; as I discovered in my own studies, safety doesn’t translate directly between domains (for example, between aviation and marine safety). The findings are sound, follow current safety science thinking, and are included in the slides.
For me, the practical takeaways were and remain:
A recurring theme is blame, and how the NTSB specifically avoids assigning liability in accident investigations, since avoiding blame improves learning.
There are domain-specific challenges unique to security; don’t blindly copy what works in aviation safety.
Near-miss reporting is an important complement to incident investigation; share stories of the close calls.