A “Free” Beer Swilling Puppy – The Quandary of Open Source Software Security

The title comes from a conversation with a few people I know, which raised a great point about the support model for popular and, dare I say it, critical or nationally significant open-source projects. This article had been ruminating in my head for a while, and parts of it came out in a few Twitter responses that I felt deserved elaboration rather than 280-character treatises.

Most of the folks I know in the technology industry, whether by title or by the actual work they do, have probably looked at, installed, or used a component from an open-source project as part of their daily work. Fewer of them have contributed patches or content back, and fewer still maintain their own repository or project or sit on a core development team for one. This is not to say anybody is lesser or greater based on their level of involvement. But someone deeply involved in such software has a different level of awareness than a casual user about what it takes to effectively maintain and use this “free” resource.

As the title of this post suggests, for many years, until say the past decade or so, many of those who finally brought open-source software into mainstream public and private sector enterprises viewed the “free” prepended to open source as meaning “I got the software for free, at no cost.” They didn’t have to write a check, hand over a credit card, or slip somebody a few hundreds under the table in order to use the software or components without guilt in their own environment. Or so it seemed, and nothing is ever quite what it seems.

But, let’s crank the clock back a bit on open source software for those who may have little or no experience with it, or who maybe came to this article because a news story referenced it and declared that “the world is now burning”.

One can tie open-source software (OSS) almost entirely to the birth of what is now known as the internet, and specifically to the sharing of computer code among academics over that network. Code had been shared among entities before, via tape, printed listings, punch cards, and various disk storage methods, but the ethos of share, modify, and share back really came into its own as academic network node operators, brought together by the birth and growth of ARPANET, collaborated on projects and improved upon each other’s work. Quite altruistic and egalitarian if you think about it. Granted, most histories document earlier dates going back to the 1950s, but the network really simplified the sharing.

Now, not to get ahead of myself, I will focus a bit on that “simplification” of sharing, which is really much of the impetus for how open source and the projects built around it grew rapidly as ARPANET became the Internet we know today and more and more of the general public gained access to it. When I was younger and “bopped around” via Gopher, what I found were piles of information: library systems, simple public sites (usually for researchers to share data and basic information), and a few universities with department sites. This was shockingly less than what I had found on BBSes at the time; the novelty of what was there, and its irrelevance to a kid in high school, made the Internet experience of the late 80s and early 90s far less than what people use it for today. However, the BBS world introduced me to shareware and the first real round of GNU-licensed open source software, in the form of SLS Linux.

At the time, most shareware was a one- or two-person endeavor, now historically immortalized in stories about the Bay Area computer meets that gave us “The Steves” of Apple fame and others, but “improve upon the code… and share back” was the genesis for the open source licensing to come. Sadly, those initial licenses, like GNU and BSD, were designed in that altruistic, almost hippie-commune spirit of mutual aid and support, not for growth and adoption by larger groups and organizations, which is why several versions and branches were later required. And while those were licenses, there were no real license-term police unless somebody felt significantly harmed and went in for a lawsuit. If you ever want to read up on why some of the licenses in this space exist, watching a complete McKusick “History of BSD” talk or reading up on the USL v. BSDI lawsuit is a better backgrounder.

Again, these were licenses in the legal sense, governing the code but not how it was really used, where it was used, or how often it was used or even maintained. Most of these OS-centric OSS projects were self-regulated and governed by core teams, which were much more academic in how they were formed, managed, and passed on responsibility, and they really were never set up for any kind of commercial encounters in those days. Those core teams, which then fanned out to those who had commit access (known colloquially as “maintainers”), were often in those roles due to technical knowledge, longevity, and willingness to volunteer at the scale of the project, and for committers, the quality and impact of the code they submitted.

This worked for those OSes, and oddly the commits for “ports,” the OS-specific versions of other open source applications such as languages like Perl, compilers like GCC, and various servers, saw their own code quality increase thanks to this loose but formalized quality control. There were also fewer OSS projects at the time, which made it manageable. At the first BSDCon in Berkeley in 1999, Brian Behlendorf admitted that “Apache” really was “a patch-y” project, and the pun captured how those projects began and ran: a series of new ideas or fixes on prior code to get it running in various configurations or to add features. When managed or wrapped in some informal governance, this works to a certain level of scale, especially when it’s all volunteer.

At that same conference in 1999, Wilfredo “Fred” Sanchez of the Apple OS team brought an early release of Darwin, the not-yet-released OS that would become OS X: a BSD core and userland on top of a Mach kernel. The interesting thing about the CD-ROMs shared with folks in the hallway was that Apple, a company known to be very protective and secretive about its intellectual property, was sharing its work back with the community in hopes the community would embrace and support it. My own main claim to fame was being the person who ported the Darwin Streaming Server (the FOSS version of the QuickTime Streaming Server) to FreeBSD through, you guessed it, a series of patches. It took forever to get them integrated, but that was the process to ensure things compiled and ran.

Since then, Apple has released its own open source license and has basically abandoned Darwin as a standalone project. Apache formed a foundation and brought more and more important projects under its supporting auspices, and Linux fractured into a bazillion different distributions, all with their own stories, politics, and lawsuits. Oddly, the BSD side of things generally kept to the four main distributions for many years before a few forked projects appeared. Looking back at all of these, we can learn a bit about the challenges of managing open source projects for performance and stability, what kind of resources are needed for them to be successful, and what we can do now, given that the last decade of major OSS vulnerabilities has created a dumpster fire for security professionals and global organizations.

In 2020, the Open Source Security Foundation (OpenSSF) was founded to try to address, or at least identify, the issues involved in developing and maintaining secure open source software. Reading through the site that was launched, including its “Values,” “Vision,” and “Charter,” the ideals and directions are good… but why was this only cooked up in 2020? It could be a horrible case of too little too late, of trying to stuff the goop back in Pandora’s box given the breadth and depth of software security problems in the open source community. Granted, it replaced the Core Infrastructure Initiative (CII), also under the Linux Foundation umbrella, which was founded in 2014 but barely seemed to offer more than a nod to the rest of the world that we were suddenly taking things seriously.

Mind you, this isn’t to say that closed-source software doesn’t have the same or worse issues, but by its nature it is far less transparent about them in most cases. Often the only way we hear about an issue is through a researcher’s announcement, a news story because a bad actor is actively exploiting a vulnerability, or a release-note item quietly slipped into a patch set or upgrade in hopes few would notice the “whoopsie.” Let’s at least make sure you are aware of that.

The OpenSSF is staffed, on its board and in its leadership but also in its Slack channels, with a number of well-known security professionals and representatives of large and small corporations that provide and supply services heavily reliant on FOSS. While that is good to see, what is missing is even token participation from government CERTs and similar organizations that may have significant policy influence over changing how we approach managing such a challenge at the international, or even just national, level.

CII, noted above, landed at the end of 2015 with a press release announcing a new “badge” system: a way to self-certify adherence to a set of standards, policies, and best practices. Since that time, I have not seen a single project I’ve had to use, recommended for implementation, or bought as a commercial product built on FOSS carry such a marker. Did it matter, does it matter, is it a feel-good set of hoops to jump through, does it actually mean anything? The passing criteria listed on the website are literally the basic, bare-bones things any project should be doing, and of course they favor large FOSS projects that have already baked these practices in. To be brutally honest, these are not super high bars, and even the higher tiers aren’t all that high, especially when, as I noted above, projects were already following a few of these basics back in those early OSS days.

What’s of concern, and really where I diverge somewhat from the direction OpenSSF is headed, is that it is spearheaded by the private sector and some related projects under foundation umbrellas (Apache, Linux, etc.), and its board governance is nearly 100% corporate. From experience, this results in slow, self-interested action. No matter what the statements of values and vision say, direction inevitably ends up driven by corporate needs, because money will always be held over the heads of action and policy.

Good stuff is discussed on their Slack, and good people are in the discussion, but as with other projects where I’ve seen some of those same people involved, there are plenty of ideas and rarely anything operationalized widely. That comes down to the cycles those volunteers have available, but also to the resources and leverage an individual upstart effort has to steer the cargo ship that is policy. So, yes, good ideas and some real efforts are there, but when you consider that things like STIX and TAXII took years to come to fruition and still aren’t widely adopted, it’s going to be slow going. It’s all voluntary, or it’s market-driven, and neither can be relied upon.

There are obviously plenty of prior efforts, smaller efforts, and, of course, future efforts. However, none seem to tie it all together given the hurdles that need to be overcome. Academic discussion often breeds whimsical ideas that merely analyze rather than operationalize. A strictly technical solution misses the policy and political mechanics required to get governments and corporations to buy in, even if it addresses the problem logically. Policy-only solutions, while good of heart, miss the effort, resources, and mechanics required to actually make things happen, and suffer from a lack of visceral awareness of the problem.

Why do I sound so dour about this? Mainly because I’ve been involved at each of these levels in my career, and if it wasn’t one wall to run into or climb over, it was another. People talking past one another, missing voices in the room, or lacking the will or wherewithal to actually do it. It’s often written off as a time or resource constraint, or as something politically unpalatable, or as not jibing with business or commerce needs. Often it comes down to misunderstanding, plain ignorance, or lack of awareness. I’ve seen it at the project level but also at the highest policy levels; it’s endemic.

While helping with the analysis and response the Federal government was performing for the Heartbleed vulnerability in 2014, on detail at the Office of Management and Budget (part of the Executive Office of the President), I observed and experienced one of the major gaps, not just in effectively responding to the issue at hand, but in understanding what led to it and how to potentially address those underlying issues.

The immediate response from OMB, directing DHS, was to ask agencies for an inventory of all their external systems running OpenSSL, with the focus almost entirely on web servers. That’s a good first cut, but as most techies know, and this was before the Software Bill of Materials (SBOM) took off, the OpenSSL library isn’t solely a web server concern. It’s embedded in other applications and software, like application servers, which were soon identified in the long list of vulnerability reports tied to the CVE for the OpenSSL issue. I made them aware of that within the first few moments, and also highlighted its inclusion in mobile devices such as iPhones and Android phones, as well as in a good deal of network equipment.
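
To make that concrete, here is a minimal, hypothetical sketch of what a broader inventory pass could look like on a single Linux host: instead of asking only “which web servers do we run?”, walk the binaries and see which ones dynamically link libssl or libcrypto. The directory paths, the reliance on `ldd`, and the omission of statically linked copies, containers, and vendor appliances are all simplifying assumptions for illustration, not how the actual 2014 response was run.

```python
#!/usr/bin/env python3
"""Rough inventory sketch: find binaries that dynamically link OpenSSL.

Illustrative only: real inventories must also cover statically linked
copies, containers, appliances, and vendored source, which this misses.
"""
import subprocess
from pathlib import Path

# Assumption: typical Linux filesystem layout
SEARCH_DIRS = ["/usr/bin", "/usr/sbin", "/usr/local/bin"]

def links_openssl(binary: Path) -> bool:
    """Return True if `ldd` reports a dependency on libssl or libcrypto."""
    try:
        out = subprocess.run(
            ["ldd", str(binary)], capture_output=True, text=True, timeout=5
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return any(lib in out.stdout for lib in ("libssl", "libcrypto"))

def main() -> None:
    for directory in SEARCH_DIRS:
        for binary in Path(directory).glob("*"):
            if binary.is_file() and links_openssl(binary):
                print(f"{binary} links against OpenSSL")

if __name__ == "__main__":
    main()
```

Even a crude pass like this turns up far more than web servers, which was exactly the point being missed in those first hours.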

The private response to this was “oh, shit,” and of course the guidance within the government, and then to DHS, was updated. Briefings to the National Security Council (NSC) showed that this was a “Twinkie the size of Central Park,” Ghostbusters-scale issue, far bigger than previously considered. In short, very, very bad. What was missed, and only discussed in earnest after the initial response, was why this happened in the first place: the mechanics of open source project maintenance and governance. In surveying those within OMB’s eGov office who were tasked with supporting the response effort, not one of them had contributed to or worked on an open source project. My bet is that, had I been in the other meetings where coordination happened at DHS and the NSC, I’d have found the same zero or near-zero experience.

[Image: “Twinkie the size of Central Park”: awareness of the state of OSS security before Heartbleed]

While it’s not necessary to have had that experience to form an adequate response or policy, having it definitely makes for more informed responses and better policy. It somewhat aligns with the current discourse about a certain “market disruptor” now requiring its engineers to go out and do deliveries, and getting a lot of pushback that such work is beneath them. The fact is that many who come into these roles never had, or completely skipped, any visceral connection with the markets they are developing solutions for. This is not to say that a good software engineer building systems for a pizza company or a convenience store conglomerate needs years of experience in those actual roles. But as somebody with an education and a short career in human factors, user interface (UI) design, and user experience (UX), I can say it helps to build empathy with the people who use those systems, and it makes for better customer and user interactions.

Most software engineers now, with the advent of agile development cycles and the push toward DevOps and DevSecOps, are required to be jacks- and janes-of-all-trades, servicing the development, securing, and operations of the systems they create. That’s a lot of load and responsibility for folks who may have picked this up as a hobby, through a college degree, or through a coding boot camp. Ever complain about a UI that made no sense or just failed miserably? Most likely it was either 1) designed by committee, or 2) designed by somebody who was never going to use it once released. Getting people to “walk a mile in the shoes” of users, employees, or customers is vital but often overlooked in modern development shops.

The same can be said for policy development. This is why it matters to have policymakers with diverse backgrounds, preferably from the communities and areas the policy will affect, in the room or even leading the effort. It’s really what the term “stakeholder” should mean. Often, a lot of policy development is outsourced, in both the public and private sectors. The public sector will lean on a policy analyst who may have a little experience in the area but carries a pretty broad overall portfolio and may only spend a few cycles (depending on their level) on a particular request. The private sector will tap or hire someone for a while to drive a policy that satisfies a need, but will rarely return to it after its promulgation. This is a resourcing AND outreach issue. Time and time again, the failure to spend time getting the right stakeholders in the room, in favor of just “getting something out the door,” comes back to bite that group when they have to revisit, revise, drop, or even litigate policy decisions that lacked adequate community representation.

So, we’ve covered the technologists, the policy folks, and a little bit of project management and maintenance. That leaves probably the most unpalatable part of this: the involvement of business. While they say there’s no free lunch, many organizations and corporations have been finding ways to dodge the till at the checkout line, skipping loads of research and development by merely adopting and integrating FOSS into their products, and reaping gobs of financial and market benefit in the process.

How does this get solved? By nature, FOSS is supposed to be “free as in beer” and based on an ethos of “free as in speech,” but it requires commitment and support like “free as in baby.” Open source, as noted earlier, in both license and project structure, was never really designed for this and has had a difficult time managing the growth and consumption of its products while maintaining the level of quality and security that is now expected. It needs care, nurturing, and guidance to mature, adapt, and grow. And I don’t mean grow in use and acceptance, but grow as in getting iteratively better as a functional part of society.

For instance, do you think companies that fully embrace Linux or Apache would run their business on the same Linux or Apache that was given to the world 20+ years ago? Probably not. These projects came to prominence through interest and through “giving back” in various ways. Now, with so many new projects, languages, libraries, and other code, the maintenance and the care and feeding have become difficult to track and a bit overwhelming. Think of it a bit like our challenge to educate all our kids now. Many of today’s “captains of industry” probably didn’t go to school taking classes out of a trailer or sitting in classes of 30 or 40 to one. It’s not necessarily that we have a ton more kids; it’s that our ability to make sure they grow and learn suffers from a lack of resources to do it equitably and fairly, even with things like GitHub and automated tools.

Many companies, once they’ve consumed open source solutions by installing or integrating them into their operations, will hopefully have a mature development pipeline (a la CI/CD) that integrates testing, ensures documentation is written, completes security analysis, and develops operational guidelines. But how often are the findings from that testing and analysis, which may uncover a flaw or an undocumented and potentially hazardous use case, shared back with the original code maintainers? Many companies don’t know how to do it, or don’t want to, fearing they’ll leak intellectual property (IP) or end up legally on the hook in some part or parcel for such disclosures. It doesn’t keep them from using the software, only from contributing back.
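
As a small illustration of the kind of pipeline step I mean, here is a hypothetical sketch that runs a dependency audit (using the real pip-audit tool) and boils the results down into the sort of summary a team could hand back to upstream maintainers rather than keeping to itself. The requirements file name and the exact JSON field names are assumptions on my part; check them against the pip-audit version you actually use.

```python
"""Sketch of a CI step: audit declared dependencies, summarize findings
that could be shared back upstream with the affected projects.

Assumes pip-audit is installed and a requirements.txt exists; the JSON
field names below should be verified against your pip-audit release.
"""
import json
import subprocess

def audit(requirements: str = "requirements.txt") -> list[dict]:
    """Run pip-audit against a requirements file and return parsed findings."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    return report.get("dependencies", [])

def summarize(findings: list[dict]) -> None:
    """Print vulnerable packages -- the kind of detail worth reporting upstream."""
    for dep in findings:
        for vuln in dep.get("vulns", []):
            fixes = ", ".join(vuln.get("fix_versions", [])) or "none listed"
            print(f"{dep.get('name')} {dep.get('version')}: "
                  f"{vuln.get('id')} (fix versions: {fixes})")

if __name__ == "__main__":
    summarize(audit())
```

The hard part, as the paragraph above notes, isn’t running a step like this; it’s deciding to send what it finds back upstream instead of quietly patching around it.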

This is definitely the “free as in beer” model of consumption. Other companies, such as some of those that show up on OSS foundation donor lists or in commit audits as large contributors back to projects, have figured out a way to help. That’s one resource model. Some, since they’re saving a bundle on commercial licensing, might want to help through a tip-jar model but may not know how. I mean, how many projects ask for help with GitHub yearly membership fees, or ask for a maintainer to have access to Sonatype or Snyk? The way some of these projects ask for help is horribly broken, so it falls to the developers themselves to triage PRs from the community, or to find and fix issues when they happen to do a code review on a potential feature request.

I am thankful for the companies that take active roles in some projects, but it’s not always altruistic. Some participation exists to further the desires of steering groups, making policy or architecture decisions based on corporate needs, even though nobody freely admits it. But business is business. Those who don’t have the sway or resources to get onto such committees are at the whims of those “who may know best,” have to accept the changes, and must react and redirect scarce resources if a change materially affects how they consume and run that software. It’s still a tyranny of the few, even though most OSS projects were never really meant to end up that way.

So what are we to do about some of these issues? We can hope that the OpenSSF gains some traction, or that after the recent Log4j issues there’s some movement on critical open source policy within the government and CISA. But so far it looks like aborted starts, or work done quietly by those in the know, or literally a coalition of the willing in some cases. Then again, that really is the nature of open source, mostly. There may be a way to get what we need if we pull a few levers and ripcords and admit that no one entity can do it all.

This is where, and while I won’t create some super-duper logo for it, I’ll sketch out what I’ll call “Project Wishlist,” which by its own name is highly aspirational, but might be a start toward working through some of the mechanics of solving the big parts of the problem.

The first order of work is to catalogue and identify open source projects that are used in or integrated into critical infrastructure, or are significant enough to be of national or international concern. Some of this can be leveraged via the SBOM (software bill of materials) model, along with usage statistics from popular code repositories like GitHub and GitLab, but it also has to cover projects we already know are significant and that self-host, like some OS distributions and major projects such as Apache, Kubernetes, and so forth.

This will at least start to fill in, with some automation, an idea of how widespread and large the problem space is. We get a sense of both the volume and the breadth of impact. SBOMs, if adopted more widely, will help with the software composition analysis required to start tracking dependencies. Much of the difficulty in triaging OpenSSL and Log4j came down to figuring out how and where those libraries, in part or whole cloth, had been integrated into the applications and services that relied on them.
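
As a rough sketch of what that automation might look like, assuming you have collected CycloneDX-style JSON SBOMs (one per application) into a directory, you can count how often each component shows up and immediately list which applications embed a library under triage, log4j-core in this example. The directory layout and the target component name are illustrative assumptions, not a prescribed workflow.

```python
"""Minimal sketch: aggregate component usage across CycloneDX JSON SBOMs
to see how widely a given library (say log4j-core) is actually embedded.

Assumes the SBOMs follow the CycloneDX JSON layout with a top-level
"components" list; paths and the target name are illustrative only.
"""
import json
from collections import Counter
from pathlib import Path

SBOM_DIR = Path("./sboms")   # assumption: one SBOM per application
TARGET = "log4j-core"        # the library being triaged

def components(sbom_path: Path):
    """Yield (name, version) pairs from one CycloneDX JSON SBOM."""
    doc = json.loads(sbom_path.read_text())
    for comp in doc.get("components", []):
        yield comp.get("name", "unknown"), comp.get("version", "unknown")

def main() -> None:
    usage = Counter()
    affected = []
    for sbom in SBOM_DIR.glob("*.json"):
        for name, version in components(sbom):
            usage[(name, version)] += 1
            if name == TARGET:
                affected.append((sbom.name, version))

    print("Most widely embedded components:")
    for (name, version), count in usage.most_common(10):
        print(f"  {name} {version}: appears in {count} SBOMs")

    print(f"\nApplications embedding {TARGET}:")
    for app, version in affected:
        print(f"  {app} (version {version})")

if __name__ == "__main__":
    main()
```

The same counting trick, run across an agency or a sector, is what starts to turn “we think it’s everywhere” into an actual prioritized list.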

This is something the government, maybe CISA or an FFRDC at the behest of CISA, or even the Commerce Department, can undertake. It’s also not just a US concern; similar efforts would need to be mirrored internationally. Due to the law of averages, the top 25 to 100 applications and code bases will most likely turn out to be the same across those efforts. The next step is triage.

As I mentioned regarding adoption of the SBOM, this is a shining moment for it, not in the sense of hoping it’s retroactively preventative or that it will root out what already ails the existing code bases, but in providing guidance as to what to focus on. While we’ve drunk all the free beer we can hold, we’ve gotten a little drunk on its availability. Weigh the savings from not having to license software against the cost of cleaning up after a major event or vulnerability announcement, and being transparent about included software is a tiny price for major companies to pay, one that discloses none of the secret sauce.

I actually made a similar point in a presentation explaining software licenses at a former employer. The thing to understand about intellectual property, especially works protected by patents and copyright, is that disclosing the ingredients of a recipe is far from unique enough to be covered by such protections. The method, the “ways and means” by which those ingredients are combined and presented, is. The general recipe for a chocolate chip cookie is the same everywhere, but adjustments to mixing time, amount of butter, types of sugar, and baking time lead to widely varying results. Those details can be protected, but bakers don’t get to trademark or patent the flour they use, especially when it’s widely available to others.

So, sharing those ingredient lists, but not the specifics of the implementation, is totally fine and acceptable. It also lets you dig for unknown inclusions, perhaps because imports weren’t tracked or audited, or because the build process was too open and allowed fetches from external repositories without anything being cataloged or approved. Some developers consult the holy book of StackOverflow and cut and paste snippets that require something extra, not realizing they’ve just introduced a dependency that was never designed in or approved. Post-compile analysis, say via a project like GitBOM, can catch those build artifacts, especially in distributed or large-scale projects.
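
A toy version of that “dig for unknown inclusions” idea, well short of real build-artifact analysis like GitBOM, is simply comparing what a Python codebase actually imports against what its requirements file declares. Module names and distribution names don’t always match, so treat this purely as an illustration of the gap such tooling is meant to close; the `src/` and `requirements.txt` paths are assumptions.

```python
"""Rough sketch of catching snippet-borne dependencies: compare what a
codebase imports against what its requirements file declares.

Illustrative only: module names and distribution names often differ,
which a real tool (or artifact-level analysis) handles properly.
"""
import ast
import sys
from pathlib import Path

def imported_modules(src_dir: Path) -> set[str]:
    """Collect top-level module names imported anywhere under src_dir."""
    found = set()
    for path in src_dir.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(), filename=str(path))
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
    # Drop standard-library modules (Python 3.10+)
    return {m for m in found if m not in sys.stdlib_module_names}

def declared_packages(requirements: Path) -> set[str]:
    """Read bare package names out of a requirements.txt-style file."""
    names = set()
    for line in requirements.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(line.split("==")[0].split(">=")[0].lower())
    return names

if __name__ == "__main__":
    imports = imported_modules(Path("src"))
    declared = declared_packages(Path("requirements.txt"))
    undeclared = {m for m in imports if m.lower() not in declared}
    print("Imported but not declared:", sorted(undeclared))
```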

Getting this data is a good first step. The next step, however, is making organizations aware of not only how heavily they lean on OSS for their business processes, but also the value they receive from it, and encouraging some support in return. That’s a difficult “people problem” because values, and how to measure them, vary significantly from person to person and organization to organization. Sometimes the stick, rather than the carrot, that convinces useful participation is competitive shaming. Much as companies jump on the bandwagon to form or sponsor an organization after an event that’s getting press attention, the pull of looking like they are doing something is powerful.

This is where we can play a little to peer pressure and ego among those who ride on the backs of OSS projects, and to the idea that they should become better supporters. Companies, unless they’re super protective of IP, tend to like having their logos and badges attached to “do good” projects, and being seen as a supporter is free marketing for some. Sort of like those ads that look at your data and say, “you like a handle of cheap vodka, can we interest you in two bottles of Pedialyte for later”: it’s a hook that plays toward expected behavior. I recently saw how the Blender project labels those who provide significant support as “patrons,” much the same way you’d support an arts center or your local NPR station. What’s even better about this structure is that the project clearly states what the money does for the Blender Development Fund.

Literally, this is one of the few funding models that tries to directly tie support for such work to exactly what the money is going toward. An extension, maybe along the lines of how Humble Bundle lets you donate to charity while choosing amounts and recipients based on your preferences, would be to let these “patrons” tip into specific buckets: code reviews and audits, or even a sabbatical for some developers to focus on improving specific features before a new release. In short, you’re not licensing or paying a perpetual fee (unless you choose to); you pick an amount for what you want in return.

There should be no guarantee of a board seat or influence over a technical committee; in a Patreon-type model, you are merely supporting the project. It’s not perfect, but it’s a mental model that has proven sustainable in other forms. If you want functional, secure, and reasonably resilient software, this is one route that should seriously be explored. It’s also generally a bit more fair because, as a patron, as spelled out on Blender’s site, you merely get a badge, name, link, and logo depending on level, not an outsized voice, early access, or preferred commit status. The tiering also gives smaller orgs who use the software, but may not have the deepest pockets, a chance to support the projects they rely on.

At scale, and this is where the “wishlist” comes in, instead of each patron finding specific projects to fund for specific requests, some of these requests get pooled based on shared criteria; others may be large asks needed for a single project that the developers themselves can’t resource. This is perhaps less of an issue for projects under a “foundation” umbrella, but as we saw with Log4j in December, even that safety net is no guarantee of good project governance.

The prioritized list of projects deemed “critical” or “of national interest” can help with triage and getting through the first round of major projects and issues. This also isn’t “one and done”: the follow-on requires those projects to better manage feature requests and to keep properly using the tools and services provided for audits and better automation. As with the “patron” model, the point is sustainability, and this is where the reality of OSS use meets the iterative nature of security in code and development practices.

As these activities become more sustained, and at least have eyes on them through a transparent reporting structure of usage, issues, and other metrics, then when an eventual “all hands on deck” incident occurs requiring en masse engagement by industry, developers, and even governments, we can hope the distance to scale up a fix is much shorter than what we’ve seen in the past. As each successive Log4j release in the last two to three weeks has shown, now that more eyes are on the code, more systemic issues are being uncovered that require addressing. This model would hopefully keep the one issue from turning into many more.

As a follow-on from this engagement model, we can leverage industry-sector-specific ISACs (Information Sharing and Analysis Centers) for critical infrastructure, as well as some of the more informal or upstart ISAOs (Information Sharing and Analysis Organizations) for interest areas that don’t qualify as specific industry sectors but still share a common need for this capability. These ISACs and ISAOs can float up items of concern or needs from their members regarding specific OSS projects and services, which provides some weighting for prioritization; in turn, fixes, mitigations, and even configurations common to those sectors and groups can be sent back down to those partners as implementation guidance or best practices.

I believe each industry sector and interest/affinity group will produce a somewhat differently weighted SBOM for much of the software and services they use, so a “standard” Top 10, 25, or 100 list of OSS projects needing help may be expanded or re-ordered to address those needs. This then starts to mirror the SANS or OWASP lists of common issues, which at least give users, developers, and consumers awareness of what to avoid or watch out for. With that shared awareness, these affinity and sector groups can also better realize the “corps” concept when an incident occurs, with responders who have a better grasp of what they’re walking into and of the tools and mitigations they can use to help recover. This is less “conscription,” as the previous administration proposed, and more the kind of “mutual aid” many utilities rely on in times of need.
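
To show what I mean by re-weighting, here is a toy sketch that combines a hypothetical baseline criticality score with how heavily a given sector’s members report relying on each project, producing a per-sector ranked list. Every project name, number, and the scoring formula itself are made up purely for illustration.

```python
"""Toy illustration of sector-weighted prioritization: re-order a baseline
criticality list using a sector's reported reliance on each project.
All scores below are invented for the example."""

# Hypothetical baseline "of national interest" scores
baseline = {"openssl": 0.9, "log4j": 0.85, "kubernetes": 0.8, "curl": 0.75}

# Hypothetical reliance reported by each sector's members
sector_usage = {
    "finance": {"openssl": 1.0, "log4j": 0.9, "kubernetes": 0.4, "curl": 0.6},
    "energy":  {"openssl": 0.8, "log4j": 0.3, "kubernetes": 0.2, "curl": 0.9},
}

def sector_ranking(sector: str) -> list[tuple[str, float]]:
    """Combine baseline criticality with sector reliance into a ranked list."""
    weights = sector_usage[sector]
    scored = {proj: baseline[proj] * weights.get(proj, 0.0) for proj in baseline}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for sector in sector_usage:
        print(sector, "->", sector_ranking(sector))
```

The point isn’t the arithmetic; it’s that the same baseline list can legitimately come out in a different order for finance than for energy, and both orderings are useful.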

Again, this may seem like a US-specific model, since other countries don’t have the same coordination model we have tried to adopt here, but the framework of a scoreboard/scorecard of high-interest projects can help grow international support. Apache isn’t consumed only in the US, and neither are Python or a host of other OSS projects. Modeling and supporting OSS in this way increases the capability for better overall digital security for the entire world, if we have a willingness to participate and capable, knowledgeable leadership. This isn’t without some upfront cost in resources, but it will cost far less in the long run as we go from reacting to OSS security issues to proactively finding and eradicating systemic flaws in OSS software.

So, after all of this, how do we start? As I noted before, OSS was always built on volunteerism, but that volunteerism has been largely undirected, or directed at only one or a few efforts. We need to widen the scope. We’re lucky that the OpenSSF and similar teams share some common subject-matter experts across a few of those projects and efforts. Putting those individuals in leading roles and sustaining their work, rather than time-slicing it from their day jobs (if they’re willing to take those roles), is a good first step toward creating concentrated, aware sources of knowledge.

Second, we need industry to step into the breach and stop nickel-and-diming support to the projects they claim an interest in. They should actually look at what they use and match the level of resource support to their usage or inclusion statistics. While it’s great that Google, Microsoft, Amazon, Meta, IBM, Pivotal, and others show up in GitHub contributor statistics, those contributions probably don’t measure up to the consumption volume mentioned earlier. They are contributions of self-interest, and that runs wholly counter to the hacker and open source ethic.

The public sector, or governmental, role, without having to become dictatorial, should be the one that governments with authority, skill sets, and resources are suited for: that of a coordinator. The weight of these institutions isn’t derived from the heady figurehead status bestowed upon them, but from the roles they play in prioritizing and mustering resources. It was proposed that CISA manage a “cyber corps” to be pulled up in times of need, but I’d feel better if it worked as a national coordinator of responses, knowing where skills and talents need to be applied and what policies should be implemented. Build a “cyber FEMA,” rather than a “cyber defender,” for Federal participation.

Finally, none of this will go anywhere unless OSS projects realize they need help. This isn’t a 12-step program, or the stages of grief, or whatever framework you prefer for admitting where you are and how to go about recovering. It’s taking stock of the parts of the project that have been withering or are in a less-than-ideal state and asking: if the project got a chance to ask for help, with no strings attached, could it articulate what it needs and be willing to accept it? Developers and engineers are a proud bunch. We like to show off what we’ve achieved, no matter how quick and dirty it is. We like to show how clever we are and that it really took some thinking to pull off. It’s an ego thing, and it works against those who take it a bit too seriously. Some projects are tight-knit and run somewhat dictatorially; many famous blow-ups and eventual forks have been documented. Others are so consensus-driven that they move at a glacial pace to implement features or fixes.

The problem to overcome, when we get to providing the help outlined above, is that people-and-group-dynamics issue. The “prickly” projects need to be approached carefully and shown how this new framework directly benefits them, maybe through a proof-of-concept effort; we all enjoy proof before our eyes. Projects that are already struggling may not know which lifesaver ring to grab and end up drowning anyway, and may need more direct intervention to get them onto the preferred path.

I haven’t solved those problems, but I figure if this gains some traction, those who’ve worked on major projects, or even those who’ve seen a lot of small successes, can take a crack at it. But we need to make these changes sooner rather than later, or every holiday will become something techies dread, because it’s never actually a holiday. Every major vulnerability that lands off-cycle can cascade into a multi-month slog. The burnout we already see, and not only from the pandemic, will grow as more people reach the point of wanting to abandon tech altogether, faster than anything we’ve popularly complained about on social media in years prior.

I look forward to suggestions from you as a reader, security professional, developer, policy wonk, titan of business, or anybody else who wants to get involved in heading off the looming disaster. We can’t survive just riding the waves from each major vulnerability or flaw that hits the news, while ignoring that the ecosystem is co-dependent on a lot of volunteer effort and good nature to keep the technology we all use reliable and resilient. Let’s make sure we don’t let these puppies die while we sit back and drink that beer. We are better than that.
