I looked for something like this for years, and could never find it, so I ended up writing it myself: take a look at Fundamentals of DevOps and Software Delivery [1]. It's a hands-on, step-by-step guide to all the tools and techniques for deploying and managing software in production, including hosting (cloud, on-prem, IaaS, PaaS), infrastructure as code (IaC), application orchestration (VMs, containers, serverless), version control, build systems, continuous integration (CI), continuous delivery (CD), networking, monitoring, observability, and so on.
It means that whether you can use Terraform at any future company you work for will be determined... by HashiCorp.
And the legal team at every company you work for will have to take that into account before deciding you can or can't use Terraform.
How do you know you're not competing with HashiCorp?
That's not meant to be a redundant or snarky question. The key issue with the BSL and that FAQ is that the wording is intentionally vague. What does "competing" mean? What does "hosting or embedding" mean? Who decides?
In order to really know if you're a competitor, you have to reach out to HashiCorp (as the FAQ tells you to do). So whether your usage is valid is not controlled by the license terms, but is instead entirely at the whim of HashiCorp. So they switched from a permissive open source license to a HashiCorp decides license: they get to decide on a case by case basis now—and they can change their mind at any time.
That is very shaky footing on which to build anything.
We just moved the signatures to a table format, so individuals can now add themselves to the table: just set the "type" column to "Individual." Thank you!
Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do? They are now way more likely to pick a different tool that is genuinely open source. The same is true of every dev considering where to build their career, every hobbyist, every open source enthusiast, every vendor, etc. In the end, no matter which fork wins, everyone will be worse off: the community will be smaller and more splintered.
So we opted to ask HashiCorp to do the right thing first. If they choose to do the right thing, we can avoid a fork, and avoid splintering the community. We still think that's the best option. But if that doesn't work, then a foundation + fork it is.
> Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do?
I truly believe that a CTO who sees Terraform as an option and who isn't scared off by the BSL, but then has all of these other concerns, exists only in fantasy.
> You may make production use of the Licensed Work, provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.
Read benevolently, it's a prohibition on spinning up a service based on HashiCorp's code and undercutting HashiCorp's pricing.
On the other hand, if I build a product with HashiCorp-owned BSL'd code, and HashiCorp later releases or acquires a product that competes with mine, my license is void.
Redis is 3-clause BSD, and BSD does not have a "your license is void if you sell a product that competes with us" clause. Redis does have enterprise products that are licensed in a manner similar to the BSL, but Redis itself is not.
MongoDB and Elastic are SSPL. SSPL approaches the problem like the AGPL; it compels licensees who sell a service derived from the software to make available under the SSPL the source of all supporting tooling and software so that a user could spin up their own version of the service.
There's an argument to be made that the SSPL is de facto "you can't compete with us," since it would be more challenging to make a competitive SaaS offering if your whole stack is source available. I don't disagree. However, as distasteful as the SSPL is, at least it doesn't make your license conditional on the unknowable future product offerings of the licensor, the way HashiCorp's does.
Thanks for the explanation. My understanding is that they are all after limiting competition in various ways, while still trying to maintain the mantle of open source.
We are certainly in interesting times around the monetization / financial sustainability of open source.
SSPL has no provision even close to the reach of the "anti-competition" clause HashiCorp is using. While SSPL is not considered open source, it isn't that far off from the AGPL. The difference between SSPL and AGPL is that SSPL (1) applies regardless of whether the software has been modified and (2) extends copyleft virality to all programs which support running the service, including those that interact with the software over a network.
MongoDB, Elastic, etc. cannot stop you from running a competitor based on the terms of their licenses; they just ask that you publish the source code for whatever service you're running in its entirety (I acknowledge there are disagreements about how far "entirety" extends). The clause in HashiCorp's license actually revokes the right to use their software at all if you're a direct competitor.
OK, no one is going to build an open source competitor to Elastic or MongoDB because then you have no moat and your business will probably fail, I get it, but it's still possible to do without repercussion. It's not like the AGPL is that far off in terms of limitation, either, which is why you don't see many copyleft services run by large corporations unless they've been dual-licensed.
Just went with Elastic cloud after evaluating both Elasticsearch and OpenSearch. It was an easy choice to stick with the incumbent/creator that I was familiar with. No complaints so far.
Pulumi has a few languages other than YAML, and Pulumi is declarative[1]; the programs you write are only as complex as you want them to be. This Python program declares an S3 bucket and declares ten objects to exist in it:
from pulumi_aws import s3

bucket = s3.Bucket('bucket')

for i in range(10):
    s3.BucketObject(
        f'object-{i}',
        s3.BucketObjectArgs(
            bucket=bucket.id,
            key=str(i),
        ),
    )
Even so, Pulumi YAML has a "compiler" option, so if you want to write CUE or jsonnet[1], or other[2] languages, it definitely supports that.
Disclaimer: I led the YAML project and added the compiler feature at the request of some folks internally looking for CUE support :)
I'm aware of the SDKs, but we don't want them because they are an imperative interface, no matter how you want to spin it as "declarative". I have access to all the imperative constructs in the underlying language and can create conditional execution without restriction.
Even if I use the YAML compiler for CUE (which we did), I still have to write `fn::` strings as keys, which is ugly and not the direction our industry should go. Let's stop putting imperative constructs into strings; let's use a better language for configuration, something purpose-built, not an SDK in an imperative language. These `fn::` strings are just bringing imperative constructs back into what could have been an actual declarative interface. Note, Pulumi is not alone here; there are lots of people hacking YAML because they don't know what else there is to do. CEL making its way into k8s is another specific example.
This cannot be the state of the art in ops; we can do much better. But I get that Pulumi is trying to reach a different set of users than devops and will end up with different choices and tradeoffs.
The imperative part of that code appears to be analogous to templating. The actual work done under the covers is not imperative, but is based on the difference between the result of the template execution and the current state of the system. That's what makes it declarative.
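A minimal, self-contained sketch of that model (hypothetical names and plain dicts, nothing Pulumi-specific): the loop only produces a desired-state description, and the diff against recorded state is what determines the actual work.

# Illustration only: the for loop is "templating" -- it just builds a
# desired-state description.
desired = {f"object-{i}": {"bucket": "bucket", "key": str(i)} for i in range(10)}

# The engine compares that description against the recorded state...
current = {"object-0": {"bucket": "bucket", "key": "0"}}

# ...and the difference, not the loop, decides what actually happens.
plan = {
    "create": sorted(desired.keys() - current.keys()),
    "delete": sorted(current.keys() - desired.keys()),
}
print(plan)  # {'create': ['object-1', ..., 'object-9'], 'delete': []}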
It really depends on the interaction between the user's Pulumi script and the Pulumi engine.
If there is more than one back and forth, you become imperative: even if you imperatively generate a "declarative" intermediate representation (I'm not really sure how a state file at a point in time could ever be imperative), you then get back some data from the engine and make choices about what to send off to the engine in the next request.
It's important to understand that with Pulumi, you can end up in either situation. "You have to be careful not to become imperative overall" is probably the better way to put it.
Another way this can break down is if the user writes code that calls the same APIs in the middle of a Pulumi script. I meant to try this myself to verify it works, but I would assume that Pulumi is not stopping me from doing something like this.
In general maybe, but in the specific context above, I think calling that loop declarative is accurate, and laughing at that classification is a poor response rooted in a deep misunderstanding.
import pulumi
from pulumi_gcp import storage

bucket = "hof-io--develop-internal"
name = "pulumi/hack/condition.txt"

cond = False
msg = "running"
cnt = 0

while not cond:
    cnt += 1
    key = storage.get_bucket_object_content(name=name, bucket=bucket)
    print(cnt, key.content)
    if key.content == "exit":
        msg = "hallo!"
        break

pulumi.export('msg', msg)
pulumi.export('cnt', cnt)
---
769 exit
770 exit
771 exit
772 exit
773 exit
774 exit
775 exit
Outputs:
cnt: 775
msg: "hallo!"
Resources:
+ 1 to create
info: There are no resources in your stack (other than the stack resource).
Do you want to perform this update? [Use arrows to move, type to filter]
yes
> no
details
----
Of note, all but the last exit had a newline, until I used `echo -n` on the file I copied up.
TF might be susceptible to the same file-contents manipulation between plan & apply as well. But then again, you can save a plan to a file and then run it later, so maybe not? Another experiment seems to be in order.
I think this is an advantage of Pulumi, here are two use cases:
1. Creating a resource where created is not the same as ready. This is extraordinarily common with compute resources (a virtual machine, a container, an HTTP server, a process), where attempting to create follow-up resources can result in costly retry/back-off loops. Even when creating Kubernetes resources, Pulumi will stand up an internet-connected deployment more quickly than many other tools, because you can ensure the image is published before a pod references it, the pod is up before a service references it, and so on. (The Kubernetes provider bakes some of these awaits in by default; a rough sketch of this kind of readiness gate follows the example below.)
2. Resource graphs that are dynamic, reflecting external data sources at the moment of creation. Whether you want to write a Kubernetes operator, synchronize an LDAP directory to a SaaS product, or handle one of my favorite examples: when I set up demos, I often configure the authorized public IPs dynamically:
import * as publicIp from 'public-ip';

new someProvider.Kubernetes.Cluster('cluster', {
    apiServerAccessProfile: {
        authorizedIPRanges: [await publicIp.v4()],
        enablePrivateCluster: false,
    },
});
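To make the "created is not the same as ready" point in item 1 concrete, here is a rough, Pulumi-agnostic Python sketch of that kind of readiness gate; the endpoint and timings are made up, and the real Kubernetes provider bakes equivalent awaits in:

import time
import urllib.request

def wait_until_ready(url, timeout=300.0, interval=5.0):
    # Poll a health endpoint until it answers, so follow-up resources are
    # only created once this one is actually ready, not merely created.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass  # not up yet; keep polling
        time.sleep(interval)
    raise TimeoutError(f"{url} not ready after {timeout}s")

wait_until_ready("http://10.0.0.5:8080/healthz")  # hypothetical endpoint
# ...only now create the resources that depend on this server being up.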
Of course you think it is an advantage; you work for Pulumi.
I'm telling you this is not how a potential user sees the same situation: it is a disadvantage, and it was one of the reasons we are not making the switch.
The example above is exactly the kind of code we don't want in ops. It depends on the user's environment and physical location at the time they run the command, which is bad practice. Thanks for an extra talking point, though.
The claim above is that Pulumi uses an imperative interface and that it is quite easy to slip past the declarative guardrails, so in most cases Pulumi is imperative, not declarative. The fact that Pulumi makes this separation opaque can be discussed, as can an alternative that makes the separation clear, with its benefits.
The claim I keep seeing from Pulumi folks is that Pulumi is declarative, which it is not, as shown in multiple posts by many people. Please stop calling it such; it demonstrates dishonesty towards users.
The claim above was that a for loop implied that the code couldn't be declarative.
> Please stop calling it such
I'm not claiming it is always declarative, I'm only claiming that a declarative example above can contain a for loop, and that laughing at that is the wrong response. That's it.
When someone tries to make a sophisticated argument that up is down and white is black, dismissive and shallow is the right response.
> The actual work done under the covers is not imperative
Having a declarative layer somewhere in the stack doesn't make something declarative, if that's not the layer you actually use to work on and reason about the system. See the famous "the C language is purely functional" post.
You can have loops and still be declarative: CUE has loops, though more technically they are comprehensions, and there is no assignment or stack in CUE.
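For a rough analogy in a more familiar language (an illustration, not CUE syntax): Python's comprehension expressions denote a value in a single expression, with no assignment or mutation inside, which is the sense in which a loop-like comprehension can stay declarative.

# A statement loop builds a value imperatively, by mutating state:
objs = []
for i in range(10):
    objs.append({"key": str(i)})

# A comprehension denotes the same value as one expression, with no
# assignment or mutation inside it -- closer to how CUE's "loops" work:
objs2 = [{"key": str(i)} for i in range(10)]
assert objs == objs2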
One of the interesting aspects of CUE is that it gives us many of the programming constructs we are used to, but remains Turing-incomplete: no general recursion or user-defined functions. There is a scripting layer where you can get more real-world stuff done, too.
The CUE language is super interesting, has a very unique take on things and comes from the same heritage as Go, containers, and Kubernetes
Nagios used to be only open source; then they created the Enterprise version and left the open-source core version lagging behind. It was forked a billion times or more :) creating the Nagios Effect. A lot of monitoring software / companies then removed or replaced the Nagios core in their products.
I didn't know either, so I did some Googling and found an old announcement[1] from 2009:
> A group of leading Nagios protagonists including members of the Nagios Community Advisory board and creators of multiple Nagios Addons have launched Icinga – a fork of Nagios, the prevalent open source monitoring system. This independent project [is based upon a] broader developer community. [...] Icinga takes all the great features of Nagios and combines it with the feature requests and patches of the user community.
It also looks like, in 2014, Nagios appropriated a domain name and website used for hosting Nagios plugins, taking it away from the community (its plugin developers)[2]:
> In the past, the domain "nagios-plugins.org" pointed to a server maintained by us, the Nagios Plugins Development Team. The domain itself had been transferred to Nagios Enterprises a few years ago, but we had an agreement that the project would continue to be independently run by the actual plugin maintainers.¹ Yesterday, the DNS records were modified to point to web space controlled by Nagios Enterprises instead. This change was done without prior notice.
> To make things worse, large parts of our web site were copied and are now served (with slight modifications²) by <http://nagios-plugins.org/>. Again, this was done without contacting us, and without our permission.
> This means we cannot use the name "Nagios Plugins" any longer.
> [Icinga developer]: "Six months before the fork, there was a bit of unrest among Nagios' extension developers [...] Community patches went unapplied for a long time[.]"
> [...]
> Two years ago, more or less when the split happened, [Nagios author] was having problems resolving [trademark] issues with a company called "Netways".
I'm still not sure what the effect is supposed to be tbh.
I don't get this one: you pick OpenTerraform and get on with your life. It's the same with picking OpenSearch over Elastic. I can use the proprietary version that locks me into a single profit-seeking vendor and doesn't have community backing, or the one run by a foundation made up of companies that use and are heavily invested in Terraform.
How dare a vendor come up with an idea, pay people to execute on it, give it away for free to the world, acquire users and soak in all the community contributions from people who thought they were using and contributing to a public good, try and fail to indirectly monetize a hosted version because other people were better at it than them, then rug-pull it out from under everyone and use the copyright/government stick to kill their competition because they can't compete on even terms.
Then a group of people who are users of the idea and actually making money off it with value-adds step up to maintain it as a community project, ensuring that it stays open for everyone -- yeah, those guys are the assholes. Terraform would have gone nowhere if it wasn't OSS, and Terraform would be nothing without its outside contributions, which make up far more than the code of Terraform core itself. There's a trail of bodies to prove it.
And you should love this: projects that are stewarded by their own users are incentivized to make them the best they can be, instead of rejecting contributions because they compete with the vendor's cloud offering [1]
The guys at Pulumi must be having a field day right now. It's exactly as you describe for us: we're long overdue for an upgrade of our Terraform config from pre-v1.0. We'd most likely have to re-write a big part of our HCL code, so why not try a competitor?
Vault, however, is another story: I've yet to find another secrets-management system that has such tight integration with Kubernetes and AWS, and supports providers for things like PostgreSQL to get ephemeral database credentials.
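For anyone unfamiliar, that ephemeral-credentials flow is Vault's database secrets engine. Here's a minimal sketch using the hvac Python client, assuming a Postgres connection and a role named "readonly" have already been configured in Vault (the address, token, and role name are placeholders):

import hvac

# Placeholders: point these at your real Vault server and token.
client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxx")

# Each call mints a short-lived Postgres user; Vault revokes the
# credentials automatically when the lease expires.
creds = client.secrets.database.generate_credentials(name="readonly")
print("user:", creds["data"]["username"])
print("lease:", creds["lease_id"], "ttl:", creds["lease_duration"], "seconds")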
This is precisely the problem with the new BSL license. Whether your usage of Terraform complies with the license isn’t determined by the legal terms, but instead is entirely at the whim of HashiCorp. And they can change their mind at any time. It makes it impossible to build anything on top of Terraform.
This covers really well why I think the BSL license is a non-starter for things like TF. I get trying to prevent AWS from competing with you using your own open source code, but it creates this ambiguity where it's not clear whether lots of uses are or are not competing with HashiCorp.
> For example, if you’re an independent software vendor (ISV) or managed service provider (MSP) in the DevOps space, and you use Terraform with your customers (but not necessarily Terraform Cloud/Enterprise), are you a competitor? If your company creates a CI / CD product, is that competitive with Terraform Cloud or Waypoint? If your CI / CD product natively supports running Terraform as part of your CI / CD builds, is that embedding or hosting? If you built a wrapper for Terraform, is that a competitor? Is it embedding only if you include the source code or does using the Terraform CLI count as embedding? What if the CLI is installed by the customer? Is it hosting if the customer runs your product on their own servers?
The answer is at the whim of HashiCorp and subject to change at any point in the future. Even ignoring the attempt to dilute the meaning of "open source", the practical implications of the BSL license are more than enough reason to coalesce around a truly open source fork IMO.
I worked at a financial institution that heavily utilized Terraform. Their business is banking; they do not offer automation, orchestration, or IaC as a service. They're fine.
This seems to affect only those places that attempt to build a business off Terraform.
I am not saying those businesses can't be mad at the rug getting pulled out from under them, but it's important to be accurate that this doesn't affect end users of TF directly.
Is the financial institution made up of separate legal entities which bill each other for services, and does one of those entities provide tech infra for the other legal entities?
The messiness of the real world unfortunately doesn't play well with ambiguity in licences :)
It'll be a headache for every large company, which now has to send the licence to their legal teams, who have to ask these kinds of questions (another interesting one is "can contractors touch our Terraform setup?"). In fairness to HashiCorp, they've tried to address some of these issues in their FAQ, but the FAQ isn't legally binding, so legal teams have to go on what's actually written in the licence.
Great to see your commitment, but I'm also curious why you, unlike some other companies, have chosen not to pledge any full-time employees? It seems your business is largely based on Terraform, and saying pretty much "we'll contribute code" doesn't signal too much commitment.
I realize my comment might sound like an accusation, but that's not my intention; I want to hear your reasoning about it!
If you can use a Platform as a Service (PaaS) offering like Vercel, Netlify, Heroku, etc., you absolutely should! I always recommend those types of tools as the first stop for anyone building software these days.
But there are many use cases that don't fit into those neat PaaS molds: typically, as a software company grows beyond one team, one service, one database, etc., it starts to hit limitations with the PaaS solutions out there. As you scale, you often find you need more control than you can get from a PaaS: you may need more control over the hardware (e.g., for performance or cost reasons), or networking (e.g., you need service discovery or a service mesh to allow microservices to communicate with each other), or security (e.g., to meet compliance standards), or a hundred other items.
That's when many companies find themselves migrating to an Infrastructure as a Service (IaaS) provider like AWS, Infrastructure as Code (IaC) tools like Terraform, orchestration tools like Kubernetes, and so on. I'm guessing every software company with more than 50-100 developers ends up moving from PaaS to IaaS, and that's when developers need to understand how to use the tools covered in this blog post series.
Perhaps, some day, the PaaS tools out there will be good enough that you never have to migrate off of them, regardless of scale or requirements, but we're not there yet, and probably won't be for a while.
Author here. I tried to answer your question in the first two paragraphs. But to add some context: given the nature of my work, I hear from developers on a nearly daily basis who are struggling to get started with the technologies mentioned in this blog post series, which include not only Kubernetes but also Docker, AWS, and Terraform. In part, they are struggling because they are too scared to ask for help, and comments like yours only make that worse: you seem to be implying that the materials out there for Kubernetes are so good that if you don't get it, there must be something wrong with you. And yet, there are thousands of devs who don't get it, so maybe, for different people, there are different ways to learn?
In discussions like this, I'm a fan of what Steve Yegge wrote about blogging [1]:
> This is an important thing to keep in mind when you're blogging. Each person in your audience is on a different clock, and all of them are ahead of you in some ways and behind you in others. The point of blogging is that we all agree to share where we're at, and not poke fun at people who seem to be behind us, because they may know other things that we won't truly understand for years, if ever.
That's why I write: to share what I know, from my particular perspective. Hopefully, that's useful to some people out there. If it's not useful to you, no problem!
And for the record, I agree the Kubernetes docs are great, including those interactive tutorials: if you read the series, you'd see I actually recommend those exact docs at the end of the post [2].
[1] https://www.fundamentals-of-devops.com/