
The truth is that a fork hurts everyone.

Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do? They are now way more likely to pick a different tool that is genuinely open source. The same is true of every dev considering where to build their career, every hobbyist, every open source enthusiast, every vendor, etc. In the end, no matter which fork wins, everyone will be worse off: the community will be smaller and more splintered.

So we opted to ask HashiCorp to do the right thing first. If they choose to do the right thing, we can avoid a fork and avoid splintering the community. We still think that's the best option. But if that doesn't work, then a foundation + fork it is.



  Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do?
I truly believe that a CTO who sees Terraform as an option and who isn't scared off by the BSL, but then has all of these other concerns, exists only in fantasy.


Lots of people still using elastic, mongo, and redis.

What's different about this one?


  You may make production use of the Licensed Work, provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.
Read benevolently it's a prohibition from spinning up a service based on HashiCorp's code and undercutting HashiCorp's pricing.

On the other hand, if I build a product with HashiCorp-owned BSL'd code, then HashiCorp releases/acquires a product that competes with mine, then my license is void.


My understanding is that the aforementioned companies' licenses are to the same effect, so what is the difference?


Redis is 3-clause BSD; BSD does not have a "your license is void if you sell a product that competes with us" clause. Redis does have enterprise products licensed in a manner similar to the BSL, but Redis itself is not.

MongoDB and Elastic are SSPL. SSPL approaches the problem like the AGPL; it compels licensees who sell a service derived from the software to make available under the SSPL the source of all supporting tooling and software so that a user could spin up their own version of the service.

There's an argument to be made that SSPL is de facto "you can't compete with us" since it would be more challenging to make a competitive SaaS offering if your whole stack is source available. I don't disagree. However, as distasteful as SSPL is, at least it doesn't grant licensing to a product conditionally on the unknowable future product offerings of HashiCorp.


Thanks for the explanation. My understanding is that they are all after limiting competition in various ways while still trying to maintain the mantle of open source.

We are certainly in interesting times around the monetization / financial sustainability of open source


SSPL has no provision even close to the reach of the "anti-competition" clause HashiCorp is using. While SSPL is not considered open source, it isn't that far off from the AGPL. The difference between SSPL and AGPL is that SSPL (1) is in effect regardless of modification of the service and (2) extends copyleft virality to all programs which support running the service, including those that interact with the software over a network.

MongoDB, Elastic, etc. cannot stop you from running a competitor based on the terms of their licenses; they just ask that you publish the source code for whatever service you're running in its entirety (I acknowledge there are disagreements about how far "entirety" extends). The clause in HashiCorp's license actually revokes the right to use their software at all if you're a direct competitor.

OK, no one is going to build an open source competitor to Elastic or MongoDB because then you have no moat and your business will probably fail, I get it, but it's still possible to do without repercussion. It's not like the AGPL is that far off in terms of limitation, either, which is why you don't see many copyleft services run by large corporations unless they've been dual-licensed.


Just went with Elastic cloud after evaluating both Elasticsearch and OpenSearch. It was an easy choice to stick with the incumbent/creator that I was familiar with. No complaints so far.


We just went back to TF after giving Pulumi a try. We prefer declarative syntax for infra, and more abuse of YAML ("fn::..." here) is not what I'm after.

We are working on wrapping TF in CUE since you can CUE->JSON->TF

https://github.com/hofstadter-io/cuelm

Many more CUE experiments are going on in the devops space
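For reference, the reason CUE -> JSON -> TF works at all is that Terraform natively accepts JSON configuration in `*.tf.json` files. A minimal Python sketch of the JSON half (the dict here stands in for `cue export` output, and the resource names are made up):

```python
import json

# Hypothetical pipeline sketch: this dict stands in for the JSON that
# `cue export` would emit. Terraform consumes it natively when it is
# written to a *.tf.json file (Terraform's JSON configuration syntax).
config = {
    "resource": {
        "aws_s3_bucket": {
            "logs": {
                "bucket": "my-example-logs-bucket",
            },
        },
    },
}

# Terraform picks up any *.tf.json file in the working directory.
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```

From there, a plain `terraform plan` in that directory treats the generated file as ordinary configuration.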


Pulumi has a few languages other than YAML, and Pulumi is declarative[1]; the programs you write are only as complex as you want them to be. This Python program declares an S3 bucket and declares ten objects to exist in it.

    from pulumi_aws import s3

    bucket = s3.Bucket('bucket')

    for i in range(10):
        s3.BucketObject(
            f'object-{i}',
            s3.BucketObjectArgs(
                bucket=bucket.id,
                key=str(i),
            )
        )

Even so, Pulumi YAML has a "compiler" option, so if you want to write CUE or jsonnet[3], or other[2] languages, it definitely supports that.

Disclaimer: I led the YAML project and added the compiler feature at the request of some folks internally looking for CUE support :)

[1] https://www.pulumi.com/blog/pulumi-is-imperative-declarative...

[2] https://www.pulumi.com/blog/extending-pulumi-languages-with-...

[3] https://leebriggs.co.uk/blog/2022/05/04/deploying-kubernetes...


I'm aware of the SDKs, but we don't want them because they are an imperative interface, no matter how you want to spin it as "declarative". I have access to all the imperative constructs in the underlying language and can create conditional execution without restriction.

Even if I use the YAML compiler for CUE (which we did), I still have to write `fn::` strings as keys, which is ugly and not the direction our industry should go. Let's stop putting imperative constructs into strings; let's use a better language for configuration, something purpose-built, not an SDK in an imperative language. These "fn::" strings are just bringing imperative constructs back into what could have been an actual declarative interface. Note, Pulumi is not alone here; there are lots of people hacking YAML because they don't know what else there is to do. CEL making its way into k8s is another specific example.

This cannot be the state of the art in ops; we can do much better. But I get that Pulumi is trying to reach a different set of users than devops and will end up with different choices and tradeoffs.

(I maintain https://cuetorials.com and am very active in the CUE community)


An imperative for loop is somehow declarative now? Lol.


This seems extremely dismissive and shallow.

The imperative part of that code appears to be analogous to templating. The actual work done under the covers is not imperative, but is based on the difference between the result of the template execution and the current state of the system. That's what makes it declarative.
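To make that concrete, the "template execution vs. current state" model can be sketched in a few lines. This is a toy illustration, not Pulumi's engine; real engines also diff individual properties and order operations by dependency:

```python
def plan(desired: dict, current: dict) -> dict:
    """Compute a declarative plan: compare what should exist
    against what does exist, and emit the operations needed."""
    return {
        "create": sorted(desired.keys() - current.keys()),
        "delete": sorted(current.keys() - desired.keys()),
        # Present in both but with different properties -> update in place.
        "update": sorted(k for k in desired.keys() & current.keys()
                         if desired[k] != current[k]),
    }

# The for loop (here a comprehension) that *builds* `desired` is just
# templating; the engine only ever sees the finished description, so the
# result is still declarative.
desired = {f"object-{i}": {"key": str(i)} for i in range(3)}
current = {"object-0": {"key": "0"}, "object-9": {"key": "9"}}
print(plan(desired, current))
# -> {'create': ['object-1', 'object-2'], 'delete': ['object-9'], 'update': []}
```

Running the same program twice against an unchanged system yields an empty plan, which is the property the loop-based construction does not break.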


It really depends on the interaction between the user's Pulumi script and the Pulumi engine.

If there is more than one back and forth, you become imperative: even if you imperatively generate a "declarative" intermediate representation (I'm not really sure how a state file at a point in time could ever be imperative), you then get back some data from the engine and make choices about what to send off to the engine in the next request.

It's important to understand that with Pulumi, you can end up in either situation. "You have to be careful not to become imperative overall" is probably the better way to consider this.

https://www.pulumi.com/docs/languages-sdks/javascript/#entry...

Another way this can break down is if the user writes code to call the same APIs in the middle of a Pulumi script. I meant to try this myself to verify it works, but I would assume that Pulumi is not stopping me from doing something like this.


In general maybe, but in the specific context above, I think calling that loop declarative is accurate, and laughing at that classification is a poor response rooted in a deep misunderstanding.


    import pulumi
    from pulumi_gcp import storage

    bucket = "hof-io--develop-internal"
    name = "pulumi/hack/condition.txt"

    # Imperative control flow at preview time: poll the bucket object,
    # looping until its content (external state) tells us to stop.
    cond = False
    msg = "running"
    cnt = 0
    while not cond:
        cnt += 1
        key = storage.get_bucket_object_content(name=name, bucket=bucket)
        print(cnt, key.content)
        if key.content == "exit":
            msg = "hallo!"
            break

    pulumi.export('msg', msg)
    pulumi.export('cnt', cnt)
---

        769 exit
        770 exit
        771 exit
        772 exit
        773 exit
        774 exit
        775 exit

    Outputs:
        cnt: 775
        msg: "hallo!"

    Resources:
        + 1 to create

    info: There are no resources in your stack (other than the stack resource).

    Do you want to perform this update?  [Use arrows to move, type to filter]
      yes
    > no
      details
----

Of note: all but the last `exit` had a newline, until I used `echo -n` on the file I copied up.

---

ooo...

        348 what?!?!
        349 what?!?!
        350 what?!?!
        351 what?!?!
        352 what?!?!
        353 what?!?!
        354 what?!?!
        355 what?!?!
        356 what?!?!
        357 what?!?!
        358 what?!?!
        359 exit

    Outputs:
        cnt: 359
        msg: "hallo!"

    Resources:
        + 1 created

    Duration: 27s
---

I uploaded a different file while waiting to be asked to continue, and then proceeded to get different outputs

Note, while I can get the contents of a bucket in TF, I cannot build a loop around it as I have above

https://registry.terraform.io/providers/hashicorp/aws/latest...

TF might be susceptible to the same file contents manipulation between plan & apply as well, but then again, you can save a plan to a file and then run it later, so maybe not? Another experiment seems to be in order


I think this is an advantage of Pulumi; here are two use cases:

1. Creating a resource where created is not the same as ready. This is extraordinarily common with compute resources (a virtual machine, a container, an HTTP server, a process) where attempting to create follow-up resources can result in costly retry-back-off loops. Even when creating Kubernetes resources, Pulumi will stand up an internet-connected deployment more quickly than many other tools because you can ensure the image is published before a pod references it, the pod is up before a service references it, and so on. (The Kubernetes provider bakes some of these awaits in by default.)

2. Resource graphs that are dynamic, reflecting external data sources at the moment of creation. Maybe you want to write a Kubernetes operator, or synchronize an LDAP directory to a SaaS product; one of my favorite examples is that when I set up demos, I often configure the authorized public IPs dynamically:

    import * as publicIp from 'public-ip';

    new someProvider.Kubernetes.Cluster('cluster',
      {
        apiServerAccessProfile: {
          authorizedIPRanges: [await publicIp.v4()],
          enablePrivateCluster: false,
        },
      },
    );
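The "created is not the same as ready" point in case 1 boils down to an await step between provisioning and use. A generic sketch of that step (not Pulumi's actual internals; the `check` callback is a stand-in for something like "is the endpoint serving yet?"):

```python
import time

def wait_until_ready(check, timeout=300.0, interval=5.0):
    """Poll `check()` until it reports ready or the timeout elapses.

    This is the await step that separates "created" from "ready":
    the resource exists, but dependents shouldn't proceed until it
    actually responds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("resource was created but never became ready")

# Stub standing in for e.g. "is the pod's endpoint serving yet?":
# it reports ready on the third poll.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return calls["n"] >= 3

wait_until_ready(fake_check, timeout=1.0, interval=0.0)
```

Baking awaits like this into resource creation is what lets follow-up resources skip the costly retry/back-off loops described above.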


Of course you think it is an advantage; you work for Pulumi.

I'm telling you this is not how a potential user sees the same situation: it is a disadvantage, and it was one of the reasons we are not making the switch.

This example above is exactly the kind of code we don't want in ops: it depends on the user's environment and physical location at the time they run the command, which is bad practice. Thanks for an extra talking point, though.


The claim above isn't "imperative is impossible".


The claim above is that Pulumi uses an imperative interface and that it is quite easy to slip past the declarative guardrails, so in most cases Pulumi is imperative, not declarative. The fact that Pulumi makes this separation opaque can be discussed, as can a clear separation be shown as an alternative with benefits.

The claim I keep seeing from Pulumi folks is that Pulumi is declarative, which it is not, as shown in multiple posts by many people. Please stop calling it such; it demonstrates dishonesty towards users.


The claim above was that a for loop implied that the code couldn't be declarative.

> Please stop calling it such

I'm not claiming it is always declarative, I'm only claiming that a declarative example above can contain a for loop, and that laughing at that is the wrong response. That's it.


> Please stop calling it such

That was more me yelling into the void or larger thread than at anything specific you said, sorry :]


I was just wondering: what stops me from reading and writing to a cloud bucket like an infinite tape?

https://www.pulumi.com/registry/packages/gcp/api-docs/storag...


> This seems extremely dismissive and shallow.

When someone tries to make a sophisticated argument that up is down and white is black, dismissive and shallow is the right response.

> The actual work done under the covers is not imperative

Having a declarative layer somewhere in the stack doesn't make something declarative, if that's not the layer you actually use to work on and reason about the system. See the famous "the C language is purely functional" post.



> When someone tries to make a sophisticated argument that up is down and white is black

This is where the deep misunderstanding is coming from.


You can have loops and still be declarative. CUE has loops, though more technically they are comprehensions, and there is no assignment or stack in CUE.

One of the interesting aspects of CUE is that it gives us many of the programming constructs we are used to, but remains Turing incomplete, so there is no general recursion and no user-defined functions. There is a scripting layer where you can get more real-world stuff done too.

The CUE language is super interesting, has a very unique take on things and comes from the same heritage as Go, containers, and Kubernetes

https://cuelang.org | https://cuetorials.com
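CUE syntax aside, the "loops without assignment" idea can be approximated in Python: a comprehension describes the whole result as one expression, with no mutation or intermediate state (illustrative names only):

```python
# Imperative construction: mutation and assignment; statement order matters.
services_imperative = {}
for name in ["api", "web", "worker"]:
    services_imperative[name] = {"replicas": 2}

# Comprehension-style construction: one expression describing the whole
# result, with no mutation or intermediate state -- closer in spirit to
# how CUE's comprehensions generate structure.
services_declarative = {name: {"replicas": 2} for name in ["api", "web", "worker"]}

# Both describe the same configuration; only the second reads as a
# description rather than a sequence of steps.
assert services_imperative == services_declarative
```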


Hopefully it's not down to CTOs to pick tools for their company, but rather a process within DevOps/Engineering teams, etc.

Does anyone else see this as the Nagios Effect all over again, there must be lots to learn from history?


What is the nagios effect?


Nagios used to be open source only; then they created the Enterprise version and left the open source core version lagging behind, and it was forked a billion times or more :) creating the Nagios Effect. A lot of monitoring software companies then removed or replaced the Nagios core in their products.


I didn't know either, so I did some Googling and found an old announcement[1] from 2009:

> A group of leading Nagios protagonists including members of the Nagios Community Advisory board and creators of multiple Nagios Addons have launched Icinga – a fork of Nagios, the prevalent open source monitoring system. This independent project [is based upon a] broader developer community. [...] Icinga takes all the great features of Nagios and combines it with the feature requests and patches of the user community.

It also looks like in 2014, Nagios centralized and appropriated a domain name and website used for hosting Nagios plugins, away from the community (its plugin developers)[2]:

> In the past, the domain "nagios-plugins.org" pointed to a server maintained by us, the Nagios Plugins Development Team. The domain itself had been transferred to Nagios Enterprises a few years ago, but we had an agreement that the project would continue to be independently run by the actual plugin maintainers.¹ Yesterday, the DNS records were modified to point to web space controlled by Nagios Enterprises instead. This change was done without prior notice.

> To make things worse, large parts of our web site were copied and are now served (with slight modifications²) by <http://nagios-plugins.org/>. Again, this was done without contacting us, and without our permission.

> This means we cannot use the name "Nagios Plugins" any longer.

There's some previous discussion of those controversies on HN here: https://news.ycombinator.com/item?id=9452013

From that article[3]:

> [Icinga developer]: "Six months before the fork, there was a bit of unrest among Nagios' extension developers [...] Community patches went unapplied for a long time[.]"

> [...]

> Two years ago, more or less when the split happened, [Nagios author] was having problems resolving [trademark] issues with a company called "Netways".

I'm still not sure what the effect is supposed to be tbh.

--

1: https://icinga.com/blog/2009/05/06/announcing-icinga/

2: https://www.monitoring-plugins.org/archive/devel/2014-Januar...

3: https://web.archive.org/web/20160314090137/http://www.freeso...


I don't get this one: you pick OpenTerraform and get on with your life. It's the same as picking OpenSearch over Elastic. I can use the proprietary version that locks me into a single profit-seeking vendor and doesn't have community backing, or the one run by a foundation made up of companies that use and are heavily invested in Terraform.


How dare a vendor come up with an idea, pay people to execute on that idea, and then, gasp, try to make money from it? Outrageous!


How dare a vendor come up with an idea, pay people to execute on it, give it away for free to the world, acquire users and soak in all the community contributions from people who thought they were using and contributing to a public good, try and fail to indirectly monetize a hosted version because other people were better at it than them, then rug-pull out from under everyone and use the copyright/government stick to kill their competition because they can't compete on even terms.

Then a group of people who are users of the idea and actually making money off it with value-adds step up to maintain it as a community project, ensuring that it stays open for everyone -- yeah, those guys are the assholes. Terraform would have gone nowhere if it wasn't OSS, and Terraform would be nothing without the outside contributions that make up far more than the code of Terraform core itself. There's a trail of bodies to prove it.

And you should love this: projects that are stewarded by their own users are incentivized to make them the best they can be, instead of rejecting contributions because they compete with a cloud offering [1]

[1] https://github.com/hashicorp/terraform/issues/9556


The guys at Pulumi must be having a field day right now. It's exactly as you describe for us: we're long overdue for an upgrade of our Terraform config from pre-v1.0, and we'll most likely have to re-write a big part of our HCL code, so why not try a competitor?

With Vault however that's another story, I've yet to find another secrets management system that has a tight integration with Kubernetes, AWS and supports providers for things like Postgresql to have ephemeral database credentials.


Someone else posted a list of Vault alternatives, multiple of which (AFAICT) check your boxes: https://news.ycombinator.com/item?id=37151218


Most of them do the first two: integration with Kubernetes and AWS. Unfortunately, short-lived DB creds are not in any of those listed.


I doubt that’s what would happen if they could afford a license from Hashicorp.

Avoiding proprietary licenses has its place, but if you aren't using Terraform to build a product, this really shouldn't impact you much.


Shouldn't impact you much. _Yet._



