
Lots of people still using elastic, mongo, and redis.

What's different about this one?



  You may make production use of the Licensed Work, provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.
Read benevolently, it's a prohibition against spinning up a service based on HashiCorp's code and undercutting HashiCorp's pricing.

On the other hand, if I build a product with HashiCorp-owned BSL'd code, then HashiCorp releases/acquires a product that competes with mine, then my license is void.


My understanding is that the aforementioned companies' licenses are to the same effect, so what is the difference?


Redis is 3-clause BSD; BSD does not have a "your license is void if you sell a product that competes with us" clause. Redis does have enterprise products that are licensed in a manner similar to BSL, but Redis itself is not.

MongoDB and Elastic are SSPL. SSPL approaches the problem like the AGPL; it compels licensees who sell a service derived from the software to make available under the SSPL the source of all supporting tooling and software so that a user could spin up their own version of the service.

There's an argument to be made that SSPL is de facto "you can't compete with us", since it would be more challenging to make a competitive SaaS offering if your whole stack is source available. I don't disagree. However, as distasteful as SSPL is, at least it doesn't condition the license on the unknowable future product offerings of HashiCorp.


Thanks for the explanation. My understanding is that they are all after limiting competition in various ways, while still trying to maintain the mantle of open source.

We are certainly in interesting times around the monetization and financial sustainability of open source.


SSPL has no provision even close to the reach of the "anti-competition" clause HashiCorp is using. While SSPL is not considered open source, it isn't that far off from the AGPL. The difference between SSPL and AGPL is that SSPL (1) applies regardless of whether the service is modified and (2) extends copyleft virality to all programs that support running the service, including those that interact with the software over a network.

MongoDB, Elastic, etc. cannot stop you from running a competitor based on the terms of their licenses; they just ask that you publish the source code for whatever service you're running in its entirety (I acknowledge there are disagreements about how far "entirety" extends). The clause in HashiCorp's license actually revokes the right to use their software at all if you're a direct competitor.

OK, no one is going to build an open source competitor to Elastic or MongoDB because then you have no moat and your business will probably fail, I get it, but it's still possible to do without repercussion. It's not like the AGPL is that far off in terms of limitation, either, which is why you don't see many copyleft services run by large corporations unless they've been dual-licensed.


Just went with Elastic cloud after evaluating both Elasticsearch and OpenSearch. It was an easy choice to stick with the incumbent/creator that I was familiar with. No complaints so far.


We just went back to TF after giving Pulumi a try. Prefer declarative syntax for infra, and more abuse of YAML ("fn::..." here) is not what I'm after.

We are working on wrapping TF in CUE since you can CUE->JSON->TF
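The JSON leg of that pipeline works because Terraform also accepts JSON configuration (`*.tf.json`) alongside HCL, so anything that can emit JSON, CUE included, can feed it. A minimal sketch with plain Python standing in for the `cue export` step (the resource and bucket name here are made up):

```python
import json

# Desired infrastructure as plain data, shaped the way a CUE export
# would produce it. The layout follows Terraform's JSON syntax:
# resource -> type -> name -> attributes.
config = {
    "resource": {
        "aws_s3_bucket": {
            "logs": {"bucket": "example-logs-bucket"},
        }
    }
}

# Terraform picks up *.tf.json files in the working directory.
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```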

https://github.com/hofstadter-io/cuelm

Many more CUE experiments are going on in the devops space


Pulumi supports a few languages other than YAML, and Pulumi is declarative[1]; the programs you write are only as complex as you want them to be. This Python program declares an S3 bucket and declares ten objects to exist in it.

    from pulumi_aws import s3

    bucket = s3.Bucket('bucket')

    for i in range(10):
        s3.BucketObject(
            f'object-{i}',
            s3.BucketObjectArgs(
                bucket=bucket.id,
                key=str(i),
            )
        )

Even so, Pulumi YAML has a "compiler" option, so if you want to write CUE, jsonnet[3], or other[2] languages, it definitely supports that.

Disclosure: I led the YAML project and added the compiler feature at the request of some folks internally looking for CUE support :)

[1] https://www.pulumi.com/blog/pulumi-is-imperative-declarative...

[2] https://www.pulumi.com/blog/extending-pulumi-languages-with-...

[3] https://leebriggs.co.uk/blog/2022/05/04/deploying-kubernetes...


I'm aware of the SDKs, but we don't want them because they are an imperative interface, no matter how you want to spin it as "declarative". I have access to all the imperative constructs in the underlying language and can create conditional execution without restriction.

Even if I use the YAML compiler for CUE (which we did), I still have to write `fn::` strings as keys, which is ugly and not the direction our industry should go. Let's stop putting imperative constructs into strings; let's use a better language for configuration, something purpose-built, not an SDK in an imperative language. These "fn::" strings just bring imperative constructs back into what could have been an actual declarative interface. Note, Pulumi is not alone here; there are lots of people hacking YAML because they don't know what else there is to do. CEL making its way to k8s is another specific example.

This cannot be the state of the art in ops; we can do much better. But I get that Pulumi is trying to reach a different set of users than devops and will end up with different choices and tradeoffs.

(I maintain https://cuetorials.com and am very active in the CUE community)


An imperative for loop is somehow declarative now? Lol.


This seems extremely dismissive and shallow.

The imperative part of that code appears to be analogous to templating. The actual work done under the covers is not imperative, but is based on the difference between the result of the template execution and the current state of the system. That's what makes it declarative.
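To make the templating analogy concrete: the loop only produces a desired-state description, and a separate engine diffs that description against the current state of the system to derive operations. A toy sketch (the `plan` function and both state dicts are invented for illustration, not Pulumi's API):

```python
# Desired state, built with an "imperative" loop -- analogous to the
# ten-object bucket example above.
desired = {f"object-{i}": {"key": str(i)} for i in range(10)}

# Current state of the system, as an engine might have recorded it.
current = {"object-0": {"key": "0"}, "object-9": {"key": "stale"}}

def plan(desired, current):
    """Diff desired against current and derive create/update/delete ops."""
    creates = [n for n in desired if n not in current]
    updates = [n for n in desired if n in current and desired[n] != current[n]]
    deletes = [n for n in current if n not in desired]
    return creates, updates, deletes

creates, updates, deletes = plan(desired, current)
# The operations depend only on the diff, not on how the desired
# state was produced (loop, comprehension, or hand-written config).
```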


It really depends on the interaction between the user's Pulumi script and the Pulumi engine.

If there is more than one back and forth, you become imperative. Even if you imperatively generate a "declarative" intermediate representation (and I'm not sure how a state file at a point in time could ever be imperative), you would then get back some data from the engine and make choices about what to send off to the engine in the next request.

It's important to understand that with Pulumi, you can end up in either situation. The better way to consider this is probably: you have to be careful not to become imperative overall.

https://www.pulumi.com/docs/languages-sdks/javascript/#entry...

Another way this can break down is if the user writes code to call the same APIs in the middle of a Pulumi script. I meant to try this myself to verify it works, but I would assume that Pulumi is not stopping me from doing something like this.


In general maybe, but in the specific context above, I think calling that loop declarative is accurate, and laughing at that classification is a poor response rooted in a deep misunderstanding.


    import pulumi
    from pulumi_gcp import storage

    bucket = "hof-io--develop-internal"
    name = "pulumi/hack/condition.txt"

    cond = False
    msg = "running"
    cnt = 0
    while not cond:
        cnt += 1
        key = storage.get_bucket_object_content(name=name, bucket=bucket)
        print(cnt, key.content)
        if key.content == "exit":
            msg = "hallo!"
            break

    pulumi.export('msg', msg)
    pulumi.export('cnt', cnt)
---

        769 exit
        770 exit
        771 exit
        772 exit
        773 exit
        774 exit
        775 exit

    Outputs:
        cnt: 775
        msg: "hallo!"

    Resources:
        + 1 to create

    info: There are no resources in your stack (other than the stack resource).

    Do you want to perform this update?  [Use arrows to move, type to filter]
      yes
    > no
      details
----

Of note, every exit but the last had a trailing newline, until I `echo -n`'d the file I copied up

---

ooo...

        348 what?!?!
        349 what?!?!
        350 what?!?!
        351 what?!?!
        352 what?!?!
        353 what?!?!
        354 what?!?!
        355 what?!?!
        356 what?!?!
        357 what?!?!
        358 what?!?!
        359 exit

    Outputs:
        cnt: 359
        msg: "hallo!"

    Resources:
        + 1 created

    Duration: 27s
---

I uploaded a different file while waiting to be asked to continue, and then proceeded to get different outputs

Note, while I can get the contents of a bucket in TF, I cannot build a loop around it as I have above

https://registry.terraform.io/providers/hashicorp/aws/latest...

TF might be susceptible to the same file contents manipulation between plan & apply as well, but then again, you can save a plan to a file and then run it later, so maybe not? Another experiment seems to be in order


I think this is an advantage of Pulumi, here are two use cases:

1. Creating a resource where created is not the same as ready. This is extraordinarily common with compute resources (a virtual machine, a container, an HTTP server, a process) where attempting to create follow-up resources can result in costly retry-back-off loops. Even when creating Kubernetes resources, Pulumi will stand up an internet-connected deployment more quickly than many other tools because you can ensure the image is published before a pod references it, the pod is up before a service references it, and so on. (The Kubernetes provider bakes some of these awaits in by default.)

2. Resource graphs that are dynamic, reflecting external data sources at the moment of creation. Whether you want to write a Kubernetes operator, synchronize an LDAP directory to a SaaS product, or one of my favorite examples: when I set up demos, I often configure the authorized public IPs dynamically:

    import * as publicIp from 'public-ip';

    new someProvider.Kubernetes.Cluster('cluster',
      {
        apiServerAccessProfile: {
          authorizedIPRanges: [await publicIp.v4()],
          enablePrivateCluster: false,
        },
      },
    );
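For case 1, the created-vs-ready gap is typically bridged with a readiness poll, rather than letting dependent resources retry-back-off against a half-up dependency. A generic sketch, assuming a caller-supplied `check_ready` probe (hypothetical; e.g. an HTTP health check against the new resource):

```python
import time

def wait_until_ready(check_ready, timeout=300.0, interval=5.0):
    """Poll check_ready() until it returns True or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_ready():
            return True
        time.sleep(interval)
    return False
```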


Of course you think it is an advantage, you work for Pulumi

I'm telling you this is not how a potential user sees the same situation, that it is a disadvantage and was one of the reasons we are not making the switch.

This example above is exactly the kind of code we don't want in ops: it depends on the user's environment and physical location at the time they run the command, which is bad practice. Thanks for an extra talking point, though.


The claim above isn't "imperative is impossible".


The claim above is that Pulumi uses an imperative interface and that it is quite easy to slip past the declarative guardrails, so in most cases Pulumi is imperative, not declarative. The fact that Pulumi makes this separation opaque is worth discussing, as is an alternative that keeps the separation clear and the benefits that brings.

The claim I keep seeing from Pulumi folks is that Pulumi is declarative, which it is not, as shown in multiple posts by many people. Please stop calling it such; it demonstrates dishonesty towards users.


The claim above was that a for loop implied that the code couldn't be declarative.

> Please stop calling it such

I'm not claiming it is always declarative, I'm only claiming that a declarative example above can contain a for loop, and that laughing at that is the wrong response. That's it.


> Please stop calling it such

That was more me yelling into the void or larger thread than at anything specific you said, sorry :]


I was just wondering what stops me from reading and writing to a cloud bucket like an infinite tape?

https://www.pulumi.com/registry/packages/gcp/api-docs/storag...


> This seems extremely dismissive and shallow.

When someone tries to make a sophisticated argument that up is down and white is black, dismissive and shallow is the right response.

> The actual work done under the covers is not imperative

Having a declarative layer somewhere in the stack doesn't make something declarative, if that's not the layer you actually use to work on and reason about the system. See the famous "the C language is purely functional" post.



> When someone tries to make a sophisticated argument that up is down and white is black

This is where the deep misunderstanding is coming from.


You can have loops and still be declarative. CUE has loops, though more technically they are comprehensions, and there is no assignment or stack in CUE.

One of the interesting aspects of CUE is that it gives us many of the programming constructs we are used to but remains Turing incomplete, so there is no general recursion and no user-defined functions. There is a scripting layer where you can get more real-world stuff done too.

The CUE language is super interesting; it takes a unique approach and comes from the same heritage as Go, containers, and Kubernetes.

https://cuelang.org | https://cuetorials.com



