In my (now somewhat dated) GraphQL experience, evolving an API is much harder. Input parameters in particular. If a server gets inputs it doesn't recognize, or if client and server disagree about whether a field is optional (even if a value was supplied for it, so the question is moot), the server will reject the request.
> Pruning the request and even the response is pretty trivial with zod.
I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.
Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but on a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.
Pruning the response would help validate that your response schema is correct and that it's delivering what was promised.
But you're right, if you have version skew and the client is expecting something else then it's not much help.
You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.
It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
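Conceptually (an illustrative sketch, not the actual graphql-js internals), the pruning works like this: the wire response is built by walking only the fields named in the query's selection set, so anything extra the resolver returned never leaves the server.

```typescript
// Illustrative sketch of GraphQL-style response pruning.
// `true` means "include this scalar field"; a nested object means
// "recurse into this sub-selection".
type Selection = { [field: string]: Selection | true };

function prune(
  value: Record<string, any>,
  selection: Selection,
): Record<string, any> {
  const out: Record<string, any> = {};
  for (const [field, sub] of Object.entries(selection)) {
    if (!(field in value)) continue;
    out[field] = sub === true ? value[field] : prune(value[field], sub);
  }
  return out;
}

// A "fat" resolver result containing privileged data...
const resolved = {
  id: "42",
  name: "Ada",
  passwordHash: "s3cret",   // never requested, never sent
  internalTrace: "stack...", // ditto
};

// ...pruned to the selection from a query like `{ user { id name } }`:
const wire = prune(resolved, { id: true, name: true });
// wire => { id: "42", name: "Ada" }
```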
That's something you should only really do in development, and then cement for production. Having open queries where an attacker can find interesting resolver interactions in production is asking for trouble.
> That's something you should only really do in development, and then cement for production
My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.
But has this been thoroughly documented and are there solid libraries to achieve this?
My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness
Well, it seems that the Apollo way of doing it now, via their paid GraphOS, is backwards of what I learned 8 years ago (there is always more than one way to do things in CS).
At build time, the server generates random string names that map onto queries, one-to-one and fixed, because we know exactly what we need when we are shipping to production.
Clients can only call those random names with some parameters; the graph is now locked down, and the production server only responds to those names.
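The allowlist scheme described above can be sketched in a few lines (names are hypothetical; real setups like Apollo's or Relay's persisted queries differ in the details, but the shape is the same): hash each known query at build time, ship only the hash to clients, and reject anything the server hasn't registered.

```typescript
// Sketch of a persisted-query allowlist. At build time the client's
// queries are hashed and registered; in production the server only
// executes queries whose id it already knows.
import { createHash } from "node:crypto";

const knownQueries = new Map<string, string>();

function register(query: string): string {
  const id = createHash("sha256").update(query).digest("hex");
  knownQueries.set(id, query);
  return id; // the client ships only this id, never the query text
}

function execute(id: string, variables: Record<string, unknown>): string {
  const query = knownQueries.get(id);
  if (!query) {
    // Arbitrary ad-hoc queries are rejected outright in production.
    throw new Error("Unknown persisted query");
  }
  // ...hand `query` + `variables` to the normal GraphQL executor here.
  return query;
}

const id = register("query User($id: ID!) { user(id: $id) { name } }");
execute(id, { id: "42" }); // ok: returns the registered query text
```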
I mean yeah, in that Persisted Queries are absolutely documented and expected in production on the Relay side, and you’re a hop skip and jump away from disallowing arbitrary queries at that point if you want to
Though you still don’t need to and shouldn’t. Better to use the well-defined tools to gate max depth/complexity.
yup, and while they are fixed, it amounts to a more complicated code flow to reason about compared to your typical REST handler
Seriously though, you can pretty much map GraphQL queries and resolvers onto JSON Schema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler, with more overhead
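To make the "resolvers are just functions" point concrete, here's a minimal sketch (all names hypothetical): the same business logic sits behind either a GraphQL-style resolver signature or a REST-style handler.

```typescript
// Sketch: a GraphQL field resolver and a REST handler are close cousins.
// Both are functions from (args, context) to data.
type Context = { userId: string };

// GraphQL-style resolver signature: (parent, args, context) => result
const userResolver = (_parent: unknown, args: { id: string }, _ctx: Context) => ({
  id: args.id,
  name: "Ada",
});

// REST-style handler: parse params from the request, call the same logic.
const getUserHandler = (params: { id: string }, ctx: Context) =>
  userResolver(null, { id: params.id }, ctx);

// Same data out of either entry point:
const viaGraphQL = userResolver(null, { id: "7" }, { userId: "u1" });
const viaRest = getUserHandler({ id: "7" }, { userId: "u1" });
```

The difference is mostly in the machinery around the function: the GraphQL executor handles selection sets, batching, and nesting, which is the overhead the comment above refers to.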
I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things
I still haven't witnessed a serious attempt at passing the Turing test. Are we just assuming it's been beaten, or have people tried?
Like if you put someone in an online chat and ask them to identify if the person they're talking to is a bot or not, you're telling me your average joe honestly can't tell?
A blog post or a random HN comment, sure, it can be hard to tell, but if you allow some back and forth... I think we can still sniff out the AIs.
A couple of months ago I saw a paper (can't remember if published or just on arxiv) in which Turing's original 3-player Imitation Game was played with a human interrogator trying to discern which of a human responder and an LLM was the human. When the LLM was a recent ChatGPT version, the human interrogator guessed it to be the human over 70% of the time; when the LLM was weaker (I think Llama 2), the human interrogator guessed it to be the human something like 54% of the time.
just for their self executing properties not because there are any transformers involved
although a project could just build a backend that decides to use some of their contract’s functions via an llm agent, hm that might actually be easier and fun than normal web3 backends
If you put more salt into this rather thinly-stretched metaphorical cup when telling me what Microsoft did you are not going to endear yourself to me. Why muddy your message?
You cannot divorce a product from the people who built it. The product reflects their priorities and internal group well-being. A different group of people would have built a different product.
If you've worked in a large company, you know that the product reflects the priorities of the company so much more than the people who work there. Leadership states the priority and the employees do what they're told.
Leadership is part of the group of people who built the product, therefore different leadership would have also built a different product.
With that said, it's also not correct to claim that line folk have no influence at all. I don't believe that you can blame any individual since they may have stood up against something bad being put in the product, but they're still part of a collective group of people that built a bad product.
The product isn't some result of a series of "oopsies". The worst aspects of bad and/or user-hostile software products are that way because the people working at these companies want them to be that way.
Unless you want to call them just that incompetent. I assume they'd complain about that label too.
In short: No, it's not "the product"; the people building it are the problem. Somehow everyone working in big tech wants all the praise all the time, individually, but never takes even the slightest bit of responsibility for the constant enshittification they drive forward.
That's one way to look at it. I personally think it's worth burning a few hours to learn how to do something yourself even if you don't immediately get value out of it.
I'm very skeptical about this zero commercial value claim. I don't think everyone skips past it, and even if they do, that's just the stuff they're detecting as AI. How much have they not identified? What about in a couple years?
Heck, even humans subtly trying to sell something give off a vibe you can pick up quickly. But now and then they're entertaining or subliminal enough that they get through.
A lot of full conversion mods just find community members that want to do some VO for practice or as a resume booster or just for the funsies. I think you'd be surprised how easy it is to get half-decent voice actors if you've got an interesting idea to build out.
Not sure about the schema evolution part. Protobufs seem to work great for that.