Hacker News — 8n4vidtmkvmk's comments

Pruning the request and even the response is pretty trivial with zod. I wouldn't onboard GQL for that alone.

Not sure about the schema evolution part. Protobufs seem to work great for that.
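(For context, a minimal TypeScript sketch of the kind of pruning being described — `pruneRequest` is a hypothetical helper standing in for a zod schema's default strip behavior, not zod's actual API:)

```typescript
// Hypothetical helper mirroring schema-based pruning: keep only the
// declared keys whose values match the declared type, drop everything else.
type Shape = Record<string, "string" | "number" | "boolean">;

function pruneRequest(
  shape: Shape,
  input: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, type] of Object.entries(shape)) {
    if (typeof input[key] === type) out[key] = input[key];
  }
  return out;
}

// Unknown fields ("admin") are silently dropped rather than rejected.
const clean = pruneRequest(
  { userId: "number", name: "string" },
  { userId: 1, name: "ada", admin: true },
);
```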


In my (now somewhat dated) graphql experience, evolving an API is much harder. Input parameters in particular. If a server gets inputs it doesn't recognize, or if client and server disagree that a field is optional or not (even if a value was still supplied for it so the question is moot), the server will reject the request.

> Pruning the request and even the response is pretty trivial with zod.

I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.

Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but on a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you roll yourself or enforce by convention) is important IMO.


Pruning a response does nothing since everything still goes across the network

Pruning the response would help validate that your response schema is correct and that it's delivering what was promised.

But you're right, if you have version skew and the client is expecting something else then it's not much help.

You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
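(A sketch of that tolerant client-side decoding in TypeScript — `decodeTolerant` is a hypothetical name, not any particular library's API:)

```typescript
// Tolerant client-side decoding: unknown server fields are pruned,
// and fields the server stopped sending are filled from client defaults.
function decodeTolerant<T extends Record<string, unknown>>(
  defaults: T,
  wire: Record<string, unknown>,
): T {
  const out: Record<string, unknown> = { ...defaults };
  for (const key of Object.keys(defaults)) {
    if (key in wire) out[key] = wire[key];
  }
  // Extra wire-only fields never make it into `out`.
  return out as T;
}

// The server added `beta` (pruned) and removed `avatar` (defaulted).
const user = decodeTolerant(
  { id: 0, name: "", avatar: "placeholder.png" },
  { id: 7, name: "ada", beta: true },
);
```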


You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.

It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
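(Roughly what that server-side pruning looks like, assuming a simplified selection-set shape rather than real GraphQL parsing:)

```typescript
// A selection maps each requested field to `true` (leaf) or a nested selection.
type Selection = { [field: string]: true | Selection };

// Prune a "fat" resolver result down to only the fields the client asked for,
// the way a GraphQL server does before anything goes over the wire.
function pruneToSelection(
  value: Record<string, unknown>,
  sel: Selection,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [field, sub] of Object.entries(sel)) {
    if (!(field in value)) continue;
    const v = value[field];
    out[field] =
      sub === true || typeof v !== "object" || v === null
        ? v
        : pruneToSelection(v as Record<string, unknown>, sub);
  }
  return out;
}

// The resolver returned privileged extras, but the client never sees them.
const wire = pruneToSelection(
  { id: 1, profile: { name: "ada", ssn: "000" }, internalError: "stack..." },
  { id: true, profile: { name: true } },
);
```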


I would like to remind you that in most cases the GQL is not colocated on the same hardware as the services it queries.

Therefore requests between GQL and downstream services are travelling "over the wire" (though I don't see it as an issue)

Having REST APIs that return only "fat" objects is really not the most secure way of designing APIs


"Just the requested fields" as requested by the client?

Because if so that is no security benefit at all, because I can just... request the fat fields.


I think you're oversimplifying it. You've left off the part where the client can specify which fields they want.

That's something you should only really do in development, and then cement for production. Having open queries where an attacker can find interesting resolver interactions in production is asking for trouble

> That's something you should only really do in development, and then cement for production

My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.


This is not the intended workflow. It is meant to be dynamic in nature.

But has this been thoroughly documented and are there solid libraries to achieve this?

My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness


Well, it seems that the Apollo way of doing it now, via their paid GraphOS, is backwards of what I learned 8 years ago (there is always more than one way to do things in CS).

At build time, the server generates random-string resolver names that map onto queries one-to-one, fixed, because we know exactly what we need when we're shipping to production.

Clients can only call those random strings with some parameters; the graph is now locked down, and the production server only responds to the random-string resolver names

Flexibility in dev, restricted in prod
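(A toy version of that lockdown — `persist`, `OPERATIONS`, and `execute` are hypothetical names; real setups use Apollo/Relay persisted-query tooling rather than hand-rolled maps:)

```typescript
import { createHash } from "node:crypto";

// Build time: every query the client ships is hashed; this map is the contract.
const OPERATIONS = new Map<string, string>();

function persist(query: string): string {
  const id = createHash("sha256").update(query).digest("hex");
  OPERATIONS.set(id, query);
  return id; // the client bundles only this opaque id
}

// Production server: only known ids execute; raw query strings are rejected.
function execute(id: string): string {
  const query = OPERATIONS.get(id);
  if (!query) throw new Error("unknown operation");
  return query; // stand-in for actually running the resolvers
}

const meQueryId = persist("query { me { id name } }");
```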


I mean yeah, in that Persisted Queries are absolutely documented and expected in production on the Relay side, and you’re a hop skip and jump away from disallowing arbitrary queries at that point if you want to

Though you still don’t need to and shouldn’t. Better to use the well defined tools to gate max depth/complexity.


Sure, maybe you compile away the query for production but the server still needs to handle all the permutations.

yup, and while they are fixed, it amounts to a more complicated code flow to reason about compared to your typical REST handler

Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead

I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things


I still haven't witnessed a serious attempt at passing the Turing test. Are we just assuming it's been beaten, or have people tried?

Like if you put someone in an online chat and ask them to identify if the person they're talking to is a bot or not, you're telling me your average joe honestly can't tell?

A blog post or a random HN comment, sure, it can be hard to tell, but if you allow some back and forth... I think we can still sniff out the AIs.


A couple of months ago I saw a paper (can't remember if published or just on arxiv) in which Turing's original 3-player Imitation Game was played with a human interrogator trying to discern which of a human responder and an LLM was the human. When the LLM was a recent ChatGPT version, the human interrogator guessed it to be the human over 70% of the time; when the LLM was weaker (I think Llama 2), the human interrogator guessed it to be the human something like 54% of the time.

IOW, LLMs pass the Turing test.


The prompt for the LLM was to respond with short phrases, though. I don't know if that's fair, since it hides the very verbosity that makes the model useful.

Now we just append AI to everything instead...

that's actually a good idea: agentic contracts

or web3 agents

just for their self-executing properties, not because there are any transformers involved

although a project could just build a backend that decides to use some of their contract's functions via an LLM agent. hm, that might actually be easier and more fun than normal web3 backends

ok I’ll stop, back to building


Then blast the product, not the people who built it.


They are blasting the product, tbf. The people part is a small part of it, and apparently it's at least distracting the HN community from their point.


Which is exactly why to cut it out. If you put salt in my cup of tea, I’m gonna notice and it’s gonna ruin the drink.


Microsoft poured salt into your cup years ago, you just did not notice.


If you put more salt into this rather thinly-stretched metaphorical cup when telling me what Microsoft did you are not going to endear yourself to me. Why muddy your message?


You cannot divorce a product from the people who built it. The product reflects their priorities and internal group well-being. A different group of people would have built a different product.


If you've worked in a large company, you know that the product reflects the priorities of the company so much more than the people who work there. Leadership states the priority and the employees do what they're told.


Leadership is part of the group of people who built the product, therefore different leadership would have also built a different product.

With that said, it's also not correct to claim that line folk have no influence at all. I don't believe that you can blame any individual since they may have stood up against something bad being put in the product, but they're still part of a collective group of people that built a bad product.


There's no stupid product, only stupid people.


The product was made by people. Or by AI which was made and controlled by people.


The product isn't some result of a series of "oopsies". The worst aspects of bad and/or user-hostile software products are that way because the people working at these companies want them to be that way.

Unless you want to call them just that incompetent. I assume they'd complain about that label too.

In short: no, it's not "the product"; the people building it are the problem. Somehow everyone working in big tech wants all the praise all the time, individually, but never takes even the slightest bit of responsibility for the constant enshittification they drive forward.


If you only knew...


That's one way to look at it. I personally think it's worth burning a few hours to learn how to do something yourself even if you don't immediately get value out of it.


I already know how to do it, I just don't see the value in it.


This reads like satire. There's so much jargon and so many products involved just to do a little bit of logging. It's ridiculous.

That is to say I agree with the author.


I'm very skeptical about this zero commercial value claim. I don't think everyone skips past it, and even if they do, that's just the stuff they're detecting as AI. How much have they not identified? What about in a couple years?

Heck, even humans subtly trying to sell something give off a vibe you can pick up quickly. But now and then they're entertaining or subliminal enough that they get through.


I still think it would be fun for a video game. Write a backstory for a whole bunch of NPCs and let the player dig as deep as they like.

I'm not sure what the bottleneck is right now. Either this idea isn't as fun as I think, or we can't do it in real time on consumer hardware yet.


A lot of full conversion mods just find community members that want to do some VO for practice or as a resume booster or just for the funsies. I think you'd be surprised how easy it is to get half-decent voice actors if you've got an interesting idea to build out.


The problem isn't hiring/paying the voice actors, it's that the NPC can say anything. It's not pre-scripted.


On reddit, I delete it daily. Partly for that reason and partly because the Internet is scary.

The line, though, is probably when you put more harm out into the world than good. That's probably a good place to draw it.

