I used to select my words very carefully and feel frustration when people misinterpreted them or did not understand the precise angle behind that choice. Reading other people's communication would often be confusing because they were not nearly as precise in their language.
At some point I realized that if I didn't want to be permanently frustrated, I had to adapt to the broad reality of how humans communicate. I introduced more context and redundancy into my writing, I learned to use analogies to make it easier for others to get the big picture. Most importantly, I stopped expecting every word I read to mean exactly what I thought it meant, and instead tried to get an idea of what they were trying to say, rather than fixating on what they were actually saying.
Years later I figured that I was autistic, and that it had played a big role in my difficulties trying to understand and be understood by normies.
I'm usually precise in my wording and choose specific words for a reason and am also sometimes annoyed by people ignoring the preciseness.
However, I also sometimes can't find precise words that are both unambiguous and concise, so I settle for much less precise ones for lack of a better alternative. I often flag that when it matters, but it happens far too often to do it every time.
Also words simply aren't completely precise. A word might be perfectly fitting for what I want to say with it in a situation, but someone else understands it as something slightly different and they are not wrong about it. Words often simply do not have one exact shared meaning.
Natural language is imprecise and it is fundamentally a lossy compression function. One that uses a shared dictionary that is not identical for both encoder and decoder. You simply need some amount of error correction in encoding and decoding.
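A toy sketch of that framing (my own illustration, with made-up dictionaries, not anything from the thread): the sender and receiver each decode words through their own word-to-meaning mapping, and where the mappings disagree, the message silently drifts.

```python
# Two parties "compress" through word->concept dictionaries that are
# almost, but not quite, identical. No one is wrong; the codebooks differ.
sender_dict = {"cheap": "low-cost", "basic": "fundamental", "scheme": "plan"}
receiver_dict = {"cheap": "low-quality", "basic": "simplistic", "scheme": "ploy"}

def transmit(words, enc, dec):
    """Sender picks each word for its meaning in *their* dictionary;
    receiver decodes it with their own. Returns (word, meant, understood)."""
    return [(w, enc[w], dec[w]) for w in words]

for word, meant, understood in transmit(["cheap", "scheme"], sender_dict, receiver_dict):
    status = "ok" if meant == understood else "MISMATCH"
    print(f"{word!r}: meant {meant!r}, understood {understood!r} -> {status}")
```

The "error correction" from the earlier comment is exactly the redundancy and context that lets the receiver notice the mismatch and repair it.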
In the same way that the "worse" a speaker is at communicating, the more likely something gets lost, the same is true the "worse" the audience is at listening, paying attention, or understanding. Both ends make the connection. This will be easy to read as calling the audience dumb, but that’s not what I’m saying. I’m saying the ability to understand involves trying, and the audience has some control over successful communication, much like the speaker does. They can sit with the idea for a second longer before responding, learn and pick up (or ask about) whatever gap they have if they’re not up to speed, or in many cases just listen without distraction.
Conversations have various power dynamics where one person may have more of the burden, but it is far from always a speaker pitching something to someone who isn't inclined to it. Peers leave hallway chats regularly having “aligned” on two different things. Lots of things we’re talking about are actually complex, and simple communication will effectively be miscommunication.
I think we’ve swung too far toward broadly attributing confusion to weak speaking. It can certainly help to keep polishing and reworking your words to overcome ever-worse listening habits. That can take one very far, but it doesn’t change that we keep raising the bar higher and higher, and therefore more messages and ideas dissipate into the air.
I resonate strongly with this comment chain. At this stage in life I think I’ve essentially figured out how to adapt and don’t see much point in getting diagnosed. But it is interesting seeing comments that feel like I could have written them myself.
> At some point I realized that if I didn't want to be permanently frustrated, I had to adapt to the broad reality of how humans communicate.
See, you say that, yet I'm perpetually frustrated because so many humans communicate so fucking poorly, which AI is making a bit better (no more word salad riddled with typos, ill-understood terms, what have you) but also worse (people now put even less effort into communication, which is genuinely an achievement).
I was told all through my school years that I would need to write well to be taken seriously in business, and my entire career has been rife with aging old fools overseeing me who could barely fucking type, let alone write.
This is such a good summary of effective communication practices. It was the same sort of thought process that I went through when writing technical documentation and presentations, and it served me very well.
But the point isn’t that they’re more different than alike. The point is that learning C isn't really that hard; it’s just that corporations don’t want you building apps with a stack they don’t control.
If a JS dev really wanted to, it wouldn’t be a huge uphill climb to code a C app, because the syntax and concepts are similar enough.
Look at my user profile. Divergence in modern NVidia GPUs does not work the way you think it does. A separate program counter per thread does not mean that on each clock each thread is issuing a different instruction. See section 3.2.2.1. of https://docs.nvidia.com/cuda/cuda-programming-guide/03-advan...
Of course divergence is sometimes unavoidable. That is why GPUs support it. But substantially divergent code comes at a significant cost.
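A rough model of why divergence costs something (my own sketch of the classic SIMT execution model, not NVIDIA's actual scheduler): a warp executes every branch path that any of its threads takes, masking off the inactive lanes, so warp-level cycle cost grows with the number of distinct paths.

```python
# Classic SIMT cost model: the warp serially runs each distinct path
# taken by any of its lanes; lanes not on that path sit idle (masked).
WARP_SIZE = 32

def warp_cycles(path_per_thread, path_cost):
    """Cycles a warp spends if it must serially execute each distinct path."""
    taken = set(path_per_thread)
    return sum(path_cost[p] for p in taken)

path_cost = {"A": 100, "B": 100}

uniform   = ["A"] * WARP_SIZE        # all 32 lanes agree on one path
divergent = ["A"] * 16 + ["B"] * 16  # warp splits down the middle

print(warp_cycles(uniform, path_cost))    # 100
print(warp_cycles(divergent, path_cost))  # 200: both paths run, half the lanes idle each time
```

Note the divergent warp pays for both paths even though each lane only needed one of them; that is the "significant cost" above, before counting any reconvergence overhead.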
> Modern GPUs will go: huh, it sure would be cool if we just shifted the threads about to produce two non divergent warps, and bam divergence solved at the hardware level
Could you kindly share a source for this? Shader Execution Reordering (SER) is available for Ray tracing, but it is not a general-purpose feature that can be used in generic compute shaders.
> Divergent threads can have a better throughput than you'd expect on a modern GPU, as they get more capable at handling this. Divergence isn't bad, its just something you have to manage - and hardware architectures are rapidly improving here
I would strongly advise against this. GPUs are highly efficient when neighboring threads within a warp access neighboring data and follow largely the same code path. Even across warps, data locality is highly desirable.
>I would strongly advise against this. GPUs are highly efficient when neighboring threads within a warp access neighboring data and follow largely the same code path. Even across warps, data locality is highly desirable.
It's a bit like saying writing code at all is bad, though. Divergence isn't desirable, but neither is running any code at all - sometimes you need it to solve a problem.
Not supporting divergence at all is a huge mistake IMO. It isn't good, but sometimes it's necessary.
>Could you kindly share a source for this? Shader Execution Reordering (SER) is available for Ray tracing, but it is not a general-purpose feature that can be used in generic compute shaders.
My understanding is that this is fully transparent to the programmer; it's just more advanced scheduling for threads. SER is something different entirely.
Nvidia are a bit vague here, so you have to go digging into patents if you want more information on how it works
I guess this supports a vague belief that I have held for decades: it is really difficult to rank the intelligence of people who are smarter than you
Through work I had the privilege of being around lots of people who were smarter than me, but if somebody asked me to rank them from "somewhat smarter" to "much smarter", I would have had a hard time.
Just an anecdote! I don't have any hard evidence.
I also wondered for many years why most of them didn't quit their jobs when on paper they would have been able to do so, but work is not a great place to ask those sorts of questions.
I think this might not be true though. This is like saying a marathon runner can walk like an amputee using a prosthetic.
Just like anyone else with a disadvantage, people who aren't that smart develop diverse compensatory strategies to work around their intellectual limitations, and these can look very different from popular caricatures of "dumb guy". A stupid person is not as simple as a smart person might imagine.
But by talking to them you can tell. It doesn’t matter if they made a ton of money selling real estate or whatever or have lovely personality traits or… let me know if I’m missing something. You can still tell by talking to them, because the structure and detail of a smarter person’s thought process is impossible to fake*. If you are similarly smart you can mirror their structure in your head, but if you are not you will just think they are saying something weird or confusing. Whereas there is nothing stopping a smarter person from simplifying their thought process when communicating, or filtering out thoughts they don’t think will be understood by the listener. Extremely smart people can get very good at this if they are well socialized.
It's funny to imagine that this is the reason why the "aliens invading us" or the "AI taking over" are finally defeated at the end of a movie by a really stupid trick.
Yeah no I totally agree. I feel like I have a strong sense of a person's intelligence and their psychological capacity/abilities. I just passively look for it or analyze it in my interactions with them. But, if I don't myself have a grasp of the subtle abstract layers of complexity "above" a certain level, I can't evaluate another person's strengths in those areas, so I can't sense where they sit compared to others (or myself)!
I also think the more you know about things, the more you can see how well other people have integrated those things into their own psyche and how they employ those things, if that makes sense. Two people might both know a certain physics principle but one may elicit a far deeper and insightful employment of that knowledge than the other, even in casual situations.
Always thought of this as two cars driving faster than you on the road. After a certain distance it's clear both are faster than you, but really hard to say which one is the fastest.
>I also wondered for many years why most of them didn't quit their jobs when on paper they would have been able to do so, but work is not a great place to ask those sorts of questions.
Because they're smart enough to know that neither money nor leisure is the be-all and end-all...
Are we talking of steel-cut oats here? The glycemic index for steel-cut oats is moderate. Instant oats, on the other hand, raise your blood glucose very rapidly.
> Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why programmers irresponsibly write memory unsafe code given it has a global impact.
I used to be a code monkey, I wrote systems software at megacorps, and still can't understand why so many programmers irresponsibly write memory unsafe code given it has a global impact.
That's the analogy working as intended: the answer to "why do programmers still write memory-unsafe code" is the same shape as "why do microplastics researchers still wear gloves." The real answer is boring and full of tradeoffs. The HN thread version skips to indignation: "they never thought of contamination so ipso facto all the research is suspect"
(to go a bit further, in case it's confusing: both you and I agree on "why do people opt in to memunsafe code in 2026? There’s no reason to" - yet we also understand why Linux/Android/Windows/macOS/ffmpeg/ls aren't 100% $INSERT_MEM_SAFE_LANGUAGE yet, and in fact, most new code written for them is memunsafe)
The implication is that if you spent 30yrs as an ambulance driver, followed by 10 years working retail, the death certificate will say "ambulance driver."