
> Q9: Do I have to use pronouns I’m unfamiliar or uncomfortable with (e.g., neopronouns like xe, zir, ne...)?

> Yes, if those are stated by the individual.

Yikes.

I’m 100% in support of all gender identities, sexual orientations, races, etc., and I’ll fight for everyone’s rights and well-being, but this is insanity.

It’s one thing to restrict offensive language, but it’s a whole different beast to mandate specific language.

Nope. Nope. Nope. I’m not doing it.

Sorry not sorry.


Not doing what? Being 100% supportive as stated and addressing people respectfully? Seems strange to express displeasure with a corporate policy about a website by being disrespectful to unrelated individuals.


> Seems strange to express displeasure with a corporate policy about a website by being disrespectful to unrelated individuals.

What are you even going on about? Whom have I disrespected?

Allow me to repeat myself: It’s one thing to restrict offensive language, but it’s a whole different beast to mandate specific language.

If, e.g., you tell me not to refer to a certain group of folks using a particular racially offensive term, that’s fine and pretty reasonable. But if you tell me that I must refer to a certain group of folks using a term that you or they chose, simply because you or they said so, then as far as I’m concerned, we’re slowly walking ourselves into a dictatorship. Under the first option, I can simply do nothing and avoid being a “bad” person; under the second, I’m implicitly a “bad” person unless I explicitly do as told (in this case, as ordered).

There’s no way I could ever accept that. Nope. No way.


How do you feel about using someone's name?


I think the argument boils down to fundamentally restructuring the syntax of our language to the point where pronouns are now individual identity-level constructs. I won't take a position, but I will say that this restructuring is expensive and confuses the hell out of me.


You might want to get your site back up: https://news.ycombinator.com/item?id=21252544.


LOL thanks, nice blip of traffic there pushing my little VM to its limits.


Much needed perspective in the age of overly dramatic headlines and straight up misinformation.


> as fast as C

I mean, it actually is C, since the code is transpiled to C.


The fact that it uses C as a compilation target doesn't have much to do with whether it's as fast as C in practice. It's easy to imagine a compiler that generates C but generates terrible C that runs really slowly. (E.g., imagine it goes via some sort of stack-machine intermediate representation, and variables in the source language turn into things like stack[25] in the compiled code.)
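To make that concrete, here's a minimal sketch (the slot numbers, the operand stack, and every name in it are invented for illustration) of the kind of C such a naive compiler might emit for a single statement like c = a + b:

    /* Hypothetical output of a naive stack-machine-to-C compiler for
       "c = a + b". It is perfectly valid C, but every variable access
       becomes an array indirection that hand-written C wouldn't have. */
    #include <stdio.h>

    long slots[64];    /* all source-level variables live in numbered slots */
    long stack[256];   /* simulated operand stack */
    int  sp = 0;

    int main(void) {
        slots[25] = 2;                              /* a */
        slots[26] = 3;                              /* b */

        stack[sp++] = slots[25];                    /* PUSH a */
        stack[sp++] = slots[26];                    /* PUSH b */
        sp -= 1;                                    /* ADD: pop two, push sum */
        stack[sp - 1] = stack[sp - 1] + stack[sp];
        slots[27] = stack[--sp];                    /* STORE c */

        printf("%ld\n", slots[27]);                 /* prints 5 */
        return 0;
    }

A C compiler may or may not see through all that indirection; either way, "it ends up as C" tells you nothing about speed.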

Or consider: Every native-compiled language ends up as machine code, which is equivalent to assembly language, but for most (perhaps all) it would be grossly misleading to say "as fast as assembler".


In fact, you can transpile Python to C with Cython. That typically gets one a speed boost, but only a modest one. You still have most of the memory allocation/deallocation overhead of Python objects getting created and destroyed, and all the work needed to keep attributes tracked, so a straight C version with no focus towards optimization would likely outperform it greatly.

(A neat tool in one's toolbox, of course. But just transpiling to C does not get one as fast as C.)
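To illustrate (a hand-simplified sketch, not actual Cython output; the function names are mine), the transpiled path still routes even a single addition through the CPython object machinery, while the straight-C version is a single machine instruction:

    #include <Python.h>  /* the CPython C API that Cython-generated code calls into */

    /* Roughly what untyped "a + b" still costs after transpiling:
       a heap-allocated result, an error path, refcount bookkeeping. */
    static PyObject *boxed_sum(PyObject *a, PyObject *b) {
        return PyNumber_Add(a, b);  /* new object, or NULL on error; caller must Py_DECREF */
    }

    /* The equivalent in straight C. */
    static long plain_sum(long a, long b) {
        return a + b;
    }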


I’m not sure that I understand your argument. If the code from which the resulting machine code is compiled is C, then it’s objectively “as fast as C” … because, at the end of the day, it actually is C. Being “as fast as C” means that your resulting program will perform as fast as a C compiler [worth its salt] can get you.

Your comparison to machine code (or human-readable assembly code) is less useful in that such a statement means very little until one knows how said machine code is being produced (e.g., manually, from an IR, etc.).


"As fast as C" would commonly be interpreted as "a program written in it will be as fast as a well-written C equivalent", not as "there is a C program with the same performance characteristics".

That a language is compiled to C does not mean that its compiler is going to be able to produce a C program that’s as good as that well-written C equivalent. (A relatively obvious example would be a compiler that introduces a heavy runtime and doesn’t give the C compiler enough information to get rid of the runtime.)

It's the same with assembly code: that a compiler produces assembly does not mean the resulting program is fast.


> "As fast as C" would commonly be interpreted as "a program written in it will be as fast as a well-written C equivalent"

That’s your interpretation, which is fine, but the objective meaning stands. Even the idea of “well-written C” is, in my experience, fairly subjective amongst C programmers.


Remember that we're comparing languages, not programs. A language can be thought of as the space of all programs that can be interpreted by that language. In any language, any algorithm can be implemented arbitrarily slowly. So the only meaningful point of comparison between language X and language Y is the upper bound on the performance of each language's program-space.

That some particular C program exists that is at least as slow as a program in some other language is always true, trivially, and so is not a good interpretation of "as fast as C" regardless of its objectivity.


You're missing the point. The fact that a language compiles down to C doesn't mean it compiles down to efficient C. At the simplest level, the compiled code could add a bunch of unnecessary function calls and pointers and other forms of indirection that wouldn't be present in hand-written C. But for a more extreme example, you could also compile an interpreter or VM to C, and it would still be much slower than the equivalent hand-written C code. This is why "as fast as C" typically refers to normal, hand-written C code—even though there is no formal definition for what "normal C" looks like, it's still a useful description.
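As a concrete toy example of the extreme case (invented opcodes, not any real VM): the interpreter below is 100% C, yet every "instruction" it executes pays for a loop iteration and a switch dispatch that hand-written C computing the same sum wouldn't:

    /* Toy bytecode interpreter, entirely in C. The per-instruction
       dispatch overhead is exactly what "compiles to C" fails to remove. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    int main(void) {
        int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        int stack[16], sp = 0, pc = 0;

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
            case OP_HALT:  return 0;
            }
        }
    }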


> The fact that a language compiles down to C doesn't mean it compiles down to efficient C.

Where do you see me claiming otherwise?

> At the simplest level, the compiled code could add a bunch of unnecessary function calls and pointers and other forms of indirection that wouldn't be present in hand-written C.

Again, why are you telling me this? Please quote where I claimed otherwise.

> But for a more extreme example, you could also compile an interpreter or VM to C, and it would still be much slower than the equivalent hand-written C code.

The more I read your response, the more it seems that you’re debating yourself, because I’m not sure why you’re telling me this. You started your response by telling me that I’m “missing the point” when, in reality, you seem to have not even read my point. My main point was the following:

> If the code from which the resulting machine code is compiled is C, then it’s objectively “as fast as C” […] your resulting program will perform as fast as a C compiler [worth its salt] can get you.

This is true. I made no claims re efficiency; “as fast as C” and “as fast as efficient hand-written C” aren’t interchangeable claims. Forgive me for not assuming efficiency, because I’ve seen a good amount of inefficient hand-written C code in my years.

> This is why "as fast as C" typically refers to normal, hand-written C code—even though there is no formal definition for what "normal C" looks like, it's still a useful description.

Says who though? I’m professionally experienced in C, and as is very clear by this discussion, it’s down to individual interpretations.


But that's not at all a useful way to describe a language. Why would someone ever describe a language as "as fast as C" and not mean "as fast as typical hand-written C"? What would be the purpose of that? With your interpretation, CPython is "as fast as C", since it's written in C, and yet that's not a claim anyone would actually make.


Suppose I claim that some programming language achieves performance "as fast as C". Then, unless I very clearly and explicitly say otherwise, it will be assumed that if I write kinda-C-like code in that language I will get performance similar to comparable code actually written in C.

But that doesn't at all follow from compiling to C. I gave one example above; here's another. Perhaps my programming language has arbitrary-length integers and the C code it produces for a simple loop looks something like this:

    bignum_t i = bignum_from_int(0);
    bignum_t x = bignum_from_int(0);
    bignum_t limit = bignum_from_int(1000000);
    while (bignum_lt(i, limit)) {                      /* compare via library call, not "<" */
      bignum_add_in_place(x, bignum_bitand(i, bignum_from_int(1)));
      bignum_add_in_place(i, bignum_from_int(1));      /* the increment the loop needs */
    }
Corresponding code in "normal" C would use machine integers and the compiler might well be able to understand the loop well enough to eliminate it altogether. Code like the above would run much, much slower, despite being technically in C.
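For reference, a sketch of that "normal" counterpart:

    /* Machine integers, no allocation; an optimizer can reduce this
       whole loop to a constant. */
    long x = 0;
    for (long i = 0; i < 1000000; i++)
        x += i & 1;

Both are C, but only one of them is "as fast as C" in the sense people mean.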


JVM, CLR, Python bytecode or Lua JIT code are ultimately all transformed into machine code but those are not as fast as assembler.

Being as fast as C is not about using C as an intermediate language but about having data structures and control flow that resemble what C compilers have been optimized for.

Case in point: Haskell's GHC is capable of outputting C code, but the algorithm will not get C performance that way (nor even the performance Haskell gets without this intermediate C representation).


Nitpick, but it actually compiles down to C. Nim works at a higher abstraction and the compilation is a one-way street. But what is more important is that it generates efficient C code: it looks ugly, and it's not something you would ever dream of writing yourself, but it's been optimised to give fast run-times. Oftentimes in benchmarks the Nim code with optimisations comes out as fast as the C code with optimisations, in some cases even beating it.


Since you seem to know how this works -- I hope you won't mind if I ask you a slightly on-topic question about this...

I've been trying to find out if I can take the generated C code that Nim produces and, for example, compile it on some exotic architecture (say an ancient Solaris/SPARC system or some AIX/POWER thing, or some MIPS microcontroller with Linux); however, I can't find any examples of people doing this...

Is it possible? Or should I abandon hope and continue writing C for these platforms? :}


Are you sure your OS/CPU is not on this list? :) https://github.com/nim-lang/Nim/blob/devel/lib/system/platfo... And yes, the compiler works on almost all of them (you can see which ones are precompiled in csources: https://github.com/nim-lang/csources/blob/master/build.sh#L7... )

And for CPUs - https://github.com/nim-lang/csources/blob/master/build.sh#L1...


Yeah you can get to the generated C. See this SO question.[0]

Give it a try :)

[0]https://stackoverflow.com/questions/29956898/how-do-i-get-th...


As the others have said it should be possible. Nim is already pretty good at cross-compiling, but it is also able to just spit out the C files it generates and allow you to play with those.


Yes, Nim can run on all sorts of architectures including AVR microcontrollers, MIPS, RISC-V.


What you said here was literally my point. Maybe you misunderstood me?


I think the point is the terminology: Nim doesn't transpile, it compiles to C.


"Transpile" is such a nonsense term that I don't think it's useful to split hairs here.


Those are valid questions when evaluating an unknown technology. How could anyone consider this trolling? Not to digress but have we become too sensitive?


Sometimes people troll by sealioning: http://wondermark.com/c/2014-09-19-1062sea.png

(I'm not accusing the grandparent post.)


I've seen that comic before but have no idea what it's trying to express, could you explain? Is it just when you repeatedly pester someone with questions to annoy them?


Unfortunately, asking questions on the internet is often seen as "opposing" or "arguing against" the thing you're asking about. So many qualify their statements to avoid this sort of misreading.


Which just makes things worse. Having to qualify everything by default is a whole lot of foolishness.


It all depends on the tone of the question, which is hard to discern in writing, so the clarification is helpful.

Edit: Yes, those are very valid and reasonable questions. Clarifying that it's not meant as trolling is also valid and reasonable, because writing is easily misunderstood in exactly that way - which is the reason smileys were invented. (And, to be clear, I'm not accusing anyone of missing smileys or of trolling. I am trying to express my agreement with both parent and grandparent.)


Asking a question in plain English like this shouldn't require further clarification.


Communication by writing is harder because you miss facial expressions. Because of this, a lot of it is subject to interpretation on the part of the reader. The tone can be inferred from a lengthy piece, but not so much from a small paragraph.

So, in order to avoid bias on the reader's part, I preferred to explicitly state that it was not trolling.


On the contrary, I feel that the stylistic [and sometimes semantic] separation of primitive and boxed types in languages (e.g., `byte` vs. `Byte` in Java) improves the developer experience, in that I can very quickly discern the type of value that I’m dealing with when reading the code.


In Java that difference matters a lot for performance: primitives are unboxed, objects are boxed. (Not as true now with auto boxing and escape analysis, but this was absolutely true in version 1.0.)

In C++ it can matter for correctness because primitives are uninitialized by default. But other types might be too and the standard library uses under_scores for things that are initialized on construction, so it's not a great example of this distinction.

Why do you care in other languages? In Rust for example I'm a little fuzzy on why I care if something is considered a primitive.


In this case it might be a 'documentation as code' thing: being able to see at a glance whether something is a language primitive or a potentially very different implementation could have value.

However, I'm not super familiar with Rust, so I can't speak to the "why" there.


To the creator:

I actually just went through the trouble of resetting my Product Hunt account (I haven’t been on there in a long time) just to give you that upvote!

Thank you for this! Cheers.


My popcorn got cold. This train-wreck is taking too long. The suspense is killing me.


Now, wave at your camera to the lovely NSA agent tracing you.


Yikes.

This is part of the reason why I prefer to never let any services/apps/etc. save my bank/card details. If there’s no option to save the card details that I can uncheck, nine times out of ten I reconsider the transaction.

The assumption here though is that the services/apps/etc. that do provide the option actually respect it; in other words, unless you use fake/virtual card details (not entirely reliable, in my experience), you can’t be 100% sure that you’re safe either way, which sucks.


Yes, that’s a significant catch, but this is some good work nonetheless.

That said, I’d hope that someone using a C library like this wouldn’t just start using it without understanding the potentially non-trivial consequences/side-effects of that decision.

