A point the authors didn't make is that Common Lisp compilers are quite fast compared to, e.g., C++ compilers. So even in the rare cases where you do need to recompile everything, the cycle time is short.
CCL's compiler is lightning-fast. I can recompile an entire system in CCL almost as fast as I can load the compiled object code. SBCL's compiler is slower (while often generating faster code because it does more work at compile time), but it's still much faster than a typical C++ compiler.
It does, in the form of a sane macro system with the full language available for use at compilation time, something that C++ never had and still doesn't have. No, consteval and constexpr aren't equivalent. No, CPP is not a sane macro system. No, don't even get me started about templates.
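For instance (a minimal sketch; the macro name and table are invented for illustration), the body below runs ordinary Lisp at compile time to precompute a lookup table, which is exactly the "full language at compilation time" point:

    (defmacro define-sin-table (name size)
      ;; SIZE must be a literal number here: the table is built while
      ;; the macro expands, i.e. at compile time, using plain Lisp.
      `(defparameter ,name
         ,(let ((table (make-array size :element-type 'single-float)))
            (dotimes (i size table)
              (setf (aref table i)
                    (float (sin (/ (* 2 pi i) size)) 1.0f0))))))

    (define-sin-table *sin-256* 256)  ; table baked in when this form compiles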
Yes. That's a consequence of language design - unboxed arrays of structs mean no object identity, which means that operators like SETF and EQ no longer have their invariants satisfied when performing assignment to such arrays.
Sure, there may be reasons for it, but it means you can’t really build zero cost abstractions. For example, you can’t make a simple 2D vector object with the standard operations defined over it and then store those vectors in flat arrays. This is something that can trivially be done in C++, Rust, etc.
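To make the contrast concrete, here's a sketch of what you get by default in CL (names invented):

    (defstruct vec2
      (x 0.0f0 :type single-float)
      (y 0.0f0 :type single-float))

    ;; The element type upgrades to T, so this is an array of pointers
    ;; to ten separately heap-allocated (boxed) VEC2 objects, not one
    ;; flat block of twenty floats.
    (defvar *vecs*
      (make-array 10 :element-type 'vec2
                     :initial-contents (loop repeat 10 collect (make-vec2))))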
Probably the best workaround for this is going into foreign-memory land: grab some raw memory, define operators over it, and manage all of that yourself. CFFI works across enough implementations for this to be feasible.
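Roughly like this, as a minimal CFFI sketch (POINT is a made-up struct; error handling is elided):

    (cffi:defcstruct point
      (x :float)
      (y :float))

    ;; One contiguous block of 10 unboxed POINT structs, manually managed.
    (defvar *points* (cffi:foreign-alloc '(:struct point) :count 10))

    ;; Read/write the 3rd struct in place:
    (let ((p (cffi:mem-aptr *points* '(:struct point) 3)))
      (setf (cffi:foreign-slot-value p '(:struct point) 'x) 1.5f0))

    ;; ... and (cffi:foreign-free *points*) when done.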
Yeah. Common Lisp has quirks and you need to adapt your abstractions to those. Sometimes it's easy, sometimes it's rewarding, sometimes it's just annoying.
This week I'm scaling back some abstractions, writing more Fortran-like code on specialized arrays than individual objects, for the sake of zero cost.
I appreciate a little bit of a headwind against inventing new abstractions too casually. But it does remind me of programming in C or Forth. That's not everyone's cup of tea.
This one isn’t a quirk, it’s a fundamental constraint on the kind of code that you can write in CL. C++, Rust, Go (and even Java to some extent, with all the wizardry in Hotspot) all allow you to build a 2D vector abstraction without requiring you to box your vectors. In CL you simply can’t do this. Just look at the kind of bonkers workarounds my comment gave rise to: https://news.ycombinator.com/item?id=35855576
I don't really see the problem. If you want to define the bit layout of your objects then you define them using FFI. Support for FFI objects is comprehensive. It is exactly the tool for the job in Common Lisp.
Same for LuaJIT. I've spent years happily writing high-level Lua code that's actually operating on objects whose bit layout is explicitly defined using FFI at the C level of abstraction. It doesn't feel much different to objects or dictionaries to me.
Sure, it is great that those other languages have native support for inlining storage of structs into various containers, but the lack of such in Common Lisp only makes me write quirky code and doesn't really hold me back.
It’s not a problem if you don’t need zero cost abstractions (which indeed you may not, depending on your domain). But if you do, then using the FFI to define unboxed arrays of a simple 2D point class is considerably less attractive than writing
struct Point { float x, y; };
Point points[10];
If you really can’t see this then I think we’re at an impasse.
It would be useful to have a more abstract, portable form of this (vaguely analogous to WebAssembly): a way to write code for a low level virtual machine that translates to native code.
Nothing in your link suggests a practical way of constructing unboxed arrays of structs in CL. And even if it did, it would presumably be one that worked only on a specific architecture.
The C++ that we have can only be used as an external tool: we write a character-level program into a pipe or file, which is read by an external program. This drops an object file that we have to process. I'm saying that we could have some Algol-like sublanguage with value semantics, and unboxed types. It could output code for a virtual machine, which could be further translated to native code. It would all be in Lisp, not requiring any external tools unrelated to Lisp to be installed.
The source file I linked to shows how you can write native code without leaving Lisp. In that native code, you can move the stack to allocate an unboxed array, and whatever else. But from that it's not a huge leap into having a similar thing but at a higher, and machine-independent level.
That's all pie in the sky isn't it? I was talking about CL. The only practical suggestion I can take from this is that one could drop down to inline assembly in some CL implementations. It's unclear to me how that would help with the problem of defining unboxed arrays of structs.
I am not sure what you mean. If I want to use unboxed arrays of structs, I can easily do so via FFI and create some sort of DSL for working with them (sketched below). If this sounds like too much work for you, then yes, we are at an impasse. I think CL is great in this way, in that writing FFIs is pretty natural. Also, if you need performant code, why would you be too worried about portability across implementations? Maybe I am missing something.
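To sketch what I mean by a DSL (hypothetical names, building on a CFFI struct like the one upthread):

    (cffi:defcstruct point (x :float) (y :float))

    ;; Thin, inlinable accessors so call sites read like ordinary
    ;; struct access while writing straight into foreign memory.
    (declaim (inline point-x (setf point-x)))

    (defun point-x (points i)
      (cffi:foreign-slot-value (cffi:mem-aptr points '(:struct point) i)
                               '(:struct point) 'x))

    (defun (setf point-x) (new points i)
      (setf (cffi:foreign-slot-value (cffi:mem-aptr points '(:struct point) i)
                                     '(:struct point) 'x)
            new))

    ;; Usage: (setf (point-x pts 3) 1.0f0)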
Anyway, in Haskell you can use unboxed arrays, but without many of the high-level benefits of the rest of the language. I fail to see how this is fundamentally different.
> If this sounds like too much work for you, then yes, we are at an impasse
It's not necessarily too much work (that depends on the context), but it is more work.
I am not sure why there is so much reluctance to acknowledge this one particular disadvantage of Common Lisp as compared to e.g. C++, Rust, etc. Common Lisp is not a perfect programming language. It does not need to be reflexively defended against all potential criticisms.
Please note that none of my posts says "Common Lisp sucks", or "you should use Language X instead of Common Lisp", or "the lack of proper support for unboxed arrays outweighs all the potential advantages of Common Lisp in all circumstances".
Not sure why you bring up Haskell, but yes, the same criticism applies to Haskell as well. The difference is really just a cultural one: even the most fanatical Haskell advocates would probably acknowledge that it's not a great language to use if you need fine control over memory layout.
I was more confused about the point you were making, more as an educational piece for myself, in the sense that maybe there is something I'm missing in my knowledge; you seem to know what you are talking about. That said, as far as I am concerned, you made your point clear in this reply. Common Lisp is primarily a high-level language, albeit one with really good low-level capabilities. However, for tasks that require fine-grained stack memory control, or even abstractions over that, it would be hard to see how it could compete with the C family. I think this is perfectly fine.
I brought up Haskell as an example of a high-level language that does unboxing of arrays, but also because your handle [0] suggested to me that you know quite a bit about Haskell and will be able to inform me accordingly :)
For what it's worth, I personally think that it is a very good thing for a programmer to have thorough knowledge of both high- and low-level languages, and a great thing if they are able to combine them. For me, Lisp fits the latter purpose, but that's not important for everyone, and people are definitely free to dislike Lisp.
I wouldn't go that far, personally. It's convenient to be able to have unboxed arrays of arbitrary non-primitive datatypes, but you can work around it.
Given that I use Common Lisp for numerical computations, including ML, I hope not :) Given just that Python reigns supreme in this field, the clear answer is no. However, it is worth keeping in mind that Common Lisp is a high-level language with very good low-level features. You can, for example, use C data structures seamlessly if you need fine-grained memory control, or write inline assembly.
I'll grant you that's a kludge compared with your example. It wouldn't hold me back though. And I wouldn't consider trading in my lovely late-bound programming environment for an issue of this magnitude.
A design decision which needs to be made is at what level of abstraction should the data be "fixed", i.e. not to be manipulated with the full power of CL. This is often a flexibility vs. efficiency tradeoff.
In my 3D CL system [0], I have so far kept all geometric data as naive CLOS classes as the intent of the system is to provide a sandbox for experimentation. I have thought of, perhaps one day, representing the geometry as a foreign library for efficient passing to the GPU.
Sure, taken separately that looks more elegant. But, provided your whole program is complex enough, I think that if you take a step back, your language of choice is going to look like a pigsty compared to the same thing written in Common Lisp.
I think the amazing thing about Common Lisp as a high-level language is that it can be a low-level language too. IMO it is an unmatched balance of a high- and low-level language.
Are there any examples of elegant CL code that performs lots of vector geometry calculations? I'm skeptical that this sort of code would come out elegantly in CL (at least if it had to perform reasonably well).
This isn't a direct answer to your question (it's not a computational geometry codebase), but one option for implementing geometry might be to use MAGICL. [1] It is optionally and transparently accelerated by BLAS/LAPACK.
That isn't what Bjarne means by zero cost abstractions.
It means that an abstraction compiles to the same code as writing the equivalent by hand without the abstraction, e.g. having a class with virtual methods versus having a struct with function pointers as fields.
I interpret the term compositionally. But CL doesn't have zero cost abstractions in that sense either.
Take the example I mentioned. Say that you’re looping through an array of pairs of 2D vectors and calculating the dot product of each pair. In C++ you can use your 2D vector class without any additional cost. In CL you either need to remove that abstraction (and deal with flat arrays of scalars) or incur the cost of boxing.
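For concreteness, the de-abstracted CL version might look like this (a sketch with an invented layout: each pair is four consecutive floats, ux uy vx vy):

    (defun dot-pairs (pairs out)
      ;; PAIRS holds 4 floats per pair; OUT receives one dot product each.
      (declare (type (simple-array single-float (*)) pairs out)
               (optimize (speed 3) (safety 0)))
      (loop for i of-type fixnum below (length out)
            for base of-type fixnum = (* 4 i)
            do (setf (aref out i)
                     (+ (* (aref pairs base)       (aref pairs (+ base 2)))
                        (* (aref pairs (+ base 1)) (aref pairs (+ base 3))))))
      out)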
More generally, if every object with its own methods requires a boxed representation, then that severely limits the range of zero cost abstractions that you can create. If using the abstraction requires boxing then it’s not zero cost. (If Bjarne disagrees on that point, then I disagree with him!)
Anyway, I’m sure you know all this, so I’m not really sure what point you’re trying to make here. I don’t think anyone would suggest that CL is a good language for building zero cost abstractions, whatever the precise definition of the term.
I am interested though: how would you define an unboxed array of structs in Genera's dialect of Lisp?
Your quote clearly applies to my example. You can avoid the boxing by hand coding the dot product computation over flat arrays; you can't avoid the boxing if you use a 2D vector abstraction (in CL).
I don't think so? Judging by the documentation, that's just the usual option (also available in CL) to have struct fields stored in an array, list, name/value pair list, etc. There's no suggestion that it would accomplish unboxing in general. I guess it is possible that if each field of the struct had the same type (say, a float), then the backing storage for the struct would be an unboxed array of floats, and an array of such structs would then come out with its backing storage as an unboxed multidimensional array of floats (see the sketch after the list below). But:
* I'd really want to see this actually working to believe it. The documentation doesn't make clear what would happen in this circumstance. (Time to spin up OpenGenera? Ha!)
* At best this works for structures with homogeneous field types. My example of a 2D vector happens to fit that criterion, but it is also useful to have unboxed arrays of structures with heterogeneous field types.
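For reference, the closest standard CL gets is a struct whose own backing storage is a specialized vector; whether that vector is actually unboxed depends on the implementation's upgraded array element types, and an array of such structs is still an array of separate vectors, not one flat block:

    (defstruct (point (:type (vector single-float)))
      (x 0.0f0 :type single-float)
      (y 0.0f0 :type single-float))

    ;; Each POINT is just a (VECTOR SINGLE-FLOAT 2); in SBCL that is
    ;; unboxed storage, but ten POINTs are still ten separate vectors.
    (point-x (make-point :x 1.0f0 :y 2.0f0))  ; => 1.0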
Accessors take care of reading/writing from the backing storage mechanism.
If you want to be really picky, don't forget that the whole OS was written in Lisp, and even the C, Pascal and Ada compilers targeted it, and they mostly follow the same semantics. The same folder has the manuals for C and Pascal.
And if you want to be really sure how it goes, there are the low-level forms for assembly-like coding, e.g. sys:art-q, which is used to pack C structures into arrays.
(:type :array) is documented as the default. So there must be something more required to create an unboxed array of structs than just that.
If, for whatever reason, it is important to you to persuade the internet that the Lisp dialect of a long-defunct operating system was able to define unboxed arrays of structs, then I think you should at least show example code demonstrating this. The more salient point, however, is that Common Lisp can't do this.
> Say that you’re looping through an array of pairs of 2D vectors and calculating the dot product of each pair. In C++ you can use your 2D vector class without any additional cost. In CL you either need to remove that abstraction (and deal with flat arrays of scalars) or incur the cost of boxing.
Or maybe use compiler macros to remove the abstraction without the cost of boxing?
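Something along these lines, as a sketch (names invented; implementations are free to ignore compiler macros, so this is an optimization hint, not a guarantee):

    (defstruct (vec2 (:constructor vec2 (x y)))
      (x 0.0f0 :type single-float)
      (y 0.0f0 :type single-float))

    (defun dot (u v)
      (+ (* (vec2-x u) (vec2-x v))
         (* (vec2-y u) (vec2-y v))))

    ;; When both arguments are literal (VEC2 ...) forms, expand straight
    ;; to float arithmetic and never allocate the boxed structs at all.
    (define-compiler-macro dot (&whole form u v)
      (if (and (consp u) (eq (car u) 'vec2)
               (consp v) (eq (car v) 'vec2))
          (destructuring-bind (ux uy) (rest u)
            (destructuring-bind (vx vy) (rest v)
              `(+ (* ,ux ,vx) (* ,uy ,vy))))
          form))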
Here is the relevant paragraph from Stroustrup, B (2013): The C++ programming language, 4th edition. Pearson Education, page 10:
"What you don’t use you don’t pay for. If programmers can hand-write reasonable code to
simulate a language feature or a fundamental abstraction and provide even slightly better
performance, someone will do so, and many will imitate. Therefore, a language feature and
a fundamental abstraction must be designed not to waste a single byte or a single processor
cycle compared to equivalent alternatives. This is known as the zero-overhead principle."
I don't know who came up with "zero cost" abstractions, but the name is wrong, since there is no such thing as zero cost. For the people chanting "zero cost", the cost might just not be obvious.
Not quite to the degree that C++ does, but it's pretty good. There are several places in the standard that specifically allow for making a dynamism/performance tradeoff.
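The canonical example of those knobs, standard OPTIMIZE qualities plus type declarations:

    (defun sum-floats (v)
      ;; Trade dynamism for speed: promise the types, relax safety.
      (declare (type (simple-array double-float (*)) v)
               (optimize (speed 3) (safety 1)))
      (let ((acc 0d0))
        (declare (type double-float acc))
        (dotimes (i (length v) acc)
          (incf acc (aref v i)))))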