Much praise!! These guys have incredibly good taste. Almost every single thing I can think of that I want in a programming language, they have it. All in the one language!
The fact that it has parametric types, parametric polymorphism, macros, performance almost as good as C, good C/Fortran interop, 64 bit integers and an interactive REPL all in the one language just blows my mind.
I wasn't able to tell if it is possible to overload operators, which is another thing essential to mathematical code.
I was also unsure why the keyword end was needed at the end of code blocks. It seems that indentation could take care of that.
I also didn't see bignums as a default type (though you can use an external library to get them).
However, all in all, I think this is the first 21st Century language and find it very exciting!
Thanks for the incredibly high praise. It definitely is possible to overload operators — operators are just functions with special syntax (see http://julialang.org/manual/functions/#Operators+Are+Functio...). We don't have bignum support, but adding it via GMP would be fairly easy. Just too many things to do!
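For example, here is a minimal sketch of what operator overloading looks like (the Vec2 type is illustrative, not part of Julia, and the Base.:+ method syntax is current Julia, which differs slightly from the language as it was then):

```julia
# Illustrative 2-D vector type (not part of Julia's standard library).
struct Vec2
    x::Float64
    y::Float64
end

# Overloading + just means adding another method to the function `+`.
Base.:+(a::Vec2, b::Vec2) = Vec2(a.x + b.x, a.y + b.y)

# Comparison operators are overloaded the same way.
Base.:(==)(a::Vec2, b::Vec2) = a.x == b.x && a.y == b.y

v = Vec2(1.0, 2.0) + Vec2(3.0, 4.0)
println(v)   # Vec2(4.0, 6.0)
```

Because operators are ordinary generic functions, user-defined methods dispatch exactly like the built-in ones do.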
If you happened to feel like trying to port it to Julia, it would be fairly doable. All the low-level functionality is there to allow it — bit twiddling operations, growable dense arrays of integers, etc.
Unfortunately the performance would be poor. This is not a reflection on Julia. Even compiled C is between 4 and 12 times slower than assembly for some bignum operations.
Also, LLVM handles carries and certain loop optimisations poorly, so even hand-writing LLVM IR you can't do much better than compiled C. It would be a massive project to improve this in LLVM (I thought about giving it a go some time ago but decided it was overwhelming), and that use case is probably too specialised for the improvements to help with much else. Obviously the LLVM backend is fantastic for 99% of use cases and improving all the time.
N.B. I am not implying that a good assembly programmer is generally faster than a C compiler. Bignums are a very special case.
That makes a whole lot of sense. For myself, I wouldn't even attempt this because it's so much harder and more time-consuming to try to reimplement something like bignums efficiently than it is to just use a stable, mature and fast external library like GMP or CLN — or something like your project if BSD/MIT licensing is a must.
It's been done. I think it was Gambit Scheme that had its own bignum library. And for a while, its very large integer arithmetic was reportedly faster than GMP's, which if you know anything about GMP is quite an achievement. However, the GMP guys subsequently fixed this problem.
The language syntax seemed uninteresting, which is not really a bad thing, but what's the case for a whole 'nother language in that category?
I don't think Lua has integers; like JavaScript, everything is a double. AFAIK that can be changed by redefining a macro and recompiling, but it's still one global numeric type (and you can't change it for LuaJIT).
No; there are definitely some major differences between the two languages, but they also overlap quite a bit. Julia's obviously made for scientific work, and Lua's a general-purpose scripting language designed to be embedded in a host application, but they seem to have a lot of design constructs in common.
It would certainly be nice if there were an option to use 0-based indices in blocks of code. It's understandable, in that they are pitching at the technical community, and many mathematical papers and books are written with 1-based indices. But I am a mathematician who prefers 0-based indices.
Actually, it is quite easy to implement 0-based indices, or any other indexing scheme, in Julia, since all the array indexing code is implemented in Julia itself.
I personally would find multiple indexing schemes confusing, both to use and to develop and maintain. Given that 1-based indexing is a popular choice among many similar languages, we just went ahead with that.
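As a sketch of the claim that an alternative indexing scheme can be implemented entirely in user code (the ZeroBased wrapper and its API are hypothetical, written in current Julia):

```julia
# Hypothetical 0-based view over an ordinary 1-based Vector.
struct ZeroBased{T}
    data::Vector{T}
end

# Shift each index by one before touching the underlying 1-based storage.
Base.getindex(a::ZeroBased, i::Integer) = a.data[i + 1]
Base.setindex!(a::ZeroBased, v, i::Integer) = (a.data[i + 1] = v)
Base.length(a::ZeroBased) = length(a.data)

z = ZeroBased([10, 20, 30])
println(z[0])     # 10 -- the first element, 0-based
z[2] = 99
println(z.data)   # [10, 20, 99]
```

Since getindex and setindex! are ordinary functions, nothing in the wrapper requires compiler support; any origin (or per-dimension origins) could be handled the same way.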
It generalizes the choice of 0 or 1 to an arbitrary starting index: when you create an array you specify not just where it ends but also where it begins. This lets you do neat things (consider a filter kernel with range [-s,+s]^n instead of [1,2s]^n), and the extra complexity it adds can be hidden when not needed using for-statements or higher-order functions.
Nobody uses it because the implementation is not very efficient and Haskellers have a chip on their shoulder about performance. It subtracts the origin and computes strides on every index, but you could easily avoid this by storing the pre-subtracted base pointer and strides with the array. Of course, when you go to implement it you'll see the light on 0-based indexing :)
A number of programming languages allow an arbitrary range of indices for an array, including Ada, Fortran 77, and Pascal. See the "Specifiable Base Index" column in this table at Wikipedia: http://en.wikipedia.org/wiki/Comparison_of_programming_langu...
Actually, the main reason I want to do away with Matlab and Octave is that I can't stand the 1-indexing! When voicing that opinion among colleagues, I have heard no one disagree with me. If you are actually stuck with this in Julia as well, I don't think I will have anything more to do with it.
Actually, broadly speaking, I think math (think summation etc.) is usually 1-based while programming is 0-based, presumably because a 0-based index is just an offset from the array's base address, so index 0 points at the first element.
At first we were going to use 0-based indexing, but it made porting any Matlab code over very hard, which defeats a large part of the purpose of having Matlab-like syntax in the first place — to leverage the large amount of Matlab code and expertise that exists out there.
However, as I've used it more and more, 1-based indexing has really grown on me. I feel like I make far fewer off-by-one errors and actually hardly ever have to think about them. This has led me to conclude that 1-based indexing is probably easier for humans while 0-based indexing is clearly easier for computers.
Many divide-and-conquer algorithms seem to be easier to express with 0 based indexing, whereas quite a few array operations seem to be better with 1 based indexing. I can certainly understand and appreciate the different points of view, I just personally always think about algorithms with 0-based arrays.
As does anyone trained in the C tradition, but it's annoying, too, to have to translate 1-based math formulas to the C convention. Having recently used Octave for the Stanford online ML class after a couple decades of C, C++ and Java, I doubt programmers will have trouble with the mental transition.
Well, for what it's worth, I certainly also "grew up" with 0-based indexing (literally grew up, since I was a kid when I learned Pascal). I'm just saying that 1-based indexing has really grown on me and that I find myself thinking about off-by-one errors far less often when using it. There are other times when I really wish I were using 0-based indexing. However, I find that the latter tend to be when I'm writing libraryish internals code, whereas the former are more common when I'm writing high-level userish code.