Hacker News

I fully agree and if you look at my other comments in this thread you will see the things that I think are done better and the advantages that I feel modern languages have.

In this case, age has something to do with it. There are far fewer people who know Fortran, far fewer people working on good Fortran libraries, and there are other languages that can do the same things Fortran can.

My issue is that Fortran can do the same things (we can argue about speed, but that's not really the language's fault), yet it has a drawback that other languages don't have. No one outside of very limited subsets of the software development community uses it.

I see that as a huge drawback. Maybe others don't, but I do.



The limited subset of the software dev community is the precise subset you are saying shouldn't use it...

People in my field don't use PHP, either. That doesn't make it a bad language for its purposes, but it's certainly a limited subset.

All the 20-somethings (including myself) in my research group use Fortran.


> No one outside of very limited subsets of the software development community uses it.

Well, sure, but there's nothing wrong with that, it doesn't mean the language is bad, and it certainly doesn't mean that it's wrong to use it.


I'm not saying the language is bad. I'm saying that is a drawback of using the language.

Python has taken off in the scientific community because of the huge amount of code written by non-scientific programmers. Fortran will never benefit from this level of community support.

Scientists aren't interested in abstracting problems, improving libraries, or software organization, but those things are important for having a productive, easy-to-use, and fast language.

Scientists are missing out on the benefits of community support for, in my opinion, no articulable or valid reason.

Some in this thread have said "It's because of number libraries" and others have said "It's because of speed". I think I have shown, and that cursory research will show, that neither is the case.

Fortran offers nothing that can't be achieved in C++, C, Rust, Julia, or Python. What it does have is a small following, and only they can appreciate your work.

To me writing a program in Fortran is like an English major setting out to write a modern day English dictionary but choosing to write it in 17th century Celtic English.

You can definitely do it, and it might even be easier for an expert to do it this way, but it will never reach beyond its audience. Anyone who wants to use your work will need to rewrite it or painfully wrap it. You'll also have perpetual trouble finding people who can work on your project in the future.


C/C++: no convenient array support and ugly, noob-unfriendly syntax. People hate C/C++ type declarations. And don't even mention raw pointers; I have seen firsthand how a bunch of clever C programmers can screw up trying to do arrays with them.

Rust: does it even have arbitrarily-indexed multidimensional arrays, again? Are there good compilers? What's the performance on loop-heavy and cache-heavy code? (C at least has this covered). What are the floating point semantics and how do they affect accuracy and optimization opportunities?

Julia: now that's just a Fortran of the 21st century. In a few decades bikeshedders will bitch about how it doesn't support the latest C++42 features, completely disregarding that understanding C++, especially code written by others, requires pretty much devoting one's life to it. Not to mention that Julia looks visually different from JavaScript and doesn't even run in the browser. What's this early 21st century Celtic relic even doing here? Don't they get that developers want to use the Web nowadays?

Python: by your own standards it's worse than C because it lacks explicit inlining. QED. Never mind that C doesn't have explicit inlining either :p (hint: read docs).


>Fortran offers nothing that C++, C, Rust, Julia, and Python have that can't be achieved using the respective language.

We are currently preparing to develop and program new classes of algorithms (statistics) for use with (upcoming) Intel MIC processors. I recently succeeded in implementing a customized synchronisation procedure based on atomics using Fortran 2008 coarrays. (In my opinion, customized synchronisation procedures allow one to enter a new level in parallel programming.) To make efficient and safe use of atomics, it was necessary to use several Fortran-specific programming techniques and tricks that, taken together, are hardly available in any other programming language, at least not easily. (To be honest, Intel ifort does not currently support all of them either; possibly a bug, I recently filed a request about this.) Also, with respect to our goal: C++ is too much OOP (runtime polymorphism), whereas C has too little support for objects to allow for highly efficient parallel programming (that's just my personal opinion).


> to achieve our goal: C++ is too much OOP (runtime polymorphism)

That's not a correct statement anymore. There are a few examples of why this statement doesn't hold true, but I think the biggest thing to point you to is this talk: https://www.youtube.com/watch?v=zBkNBP00wJE

With constexpr, const, and -O3 you're not going to beat C++ (or C) for any form of embedded-y systems development.

Also... Xeon Phis are supported by OpenCL [0] [1] [2]. This lets you use a number of languages and libraries to better handle your computational tasks [3].

Why use Fortran, C++, or C when Haskell has Accelerate or Python has ViennaCL [4]?

Benefit from the computer science world's work. Don't write your own MPI code that you'll spend 4 months debugging when higher levels of abstraction exist for the problems you want to solve. My best advice for you: either use OpenCL's compute language, which is very simple and lets you do very complex operations completely in parallel on (cheap!) commodity hardware like GPUs from 4 years ago, or learn just enough C++, because the same things are doable using C++ and OpenCL libraries.

It's not too object oriented. Layers of abstraction don't slow code down if they are correctly developed. C++ compilers and OpenCL have been correctly developed to give you frankly amazing performance for very high-level features.

[0] - https://software.intel.com/en-us/iocl_tec_opg [1] - https://software.intel.com/en-us/blogs/2012/11/12/introducin... [2] - http://www.techenablement.com/portable-performance-opencl-in... [3] - http://www.iwocl.org/resources/opencl-libraries-and-toolkits... [4] - http://viennacl.sourceforge.net/


> Benefit from the computer science world's work. (...)

Sorry, isn't this a copy and paste from elsewhere? I'd bet I've seen this paragraph some time ago elsewhere.

Nevertheless, our task is not just to write code that can be executed on the MIC. We must efficiently handle the large number of MIC cores on upcoming HPC systems. What makes the MIC so special is: (a) an increasing number of cores in the near future and, more importantly, (b) the fact that we can execute all our existing, highly optimized (vectorized) library code (C, Fortran 77, IMSL, NAG, or whatever) on the MIC cores. With that we face two main approaches for using such many-core hardware: first, we may use massively increasing amounts of data to make better predictions (Big Data, weather forecasting, ...); second, we may use the many cores to develop and program new algorithms that process small data in new ways. With an increasing number of MIC cores, evaluations based on ‘exact values’, as opposed to approximations, will become much more attractive. (Don't take the term ‘exact values’ too literally; in most cases these values are based on approximations as well.) That is an important option in those situations where approximations are not available (which is the case with the more sophisticated statistical methods).

With today's Fortran, or with PGAS in general, we see a development away from threading and towards mere (remote) data transfer (through PGAS memory). This allows us to write the most sophisticated parallel codes in a simple sequential syntax style and brings the amount of code required for parallel computing to a minimum. Sounds good so far? But the situation is not quite that simple: the (remote) data transfer must be synchronized, i.e. you can only consume the remotely transferred data after some synchronization point. The problem is, if the programmer wants to use the MIC cores efficiently, she/he will need to use some of the cores for distinct purposes and thus execute distinct parallel codes with different synchronization points on them.

This will almost certainly break with what is called ‘ordered execution segments’ (Fortran terminology). Ordered execution segments are a very severe limitation on parallel code execution (more precisely: on remote data transfer) that is due not to a restrictive programming language but to some extreme limitations of upcoming HPC hardware. With unordered (user-defined) segment ordering, the programmer loses nearly all the remote communication channels among the involved cores. Then the only way left to transmit values remotely is the atomic data types: (binary or integer) scalar values only. Until recently, I had not much hope of circumventing the limitations of such user-defined segment ordering using atomics, and thus not much hope of making efficient use of the MIC cores with current Fortran. We cannot yet offer professional-style programming techniques, but instead some simple, rudimentary Fortran techniques and ‘tricks’ to overcome the limitations of atomics: https://github.com/MichaelSiehl/Atomic_Subroutines--How_the_...

Anyone who complains or doubts about Fortran: try this with your favorite parallel programming language or tool. Cheers.


GPUs already beat MICs in core count, performance, price per watt, and initial cost.

Why worry about writing specialized vectorized code? Just by using OpenCL your code will work like that. That's what OpenCL does. You give it chunks of data, you give it a kernel to run, then you run it on your data.

Among extremely old AMD cards you can easily find 1280-"core" systems that can push GFLOPS of data through them.

OpenCL is the tool you should use for this, not a hodgepodge of messy Fortran code that's been "vectorized".

Don't do the compiler's job.


I think we are talking about different things. PGAS is intended to let you develop more sophisticated parallel LOGIC codes more easily, which is a requirement for handling upcoming MIC hardware. The logic code itself will play a major role in making efficient use of such hardware and in developing new algorithms for upcoming hardware. I am not against OpenCL or anything else, but I consider these low-level parallelism. The MIC is only just starting out and is very distinct from GPUs. At this time, we are only preparing for the near future.


I'd bet there are more people who know Fortran today than there were in the 1970s. While scientific programming is no longer dominant in computing, remember that computing itself has grown and expanded into an enormous number of other fields.



