
What connection are you drawing between the two? BNF is used to define language grammars. Prolog is written using Horn clauses. There is not an obvious connection.


Prolog DCGs are very similar to BNF. They are also a good example of how you can extend Prolog to use different execution strategies.
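For instance, a BNF rule and its DCG counterpart line up almost one to one. A minimal sketch (assuming SWI-Prolog, where a backquoted string is a list of character codes):

    % BNF:  digits ::= digit | digit digits
    %       digit  ::= "0" | "1" | ... | "9"
    digits --> digit.
    digits --> digit, digits.
    digit  --> [D], { D >= 0'0, D =< 0'9 }.   % D is a character code

    % ?- phrase(digits, `2024`).
    % true.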


Prolog DCGs have pitfalls that aren’t immediately evident to someone only familiar with BNF, mainly that you have to specify your grammar in Greibach normal form (every nonterminal production starts with a terminal) to get any sort of performance out of them.
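To make that concrete, here is a toy example (mine, not from any real grammar): the left-recursive form loops under Prolog's depth-first execution, while the Greibach-style form starts every production with a terminal, so the parser consumes input before it recurses.

    % Left-recursive -- loops under plain depth-first execution:
    %   expr --> expr, [+], [a].
    %   expr --> [a].

    % Greibach-style: every production starts with a terminal.
    % (Strict GNF also forbids the empty production; this is the
    % practical shape people actually write.)
    expr --> [a], expr_rest.
    expr_rest --> [+], [a], expr_rest.
    expr_rest --> [].

    % ?- phrase(expr, [a,+,a,+,a]).
    % true.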


Whoa, blast from the past! :D I used Greibach normal form grammars in my Master's in 2014, when I was doing some grammar induction work with DCGs.

Now, Greibach normal form is useful, but its purpose is to avoid left-recursion. That's not to say it improves performance as such: Prolog's SLDNF-Resolution just can't handle left-recursion and will often "go infinite" on it (although many staple Prolog predicates like member/2, append/3, reverse/2 etc. are left-recursive, yet terminate because their arguments shrink on every call).
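To make the distinction concrete (a toy sketch; list_member/2 is just the textbook member/2 renamed so it doesn't clash with the library version):

    % A left-recursive DCG clause: the recursive call comes first and
    % nothing has been consumed yet, so the depth-first search loops.
    letters --> letters, [a].
    letters --> [a].
    % ?- phrase(letters, [a,a]).    % does not terminate

    % member/2's recursive call is also its leftmost goal, but the list
    % argument shrinks on every call, so it terminates.
    list_member(X, [X|_]).
    list_member(X, [_|T]) :- list_member(X, T).
    % ?- list_member(b, [a,b,c]).
    % true .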

DCGs are not the pinnacle of performance, sure, but that's not because of left-recursion. It's because a Definite Clause Grammar evaluated by SLDNF-Resolution is a top-down, left-to-right recursive-descent parser with backtracking, and those are just not very efficient: on an ambiguous grammar the backtracking can blow up exponentially, whereas a chart parser handles a general CFG in O(n^3). So if you want better performance out of a DCG, parse it with a chart parser.
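One cheap way to get some of that effect without leaving Prolog is tabling, which memoises calls to nonterminals; it isn't a full chart parser, but it removes the left-recursion problem and a lot of the re-parsing. A sketch assuming SWI-Prolog's table/1 directive:

    :- table sum/2.            % sum//0 is sum/2 after DCG expansion

    sum  --> sum, [+], term.   % left-recursive, but fine under tabling
    sum  --> term.
    term --> [a].

    % ?- phrase(sum, [a,+,a,+,a]).
    % true.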


"Language grammars" are the same as specifying what is valid and what is invalid. This is done using hierarchical class membership, roughly equal to mathematical sets or programming language types.

I would argue that this attracts users better than the additional formalism you get from logic programming, because once you have determined the validity and class (roughly the "set membership") of something, you can generalize about it more easily and you have a clear cognitive boundary.

Speaking from experience, in practice, I often rigidly define one layer of a system and then implement algorithms less rigidly using a less formal approach. This seems standard across the industry. Think about UUIDs, network addresses, etc.

I believe this type of tradeoff is ultimately what killed logic programming. If you really want to do maths, do maths. If you want to define something, do so succinctly. And if you are here to implement an algorithm, use the right tool. Don't pretend the code will write itself. Logic programming always seemed to me a sort of allusion to mathematical inference that didn't transfer well to real world programming problems.

Just my opinion.


Thanks for the explanation - interesting link. The same sort of rigidity appears in functional programming styles, where it is bound more tightly into the data definition. I'm thinking of sum types in Haskell, for example, where the programmer can make a rigid definition of the boundary you are describing, and it is then reflected in the structure of the code that manipulates those types.

There is an algebraic similarity between Horn clauses, BNF grammar clauses (if you consider concatenation and choice to be operators) and sum types. After working in logic programming, functional programming and imperative styles I would agree that the rigid definition part makes it easier to write the code that will sit on top, even if that code is far less rigid.
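The Horn-clause side of that similarity is easy to see if you hand-expand a DCG rule the way the standard translation does: each nonterminal gets two extra arguments threading the remaining input, BNF's choice becomes separate clauses, and concatenation becomes conjunction. A rough sketch (not the exact expansion any particular system produces):

    % DCG / BNF:  greeting --> [hello], name.
    %             greeting --> [hi],    name.
    %             name     --> [world].
    greeting([hello|S1], S) :- name(S1, S).
    greeting([hi|S1],    S) :- name(S1, S).
    name([world|S], S).

    % ?- greeting([hello, world], []).
    % true.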



