I tried to suppress the urge to vent my frustration, but seeing that the only slightly critical comment in this thread is the downvoted one made me forget my good intentions. WTF. Tens of thousands of lines of code, >10K lines in include files (yay, compile times!), for a task that should be only a side concern and straightforward to implement. JSON has how many data types? Six? Writing super-efficient parser primitives for it should easily be possible in <1K lines of implementation and <50 lines of headers. If there are special requirements around performance or type safety / structural integrity, JSON is far from the ideal choice anyway. And even if one is required to use it under such circumstances for political reasons (in which case, sorry), that doesn't mean the other >99% of applications call for a massively oversized library like this. (Update: upon closer inspection, I'm not even sure that speed is the main concern of this library. It really looks like it tries hard to be "Modern C++" by implementing all sorts of compile-time insanity, bells, and whistles, which will never fit 100% with clients and will be impossible to adapt.)
Recently had a discussion with a colleague about JSON in C++. It's actually a big issue, because reflection isn't really supported in C++.
It's not just a blocker for parsing objects as JSON; it's a fundamental limitation on mapping objects to any kind of format in a general fashion without writing initializers for every type.
I actually have no idea what kind of black magic they're doing to achieve this, but it sounds pretty admirable to me.
Reflection isn't required; keys in JSON are strings, and only a handful of basic data types are supported (strings, numbers, booleans, arrays, and objects/dictionaries, which are just more of the same).
What's wrong with writing "initializers", i.e. serializers/deserializers? If you're looking for automatic mapping from a file format to C++ class objects, why settle for JSON (whether it's this library or JSONCpp)? Why not use Thrift or Protocol Buffers?
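For concreteness, a minimal sketch of what such a hand-written serializer can look like. The `Point` type and its fields are made up for illustration; nothing here is from the library under discussion:

```cpp
#include <sstream>
#include <string>

// Hypothetical application type; names are illustrative only.
struct Point {
    double x;
    double y;
    std::string label;
};

// An explicit, per-type serializer: no reflection, no library, just a
// function you write once per type (and can generate if you have many).
std::string to_json(const Point& p) {
    std::ostringstream out;
    out << "{\"x\":" << p.x
        << ",\"y\":" << p.y
        // Sketch-level simplification: assumes label needs no escaping.
        << ",\"label\":\"" << p.label << "\"}";
    return out.str();
}
```

The deserializer is the mirror image: one explicit function per type that reads the expected keys and fills the fields.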
Exactly this. Sometimes you have to use something not because it is the best option, but because it is what your team is using, or because the ease of use is worth the performance drawbacks.
I think because looking up in a hash map is going to be bad for performance. You want it in an object so you can use fixed offsets. And using individual hash maps for each object is wasteful if they all have the same layout.
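A small sketch of the difference being described, with made-up field names. The point is that a struct field access is a fixed-offset load, while a map lookup is string-keyed, and every map instance carries its own copy of the keys:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Illustrative record; the names are assumptions for this sketch.
struct Reading {
    double temp;
    double humidity;
};  // fields live at fixed offsets, no per-object key storage

// Compiles down to a load at a known offset.
double struct_temp(const Reading& r) { return r.temp; }

// String-keyed lookup at runtime; each map also stores its own keys,
// even when every object has the same layout.
double map_temp(const std::map<std::string, double>& m) {
    return m.at("temp");
}
```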
> If performance is a concern, why on earth are you using JSON.
Maybe JSON is what you get sent and you've got no control over that? Or JSON is what you want to expose and you've got no control over that either? You can want to achieve reasonable performance while still meeting external requirements.
If you're receiving JSON from outside of your system then the performance talk about converting to objects with fixed offsets and hashmaps with the same layout goes out the window, as you don't control the format.
You shouldn't accept major compromises to improve a 90 to a 100 if you could get a 1000 by simply choosing the right approach (like using a binary format when performance really matters).
JSON is effectively a data interchange format. And for data interchange formats, there are reasons to emit property names (for example, because you want humans to be able to read, and maybe even write it). The fact that you want to use JSON as your serialization format shouldn't require you to make your internal data formats use hash tables instead of structs.
Parsing is simple; you just need to be explicit about what you expect to parse. Write ten lines of straightforward parse function (or, maybe better, a data description plus your own shared code that you reuse for all your types) that fills your object's fields. Alternatively, if you have a super-structured approach to your data (like you're required to have with template metaprogramming anyway), you can easily generate this stuff, e.g. with a usable scripting language.
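A sketch of what such an explicit parse function can look like for one known JSON shape. The `Config` type is hypothetical, and `sscanf` stands in for whatever bag of parsing primitives you'd actually share across types; the point is that the expected structure is written out, not derived by metaprogramming:

```cpp
#include <cstdio>
#include <string>

// Hypothetical config object; field names and JSON shape are assumptions.
struct Config {
    int port;
    int threads;
};

// An explicit parse function for one known shape. Real code would use
// small shared primitives (expect_key, parse_int, ...) instead of sscanf,
// but the structure of the code is the same: you state what you expect.
bool parse_config(const std::string& json, Config& out) {
    // Literal characters must match; whitespace in the format string
    // matches any run of whitespace in the input.
    return std::sscanf(json.c_str(),
                       " { \"port\" : %d , \"threads\" : %d }",
                       &out.port, &out.threads) == 2;
}
```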
The idea that types can replace code is just broken. The type system is not a programming language. Or at least, as proven by C++/Haskell/..., not a remotely usable one.
Tip, if you want to defeat an argument by quoting only three words, make sure you're talking about the same thing.
To be clear, I was not talking about lexing JSON, but about how to configure a "meta-parser" like this library to actually convert some JSON to concrete application data objects. The point of contention was: should we really use such a massive library, or shouldn't a simple bag of parsing primitives (which we can use to build exactly the right thing) be sufficient?
Parsing JSON is not simple - not even when you know the structure you're parsing against. For example, that reference shows that even number parsing differs between implementations, so this applies even when the structure is exactly as expected.
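One concrete instance of the number problem: many parsers read every JSON number as a 64-bit double, which silently loses precision on large integers. A small sketch, assuming the two parse strategies below:

```cpp
#include <cstdint>
#include <string>

// The same JSON number token, parsed two ways. "9007199254740993"
// (2^53 + 1) is a perfectly valid JSON number, but it has no exact
// representation as a 64-bit double.
double parse_as_double(const std::string& tok) { return std::stod(tok); }

int64_t parse_as_int64(const std::string& tok) { return std::stoll(tok); }
```

Whether a parser preserves 2^53 + 1 exactly, rounds it to 2^53, or rejects it outright is implementation-defined behavior that the JSON spec leaves open.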
> Tip, if you want to defeat an argument by quoting only three words, make sure you're talking about the same thing.
Please don't be snarky - if you disagree say so and why.
WTF indeed. The comment made me check out the source code...
I could only find some header files which a) have the complete implementation in them and b) of course include all their dependencies.
This is exactly the nonsense that is happening all over the place, and why we need 512 GB of RAM in our workstations to compile anything.
I can't help but think that some basic knowledge about libraries is missing.