If C#/.NET were invented today you could just implicitly have access to this information from within your function's scope. The compiler is smart enough to figure out whether CallerArgumentExpression() is used and optimize away the ones that aren't.
nameof was added in C# 6; this, like that, could have been added as a built-in too, e.g.: $"arg: {nameof(thing)}, with value: {thing}, from caller's: {callerargumentexpression(thing)}". It is a very useful logging/diagnostic tool, but as designed I'd have to throw CallerArgumentExpression("") all over my code just in case, and I hate that.
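For reference, this is a sketch of what the opt-in boilerplate looks like today (Diag.Label is a made-up helper, not a real API):

```csharp
using System;
using System.Runtime.CompilerServices;

var items = new[] { 1, 2, 3 };
Console.WriteLine(Diag.Label(items.Length + 1));
// → arg: items.Length + 1, with value: 4

static class Diag
{
    // Every method that wants the caller's source text needs an extra
    // attributed optional parameter, which the compiler fills in with the
    // exact expression the caller wrote.
    public static string Label<T>(
        T thing,
        [CallerArgumentExpression("thing")] string expr = "")
        => $"arg: {expr}, with value: {thing}";
}
```

The point of the complaint above is that this extra parameter has to exist up front on every method that might ever want it.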
PS - Yes I read the whole article, and understand how useful it will be for asserts/specific tooling; my point is that their imagination was too small and this will creep into all kinds of code.
It's not for you, though. The boilerplate is for a very small number of libraries and frameworks that have use for this feature. The ergonomics of using the resulting methods from these libraries and frameworks are quite good.
It is "not for me", though, for the reason I said: too much boilerplate for common usage. If it had less, it would be for "me" and everyone else too, just like nameof() is.
So it is a niche tool exactly because of the implementation. I bet a lot of people expected nameof() to be niche too, but it isn't.
No, it's a niche tool because it's a niche tool. This feature has very limited use; even more limited than CallerMemberName. It actually makes a lot of sense not to invent a lot of new syntax or compiler logic to handle something this rarely used.
It's not that their imagination is small, it's that the designers of a language as popular as C# have to be cautious: not only must they not break tons of existing code, they must also not introduce features that end up not being used because they're poorly designed (but still have to be supported for the same backwards-compatibility reasons).
Thus it's not uncommon for C# to add things like these: they are small and easily defined, solve a specific but very common use case, and are much easier to put on life support in the future if something better comes along. It's not "neat", but practicality beats purity.
Having led the creation of a language/compiler professionally, I can say the number of features that are "nice to have" is nearly infinite. And occasionally a lower-priority item that was easier to implement at the time becomes far larger in scope because of how another, higher-priority feature was implemented. Some of these get scrapped or pushed back until the cycles allow them.
Often, though, we learn during implementation, and some of the lower priorities turn out to be no longer needed. And reflection provided a lot of the functionality that you could make seamless from a parent class and attributes.
I agree with you about the `nameof` comparison, mainly because it deceptively looks like something you'd be able to write yourself (since it's just an attribute), but under the hood it's compiler magic. I'd rather it be more like a keyword or obvious language built-in so there's a clearer boundary between "this is a language feature" and "this is a standard library feature".
Is there a good use case outside of automatic more decent assert messages? That’s nice, but…
(1) that’s far from a big problem in writing/maintaining unit tests. Isn’t the first thing you do with a failing unit test is examine the code on the call stack anyway? So this maybe saves you a click, assuming some pretty common tooling?
(2) I think you actually want to capture call stack source lines?
Not to mention, quite often the assert expression isn’t that interesting. e.g.,
var result = ComputeAnswer("what is the meaning of life?");
Assert.AreEqual(42, result);
The auto-generated failure message would be something like: "result is expected to be 42 but was 41"
Whereas what you want is more like: "ComputeAnswer("what is the meaning of life?") is expected to be 42 but was 41"
The style of separating the call to the code under test from the assert is pretty common, often recommended, and clean.
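A quick sketch of why that separated style defeats the capture (Assert2 is a made-up helper that returns the failure message, so it's easy to print):

```csharp
using System;
using System.Runtime.CompilerServices;

int ComputeAnswer(string question) => 41; // stand-in for the code under test

// Separated style: the captured expression is just the variable name.
var result = ComputeAnswer("what is the meaning of life?");
Console.WriteLine(Assert2.AreEqual(42, result));
// → result is expected to be 42 but was 41

// Only inlining the call captures the interesting expression:
Console.WriteLine(Assert2.AreEqual(42, ComputeAnswer("what is the meaning of life?")));
// → ComputeAnswer("what is the meaning of life?") is expected to be 42 but was 41

static class Assert2
{
    // Returns the failure message (null on success); the compiler fills
    // actualExpr with the caller's source text for the 'actual' argument.
    public static string? AreEqual<T>(
        T expected, T actual,
        [CallerArgumentExpression("actual")] string actualExpr = "")
        => Equals(actual, expected)
            ? null
            : $"{actualExpr} is expected to be {expected} but was {actual}";
}
```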
Yes, it's also used in component models, e.g. to auto-fill the name of the property that changed when implementing INotifyPropertyChanged [1]. I believe it's used in the MVVM Toolkit, which has been getting positive press lately. It's apparently for building generic controller layers for a UI.
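The component-model pattern being referred to uses the sibling attribute CallerMemberName, which lets each setter raise the event without hard-coding its own name as a string; a minimal sketch:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.CompilerServices;

var vm = new ViewModel();
vm.PropertyChanged += (_, e) => Console.WriteLine($"changed: {e.PropertyName}");
vm.Title = "hello";   // → changed: Title

class ViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler? PropertyChanged;

    // The compiler substitutes the calling member's name ("Title" below),
    // so setters never repeat their own names.
    void OnPropertyChanged([CallerMemberName] string? name = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));

    string _title = "";
    public string Title
    {
        get => _title;
        set { _title = value; OnPropertyChanged(); }
    }
}
```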
> Logging sounds like exactly when you want this and why it's useful?
Yes, that is handy… Log($"someValue = {someValue}") can be Log(labelit(someValue)). That is a small nicety.
> If it's running locally, sure! If you're inspecting logs from a CI system then not so much.
Inspecting logs is a start, but if you don’t bring it back to the code you aren’t solving the problem. So you want a call stack. I agree an enhanced call stack would be even better, and expressions could be a helpful part of that. But CallerArgumentExpression doesn’t seem like a very good way to achieve that. For .net, I guess what you need is the symbol map (.pdb) and source code corresponding to the binary, and a tool to put it together — so that’s all achievable now, without CallerArgumentExpression.
> For .net, I guess what you need is the symbol map (.pdb) and source code corresponding to the binary, and a tool to put it together — so that’s all achievable now, without CallerArgumentExpression.
Sounds like no one needs logs ever again!
“This tool can help you make your logs better”
“But did you know if you do a bunch of work you can have something better than logs?”
This language feature is a small, nice feature. It provides some additional value in numerous common use cases. It may be superseded by other things that an individual may or may not have tooling support for.
Never mind that for a test log with numerous failures you’d need to generate a .dmp per failure. And each .dmp needs to be individually opened, run, and analyzed. So even if you had this you still probably want as much information as is possible in the log. It’s not like this is an either-or thing. You can have both!
As I read it, it's more than that - you get to see the expression passed as an argument.
If they would bake in that expression within NullReferenceException, well, that would be so good... because there are methods that contain dozens of lines and that particular exception can happen almost anywhere. Usually I consider that when writing code and think: well, this method can't throw NullReferenceException, only to be greeted with that particular exception at runtime, ugh.
> If they would bake in that expression within NullReferenceException, well that would be so good...
CallerArgumentExpression doesn't do that, though.
I think you could maybe use it to do something explicit, like...
someexpr.CheckNotNull().DoSomethingWithSomeExpr()
where CheckNotNull() somehow uses CallerArgumentExpression to capture "someexpr" so that if someexpr was null, the exception could include the actual expression in the exception message. But that means decorating code with CheckNotNull() all over the place, which really limits this.
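A sketch of that decoration pattern (Guard.CheckNotNull is hypothetical, not a BCL API):

```csharp
using System;
using System.Runtime.CompilerServices;

string? someExpr = null;
try
{
    someExpr.CheckNotNull().ToUpper();
}
catch (ArgumentNullException ex)
{
    // The exception names the caller's expression rather than a generic message.
    Console.WriteLine(ex.ParamName);   // → someExpr
}

static class Guard
{
    // The receiver of the extension-method call is the attributed argument,
    // so its source text ("someExpr" above) is captured by the compiler.
    public static T CheckNotNull<T>(
        this T? value,
        [CallerArgumentExpression("value")] string expr = "") where T : class
        => value ?? throw new ArgumentNullException(expr);
}
```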
What you really want here is a good call stack -- maybe with expressions, source lines, local variable values, etc.
In a function getMeaningOfLife(), a variable called result is perfectly meaningful. In fact I'd find it a better name than meaningOfLife, especially if you consistently use result in all your functions. But yes, you could rename the variable if you know it's going to be used in an assert like that.
In that code snippet, “answer” would really be the right name for the variable (because the method is called ComputeAnswer). Redundantly restating immediately apparent context isn’t a good style.
I cannot disagree with you any harder. Redundantly naming variables is good style, as it gives you more room to rename functions slightly, split functions, introduce new variables and inject new operations in the middle. And frequently, the function name has nothing to do with the semantics of the returned value. "result := sort(someOtherResult)" doesn't tell me anything. "sortedItems := sort(items)" tells me a lot.
I'm making some assumptions about what these methods do and return. E.g., if validate/validateUserInput does something more specific than general user input validation then it should have a more specific name and the variable its result is assigned to would too.
I don't know if you follow C# these days, but they have source generators now. [1] The only rule is that a generator can't modify existing code, only add more. This way they won't break IntelliSense etc. But you probably want hygienic macros?
I really don't understand the dislike of macros. They would be super handy for things like WPF's INotifyPropertyChanged or for a lot of logging and parameter checking. Instead you have to do all kinds of weird tricks with reflection that are way more complex.
They are adding all this stuff in small pieces instead of having a comprehensible solution.
I think the C macro system (and other similar macro systems) really ruined macros for a lot of people. A lot of the complaints about macros go away if you track scope carefully for generated code either by enforcing hygiene or by a more complex namespace system such as with syntax-case in Racket/Scheme.
It is very unfortunate that C taught multiple generations of programmers that macros are a deadly trap waiting to blow up at runtime if you don’t bend yourself to insane coding idiosyncrasies. Macros are awesome and problems with them can be caught at compile-time, same as real code! But only if you have a macro system that isn’t worse than the problem it’s trying to solve.
> It is very unfortunate that C taught multiple generations of programmers that macros are a deadly trap waiting to blow up at runtime if you don’t bend yourself to insane coding idiosyncrasies.
I don't think that's a fair or reasonable take. If anything, generations of programmers learned from experience that instruction systems that are orthogonal to the language and blindly manipulate text strings while ignoring semantics and context and even basic types that is readily available, such as the C preprocessor or M4, leads to a subpar developer experience, code that is far harder to reason about, and causes bugs that are hard to troubleshoot.
Given the choice, implementing features into the language that can be evaluated by static checks is far cleaner and easier to understand and maintain.
> If anything, generations of programmers learned from experience that instruction systems that are orthogonal to the language and blindly manipulate text strings while ignoring semantics and context and even basic types that is readily available, such as the C preprocessor or M4, leads to a subpar developer experience, code that is far harder to reason about, and causes bugs that are hard to troubleshoot.
Good thing Lisp macros are absolutely nothing like that.
Parse tree manipulation, on the other hand, can be extremely reliable and make code that's much easier to reason about.
The problem is that the lack of macros in C# makes a lot of people use reflection and other stuff which is even harder to debug. The code emission stuff in Roslyn is very interesting but God be with you if you have to understand somebody else's code who uses it.
Also, if you don't overuse the preprocessor you can save tons of boilerplate.
> The problem is that the lack of macros in C# makes a lot of people use reflection and other stuff which is even harder to debug.
Arguably, the problems that require reflection to be solved are also problems that are not solved with macros in a way that leads to code that's easy to maintain and reason about. Therefore, I don't see a net benefit in advocating macros.
Nevertheless, if anyone really wants to use macros in C# they are free to do so. Just pick your favourite macro processor and bolt it onto a project as a preprocessing step.
Yep, this.
Once, I wanted to try Rust (while being heavily invested in C++ and a bit in C), and after just a bit of digging into it, I discovered how prevalent macros are in Rust (e.g. printing).
This turned me off so much that I gave up on learning Rust.
It’s like buying a 3D printer and finding out it can only print out the name of the files you give it. Sure it’s doing something, and if that’s the thing you want then great, but you can’t help but look enviously at your friend’s 3D printer that does whatever they ask it to.
When I think of "call by expression" I think of R. Are there other languages that are call by expression?
I don't know how this compares to others. But it seems weird to me that you get only the value and the text of the expression. There's no AST. You don't get some first-class closure over the elements that make up the expression that you could compute on. The expression also seems to get evaluated before the call.
Ideally I'd love to have a full AST of the expression, with all the intermediary values filled in. Or perhaps, like, an AST with nothing computed, that I can then compute myself.
So I guess, like, the intent here is not call by expression. It's meant to inform, but not to allow the callee to compute.
In R, every function argument may be 1) ignored, 2) accessed as an AST, 3) passed to another function, or 4) accessed as a value, or some combination thereof. Note that 1, 2, and 3 don’t result in the expression being evaluated, as function arguments are lazy in R. There’s no way to look at only a function’s signature and determine whether a given argument is going to be evaluated never, once, or multiple times, or conditionally (though the vast majority of the time it’s “once”).
On the plus side it does allow for some extremely elegant libraries to be written.
The call syntax is just a regular lambda. But in this case, instead of getting a function, you get the AST for its body. Which can be compiled to an actual function and then invoked if desired, or inspected and used for other purposes.
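In C# terms this is the Expression&lt;TDelegate&gt; feature: the same lambda syntax produces either a delegate or an AST, depending on the target type. For example:

```csharp
using System;
using System.Linq.Expressions;

// Assigning a lambda to Expression<...> makes the compiler hand you its
// AST instead of compiled code.
Expression<Func<int, int>> expr = x => x * 2 + 1;
Console.WriteLine(expr.Body);        // → ((x * 2) + 1)

// The AST can be compiled back into an invokable delegate when desired.
Func<int, int> f = expr.Compile();
Console.WriteLine(f(20));            // → 41
```

LINQ providers like Entity Framework inspect these trees instead of invoking them, which is the "used for other purposes" case.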
I think having it explicit is a good idea, since it tells the caller that something funky is going on. The corresponding feature in R has to be implicit to allow the rest of the language to be desugared into function calls (in R, even assignments, statement-separating semicolons, and function definitions are handled that way). It's neat conceptually, but now every single function call can do crazy things - and some libraries make use of that.
The guard clauses remind me of the compile-time verification done by Microsoft's Spec#[0] compiler, which included non-null types, checked exceptions and throws clauses, method contracts and object invariants. Some of those saw the light of day with C#, like the non-nulls, but whatever happened to the rest?
Once I tried to use them, but I forgot what the issue was... first of all, I couldn't easily google how to use it, and then I think there was some legacy stuff there that couldn't be used in .NET 4.7.
> Code contracts aren't supported in .NET 5+ (including .NET Core versions). Consider using Nullable reference types instead.
Yep, it has been effectively deprecated for a while, and now gone in new .NET versions. But when it was there, there was a static checker that you could run, and I thought that's where all the Spec# bits ended up in.
.NET 5+ basically doesn't have anything like contracts out of the box anymore.
This is awesome for dealing with Razor bindings in cshtml when you need to manually set or clear a model state validation error. I previously wrote code that decompiled a lambda specifying the element to set/clear and attempted to reverse it into its most likely textual form; it was a very disheartening process!
Yes, it would be just like the attributes that already exist for getting the caller method name, line, and file. It's unfortunate, but it would be a bigger language change to fix that.
That's a very long name for such a decorative feature. I'd prefer something less verbose; this is just bloat. nameof() is not that bad and does not bloat the function signature.
1. You're probably not using this on every other method, and the name is consistent with other features in the same vein (CallerMemberNameAttribute, etc.).
2. Something like nameof wouldn't really work here as the idea is to re-use both evaluation of an expression _and_ the textual form. nameof doesn't evaluate its argument. So a similar solution would require you to write your expression twice.
I believe the `= ""` makes this parameter 1. have a default value of empty string and 2. more importantly, makes it fit with the existing syntax for a function signature that indicates the function argument is optional.
If I am not mistaken, that means the language feature can be included simply via the newly supported CallerArgumentExpression attribute, without needing any change to the language specification for what defines the correct syntax of a function signature.
You can pass it manually, and you can invoke this method from other compilers that don't implement the attribute. In those cases you want an explicit parameter and one with a default value.
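In other words, the attribute only changes what the compiler substitutes when the argument is omitted; the parameter itself stays an ordinary optional string, so callers (or compilers that don't know the attribute) can supply it explicitly. A sketch with a made-up Verify.IsTrue helper:

```csharp
using System;
using System.Runtime.CompilerServices;

// The compiler fills in the source text when the argument is omitted:
Console.WriteLine(Verify.IsTrue(1 + 1 == 3));
// → Failed: 1 + 1 == 3

// But it's still a normal parameter you can pass yourself, which is all an
// attribute-unaware compiler (or reflection) would ever do:
Console.WriteLine(Verify.IsTrue(1 + 1 == 3, "my own text"));
// → Failed: my own text

static class Verify
{
    public static string IsTrue(
        bool condition,
        [CallerArgumentExpression("condition")] string expr = "")
        => condition ? "OK" : $"Failed: {expr}";
}
```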
This explanation makes sense to me. If you invoke this function with reflection, there's no hope of capturing the textual argument from other parameters.
> so if you mistype the parameter name in the decoration, there's no warning
Mistyping the parameter name results in the following compiler warning, thankfully:
CS8963 The CallerArgumentExpressionAttribute applied to parameter 'conditionExpression' will have no effect. It is applied with an invalid parameter name.
Nice, an actual error that describes what you did wrong and why it's wrong. That's what you want to see in the twenty-first century. Does it have typo checks too, so that if you wrote coditionExpression or conditionExpresion it would point out that maybe you meant the parameter conditionExpression?
C# continues to quietly release incredible features to little fanfare. I hear the same from the Rust and Zig communities. Really feels like we’re in the golden age of language usability and development.
One thing I’d like to see in C# is the theft of the .. operator from Dart so that you can chain calls on a single object without needing to return that object in each call.
I don't think that's true by and large. C# started purely object-oriented, and it has progressively extended its usability into functional programming (pattern matching, null coalescing, tuples, lambdas), lower-level code (ref struct returns, memory spans), and relational programming (LINQ).
So basically, C# is incorporating diverse programming idioms with different strengths from OOP so you can choose the right one for the problem at hand.
A lot of the improvements just logically extend the language to remove arbitrary restrictions. That's what most of C# 9 and 10 appears to be. So most of the features aren't things you'd go out of your way to use; instead, stuff that used to be impossible is now possible. I can't say that I've ever wanted a generic attribute or a constant interpolated string, but if I did want them, I'd be surprised when they didn't work. C# 8 was the last release with major new language features.
But a lot of them seem totally arbitrary and inelegant. CallerArgumentExpression requires a lot of attributes. CallerMemberName gives you the name of the caller but not the class name. This is hard to explain. It just seems random.
It's not hard to explain, it's a logical and small extension of other compiler-driven attributes. Just because you don't like it doesn't mean it's a bad change to the language.
Yeah, I used to love these releases as a C# developer, but now I think they're doing more harm than good. I guess it's my time as a Go developer that has changed me.
Go development surely affects your taste a lot. To me this is one of the greatest benefits of the language: realizing how much better your code becomes when you spend 90% of your time thinking about the problem and 10% on which language feature you're going to use.
I love the language, but I always end up going back to nodejs for API development because of how ridiculously easy it is to implement json based http APIs.
>I intensely dislike the need to define a rigid type to serialize / deserialize.
You can deserialize to a map, without having to define rigid types upfront. But for me, it's more painful to use, because instead of writing something simple like "person.Address.City" with "rigid types", I have to retrieve values by key and cast them to target types.
But pretty much all JSON I have dealt with had an implicit schema, so I see no problem with defining a schema using Go's type system. For me, it's a plus, and defining a type takes, like, 30 seconds? There's also the handy json.RawMessage which you can use to skip certain parts, or defer parsing to a later time.
>And go's libraries and syntax for dealing with dynamic json are pretty painful.
I don't remember having to deal with "dynamic JSON" in an API, can you give an example?
The problem with the typed interfaces are the many cases where the JSON isn't strongly structured, so you end up jumping through all sorts of hoops to make it work, or using a sub-par API like you mentioned, and casting everywhere.
Maybe it's just unavoidable with strong typing. I've had pleasant experiences with C# dynamics with JSON data, but nothing is as easy to me as "let thing = JSON.parse(stringstuff)" and then being able to just do "thing?.someprop" for consumption.
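For comparison, the closest built-in C# analogue I know of for untyped access is System.Text.Json's JsonNode (assuming .NET 6+), though it's still cast-heavy next to the JS version:

```csharp
using System;
using System.Text.Json.Nodes;

// Parse without declaring any type up front; indexers return null for
// missing members instead of throwing.
var thing = JsonNode.Parse("""{"someprop": 42, "nested": {"x": "hi"}}""");
Console.WriteLine((int)thing!["someprop"]!);        // → 42
Console.WriteLine((string)thing["nested"]!["x"]!);  // → hi
Console.WriteLine(thing["missing"] is null);        // → True
```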
Sure, it only takes 30 secs to add a new type, but the issue is that it has cascading effects. Then build and deploy, etc... whereas with Node, I can just do runtime checks and fail softly.
Sorry, I don't mean to resurrect the old strong vs dynamic typing debate, they each have their use - but for REST APIs specifically I find JSON + dynamic types to be a lot more effective.
By the Fourth Edition, Stroustrup's book devotes 400 of its almost 1400 pages to the C++ Standard Library. These pages are a mix of mindless documentation you'd expect from a reference (which even in 2013 seems pretty useless as an actual book) and Stroustrup's own observations, such as on vectors:
> I stopped trying to improve performance using reserve(). Instead, I use it to increase predictability of reallocation delays and to prevent invalidation of pointers and iterators.
Here Stroustrup is talking about abusing vector's reserve() to assure yourself that the memory won't get re-allocated and so it's OK to point into the vector directly. Now, who wants to put their hand up to say what a good idea this is? And how about who has tried to maintain any code that was written by people who thought it was a good idea?
The book would be better (not good, but better) without this "advice".
The Second Edition of Stroustrup's book doesn't explain the Standard Library at all (there was in some sense no "standard library", because ISO wasn't done standardising C++ when it was written) and was 700 pages in the edition I own. It's also far from a comprehensive explanation of C++ despite being so thick.
But why? What's wrong with macros? I agree that you can do stupid things with macros, but before disabling macros I would get rid of reflection first. With reflection you can do way more stupid things than you can with macros.
Nah. Macros allow library developers to do really stupid things that can't be improved on. Reflection lets library users get around the regular stupid things library developers do.
Yes, especially if that variable is an IDisposable that the library author also forgot to dispose. Like Microsoft's own System.IO.Ports.SerialPort with its underlying data stream.
Another problem with them is what they can't do. Personally I've never run into a macro problem in C (and I find these abstract complaints overblown and a little parroted, but ymmv), but their power is far from absolute because it's just a text-concatenation method, not a source-code or type-editing method. As an example exercise, you may try to implement:
Inability to do that, i.e. to get access to the language structure and fix it according to your convenience, combined with non-hygiene, was what led to C++ compilers and other hardcoded madness.
Great, now I can change behavior based on how someone calls a function. This will be great for companies like Volkswagen skirting emissions tests, or for making things work for SonarQube.