Hacker News | pfultz2's comments

Yeah, and C++ static analysis tools already warn for this case, so even for new C++ programmers, where it might not be entirely obvious, it's still easy to catch the error.


You don't need annotations; cppcheck already warns with:

    test.cpp:16:45: error: Using object that is a temporary. [danglingTemporaryLifetime]
        assert((std::vector<int>{1, 2, 3, 4} == append34({1, 2}))); // FAIL: UB
                                                ^
    test.cpp:3:12: note: Return lambda.
        return [&](std::vector<int>&& items) {
               ^
    test.cpp:2:50: note: Passed to reference.
    auto make_appender(std::vector<int> const& suffix) {
                                                     ^
    test.cpp:4:36: note: Lambda captures variable by reference here.
            return append(move(items), suffix);
                                       ^
    test.cpp:15:35: note: Passed to 'make_appender'.
        auto append34 = make_appender({3, 4});
                                      ^
    test.cpp:15:35: note: Temporary created here.
        auto append34 = make_appender({3, 4});
                                      ^
    test.cpp:16:45: note: Using object that is a temporary.
        assert((std::vector<int>{1, 2, 3, 4} == append34({1, 2}))); // FAIL: UB


As awesome as that is, cppcheck doesn't seem ready for real-world use. Literally the first invocation I ran resulted in this error:

  error: Syntax Error: AST broken, binary operator '!=' doesn't have two operands. [internalAstError]
   explicit operator bool() const { return this->get() != pointer(); }

This is for a simple wrapper class that looks like this:

  template<class T>
  class Foo : private Bar<T> {
  public:
   typedef value_type *pointer;
   pointer get() const { return ...; }
   explicit operator bool() const { return this->get() != pointer(); }
  };

Another example:

  void foo(uintptr_t const (&input)[2]) {
   if constexpr (sizeof(uintptr_t) == sizeof(int) && sizeof(long long) == 2 * sizeof(int)) {
    long long value;
    memcpy(&reinterpret_cast<uintptr_t *>(&value)[0], &input[0], sizeof(input[0]));
    memcpy(&reinterpret_cast<uintptr_t *>(&value)[1], &input[1], sizeof(input[1]));
   }
  }

  error: The address of local variable 'value' is accessed at non-zero index. [objectIndex]
    memcpy(&reinterpret_cast<uintptr_t *>(&value)[1], &input[1], sizeof(input[1]));
                                                 ^


Concepts specify constraints on templates, but the types are still templates, so you can't put the function definition in a .cpp file or put the objects into a vector (which is what type erasure allows).


That seems more like type erasure as defined in Java where the container is the same for all data types and the compiler implicitly inserts casts from the base type to the parameter type.


But cppcheck can find a lot of dangling references with lambda captures.


Clang's lifetime profile will catch the first example:

    <source>:8:16: warning: passing a dangling pointer as argument [-Wlifetime]
      std::cout << sv;
                   ^

    <source>:7:38: note: temporary was destroyed at the end of the full expression
      std::string_view sv = s + "World\n";
                                         ^
And cppcheck will catch the second example:

    <source>:7:12: warning: Returning lambda that captures local variable 'x' that will be invalid when returning. [returnDanglingLifetime]
        return [&]() { return *x; };
               ^
    <source>:7:28: note: Lambda captures variable by reference here.
        return [&]() { return *x; };
                               ^
    <source>:6:49: note: Variable created here.
    std::function<int(void)> f(std::shared_ptr<int> x) {
                                                    ^
    <source>:7:12: note: Returning lambda that captures local variable 'x' that will be invalid when returning.
        return [&]() { return *x; };
               ^
Cppcheck could probably catch all the examples, but it needs to be updated to understand the newer classes in C++.


For everyone interested in experimenting with this: https://gcc.godbolt.org/z/vmcnXh

Godbolt's compiler explorer is a great tool to try new features (language, compiler, standard library, etc.).


Meson looks nice, but it still lacks a way to tell it where your dependencies are installed (like CMake's CMAKE_PREFIX_PATH). You can try to get by by setting the pkg-config path, but that doesn't help for dependencies that don't support pkg-config.


You can try xmake's dependency package management.

    add_requires("libuv master", "ffmpeg", "zlib 1.20.*")
    add_requires("tbox >1.6.1", {optional = true, debug = true})
    target("test")
        set_kind("shared")
        add_files("src/*.c")
        add_packages("libuv", "ffmpeg", "tbox", "zlib")


> a way to tell it where your dependencies are installed (like cmake's CMAKE_PREFIX_PATH).

That's not how dependencies are discovered in CMake. Dependencies are added with calls to find_package, and if you have to include dependencies that don't install their CMake or even pkg-config module, then you add your own Find<dependency>.cmake file to the project to search for it, set up targets, and perform sanity checks.
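A hypothetical Find module (say, cmake/FindFoo.cmake, for a made-up library "foo") might look like this:

```cmake
# Locate a dependency that ships neither a CMake config nor a pkg-config file.
find_path(Foo_INCLUDE_DIR foo/foo.h)
find_library(Foo_LIBRARY foo)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Foo DEFAULT_MSG Foo_LIBRARY Foo_INCLUDE_DIR)

# Define an imported target so consumers can just link Foo::Foo.
if(Foo_FOUND AND NOT TARGET Foo::Foo)
    add_library(Foo::Foo UNKNOWN IMPORTED)
    set_target_properties(Foo::Foo PROPERTIES
        IMPORTED_LOCATION "${Foo_LIBRARY}"
        INTERFACE_INCLUDE_DIRECTORIES "${Foo_INCLUDE_DIR}")
endif()
```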


That's not entirely true; CMAKE_PREFIX_PATH is used by find_package and find_library calls.
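That is, a prefix passed on the command line seeds the search paths for both discovery mechanisms (the paths below are hypothetical):

```cmake
# Invoked as: cmake -DCMAKE_PREFIX_PATH=/opt/mydeps ..
# find_package searches <prefix>/lib/cmake/<Pkg>/ for config files,
# and module-mode Find scripts inherit the prefix via find_path/find_library.
find_package(ZLIB REQUIRED)

# find_library also searches ${CMAKE_PREFIX_PATH}/lib.
find_library(FOO_LIB foo)
```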


> For the sake of history, `range` was found/coined by Andrei

No it wasn't. Boost.Range library predates Andrei's Range talk in 2009. Boost.Range was introduced in boost 1.32, which was released in 2004:

https://www.boost.org/users/history/version_1_32_0.html

And from Boost.Range's "History and Acknowledgement" it explains where the term came from:

> The term Range was adopted because of paragraph 24.1/7 from the C++ standard

https://www.boost.org/doc/libs/1_32_0/libs/range/doc/history...

Furthermore, what is being standardized in C++ is an expansion of what is in Boost.Range, which uses iterators underneath.

Andrei's notion of ranges (and what is in D) is actually quite different, as it removes iterators as the basis completely.


Higher density may allow for more volume but it can increase viscosity.


Daniel Pfeifer's Effective CMake talk covers how to do that. Set up each project standalone and get the dependencies with find_package. Then create a superproject that adds each dependency with add_subdirectory and overrides find_package to be a no-op for dependencies added with add_subdirectory, since the "imported" target is already part of the build:

https://m.youtube.com/watch?v=bsXLMQ6WgIk

Now, overriding a builtin function is not the best idea, so in the Boost CMake modules we have a bcm_ignore_package function, which will set up find_package to ignore the package:

http://bcm.readthedocs.io/en/latest/src/BCMIgnorePackage.htm...
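The superproject pattern can be sketched like this (the project name "foo" and paths are hypothetical):

```cmake
# Superproject CMakeLists.txt: make find_package a no-op for dependencies
# that are already added to the build via add_subdirectory.
macro(find_package name)
    if(NOT "${name}" IN_LIST SUPERPROJECT_PROVIDED)
        # The original command stays reachable under the underscore prefix.
        _find_package(${ARGV})
    endif()
endmacro()

set(SUPERPROJECT_PROVIDED foo)
add_subdirectory(external/foo)   # defines the foo targets directly
add_subdirectory(app)            # app's find_package(foo) is now skipped
```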


Just watched that video; long but awesome. I'm not sure exactly how his strategy is supposed to go, but it sounds like the superproject is another repo. I don't really want that. What I'm thinking right now is to use the same overall idea, but instead of a separate superproject, use the same repo along with: https://cmake.org/cmake/help/latest/module/ExternalProject.h...

But put that behind a flag that defaults to off, so it doesn't interfere with the distributions. And it supports patching, so I can patch whatever dependency's CMake in the event that they don't (yet?) support the same strategy, so it can work transitively without having to boil the ocean. Granted, you would need to write patch files for basically all your deps initially, because nobody else is doing it that way.
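Something along these lines, I imagine (the option name, URL, and patch path are all hypothetical):

```cmake
# Opt-in superbuild: off by default so distro packagers are unaffected.
option(USE_SUPERBUILD "Build dependencies with ExternalProject" OFF)
if(USE_SUPERBUILD)
    include(ExternalProject)
    ExternalProject_Add(foo
        URL https://example.com/foo-1.0.tar.gz
        # Patch the dependency's CMake so the strategy works transitively.
        PATCH_COMMAND patch -p1 < ${CMAKE_SOURCE_DIR}/patches/foo.patch
        CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps)
endif()
```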

For reference, I'm trying to help convert an existing open source project from autotools to CMake. At work I'm the resident CMake expert, and we have multiple products, which are effectively built as different 'distributions' (not always Linux-based), so I've been searching for a good way to do this for a while, even though I just started contributing to OSS.


I was working on this topic a few years ago and one of the things I noticed was that a lot of software patterns that work on the source code level can also be applied at the library or project level. What you're describing is basically Dependency Injection and the super-project is a Container.

It seemed like such a good idea, but I quit working on large software projects about the time I thought of it, so I never got a chance to actually try it outside of toy projects. I'm glad to hear that somebody else independently discovered and pursued the idea. If anyone has written about how adopting this approach has worked out for them, I'd love to read it.

