Hacker News | pejman_gh's comments

Thank you.


Thank you. First time in 2 years that I actually purchased a book recommendation and am optimistic to read it through. Under-appreciated perks of having a life again. :)


I think that's in Portuguese? Touca is also the name of a bird (Toucan) in Farsi.

I wanted a name that is easy to type on the command-line (since that's the interface for many users), easy to pronounce, and somewhat meaningless in English. Touca was the best I could think of. :)


Touca (github.com/trytouca/trytouca) gives fast feedback (via email or PR comment) for each PR you create that shows you how the behavior and performance of your software has changed compared to a previous trusted version.

We were not competing with or replacing GitHub Actions so the flow that you described wouldn't change. If you were to use Touca, you would continue to run "cargo test/npm test/dotnet test/whatever" that are likely running unit tests and integration tests.

Touca tests are more like snapshot tests and property-based testing in that you wouldn't specify expected values. You could run them locally or on CI or on a dedicated machine.

You would write high-level regression tests using our SDKs in Python, TypeScript, C++, or Java that would let you capture values of variables and the runtime of functions for any number of test cases. The SDKs would submit that information to our remote server, which would compare it against the baseline version, then visualize and report any differences.
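As a rough illustration of the pattern described above, here is a minimal self-contained sketch. This is not the Touca SDK; every name in it (`run_regression`, `workflow`, `check`, `parse_profile`) is hypothetical. The point is that the test code captures observed values per test case instead of asserting against hard-coded expectations, and comparison against a trusted baseline happens elsewhere:

```python
# Sketch of snapshot-style regression testing (illustrative, not the Touca SDK).

def parse_profile(username: str) -> dict:
    # System under test: any function whose behavior we want to track.
    return {"username": username, "score": len(username) * 10}

def run_regression(testcases, workflow):
    """Run the same block of test code for any number of inputs,
    capturing observed values instead of asserting expected ones."""
    results = {}
    for case in testcases:
        captured = {}
        workflow(case, captured.__setitem__)
        results[case] = captured
    return results

def workflow(username, check):
    profile = parse_profile(username)
    check("score", profile["score"])  # captured value, no hard-coded expectation

results = run_regression(["alice", "bob"], workflow)
# A remote server would diff `results` against the trusted baseline version.
```

Note how the inputs (`["alice", "bob"]`) live outside the test logic, so the same workflow runs unchanged for any number of cases.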

If you'd like to learn more: https://touca.io/docs/basics/ And you can find some product highlights here: https://touca.io/blog/touca-v2/


> Touca (github.com/trytouca/trytouca) gives fast feedback (via email or PR comment) for each PR you create that shows you how the behavior and performance of your software has changed compared to a previous trusted version.

Like SonarQube?

> If you were to use Touca, you would continue to run "cargo test/npm test/dotnet test/whatever" that are likely running unit tests and integration tests.

My company can barely invest in 80% unit test coverage. We have engineers writing tests to game coverage with no assertions. How is an organization expected to invest in even more time to write another type of test?


> Like SonarQube?

No. Instead of linting or static analysis, we tell you when a recent code change is causing your software to behave or perform differently when handling a particular test case. Like Cypress, but for software workflows that don't have a web interface or are not easy to test end-to-end.

> My company can barely invest in 80% unit test coverage. We have engineers writing tests to game coverage with no assertions.

Organizations are already investing in this other type of test by assigning QA teams to write and run tests, either manually or using test automation tools. Touca offers a developer-friendly alternative to these tools that helps orgs save costs. It's a shift-left testing solution that gives engineers, during the development stage, the same confidence that QA teams provide.

In my experience, teams resort to superficial coverage goals because higher-level tests, and the infrastructure needed to run them continuously, are too expensive and complicated. We wanted to fix that.


I admit it's always been a point of struggle for me as a technical solo first-time founder. We should have done a better job with messaging too.

Having said that, our v2 launch last month was on the front page for about a day: https://news.ycombinator.com/item?id=34959276 ...but it was too little too late.


Thank you!


I hope you guys were able to pay yourselves a good salary for all the work put in.


Not doing so was certainly one of my execution mistakes. It makes a ton of difference in how you execute, and knowing that you are being compensated for the work might have helped protect my mental health.


Really appreciate your willingness to be transparent and share your thoughts openly here.


Founder and author here. Yes, while there were other external factors beyond our control, e.g. bad market, I think we failed primarily because our product was a nice-to-have and easy for customers to justify not paying for.

Sadly, building a must-have developer tools product is incredibly difficult as users are often not the primary buyers. One trick that some founders use is to postpone launching for real until after they raise one or multiple rounds. This way, they won't be held accountable for growth and revenue numbers. But that's dangerous since they can build for years with confidence, not realizing the flaws in their product positioning.

Startups are just really hard...


Hello! Sorry for the confusion. What we mean is that, unlike unit testing, the test inputs are not tightly coupled with the test code and you can write a single block of test code and have it executed for any number of inputs.

The example in the README is passing the inputs as a list. You could also use a lambda function (see https://touca.io/docs/sdk/testcases/#programmatic-testcase-d...). Alternatively, you can remove the list and pass the inputs as command-line arguments. You can even choose not to pass them at all, in which case the test runner will reuse the list of test cases from the baseline version.
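To make the options above concrete, here is a small sketch of that input-resolution order. This is illustrative only (the function name and parameters are hypothetical, not the Touca SDK): try an explicit list first, then a generator callable, then command-line arguments; an empty result would mean "reuse the baseline's test cases":

```python
import sys

def resolve_testcases(explicit=None, generator=None):
    """Illustrative sketch of resolving test inputs in priority order:
    1. an explicit list, 2. a programmatic generator (e.g. a lambda),
    3. command-line arguments. An empty result would signal the runner
    to fall back to the baseline version's test cases."""
    if explicit:
        return list(explicit)
    if generator:
        return list(generator())
    return sys.argv[1:]  # test cases supplied on the command line, if any

# Inputs as an explicit list:
cases_a = resolve_testcases(explicit=["alice", "bob"])

# Inputs declared programmatically via a callable:
cases_b = resolve_testcases(generator=lambda: (f"user{i}" for i in range(3)))
```

Decoupling input resolution from the test body is what lets the same test code run with any of these input sources unchanged.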


Glad to hear. Yes, your understanding is very accurate. TDD requires a different mindset. Our primary focus has been to do a great job of finding the side effects of code changes to existing systems, such as day-to-day refactors and occasional rewrites. In my personal experience, these are the scenarios in which having hard-coded expected values in assertions is inconvenient and requires mechanical adjustments. Thanks for taking the time to review this and share your thoughts.


Unless a special command-line flag is provided, the test runners return a success exit status. Since we don't know whether potential differences are intended, we don't want to fail the CI by default. When differences in a given version are expected, you can promote that version as the new baseline through the web interface. This way, the test results for all subsequent versions will automatically be compared against the new version.
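The exit-code policy described above can be sketched in a few lines. This is a hypothetical illustration (the function and flag names are made up, not the actual test runner's interface): success by default even when differences exist, since a difference may be an intended change, with an opt-in flag to fail CI:

```python
def exit_status(differences_found: bool, fail_on_diff: bool = False) -> int:
    """Illustrative sketch of the default exit-code behavior: report
    differences without failing the build, unless the user explicitly
    opts in to treating differences as failures."""
    if differences_found and fail_on_diff:
        return 1  # opt-in: break the CI pipeline on behavioral differences
    return 0      # default: differences are reported, not treated as failures
```

Promoting a version as the new baseline then resets what subsequent runs are compared against, so an intended change stops registering as a difference.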

