Hacker News | tommymanstrom's comments

Currently using Insomnium [1], a fork of Insomnia from before the login requirement was added. I most likely found it here on HN [2] and decided to try it out.

[1] https://github.com/ArchGPT/insomnium

[2] https://news.ycombinator.com/item?id=37708582


Markus Jonsson (https://x.com/auonsson?s=21&t=L_vyKMe6Kz1tXjeWTeGk3g) has been doing OSINT work tracking this for some time now.


"Krigare: Ett personligt reportage om de svenska soldaterna i Afghanistan" (eng "Warriors") [1] by Johanne Hildebrandt[2].

A book about the FS19 deployment of Swedish soldiers in Afghanistan in 2010, written by an experienced war reporter. Some chapters reminded me of Generation Kill, with regard to the lack of proper equipment (wrong colour, missing items, etc.) or the purpose of the deployment (too much bodyguarding of visiting politicians, etc.).

1. ISBN: 9789143510737, ISBN-10: 9143510736 https://books.google.se/books/about/Krigare.html?id=LpJjAgAA...

2. https://en.wikipedia.org/wiki/Johanne_Hildebrandt


IPcenter by Amelia (formerly IPSoft) was like that: it could apply Bayesian statistics to incoming events/alerts to determine where to route a ticket. This only worked after a few tickets with roughly the same content had been routed manually.

One issue with this was that it learned that a particular database event should be routed to team_a after an incident. The next time similar tickets were raised, they would be routed to team_a incorrectly. This was a problem because events/alarms tend to look the same for, e.g., an application database, and the organisations would route tickets to each application team first - not to the centralized database team.

It had "virtual engineers" which could do investigation (collecting logs etc) and remediations (basically scripts) too.
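The routing behaviour described above can be sketched as a small naive Bayes classifier over ticket text. This is only an illustration of the idea, not IPcenter's actual implementation; the team names, event texts and add-one smoothing are my assumptions.

```python
# Minimal sketch of Bayesian ticket routing, as described above.
# Team names, sample events and add-one smoothing are illustrative
# assumptions, not IPcenter's actual implementation.
import math
from collections import Counter, defaultdict

class TicketRouter:
    def __init__(self):
        self.team_word_counts = defaultdict(Counter)  # words seen per team
        self.team_ticket_counts = Counter()           # tickets routed per team

    def learn(self, event_text, team):
        """Record a manually routed ticket so similar ones follow it."""
        self.team_ticket_counts[team] += 1
        self.team_word_counts[team].update(event_text.lower().split())

    def route(self, event_text):
        """Pick the team with the highest naive-Bayes log score."""
        words = event_text.lower().split()
        total = sum(self.team_ticket_counts.values())
        best_team, best_score = None, float("-inf")
        for team, n_tickets in self.team_ticket_counts.items():
            counts = self.team_word_counts[team]
            vocab = len(counts) + 1
            score = math.log(n_tickets / total)  # prior from routing history
            for w in words:
                # add-one smoothing so unseen words don't zero out a team
                score += math.log((counts[w] + 1) / (sum(counts.values()) + vocab))
            if score > best_score:
                best_team, best_score = team, score
        return best_team

router = TicketRouter()
router.learn("database tablespace full on app_db", "team_a")
router.learn("login page returns http 500", "team_web")
print(router.route("tablespace full alert on app_db"))  # → team_a
```

Note how this reproduces the failure mode from the comment: once database events for an application were manually routed to team_a after one incident, similar-looking alerts keep going there instead of to the centralized database team.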

https://en.wikipedia.org/wiki/Amelia_(company)


One of my favorites found here, interesting and fun. Looking forward to the last tidbits :)


This works well when people:

* assign themselves to tasks

* keep the status correct, e.g. move tasks to the correct column, resolve them, etc.

* update tasks with comments every now and then

So many times this is not the case, especially not all of the above at the same time. Sadly.

A short question and answer in chat is better than a 60-minute meeting though!


My experience is mixed, from non-safety-critical (though some regulated) industries:

1. Requirements first written in Excel, then imported to Jama and later into HP QC/ALM for manual tests

Pros: Test reports in HP QC helped protect against an IT solution that was not on par with what was needed and requested

Cons: Tests were not helping the delivery - only used as a "defence"; requirements got stale; overall it was cumbersome to keep two IT systems (Jama, HP QC) up to date

---

2. Jira for implementation stories, with some manual regression tests in TestRail and automated regression tests with no link besides the Jira issue ID in the commit. Polarion was used by hardware and firmware teams but not software teams.

Pros: Having a structured test suite in TestRail aided my release testing; more lightweight than #1

Cons: Lots of old tests never got removed/updated, and not all tests had links to requirements in Jira/Polarion (thereby losing traceability)

---

3. Jira with the Zephyr test management plugin for manual tests; automated tests with no link besides the Jira issue ID in the commit

Pros: Relatively lightweight process, since a Jira plugin was used

Cons: Test cases in Zephyr were not updated enough by previous team members

---

4. Enterprise Tester for requirements/test plans, Katalon for e2e tests by a separate QA team. Within the team, automated tests with the Jira issue ID in the commit (no links to Enterprise Tester)

Pros: Again, rather lightweight when it comes to automated regression tests inside team

Cons: Process not optimal; Enterprise Tester was only used for documentation, with no actual testing in it

---
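A recurring pattern in the setups above is the Jira issue ID in the commit message as the only link between tests and issues. Even that minimal link can be mined for rough traceability reports; here is a small sketch of the idea, where the issue-key pattern and the sample commit messages are my own illustrative assumptions.

```python
# Sketch: recover issue -> commit traceability from commit messages alone.
# The issue-key regex and the sample log entries are illustrative assumptions.
import re
from collections import defaultdict

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-123

def issues_per_commit(commit_messages):
    """Map each Jira issue key to the commits that mention it."""
    links = defaultdict(list)
    for sha, message in commit_messages:
        for key in ISSUE_KEY.findall(message):
            links[key].append(sha)
    return dict(links)

log = [
    ("a1b2c3", "PROJ-101 add regression test for login timeout"),
    ("d4e5f6", "Fix flaky e2e run (PROJ-101, PROJ-202)"),
    ("789abc", "refactor helpers, no ticket"),
]
print(issues_per_commit(log))
# → {'PROJ-101': ['a1b2c3', 'd4e5f6'], 'PROJ-202': ['d4e5f6']}
```

In practice you would feed this from `git log --format="%h %s"` output; commits without an issue key (like the last one above) simply drop out of the report, which is itself a useful signal.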

Today, there are good practices which help build quality in - DevOps, GitOps, automated tests (on several levels), static code analysis, metrics from production... Try to leverage those to help guide what tests need to be written.

Many times requirements/user stories are incomplete, no longer valid or simply wrong. Or a PO may lack some written communication skills.

Overall, I want to focus on delivering value (mainly through working software) rather than documenting too much, so I prefer a lightweight process - the issue ID in the commit alongside the automated tests. Bonus points if you use e.g. markers/tags/whatever in a test framework like JUnit/pytest to group tests and link them to a Jira issue ID.
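The marker/tag idea can look like this in pytest. The marker name `jira` and the issue IDs are hypothetical examples; in a real project you would register the marker in pytest.ini to avoid unknown-marker warnings.

```python
# Sketch: tagging tests with a Jira issue ID via a custom pytest marker.
# The marker name "jira" and the issue IDs are illustrative assumptions.
import pytest

@pytest.mark.jira("PROJ-101")
def test_login_timeout_is_configurable():
    assert 30 > 0  # placeholder assertion

@pytest.mark.jira("PROJ-202")
def test_retry_on_http_500():
    assert True  # placeholder assertion
```

You can then select the tagged tests with `pytest -m jira`, and a small conftest hook can read `item.get_closest_marker("jira").args[0]` to emit the linked issue ID in test reports.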


A pragmatic approach, which I value as a team member!

Currently my small team is using “user stories” for deliverable value and tasks as default for everything else.

Sometimes "user stories" get longer descriptions, since the why is important to capture (for future team members and myself).

