
What happens when you complete your work before you know what you need to do next?

If this never happens, then you have an invisible queue somewhere, because you always do have something to do next.

As for your example, that's a great example of a task that seemed like it would take a long time and ended up being very short. Can you describe why this would be bad to add into your task system?

    - Add task: run banking API in $BigBank test environment
    - Start work time clock
    - Find out we don't need to do it
    - Switch to prod mode
    - Close task and stop time clock
This is now data for your estimates of future tasks, since this sort of thing will happen randomly from time to time.
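To make that concrete, here's a minimal sketch (all task names and numbers are made up) of treating every closed task, including ones that turned out to be moot minutes in, as a duration sample you can summarize later:

```python
# Record every closed task's wall-clock hours, including tasks that were
# invalidated almost immediately; the short closes are real data too.
from statistics import median

closed_tasks = [
    ("Run banking API in $BigBank test env", 0.5),  # invalidated quickly
    ("Order DigiCert certs", 2.0),
    ("Upload certs to $BigBank", 1.0),
    ("Integrate settlement API", 16.0),
]

durations = sorted(hours for _, hours in closed_tasks)
print("median hours to close:", median(durations))  # → 1.5
```

The point is that the 0.5-hour "false start" task lowers the median exactly as it should: some fraction of future tasks will also evaporate early.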


Switching to prod mode takes 5 to 7 business days, because we have to order certs from DigiCert and then upload them to $BigBank, whose team requires 5 to 7 business days to activate said certs.

We expected to turn on prod once testing was finished. But we ended up discovering that prod was the only correct test environment, because their test environment is rand() and fork()ed to the point that it doesn't even slightly resemble the prod environment. Hence, "prod am become test, destroyer of estimates."

So for 5 to 7 business days, we'll be building out our APIs by "assuming a spherical cow," i.e. assuming that all the test environment brokenness is actually working correctly (mocking their broken responses with non-broken responses). Then in 5 to 7 business days, hopefully we'll discover that our spherical-cow representation is actually close to the physical cow of the real production environment. Or it'll still be a spherical cow and I'll be reshaping it into a normal cow.

By the way, if you've never had the pleasure of working with a $BigBank like Scottrade, Thomson Reuters, or $BigBank, let's just say it's ... revealing.


Maybe I'm missing your point. It seems you're attempting to answer the wrong question: is this task accurate, given all the changes that have happened? That's irrelevant for large-scale estimation.

The question for scheduling prediction is: what distribution of time will it take to mark any task in this queue as FIXED/INVALID/WONTFIX/OBSOLETE/etc? The queue can have any amount of vagueness you want in it.

Regardless of the embedded work, and regardless of whether it changes, becomes invalid, or turns out not to exist, these are all just probability weights for any given task/project.
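One way to picture this: a hedged bootstrap sketch (every number here is invented) that resamples historical time-to-close values, whatever their resolution was (FIXED, INVALID, WONTFIX, ...), to get a distribution for the whole queue rather than a point estimate per task:

```python
# Bootstrap a schedule distribution for a queue of vague tasks by
# resampling historical hours-to-close. The samples already include
# "closed as invalid in half an hour" outcomes, so no per-task
# accuracy judgment is needed.
import random

random.seed(0)
historical_hours = [0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 16.0, 40.0]
queue_size = 20
trials = 10_000

totals = sorted(
    sum(random.choice(historical_hours) for _ in range(queue_size))
    for _ in range(trials)
)
p50 = totals[trials // 2]
p90 = totals[int(trials * 0.9)]
print(f"P50 ~ {p50:.0f}h, P90 ~ {p90:.0f}h")
```

You quote the P50/P90 spread instead of a single number, which is exactly what "probability weights for any given task" buys you.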



