Hacker News | fghgfdfg's comments

Really? I had a very different experience. I mostly just used the official Phoenix docs and never touched Ecto. It was strictly optional and I didn't need it.

I did look at some other tutorials, and I don't recall ever having issues because I wasn't using Ecto. Either they weren't using it or I got what I needed anyway.

This was probably about a year ago or so. Maybe things changed at some point?


I think you're thinking a little too big. All that's really being suggested, I think, is an automatic window cracker/closer.

If the goal is just to maintain a temperature, I don't think you need to go to most of the lengths you mentioned. You have a window that is adequately insulated when shut. This is intended for maintenance, so there's no need for a fan to speed things up. Whatever is making the room too hot now will suffice to heat it back up if needed, and since that's slow it shouldn't overshoot badly either.

It still may not be worth your time, but if just cracking a window would be sufficient cooling for most of the year it could be.


A moment's thought suggests you could use something like this (a temperature-controlled plug bar, basically): http://www.homebrewtalk.com/showthread.php?t=536763 attached to a motor/servo (depending on your window) that turns on or off at a given temperature. Then have a switch on the window that cuts or enables power once the window reaches the desired open or closed position.
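The control logic for a rig like that is just a thermostat with hysteresis (a deadband), so the servo doesn't chatter whenever the temperature hovers near the setpoint. A minimal sketch in Python, with made-up thresholds and no real hardware attached:

```python
# Bang-bang (hysteresis) thermostat for a window actuator.
# The gap between open_above and close_below is the deadband:
# inside it, the window simply stays where it is.

def window_controller(temp_c, window_open, open_above=24.0, close_below=21.0):
    """Return the desired window state given the current temperature.

    temp_c: current room temperature in Celsius
    window_open: current state of the window (True = open)
    """
    if temp_c >= open_above:
        return True        # too hot: crack the window
    if temp_c <= close_below:
        return False       # cool enough: shut it again
    return window_open     # inside the deadband: leave it alone
```

The limit switches mentioned above would just cut servo power at the ends of travel; this logic only decides which way to drive.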


Perhaps I'm just reading too much into the phrase "deeper issues" but I think you're reading too much into the whole "nightmare" phrasing.

I don't think it's so much of an issue of feeling left out as it is feeling pressured to suffer through it with a smile on your face.

Being visibly unhappy or disinterested while you are there usually brings additional pressure to engage. Leaving early invites later questions that require you to lie to be socially acceptable. And if the events are frequent enough it'll be obvious you are lying sooner rather than later.

And then add in considerations about what those above you in the company will think.

It's enough that I do have similar responses to such things. And it's also true that it's probably somewhat too extreme a response. Thinking about it is usually worse than actually being there, as it's usually possible to end up off in a corner with one or two other like-minded people, which is mostly tolerable.

Although as things get more structured this gets harder, and the structure is usually accompanied by even more "optimistic language" so... again, the response seems fairly justified without invoking a need for "deep issues."


I guess what this makes me think about is, part of being an adult is doing things you don't necessarily like.

A company gathering, a few hours a couple times a year, that should be something a person can tolerate. They're your coworkers, you know them all to some extent already, and it's an evening or whatever once a quarter.

Especially since your managers will already know you pretty well.

I don't know, it's hard to be sympathetic, because there are lots of awkward situations I just put up with day-to-day that are a lot higher-pressure than that.


As I mentioned, it ends up being the thought of it that's the worst. I can grin and bear it - as you mentioned that just tends to be part of life.

I think we're probably talking about this a little too broadly. I wasn't thinking so much of the semiannual office party. On that side of things I'd probably completely agree with you.

I was thinking more about things that were some kind of event designed to "raise morale" or something with dedicated activities, potentially run by outsiders, for "team building." The sorts of things that have a strong "drink the koolaid" feel. Those are the things I start to think about (and dread) when that ultra-optimistic corporate speak shows up.

And I've seen at least one blog post on HN advocating for that sort of thing to be as frequent as the company's size allows, down to even weekly, IIRC - although at such a small size it might be harder for it to get too terrible.


The poster's wording is deeply vitriolic. If you can't see that points to some abnormal issues (either with them or with their particular company) then I'm not sure what to say.

Out of curiosity, have you ever worked at a company with a good deal of older married men/women with families?

Nobody is pressured to "suffer through it with a smile on your face." Nobody has to lie about anything. Among mature adults it is enough to say you had other plans, were tired, or simply "Oh, I was there for a bit."

In my personal experience, the kind of thinking that says "I have to pretend I'm enjoying this," "I have to lie about why I left," or "people will judge me for not showing up" is a sort of neuroticism that is entirely self-imposed / self-imagined. Nobody is actually judging you; you only think people are judging you (out of self-consciousness or something else). When I experienced it, it reminded me a lot of being in high school (and being very self-conscious).


I've never understood what people actually mean when they say things like "dynamic languages let you express yourself more easily."

I tend to find it significantly harder in dynamic languages. Yeah, there is way more flexibility in small decisions (like right now, is it easier to return a string or an integer or whatever) but it's not free. Your little decision interacts, directly or otherwise, with potentially the entire rest of the program that might ever exist. There is probably a best choice, and it's hard not to try to find it every single time. Doing the easiest thing right now might mean you need to go and adjust many other things. In order to know what is overall easier you need to keep the rest of the program in mind.

I find dynamic languages kind of exhausting to write anything but one-off scripts in. I can't make good choices without holding far more of the program in my head than I would in a static language. Sometimes I don't even realize I'm making such a decision because I don't have a correct or full view of the rest of the program. To make matters worse those errors won't even show up as being a problem at the decision-site.

I feel like I express myself much easier in a language with a strong static type system. I can write down clearly exactly what the model is, and then the compiler or types let me know when I try to stray from it. From there I can decide if it's better to adjust what I'm doing in the small or the large.


Because languages like Lisp, Smalltalk, and Ruby give you greater control over bending the language to more closely match the problem domain, instead of jumping through extra hoops to express what's needed. The result is a more readable program, because it's expressed in terms of what the program is trying to accomplish.

The entire history of computing is abstraction away from the machine, offloading as much of the work as possible to the machine so that humans can focus on solving problems instead of worrying about lower-level details. Dynamic languages tend to be better at that.

Now if Haskell is the comparison, then maybe not. But Haskell has an advanced type system with excellent composability, so it's quite capable of expressing high-level abstractions and domain-specific code.

That's the general idea, but clearly not everyone agrees. Or perhaps, other concerns are considered more important.


These days I don't blame them. I'm guilty of it myself. After Microsoft repeatedly dropped in the Windows 10 "updates" (including nag) under new names it got to be enough of a hassle to avoid them that I've basically stopped updating. Finding the latest update names to ignore, then actually finding them in the update listing, is enough of a pain that I continually put it off.


>These days I don't blame them. I'm guilty of it myself. After Microsoft repeatedly dropped in the Windows 10 "updates" (including nag) under new names it got to be enough of a hassle to avoid them that I've basically stopped updating.

My PC is next to my bed. I love being woken up at 3 in the morning by Windows attempting and failing to install updates.

It's gotten to the point where I turn it off at the power supply to stop it.


I tired of the 'whack-a-mole' game and just stopped installing post-March-2015 updates on my Win 7 install. It may be vulnerable (what isn't?), but third-party sandboxing, a firewall, and NoScript mitigate the immediate, automated threats well enough (last successful exploit on my machines outside of a purposely infected VM: ~2009). When MS can no longer harvest my activities (or I can deny them control) I will revisit my security policies. Until then, I will continue to disable updates, harden my firewall, and deny any contributions to MS*'s data grab.

* et al. Sadly, "everybody's doing it" these days.

edit:fixed asterisks and unwanted italics.


From a quick google search, it appears the methyl group gets converted to methanol.


Which we all know has no ill effects whatsoever.

/s


In very high quantities, yeah, it's bad. Just like everything, including water and oxygen.

In small amounts, your body can handle methanol just fine. Which is fortunate, because it's present in many foods at much higher levels than what you'd get from drinking a can of Diet Coke.


But you don't know which IMU is misreporting. Now, given the nature of the situation, you could probably say it's best to err on the side of acceleration: if you've stopped accelerating and haven't initiated landing procedures, you're either wrong or somehow already landed and can afford to wait.

It's also not clear to me that this was an issue of erroneous data. All they've said so far is that a rotational sensor maxed out for about a second. I don't think a simple delay-and-retry would have sufficed here; when attempting to land, a second is a pretty long wait.
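For what it's worth, the standard trick with three redundant sensors is median voting (a form of triple modular redundancy): take the middle reading, so a single wild sensor can never steer the output. A minimal sketch, not anything specific to this vehicle:

```python
# Median voting across three redundant sensors. A single faulty
# reading (stuck, maxed out, or noisy) cannot become the output,
# because the median is always bracketed by the two healthy values.

def vote(a, b, c):
    """Return the median of three sensor readings."""
    return sorted([a, b, c])[1]
```

With only two IMUs you can detect a disagreement but not identify the culprit, which is exactly the problem described above.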


I think it's also important to note that the inertial platform was developed for the Ariane 4, where it worked correctly.

The software was actually developed correctly, and functioned as intended. At least for its intended use. Then it was tossed into a new use case without any accounting for the differences in the new situation.


> The software was actually developed correctly

Not quite. If you read the details of the case, you'll find that it didn't have a handler for the overflow in the calculations(!) It's similar to this case in that both were developed under the assumption "can't happen," in the sense of being developed too brittle for inputs that were certainly possible as soon as the trajectory (in the case of Ariane 5) or the duration of the spinning movement (this case) no longer matched their initial test cases.

Still, development, especially in this kind of project, is always a balancing act: organizing to cover most of the cases that can go wrong. Murphy's law works against the whole organization. Given the number of real problems, I'm still amazed that Apollo 11 succeeded.

Or even that there haven't been any really destructive "accidents" involving rockets with nuclear warheads. Think about it: these are prone to the same problems as any other computer-related project, and the amount of damage is effectively infinitely larger than the effort needed to start it.

https://www.theguardian.com/world/2016/jan/07/nuclear-weapon...

“These weapons are literally waiting for a short stream of computer signals to fire. They don’t care where these signals come from.”

“Their rocket engines are going to ignite and their silo lids are going to blow off and they are going to lift off as soon as they have the equivalent of you or I putting in a couple of numbers and hitting enter three times.”

http://thebulletin.org/

"It is 3 minutes to midnight"

Also: "How Risky is Nuclear Optimism?"

http://www-ee.stanford.edu/%7Ehellman/publications/75.pdf

And if you still think "but it works, the proof is that it hasn't exploded up to now", just consider this graph from Nassim Taleb:

http://static3.businessinsider.com/image/5655f69c8430765e008...


> Not quite. If you read the details of the case, you'll find that it didn't have a handler for the overflow in the calculations(!) It's similar to this case in that both were developed under the assumption "can't happen," in the sense of being developed too brittle for inputs that were certainly possible as soon as the trajectory (in the case of Ariane 5)

I'm not sure that's entirely fair. The software was intended for the Ariane 4, which wasn't expected to experience as much horizontal acceleration as the 5. If the 4 had experienced such an acceleration, it wasn't expected to be capable of recovering from it. That area of the code also explicitly had some protections provided by the language removed for the sake of efficiency. So it wasn't a total oversight that just happened to work out - a decision was made based on the fact that the rocket had already irrecoverably failed if the situation ever occurred.

While I agree it's somewhat distasteful not to cover all the bases in the most technically correct way all the time, I'm not sure how important it is for an overflow handler to fire in the inertial reference system just as the rocket self-destructs.


> That area of the code also explicitly had some protections provided by the language removed for the sake of efficiency

As far as I know efficiency wasn't the issue, just that the "model" was, as I've said, brittle. The overflow was to be handled with what we'd today call "an exception handler," and the chosen solution, instead of (reasonably) writing a "keep the maximum value as the result" handler, was to leave the processor effectively executing random code if the overflow occurred. And the "exception" occurred. It's not that overflow detection was turned off to save cycles, or that some default handling was provided. It was handled with "whatever" (execute random instructions!) by intentionally omitting the handlers.
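For illustration, the "keep the maximum value" handler being described is just a saturating (clamped) conversion. A sketch of the difference in Python rather than the original Ada; the function names are made up:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(x):
    """Unprotected conversion: an out-of-range input raises,
    the analogue of the Ariane 5 Operand Error."""
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return n

def to_int16_saturating(x):
    """The alternative handler: clamp to the representable
    range instead of raising."""
    return max(INT16_MIN, min(INT16_MAX, int(x)))
```

The unchecked version mirrors the unprotected Ada conversion; the saturating version is what a "keep the maximum value as the result" handler would do.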


I don't really see that as the main point. Perhaps I shouldn't have mentioned it at all.

I don't see the practical issue with a model being brittle in the face of imminent mission failure. The model breaking down shortly before you self-destruct the whole thing seems like a rather minor concern. It's entirely irrelevant at that point what the model is.

It turns into an issue when somebody throws the software into a new environment without looking at it or its requirements and then doesn't do any testing with it. But that's not on the original developers. Their solution was entirely valid for their problem.

Even if they had done something like report the maximum value instead, the rest of the software for the Ariane 5 could well have been expecting it to do something else entirely which would still result in a serious problem.

It's an issue of inappropriately using software in a new situation. Without knowing and accounting for how it behaves, you can't just use it and expect everything to work perfectly the first time around. It doesn't matter how well the software accounts for various issues - at some point something won't have only a single correct answer and the software you are using will have to pick how to behave. If you aren't paying attention to that, it can and will come back to bite you.


> It doesn't matter how well the software accounts for various issues - at some point something won't have only a single correct answer

It does, immensely. That's why we have floating-point processing units instead of fixed point. Think about it: even single-precision FP allows you to have "expected" responses between 10^-38 and 10^38. There are fewer stars in the observable universe. Double-precision FP allows inputs and outputs between 10^-308 and 10^308; there are only about 10^80 atoms in the whole observable universe. Can a response saying how much the rocket is "aligned" be meaningful? Sure it can.

This piece of program catastrophically failed because some input was just somewhat bigger than before.

Properly programmed components that are supposed to handle "continuous" inputs and provide "continuous" outputs (and that is the specific part we're talking about) should not have "discontinuities" at arbitrary points that are accidents of unimportant implementation decisions (leaving the "operand error" exception unhandled for some input variables while protecting others!).

I can understand your not seeing this if you've never worked in numerical computing or signal processing or some equivalent domain of "real life" responses, but I hope there are still enough professionals who know what I'm talking about.

Again from the report:

"The internal SRI software exception was caused during execution of a data conversion from 64-bit floating point to 16-bit signed integer value. The floating point number which was converted had a value greater than what could be represented by a 16-bit signed integer. This resulted in an Operand Error. The data conversion instructions (in Ada code) were not protected from causing an Operand Error, although other conversions of comparable variables in the same place in the code were protected.

The error occurred in a part of the software that only performs alignment of the strap-down inertial platform. This software module computes meaningful results only before lift-off. As soon as the launcher lifts off, this function serves no purpose."


> That's why we have floating point processing units instead of the fixed point.

I'm not sure what that is supposed to mean. I was talking generally. Not every situation has a single appropriate value to represent it. I don't particularly care if this one example could have used a floating point or not.

> This piece of program catastrophically failed because some input was just somewhat bigger than before.

As far as the software was concerned the rocket had already catastrophically failed. It actually hadn't, because it was a different rocket than the software was designed for. It was "somewhat bigger" in the sense that it was large enough that the rocket the software was designed for would have been in an irrecoverable situation.

> Properly programmed components that are supposed to handle "continuous" inputs and provide "continuous" outputs (and that is the specific part we're talking about) should not have "discontinuities" at arbitrary points that are accidents of unimportant implementation decisions (leaving the "operand error" exception unhandled for some input variables while protecting others!).

That's theoretically impossible. If you want to account for every possible value you're going to need an infinite amount of memory. There will be a cutoff somewhere, no matter what. Even if that cutoff is the maximum value of a double precision float - that's an arbitrary implementation limitation. You can't just say you can more than count the stars in the sky and that's clearly and obviously good enough for everything. It's not.

There will be a limit, somewhere. It will be an implementation-defined one. As long as the limit suits the requirements, it effectively doesn't matter. In this case, the limit was set such that if it was reached the mission had already catastrophically failed. That's all that can practically be asked for.


I've checked the report: the exception resulted in the transmission of effectively random data to the main computer:

http://www.math.umn.edu/~arnold/disasters/ariane5rep.html

"g) As a result of its failure, the active inertial reference system transmitted essentially diagnostic information to the launcher's main computer, where it was interpreted as flight data and used for flight control calculations."

So the handler in the process existed, but it effectively confused the main computer. The units shut off, but before that they sent "the diagnostic," for which there was no handler at all in the main computer. And even more interesting, these processes weren't even needed for the flight. The main computer could have simply ignored such input and the flight would have continued (R1).

Brittle.


> It was handled with "whatever" (execute random instructions!) by intentionally omitting the handlers.

Which is a perfectly valid course of action.

In fact, it is usually the only correct course of action, because there is no other correct course of action to take.

A "keep the maximum value as the result" handler is always plain wrong (and that extends to all cases of <return whatever fixed value sounds cool>); it wouldn't pass a code review.

Source: That's covered in the "safety & testing" courses of my previous university, that happen to be given by one guy who worked on the Arianes. :p


:) I could have expected that those involved would say "it was according to the specs." I don't claim it wasn't. But the commission didn't find that "it had to be all done as it was":

http://www.math.umn.edu/~arnold/disasters/ariane5rep.html

"4. RECOMMENDATIONS"

"R3 Do not allow any sensor, such as the inertial reference system, to stop sending best effort data."

See my other post: they effectively sent something random ("diagnostics" instead of the data). And this piece of software wasn't even needed to run:

"R1 Switch off the alignment function of the inertial reference system immediately after lift-off. More generally, no software function should run during flight unless it is needed."

And of course, everything wasn't even tested together:

"R2 Prepare a test facility including as much real equipment as technically feasible, inject realistic input data, and perform complete, closed-loop, system testing. Complete simulations must take place before any mission. A high test coverage has to be obtained."


The piece of software was fine. It was done for Ariane 4 and worked as expected.

They reused it for Ariane 5 without checking/adapting it for the different environment (more acceleration and thrust). I don't even know what the name for that kind of mistake is. ^^

> See my other post: they effectively sent something random ("diagnostics" instead of the data).

The software failed. It doesn't matter what it returned at that point. There is nothing to do but fix the bug in the software.

If it had returned "the last number" instead of what it did, it would be considered a bug in exactly the same way.

For R2, I suppose they reused the tests from Ariane 4 as well :D


What do we do about this?


Act! Share the info, raise awareness. It seems non-technical people can't imagine how easily computers and technology can be catastrophically wrong. The accident will happen, and we must rationally minimize the impact:

http://nuclearrisk.org

The political action is essential.


I think you're actually completely wrong about that. I think most people will assume it's an actual update when the animation moves. It's the devs who will stop to think about whether they're actually doing that or not, and who have the knowledge to check for themselves.


Anecdotally, it's vastly reduced debugging time in my own personal projects. My most notable problems have all turned out to be the result of things like using the wrong input, or misunderstanding some external spec. At first it was actually a little unnerving that things weren't breaking, but with time I've come to expect that. I came from C++, where I always joked that if things appeared to work correctly the first time around something serious was broken.

It's not free though. You do end up spending more time getting the compiler to actually accept your code. With time I've gotten much better at this, although at first it could be quite a fight. But a compiler error is so much nicer than a bug, so I do think it's very worth it.

And when it comes to coding, how long it takes to type is pretty irrelevant as long as you can type half decently. Typing the code is really the easiest part.

