
Traditional defense contractors have low profit margins because of cost-plus pricing on their contracts. They are literally only allowed to charge the costs they incur plus a fixed profit percentage. As such, they have an incentive to drive up costs, so that their profit, while a low percentage, is taken on a high base.

SpaceX wouldn’t need to do that. Companies like Anduril are already trying to win contracts on a fixed-price model, and if they succeed, they’ll have much higher profit margins than Raytheon et al.


The estimates that put Golden Dome anywhere close to a trillion dollars rest on the assumption that it will be much more expensive to build than the administration believes. If it ends up as fixed-price bids and costs less than those estimates assume, it will be well under $200 billion.

There are multiple estimates, including by Republican members of Congress and think-tanks that put it in the many trillions of dollars.

That's right, and Golden Dome (which is definitely a multi-trillion-dollar program if space-based weapons are employed) has a bunch of convenient oligarch properties, like built-in planned obsolescence from orbital decay, which amplifies a launch monopoly.

> which is definitely a multi-trillion-dollar program

The program already exists and you can see how much has been allocated to it.


Sure, let's pretend the first-year budget of the program represents its entire future.

Even so, it is already 2.2% of the entire federal budget. Multiple estimates put the total Golden Dome cost in the trillions of dollars.


His political goals seem to align pretty well with the goals of the democratically elected governments, which are perfectly happy to buy products and services from him. You might not agree with their goals, but it’s absurd to suggest that this should make him ineligible for clearance. Clearance is not some kind of a “good boy with right politics” certification, it’s rather “is this person trustworthy enough to depend on in matters of national security”.

On modern machines, looking things up can be slower than recomputing them, when the computation is simple. This is because memory is much slower than the CPU, which means you can often compute something many times over before the answer from memory arrives.
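As a toy sketch of the trade-off (Python here, so the absolute timings say nothing about compiled code, and the table size, step, and function are made up for illustration):

```python
import math
import timeit

# Hypothetical nearest-entry sine table at 0.001-radian resolution.
STEP = 0.001
TABLE = [math.sin(i * STEP) for i in range(6284)]  # covers [0, 2*pi)

def sin_lookup(x):
    # One memory access; accuracy limited to roughly STEP/2.
    return TABLE[int(round(x / STEP))]

def sin_compute(x):
    # Recompute every time; no table to keep resident in cache.
    return math.sin(x)

xs = [i * 0.01 for i in range(100)]
t_lookup = timeit.timeit(lambda: [sin_lookup(x) for x in xs], number=200)
t_compute = timeit.timeit(lambda: [sin_compute(x) for x in xs], number=200)
# Which side wins depends on whether TABLE stays hot in L1/L2.
```

In a microbenchmark the table tends to stay cached; in a real program competing for cache, the picture can invert.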

Not just modern machines: the Nintendo 64 was memory-bound under most circumstances, and as such many traditional optimizations (lookup tables, loop unrolling) can be slower on the N64. The loop-unrolling case is interesting: because the CPU has to fetch more instructions, unrolling puts more strain on the memory bus.

If you're curious: on the N64, the graphics chip is also the memory controller, so everything the CPU can do to stay off the memory bus has an additive effect, freeing the chip to do more graphics. This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format that takes only two bytes per pixel. For CPU requests, the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

They were hoping that by having high-speed memory (250 MHz, while the CPU ran at 90 MHz) it could provide for everyone, and it did OK; there are some very impressive games on the N64. But on most of them the CPU is running fairly light: gotta stay off that memory bus.

https://www.youtube.com/watch?v=xFKFoGiGlXQ (Kaze Emanuar: Finding the BEST sine function for Nintendo 64)


The N64 was a particularly unbalanced design for its era, so nobody was used to writing code like that yet. Memory bandwidth wasn't a limitation on previous consoles, so it's as if nobody had thought of it.

> This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format that takes only two bytes per pixel. For CPU requests, the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

The Ensoniq EPS sampler (the first version) used 13-bit RAM for sample memory. Why 13 and not 12? Who knows? Possibly because they wanted it "one louder", possibly because the Big Rival, the E-Mu Emulator series, used μ-law codecs, which have the same effective dynamic range as 13-bit linear.

Anyway, you read a normal 16-bit word using the 68000's normal 16-bit instructions, but only the upper 13 bits were actually valid data from the RAM; the rest were tied low. Haha, no code space for you!


Unless your lookup table is small enough to only use a portion of your L1 cache and you're calling it so much that the lookup table is never evicted :)

Even that is not necessarily needed, I have gotten major speedups from LUTs even as large as 1MB because the lookup distribution was not uniform. Modern CPUs have high cache associativity and faster transfers between L1 and L2.

L1D caches have also gotten bigger -- as big as 128KB. A Deflate/zlib implementation, for instance, can use a brute force full 32K entry LUT for the 15-bit Huffman decoding on some chips, no longer needing the fast small table.
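The direct-indexed decode idea can be sketched like this (a hypothetical 3-symbol code, not zlib's actual tables): every possible max-length bit window indexes straight to a (symbol, length) pair, so decoding is one table load per symbol instead of a bit-by-bit tree walk.

```python
MAXBITS = 3
# Hypothetical prefix-free code: symbol -> (bit string, code length).
CODES = {"a": ("0", 1), "b": ("10", 2), "c": ("11", 2)}

# Fill every MAXBITS-bit index whose leading bits match a code.
table = [None] * (1 << MAXBITS)
for sym, (bits, length) in CODES.items():
    prefix = int(bits, 2) << (MAXBITS - length)
    for fill in range(1 << (MAXBITS - length)):
        table[prefix | fill] = (sym, length)

def decode(bitstring):
    out, pos = [], 0
    while pos < len(bitstring):
        # Peek MAXBITS bits (zero-padded at the end): one table load.
        window = bitstring[pos:pos + MAXBITS].ljust(MAXBITS, "0")
        sym, length = table[int(window, 2)]
        out.append(sym)
        pos += length  # consume only the bits the code actually used
    return "".join(out)
```

Deflate's codes can be up to 15 bits long, which is where the full 2^15 = 32K-entry table comes from.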


It's still less space for other things in the L1 cache, isn't it?

Interesting. About 20 years ago, it must have been the other way around because I remember this paper [1] where the authors were able to speed up the log function by making use of a lookup table in the CPU cache.

[1] https://www.researchgate.net/profile/Nikki-Mirghafori/public...


It........depends.

I make things faster all the time by leveraging various CPU caches, sometimes even disk or networked disks. As a general principle though, memory lookups are substantially slower than CPU (and that has indeed changed over time; a decade or three ago they were close to equal), and even cache lookups are fairly comparatively slow, especially when you consider whole-program optimization.

That isn't to say that you can't speed things up with caches, but that you have to be replacing a lot of computations for even very small caches to be practically better (and even very small caches aren't helpful if the whole-program workload is such that you'll have to pull those caches from main RAM each time you use them).

To your paper in particular, their technique still assumes reasonably small caches which you constantly access (so that you never have to reach out to main RAM), even when it was written, and part of what makes it faster is that it's nowhere near as accurate as 1ULP.

Logarithms are interesting because especially across their entire domain they can take 40-120 cycles to compute, more if you're not very careful with the implementation. Modern computers have fairly fast floating-point division and fused multiply-add, so something I often do nowadays is represent them as a ratio of two quadratics (usually rescaling the other math around the problem to avoid the leading coefficient on one of those quadratics) to achieve bounded error in my domain of interest. It's much faster than a LUT (especially when embedded in a larger computation and not easily otherwise parallelizable) and much faster than full-precision solutions. It's also pretty trivially vectorizable in case your problem is amenable to small batches. Other characteristics of your problem might cause you to favor other solutions.
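A minimal sketch of the ratio-of-two-quadratics idea, using the generic [2/2] Padé approximant of log(1+t) rather than a custom fit to a particular domain (the interval and the frexp-based range reduction are my choices for illustration):

```python
import math

def log_pade22(x):
    """Approximate log(x) on [1, 2] with a ratio of two quadratics.

    [2/2] Pade approximant of log(1+t) about t = 0:
        log(1+t) ~= t*(1 + t/2) / (1 + t + t*t/6)
    One division plus a few multiply-adds; worst error on [1, 2]
    is about 8e-4, at x = 2.
    """
    t = x - 1.0
    return t * (1.0 + 0.5 * t) / (1.0 + t + t * t / 6.0)

def log_approx(x):
    # Range-reduce with frexp: x = m * 2**e, m in [0.5, 1),
    # so 2*m lands in [1, 2) and log(x) = log(2m) + (e-1)*log(2).
    m, e = math.frexp(x)
    return log_pade22(2.0 * m) + (e - 1) * math.log(2.0)
```

A purpose-fit rational approximation over the actual domain of interest (as described above) does better than this generic Padé form, but the structure is the same and vectorizes cleanly.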


Logarithms are interesting because there's hardware to approximate them built into every modern processor as part of floating point. If you can accept the error, you can abuse it to compute logs with a single FMA.

An example of an exp and a log respectively from my personal library of bit hacks:

    bit_cast<float>((int32_t)(fma(12102203.2f, x, 0x3f800000)));

    bit_cast<float>((uint32_t)(-0x3f800000 - 36707.375f*x)) + 7;
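The exp snippet is the classic Schraudolph construction: the constant is 2^23/ln 2, and 0x3f800000 is the bit pattern of 1.0f, so the float's exponent field absorbs the integer part of x/ln 2 while the mantissa linearly approximates 2^frac. A Python re-creation of it (float32 semantics via struct; relative error stays within roughly +6% for this uncorrected bias):

```python
import math
import struct

EXP_SCALE = 12102203.2   # 2**23 / ln(2)
ONE_BITS = 0x3F800000    # IEEE-754 bit pattern of 1.0f

def fast_exp(x):
    # Same shape as bit_cast<float>((int32_t)fma(...)): build the bit
    # pattern as an integer, then reinterpret those bits as a float.
    bits = int(EXP_SCALE * x + ONE_BITS)  # int() truncates, like (int32_t)
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

fast_exp(0.0) is exactly 1.0, and fast_exp(1.0) comes out around 2.89 versus e = 2.718, i.e. about 6% high, which is near the worst case.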

It’s a delicate balance and really hard to benchmark. You can write a microbenchmark that keeps the lookup table in cache, but what if your function isn’t the only thing being done in a loop? Then, even if it’s in the hot path, there may be insufficient cache to keep the table loaded the entire way through the loop, and lookup is slower.

TLDR: it depends on the usage; we should really have multiple specialized functions, matched to the properties of the caller’s needs, where the caller can choose a cache or a compute approach.


It may be, especially when it comes to unnecessary caching. But I think `atan` is almost brute force; a lookup is nothing compared to that.

Sin/cos must be bounded by sqrt(x²+y²). That is also cached, indeed.


What do you mean brute force?

We can compute these things using iteration or polynomial approximations (sufficient for 64 bit).


There is a loop of "is it close enough or not", something like that. That is brute force. Atan2 looks exactly like that to me.

> Sin/cos must be bounded by sqrt(x²+y²). That is also cached, indeed

This doesn't make a ton of sense.


In what way do you think a sin function is computed? It is something that is computed and cached, in my opinion.

I think it is stored like sintable[deg]. The degree is the index.


> In what way do you think a sin function is computed?

In some way vaguely like this: https://github.com/jeremybarnes/cephes/blob/master/cmath/sin...

> I think it is stored like sintable[deg]. The degree is the index.

I can think of a few reasons why this is a bad idea.

1. Why would you use degrees? Pretty much everybody uses and wants radians.

2. What are you going to do about fractional degrees? Some sort of interpolation, right?

3. There's only so much cache available, are you willing to spend multiple kilobytes of it every time you want to calculate a sine? If you're imagining doing this in hardware, there are only so many transistors available, are you willing to spend that many thousands of them?

4. If you're keeping a sine table, why not keep one half the size, and then add a cosine table of equal size. That way you can use double and sum angle formulae to get the original range back and pick up cosine along the way. Reflection formulae let you cut it down even further.

There's a certain train of thought that leads from (2).

a. I'm going to be interpolating values anyway

b. How few support points can I get away with?

c. Are there better choices than evenly spaced points?

d. Wait, do I want to limit myself to polynomials?

Following it, you get the answers "b: just a handful", "c: oh yeah!", and "d: you can if you want, but you don't have to". Then, if you do a bunch of thinking, you end up with something very much like what everybody else in these two threads has been talking about.
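As a concrete endpoint of that train of thought, here is the simplest version: a degree-5 odd polynomial on [-π/2, π/2] with plain Taylor coefficients. A minimax (Remez) fit of the same degree, i.e. better-chosen "support points", is substantially more accurate, but the shape is the same.

```python
import math

def sin_poly(x):
    """Degree-5 odd polynomial approximation of sin on [-pi/2, pi/2].

    Taylor coefficients for simplicity; worst error is about 4.5e-3,
    at the interval endpoints.
    """
    x2 = x * x
    # Horner form of x - x**3/6 + x**5/120
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0)))
```

Reflection identities reduce any argument to this interval, which is the same trick as shrinking the table in point (4) above.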


It isn't a good idea to store such values in code. I think it is something that is computed when a programming environment boots up, e.g. when you run "python", or install "python".

I am trying to understand how Math.sin works. There is Math.cos; it is sin shifted by +90 degrees. So not every one of them is a separate piece that completes the big puzzle.


atan(x) = asin(x / sqrt(1+x*x))

So the asin is brute force; I think that is how the `atan` function works. The article explains nothing. Sqrt is also like that.

It looks like there should be a reasonable explanation when it is written in math form, but there is not.


And why do you think Congress passed this law? What prompted them to micromanage the military in this manner? I encourage you to research this topic; “McNamara’s folly” will serve as a good starting keyword. Spoiler: it has everything to do with the unsuitability of low-IQ enlistees.

FWIW, ASVAB is an IQ test. Any intelligence researcher will tell you so, because it exhibits the usual positive manifold, you find the usual g factor in it, and it shows high correlation with other IQ tests. The military doesn’t usually call it that for political reasons, but will happily admit in private that the ASVAB and WAIS measure the same thing: https://web.archive.org/web/20200425230037/https://www.rand....


Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement. What does this mean in scientific, empirical terms? What kind of facts would we observe in a world where this is true? What empirical observations would we make in a world where it's false?

> Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement.

No need to apologize. Perhaps my g is too low to describe my thoughts properly.

> "is representative of general intelligence"?

This factor that is derived from the positive correlations, g, is called general intelligence. So, g is nominally general intelligence, but is g actually what the name implies? One can take n positively correlated but independent things, and there will always be some factor that can be derived from them. However, that does not mean the underlying factor is necessarily causal.

> This is a very abstract statement.

We are discussing abstract concepts.

> What does this mean in scientific, empirical terms?

That causality would be scientifically and empirically verifiable.

> What kind of facts would we observe in a world where this is true? What empirical observations would we make in a world where it's false?

Alas, that is precisely the point I was trying to paraphrase from Shalizi. Whether g be true or false -- the result wouldn't look any different. The methodology being used cannot determine what is true nor false, and that is the crux of this entire problem.


One can take n positively correlated but independent things, and there will always be some factor that can be derived from them.

I hope you understand that your vague question cannot be seen as equivalent to this rather more concrete statement. That’s why I asked for clarification, and your patronizing comments were really not called for.

In any case, Shalizi is very wrong, probably because he is entirely unfamiliar with the literature. He is wrong on multiple accounts.

First, yes, any number of positively correlated measurements will yield a common factor. However, when talking about g, this is not an artifact of how we constructed IQ tests. Shalizi says:

What psychologists sometimes call the “positive manifold” condition is enough, in and of itself, to guarantee that there will appear to be a general factor. Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence.

But this is just not true. Tests are not made to correlate with each other. Any time anyone attempts to construct a test of general mental ability, we always find the same g factor, even if they explicitly attempt to make a battery that tries to measure distinct, uncorrelated mental aptitudes. Observe how Shalizi fails to provide a single example of a test that does not exhibit the positive manifold with other tests.

Second, unlike Shalizi, we know that g is the predictive component of the IQ tests. IQ predicts real world outcomes very well, but what is really interesting is that the predictive power of individual subtests of an IQ test is practically perfectly correlated with g-loadings of the subtest. This would be very surprising if g was just a statistical artifact.

Shalizi says

So far as I can tell, however, nobody has presented a case for g apart from thoroughly invalid arguments from factor analysis; that is, the myth.

But this is just baffling if you have any familiarity with the literature.

Whether g be true or false -- the result wouldn't look any different. The methodology being used cannot determine what is true nor false, and that is the crux of this entire problem.

That’s just not true. For example, if g was a statistical artifact, one of the hundreds of intelligence tests devised would have not exhibited the positive manifold with all the others. It would not be correlated with heritability. It would not be correlated with phenotype features like reaction time. The world where g is a statistical artifact looks much different than our world.


There's no debate on construct validity of IQ among the experts in the field. The consensus position is that IQ tests measure something real, that the tests enjoy extremely high measurement invariance (which implies construct validity), and that the results have extremely high predictive validity (relative to literally anything else in the entire field of psychology). The current debate is more along the lines, whether the contribution of genes to variance in IQ is closer to 30% or to 80%.

Wait, this comment starts out with an assertion about one scientific question (the construct validity of a quantitative psychological metric) and ends with a statement about the range of a totally different question, and it's one studied by different fields than the former question.

Yes, I could have left off the last comment. I added it to illustrate where the debate currently lies. I am not sure what your point is.

That the logic of your comment doesn't even hang together? Which debate? You've managed to cite two of them.

I’m really struggling to understand what your point is. The person I replied to was wrong as to where the current debate among experts is, so I pointed that out and gave an example of where the debate currently is. Is that really such a strange thing to say?

Yes, but since the heritability is high, the average IQ of the children will be close to the average IQ of the parents, despite the fact that it will tend to regress towards the mean.

That’s not how genetics works. It’s not a simple averaging scheme.

That’s exactly how it works in the standard additive model of heritability, and we have lots of empirical evidence that heritability of intelligence matches that model very well.

Not quite. The point of Taylor’s theorem is that the n-th degree Taylor polynomial around a is the best n-th degree polynomial approximation in the immediate vicinity of a. However, it doesn’t say anything about how good an approximation it is further away from the point a. In fact, in math, when you use a Taylor approximation, you don’t usually care about the infinite Taylor series, only the finite truncation.
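A small numeric illustration of that point (degree and expansion point chosen arbitrarily): the degree-3 Taylor polynomial of log about a = 1 is excellent right next to a and mediocre by x = 2.

```python
import math

def log_taylor3(x):
    # Degree-3 Taylor polynomial of log(x) about a = 1:
    # log(1+t) ~= t - t**2/2 + t**3/3, with t = x - 1.
    t = x - 1.0
    return t - t * t / 2.0 + t ** 3 / 3.0
```

At x = 1.1 the error is about 2.3e-5; at x = 2 it has grown to about 0.14 (0.8333 versus ln 2 ≈ 0.6931), even though both points are inside the series' radius of convergence.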

Yes, it's unlikely that people are illegally voting in person in large numbers. It is relatively easy to do so, and the risk is relatively low, if you approach it intelligently (e.g. vote as someone who is registered, but highly unlikely to vote -- even if they do vote, you're highly unlikely to be caught anyway). However, there's just no incentive for individuals to do so, because the reward is very low: each individual's vote is really worth very little, and an individual fraudulent voter does not benefit from it enough to counterbalance the risk.

On the other hand, there are other ways for people to steal elections. For example, you can steal mail-in ballots from mailboxes, fill them, and covertly drop them in. It's particularly easy to do in states where all ballots are mail-in by default. The risk-reward calculation is different, because now one organized person can cast dozens, or hundreds of fraudulent votes, instead of just one.

In other states, you don't even need to steal them: you can just knock on the door, ask people for ballots (or buy them, many people will happily sell their right to vote for $20, because it's worthless to them), fill them in, and drop them off completely in the open. Of course, the stealing/buying and filling in the ballots is illegal, but since this happens in private, it's much harder to detect and prosecute. That's why most states disallow dropping off votes for third parties, but some states inexplicably allow it.

There are multiple recent cases where people were convicted for schemes like that, e.g. State of Arizona v. Guillermina Fuentes, Texas v. Monica Mendez, Michigan v. Trenae Rainey, U.S. v. Kim Phuong Taylor, and more. Since these are only the cases where a conviction was secured, the true number is much higher.


Buying ballots on a large scale seems difficult to me, because you have to keep a large group of strangers from talking. They will brag to their friends and family members and the information will come out. I can only imagine people buying a few ballots from their apolitical family members.

They’d be filled to capacity even if they literally gave everything for free, because the unsold stuff is mostly the kind of things that people don’t want in the first place. The good stuff would be snatched, and the things nobody wants would linger there forever.

