> Karma has some problems too, and could no doubt do with some adjustments, but I think that it's more useful to have it than not.
I agree that it could use adjustments.
Real-life karma is a good thing, but karma points are mostly a bandaid for "no other way of trying to keep trolling and other poor quality communication down".
The serious downside of having karma is that it grants undeserved power and prestige: high status and access to certain actions for users who don't necessarily deserve them.
But the only way to get rid of it while keeping a similar benefit is to have a community that can elect deserving users into power, like StackOverflow's elections. It would be even better if you gave the power to everyone and they were all wise enough and dependable enough to wield that power as needed, but that is unlikely to happen anytime soon.
> If you cannot figure out in one minute what a C++ file is doing, assume the code is incorrect.
This statement at first resonated with me, but then I thought about it: this doesn't reduce the complexity of the overall application or service, it just means that one file is simple. You could have 10,000 short files instead of one long one; is that any simpler?
Yes. If each file makes sense in isolation then the whole will as well. Just splitting code into lots of files won't necessarily produce files that you can figure out in 1 minute though (you have to define the boundaries between files such that they make sense).
I disagree. A complicated function may be made of a bunch of statements where each statement makes sense easily. The entire function may still be complicated. The same argument can be extended for files and projects. Even if each file is simple, if the code in those files interact with each other in a complicated manner, the project becomes complicated. This can happen despite having neat boundaries between files. Nothing stops a new programmer from writing new simple files that interact with the existing files in a complicated manner. Simplicity of source code in individual files or functions is just one of the factors behind a simple project. Simplicity of design has to go hand in hand with it.
On the other hand, a couple of files may be very complicated but the entire project could still be simple if those complicated files hide the complexity behind neatly exposed functions, and the remainder of the project does not make use of those functions in a complicated manner.
> A complicated function may be made of a bunch of statements where each statement makes sense easily. The entire function may still be complicated.
Statements, yes, but I avoid them where possible: the complexity comes from their interactions, because their interactions are unmanaged, implicit, and arbitrary. If you make each function an expression made up of expressions and functions, then I think it becomes true that if each expression makes sense easily, the whole will also make sense easily.
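To sketch the contrast in Python (the function and field names here are made up for illustration):

```python
# Statement style: intermediate state is mutated across statements,
# so understanding any one line requires tracking what came before it.
def total_price_statements(items):
    total = 0
    for item in items:
        price = item["price"]
        if item.get("taxable"):
            price = price * 1.2
        total += price
    return round(total, 2)

# Expression style: each sub-expression is self-contained, so the
# whole function is just a composition of parts that make sense alone.
def taxed(item):
    return item["price"] * (1.2 if item.get("taxable") else 1.0)

def total_price_expressions(items):
    return round(sum(taxed(item) for item in items), 2)
```

Both compute the same thing, but in the second version `taxed` can be read, tested, and reused in isolation, with no hidden interaction through mutable state.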
As far as the complexity of programs is concerned, there is a similarity between statements at one level of abstraction and functions at a higher level. I have seen many cases where small functions have been assembled into complicated programs. These programs often have a proliferation of 'helper' classes and functions, where you have to trace through long series of calls to get to where the work is done. They often seem to come from a poor design that has been repeatedly patched instead of fixed, or from programmers who write functions because they think they will be part of the solution, but who don't back them out and replace them when they find a complication they had not anticipated.
Using small functions is a necessary, but not sufficient, condition for making understandable code.
I think what you're describing is a case where you can't understand what those helpers do, and therefore can't understand what the function that calls them does. I maintain that if each individual function makes sense then the whole will too.
This holds if the small functions are built around a coherent top-down design, respecting each other's invariants. Once the project is too large to fit in one's head, it is no longer sufficient for each function to be 'correct' in a local sense.
Sensible, understandable functions can be assembled into complicated, incomprehensible programs in exactly the same way that the sensible, understandable operators of a programming language can.
In my opinion this only shifts the problem from writing clean code to handling hundreds or thousands of small, simple source files. This makes it much more complicated to handle a large project, because everything is scattered to such an extent that a developer spends more time searching through the list of included files than understanding what the code is actually doing. Implementing complex functionality is going to be a nightmare, and in the end everything is going to be merged into a single translation unit anyway. But this is just my personal opinion.
> In my opinion this only shifts the problem from writing clean code to handling hundreds or thousands of small, simple source files. This makes it much more complicated to handle a large project, because everything is scattered to such an extent that a developer spends more time searching through the list of included files than understanding what the code is actually doing.
I've worked on a number of very large codebases and that simply isn't my experience. If code is easy to understand it tends to make good use of the domain language and therefore also be easy to search.
> Implementing complex functionality is going to be a nightmare
The opposite, in my experience. The only maintainable way to implement complex functionality is to break it into small pieces.
> everything is going to be merged in a single translation unit anyway.
That's the compiler's business. I don't care one way or the other about its implementation details.
> That's the compiler's business. I don't care one way or the other about its implementation details.
Actually, you do, for at least two reasons:
1. If the runtime or compiler were to have problems with interdependencies.
2. If the compiled code that will actually be executed, or the application or service itself (across cores, processors, VMs, or geography at runtime), takes longer to run because of the compiler's implementation, that might make it more expensive, too slow for your needs, or unable to compete.
lmm's statement was clearly intended to be taken in the context of the statement he was replying to. While your points are valid in general, the fact that the functions will generally be composed into a single translation unit is not an argument against the benefits of making them small.
Felt like this after doing Lisp/Python coming from Java. In the end it's about being sensible about reducing system size, no matter where the 'unit' is.
Depends how well the project is structured. Imagine that you're writing a function that adds some values to a hashmap. Would you rather have the logic, the hashing, and the data-structure details all in that function? Getting a shorter function and reduced complexity in that function is great, even if it doesn't affect the complexity of the whole project.
If the modules are well-designed, you can ignore how the hashmap works and the details of hashing itself. You'll get at least 4 extra files, but yes, it's very likely worth it.
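A rough Python sketch of that boundary (all names here are hypothetical): the hashing and bucket layout live behind a tiny interface, so the calling logic never sees them.

```python
# A small map module that hides hashing and storage details.
# Callers only see put/get; the bucket layout could change freely.
class StringCache:
    def __init__(self, buckets=16):
        self._buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        # Hashing detail, invisible to callers.
        return hash(key) % len(self._buckets)

    def put(self, key, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing entry
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._buckets[self._index(key)]:
            if k == key:
                return v
        return default

# The caller's logic stays one-minute readable: no hashing in sight.
def record_visit(cache, user):
    cache.put(user, cache.get(user, 0) + 1)
```

The complexity hasn't vanished, but it's quarantined: `record_visit` can be understood, and the whole hashing strategy replaced, without either side knowing about the other.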
> I don't think JavaScript's syntax is a selling point
Under "Why OCaml?" on the Reason page, it states, "OCaml has a very mature (and still growing) ecosystem for targeting browser and JavaScript environments with a focus on language interoperability and integration with existing JavaScript code," and "Reason‘s non-invasive approach to the OCaml compiler allows Reason code to take advantage of all of the existing OCaml compiler optimizations/backends such as ... and even JavaScript compilation."
It seems like what's being said is that one of the main goals for Reason is to integrate with JavaScript, and since you'd want the languages to have more in common rather than constantly switching between syntaxes, it makes sense that they are similar. I'm confused as to why you seem to be trying to distance Reason and OCaml from JavaScript, when the similarity and integration with JavaScript definitely seem like a driving factor in Reason's development now, even if they weren't in the beginning.
I personally think that JavaScript is one of the most meaningful and important languages that exists today. This needn't imply that JavaScript's syntax or semantics are to be emulated (though we're seeing a bunch of welcome improvements in recent versions of JS). The thing I like most about JavaScript is that it allows anyone, anywhere to easily deploy and share code with anyone, anywhere.
So I hope this helps explain why many people feel that compiling to JS is important, even if the syntax/semantics of JS are not equally sought after by those same people.
At the same time, we can't overlook that JavaScript has become one of the most popular languages today. If that had been the case thirty years ago, I'm sure the syntax of the ML family would have taken that into account somehow, at least to convey the parts of the two languages that actually are similar, even if only for things as simple as comments. So at least in this V0.0.1 of Reason, some of the things that just don't matter tend to look like things that are familiar to a larger set of people.
I think you've picked up on what looks like an inconsistency in messaging so thanks for pointing it out. I can't speak for everyone else who works on Reason, but I see the two major components (compiling to, and resembling) JavaScript to be largely independent. We can want to compile to JavaScript for totally different reasons than we want some pieces of the grammar to resemble JavaScript and I believe that is the case here. The pieces that currently resemble JavaScript are that way simply because those pieces don't matter too much and why not just be familiar? There are certain syntax features of JS that members of the JS community would tell you were mistakes, and we're not looking to recreate them just for the sake of being like JS.
"If the antibodies already exist within your organization to destroy new endeavors, you need to go outside of the organization to overcome them."
Another option is to just fire employees who are resistant to change and innovation, or move them to other parts of the company where their resistance to change might be more helpful. I don't understand why this option is not considered or discussed.
Well, there's more than one way to block new endeavors.
Many organisations sometimes have costly problems, and try to learn from them by adding new procedures to stop the problems recurring. After a few years, an accumulation of such procedures can raise the costs of a new project significantly - even though every rule is a reasonable one put in place with the best of intentions.
For example, where a startup can test a concept with a PHP website on a single server using MySQL, a large company might have standards calling for a high-availability configuration, a 24/7 support rota, monitoring, logging, testing, automatic scalability, change management, backups, and security to these standards/levels and independently audited, a bug bounty program...
Before you know it, a project a startup could have prototyped with one guy in a week needs several guys and several months, but the reasons for it are all individually reasonable, and firing the people behind them makes no sense at all.
> CIA's policy is probably controlled by someone else, and that's where changes need to be made because I like to think they don't come up with the stuff on their own.
Beyond you liking to think that, what evidence do you have that the CIA's policy is controlled by anyone other than the CIA's Deputy Director who commands internal operations, and its Director, who reports to the director of National Intelligence as well as having to answer to Congress and the White House?
For the most part, theories of secret groups that control things are false, not counting well-known groups like the Masons, etc. that make secrecy part of their identity. There is plenty of evidence to support that Congress and the White House are lobbied heavily by outside interests, and that's no secret. Combine that with the varied interests and agendas in the involved organizations, human error, incorrect or misinterpreted information, etc. and you have plenty enough reason for things like this to go wrong.
If you are an American citizen and you believe the U.S. government is so wrong and misguided, nothing typically stops you from leaving the country or voting for others who might be able to make some changes. But the fact is that the country has an extreme momentum that is simultaneously chaotic and well-intentioned, so no matter who you vote for, things will typically continue on. Well, maybe not if Trump is elected, because the entire country could turn into a sideshow ;) , but this is true for the most part.
If you live in another democratic country, vote for leaders that you believe will positively influence the U.S. in one way or another, or you can speak up about it.
Belief in some shadowy group is just not helpful, and is a result of the imprint on the psyche from movies, television, and other media sources. There are real groups out there with influence, and real single players with influence, but it's really not that hidden; it's just complex.
Not a secret group; I don't believe in that stuff. I should have been clearer and said that part of the government likely has a body which controls and influences the policy of the CIA. Sorry for my imprecise wording.
There are congressional intelligence oversight committees, who I'd like to think are aware of these things.
More likely, there's a little golf club agreement now and then between some corporation or lobbying group and government agencies. For example, if you guys can help us with a little information and maybe a teensy overthrow, we'll give you some better terms on that spy sat you wanted.
Thing is, they're not, as we discovered when the Snowden revelations came to light. Intelligence agencies do, in fact, operate without proper oversight, and are run by means of informal channels to private interests with overseas assets to protect. That's been true since the Dulles brothers.
Well, they were elected by a majority who doesn't believe in science. That's why it's strange that most of the people who believe in majority rule are surprised when democratic elections result in such outcomes.
Would an alternative be any better? Democracy seems the only way forward, the solution is education (difficult if not impossible).
The mob is ignorant so pick someone smart to be in charge? How do you pick? Who decides? Maybe the smartest people are the ones who take power already. Really the money is in charge.
I think you are completely wrong about this. There is plenty of evidence to indicate that "shadow groups" of varied interests, which sometimes align and sometimes don't, are often pulling the strings of "public puppets". Your view, while common (especially in the academic world, where conspiracy is avoided like the plague), doesn't seem to reflect reality.
The conspiratorial view of history is the correct one.
This is about the NSA, not the CIA, but still a nice link.
"As the most recent National Security Advisor of the United States, I take my daily orders from Dr. Kissinger, filtered down through General Brent Scowcroft and Sandy Berger, who is also here. We have a chain of command in the National Security Council that exists today."
You might be correct, depending on your definition of "shadow groups".
For example, George W. Bush initially got elected on a platform of non-intervention. There was, however, a strong core of Neocons in his administration, like Paul Wolfowitz. After 2001 the President switched directions completely and they became considerably more powerful and their lobbying probably directly led to the Iraq war.
Could we start with asking you what kind of evidence would actually satisfy you? How much effort are you prepared to expend in questioning this?
Honestly. What do you need to just to consider the small possibility that your view of things is the incorrect one?
Do you want macro/micro/historical/current evidence and do you expect such evidence to be easily disseminated here or are you just asking without any of these things in mind?
The evidence is there. It just takes a little work to dig through and a lot of reading. So, I suggest trying to look around on your own first. Read about the United Fruit Company. 1950's Iran. Rand Corporation. There's just so much out there already that I think anybody who hasn't read about this stuff by now must not really care.
Not the OP, but it's Sunday afternoon and you've piqued my curiosity. I'm not opposed to conspiracy theories, but my view of the world is one of different groups competing for power without a guiding plan. I find it interesting reading about individual conspiracies and learning how the world really works, but in my view they eventually backfire on the people who instigate them.
Take the United Fruit Company - so they lobbied the US government to instigate a coup in Guatemala. That's devious. (I've just looked up the coup in more detail, and the consequences for Guatemala were horrendous, so I'd say the coup instigators were not merely devious, but outright evil).
Thing is, neither party benefited. The US got involved to prevent communism growing in its backyard, but ended up pissing off the entire region. The UFC wanted to protect its assets in Guatemala, but was forced by Eisenhower to divest them all 4 years later.
What's the grand pattern here? What's the motive behind all these conspiracy theories? Do they intentionally backfire, or are the people pulling the strings just short-sighted?
Thanks, great response. I guess my overall point would be that once you think you know something, you've stopped thinking about it.
I agree that it's a fact that there are different groups competing for power and that there is a constant struggle. I would quibble over the "without a guiding plan" part because I don't know what you mean by that exactly. Large and powerful groups can certainly exert leverage over numerous smaller and less powerful groups.
A guiding plan could be a general philosophy. Look at all of our political and military power systems. They are designed hierarchically, so that the nearer you get to the top, the smaller and more powerful the group is. The pyramid on the dollar bill is obviously symbolic of this. That's indicative of some sort of guiding plan. I mean, there's one group (the Masons) who can get all of their secret symbols implanted into our money forever? That's scary to me.
Regarding United Fruit - yes, the company suffered but can you say that company profit was the prime motive for messing with South America? Have you considered that all warfare starts with economic warfare? What if de-stabilizing a region makes you all sorts of profits in other ways, with other companies? What if you can get all sorts of secret money to do secret things by controlling illicit trade that now comes out of that region?
I have a LOT of questions. How could the Taliban have virtually ended poppy production in Afghanistan in 2001 and yet our own government which supposedly is at "war" with drugs like Heroin, cannot quell the supply from that area which produces 90% of the drug for the rest of the world?
In my view of the world - the puppets change all the time but somehow shit stays the same, so I just am not that sure that there isn't a guiding plan of sorts. (EDIT: Even if that plan is just "greed". Endless greed. And "do what you want" mentality, which is the philosophy of Satan/Lucifer. You might think I'm crazy just for mentioning Satanism but secret power didn't start in modern times.)
I think this statement is hilarious. You tried to make fun of conspiracy theorists, but I have never heard a conspiracy theorist say that they "have evidence, but it's secret".
The reason it's funny to me is because the only time I ever hear about "secret evidence" is when I read about secret courts and prisons that the US government runs, such as the US Foreign Intelligence Surveillance Court. (LOL! It's so funny that people around the world can be prosecuted, killed and tortured because of secret evidence, isn't it?)
"Is The Analysis Different With Source-Only Distribution?
We cannot close discussion without considering one final unique aspect to this situation. CDDLv1 does allow for free redistribution of ZFS source code. We can also therefore consider the requirements when distributing Linux and ZFS in source code form only.
Pure distribution of source with no binaries is undeniably different. When distributing source code and no binaries, requirements in those sections of GPLv2 and CDDLv1 that cover modification and/or binary (or “Executable”, as CDDLv1 calls it) distribution do not activate. Therefore, the analysis is simpler, and we find no specific clause in either license that prohibits source-only redistribution of Linux and ZFS, even on the same distribution media.
Nevertheless, there may be arguments for contributory and/or indirect copyright infringement in many jurisdictions. We present no specific analysis ourselves on the efficacy of a contributory infringement claim regarding source-only distributions of ZFS and Linux. However, in our GPL litigation experience, we have noticed that judges are savvy at sniffing out attempts to circumvent legal requirements, and they are skeptical about attempts to exploit loopholes. Furthermore, we cannot predict Oracle's view — given its past willingness to enforce copyleft licenses, and Oracle's recent attempts to adjudicate the limits of copyright in Court. Downstream users should consider carefully before engaging in even source-only distribution.
We note that Debian's decision to place source-only ZFS in a relegated area of their archive called contrib, is an innovative solution. Debian fortunately had a long-standing policy that contrib was specifically designed for source code that, while licensed under an acceptable license for Debian's Free Software Guidelines, also has a default use that can cause licensing problems for downstream Debian users. Therefore, Debian communicates clearly to their users that this code is problematic by keeping it out of their main archive. Furthermore, Debian does not distribute any binary form of zfs.ko.
(Full disclosure: Conservancy has a services agreement with Debian in which Conservancy occasionally gives its opinions, in a non-legal capacity, to Debian on topics of Free Software licensing, and gave Debian advice on this matter under that agreement. Conservancy is not Debian's legal counsel.)"
I wouldn't be surprised if Canonical is taking a calculated risk here that Oracle doesn't actually care, or would be actively helpful if it meant spiting Red Hat.
Of course, Oracle also owns part of the copyright to the kernel; they still employ some kernel developers. And Oracle could certainly sue distributors for violating their copyright (not on ZFS, but on Linux).
> Nevertheless, there may be arguments for contributory and/or indirect copyright infringement in many jurisdictions
In the United States there are two kinds of indirect infringement: contributory infringement and vicarious infringement.
Contributory infringement can occur when you know that someone else is or will directly infringe, and you substantially aid that by inducing, causing, or materially aiding their direct infringement. That can include providing the tools and equipment they use to infringe.
Vicarious infringement can occur when someone who is a direct infringer is your agent or under your control.
A very important aspect of both of these types of indirect infringement is that they make you liable for the direct infringement of someone else. If there is no someone else who is a direct infringer, then you cannot possibly be a contributory or vicarious infringer.
In Sweden, the Pirate Bay tried a similar argument and failed. The court instead relied on a law that targeted biker bars, created to make it easier to shut down such establishments and prosecute their owners for contributory crimes. The prosecutor only needed to convince the court that the average use was primarily of a criminal nature, which in the Pirate Bay case consisted of a screenshot of the top-100 list. There didn't need to be someone found guilty of an actual infringement.
"Downstream users should consider carefully before engaging in even source-only distribution."
Why would Free/open-source distros have to distribute ZFS source code? Couldn't they simply provide a method for downloading the source from the already existing ZFS repos and then compile the source? Wouldn't that be enough?
It seems like this could have worked if the parent bought points and the kid could redeem earned points for gifts like Amazon gift cards, etc. This model is used by a lot of corporate reward programs.
It may be anything, but I think it's a little strange how they shot this down so fast without any evidence. Makes me wonder if this is a desperate attempt to deter looters.
> Very few Maya constellations have been identified, and even in these cases we do not know how many and which stars exactly composed each constellation. It is thus impossible to check whether there is any correspondence between the stars and the location of Maya cities. In general, since we know of several environmental facts that influenced the location of Maya settlements, the idea correlating them with stars is utterly unlikely.
> In this case, the rectilinear nature of the feature and the secondary vegetation growing back within it are clear signs of a relic milpa. I’d guess its been fallow for 10-15 years. This is obvious to anyone that has spent any time at all in the Maya lowlands.
I didn't know about the relic milpa idea. I guess if there were a lot of thick solid fertilizer just in that rectangular area that wouldn't run off easily, it could possibly do that.