Hacker News | jasonhansel's comments

Can you use this with kubecolor? https://github.com/kubecolor/kubecolor

Incidentally: I have no idea why something like kubecolor isn't built into kubectl itself.


Absolutely! kubesafe is simply a wrapper, so you can use it with any Kubernetes tool by passing the tool as the first argument to kubesafe.

Example with kubecolor:

`kubesafe kubecolor get pods --all-namespaces`


That just shifts the problem around: if there's a bug or mistake in the smart contract itself, then you're left with the problem that you can't reverse that smart contract.


Also, you can't patch bugs in the smart contract because people can spot the patch transactions and outbid you to exploit it instead.


As I always ask on posts like this:

Since you're (going to be) selling a product that claims to help with treating or managing a medical condition, have you conducted a clinical trial? And if not, why not?


While developing MyndMap, my goal was to create a tool that helps users manage and prioritize their time more efficiently. It's crucial to note that MyndMap is not a replacement for professional or medical assistance but rather a supplement to aid in personal productivity.

I also collaborated closely with educational and clinical psychologists, drawing insights from their research papers to lay the foundation for our product. My vision was to create a tool that not only aligns with psychological principles but also provides practical benefits in managing and prioritizing time effectively.

In terms of clinical trials, the focus has been on refining the user experience and observing how the software becomes a helpful companion in daily tasks. Since MyndMap is not a medical product, we chose to prioritize user experience and behavioral adaptation over conducting clinical trials. We are dedicated to ongoing improvements based on user feedback to ensure that MyndMap effectively enhances productivity and time management for our users.

Also sorry for the late response.


How do we sign up for the beta? This provides information on how to install TestFlight, but doesn't provide a link to get an invite once it's installed.



This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary. Eventually, the for-profit arm, and its investors, will find its nonprofit parent a hindrance, and an insular board of directors won't stand a chance against corporate titans.


> This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary.

To be frank, they need to really spell out what "benefitting mankind" is. How is it measured? Or is it measured? Or is it just "the board says this isn't doing that so it's not doing that"?

It's honestly a silly slogan.


They should define it, sure. Here's what I'd expect this means:

- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. hire our virtual assistants for $30k a year).

- Making models with a mind to all threats (existential, job replacement, scam uses)

- Potentially open-sourcing models that are deemed safe

So far I genuinely believe they are doing the first two and leaving billions on the table they could get by jacking their price 10x or more.


That's kinda weasel-y in itself.

If a model is not safe, the access should be limited in general.

Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do the following harmoniously:

1. Release new models that do the same things they give others access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible as a nonprofit purportedly geared towards research.)

2. Provide online access to models with a cost model that lets others use them while furthering the foundation.

3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.

4. Not allow potentially unsafe models to be available via less than both research branches.

Perhaps, however, I am too idealistic.

On the other hand, Point 4 is important, because under the current model we can never know whether a previously unsafe model has been truly 'patched' across all its variations.

OTOH, if a given model would violate Point 4, I do not trust the current org to properly disclose the gaps it finds; it would rather quietly patch the UI and intermediate layers than ask whether a fix can be worked around with different wording.


If they jack the prices, they leave too wide a door for other entrants.

Right now, OpenAI mostly has a big cost advantage; fully exploiting that requires lower pricing and high volume.


From my time working on search-related problems at Google, this might be a bit of a winner-take-most market. If you have the users, your system can more effectively learn how to do a better job for the users. The interaction data generated is excludable gold: merely knowing how hundreds of millions use chat bots is incredibly powerful, and if the company keeps being the clear and well-known best, it's easy to stay the best, because the learning system has more high-quality things to learn from.

While Google did do a good job milking knowledge and improving from its queries and interaction data, OpenAI surely knows how to get information from high-quality textual data even better.

OpenAI made an interface where you can just speak your natural language; it didn't make you learn its own pool of bastardized keyword-jargon quasi-command language. It's way more natural.


> "the board says this isn't doing that so it's not doing that"?

I believe that is indeed the case; it is the responsibility of the board to make that call.


Worst part is their own employees don't care about the non-profit's values.


Because they were promised shares for a future IPO?


I thought they were given shares in the privately owned for-profit branch, not that it will ever IPO?


I suspect some do and some don't. Hard to know what the ratio is.


Interestingly it was the other way around this time, at least to start...


This was pretty clearly an attempt by the board to reassert control, which was slowly slipping away as the company became more enmeshed with Microsoft.


I'm not trying to throw undeserved shade, but why do we think this is something as complex as that and not just plain incompetence? Especially given the cloak-and-dagger firing without consulting or notifying any of their partners beforehand. That's just immaturity.


Does that mean that the board's move was actually good for the openness of AI?


The board were literally doing their job. Anyone claiming incompetence is just mistaken about the stated goal of OpenAI: safe, available AGI for all.

Throw in a huge investment, a $90 billion valuation, and a rockstar CEO. It's pretty clear the court of public opinion is wrong about this case.


> safe, available AGI for all.

And they can pick two. GPUs don't grow on trees, so without billions in funding they can't provide it to everyone.

Available means that I should have access to the weights.

Safe means they want to control what people can use it for.

The board prioritised safe over everything else. I fundamentally disagree with that and welcome the counter coup.


Competence is generally about effectiveness of execution, and less about intent. This was a foreseeable hot mess executed with staggering naïveté.


For sure. What league did they think they were playing in?


No it wasn’t. For a long time Sam was the guy.

Then, he progressively sold more and more of the company's future to MS.

You don't need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company becomes a quaint, quiet overachiever.

The company started to hockey-stick and everyone did what they knew: Sam got the investment and money; the tech team hunkered down and delivered GPT-4 and soon -5.

Was there a different path? Maybe.

Was there a path that didn't lead to selling the company for "laundry buddy"? Maybe also.

On the other hand, MS knew what they were getting into when their hundredth lawyer signed off on the investment. To now turn around as surprised Pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.


You're arguing their most viable path was to fire him, wreak havoc and immediately seek to rehire and further empower him whilst diminishing themselves in the process? It's so convoluted, it just might work!

Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks, but thought they must, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.


No, I’m not???


Oh, then my apologies; it's unclear to me what you're arguing. That the disaster they find themselves in wasn't foreseeable?

That would imply they couldn't have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked during unprecedented success, with (unsubstantiated) allegations of wrongdoing, and leap on the story? Generally those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list, or any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by attempting to reinstate him.


They likely could have negotiated it better, but I agree: all these Altman fans really suggest the cult of business is sapping any nonprofit motive.


Their actions were reckless and irresponsible. They are currently picking their successors. The incompetence is staggering regardless of motive.


We're still waiting for explanations from Altman about his alleged involvement with conflicting companies while he is CEO of OpenAI.

According to FT this could be the cause for the firing:

“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”


Isn’t it amazing how companies worry about lowly, ordinary employees moonlighting, but C-suiters and board members being involved in several ventures is totally normal?


I don’t see how that factors in. What matters is OpenAI’s enterprise customers reading about a boardroom coup in the WSJ. Completely avoidable destruction of value.


This is totally irrelevant to the board's initial decision though.


I think what people in this thread and others are trying to say is that to run an organization like OpenAI you need lots and lots of funding. AI research is incredibly costly due to highly paid researchers and an ungodly amount of GPU resources. Putting all current funding at risk by pissing off current investors and enterprise customers puts the whole mission of the organization at risk. That's where the perceived incompetence comes from, no matter how good the intentions are.


I understand that. What is missing is the purpose of running such an organisation. OpenAI has achieved a lot, but is it going in the direction and towards the purpose it was founded on? I do not see how one can argue that it is. For a non-profit, creating value is a means to a goal, not a goal in itself (as opposed to a for-profit org). People thinking that the problem with this move is that it destroys value for OpenAI showcase the real issue perfectly.


Exactly. Add to that the personal smearing of one person and it seems like a very unnecessarily negative maneuver.


It is a complete departure from past stated means without clear justification.


Some would say it is the opposite way around. The mission of OpenAI was not supposed to be maximising profit/value, especially if it can be argued that this goes exactly against its original purpose.


It is hard to negotiate when the investors and the for-profit part basically have much more power. They tried to present them with a fait accompli, as this was their only chance, but they seem to have failed. I do not think they had a better move in the current situation, sadly.


Why’d they smear Sam? Couldn’t they have released a statement saying they just don’t see eye to eye anymore?


You do not fire a CEO because you hold some personal grudge towards them. You fire them because they do something wrong. And I do not see any evidence or indication of smearing Altman, unless they are lying about it (and I see no indication that they are).


That's not their stated goal, you're misinterpreting it by changing the wording.


Sorry, I read a quote from one of the people involved in this mess and assumed it was direct from the company charter.

Can you fill me in as to what the goal of OpenAI is?


> Creating safe AGI that benefits all of humanity

"Benefit" and "available" can have very different meanings when you mix in alignment/safety concerns.


Openness in the context of AI is not straightforward. The open source folks read it one way, and the alignment people read it another.

It is entirely possible a program that spits out the complete code for a nuclear targeting system should not be released in the wild.


Nuclear codes, assuming they are using modern cryptography, would not be spat out by any AI unless they were leaked publicly.

Bigger concern would be the construction of a bomb, which, still, takes a lot of hard to hide resources.

I'm more worried about other kinds of weapons, but at the same time I really don't like the idea of censoring the science of nature from people.

I think the only long term option is to beef up defenses.


>Bigger concern would be the construction of a bomb, which, still, takes a lot of hard to hide resources.

The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval Catholic Church would find excessive.

https://www.theguardian.com/world/2003/jun/24/usa.science


Significantly more info out and available now than when they worked on that project, too. It's only gotten easier.


I feel that people have a right to life and liberty, but liberty does not mean access to god-like powers.

There are many people that would do great things with god-like powers, but more than enough that would be terrible.


I don't think history, looking back at this moment, is going to characterize this as god-like powers.

Monumental, like the invention of language or math, but not like a god.


To be fair, "god-like" is a very subjective term. You could make a claim for many different technical advancements representing god-like capabilities. I'd claim that many examples exist today, but many of them are not readily available to most people for inherent or regulatory reasons.

Now, I feel even just "OK" agential AI would represent god-like abilities: being able to spawn digital homunculi that do your bidding relatively cheaply, with limited knowledge and skill required on the part of the conjuror.

Again, this is very subjective. You might feel that god-like means an entity that can build Dyson Spheres and bend reality to its will. That is certainly god-like, but just a much higher threshold than what I'd use.


If Microsoft had to put out a statement "its all good we got the source code" clearly the openness of OpenAI was lost a while ago. This move of the board was presumably primarily good for the board.


>Microsoft had to put out a statement "its all good we got the source code"

IP lawyers would sell their own mothers for a chance to "wanna bet?" Microsoft.


Uh … no? It’s practically impossible to win a suit with Microsoft to that degree inside a decade. And by then you’ll have lost anyways.


Is AI/AGI safety the same as openness?


According to OpenAI investment paperwork:

It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation. The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.

I guess "safe artificial general intelligence is developed and benefits all of humanity" means an open AI (hence the name) and a safe AI.


no. it's anti-openness. the true value in ai/agi is the ability to control the output. the "safe" part of this is controlling the political slant that "open" ai models allow. the technology itself has much less value than the control that is possible to those who decide what is "safe" and what isn't. it's akin to raiding the libraries and removing any book or idea or reference to historical event that isn't culturally popular.

this is the future that orwell feared.


No, though I think OpenAI at least wants to achieve both.

Whether we can actually safely develop AI or AGI is a much tougher question than whether that's the intent, unfortunately.


"The board" isn't exactly a single entity. Even if the current board made this decision unanimously, they were a minority at the beginning of the year.


So basically, "if you aim for the king, you'd better not miss" kind of situation.


Now known as Prigozhin's Law.


More like last desperate attempt.


The problem, though, is without the huge commercial and societal success of ChatGPT, the AI Safety camp had no real leverage over the direction of AI advancement worldwide.

I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that the "perfect is the enemy of the good."

That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.


Better to have a small but independent voice that can grow in influence than to be shackled by commercial interests and lose your integrity - e.g. how many people actually give a shit what Google has to say about internet governance?


A LOT of people care about what Google does in that area. What they say is kinda redundant.


And everything Google does is in its self interest or prioritizes its self interest. Altruism falls by the sidelines.


> That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did.

If the board were to have any influence they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing their influence that they can never use it.


Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships? It’s ridiculous.


> Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships?

Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board, that's what they're for!


Because they're right. Maybe principles other than, "Get the richest," are important when we're talking about technology that can end the world or create literal hell on Earth (in the long term).

One wishes someone had pulled a similar (in sentiment) move on energy companies and arms suppliers.


I agree. I think a significantly better approach would have been to vote for the elaboration of a "checks and balances" structure to OpenAI as it grew in capabilities and influence.

Internal to the entire OpenAI org, it sounds like all we had was just the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically defaults to siding with the for-profit arm).

I wish they worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all 3 separate, except for "safe AI that benefits humanity".

The truly hard to solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch could exist that was designed to offset the influence of investors, that might have resulted in closer to the right balance.


Why would any employee put up with that? Why not go work somewhere else that better aligns with what you want?


What is it that you want that this doesn’t offer?


I think the idea of separate groups within the company checking and balancing each other is not a great idea. This is essentially what Google set up with their "Ethical AI" group, but this just led to an adversarial relationship with that group seeing their primary role as putting up as many roadblocks and vetoes as possible over the teams actually building AI (see the whole Timnit Gebru debacle). This led to a lot of the top AI talent at Google jumping ship to other places where they could move faster.

I think a better approach is to have a system of guiding principles that should guide everyone, and then putting in place a structure where there needs to be periodic alignment that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but no single org has the "my job is to slow everyone else down" role).


I like this idea, but I'm not sure if "democracy" is the word you're looking for. There's plenty of functioning bureaucracies in everything from monarchies to communist states that balance competing interests. As you say, a system of checks and balances balancing the interests of the for-profit and non-profit arms could have been a lot more interesting. Though honestly I don't have enough business experience to know if this kind of thing would be at all viable.


I've yet to hear what, exactly, underlies the sneering smugness over the notion that the board is going to get their asses handed to them. AFAICT, you have a non-profit with the power to do what they want, in this case, and "corporate titans" doing the "cornered cat" thing.


The most logical outcome would be for Microsoft to buy the for-profit OpenAI entity off its non-profit parent for $50B or some other exorbitant sum. They have the money, this would give the non-profit researchers enough play money that they can keep chasing AGI indefinitely, all the employees who joined the for-profit entity chasing a big exit could see their payday, and the new corporate parent could do what they want with the tech, including deeply integrate it within their systems without fear of competing usages.

Extra points if Google were to sweep in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, when the OpenAI board is down to 4 people, the current leader of whom has prior Google ties, and their interest is to play with AI as an academic curiosity, which a fat warchest would accomplish. Plus if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.


For reference, the new investors entered at a ~$90B valuation.

As for Microsoft, I don't think they need it. Even assuming they have the whole $90B to spend, it doesn't really make sense:

They already have full access to OpenAI's source code and datasets (because the whole training and runtime stack runs on their servers already).

They could poach employees and make them better offers, get away with a much more efficient cost basis, and increase employee retention (whereas OpenAI employees may just become so rich after a buy-out that they'd be tempted to leave).

They can replicate the tech internally without any doubt, and without OpenAI.

Google is in deep trouble for now; perhaps they will recover with Gemini. In theory they could buy OpenAI, but it seems out of character for them. There are strong internal political conflicts within Google, and technically it would be a nightmare to merge the infrastructure and code into their /google3 codebase and the rest of their Google-only dependency soup.


The reason for a buy-out is to make this all legally "clean".

Sure, Microsoft has physical access to the source code and model weights because it's trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or enterprise software system, you'll know that there's a big legal firewall around customer data that is stored within the company's systems, and you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.

Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.

A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.


> They can replicate the tech internally without any doubt and without OpenAI.

Would they be also able to keep up with development?


Would a 2.75 trillion dollar software company that has been around since the inception of the modern computer be able to keep up?

Probably. If the people running it and the shareholders were committed to keeping up and spending money to do so.


That wasn’t sufficient for Bing, Cortana, or Windows Mobile/Windows Phone.


I personally believe these are marketing failures rather than technical failures.

I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories, the cat is out of the bag with AI now.


It wasn't sufficient for Google+ or Farmville either, but both Google and Meta have extremely competitive LLMs. If Microsoft commit themselves (which is a big if), they could have a competitive AI research lab. They're a cloud company now though, so it makes sense that they'd align themselves with the most service-oriented business of the lot.


> both Google and Meta have extremely competitive LLMs

No they don’t. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.


GPT-4 is a magnitude larger and not a magnitude better. Even before that, GPT-3 was not a particularly high-water mark (compared to T5 and BERT), and GPT-2 was famously so expensive to run that it ran up a six-figure monthly cloud spend just for inferencing. Lord knows what GPT-4 costs at scale, but I'm not convinced it's cost-competitive with the alternatives.


GPT-4 is an existential threat to Google. Since March 24 of this year, 80% of the time I ask GPT-4 questions I would google before. And Google knows this. They are throwing billions at it but simply cannot catch up.


From a user's POV, GPT-4 with search might be, but not alone. There's still a need for live results and for citing specific documents. Search doesn't have to mean Google, but it can mean Google.

From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.

LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.


> LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio

Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact I think GPT-4 might already be.


I think they'd only be able to improve the SNR if they know how to separate fact from fiction. While I would love to believe they can do that in 1-2 years, I don't see any happy path for that.


Beating OpenAI in a money-pissing competition is not their priority. I don't use Google or harbor much love for them, but the existence of AI does not detract from the value of advertising. If anything, it funnels more people into it as they're looking to monetize that which is unprofitable. ChatGPT is not YouTube; it doesn't print money.

Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.


> Beating OpenAI in a money-pissing competition is not their priority

I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.


For the sake of your wallet, I hope you don't put money on that. Google certainly spends an order of magnitude more than OpenAI because they have been around longer than them, ship their own hardware and maintain their own inferencing library. The amount they spend on training their LLMs is the minority, full-stop.

I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.


> Google's advantage here is so blatantly obvious

Exactly. Google has so much more resources, tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that - if you haven't tried comparing Bard's output to GPT-4 for the same questions - try it, it will become obvious.

It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.


If you see "beating GPT-4" as an actual goalpost, then sure. Google doesn't; their output reflects that.


Why does ChatGPT-4 say its knowledge cut off date is April 2023?

https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...


There are many versions of GPT-4 model that appeared after the first one. My point is that Google and others still cannot match the quality of the first one, more than a year after it was trained.


According to the Bard technical paper (page 14), their model beats GPT-4 in several reasoning benchmarks: https://ai.google/static/documents/palm2techreport.pdf


The larger the corporation, the harder it is for it to keep up or innovate.


Doesn't Mozilla run the same way, with a for-profit under a non-profit?


How’d that work out for them?


They still exist and put out a product used by quite a few people, so quite well?

You can't judge a non-profit by the same success metrics as a for-profit.


They have been on a consistent downward spiral into irrelevance. So I'd say … not fine.


They are the last survivor offering an alternative to Blink and seem to be indefinitely sustainable for their core product.

Other companies tried competing against Chrome, and so far Mozilla is the most successful, as everyone else gave up and ships Chrome skins that people basically only use through subterfuge or coercion. I'd say that's pretty good.


Really well - they make the best web browser around!


Looks fine to me, man. I asked ChatGPT to summarize this (https://en.wikipedia.org/wiki/Mozilla_Corporation) in the context of your question:

Mozilla Corporation's Experience

*Challenges and Adaptation:* Mozilla Corporation has faced financial challenges, leading to restructuring and strategic shifts. This includes layoffs, closing offices, and diversifying into new ventures, such as acquiring Fakespot in 2023

*Dependence on Key Partnerships:* Its heavy reliance on partnerships like the one with Google for revenue has been both a strength and a vulnerability, necessitating adaptations to changing market conditions and partner strategies

*Evolution and Resilience:* Despite challenges, Mozilla Corporation has shown resilience, adapting to market changes and evolving its strategies to sustain its mission, demonstrating the effectiveness of its governance model within the context of its organizational goals and the broader technology ecosystem

In conclusion, while both OpenAI and Mozilla Corporation have navigated unique paths within the tech sector, their distinct governance structures illustrate different approaches to balancing mission-driven goals with operational sustainability and market responsiveness.


It is also why you don't bring on board members you can't trust. I doubt getting ousted was what Sam, Greg, and Elon had in mind when they picked it.



> Ilya has a good moral compass and does not seek power.

At this point that coming from Elon may not be the endorsement you think it is.

Also maybe Elon sees that Ilya is going to be ousted and wants to extend a hand to him before others do.


Regardless, Elon appears to back the board contrary to the suggestion above.


My point was actually a little different: that a board of Elon, Sam, and Greg wouldn't have picked this, and that board control is important.


See Mozilla for proof of this statement


They quickly realized that without money you really can't do as much.


Don’t be evil.


Yeah, but also remember that Altman and Musk started the non-profit to begin with (back when both their reputations were much different). They were explicitly concerned about Google's dominance in AI. It was always competitive, and always about power.

Wikipedia gives these names:

In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture

Do any of those people sound like their day job was running non-profits? Had any of them EVER worked at a non-profit?

---

So a pretty straightforward reading is that the business/profit-minded guys started the non-profit to lure the idealistic researchers in.

The non-profit thing was a feel-good ruse, a recruiting tool. Sutskever could have had any job he wanted at that point, after his breakthroughs in the field. He also didn't have to work, after his 3-person company was acquired by Google for $40M+.

I'm sure it's more nuanced than that, but it's silly to say that there was an idealistic and pure non-profit, and some business guys came in and ruined it. The motive was there all along.

Not to say I wouldn't have been fooled (I mean certainly employees got many benefits, which made it worth their time). But in retrospect it's naive to accept their help with funding and connections (e.g. OpenAI's first office was Stripe's office) and not think they would want to get paid back later.

VCs are very good at understanding the long game. Peter Thiel knows that most of the profits come after 10-15 years.

Altman can take no equity in OpenAI because he's playing the long game. He knows it's just "physics" that he will get paid back later (and that seems to have already happened).

---

Anybody who's worked at a startup that became a successful company has seen this split. The early employees create a ton of value, but that value is only fully captured 10+ years down the road.

And when there are tens or hundreds of billions of dollars of value created, the hawks will circle.

It definitely happened at say Google. Early employees didn't capture the value they generated, while later employees rode the wave of the early success. (I was a middle-ish employee, neither early nor late)

So basically the early OpenAI employees created a ton of value, but they have no mechanism to capture the value, or perhaps control it in order to "benefit humanity".

From here on out, it's politics and money -- you can see that with the support of Microsoft's CEO, OpenAI investors, many peer CEOs from YC, weird laudatory tweets by Eric Schmidt, etc.

The awkward, poorly executed firing of the CEO seems like an obvious symptom of that. It's a last-ditch effort for control, when it's become obvious that the game is unfolding according to the normal rules of capitalism.

(Note: I'm not against making a profit, or non-profits. Just saying that the whole organizational structure was fishy/dishonest to begin with, and in retrospect it shouldn't be surprising it turned out this way.)


This makes a lot of sense. I wonder if the board's goal in firing Sam was to make everyone (government, general public) understand the for-profit motives of Sam and most employees at this point.

Either Sam forms a new company with mass exodus of employees, or outside pressure changes structure of OpenAI towards a clear for-profit vision. In both cases, there will be no confusion going forward whether OpenAI/Sam have become a profit-chasing startup.

Chasing profits is not bad in itself, but doing it under the guise of a non-profit organization is.


Thank you. Not a lot of things remind me of this heady stuff, but this comment did. So here goes.

---

A Nobel Prize was awarded to Ilya Prigogine in 1977 for his contributions in irreversible thermodynamics. At his award speech in Stockholm, Ilya showed a practical application of his thesis.

He derived that, in times of superstability, lost trust is directly reversible by removing the cause of that lost trust.

He went on to show that in disturbed times, lost trust becomes irreversible. That is, in unstable periods, management can remove the cause of trust lost--and nothing happens.

Since his thesis is based on mathematical physics, it occupies the same niche of certainty as the law of gravity. Ignore it at your peril.

-- Design for Prevention (2010)


One answer would be to provide something like a GetPointer() method which, if the inner pointer is nil, creates a new struct of type T and returns a pointer to it.
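
A minimal Go sketch of that idea; `Optional`, `GetPointer`, and `Config` are hypothetical names chosen only to illustrate the lazy-allocation approach described above:

```go
package main

import "fmt"

// Optional is a hypothetical nil-safe wrapper around a pointer to T.
type Optional[T any] struct {
	ptr *T
}

// GetPointer returns the inner pointer, lazily allocating a zero-valued
// T if none exists yet, so callers never dereference nil.
func (o *Optional[T]) GetPointer() *T {
	if o.ptr == nil {
		o.ptr = new(T)
	}
	return o.ptr
}

type Config struct {
	Name string
}

func main() {
	var opt Optional[Config]          // starts with a nil inner pointer
	opt.GetPointer().Name = "example" // first call allocates the Config
	fmt.Println(opt.GetPointer().Name)
}
```

The trade-off is that a "missing" value silently becomes a zero value on first access, so callers that need to distinguish "unset" from "zero" would still have to check the inner pointer themselves.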


Unfortunately, the fake hitman site has been shut down, apparently due to financial difficulties: https://rentahitman.com/


It looks like their form submission handler is being annoying, probably because it was being asked to store a database of murder requests.

In the age of free static hosting and insane amounts of otherwise free computing power, it's hard to believe it's "financially difficult" to serve a web page and store what is probably one form submission per week at most.

Also from the website:

> I have purposefully refrained from accepting funds from any source to avoid any misinterpretation.

and in the next paragraph:

> [if you] feel inclined to contribute, you can do so by clicking the SUPPORT link

Guy's weird.


I mean, the entire site is pretty clearly a meme and meant to be a joke. I’m not sure why you’re taking it so seriously.

They also pretty clearly specify that “any source” means LEO, governmental agencies and other organizations in the next paragraph.


> This website uses yummy cookies.

ZAHFjkdahflkasd

> We are 100% HIPPA Compliant

sdajfkghlkhgfjhlgasldhjkfgblhj

(this website is a meme, who would ever believe it??)


Believe it or not, literacy is a skill. Many have it and many more don't. Not to mention, humour is difficult. So never set your expectations too high.


literacy is easy. I read word, make sound.

common sense, credulity, and being able to place things in contexts, esp. ones you might not know, is also a skill, and arguably what this person was missing.

Like, she was able to email and negotiate with the undercover officers; she clearly had basic literacy.


Literacy is an effort.


It's only entrapment if you wouldn't have committed the crime otherwise. I think it's safe to say that, had Rentahitman.com not existed, she would still have tried to rent a hitman if she were able to do so.


But where would she find one? Go to the second result on Google? Most people would never be able to find a real hitman, since it would require taking very obvious risks, and would just give up.


This is the sort of thing that makes me wonder if I'm backward as all get-out. I've never been very tough-on-crime, but if you genuinely hire a hit (murder) on somebody through a shady website, and agree on a contract, well, this is an act that seems indefensible.

The fact that it was a parody site seems amusing, but not defensible, since presumably she was not aware of the parody.


The whole part about her meeting the hitman at a Waffle House, and giving him a $100 down payment seals the deal. That's not LARPing. That's an overt act.

Guess I'm backward as well.


Well this is going to be depressing.


> Some people find my indifference off-putting, thinking I'm rude or arrogant. But I'm not. I'm just being honest.

The famous excuse of rude and arrogant people everywhere.

As Richard Needham once said: "The person who is brutally honest enjoys the brutality quite as much as the honesty. Possibly more."

