Hacker News | kozikow's comments

I did lose 30-40kg about 2 years ago on Ozempic.

I don't count calories. I went off Ozempic (now Mounjaro) and I gain weight at about 0.5-1kg a month.

As I am resistance (gym) training, a significant % of that ends up being muscle mass rather than fat.

So I end up taking Mounjaro for about 1-2 months every 3-4 months, approximately 33% of the time being "on".

Funnily, I end up with bulk/cut periods without doing them explicitly. This ends up working well for growing muscles.

Notice all the people in the story are women. I guess pairing GLP-1-likes with bodybuilding works quite well for men. As time goes on, I need Mounjaro less due to my increased muscle mass.


When you restart taking it after an "off" period, do you immediately resume whatever dose you were at before? Or do you taper up each time? (Curious because I know for many people, side effects level out after they've been at a given dose for long enough, but temporarily return whenever they increase their dose; not sure how "off" periods affect that kind of tolerance.)

> I don't count calories. I went off Ozempic (now Mounjaro) and I gain weight at about 0.5-1kg a month.

Yeah, that's how people become obese. Over 2-5 years it adds up.

> Notice all people in the story are women.

Probably due to social media. Women may be more inclined to show off their success online. Also, women respond better to these drugs than men do.


The H-1B program was already broken by the lottery. This new fee just solidifies the L-1 visa as the real high-skilled pipeline. More L-1 visas are already approved annually than new H-1Bs, and this policy only widens that gap.

In addition to L-1, O-1 is also often gamed. The $100K for H-1B is mostly "posturing" at this point, as voters don't know about the other options.


L-1 has been broken for decades as well. The same problems that impact an H1B impact an L1 as well.

The only way abuse of both visas can stop is if they are not tied to an employer, allowing free movement of labor. Then, if someone talented is at TCS, they can either demand a salary that matches their skills or move to an employer who will pay it.

Additionally, federal, state, and local governments need to start playing the subsidy game that Poland, Romania, Czechia, India, Israel, and other countries play to attract offshore offices.

> H1B is mostly "posturing" at this point, as voters don't know about other options

I disagree. This was clearly timed to distract and overshadow the Gold and Platinum card announcement.


> the subsidy game that Poland, Romania, Czechia, India, Israel, and other countries play to attract offshore offices.

Do you mean the US government must dramatically reduce the cost of living by offering subsidized housing and investing in education, healthcare, etc.? When I hire, I never consider the USA, and nobody pays me to find skilled labor in Eastern or Central Europe. You can pay half of an American salary there, and that income puts people in the upper middle class, able to afford a lot and live a comfortable life.


> The only way abuse of both visas can stop is if they are not tied to an employer, allowing free movement of labor.

https://www.uscis.gov/working-in-the-united-states/temporary...

    Changing or Leaving Your H-1B Employer
    Q. What is “porting”?

    A. There are two kinds of job portability, or “porting,” available based on two different kinds of employer petitions:

    H-1B petition portability: Eligible H-1B nonimmigrants may begin working for a new employer as soon as the employer properly files a new H-1B petition (Form I-129) requesting to amend or extend H-1B status with USCIS, without waiting for the petition to be approved. More information about H-1B portability can be found on our H-1B Specialty Occupations page.

    ...

    Q. How do I leave my current employer to start working for a new employer while remaining in H-1B status?

    A. Under H-1B portability provisions, you may begin working for a new employer as soon as they properly file a non-frivolous H-1B petition on your behalf, or as of the requested start date on the petition, whichever is later. You are not required to wait for the new employer’s H-1B petition to be approved before beginning to work for the new employer, assuming certain conditions are met. For more details about H-1B portability, see our H-1B Specialty Occupations page, under “Changing Employers or Employment Terms with the Same Employer (Portability).”
---

Someone on a H-1B visa can change jobs as soon as the other employer files a form I-129 to hire them.


That process remains tied to an employer, which is the very point that was originally made.

Many employers simply won’t do that paperwork by policy and treat that process as no different than sponsorship.


It still means you cannot, for instance, quit to escape intolerable conditions, unless you already have another job lined up.

It also means that you're much, much less likely to find another employer willing to fill out the paperwork to hire you—especially if they also have to pay the $100k fee (yes, I know, the announcement doesn't say they have to—wanna take bets on whether Trump would say they do if he learned that it's possible?).


Is Trump the great puppet master people believe him to be, or is it more likely someone like Lutnick? Think about that. It's hard to imagine a fool easier to manipulate than Trump, and I was here for GW Bush!


...I don't particularly credit any grand strategy to Trump, nor do I think my post suggested that?

It's possible that someone intended the knock-on effects I describe, but I would say it's just as likely that it's pure coincidence that they support the right's desire to hurt labor as a whole.


Ads inside LLMs (e.g. pay $ to boost your product in LLM recommendations) are going to be a big thing.

My guess is that Google/OpenAI are eyeing each other over who does this first.

Why would that work? Because it's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). A retailer pays if a link to their website is included in the results. Don't want to pay? Then the user gets redirected to buy it on Walmart instead of Amazon.


I actually encountered this pretty early in one of those user-tuned GPTs in OpenAI's GPT store. It was called Sommelier or something and specialized in conversations about wine. It was pretty useful at first, but after a few weeks it started lacing all its replies with tips for wines from the same online store. Needless to say, I dropped it immediately.


Forget links, agents are gonna just go upstream to the source and buy it for you. I think it will change the game because intent will be super high and conversion will go through the roof.


Yeah I’m gonna give an AI agent my credit card and complete autonomy with my finances so it can hallucinate me a new car. I love getting findommed.


Look, the car shop might not bill you at all because their AI agent will hallucinate the purchase, so I don't see why you're so pessimistic about agents.


It can still give you an overview with a few choices and a link to the prepared checkout page, and you enter your CC details yourself.


That’s basically what any halfway decent e-commerce site is today


Feels like this hope is in the same vein as Amazon Dash and then the expectation that people would buy shit with voice assistants like Alexa.


People are already wary of hosted LLMs having poisoned training data. That might kill them altogether and push everyone to local models, e.g. Qwen3-Coder.


No, a small group of highly tech-literate people are wary of this. Your personal bubble is wary of this. So is some of mine. "People" don't care and will use the packaged, corporate, convenient version with the well-known name.

People who are aware of that and care enough to change consumption habits are an inconsequential part of the market.


I don't know, a bunch of the older people from the town I grew up in avoided using LLMs until Grok came out because of what they saw going on with alignment in the other models (they certainly couldn't articulate this, but listening to what they said, it's what they were thinking). Obviously Grok has the same problems, but I think it goes to show the general public is more aware of the issue than they get credit for.

Combine this with Apple pushing on-device inference and making it easy, and anything like ads will probably kill hosted LLMs for most consumers.


Yeah, average people that I know (across continents) just ChatGPT their way into literally anything without a second thought. They don't care.


Maybe Grok was just pushed by their political influencers. It’s a republican, anti-woke LLM after all.


Who doesn’t want to associate their product with unreliability and incorrect information? Think about that reputational damage.


> And realize the thing on your head adds absolutely nothing to the interaction.

There are some nice effects - simulating sword fighting, shooting, etc.

It's just that the costs still outweigh the benefits. Getting to "good enough" for most people just isn't possible in the short or medium term.


Mindless optimization of the basic "attention grab" metric is why the whole internet feels like a slot machine, be it Reddit, Facebook, YouTube, or any Google result.

Thankfully this won't happen with LLMs, as compute is too expensive, so execs can't just take the easy way out and optimize for the number of questions asked.


More to the point: at this stage it feels to me that arenas are getting too focused on fitting user preferences rather than measuring actual model quality.

In reality I prefer different models for different things, and quite often it's because model X is tuned closer to my preferences - e.g. Gemini tends to be the best in non-English, ChatGPT works better for me personally for health questions, ...


I am a big fan of "cost monitoring".

At my previous company I had a good setup for it - including release-to-release comparisons, drill-downs, statistics, etc.

After each release I looked at this data. It saved a lot of $ through simple fixes like "why are we calling this API twice?".

It also caught quite a few issues that weren't strictly customer-facing but weren't apparent from other types of data (you will always have some "unknown unknowns" in your monitoring, and cost data seems to be a pretty wide net for catching some of those).
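For what it's worth, the release-to-release comparison I'm describing can be sketched in a few lines. Everything here is made up for illustration - the endpoint names, the dollar figures, and the 25% threshold - but it's the shape of the check that caught things like the duplicated API call:

```python
def cost_deltas(prev_release, new_release, threshold=0.25):
    """Flag endpoints whose cost grew by more than `threshold`
    (as a fraction) between two releases, or that are brand new."""
    flagged = {}
    for endpoint, new_cost in new_release.items():
        old_cost = prev_release.get(endpoint, 0.0)
        if old_cost == 0.0 or (new_cost - old_cost) / old_cost > threshold:
            flagged[endpoint] = (old_cost, new_cost)
    return flagged

# Hypothetical per-endpoint monthly API costs ($) from billing exports.
v1 = {"/search": 120.0, "/profile": 40.0}
v2 = {"/search": 118.0, "/profile": 81.0}  # /profile ~doubled: called twice?

print(cost_deltas(v1, v2))  # {'/profile': (40.0, 81.0)}
```

In practice the inputs came from the billing data rather than hand-written dicts, and the drill-downs went per-customer and per-feature, but the diff-and-threshold core is this simple.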


What levels of observability did you have for costs of data transfer and how did you do it?


ChatGPT content is getting pasted all over the web. Now, for anyone crawling the web, it's hard not to include some ChatGPT outputs.

So even if you put some "watermarks" in your AI generation, pointing to publicly posted content with those watermarks is a plausible defense.

Maybe it's explained in the article, but I can't access it, as it's paywalled.


> the harder it is for me to use these tools in a way that doesn’t feel like too much blind faith (even if it works!)

I tend to ask multiple models and if they all give me roughly the same answer, then it's probably right.
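A toy version of that "ask several models and compare" heuristic, assuming you've already collected the answers as plain strings (the 0.8 similarity threshold and the difflib metric are arbitrary stand-ins for something smarter, like comparing extracted claims):

```python
from difflib import SequenceMatcher
from itertools import combinations

def models_agree(answers, threshold=0.8):
    """Rough agreement check: do all pairs of answer strings
    overlap enough after trivial normalization?"""
    norm = [a.lower().strip() for a in answers]
    return all(
        SequenceMatcher(None, x, y).ratio() >= threshold
        for x, y in combinations(norm, 2)
    )

print(models_agree(["Paris is the capital.", "paris is the capital."]))  # True
print(models_agree(["Paris is the capital.", "It's Lyon."]))             # False
```

Surface similarity is a crude proxy for "roughly the same answer", of course - two models can phrase the same fact very differently - but it captures the idea of treating consensus as weak evidence of correctness.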


Also keeping context short. Virtually all my cases of bad hallucinations with o1 have been when I've provided too much context or the conversation has been going on for too long. Starting a new chat fixes it.

You can see this effect in the ARC-AGI evals; too much context impacts even o3 (high).


> if they all give me roughly the same answer, then it's probably right.

... or they had a lot of overlapping training data in that area.


Or maybe they were just trained on the same (incorrect) dataset.


> Cersei Lannister: Power is power.

Knowledge is a necessary, but not sufficient, component of power.

Or in other words: observability is a necessary, but not sufficient, component of optimization.


Power determines what knowledge is possible.

