
Saw this today and instantly liked the UX. This is not the first attempt to cross spreadsheets and LLMs, but I like the conceptual simplicity here and how clearly it packages the "multi-chat + one extra dimension" pattern.

I imagine it works by (see the sketch after this list):

- treating non-enrich columns as inputs

- running the prompt contained in the "description" of the enrich-column against a web-enabled LLM

- populating the answer

- repeating for every enrich-column (each column corresponds to a different prompt against the same "subject")

- repeating for each row/subject
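
A minimal sketch of that loop in Python (the function and column names are my own stand-ins, not the product's actual internals):

    def llm_with_web_search(prompt: str) -> str:
        # Placeholder: swap in a real web-enabled model call here.
        return f"<answer to: {prompt}>"

    def enrich(rows: list[dict], enrich_columns: dict[str, str]) -> list[dict]:
        """rows: list of {column: value}; enrich_columns: {column: prompt template}."""
        for row in rows:                                  # repeat for each row/subject
            inputs = {k: v for k, v in row.items() if k not in enrich_columns}
            for col, template in enrich_columns.items():  # one prompt per enrich-column
                prompt = template.format(**inputs)        # non-enrich columns as inputs
                row[col] = llm_with_web_search(prompt)    # populate the answer
        return rows

    rows = [{"company": "Acme Corp"}, {"company": "Globex"}]
    print(enrich(rows, {"ceo": "Who is the CEO of {company}? Answer with just the name."}))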

I run flows like this in Python almost every day, and I think they've captured the pattern perfectly in their UI.

I see they realized that "cold-email marketing" is a killer use-case and built in a templated send-email feature as well.


Now I want an open-source, self-hosted BYOM version of this.


What are the keywords for finding a lawyer who can advise on non-competes?

Asking because it turned out to be nearly impossible to find a local lawyer to advise on a dispute a couple of months ago, with 9 out of 10 telling me they only do divorces or real estate or immigration. I was literally calling them one by one from a list based on what I believed were relevant search criteria on the State Bar website.


https://g-s-law.com/flat-fee-employment-agreement-review/

I had them recommended to me. I have used them and was pleased.


Thank you! Not my state, but it seems spot on, and I can extract keywords from there.


> What are the keywords for finding a lawyer who can advise on non-competes?

If you're in California, try the following; they all know their stuff and aren't in super-costly BigLaw firms (although I think most are BigLaw alumni):

- Betsy Bayha - https://www.linkedin.com/in/betsy-bayha-560107/

- George Grellas - used to post here a lot, not so much recently - https://grellas.com/our-team/george-grellas/

- Sean Hogle - https://www.linkedin.com/in/epiclaw/

- Kyle Mitchell - https://kemitchell.com/


Thank you!


https://github.com/TheRobotStudio/SO-ARM100/tree/main/Simula... I hope this applies to the first gen of the product.


SO-ARM101 has a leader arm, which is an arm with the same exact dimensions and the same servos - but used to read/record the trajectory. You move it with your own hand to teleoperate the follower arm in real time. The follower arm is what's visible in the demo videos.

If you fully control the environment - the exact positions of the arm base and all objects it interacts with - you can just replay the trajectory on the follower arm. No ML necessary.

You can use an LLM to decide which trajectories to replay, and in which order, based on a long-horizon instruction.
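
A sketch of the record/replay idea (read_positions and send_positions are my stand-ins for the servo-bus I/O, not the actual SO-ARM API):

    import time

    def record(read_positions, seconds: float, hz: float = 50.0) -> list[dict]:
        """Sample the leader arm's servo positions while you move it by hand."""
        trajectory = []
        for _ in range(int(seconds * hz)):
            trajectory.append(read_positions())  # e.g. {"shoulder": 512, ..., "gripper": 300}
            time.sleep(1.0 / hz)
        return trajectory

    def replay(trajectory: list[dict], send_positions, hz: float = 50.0) -> None:
        """Write the same positions back to the follower arm at the original rate."""
        for frame in trajectory:
            send_positions(frame)
            time.sleep(1.0 / hz)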


Yes. You are exactly right. If you want the model to have some adaptability, you will need to train a policy like ACT or GR00T.

Just a quick difference I need to point out, as it's a critical product spec: leader arms use the 7.4 V version of the ST3215 (various gear ratios), while follower arms use the 12 V version of the ST3215. (The 12 V version has higher peak torque, close to 3 Nm.)


> various gear ratios

for anyone following along at home: THIS IS NOT A SMALL DETAIL!

There are 12 servos total and 4 different types of 7.4 V ones in the box. Make sure you use the right one, or else you'll waste precious time reassembling the arm.


Thank you for confirming! Love how simple yet magical your demos look, and the elegance of bridging LLM-driven long-horizon planning with the arm.


Wait so the arm isn't doing any learning or moving on its own? I don't understand why you need a leader arm?


It sounds like you use the leader arm to show the robot how the task should be done. If you just used your own arm for the task, the robot would have to translate human movements to its own mechanics (hard), but this way it only needs to replicate the movement you showed it (easier). After you teach it the movement, it can then do it by itself. You show it once and it can repeat a million times.


OK, I was under the impression (due to the cameras) that it was doing something with machine learning or could perform a novel movement. This is just recording movements and playing them back.


If you bridge recorded trajectories with an LVLM, then the cameras are the necessary visual input for the LLM to decide which sub-tasks need to be performed to accomplish a long-horizon task; the sub-tasks correspond to pre-recorded ("blind") trajectories which are replayed.

If you go beyond pre-recorded "blind" trajectories into more robust task policies (which you would have to train from many demonstrations), then the cameras become necessary to execute the sub-task itself.
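
A sketch of that first setup (all names are mine; ask_lvlm stands in for whatever vision-language model call you use, and skills maps sub-task names to pre-recorded replays):

    def run_long_horizon(task, camera, skills, ask_lvlm, max_steps=20):
        """Closed loop: the LVLM looks at the camera, picks the next pre-recorded
        ("blind") trajectory to replay, and repeats until it declares the task done."""
        for _ in range(max_steps):
            image = camera.capture()              # visual input feeds the planner only
            choice = ask_lvlm(image, f"Task: {task}. Skills: {sorted(skills)}. "
                                     "Name the next skill to run, or say DONE.").strip()
            if choice == "DONE":
                return True
            skills[choice]()                      # the replay itself is open-loop / blind
        return False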


Do I understand correctly that the chess-moving demo decomposes into:

- you recorded a precise arm movement using the leader arm for each combination of source and target receptacles/board positions (judging by the shim visible in the video, which I assume ensures the exact relative position of the arm and chess board);

- the recorded trajectories are then exposed as MCP-based functions? (Guessing at the plumbing; see the sketch below.)
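
Purely a guess at the plumbing, but with the official MCP Python SDK it could look something like this; the tool name, trajectory store, and replay helper are my inventions:

    from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

    mcp = FastMCP("chess-arm")

    # Hypothetical: one pre-recorded trajectory per (source, target) square pair,
    # valid only because the shim fixes the board's position relative to the arm base.
    TRAJECTORIES: dict[tuple[str, str], list[dict]] = {}  # filled from recordings on disk

    def replay(trajectory: list[dict]) -> None:
        for frame in trajectory:  # stand-in: stream recorded servo positions to the follower
            pass

    @mcp.tool()
    def move_piece(source: str, target: str) -> str:
        """Replay the pre-recorded trajectory that moves a piece from source to target."""
        replay(TRAJECTORIES[(source, target)])
        return f"moved {source} -> {target}"

    if __name__ == "__main__":
        mcp.run()  # expose move_piece as an MCP tool for an LLM client to call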

Bought the kit. Thank you for the great price! Are table clamps included?


> Whenish is an iMessage app

Where can I read more about using iMessage as a medium for generic multi-player collaboration? Or, if you can just share the right keywords, I'd appreciate that!



I can't find the "Coming from Hackernews?" button. Where should I look for it?


On https://www.magicpatterns.com/ you should see an input box. Type anything you want, hit enter, and then you'll get hit with our regular login panel. But at the bottom, for today only, you'll see a "Coming from Hackernews, no login required" button.


No content, no code. "Roadmap" and "Training pipeline" in the README are summaries of Section 2.3 of the DeepSeek-R1 paper.

Sad.


Many such cases


Sharing ideas early is not a bad thing and is very much encouraged by YC; we are gauging interest in collaboration on the topic. Our company has already open-sourced almost our entire computer-use stack: https://github.com/agentsea


From a practical standpoint, scaling test-time compute does enable datacenter-scale performance on the edge. I cannot feasibly run a 70B model on my iPhone, but I can run a 3B one, even if it takes a lot of time for it to produce a solution comparable to the 70B's 0-shot answer.
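
Back-of-envelope, using the standard ~2x-parameters FLOPs-per-token decode estimate (and ignoring the memory-bandwidth effects that actually dominate on phones):

    flops_per_token = lambda params: 2 * params     # rough decode-cost estimate
    parity = flops_per_token(70e9) / flops_per_token(3e9)
    print(round(parity, 1))  # ~23.3 equal-length 3B samples per one 70B 0-shot pass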

I think it *is* an unlock.


To spend more compute at inference time, at least two simple approaches are readily available (a sketch of the second follows the list):

1) Make the model output a full solution, step by step, then induce it to revise the solution; repeat this as many times as you have token budget for. You can do this via prompting alone (see Reflexion, for example), or you can fine-tune the model to do it. The paper explores fine-tuning the base model to turn it into a self-revision model.

2) Sample step-by-step solutions (one "thought" sentence per line) from the model at non-zero temperature so you can sample multiple candidate next steps. Then use a verifier model to choose between the next-step candidates and preferentially continue the rollouts of the more promising branches of "thoughts". There are many, many methods for exploring such a tree when you can score intermediate nodes (beam search is an almost 50-year-old algorithm!).
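
A minimal sketch of approach 2, assuming you supply the sampler and the verifier (both are stand-ins here):

    import heapq

    def beam_search_thoughts(question, sample_next_step, score,
                             beam_width=4, branch=3, max_steps=10):
        """sample_next_step(question, steps) -> one candidate next "thought" line
        (sampled at non-zero temperature); score(question, steps) -> verifier's
        estimate of how promising the partial solution is (higher is better)."""
        beams = [(0.0, [])]                                  # (negated score, steps so far)
        for _ in range(max_steps):
            candidates = []
            for _, steps in beams:
                for _ in range(branch):                      # sample several next steps
                    nxt = steps + [sample_next_step(question, steps)]
                    candidates.append((-score(question, nxt), nxt))
            beams = heapq.nsmallest(beam_width, candidates)  # keep most promising branches
            # (a real implementation would also retire beams that emit a final answer)
        return min(beams)[1]                                 # best-scoring chain of thoughts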

