Hacker News | crashabr's comments

Honestly not sure what I would be using this for when there's Claude remote control. Is it because you can script the Telegram bot to send messages at regular intervals? But Claude has a /loop as well, so I'm still confused.

The Telegram bot is just an example (and, I guess, a subtle jab at Openclaw, which people tend to use via Telegram). Personally, I'm hoping to set this up so it can receive GitHub webhooks when a pull request opened by Claude Code receives comments.
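For anyone wiring up something similar: the receiving end mostly comes down to verifying GitHub's HMAC signature before trusting the payload. A minimal sketch of just that step (the header format is from GitHub's webhook docs; the function name and secret are illustrative):

```python
import hashlib
import hmac


def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw request body.

    GitHub sends "sha256=<hexdigest>" computed over the body with the webhook
    secret; compare_digest avoids timing side channels.
    """
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Anything that fails this check gets dropped before the payload is handed to the agent.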

You can't, for example, send an image from the Claude iOS app to a remote control session. With these new channels, an attachment you send from the Telegram bot is saved into your local Telegram inbox folder, ready for you to process.

Remote Control is buggy as hell; the websocket keeps disconnecting every 10 minutes. And the UI is unusable on mobile.

You have an amazing tagline. This is the first time I've read a tagline and thought: this is exactly what I was looking for.

But the product seems much narrower than an actual tool to run a whole business in markdown. I was hoping to see Logseq on steroids, and it feels primarily like a tool builder. I love the tool-building aspect, but the fundamentals of simply organizing documents (docs, presentations, assets, etc., the basics of a business) are either not part of the core offering or not presented well at all.

I love the idea of building custom tools on top of MD, and it's part of my wishlist, but I felt a little deceived by your tagline, so I wanted to share that :)


This is great feedback, thank you. I will say that IS our goal... but we only really launched last week and are still figuring out what resonates with people and what they really want! It sounds like you're saying the organization aspects are not there, which is very helpful to know... I'm not quite sure whether you also think the tool building is lacking?

If you are open to it, I'd love the opportunity to hear more. Here or email (alex@moment.dev) or our Discord (bottom right of our website) or Twitter/X... or whatever you prefer.


No, the tool building looks very sophisticated and powerful and I love that it hinges very much on the new era of building your own custom tools with the help of agents. The live collaboration on top of md files is also exactly what I was looking for!

If Logseq on steroids is what you're aiming for, then my immediate feedback would be to emphasize more:

- the writing experience: at the end of the day, writing and taking notes will be the most common activity
- the file organisation: tags, templates, media files. Does it do the basics?
- the sharing and access mechanisms: can I easily share a doc with a partner / client?

Those are the basics of daily business tasks for my consultancy, and so the first things I look for. I really want to get off Google Drive, but those points need to be solved for that to sound feasible.

As for the tool building, it looks very powerful, but the first example you presented (the on-call dashboard) was a bit too much from the get-go to wrap my head around the building blocks of your system. I've been building custom tools/wrappers of varied complexity on top of markdown for my team: from a custom revealJS skill that follows our design guide, to a form builder, to a project/client DB that wraps duckdb (for YAML frontmatter parsing) with a semantic layer. I've watched your intro video, but I'm still not sure whether your service would help me integrate those tools more closely with my company's knowledge base.
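For context on that last setup: the frontmatter-extraction step underneath that kind of wrapper is simple enough to sketch. This is not the actual tool, just a minimal illustration that handles flat `key: value` pairs (real YAML needs a proper parser, and duckdb queries would sit on top of the resulting dicts):

```python
def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Split a markdown document into (frontmatter dict, body).

    Minimal sketch: only flat `key: value` pairs between `---` fences,
    no nested YAML, lists, or type coercion.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body
```

Each markdown file's metadata then becomes one row in whatever table the semantic layer exposes.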

But once again, if your vision matches your tagline, then I'm really looking forward to hearing more from you.


Is it possible to link/wrap several skills together? I haven't managed to get Claude to react to a reference to another skill within a skill.

I have a skill that Claude created to run the rest. It mentions each skill in turn; see below. It's not deterministic, but it definitely runs each skill, and it has raised a bunch of issues, which I then selectively deal with. Where I can, once an issue is identified, I write deterministic tests.

Text includes:

Invoke each review/audit skill in sequence. Each skill runs its own comprehensive checks and returns findings. Capture the findings from each and incorporate them into the final report.

IMPORTANT: Invoke each skill using the Skill tool. Each skill is independently runnable and will produce its own detailed output. Summarize findings per skill into the unified report format.

4. Architecture Health

Invoke: Skill(architecture-review)

Covers: module boundaries, cross-module communication, dependency direction, infrastructure layer rules, hexagonal architecture compliance.

5. Security Health

Invoke: Skill(security-review)

Covers: hardcoded secrets, SQL injection, authorization, HTTPS, CORS, input validation, authentication patterns.


Looking forward to trying this with my students. Thanks!


Set it up and never managed to get it working. The only thing it did was rename my sessions on my main cc instance. Mobile did nothing, not even an error message.


This is broadly how I worked when I was still using chat instead of CLI agents for LLM support. The downside, I feel, is that unless it's a codebase / language / architecture I don't know, it feels faster to just code by hand with the AI as a reviewer rather than a writer.


Would that book be useful as a reference to introduce data journalism students to AI? I'm less interested in the basics of using the API or Claude Code than in best practices for workflows dealing with unstructured data, entity extraction, and automated pipelines (with evals). Although I do have some decent workflows around this, I'd be interested in reading from someone who lives and breathes this kind of work. Pure data analysis is also an area where I haven't found a good bridge between the current "generate a Python script for me that I'll double-check" paradigm and the spreadsheet-centric world of most data journalists.


The book is likely a good fit for this type of work. The chapter on structured outputs shows how to extract data from text, walking through prompt engineering and k-shot examples to generate JSON, then Pydantic, then batch processing with the different providers.
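To give a flavor of the structured-outputs idea for entity extraction: you define a schema and validate the model's JSON against it, so malformed generations fail loudly instead of silently corrupting your data. A dependency-free sketch (the book uses Pydantic; the dataclass here is a stdlib stand-in, and the field names are invented for illustration):

```python
import json
from dataclasses import dataclass


@dataclass
class Entity:
    name: str
    kind: str  # e.g. "person", "org", "place"


def parse_entities(llm_output: str) -> list[Entity]:
    """Validate the model's JSON against the expected schema.

    Raises KeyError/JSONDecodeError on bad generations, which is the
    behavior you want in an automated pipeline: fail, log, retry.
    """
    raw = json.loads(llm_output)
    return [Entity(name=item["name"], kind=item["kind"]) for item in raw]
```

Pydantic gives you the same guarantee with richer validation (types, enums, nested models) and better error messages.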

It also shows how to set up evals in different parts of the book. (Depending on what you want to do: the structured outputs chapter has evals comparing models/prompt changes against ground truth, and the agent chapter has LLM-as-a-judge evals.)


What's the hook for switching out of plan mode? I'd like to launch a planning skill whenever Claude writes a plan, but it never picks up the skill, and I haven't found a hook that can force it to.
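For reference, the closest thing I've found is a PostToolUse hook in settings.json matched on the ExitPlanMode tool. A sketch of the direction (treat the matcher name and the exit-code trick as assumptions on my part; as noted, I haven't gotten this to reliably force the skill):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "ExitPlanMode",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Plan approved: consider invoking the planning skill' >&2; exit 2"
          }
        ]
      }
    ]
  }
}
```

The idea being that a non-zero exit feeds the stderr message back to Claude as context, nudging rather than forcing.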


Could you share your setup? Also a Logseq fan here.


Would you be able to share more? I lead a tiny non-profit doing data literacy mentoring, and I've been meaning to move more of our process docs to Logseq. Although I probably don't need a tool as sophisticated as usm.tools, I could take inspiration from your core ideas for our homegrown system.


To understand the approach, you first need to understand the method it is based on.

I have written a simple introduction about it that you can download for free from simpleusm.com, no sign-up required.

A simple homegrown system for processes is not that difficult to build. You basically model the USM process model, with templates as instances that you then copy as a basis for editing, and build a UI around that editing.

You could even just use JSON files and Git, but while the data model is not complex, it is still not simple enough to edit by hand in an editor.
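To make that concrete, a single process instance in such a JSON-and-Git setup might look roughly like this (the field names are invented for illustration, not USM's actual schema):

```json
{
  "service": "data-literacy-mentoring",
  "process": "agree",
  "template": "usm/agree@1.2",
  "steps": [
    { "name": "intake request", "role": "coordinator" },
    { "name": "scope agreement", "role": "lead mentor" }
  ]
}
```

Even at this size you can see why hand-editing gets error-prone once instances multiply, hence the UI.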

Then the question is what the benefit is. I would say that just using USM to define your services is helpful.

With this approach you can build various stakeholder views of your services that are always up to date and do not require manual labor.

