I currently use Banktivity, which is OK. I'd love to hear from anyone else who has used Banktivity and migrated to something else. Ideally, there should be OFX support.
FYI you should have used llama.cpp to do the benchmarks. It performs almost 20x faster than ollama for the gpt-oss-120b model. Here are some sample results on my Spark:
Is this the full-weight model or the quantized version? The GGUFs distributed on Hugging Face labeled as MXFP4 quantization have layers that are quantized to int8 (q8_0) instead of bf16, as suggested by OpenAI.
For example, blk.0.attn_k.weight is q8_0, among other layers.
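You can double-check the per-tensor types yourself with the gguf-py package that ships with llama.cpp (a minimal sketch, assuming pip install gguf; the model file name is a placeholder):

```python
# Sketch: list the quantization type of every tensor in a GGUF file.
# Assumes the gguf-py package from llama.cpp; the path below is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("gpt-oss-120b-mxfp4.gguf")
for tensor in reader.tensors:
    # tensor_type is a GGMLQuantizationType enum member, e.g. Q8_0 or BF16
    print(f"{tensor.name:40s} {tensor.tensor_type.name}")
```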
One of the quite expensive paid plans, as the free one has to have "Created with Datawrapper" attribution at the bottom. I would guess they've vibe-coded their way to a premium version without paying, as the alternative is definitely outside individual people's budgets (>$500/month).
Inspecting the page, I can see some "dw-chart" classes, so I looked it up and got to this: https://www.datawrapper.de/charts. It looks a bit different on the page, but I think that's it.
I saw a TikTok of someone saying that farmers are not stupid (given the wide variety of skills it takes to farm successfully) and that they were just betting on Trump not actually going through with the tariffs.
It's hard to have any sympathy for that kind of cynical behavior when it comes paired with asking for handouts. Especially since the same people probably voted against others getting social services.
It also hurts when I drop the iPad mini on my face. In fact, I was considering getting a Pro Max to replace my iPhone Pro and iPad mini combo, but figured it might be too big of a compromise.
I wonder if anyone has successfully gone down this path.
Not yet, but hope to have something up in September! It’s unfortunately not mosh compatible - I thought about that, but didn’t see a lot of value, and there were some downsides, like re-implementing an encryption layer that doesn’t make sense if you use WebRTC. Just curious, what’s your use case for mosh compatibility?
Hmm, we support prompts at both 1. the model level (Whisper supports a "prompt" parameter that sometimes works) and 2. the transformations level (inject the transcribed text into a prompt and get the output from an LLM of your choice). Unsure how else semantic correction could be implemented, but always open to expanding the feature set greatly over the next few weeks!
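As a rough sketch of those two levels (using the OpenAI Python SDK purely for illustration, not our actual implementation; model names, file names, and the correction prompt are placeholders):

```python
# Illustrative sketch of the two correction levels; names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Model level: Whisper's "prompt" parameter biases the decoder toward domain terms.
with open("meeting.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        prompt="Kubernetes, Terraform, GitOps",
    )

# 2. Transformations level: inject the transcript into a prompt and let an LLM clean it up.
corrected = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Fix transcription and domain-terminology errors; keep the meaning unchanged."},
        {"role": "user", "content": transcript.text},
    ],
)
print(corrected.choices[0].message.content)
```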
They might not know how Whisper works. I suspect that the answer to their question is 'yes' and the reason they can't find a straightforward answer through your project is that the answer is so obvious to you that it's hardly worth documenting.
Whisper treats transcription as generating language-model output conditioned on the audio, so the transcripts generally have proper casing and punctuation and can usually stick to a specific domain based on the surrounding context.
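You can see that behavior directly with the open-source whisper package: pass surrounding context via initial_prompt and the decoder leans toward that vocabulary (a minimal sketch; the model size, file name, and prompt are made up):

```python
# Sketch: local transcription with openai-whisper, biasing the decoder toward a
# domain via initial_prompt. File name, model size, and prompt are placeholders.
import whisper

model = whisper.load_model("base")
result = model.transcribe(
    "radiology_dictation.wav",
    initial_prompt="Radiology report: MRI brain, T2-weighted, contrast enhancement.",
)
# Casing and punctuation come from the decoder itself; no post-processing needed.
print(result["text"])
```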
It would sure be nice to have some standardized conventions around this. AGENTS.md etc. It seems insane to have to have multiple files/rules for essentially the same goals just for different tools.
The idea of having a bunch of A100 GPU cycles needed to process the natural language equivalent of a file pointer makes me deeply sad about the current state of software development.
Good coding and #prompting hack. Imagine you have a large codebase, you work with other people, and you want to version control your .cursor folder. Just symlink it to another folder and version control that folder.
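A minimal sketch of that setup (Python just for illustration; the shared-rules path is a placeholder for wherever you keep the repo you actually commit):

```python
# Sketch: keep Cursor rules in a separate version-controlled folder and expose
# them to the project through a symlink. Paths are placeholders; run once per checkout.
from pathlib import Path

shared_rules = Path.home() / "cursor-rules"   # the folder you version control
project_cursor = Path(".cursor")              # what Cursor reads in the project

shared_rules.mkdir(parents=True, exist_ok=True)
if not project_cursor.exists():
    project_cursor.symlink_to(shared_rules, target_is_directory=True)
```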