Grammarly was very distracting to me from the start, even as someone using English as a second language to communicate. I have developed my own taste and way of articulating thoughts, but Grammarly (and LLMs today) forced me to strip that layer of personality from my texts, which I didn't want to let go of. Sure, I sounded less professional, but that was the image I wanted to project anyway.
Unrelated, but it surprised me that I've found the built-in grammar checking in JetBrains IDEs far more useful at catching grammar mistakes while not forcing me to rewrite entire sentences.
JetBrains’ default grammar checking plugin[1] is actually built on LanguageTool[2], a pretty decent grammar checker that also happens to be partly open source and self-hostable[3]. Sadly, they have lately shoved in a few (thankfully optional) crappy LLM-based features (that don’t even work well in the first place) and coated their landing page in endless AI keywords, but their core engine is still the more traditional, open-source one, and it doesn’t really seem to have changed in years. You can just run it on your own device and point their browser and editor extensions to it.
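For the self-hosted route, a minimal sketch of talking to a local LanguageTool server via its `/v2/check` HTTP endpoint; the host and port are assumptions (the server is typically started with `java -cp languagetool-server.jar org.languagetool.server.HTTPServer --port 8081`), so adjust them to your setup:

```python
# Sketch: query a self-hosted LanguageTool server's /v2/check endpoint.
# Assumes a server is already running locally on port 8081 (an assumption;
# change LT_URL to match your setup).
import json
import urllib.parse
import urllib.request

LT_URL = "http://localhost:8081/v2/check"

def build_check_request(text: str, language: str = "en-US") -> urllib.request.Request:
    """Build the form-encoded POST request the LanguageTool HTTP API expects."""
    data = urllib.parse.urlencode({"text": text, "language": language}).encode()
    return urllib.request.Request(LT_URL, data=data, method="POST")

def check_text(text: str) -> list:
    """Return the list of grammar 'matches' reported by the server."""
    with urllib.request.urlopen(build_check_request(text)) as resp:
        return json.load(resp)["matches"]

# Example (requires a running server):
# for m in check_text("This are a test."):
#     print(m["message"], [r["value"] for r in m["replacements"][:3]])
```

The browser and editor extensions can then be pointed at the same local URL instead of LanguageTool's hosted service.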
> ePaper displays are niche, and worse for most personal and business use-cases compared to LCD et al.
Hence we need more resources for R&D to figure out the shortcomings. LCD didn't pop into existence randomly either. It's not a guaranteed win, but then neither has AI proven any realized gains in the majority of industries that gambled on adopting it.
They don't have to reinvent Electron. They shouldn't need an entire embedded browser runtime just to call their web API with a fancy UI.
Projects with much smaller budgets than Anthropic's have achieved much better cross-platform UIs without relying on Electron [1]. There are more sensible options like Qt and the like for rendering UIs.
You can even engineer your app to have a single core with all the business logic as a shared library, then write UI wrappers using SwiftUI, GTK, and whatever Microsoft feels like putting out as its current UI library (I think it's currently WinUI 3), each consuming the core to do the interesting bits.
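A toy sketch of that layering: in a real app the core would be a C-ABI shared library consumed from SwiftUI, GTK, WinUI, etc., but Python stands in for both sides here, and all the names are made up for illustration.

```python
# Toy sketch of the "single core + thin UI wrappers" pattern.
# The core holds all business logic and knows nothing about any UI toolkit;
# each front end is a thin formatting/rendering layer over it.

class MessageCore:
    """All business logic lives here; no UI imports allowed."""

    def __init__(self):
        self._messages = []

    def send(self, text: str) -> int:
        """Store a message and return the new message count."""
        self._messages.append(text)
        return len(self._messages)

    def history(self) -> list:
        return list(self._messages)


def render_plain(core: MessageCore) -> str:
    """One thin 'UI wrapper': formats core state, holds no logic of its own."""
    return "\n".join("> " + m for m in core.history())
```

The point of the split is that `MessageCore` can be ported or wrapped once per platform, while bugs and features live in exactly one place.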
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
Regarding notifications, neither iOS nor Android officially supports third-party apps reading and responding to text messages. The feature works on Android because of a workaround: an app can register a global notification listener, which lets it read notification contents and interact with notifications, including sending replies.
I know it's still better than not having a workaround at all, like on iOS. I'm just pointing out that Google probably never meant to let others access notification mirroring.
For Claude at least, I have been getting more clarifying questions about assumptions after adding some custom prompts. It still makes some assumptions, but being asked questions makes me feel more in control of the process.
In terms of the behavior, technically it doesn’t override; think of it as a nudge instead. Both the system prompt and your custom prompt participate in the attention process, so the output tokens are influenced by both, not equally, but each to some varying degree and chance.
Let me clear my cache after logging in twice to get the OOM fixed so I can finally log in to show you what’s wrong with it over a Teams call, and hope it doesn’t log out and reload randomly during the call.
I find the whole idea of the context window inefficient. The model that knows more than anyone could can’t hold a memory of a codebase? I know it’s a limitation of the transformer design, but I find it quite disappointing that most of the investment is being spent on optimizing inefficient technologies rather than rethinking the design.
That’s something LLMs are also presumably good at. At least I’m seeing more and more push at work to use LLMs for ambiguous business requirements instead of learning about the problem we’ve been dealing with. Instead of knowing why you’re doing what you’re doing, people now just ask LLMs for specific answers and move on.
Sure, some might use it to learn as well, but it’s not required, and many people just YOLO the first answer Claude gives them.
The website doesn't have any info about the product. I don't know which features there are going to be. Screenshots would be cool as well, to evaluate the UX.
Frankly, the GitHub readme is more useful than the website in that regard.
You're correct. TBH I didn't want to spend more time on the website than a waitlist, and instead focused on building the app. I intended the GitHub readme to show more about the project, and I post updates with screenshots on Twitter/X.
I'm putting up a docs page today to incrementally add more information about the roadmap, vision, Bot SDK, API docs, etc.
Anything in particular you're curious about?
Are you planning on releasing the server as a standalone application? Or will it be a source-available client plus a proprietary server? I've checked out Noor, which looks to have a nice UX and the functionality I'd like to see (chat and realtime voice channels) without gimmicks like threads inside chats pretending to be forums. I'm wondering why you've decided to start over again?