I haven't worked with it, but for a while now, given its core functionality of database + forms + cases, I've harbored the idea of repurposing it to modernize areas of the healthcare industry. E.g. practice management
We evaluated most of these (minus Workato) and landed on Nango. It was by far the most flexible, and having the source code available was a big plus (vs. the closed-source alternatives). The team is also responsive to feedback in their Slack community; they even added a few new endpoints for us in < 24 hours
Nango is awesome. I've used it across several projects and it's sped up our "time to live integration" significantly + reduced maintenance costs. Congrats Bastien, Robin and team!
Very cool. I was initially confused since I expected the questions to be multiple choice, but then I realized you're evaluating whether the answer is correct or not using AI (so I can type "Obama" or "Barack Obama" and both are correct). Haven't seen that approach before!
(OP here) Bit of a controversial take I wrote about. Most people think that AI benefits incumbents and LLMs deepen software moats.
But for the first time, we have an intelligent, format-agnostic file converter. Give ChatGPT an export file from one app, and seconds later you get an import file ready for another tool.
We've seen the benefit first-hand at our startup with our "AI Form Importer". It's now easier than ever to demo and migrate to new software solutions.
I think this is largely a good thing for the ecosystem.
We use Nango for https://fillout.com and it's been a great addition to our tech stack. It has made it much faster to add new integrations without having to navigate OAuth docs each time
We've been using the ChatGPT API from Retool Workflows (for spam detection) and it's been valuable for our business. The built-in vector DB looks interesting
How does the spam detection work? Does ChatGPT output something that your application can understand, like {isSpam: true}, or does it output a sentence in English?
It outputs both - { isLikelySpam: boolean, reason: string }
Then we have an inbox app (also made in Retool) that our support team uses to manually review any submissions that are isLikelySpam = true. The <reason> helps to understand why it was flagged.
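A minimal sketch of how a structured verdict like that could be consumed. The { isLikelySpam, reason } field names come from the comment above; the function name and the fail-safe fallback (flagging unparseable output for human review) are illustrative assumptions, not Fillout's actual code:

```python
import json

def parse_spam_verdict(raw: str) -> dict:
    """Parse the model's JSON output into { isLikelySpam, reason }.

    If the model returns free-form text instead of valid JSON, fail safe:
    flag the submission as likely spam so a human reviews it.
    """
    try:
        data = json.loads(raw)
        return {
            "isLikelySpam": bool(data.get("isLikelySpam", False)),
            "reason": str(data.get("reason", "")),
        }
    except (json.JSONDecodeError, AttributeError):
        return {"isLikelySpam": True, "reason": f"unparseable model output: {raw!r}"}

verdict = parse_spam_verdict('{"isLikelySpam": true, "reason": "obfuscated keywords"}')
print(verdict["isLikelySpam"], "-", verdict["reason"])
```

The fail-safe branch matters in practice: LLMs occasionally wrap JSON in prose, and routing those cases to the manual review inbox is cheaper than silently passing them through.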
Our use case is for a form builder (https://fillout.com) but I imagine this type of use case is pretty common for any app that has user-generated content
I'm interested in this aspect of LLMs too. Are you simply passing it some input (email, customer message) and asking ChatGPT to decide if it's spam? Do you provide any prior examples of spam / not spam, or just rely on the knowledge already embedded in the model?
Spam detection is a classic example of a classification problem. I guess I'm trying to gauge whether there's an entire suite of traditional problems that LLMs solve well enough by simply asking a question of the base model. I've found a few areas in my own work where this is the case.
We give 2-3 examples and find that it works pretty well (few-shot prompting), but we haven't tried actual fine-tuning yet so I don't have a 1:1 comparison.
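The few-shot setup described above could look something like this, assuming the OpenAI chat-message format (system prompt plus alternating user/assistant example turns). The example texts, verdicts, and function name are all illustrative assumptions, not the commenter's actual prompt:

```python
# Hypothetical few-shot examples: (submission text, expected JSON verdict).
EXAMPLES = [
    ("Congratulations!! Claim your fr3e g1ft c4rd now!!!",
     '{"isLikelySpam": true, "reason": "obfuscated spam keywords"}'),
    ("Hi, I'd like to reschedule my appointment to Friday.",
     '{"isLikelySpam": false, "reason": "ordinary customer request"}'),
]

def build_messages(submission: str) -> list:
    """Assemble a chat-completion message list for spam classification."""
    messages = [{
        "role": "system",
        "content": (
            "Classify the form submission as spam or not. "
            'Reply with JSON only: {"isLikelySpam": boolean, "reason": string}'
        ),
    }]
    # Each few-shot example becomes a user turn followed by the
    # assistant turn showing the exact output format we expect.
    for text, verdict in EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": verdict})
    messages.append({"role": "user", "content": submission})
    return messages

msgs = build_messages("Win a free cruise, click here")
print(len(msgs), "messages,", "last role:", msgs[-1]["role"])
```

The resulting list would then be sent to a chat completions endpoint; showing the model example assistant turns in the target JSON shape is what keeps its output machine-parseable.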
We also have other spam filters that are not LLM-based. One of the main benefits of the LLM-based approach is that it's good at catching people who try to avoid detection (e.g. someone purposefully misspelling suspicious words, like "pa$$word")