Hacker News | mmcn's comments

Such a clean developer experience. What’s the cost look like for this use case? Is it viable as it scales up?


Enabling an agent to query financial data really helps on the analysis side. How are you tackling the ingestion side? The challenge I’ve seen again and again is capturing financial data from different sources consistently enough that it can be aggregated and queried. I’ve been curious whether AI can help there.


We’re tackling ingestion primarily through direct bank connections. Users connect their bank and financial accounts, and transactions flow into our system automatically. From there, we store the data in a structured database and normalize it into a consistent internal format so it can be aggregated and queried reliably.

Right now, the ingestion layer handles most of the heavy lifting—parsing the raw feeds, mapping fields into a standard schema, and ensuring consistency across institutions. Our next version will layer AI on top to help with classification and enrichment (e.g. categorizing ambiguous transactions, detecting anomalies, and filling in context where the raw data is thin).

So it’s a mix: the ingestion pipeline makes the data uniform, while AI helps make it more useful and accurate for analysis. As we move toward our “agentic” roadmap, we see AI playing a bigger role in automating the messy parts of ingestion as well.
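To make the "map fields into a standard schema" step concrete, here's a minimal sketch of what a normalization layer can look like. The schema, field names, and the `normalize_feed_a` mapper are all illustrative assumptions, not the actual internal format described above:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical normalized transaction schema (illustrative field names).
@dataclass
class Transaction:
    account_id: str
    posted: date
    amount_cents: int  # signed minor units sidestep float rounding issues
    merchant: str

def normalize_feed_a(raw: dict) -> Transaction:
    """Map one institution's raw feed fields onto the standard schema."""
    return Transaction(
        account_id=raw["acct"],
        posted=date.fromisoformat(raw["post_date"]),
        amount_cents=round(float(raw["amount"]) * 100),
        merchant=raw["description"].strip().upper(),
    )

txn = normalize_feed_a(
    {"acct": "A1", "post_date": "2024-05-01",
     "amount": "-12.50", "description": " Coffee Shop "}
)
print(txn.amount_cents, txn.merchant)  # -1250 COFFEE SHOP
```

One mapper per institution, all converging on the same dataclass, is what makes downstream aggregation and querying uniform; the AI enrichment step would then operate on already-normalized records.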


Agreed that performance always has a cost, and not just in maintainability. Working with financial ledgers, I've seen that there's a ceiling on throughput when read-after-write consistency is required. If you want to guarantee a financial account can't go below 0, each write must be processed serially, which means a round trip to the database every time.
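One common way to enforce that invariant in a single round trip is a conditional UPDATE, where the balance check and the debit happen atomically in one statement. This is a minimal sketch using SQLite; the table and column names are illustrative, and a production ledger would likely use append-only entries rather than a mutable balance column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance_cents INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('acct-1', 1000)")

def withdraw(conn, account_id: str, amount_cents: int) -> bool:
    """Apply a debit only if it won't overdraw; returns True on success."""
    cur = conn.execute(
        "UPDATE accounts SET balance_cents = balance_cents - ? "
        "WHERE id = ? AND balance_cents >= ?",
        (amount_cents, account_id, amount_cents),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated means the guard rejected it

print(withdraw(conn, "acct-1", 700))  # True: 1000 -> 300
print(withdraw(conn, "acct-1", 700))  # False: would go below 0
```

The throughput ceiling still applies: every debit against the same account serializes on that row, so a hot account becomes the bottleneck no matter how the check is expressed.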



