
10 million per day is about 100 per second. SQLite performs about this quickly. I wrote a test script and it did 125 (unindexed) lookups per second. Then I ran two of these tests at the same time, and the rate stayed about the same. I have 8 cores, so I made 8 processes, and it was the same. 125 requests/second * 8 * 86400 seconds/day = 86_400_000 requests per day.
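The test described above can be sketched roughly like this, using Python's stdlib sqlite3 binding. The table name, row count, and loop size are made up for illustration; the point is that each lookup is a full scan because `val` has no index.

```python
import sqlite3
import time

# Build a throwaway table with no index on `val`, so every
# lookup below is a full table scan (the "unindexed" case).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, f"row-{i}") for i in range(50_000)))
conn.commit()

# Time a batch of unindexed lookups and compute lookups/second.
n = 100
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT id FROM t WHERE val = ?",
                 (f"row-{i}",)).fetchone()
elapsed = time.perf_counter() - start
rate = n / elapsed
print(f"{rate:.0f} lookups/sec")
```

Absolute numbers will vary wildly with hardware, table size, and whether the database is in memory or on disk, so treat any figure it prints as a ballpark, not a benchmark.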

I added another thread writing as quickly as possible to the mix (7 readers, 1 writer), and this brought the read rate down to about 45 reads per second per thread. Still more than 10 million per day, so technically I am right.

Also, I don't doubt that MySQL and Postgres (and BDB) are all significantly faster than this. It's just that SQLite is not going to guarantee "instant failure" of your project, as the article implies.

(One thing to note -- every time you type a character in Firefox's address bar, you are doing an SQLite query. It is Fast Enough for many, many applications.)



I respect your research into the matter, but it frankly doesn't matter one bit. Here's why.

Your peak load/sec is never going to be close to your average load/sec. I've seen peaks run between 2x and 5x the average load.

Read performance on simple selects is straightforward to scale. Most solutions to that problem just put the data in memory, via the database cache or memcache.
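A minimal sketch of that pattern, with a plain dict standing in for memcache (the table, key names, and helper are all hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.commit()

cache = {}  # stand-in for memcache

def get_user(user_id):
    # Read-through cache: repeat reads are served from memory,
    # and only a cache miss hits the database.
    if user_id in cache:
        return cache[user_id]
    row = conn.execute("SELECT name FROM users WHERE id = ?",
                       (user_id,)).fetchone()
    cache[user_id] = row[0] if row else None
    return cache[user_id]
```

A real deployment would also need cache invalidation on writes, which is where this stops being simple.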

Part of the difficulty is in scaling writes. What if your 1000 queries/sec are all inserts on the same 500M row table? What if they are updates on a table that's 10M rows long? This is when you have to make hard decisions about sharding and the like.
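The core of a sharding scheme is just stable key-to-shard routing, something like this sketch (the shard count and key format are invented for illustration):

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count

def shard_for(key: str) -> int:
    # A stable hash so the same key always routes to the same
    # shard. Python's built-in hash() is salted per process,
    # so derive the bucket from a digest instead.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS
```

The routing function is the easy part; the hard decisions are picking a shard key that spreads the write load evenly and living without cross-shard joins and transactions afterward.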

I certainly believe that SQLite is "fast enough" for many applications. I also know that many applications are NOT doing 10s of millions of requests a day.


That -is- fast, but I still have trouble reconciling that deep down in my computer, a human-readable SQL query gets built, and then another process parses that SQL. Seems so wasteful building and then parsing a human-readable string for something that's happening on the same machine.

I know nothing of SQLite's internals, but wouldn't it make more sense to parse the query once and then store a compiled version of the query for subsequent lookups? Like you might do with a regexp?


Yes, this is known as a prepared statement. You compile a parameterized statement once, then execute it as many times as you like with different arguments.
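In SQLite's C API this is sqlite3_prepare_v2 plus sqlite3_bind/step/reset. Python's sqlite3 module exposes the same idea through parameterized queries: it keeps a per-connection cache of compiled statements keyed on the SQL text, so repeated executes of the same string reuse the compiled statement. A small sketch (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.commit()

# One SQL string, compiled once; only the bound argument
# changes on each execution.
stmt = "SELECT val FROM t WHERE id = ?"
rows = [conn.execute(stmt, (i,)).fetchone()[0] for i in (1, 2)]
```

Binding values with `?` placeholders also avoids string-building the query by hand, which sidesteps SQL injection as a bonus.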

Also, SQLite, unlike most other databases, is an embedded database: it runs in-process as a library, rather than talking to a separate server process.



