Discord was originally a place to organize playing video games with people. If you're using it for that purpose, you're probably on your computer already (because that's where the games are). Thus, the mobile app ends up being more for checking in with your community while away from the computer, rather than a primary driver of your engagement with the platform.
They could have been using it in the browser. One of Discord's underrated decisions is that the browser version is fully featured and doesn't force you to download the app; a new user can join a server in their browser with just a name, which is about as low-friction as you can get.
"Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist" by Liz Pelly goes into more detail about their origins and the culture around piracy in Sweden at the time.
Yes. Mint also does this. From what I've heard, there are a lot of banks without APIs, so the next best approach is to log in on behalf of users and scrape the data.
The data is encrypted with a key that you have, not one that the server has, which is much, much better. If someone breaks into the server, they can't quickly grab all the data; they'd have to deploy some malware on the server and let it run for a while to collect passwords.
If the online component goes anywhere beyond the ability to sync an opaque binary blob that only your local machines can decrypt and re-encrypt, there's a problem.
The devices could exchange their keys through a secure connection – be it direct (Bluetooth, LAN) or routed by a third-party service. The key could also be transferred physically (on removable storage, or by retyping a string of numbers shown on one device into the other).
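To make the "opaque blob" idea concrete, here's a minimal sketch in TypeScript using Node's built-in crypto module. The function names and blob layout are my own, not any particular product's:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// A 256-bit key generated on the device; it is never sent to the server.
// Adding a second device means transferring this key out-of-band
// (QR code, Bluetooth, retyping), as described above.
const localKey = randomBytes(32);

// Encrypt the vault into an opaque blob: iv + auth tag + ciphertext.
function sealVault(plaintext: Buffer, key: Buffer): Buffer {
  const iv = randomBytes(12); // must be unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

// Decrypt a blob fetched back from the sync server.
function openVault(blob: Buffer, key: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ciphertext = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

// The server only ever stores the ciphertext blob, which it cannot read.
const blob = sealVault(Buffer.from(JSON.stringify({ passwords: [] })), localKey);
const restored = JSON.parse(openVault(blob, localKey).toString("utf8"));
```

The sync server just stores and returns the output of sealVault; without the key, a breach yields nothing immediately useful.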
They're one of the few companies who are building GraphQL developer tools, so they have a financial incentive to have their name associated w/ GraphQL.
This piece, while somewhat valuable, is largely content marketing. First they sell you on the idea of "best practices." Then they'll follow up w/ a tool that, surprise, does all those "best practices" for you. "The modern marketer creates their own demand." Prisma is another GraphQL company that produces content like this.
Many GraphQL libraries take some sort of schema definition and then serve it at a route (eg. /graphql). To support multiple schemas, you'd just write a different definition and serve it at a different route. How you resolve the fields is up to you, but both can use shared underlying business logic in these resolvers.
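As a rough sketch of what that can look like (using Express and express-graphql; the schemas and field names are made up):

```typescript
import express from "express";
import { graphqlHTTP } from "express-graphql";
import { buildSchema } from "graphql";

// Shared business logic that both schemas resolve through.
const users = {
  find: (id: string) => ({ id, name: "Ada", email: "ada@example.com" }),
};

// Public schema exposes only safe fields...
const publicSchema = buildSchema(`
  type User { id: ID! name: String! }
  type Query { user(id: ID!): User }
`);

// ...while the internal schema adds more, backed by the same logic.
const internalSchema = buildSchema(`
  type User { id: ID! name: String! email: String! }
  type Query { user(id: ID!): User }
`);

// With buildSchema, root field resolvers receive (args, context, info).
const rootValue = { user: ({ id }: { id: string }) => users.find(id) };

const app = express();
app.use("/graphql", graphqlHTTP({ schema: publicSchema, rootValue }));
app.use("/internal/graphql", graphqlHTTP({ schema: internalSchema, rootValue }));
app.listen(4000);
```

Both routes resolve through the same `users` module, so the business logic lives in one place.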
In terms of maintainability, you have to take care that your changes to the underlying business logic don't break the assumptions of each schema. And if you want to evolve one schema (eg. deprecate a mutation argument, or rename a field and deprecate the old name), you have to ensure that your underlying business logic is backwards compatible for any other schemas (and their clients) relying on it.
I agree. I think their recommendation is a bit overzealous.
I can see the argument if you have a web frontend that consumes data from multiple backend services – have one GraphQL service that manages them all instead of a GraphQL layer on each service.
But this breaks down greatly when you have different "Viewers". In a web app, the "Viewer" can be a logged-in user. In an admin dashboard, the "Viewer" is very different – an employee acting on behalf of users. Service-to-service communication likely doesn't have a concept of a "Viewer" at all.
I would propose that you have different schemas when you have these different views of the world or different permission boundaries. The business logic can be shared – you may just enforce different authorization checks at the GraphQL layer. You could also share GraphQL types that are common between schemas.
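A rough sketch of what I mean, with hypothetical names and makeExecutableSchema-style resolver signatures – one shared business-logic function, two resolvers that differ only in the authorization they enforce:

```typescript
interface User { id: string }
interface Employee { id: string; roles: string[] }
interface Order { id: string; userId: string; total: number }

// Hypothetical shared data access, unaware of who is asking.
declare const db: { orders: { findById(id: string): Promise<Order> } };

async function fetchOrder(orderId: string): Promise<Order> {
  return db.orders.findById(orderId);
}

// User-facing schema: the Viewer is a logged-in user who may only
// see their own orders, so ownership is checked at the GraphQL layer.
const userResolvers = {
  Query: {
    order: async (_: unknown, args: { id: string }, ctx: { viewer: User }) => {
      const order = await fetchOrder(args.id);
      if (order.userId !== ctx.viewer.id) throw new Error("Not authorized");
      return order;
    },
  },
};

// Admin schema: the Viewer is an employee acting on behalf of users,
// so the check is a role, not ownership; same business logic underneath.
const adminResolvers = {
  Query: {
    order: async (_: unknown, args: { id: string }, ctx: { viewer: Employee }) => {
      if (!ctx.viewer.roles.includes("support")) throw new Error("Not authorized");
      return fetchOrder(args.id);
    },
  },
};
```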
> The lesson we can learn from this story is the following: start with a generic database...SQL database are a good choice because they can do many tricks...The modern and successful architecture that is commonly used today is to have an SQL database that is sometimes surrounded by some one-trick ponies to take care of a few pain points.
Yup.
I like that we now have more of these one-trick ponies to choose from in our toolbox when a relational database just won't cut it.
But my biggest complaint about the NoSQL movement is the marketing pseudo-hype it created. So many amateurs who don't understand database selection took it as gospel and evangelized it across the web (eg. Mongo w/ Node).
It's hard to correct people's understanding when they learn things wrong the first time, especially when there's a mountain of incorrect information they can point to on the web ("These people can't all be wrong, can they?" Well...).
The quoted statement is not correct, at least as far as analytics are concerned. Two examples:
1.) Analytics in many enterprises increasingly feed off data lakes consisting of enormous quantities of data in object storage. SQL has a part to play but it's effectively computing aggregates and creating data marts off this deeper pool of data. Data lake architecture is likely to be increasingly dominant given the enormous growth in data volumes.
2.) Machine learning is transforming analytics. This looks like the next feature likely to be absorbed into DBMS systems. SQL integration with ML is likely to be a hot topic in future systems but a substantial fraction of ML processing will remain outside the DBMS.
So SQL is going to be present widely in most future solutions, but that's not the same as saying that a single relational DBMS architecture will solve all problems. It's been clear for years that ACID-compliant RDBMS have a part in this picture, but they're just one part.
Overall the article still seems to be fighting the SQL/NoSQL wars of the last decade. A large part of the market is moving on to other use cases.
The most infuriating part of "NoSQL" for me was always the conflation of SQL and RDBMS and ACID. Most NoSQL is simply non-relational or non-ACID. You still need an access layer.
After all these years people seem to have finally realized that the challenge was never SQL, it was data and you still have to think about that, even if you don't use SQL.
howfuckedismydatabase.com is still as accurate as it ever was.
Data warehousing and ML have different requirements and needs than your typical N-tier web app. Even streaming event data warrants a different solution. It comes back to knowing how to choose the right database for the job.
The issue is around how these technologies are marketed – grandiose claims and few practical use cases. Once the marketing material permeates the industry and some part of it latches on, it becomes a self-reinforcing cycle. Blog posts, books, and courses bring the information to the masses. Then companies start to adopt the tools. Then they need to hire engineers who know those tools. So more information gets published about them, because that's what people want to learn to get hired.
Many engineers today will turn to NoSQL for everything because of the past few years of marketing hype (and acronym-driven-development), and that's quite a shame.
For those looking to understand how to choose the right database for the job, I'd recommend first reading "Designing Data-Intensive Applications" (https://dataintensive.net).
Diversity and competition make for better outcomes. I was never on the NoSQL bandwagon, but even then I could see the benefits for its users: namely, schemaless documents for rapid development and relatively easy horizontal scaling. From those we got things like Firebase and a slew of NewSQL databases. Some SQL databases now have horizontal scaling on their roadmaps.
> It's hard to correct people's understanding when they learn things wrong the first time
Don't have a good solution to this. The trouble is that the "first time" is the learning, and no amount of hand-waving and thought experiments is enough to refute what they've learned combined with what they've yet to learn. No pain, no gain, I suppose.
Ah, it's that time of the year again – always enjoy seeing where the JS ecosystem is moving.
I would like to see two categories added for next year's survey:
1. Auth – would be interested to see if Passport is still the tool of choice
2. ORM – curious how Sequelize, Knex/Bookshelf, Objection, etc. compare in usage.
I understand this survey will obviously skew towards frontend tools, but some of the categorization just feels off to me:
For example, I wouldn't consider Next.js a backend framework. Sure, it's a server-side-rendering framework, but you wouldn't typically use it for traditional backend things like accessing a database. Perhaps consider adding an SSR sub-section under frontend frameworks.
The data layer section seems a bit confused – it's a mix of client-side stores and server-side persistence, which have very different use cases. I'd probably move the databases out into their own category – it would be interesting to see which ones JS devs prefer (eg. Postgres, MySQL, Mongo).
And trying to draw conclusions from the testing category seems odd when both frontend- and backend-specific tools are bunched together.
1. Working on some of that now – going to hand-roll multi-auth combined with password logins (see the first sketch after this list). I don't want to outsource my actual users' authentication beyond OAuth to FB/Twitter, etc.
2. I don't. If it's SQL, I use a tagged template string library and just project from the response of the given library (see the second sketch below). If it's Mongo or similar, I don't see the point so much. I understand you can use type checking, but you can still do that at the API layer.
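For item 1, a minimal sketch of the password half using the bcrypt package (the persistence helpers here are hypothetical):

```typescript
import bcrypt from "bcrypt";

const SALT_ROUNDS = 12;

// On signup: never store the plaintext, only the bcrypt hash.
async function register(email: string, password: string): Promise<void> {
  const passwordHash = await bcrypt.hash(password, SALT_ROUNDS);
  await saveUser(email, passwordHash); // hypothetical persistence helper
}

// On login: compare the submitted password against the stored hash.
async function login(email: string, password: string): Promise<boolean> {
  const user = await findUserByEmail(email); // hypothetical lookup
  if (!user) return false;
  return bcrypt.compare(password, user.passwordHash);
}

// Hypothetical persistence layer; OAuth identities (FB/Twitter) would be
// linked to the same user row via a separate identities table.
declare function saveUser(email: string, passwordHash: string): Promise<void>;
declare function findUserByEmail(
  email: string
): Promise<{ passwordHash: string } | null>;
```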
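And for item 2, an example of the tagged template approach using sql-template-strings with pg (the table and row shape are made up):

```typescript
import SQL from "sql-template-strings";
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// The SQL tag turns ${...} interpolations into parameterized values
// ($1, $2, ...) instead of string concatenation.
async function findUserEmails(minAge: number): Promise<string[]> {
  const result = await pool.query(
    SQL`SELECT email FROM users WHERE age >= ${minAge}`
  );
  // Project the rows yourself instead of going through an ORM.
  return result.rows.map((row) => row.email as string);
}
```

The tag keeps queries injection-safe without an ORM; you just project from the raw rows.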
As to the data layer, I agree... client-side libraries and abstractions should be different from server-side tech... though there's some overlap (libraries that sync client-server for you).