If you're behind a CDN then use whatever backend suits your deployment style the best. If you like working with git then GitHub pages and Heroku are both great.
If the goal is the ability to have certain routes processed by different languages/systems, you could achieve this with reverse proxying (e.g. with nginx) [1].
That way you can leverage any existing language frameworks and run them as standard HTTP responders. No need to work with a queue (and add it to the stack).
You can still limit the HTTP methods each proxy responds to as well [2].
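As a concrete illustration of the reverse-proxy approach, here's a minimal hypothetical nginx config: routes, ports, and backend choices are made up for the example, not taken from anyone's actual deployment.

```nginx
# Hypothetical sketch: route /api to one backend and /reports to another,
# with everything else served statically. Names and ports are illustrative.
server {
    listen 80;

    location /api/ {
        proxy_pass http://127.0.0.1:5000;   # e.g. a Python web app
    }

    location /reports/ {
        # Restrict this route to read-only methods [2]
        limit_except GET HEAD {
            deny all;
        }
        proxy_pass http://127.0.0.1:8080;   # e.g. a JVM service
    }

    location / {
        root /var/www/site;
    }
}
```

Each backend is just a standard HTTP responder in whatever framework suits it; nginx handles the routing and method restrictions.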
Thanks for the suggestion, it's a good one. There are a few cases where a message queue can be advantageous: (1) persistence, (2) several responders can work on the same request in parallel, (3) responders can be added or removed dynamically according to load.
These are not common/generic use cases, but they would be useful under particular circumstances.
* I could be wrong about (3) -- I'm not very experienced with reverse proxies.
The implementation today is a task queue, which removes the request from the queue once a responder acknowledges it. But it could be a pub-sub model, where a number of independent responders work on the same message in parallel and only one responder needs to return a response. In that case, persisting messages in the queue is useful.
An alternative is to chain the responders where one responder can leave a message in the queue for another responder, and the final responder returns the response.
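The two delivery models can be sketched with Python's standard-library queues. This is a minimal in-memory illustration of the semantics described above, not Polyglot's actual implementation:

```python
import queue

# Task queue: each message goes to exactly one responder, and leaves the
# queue once that responder acknowledges it.
task_q = queue.Queue()
task_q.put({"path": "/hello", "method": "GET"})
request = task_q.get()   # one responder claims the message
task_q.task_done()       # acknowledge: the request is gone from the queue

# Pub-sub: every subscribed responder gets its own copy of the same
# message and can work on it in parallel; only one returns the response.
subscribers = [queue.Queue() for _ in range(3)]

def publish(msg):
    # Fan the message out to every subscriber's queue.
    for q in subscribers:
        q.put(msg)

publish({"path": "/hello", "method": "GET"})
copies = [q.get() for q in subscribers]   # three independent copies
```

Chaining responders, as described next, would just mean a responder publishing a new message for the next responder instead of returning a response.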
Polyglot is still experimental though, and the current implementation is a prototype.
What makes this a problem? The initial request to Polyglot is also "push".
The rest of the web works on "push" too; pull in this case would only help if you don't care that a request could take a long time (seconds) to resolve.
I didn't see it mentioned, but what happens if a message is not responded to? How does Polyglot handle timeouts?
I had to do the same thing from NameTerrific but got my migration code from the top-level.
The worst part is that I was "in contact" with the owner via Twitter, and he assured me that he would totally look into things at some point in the near future...
What does this have over AniDB? I'm seeing MyAnimeList mentioned in the comments, but not AniDB?
Do you guys have an API checking file hashes/fingerprints against episodes to add to your list etc? That's one of the biggest draws for AniDB (for me).
I've not used AniDB much, but I once half-finished an API wrapper for Node.js so I guess I'm about as qualified as we're gonna get.
In my experience, AniDB is effectively unmaintained code. They have developers, the developers just don't know the codebase well and they don't mess with old stuff. I've heard this is because it's a behemoth Perl script, but I don't know for sure.
But to cover your main point of "an API checking file hashes/fingerprints against episodes to add to your list", I just wanna say that this is kind of a silly system: you can easily extract that data from a filename.
For example, the desktop app Taiga has an open-source parser which extracts lots of data effectively (https://github.com/erengy/anitomy) and I myself wrote a simple regex-based one for my own client. Both are capable of extracting at least the episode number, subber, and series title.
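A deliberately simplified regex parser in the spirit of the one described above might look like this. The pattern is hypothetical and far less robust than Anitomy; real release names vary much more:

```python
import re

# Matches names like "[Subber] Series Title - 05 [720p].mkv".
PATTERN = re.compile(
    r"^\[(?P<subber>[^\]]+)\]\s*"   # release/subber group in brackets
    r"(?P<title>.+?)\s*-\s*"        # series title up to the dash
    r"(?P<episode>\d+)"             # episode number
)

def parse_filename(name):
    """Extract subber, title, and episode number, or None if no match."""
    m = PATTERN.match(name)
    return m.groupdict() if m else None

info = parse_filename("[HorribleSubs] Space Dandy - 05 [720p].mkv")
# info -> {"subber": "HorribleSubs", "title": "Space Dandy", "episode": "05"}
```

Even a toy pattern like this recovers the episode number, subber, and series title for conventionally named files, which is the point being made about filename parsing versus hash lookups.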
So why would Hummingbird want to store hundreds of thousands of MD5 checksums when there are better options?
Just want to quickly add that Taiga will support Hummingbird fairly soon. Within the next week or so.
We also have incredible desktop apps like HAPU that detect what you're watching (including from sources like Hulu and Crunchyroll) and update your library. No fingerprinting or hashes necessary.
Mainly because the checksum is a very good source of truth. Not every file you get is named correctly, or the same as originally intended.
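The point about the checksum surviving bad filenames can be shown in a few lines. This is a generic sketch using MD5 (as mentioned above; real services may use other hash schemes), with a hypothetical `file_md5` helper:

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Checksum of a file's contents, read in chunks to handle large files.

    The digest depends only on the bytes, so renaming the file (or a
    mislabelled release) doesn't change the result.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Two byte-identical files always hash the same regardless of name, which is exactly what a filename parser cannot guarantee.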
Regardless of the method, will there be an API to add episodes to a personal list, whether via checksum or filename checking?
Next feature: notification of new episodes, in particular by collecting group?
These are pretty much the two killer features I personally use AniDB for. The back-end code is less important to me, because everything I need it to do is being done. :)
I've been writing an entirely automated anime sorting and processing system that runs off of an "on torrent completed" handler and sorts files into folders. It uses the hash lookup feature to get information about the file. That's about half of my AniDB usage.
The other half is discovering which group has subbed what. By maintaining a list of all the files a particular subber has released, it's easy to look up and determine who to get a release from without having to search everywhere.
Unfortunately, as awful as the API and interface are, it's the only one of its kind.
Answering the question specifically: yes. Same as if you were in, say, construction, and you were expecting crane operators to bring their own cranes. A dentist their own drill. A chemist their own centrifuge.
In terms of the debate: Eventually you'll need to have policies in place around ownership of work produced, and while there's nothing preventing an employee from copying files to a personal device, having everything they work on stored on hardware owned by the company makes it significantly easier for the company to enforce those policies.
If you supply standardised equipment (a particular model, operating system, etc.) it will also help with the setup of any new employee, and any interoperability that may be required. Everyone gets the same setup; troubleshooting requires a focused spectrum of knowledge and experience rather than knowing how to troubleshoot Windows, Mac, Boot Camp, Ubuntu, and CentOS on desktops or laptops.
I see what you're saying with the construction and dental analogies.
We want to make it convenient for employees to have access to what they want, when they need it: they should be able to reach all the work files/tools they need whenever they're near a computer. We don't want a weekend emergency to force someone to come into the office, or leave them able to say "I'm out of town, sorry."
We also want employees to feel safe using the computer for personal things. We've considered something like offering a stipend that covers new hardware, which the employee gets to keep assuming they continue to be employed for X amount of time.
I'm dubious as to the worth of these as a specific kitchen utensil; most how-to videos I've seen include a jumper as the sole "equipment" required. Maybe two rubber bands on either side to help prevent movement would make it even easier.
> The design process took months of careful exploration and testing. We did loads of sketches and built functional and aesthetic models by hand and on the computer.
The resulting product is quite nice aesthetically, but is it worth $18?
The majority of users/web requests aren't going to be hitting your hosting; they'll stop at the CDN.
Perhaps a better question is "fastest CDN?".