Hacker News | filipn's comments

Sprints are a good tool for turning bad teams into better teams. No amount of process and tooling is going to turn a good team into a great one.

Great teams do not need additional processes to deliver value; if anything, extra process is usually more of an obstacle. Clear goals and a vision are much more important, as is an environment where the decision makers are the ones implementing the solution.


I've only once worked in a "great" team; it was a startup, they were all young and they weren't well paid, but they were all intelligent, dedicated and knowledgeable. I used to do college lecturing, and I've occasionally had a truly great class. A few star students energized the whole class, with everyone helping one another and spurring one another on.

The best class was a beginner's C class, and I was eminently suited to teach it, because I was a beginner C programmer myself /s. By week 3, they were well ahead of me.

It's evidently possible to deliberately build a great team; but I have no idea how that's done. I don't know much about recruitment, and I certainly don't know how you can bring together a team like that C class.


You can set up a firewall NAT rule on your router to capture all DNS requests and redirect them to the Pi-hole. Your TV will think it's talking to 8.8.8.8 when in reality it's talking to your server.
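On a Linux-based router this usually comes down to a couple of iptables NAT rules. A minimal sketch, assuming the Pi-hole sits at 192.168.1.2 and the LAN interface is br-lan (both placeholders for your own setup):

    # Rewrite all outbound DNS (UDP/TCP port 53) to the Pi-hole, except
    # traffic from the Pi-hole itself (it still needs upstream DNS).
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p udp --dport 53 -j DNAT --to-destination 192.168.1.2:53
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2:53
    # Masquerade the redirected queries so replies route back through the
    # router (needed when the Pi-hole is on the same LAN as the clients).
    iptables -t nat -A POSTROUTING -o br-lan -d 192.168.1.2 -p udp --dport 53 -j MASQUERADE
    iptables -t nat -A POSTROUTING -o br-lan -d 192.168.1.2 -p tcp --dport 53 -j MASQUERADE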


Doesn't stop DNS over HTTPS, which stuff like the Chromecast employs.


And that's why DNS over HTTPS was created in the first place, to get rid of ad blocking on Chromecast and Android.

On Android, proxies no longer apply to apps like YouTube, and since these apps also use DoH, there's no way to block ads anymore.

All the talk of "security" and "privacy" is false; as always, it was about profit. I honestly wonder whether the people at Google working on DoH even knew why management supported their project, or whether they genuinely believed they were doing something good while management quietly realised the potential profits from DoH.


Well, it's not like they were forced to do name resolution over DNS before... They could just as well deploy their own custom name resolvers and talk to them via IP from applications like YouTube for Android.

The scenario you're describing (DNS-level blocks) could always be circumvented.

On the other hand, if you live in the US and don't override the ISP's name servers, they will most likely spy on you and sometimes even do things like injecting ads into third-party websites. That is the problem DoH was meant to solve.


> They could just as well deploy their own custom name resolvers and talk to them via IP from applications like YouTube for Android.

The Chromecast has 8.8.8.8 hardcoded, but recently an ISP started integrating ad blocking via DNS filtering (MitM'ing 8.8.8.8) in its routers. That case actually went to court.

It'll always be possible to do the filtering as a prosumer, but the current state of the art of "it just works" ad blockers is something Google is fighting against (also see Manifest V3).


Your response to the first scenario didn't invalidate it - Google could have made an https://dns.google.com endpoint in 2013, before DoH, and only allowed the Chromecast to work if it could get DNS responses from there.


Yeah, but that would have required developers to intentionally build an anti-adblock solution, while funding something like DoH allows Google to save face.


What percentage of Google's users do you think have a Pi-hole? I buy that their changes to ad blockers in Chromium are motivated by this kind of logic, since those are used by a material number of people. But for DoH it just doesn't add up; it's such a fringe case. It's also worth noting that DoH was spearheaded by Mozilla; Google just got on the ship at the next port.

Not everything that big corporations say they do for security reasons or whatever is a cynical ploy. Do you think they're experimenting with post-quantum cryptography in Chrome Canary in preparation for dropping a 50k-qubit quantum computer on the market sometime soon?


Some ISPs started integrating Pi-hole functionality into their routers, and that actually went to court.

On Android, some of the most popular apps are fake VPNs that simply register a local VPN on the device itself so they can filter ads.

This isn't about the Pi-hole; this is about ad blocking becoming "too easy". You can always block DoH. But no ISP can easily include such a blocker by default anymore.


Wait, how do you block DoH without blocking other HTTPS traffic?

Do you have to block every known DoH server? Looking at Google's DoH certificate, they list quite a few hostnames and IPs as Subject Alternative Names:

    dns.google
    *.dns.google.com
    8888.google
    dns.google.com
    dns64.dns.google
    2001:4860:4860::64
    2001:4860:4860::6464
    2001:4860:4860::8844
    2001:4860:4860::8888
    8.8.4.4
    8.8.8.8
Issued by Google Trust Services...


Google is digging as many moats as they can without triggering antitrust scrutiny. They have to plan for the future, not the here and now.


One could redirect any traffic destined for the IP of the DoH server to one's own DoH server on localhost.


Unlike with DNS over TLS or plain old DNS, the only way to 'redirect any traffic' is to have a blocklist of known DoH hosts: since it's just HTTPS traffic on port 443, you can't tell whether it's your page load or a DNS query.


As of today, how many DoH servers are running on IP addresses that are also used for serving websites? In the testing I have done with publicly advertised DoH servers, the IPs are generally used only for DoH (sometimes DoT, too). Of course this could change.

The localhost forward proxy I use can distinguish DoH requests from other HTTP requests because DoH query URLs mostly share the same structure; they follow RFC 8484. As of today, it is easy to probe IP addresses for listening DoH servers. The list of "known DoH hosts" is still quite small. Of course this could change.
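For illustration, a probe of that kind needs nothing beyond the Python standard library. This is only a sketch: the /dns-query path and the dns.google target are the common RFC 8484 conventions, not guarantees for every resolver.

    # Minimal RFC 8484 probe: send a GET /dns-query request for an A record
    # and report whether the target answers with application/dns-message.
    import base64
    import struct
    import urllib.request

    def build_query(name):
        """Build a DNS wire-format query (header + question) for an A record."""
        header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # id=0, RD=1, QDCOUNT=1
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    def probe_doh(host, name="example.com"):
        """Return True if host answers an RFC 8484 GET query on /dns-query."""
        dns_param = base64.urlsafe_b64encode(build_query(name)).rstrip(b"=").decode()
        req = urllib.request.Request(
            "https://%s/dns-query?dns=%s" % (host, dns_param),
            headers={"Accept": "application/dns-message"},
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.headers.get("Content-Type", "").startswith("application/dns-message")
        except OSError:
            return False

    print(probe_doh("dns.google"))  # True if a DoH server is listening there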

To be clear, I am definitely not a proponent of applications ignoring the user's system-wide DNS settings and using DoH to obscure that fact. I have long run a localhost root and resolver for myself and have no need for third-party DNS, whether from an ISP or open resolvers.

However, as an end user I see no reason to trust any outgoing HTTPS traffic from applications authored by folks who are agreeable to online advertising as a business model.

The greatest threat I face is online advertising, not DNS lookups associated with malware. With DoH, HTTPS traffic could contain an unwanted DNS request, but prior to DoH it could also contain data to be used for tracking and advertising purposes. I felt the need to start monitoring/MITM'ing the HTTPS traffic on the private networks I control long before anyone proposed DoH. I would guess many corporations and other organisations do the same.


Not if the resolver is validating the TLS certificate. That's the point of DoH: to make it impossible to intercept the DNS query.


MitM is not possible with DoH if the client checks for a valid certificate.


Twilio[1] and MessageBird[2] come to mind as competitors. They are both "Voice & SMS" communications platforms, and I guess the innovative part is that they provide an easy way of connecting companies to their customers via multiple channels.

[1] https://www.twilio.com/ [2] https://messagebird.com/en/



Yes, I think the second diagram you showed is the preferred way of doing things, since using or "dogfooding" the client libraries will ensure they are always tested and correct, and you'll also get immediate feedback from the other developers who work on the apps.


In this day and age, your client libraries should be generated automatically, and of course used in your apps/web pages to save time.

See openapi/swagger for good examples of how client generation works.


I've been using openapi/swagger to generate my clients, and I honestly think it creates a mess of files; manually keeping clients up to date is not a huge task. I'm not sure I buy into "your client libraries should be generated automatically" as a blanket statement.


Granted, APIs aren't updated often, but when they are, generated clients can save a lot of time. I can update a generated client in half an hour and be very confident of its correctness, whereas a manually constructed client would take hours to update and test.

Of course, the generated client is not very convenient, being a 1-to-1 mapping with the API. You build a high-level client with the more common operations on top of the generated one.

It's possible to do this in a way that gets you the best of both approaches, with some up-front planning.
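As a sketch of that layering (all names here are hypothetical, not from any particular generator): a thin generated client that mirrors the API 1-to-1 sits underneath a small hand-written facade exposing only the operations callers actually need.

    # Stand-in for generator output (e.g. openapi-generator); the real class
    # would perform the HTTP calls. Canned data keeps the sketch runnable.
    class GeneratedOrdersApi:
        def __init__(self, base_url):
            self.base_url = base_url

        def get_order_by_id(self, order_id):
            return {"id": order_id, "status": "shipped"}

        def list_orders(self, page, page_size):
            return {"items": [], "page": page, "page_size": page_size}

    # Hand-written, stable facade; application code only imports this layer.
    class OrdersClient:
        def __init__(self, base_url):
            self._api = GeneratedOrdersApi(base_url)

        def order(self, order_id):
            return self._api.get_order_by_id(order_id)

        def recent_orders(self, limit=20):
            return self._api.list_orders(page=1, page_size=limit)["items"]

    client = OrdersClient("https://api.example.com")
    print(client.order("42"))

When the spec changes, only the generated layer is regenerated; the facade keeps a stable interface for the rest of the codebase.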


Another benefit would be easier testing - you can easily auto-test the logic and keep the UI testing separate.


As far as I can see, there is an explicit section on privacy on the page; there is even a dedicated page for it: https://microsoftedgewelcome.microsoft.com/en-us/privacy

They also wrote a privacy whitepaper for Edge: https://docs.microsoft.com/en-us/microsoft-edge/privacy-whit...


This is absolutely not the same. Mozilla is actively fighting the tracking of users against their will. Apple is also taking some steps in the same direction and speaking about it loudly. In the meantime, Google and MS are doing whatever they can get away with.


Oh man, I was just about to recommend this book, you beat me to it. I am currently reading it, and I cannot recommend it enough. If you're interested in distributed systems and learning about database fundamentals, it really is a must read.


You can read about their licenses here: https://redislabs.com/legal/licenses/

  Redis Modules created by Redis Labs (e.g. RediSearch, RedisGraph, RedisJSON, RedisML, RedisBloom) are licensed under the Redis Source Available License (RSAL).
Here is their RSAL license: https://redislabs.com/wp-content/uploads/2019/09/redis-sourc...


Basically you can't use it for a database product, caching engine, stream processing engine, search engine, indexing engine or ML/DL/AI serving engine. I feel like just about any web app does at least one of these things.


I think the focus is on the "engine" part. What they are trying to prevent is someone else making money off of "hosted Redis".


There are additional clauses prohibiting hosting it as a service or using the source to provide a service with a compatible API.


The correct way to phrase it would be: you cannot use it unless you pay for a licence.

Its source is only available so you can look at the implementation if you wish to explore the internals.


That's not true.

> Licensor hereby grants to You a non-exclusive, royalty-free, worldwide, non-transferable license during the term of this Agreement to: ... (b) use the Software, or your Modifications, only as part of Your Application, but not in connection with any Database Product that is distributed or otherwise made available by any third party.

The license is basically there to stop AWS from using it as part of ElastiCache (or any other cloud provider from doing the same).


Not fully correct: you can change the code and even redistribute it as part of your application, as long as your application is not a "Database Product" as defined in the license.


I was referring to the Redis Source Available License; I am sure that if you pay enough you can modify the source too.


You can modify it even if you don't pay, as long as your application is not a "Database Product".


What application is not a "Database Product", one way or another?


If you resell Redis, that's a DB product. If you sell widgets and use Redis for search, it's not. There are obviously some grey areas, but I thought it was relatively clear?


Any web application (like e-commerce, news, gaming...) probably won't be considered a "Database Product".


  - Preventing me from switching to it: once a tool (ie. pencil) is used for a stroke, it reverts back to mouse cursor (I'd rather have it stay as pencil and hit 1 for it to switch to a cursor)
There is a little lock icon next to the drawing tools which enables you to lock a tool, so it won't revert back.


Fun fact: I added this to Excalidraw for this exact reason! It was super annoying.


Ah thanks, I’ll give this a try!


Is OpenTelemetry ready for production use? My understanding is that this project is a merge of OpenCensus and OpenTracing, but it's still in beta and the documentation is really lacking. Does anyone have hands-on experience with this library?


The language SDKs are in beta and might not be production-ready, but the collector is production-ready. You can use any of the OpenTracing or OpenCensus instrumentation libraries while deploying the OpenTelemetry collector, and once the OTel SDKs mature, migrate from OC/OT to OTel, or not if you don't want to.


I've used it, and it's very similar to OpenCensus and OpenTracing. If you've used either, it's not much of a stretch to use OpenTelemetry.

That said, it is still in beta, and will be until Q4.
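For anyone curious what the SDK side looks like, here is a minimal tracing sketch with the Python SDK. Module paths moved around between beta releases, so treat the imports as illustrative of the general shape rather than exact beta-era names.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Wire up a tracer provider that prints finished spans to stdout.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("handle-request"):
        with tracer.start_as_current_span("query-db"):
            pass  # nested span; both are exported when they end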


We've been trying different tools, and we've found that Excalidraw (https://excalidraw.com/) satisfies our needs. It's relatively new, and although it's lacking a few features, it has proven really good for drawing diagrams and designing new solutions like UI mockups and the like.

