Hacker News: rayhaanj's comments

I'm currently using joker.com after also migrating off of gandi.


I think you mean "running your own recursive resolver". An authoritative server is one that is authoritative for some zone (e.g. example.net), whilst a recursive resolver is one that walks from the root of the DNS hierarchy down to the leaf you queried.

It is probably quite a bit slower though, needing round trips at each stage of the resolution, which is also likely a reason these public resolvers get so much use (latency improvement via caching).
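To make the root-to-leaf walk concrete, here's a toy sketch of the delegation chain a recursive resolver follows for a query (no real network I/O, just the successive zone cuts; the name is an illustrative example):

```python
# Conceptual delegation walk a recursive resolver performs:
# it asks the root for "net.", the net servers for "example.net.",
# and so on down to the queried leaf.
query = "www.example.net"
labels = query.rstrip(".").split(".")

chain = ["."]  # start at the root zone
for i in range(len(labels) - 1, -1, -1):
    chain.append(".".join(labels[i:]) + ".")

print(chain)  # -> ['.', 'net.', 'example.net.', 'www.example.net.']
```

Each hop in that chain is a separate round trip on a cold cache, which is where the extra latency comes from.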


> It is probably quite a bit slower though, needing round trips at each stage of the resolution

The average load time for a website is about 2.5 seconds. The added load time from running your own recursive resolver, incurred only the first time a site is loaded, would be around 50ms, roughly a 2% increase.

DNS resolving is not a major aspect of a typical website's load time. If you want to speed things up, run a local proxy with cached versions of all the popular web frameworks and fonts, and have it constantly populated by a script running in the background. That will save you much more than 2% on first load.
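The back-of-the-envelope arithmetic checks out (the 2.5s and 50ms figures here are the ones quoted above, not fresh measurements):

```python
# Rough estimate of how much a one-time cold recursive lookup
# adds to page load. Figures are taken from the comment above.
avg_page_load_ms = 2500   # average website load time (~2.5 s)
recursive_lookup_ms = 50  # cost of a full cold recursive resolution

overhead = recursive_lookup_ms / avg_page_load_ms
print(f"{overhead:.0%} added to the first load")  # -> 2% added to the first load
```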


I just did some measurements and am impressed on both fronts: DNS recursive resolution is faster than I anticipated, and page load times for well optimised sites are also very fast (sub 0.5s). Here's some data:

Recursively resolve bbc.com: 18ms (https://pastebin.com/d94f1Z7P)
Recursively resolve ethz.ch: 17ms (https://pastebin.com/x6jSHgDn)
Recursively resolve admin.ch: 39ms (https://pastebin.com/DUTg8Rit)

Page load in Firefox:
bbc.com: DOMContentLoaded ~40ms, page loaded ~300ms
reuters.com: DOMContentLoaded ~200ms, page loaded ~300ms
google.com: DOMContentLoaded ~160ms, page loaded ~290ms

So it's quite reasonable to do full recursive resolution, and you'll still benefit from caching after the first time a name is looked up. One other idea I had but never looked into was, instead of throwing out entries after TTL expiry, refreshing them and keeping them cached. I have no idea whether BIND/Unbound can do that, but you could probably build something with https://github.com/hickory-dns/hickory-dns to achieve it.
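The refresh-instead-of-evict idea could be sketched roughly like this (a toy in-memory cache, not hickory-dns code; `resolve` is a stand-in for whatever lookup function you plug in):

```python
import time
from typing import Callable, Dict, Tuple

class RefreshingCache:
    """Toy DNS-style cache: when an entry's TTL has expired, re-fetch it
    and keep it warm instead of dropping it. A real prefetcher would
    refresh in the background *before* expiry; this refreshes on access."""

    def __init__(self, resolve: Callable[[str], Tuple[str, float]]):
        self.resolve = resolve  # name -> (answer, ttl_seconds)
        self.entries: Dict[str, Tuple[str, float]] = {}  # name -> (answer, expiry)

    def get(self, name: str) -> str:
        answer, expiry = self.entries.get(name, (None, 0.0))
        if answer is None or time.monotonic() >= expiry:
            # Expired (or never seen): refresh rather than evict.
            answer, ttl = self.resolve(name)
            self.entries[name] = (answer, time.monotonic() + ttl)
        return answer

# Usage with a fake resolver standing in for a real recursive lookup:
cache = RefreshingCache(lambda name: ("192.0.2.1", 60.0))
print(cache.get("example.net"))  # -> 192.0.2.1 (fetched, then cached for 60s)
```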


The page you get when DOMContentLoaded fires is a white page with no content. For sites like bbc.com, the page is only "loaded" in a very technical sense.

Google PageSpeed (https://pagespeed.web.dev/analysis/https-bbc-com/yxcpaqmphq?...) uses two other terms: First Contentful Paint, the first point in the page load timeline where the user can see anything on the screen, and Largest Contentful Paint, the render time of the largest image, text block, or video visible in the viewport, relative to when the user first navigated to the page. For bbc.com, those sit around the 1-second mark.

Another metric is Time to First Byte, the time between the request for a resource and when the first byte of the response begins to arrive. For bbc.com that is 300ms.


> It is probably quite a bit slower though, needing round trips at each stage of the resolution

My experience does not align with this. My Unbound instances cache only what I am requesting, and I have full control over the cache memory allocation, min-ttl, serving of expired entries, re-fetching, cron jobs that look up my most common requests hourly, etc. I do not have to share memory with anyone outside of my home. Just about anything I request on a regular basis resolves in microseconds and always shows as 0 milliseconds in dig. I've run performance tests against my Unbound and all the major recursive DNS providers, and my setup always wins for anything I use more than a few times a month, or more than a dozen times in a year.
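For reference, the knobs described above map onto real Unbound options along these lines (values here are illustrative, not the commenter's actual config):

```
server:
    # Re-fetch popular cache entries before their TTL runs out.
    prefetch: yes
    # Also prefetch DNSKEYs needed for validation.
    prefetch-key: yes
    # Floor on cached TTLs (trades freshness for fewer lookups).
    cache-min-ttl: 300
    # Answer from stale cache while refreshing in the background.
    serve-expired: yes
    serve-expired-ttl: 86400
```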

For the cases where I am requesting a domain for the first time, the delay is a tiny fraction of the site's overall page load, as belorn mentioned. I keep query response logs, which also record the response time for every DNS server I have queried. I also use those logs to build a table of domains whose NS and A records I look up hourly, building the infrastructure cache in addition to the resource record cache.

Now where there would be latency is if I had to enable my local Unbound -> DoT over Tinc VPN -> rented-server Unbound -> root servers. That would only occur if my ISP decided to block anyone talking to the root servers directly, and my DoT setup would only be in place while my legal team gets ready to roast my ISP and I start putting up billboards. That would of course be a waste of time and money when I could just get the IPs of censored sites from a cron job running on multiple VMs and shove them into my hosts file. This could even be a public contribution in a git repo and automated on everyone's machines.


There is life outside major population centers. I have pings in excess of 200 ms to many major websites; if every DNS lookup requires doing several queries with 100-300 ms of waiting for each one, the web becomes unusable. From reading HN, users from e.g. New Zealand run into similar issues.


I too am in a rural area, just not as rural as NZ. My setup would also be 0ms in NZ and AU for 98+% of my requests. The real impactful delays come from the excessive requests browsers have to make to bloated frameworks, excessive cookies and third party integrations, ads, videos and so on. uBlock can clean some of that up but not all of it.



> Users do not have to worry about security, SRF quotes Weber: "No data is transferred. There is only a link to the correct page."

Yes, but how do you know you've resolved the right thing? It's still vulnerable to a compromise of another TLD.


I think you meant kilometres per second, not per hour.


Also, 4_294_901_931 is actually just small enough to fit in a u32, whose max value is 4_294_967_295.


Perhaps hex notation helps to see things better: 0xFFFF00AB. It's actually nearly 64K below the maximum. 0xAB is 171.
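The arithmetic here is easy to sanity-check:

```python
U32_MAX = 2**32 - 1      # 4_294_967_295, i.e. 0xFFFFFFFF
value = 4_294_901_931    # the value from the article

assert value <= U32_MAX  # fits in a u32
print(hex(value))        # -> 0xffff00ab
print(U32_MAX - value)   # -> 65364, just under 64K (65536) below the max
print(value & 0xFF)      # -> 171, the low byte 0xAB
```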


Ha, this is exactly right! It's 100% in u32 range and part of the private use reservations, so it wouldn't appear in the public internet table unless such values aren't filtered by the provider and/or it's being used locally.

Originally I went to write your exact comment, because at a glance it seemed the value in the article should fit, but then I must have done the comparison check backwards: I started pasting the 2^32 value into the rest of my comment and concluded it was actually too large, when really I had just jumbled things up.

Thanks for setting my mind straight!


There has been some work on encrypted server name indication: https://www.cloudflare.com/learning/ssl/what-is-encrypted-sn...


I've been working on a BGP implementation in Rust over the last few years: https://github.com/net-control-plane/bgp/

It's still in a proof of concept stage and nowhere as complete as GoBGP though.


Thank you for having a front page that actually says what BGP is.


I'm not sure if this will help explain the culture or not, but this is from NTT (the Japanese telecommunications company), so they probably assumed that everyone knew what BGP is (since you probably won't encounter it otherwise).

Also, they have a Rust version: https://github.com/osrg/rustybgp


Searching around, I found someone who benchmarked gobgp, rustybgp, and some others, and gobgp didn't seem to do all that well compared to the other implementations.

https://elegantnetwork.github.io/posts/comparing-open-source...


rustybgp seems to do pretty well (see post #5 of the blog series) and is created by the same author(s) as gobgp. Additionally, it looks like the configs are transferable between gobgp and rustybgp.


Yup, even the BGP-4 RFC 4271 is from 2006 https://datatracker.ietf.org/doc/html/rfc4271


Also, BGP-MP added so many new options with regard to NLRI that it can be used for many, many different use cases.

For instance, BGP can be used as a MAC learning mechanism (EVPN), it can be used to communicate MPLS LSPs, and it can even be used for source routing.


I have my personal LIR with RIPE, and they do allow signing up as an LIR without being a company. As an individual you need to provide proof of identity via government-issued ID and pay 21% Dutch VAT.

When I signed up I got a /22 of v4 space and a /29 of v6 space, although for new signups I believe there's now a waiting list to get a /24, and that's it for v4.

