At Ubicloud, we mandate 100% line and branch coverage on every PR (https://github.com/ubicloud/ubicloud). We also have an E2E test suite that runs periodically and on every commit. We never felt like our tests were slowing us down; they actually make us faster, since we trust each payload more and can safely skip many manual checks that would otherwise be needed.
I recently started working for a startup. My main responsibility is developing networking features for our cloud on bare metal. We started with IPv6 by default, but we soon discovered that the biggest issue is "not" on the setup side. IPv6 setup is actually quite straightforward if you're starting from scratch. The biggest problem with IPv6 is that the ecosystem is not ready for it, at all. You cannot even use GitHub without a proxy!
Hence, we had to start implementing IPv4 support immediately, because VMs that only have IPv6 are almost useless for developers.
GitHub is one of the most idiotic IPv4-exclusive services. Microsoft and Azure have all the knowledge and equipment to make IPv6 available to practically any site, but GitHub seems afraid to try. They had IPv6 for a short while and turned it off later.
ZScaler is worse on a practical level. I have to disable IPv6 system-wide, or else I can't access internal services (it only routes IPv4). The crazy thing is that they call out VPNs for being archaic, yet force users onto an even more archaic technology.
Bless you for mentioning that ZScaler doesn't play nice with IPv6 - this gives me another possible cause for why a certain MSFT distributed app sometimes fails to install on our users' PCs when ZScaler is enabled, but always works with it off.
Luckily that does not seem to be an issue here. You only have to pay for a public IPv4 address; you still have a full IPv4 stack and can make outbound connections via NAT.
> Pricing and Availability
> These two new capabilities to the VPC NAT gateway and Route 53 are available today in all AWS Regions at no additional costs. Regular NAT gateway charges may apply.
Specifically:
> Regular NAT gateway charges may apply.
Which should say: "Regular NAT gateway charges apply"
Since you still pay for the traffic that is processed by the NAT gateway (per GiB).
That article makes it clear that the NAT Gateway needs IPv4
> The NAT gateway recognizes the IPv6 address prefix, extracts the IPv4 address from it, and initiates an IPv4 connection to the destination. As usual, the source IPv4 address is the IPv4 address of the NAT gateway itself.
If the NAT Gateway doesn't have an IPv4 address it won't work.
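For illustration, the translation the quote describes can be sketched in a few lines of Python, assuming the RFC 6052 well-known NAT64 prefix 64:ff9b::/96 that DNS64 deployments commonly synthesize addresses from (the helper name is mine, not AWS's):

```python
import ipaddress

# RFC 6052 well-known prefix: the last 32 bits carry the IPv4 address.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def extract_ipv4(addr: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 destination from a NAT64-mapped IPv6 address."""
    v6 = ipaddress.IPv6Address(addr)
    if v6 not in NAT64_PREFIX:
        raise ValueError(f"{addr} is not inside the NAT64 prefix")
    # Mask off everything but the low 32 bits, which hold the IPv4 address.
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

print(extract_ipv4("64:ff9b::192.0.2.33"))  # 192.0.2.33
```

The gateway then opens an IPv4 connection to that extracted address, sourcing it from its own IPv4 address as the quote says - which is exactly why an IPv4-less NAT gateway can't do the job.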
I recently tried to deploy GitLab from scratch on an IPv6-only network, and the initial experience was anything but smooth. I was met with an exception right in the console during the initial setup: GitLab attempted to obtain a Let's Encrypt certificate and immediately failed, as it doesn't listen on IPv6 addresses by default. A year ago, we (at work) faced similar issues when trying to deploy GlusterFS on an IPv6-only network, and it also failed. (I pushed for v6-only; my manager was not happy.) It's evident that while IPv6 may be the future, the present ecosystem doesn't seem fully prepared to support it.
For years, I have wanted to use Docker with IPv6 only, and I am really thinking about learning Go so I can write my own IPv6-only driver.
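In the meantime, Docker's built-in IPv6 support can at least be switched on via the daemon config. A minimal sketch of /etc/docker/daemon.json (the documentation prefix 2001:db8:1::/64 is a placeholder to replace with your own routable prefix):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "ip6tables": true
}
```

Note this gives the default bridge dual-stack addressing rather than the IPv6-only networking I'm after, and `ip6tables` may require a fairly recent Docker version - hence the itch to write a proper driver.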
Yeah, it's a real shit show when you get down to actually trying to utilize IPv6 in any scenario that needs legacy IPv4 access in a straightforward way.
I'm somewhat happy in that I've moved away from being way down at the low-level ISP/network side of things, so I may be missing something, but I don't see how we are ever going to elegantly transition away from IPv4 addresses. Everything just seems hacky and fragile in terms of trying to run a "pure" IPv6 environment, and be connected to the rest of the Internet.
I think that ISP-wide single-stack IPv6 deployments are the key. They put the legacy IPv4 internet behind a huge NAT while letting the IPv6 internet function natively. There is an IPv6 address range that represents the totality of the legacy IPv4 addresses, and IPv4 addresses are translated into it, so from the IPv6 side of the NAT, every IPv4 service looks like it has an IPv6 address. From the IPv4 side, it looks like your standard carrier-grade NAT, with huge numbers of users sharing IPv4 addresses.
That should simplify the network over dual-stack deploys, plus it makes providing services in "native" IPv6 the more attractive choice over NATted-to-death IPv4.
There are already some ISPs doing this. In Japan, where I live, one of the big three, NTT Docomo, transitioned to a deployment like this just last year.
The protocol is called MAP, and it is nearly stateless, because the IPv4 address and port information are encoded in the IPv6 address, which makes it more efficient than carrier-grade NAT. The downside is that it requires a special gateway on the customer side. The advantage is that the ISP can run an IPv6-only internal network with IPv4 only at the edges.
Oh, I was talking about NAT64/DNS64, which is what NTT Docomo is doing. Now I get to read about MAP... But how can the mapping be almost stateless? When packets are sent from IPv6 to IPv4, if there is significant exhaustion, multiple IPv6 addresses are going to map onto a single IPv4 address - unless the ISP has enough for everybody, in which case I don't see the motivation to hasten the move to IPv6?
Edit: Ah, it uses port numbers as extra bits to encode the original sender's address information. Smart!
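To make the "ports as extra bits" trick concrete, here is a simplified Python sketch of RFC 7597-style port partitioning, assuming the default 6-bit offset that keeps the well-known ports out of every port set (function name and parameters are mine):

```python
def ports_for_psid(psid: int, k: int, a: int = 6):
    """Yield the ports assigned to the subscriber with the given PSID.

    k is the number of PSID bits (sharing ratio 2**k subscribers per
    shared IPv4 address); a is the offset in bits.  Each of the 2**a - 1
    usable blocks of the 16-bit port space contributes 2**(16 - a - k)
    contiguous ports, and the PSID selects which slice of each block.
    """
    m = 16 - a - k                     # contiguous ports per block
    for block in range(1, 2 ** a):     # block 0 (ports < 1024) is reserved
        base = (block << (16 - a)) | (psid << m)
        yield from range(base, base + 2 ** m)
```

With k=8 (256 subscribers per shared IPv4 address), each subscriber gets 63 blocks of 4 ports, i.e. 252 ports scattered across the range - and since the PSID is recoverable from any of those port numbers, the gateway needs no per-flow state.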
Exactly this. As more ISPs go through this process, service providers are incentivised to provide v6 services or get poor service. As things continue in that pattern, v6 becomes dominant and v4 becomes the afterthought to be neglected on the sideline (like v6 is currently for many ISPs with lots of v4 resources)
GitLab supports IPv6, just not out of the box.
My private GitLab instance is v6-only. I had problems with updates in the beginning because the official repo was not reachable over v6, but I think they fixed that some years ago. (I am no GitLab expert, but I think my type of install is called Omnibus.)
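For what it's worth, the Omnibus package can be told to listen on v6 via /etc/gitlab/gitlab.rb; a minimal sketch (hostname is a placeholder, and `gitlab-ctl reconfigure` applies it):

```ruby
# /etc/gitlab/gitlab.rb -- make the bundled nginx listen on IPv6 as well
external_url 'https://gitlab.example.com'
nginx['listen_addresses'] = ['0.0.0.0', '[::]']
# Let's Encrypt validation can then reach the instance over either family
letsencrypt['enable'] = true
```

On a v6-only host you could drop the '0.0.0.0' entry entirely.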
Congratulations on the launch, Sai. Having worked with him over the years, I know that Sai understands what Postgres migrations entail. I have seen him deal with countless migrations in and out of our services. I am excited to see what they have built.
The problem with blockchains is that their transaction throughput falls far short of what today's "scale" demands - Bitcoin, for example, peaks at roughly 7 transactions per second, while a database operating at scale may need to process millions of rows per second.
For a moment, I thought it was a fully functional calculator. However, I couldn't go beyond basic math operations on the predefined values. It would be awesome if you could provide some guidelines or a short statement on the basic purpose of the system.