In Vietnam, around every holiday or major event the internet magically slows to a crawl, and the government-run papers declare that sharks have attacked the undersea cables. It's almost comical how well timed the 'attacks' are...
After living in Vietnam for 6 years I can second this. Even my Vietnamese team jokes that "sharks bite" before each major national holiday. Fortunately, I've also noticed that the effects are not as disastrous as before. Previously, it would be impossible to keep a VPN up to a US server, but now I can still get high speeds and low latency through the "cable cuts."
In the mid 2000s I was in the Middle East and used the internet to connect back to the United States for conducting business. One day almost every site hosted in the States became unreachable. It turned out that a naval vessel had dropped anchor, and Murphy's Law put the anchor right on top of some undersea fiber cables. It was called a freak accident because it was statistically so unlikely to happen.
It took a while to figure out what had happened, and the internet was out for a couple of days. Considering service was restored in 30 minutes in this case, we are getting a lot better at handling severed cables.
> Considering service was restored in 30 minutes in this case, we are getting a lot better at handling severed cables.
30 minutes?
This is absolutely mind-boggling to me. 30 minutes -- and you can identify exactly where the cut was, deep in the water, access it, and fix it in the water? To me that sounds like a 30-month job, not a 30-minute job.
Anyone here with insight into this fix-process: could you please shed light on how this is done and how it can be so fast?
But then you still have to go there and fix it. I can't even get a pizza delivered in 30 minutes so I suspect the fix did not actually repair any cables.
The cable is not fully replaced. They lift the cable up to a boat and repair it (replacing the damaged bit and joining it to the original) and lower it back down.
They don't typically replace the cable; they splice in a new section and seal the joint with epoxy, which usually works well. Most of the time goes into scheduling the boat and equipment; the repair itself is likely a couple of hours at most, plus 12-24 hours for the epoxy to cure.
If it took 30 minutes that's not BGP, that's some people at ISPs manually provisioning new circuits on DWDM systems, using a moderate amount of automation tools, and bringing the circuits up facing routers at both ends. Or doing something like creating a few new VLANs in existing larger 100/200/400Gbps intercity transport links, and trunking those VLANs through to routers on each end. A topology change in BGP is a couple of seconds.
BGP itself has no real facility for dealing with circuit congestion.
So, I am willing to bet that dynamic routing was certainly in place, but the secondary/eBGP multipath destinations were full when the primary path went down.
And that necessitates manual action in instances where you aren't doing MPLS-TE, or don't have enough standby capacity, or both.
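To make that concrete, here's a toy Python sketch of why a BGP-style failover alone doesn't save you when the backup path lacks capacity. All names and numbers are made up for illustration; this is a simplified model, not real router behavior:

```python
# Toy illustration: BGP picks a best path by topology/policy, not by load.
paths = [
    {"name": "primary-subsea", "as_path_len": 2, "capacity_gbps": 400, "up": True},
    {"name": "backup-subsea",  "as_path_len": 3, "capacity_gbps": 100, "up": True},
]

demand_gbps = 250  # traffic that was riding the primary path

def best_path(paths):
    # Simplified BGP decision: shortest AS path among available routes.
    candidates = [p for p in paths if p["up"]]
    return min(candidates, key=lambda p: p["as_path_len"]) if candidates else None

# Cable cut: the primary goes down, and BGP converges in seconds...
paths[0]["up"] = False
chosen = best_path(paths)
print("new best path:", chosen["name"])

# ...but BGP never looked at capacity, so the backup may be congested.
if demand_gbps > chosen["capacity_gbps"]:
    print(f"{demand_gbps} Gbps into a {chosen['capacity_gbps']} Gbps path: "
          "packet loss until someone manually provisions more circuits.")
```

The topology change is handled automatically; the 30 minutes of manual work is about finding somewhere for the displaced traffic to actually fit.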
An event like this, where multiple fibers are disrupted and magically fixed less than an hour later, just smells like the NSA, one of the other Five Eyes, or a friendly telco installing a fiber-optic beam splitter.
I don't have any traceroutes so this is speculation, but they probably just route around it if it only takes 30 minutes. I was under the impression that a beam splitter can be installed without disruption. If they fucked up and broke the cable installing a beam splitter, a 30-minute fix is still pretty impressive.
I'm by no means an expert but I did some datacenter work. I wonder how it would be possible to install a beam splitter without disruption? Don't you need to unplug the fibers, put the splitter in, and then plug the fibers back into the splitter? I was searching YouTube but couldn't find a video.
I agree, this might be possible, but I wonder how good the quality is then? A splitter will give you 50% of the light. That limits the length of the cable too, doesn't it? How many errors will there be when you bend the cable and get maybe 15% of the light of encrypted traffic, with no way to signal a retransmission?
On run-of-the-mill singlemode fiber, loss is roughly 0.5 dB/km. Splitting off half of the light (and you don't need half; much less is fine) would add 3 dB of loss.
Fiber transceivers automatically negotiate the power needed to sustain the link. So in the case of a datacenter or city network, you could do a split-off without introducing too much loss or too high an error rate.
Trans-oceanic fibers are a different beast, but I guess someone figured it out.
The light is kept inside the fiber by the change in refractive index in the radial direction of the fiber, so there is total internal reflection. But if you bend the fiber enough, the reflection is no longer total and part of the beam escapes.
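A rough link-budget sketch, using the 0.5 dB/km figure quoted above (modern long-haul fiber does better, around 0.2 dB/km; the launch power and receiver sensitivity here are assumed for illustration). It shows how a 3 dB tap can push a marginal span over budget:

```python
# Back-of-the-envelope optical link budget. Numbers are illustrative only.
def received_power_dbm(tx_dbm, km, loss_db_per_km=0.5, tap_db=0.0):
    return tx_dbm - km * loss_db_per_km - tap_db

tx = 0.0           # 0 dBm launch power (assumed)
sensitivity = -28  # hypothetical receiver sensitivity in dBm

for tap in (0.0, 3.0):      # 3 dB = half the light split off
    for km in (40, 50, 56):
        rx = received_power_dbm(tx, km, tap_db=tap)
        ok = "OK" if rx >= sensitivity else "link down"
        print(f"{km:3d} km, tap {tap:3.1f} dB -> {rx:6.1f} dBm  {ok}")
```

With auto-negotiated transmit power and some margin, a short metro span absorbs the tap; a span already near its budget does not.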
LOL... I'm sure the US boats just keep 'accidentally' being around when fiber is cut. You do know the US/NSA actively taps global fiber networks. I assume they cut it in one place "accidentally" while they tap it in another to avoid being detected.
Taps are almost certainly being done by submarines -- there are even books about it[1].
Why would surface ships be around, blatantly giving away what is happening? The likeliest answer is that they screwed up with their anchoring procedures.
My guess would be that they create an excuse in one place while doing the work elsewhere. I don't picture a boat sitting on top of a submarine performing the procedure. The boat is the excuse that renders any monitoring of the signal useless when the tap is done.
The US is the biggest and has done it the longest.
It's easy to act like the US is the superpower. But when you look at it, no other 'superpower' holds a candle to our 800 foreign military bases.
Performing any worthwhile tap that way would show a measurable difference to anyone monitoring it. When you cut the line and connect at the same time you would be resetting the baseline to be measured against when someone starts monitoring again.
Anchors cutting internet cables happen all the time; it's a weekly occurrence. If you want a visual image: imagine a plane laying down a cable over a forest. It is a heavy cable designed to sink a bit into the canopy, but it is likely to still dangle in many places.
Another super-powerful plane now lays down a one-ton hook on a big-ass chain and drags it around the forest until it settles on something strong enough to stop it. That makes the coincidence less remarkable, especially when you consider that many cables land close to commercial shipping routes.
But mostly yes. It is interesting to read a bit about the submarine cables that our infrastructure depends on so heavily.
It is, for instance, interesting to know that their landing points are considered strategic locations, and that at least the US and Russian navies have trained at eavesdropping on them using submarines. Just throwing that in, in case you are wondering why you don't hear much about the X-37's ability to rendezvous with and "repair" satellites.
Wasn't there a woman in Armenia or somewhere who cut a cable and the internet for the whole country dropped? I find it difficult to believe what I just wrote, but I recall it.
Hummm, from reading the article it doesn't seem like she just "cut a cable in her house" but rather vandalized a major fiber-optic cable, not on her property, while trying to scavenge copper.
I'm definitely leaving her be! Do you think she's going to come and read hacker news comment and be offended? We don't even know who she is. I'm just correcting facts here.
Off the top of my head I could believe that a good place to lay cable (by some definition) might be a good place to drop an anchor. The more shallow the water, the less cable you need.
Cable landing stations are generally a bit further away from ports and such. The cables are also heavily armored near the shore for this particular reason. The folks laying the cables will also put energy into outreach with local fishing organizations to let them know about the cables (and they're charted).
Underwater volcanoes (and the rock slides they trigger) are the biggest threat to cables in the deep ocean. The cables there are much thinner (less armoring than near the shore). The other fault encountered in the deep (where there's no anchor risk) is amplifiers failing on the cable, which is a long process to fix, since a ship has to be sent out.
EDFAs: erbium-doped fiber amplifiers. They're optical signal amplifiers. Due to the attenuation of the fiber, they need to be installed every so often along the cable, under the sea. Since the equipment is cheap in comparison to the cost of a repair, modern undersea amplifier boxes often contain multiple amplifiers and switchgear to mux over to a spare after one fails.
This. If you look at an oceanic cable, every so many kilometers you'll see a bump in it: that's the amplifier. There are also junction points where cables can split off to serve locations along the route. Newer cables have the ability to dynamically reconfigure these junctions (wavelength switching) rather than leaving them static (the older way).
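As a rough sketch of why there are so many of those bumps: with long-haul attenuation around 0.2 dB/km and a per-span loss budget, a trans-Pacific route needs on the order of a hundred amplifiers. The span budget and route length below are illustrative assumptions, not figures from any real cable:

```python
# Rough sketch: how many undersea amplifiers a long route needs.
route_km = 9000          # roughly trans-Pacific (illustrative)
loss_db_per_km = 0.2     # typical modern long-haul attenuation
span_budget_db = 15      # hypothetical allowable loss between amplifiers

span_km = span_budget_db / loss_db_per_km      # 75 km between repeaters
amplifiers = int(route_km // span_km)
print(f"~{span_km:.0f} km spans -> about {amplifiers} amplifiers on the route")
```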
very interesting, thanks both. I'm surprised something travelling at the speed of light over a relatively short distance attenuates at all. I wonder if these amplifiers are purely optical (like mini magnifying glasses), or if they are powered (requiring running adjacent power cables)
Power is involved too. When I first visited a cable landing station I got to learn about the process to de-energize / energize the cable and coordinate with the ship and the remote side. Pretty involved but really neat to see. The stories are interesting too. One cable landing station on a small island the main tech would bring his family in during a typhoon to shelter there for days since it was a fairly protected site that had plenty of fuel for the generators.
Many years ago, when I worked at Google, we'd joke that every time fiber was 'cut' it was some rando government agency installing a tap. This was before internal traffic was encrypted.
I've heard a really interesting war story from a Googler who was in the NOC when the Japanese earthquake that took out their nuclear power stations hit - and cut the fibres on the sea bed. He reckons they were probably the first people on the planet to know something was happening.
Apparently their immediate response was just to reroute everything through the cables on the other side of the country that were still up, and let their bosses know it was happening. They didn't slow down to ask for permission to light up half a country's worth of bandwidth; they just did it.
> Apparently their immediate response was just to reroute everything through the cables on the other side of the country that were still up, ...
I'd be pretty surprised to learn that was a "manual" decision made by humans.
I would expect -- especially in Google's case -- that the re-routing happened automatically (and nearly instantaneously, ~50ms) and the NOC engineers were simply "notified" about it.
> If you tour one there are rooms they won't let you in.
I mean, I'd kind of expect not to be allowed in everywhere in such a facility for many reasons. You can't go anywhere you want in all kinds of public infrastructure (water treatment plants, power plants, etc.).
Do we? I'm under the impression they can install taps on fiber without disrupting service by bending the fiber just precisely right that a small amount of light bleeds out, or something like that.
They all do, and in most of them there are obvious errors in processes or rights assignments. It is pretty rare to come across a company that takes the threat from within seriously. That's the whole reason Snowden could do what he did, and if the NSA gets it wrong, there is a fair chance that your average corporation has faults as well.
History lessons are pertinent to the degree you represent the facts therein accurately. Perhaps this is where the confusion arises. You claim the NSA has the computational superiority to crack "whatever they want". If this is true, I posit that such an invention would be available to the private sector, as it would represent an immense technological innovation -- and withholding it would be a greater detriment than releasing it.
I'm sure I read somewhere a few weeks ago on HN (unsure if article or comments) that if the world's total electricity output were focused on this one task, and given it would take 0.5 volts to flip one bit, it would take around 20 years to crack an AES key (I forget whether 128 or 256), or 10 years using a quantum computer. Those are vague numbers from memory, but I think someone actually did the maths. It was mind-bogglingly fascinating, if anyone else remembers and could point me in the right direction. Wish I had bookmarked it.
Let me try to write a similar explanation in my own words...
Many people have absolutely no idea how powerful exponential growth is, and no idea how large 2^128 and 2^256 are. The security of symmetric cryptography doesn't depend on the "absolute" computational cost of the algorithm - the security is created by the sheer number of operations alone, so that even if the cost of a single operation is negligible, the system remains secure.
Let's break some symmetric encryption algorithms.
We assume the hardware required to run a decryption routine is as simple as a binary counter, one of the simplest circuits in digital logic - it just counts numbers. (Of course, a real decryption routine requires far more resources, but let's make it infinitesimal for demonstration purposes.) And it takes one picosecond (10^-12 s) per count, so the equivalent clock frequency is 1000 GHz. Let's call this machine the "Doomsday Counter (TM)". Built with alien technology, this machine costs 1 dollar.
How long does it take to crack DES (56-bit)? 20 hours. This is what the EFF and distributed.net did in 1999: they used a cracking machine with thousands of ASIC chips and a volunteer team of thousands of PCs. They exposed the U.S. government's lies about how DES was secure and how strong crypto was a threat to national security, and forced NIST to start the AES competition for real security. The victory of the first crypto war.
But how long does the Doomsday Counter take to crack a 64-bit encryption algorithm? 213 days. It's getting much longer, but it's still doable: if you build 213 Doomsday Counter units, you can crack it within a day. Okay, so now we have 213 Doomsday Counter machines running in parallel. The equivalent total clock frequency is 213,000 GHz, or 213 THz, and it costs 213 dollars (thanks to the aliens).
Then, how long does it take for our 213 Doomsday Counters to crack 80-bit encryption - which, at the beginning of this century, was still a reasonable standard of security? 180 years. Oops. Clearly, we need to scale up our operation further. Let's get 1 million (10^6) of these Doomsday Counters, which costs us 1 million dollars and is equivalent to 1,000,000 THz, and try again. Now we are able to crack it within... 14 days.
Then, let's try a serious target: Triple-DES (112-bit), three layers of 56-bit DES encryption, which was used as a stop-gap solution when DES was broken but AES was not ready yet. Although it's triple, due to the mathematics (a meet-in-the-middle attack), it's actually only equivalent to two layers of DES, not three, so it's 112-bit. How long does it take for our 1 million Doomsday Counters to crack it?
164,646,653 years.
Clearly, 1 million Doomsday Counters, each attempting a trillion keys per second, is not enough. Let's purchase 165 trillion units. Now it costs 165 trillion dollars, more than the GDP of the entire world combined. And don't forget, even a single Doomsday Counter needs alien technology to build. With that, we are finally able to build a supercomputing center that can crack Triple-DES within 365 days.
Now let's do the real challenge - crack AES-128, with 165 trillion units of Doomsday Counters. How long does it take? 65,395 years.
And AES-256?
20,000,000,000,000,000,000,000,000,000,000,000,000,000,000 years.
The end of the story. This is why people who believe "hardware acceleration" threatens the security of symmetric encryption have no idea how secure symmetric encryption is.
And for reference, as an indicator of the current level of human technology - what is the most powerful and most expensive counter human civilization has ever built? The Bitcoin network. The Bitcoin miners all over the world currently have a total hashrate of 101,057,457 THz. If all Bitcoin miners were codebreakers (they are not; decryption is more computationally expensive than hashing), their computational power would be roughly equivalent to 101 million Doomsday Counters, capable of breaking a 92-bit encryption key within two years, or a 98-bit encryption key within 100 years.
And all we can say is: that's the upper limit of human civilization. 128-bit encryption is perfectly fine. Although we can never be sure whether AES-128 is really 128-bit, we have enough confidence to continue using it for a few decades.
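If you want to check the arithmetic above, here's a minimal Python rendition of the Doomsday Counter: 10^12 keys per second per $1 unit, pure counting, exhausting the whole keyspace:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def crack_seconds(bits, units):
    """Seconds to exhaust a 2**bits keyspace at 1e12 keys/s per unit."""
    return 2**bits / (1e12 * units)

cases = [
    ("DES",     56,  1),             # ~20 hours
    ("64-bit",  64,  1),             # ~213 days
    ("80-bit",  80,  213),           # ~180 years
    ("80-bit",  80,  10**6),         # ~14 days
    ("3DES",    112, 10**6),         # ~165 million years
    ("3DES",    112, 165 * 10**12),  # ~365 days
    ("AES-128", 128, 165 * 10**12),  # ~65,000 years
    ("AES-256", 256, 165 * 10**12),  # ~2e43 years
]

for name, bits, units in cases:
    s = crack_seconds(bits, units)
    if s < SECONDS_PER_YEAR:
        print(f"{name:8s} with {units:.0e} units: {s / 86400:,.1f} days")
    else:
        print(f"{name:8s} with {units:.0e} units: {s / SECONDS_PER_YEAR:,.0f} years")
```

Run it and every figure in the story above falls out.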
Now introduce quantum computers into this picture. All encryption algorithms will be broken, right? No! Quantum computers do not solve hard search problems instantaneously by simply trying all the possible solutions at once. For a quantum computer to solve a problem, the problem must have an exploitable mathematical structure. For example, integer factorization, discrete logarithm over a prime field, and discrete logarithm over an elliptic curve - which cover 99% of the public-key encryption algorithms deployed today - all have a structure that can be attacked by Shor's algorithm. Shor's algorithm runs in time polynomial in the size of the input (roughly O(log(N)^3) for a problem of size N), and this is serious - it effectively "linearizes" your exponential problem, making quantum computers exponentially faster on these problems.
But surprisingly, for symmetric encryption, quantum computers don't do much at all! Yes, symmetric encryption has an exploitable mathematical structure as well: Grover's algorithm showed that if you need to invert a black-box function f(x), instead of O(N) operations, a quantum computer can do it in only O(sqrt(N)) operations. Thus AES-128 (2^128) effectively becomes a 64-bit cipher (2^64), and is vulnerable to quantum computers! That looks like a lot, but it's only a modest speedup: simply upgrading AES-128 to AES-256 is enough to fix it, and that only makes the existing system modestly slower (AES-256 uses 14 rounds versus AES-128's 10) - not a lot to pay to defend yourself against a quantum machine.
In the subfield of cryptography known as post-quantum cryptography, almost all major work concerns public-key cryptography - of all the things you'd need to worry about with a large quantum computer, symmetric encryption is the least of them.
---
On the flip side, how many resources does it take to store an AES-128 secret key? Two 64-bit integers, or 16 bytes, or 10 English words from a dictionary of 7,000 words, or 25 rolls of two 6-sided dice. How about an AES-256 key? Four 64-bit integers, or 32 bytes, or 20 English words, or 50 rolls of two 6-sided dice. Also, going from 56-bit DES to 128-bit AES only costs 2.28x more CPU time on your computer. This is the beauty of encryption: a linear increase in resources for the defender corresponds to an exponential increase in resources required by the attacker. So brute-forcing the decryption of a message simply doesn't make sense at all, but hacking (or stealing) your computer does.
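The entropy bookkeeping behind those figures, for anyone who wants to check it (the 7,000-word dictionary size is the one assumed above):

```python
from math import log2
import secrets

# Each word from a 7,000-word list carries log2(7000) ≈ 12.8 bits;
# each roll of two 6-sided dice carries log2(36) ≈ 5.17 bits.
print("10 words:", round(10 * log2(7000), 1), "bits")  # ~127.7, ≈ AES-128
print("25 rolls:", round(25 * log2(36), 1), "bits")    # ~129.2
print("20 words:", round(20 * log2(7000), 1), "bits")  # ~255.3, ≈ AES-256
print("50 rolls:", round(50 * log2(36), 1), "bits")    # ~258.5

# And a full-strength AES-128 key really is just 16 random bytes:
key = secrets.token_bytes(16)
print("key:", key.hex())
```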
This is true as long as the algorithm itself lives up to its security claim, i.e. "it works as advertised": 128-bit AES really has 2^128 possibilities to bruteforce, not 2^80 - which we can never be sure of, and which cannot be proved - but we are fairly confident that any major breakthrough is extremely unlikely. Also, this is why 256-bit AES is standardized despite 128-bit already being much more than enough - cryptographers are one of the most conservative groups of people. And in fact, AES has already been "broken", with its keyspace reduced from 128-bit to 126-bit - which means its keyspace is now only 25% of what it's supposed to be. But if you understand how large 2^126 is, you'll see that this is irrelevant to practical applications.
The most brutal dictators in the world can build guns, bombs, tanks, and planes, but they cannot decrypt a message if the key is destroyed, no matter what. It also transcends time: if you had a Commodore 64 in the '80s, you could have written an AES-128 encryption routine in MOS 6502 assembly - it would have taken only a few hours to encrypt a floppy disk - and that disk would still be secure today, and remain secure tomorrow, against the most powerful government in the world. (Unfortunately, most people at that time did not believe 128-bit encryption was necessary - Diffie and Hellman were the biggest advocates of 128-bit encryption and vocal critics of the government's 56-bit DES.)
"""
For commonly used 1024-bit keys, it would take about a year and cost a "few hundred million dollars" to crack just one of the extremely large prime numbers that form the starting point of a Diffie-Hellman negotiation. But it turns out that only a few primes are commonly used, putting the price well within the NSA's $11 billion-per-year budget dedicated to "groundbreaking cryptanalytic capabilities."
"""
My understanding is that they record encrypted traffic too. They can't read any of it - yet.
But they're betting one day either a security vulnerability will be discovered, or computers will be fast enough to attack the encryption and allow them to read the data. So even though it's unreadable today, it might be in 10 years.
Even 5EB would be a stretch for 2013. 5ZB is flat-out impossible. As another poster points out, that's years' worth of total worldwide drive shipments (most sources put it at less than 1ZB in 2013). Large buyers are further constrained by the fact that their demand can cause price spikes even at much lower percentages of the total. Not even No Such Agency has that kind of budget. The Utah facility also isn't physically big enough for that figure to hold. I work on large storage systems at one of those large buyers, and I've toured one of the several data centers where ours live. NSA's Utah data center looks to be on approximately the same scale, not orders of magnitude bigger. It's further plagued by power problems, which is another constraint on total size.
So I looked into that quote from the NSA director. What was actually said, apparently, was that the center was designed to hold up to 5ZB, not that it actually did. That seems to be a design based on some extremely optimistic assumptions about future drive density, power consumption, and cost. Needless to say, those assumptions were a bit silly at the time and have only seemed more so in retrospect.
For platter or SSD drives, sure. Some forms of magnetic tape storage can get up to 300 TB per cartridge though, which can scale up to petabytes in the right config.
Still ridiculous for information that is worth less and less over time.
300 TB per cartridge in 2013? I think 10 TB per cartridge was pretty high around then. That's still around half a billion tape cartridges, which are also very slow to read from and write to.
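The arithmetic, for what it's worth (5 ZB and the 10 TB/cartridge figure from the comments above):

```python
ZB, TB = 10**21, 10**12
cartridges = 5 * ZB / (10 * TB)
print(f"{cartridges:,.0f} cartridges")  # 500,000,000 -- half a billion
```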
Do you have a source for that 100 million figure? It strikes me as awfully low considering how many personal computers must surely exist in the world and how often they'd be replaced without reusing the drives, not even factoring in servers.
I remember hearing a speech by a quantum computing researcher that was primarily funded by the NSA. He included an anecdote about how “they would prefer quantum computing didn’t exist, but if it’s going to exist - they want the first one”.
Not even limited to 'the government'. Improperly sanitized network gear shows up in second-hand markets all around the world. Happened at a former employer of mine and a 'finder' attempted to extort us over it. VPN PSKs on the equipment were still in use in the field (no PFS either, so years of captured content could ostensibly have been decrypted).
Even equipment that appears to have been cleared out is probably hiding secrets in flash. The vendor of the equipment in this case had a separate command to wipe file contents. Deleting files just unlinked them in the flash fs.
Yep, I personally bought a Cisco firewall off of eBay several years ago that still had its entire configuration on it, including the PSKs for several IPSec VPN connections as well as SNMP (v2) communities, weak "type 7" hashes for local user accounts, the shared secrets for a pair of RADIUS servers, and so on.
Pretty much all of them (with the exception of the VPN PSKs) were sufficiently "generic" that I was convinced they weren't device-specific, i.e., they were probably shared across many such devices.
According to the login banner, the firewall came from a casino.
I'm certain that my experience was not a unique one.
This is a claim that has been made about the Total Information Awareness program, its offshoots, and specifically the NSA's big datacenter that was in the news some years ago: one of the things the NSA is doing is collecting all the data they can in the hope they can make sense of it later, even if they can't now.
Need to remember that some cables have common points where they cross over and share fate, identified as shared risk link groups (SRLGs). Good engineering and design practice involves ingesting detailed fiber routes into GIS databases (KMZs in some cases) to identify these and build diverse paths. Every now and then people get surprised because what is documented isn't what's in the ground (this is a bigger problem in urban/metro locations) - for example, where fiber goes under the same sidewalk, transitions from aerial to underground, etc.
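A toy sketch of the diversity check (the route and risk-group names are invented): two paths are only diverse if their SRLG sets don't intersect.

```python
# Hypothetical SRLG memberships pulled from a GIS database.
srlg = {
    "path_A": {"conduit_17", "bridge_3", "landing_east"},
    "path_B": {"conduit_17", "duct_9", "landing_west"},  # same sidewalk!
}

shared = srlg["path_A"] & srlg["path_B"]
if shared:
    print("NOT diverse; shared risk link groups:", shared)
else:
    print("paths are SRLG-diverse")
```

The hard part isn't the set intersection; it's getting the SRLG data to match what's actually buried in the ground.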
Seems like SpaceX's Starlink will have no shortage of business.
It seems like a no-brainer to use space as a backbone haul for inter-continent traffic vs. physical cables.
It might be slightly faster, it'll probably be cheaper, and you can scale the bandwidth by deploying more (very cheap) satellites.
Most of all, the traffic cannot be cut or spied on, and you can spread access points across multiple locations (i.e., resiliency via distribution vs. a single point of failure).
I don't think that the inability to spy on traffic is a typical benefit of satellites over fiber optic cables. :-(
Spying on RF signals to and from satellites was a major Cold War activity. Governments spent a lot of money on it and got really good at it.
You might say that satellite data is still safer than terrestrial cables because a particular adversary might not be present in the footprint of a particular satellite, but (1) they might be, because the equipment needed to intercept the downlink may be easy enough to conceal, and (2) satellite signals get intercepted in both directions, uplink and downlink.
The one countersurveillance benefit that I can think of is in bypassing cable landing regulations that are used to require cable operators to give governments access to the data on their cables. While governments may also try to place licensing regulations on the use of satellite data services within their territory, it's a lot easier for individual end-users to evade or ignore those regulations than it would be for them to surreptitiously land a fiber-optic cable! However, governments will also have an easy time monitoring downlink signals in their territory so they may not lose much access compared to the cable landing license approach, depending on how the operators handle link encryption.
Starlink, Kuiper and Oneweb will be a significant improvement in $ per Mbps and throughput capacity over current options via C, Ku and Ka band geostationary VSAT terminals, for very remote and hard to reach locations.
And for small users much more likely to be affordable for a CPE terminal and monthly service than the smallest service options available via o3b.
It will not be a viable option to replace 80 x 100 Gbps DWDM wavelengths on submarine fiber cables.
There's no way that wireless links will ever be faster than backbone fiber - the whole notion makes very little sense. Starlink may be useful in other ways, of course.
The speed of light in glass fiber is about 1.5 times slower than the speed of light in a vacuum. Hence the extra distance for lower-orbit systems can be canceled out by the higher speed of the signals.
This is only talking about latency; bandwidth is a different game. Sure, any given fiber link will probably have better bandwidth than any satellite link, because the medium in between is much more consistent, leading to less signal degradation. However, fiber needs to be placed, connected, and maintained. Moreover, fiber needs physical space.
Meanwhile, wireless just needs access points at both ends, well-aimed antennas, and little interference in between. Notably, I believe Starlink will use laser communication for inter-satellite links, which essentially means perfect directional antennas.
This stuff might be able to scale faster. It can certainly scale further because you don't need the space for all those cables.
"Starlink has the potential to provide lower latency around the globe than terrestrial fiber because light travels faster in vacuum than in optical silica fiber.
"Delay is Not an Option: Low Latency Routing in Space"
...
"We conclude that a network built in this manner can provide lower latency communications than any possible terrestrial optical fiber network for communications over distances greater than about 3000 km.""
Traditional satellite internet providers used satellites in geosynchronous orbit (that whole “aiming your dish” thing). That’s a 270ms hop up and another back.
Starlink and the other "not here yet" services are going to use low Earth orbits for something like 25-35 ms latency to the satellite.
No, typical communication satellites operate in geostationary orbit, 36,000 km up. The signal must go up that far, then come down again. The Earth's circumference is 40,000 km, so you've already traveled nearly twice that (and yet are still in the same area, because I didn't count transmitting to another satellite - which AFAIK is never done: the satellite transmits back down to somewhere the data can be carried onward through fiber).
Assuming light travels at 2c/3 in fiber, it would still take less time for a signal to go around the equator back to the same spot than for it to arrive anywhere on earth if using a satellite link.
Spacex's satellites are a game-changer because they sit in low orbit.
Typical satellites are at geostationary orbit (35,786km).
Starlink satellites will be at 340km/550km/1150km. Lower orbits = less latency, but require a lot more satellites to cover the same area.
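A quick sanity check on these numbers: one-way propagation only, ignoring queuing, routing overhead, and the ground segment; the 10,000 km route is illustrative.

```python
C = 299_792.458      # km/s, light in vacuum
FIBER_V = C / 1.5    # ~2c/3 in silica fiber

def ms(km, v):
    return km / v * 1000

ground_km = 10_000  # long intercontinental route (illustrative)

print(f"fiber, {ground_km} km:  {ms(ground_km, FIBER_V):6.1f} ms")   # ~50 ms
print(f"GEO bent-pipe hop: {ms(2 * 35_786, C):6.1f} ms")             # ~239 ms
# LEO at 550 km: up, along inter-satellite laser links at c, then down.
print(f"LEO relay:         {ms(2 * 550 + ground_km, C):6.1f} ms")    # ~37 ms
```

This is consistent with the paper's ~3,000 km break-even claim: below a few thousand kilometers the up-and-down overhead dominates, and above it vacuum beats glass.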
Wireless links could absolutely be faster than fiber links in latency. Unlikely they'll be faster in bandwidth though that's also possible if they are able to convince regulatory agencies that their beam-forming is good enough to allow them to use large swaths of bandwidth.
The idea with Starlink is that it will be low enough in the atmosphere and the satellite density will be high enough that the satellites should be able to route site-to-site. For some applications (high frequency trading) you could get potentially lower latency than with fiber.
a) The up-and-coming dictator Erdogan flexes his muscles for no particular reason. He has now probably demonstrated that he can cut two fibers at the same time using his almighty Turkish subs.
b) The Fed botches a software upgrade in the US
And these happen to occur at approximately the same time.
>According to Beckert, cable cuts happen "on average once every three days." He further noted that there are 25 large ships that do nothing but fix cable cuts and bends, and that such cuts are usually the result of cables rubbing against rocks on the sea floor.
>According to Global Marine Systems, "Undersea cable damage is hardly rare—indeed, more than 50 repair operations were mounted in the Atlantic alone last year". While a cut in a cable crossing the Atlantic has "no significant effect" due to the many alternate cables, only a handful of Internet cables serve the Middle East. These disruptions are only noticeable because of the small number of cables
>An estimated 35 million to 45 million cubic meters (between 1.2 billion and 1.6 billion cubic feet) of water per second are continually moved from the ocean bottom to the surface.
That's over a huge area though and still no real idea of speed.
> The company told internet service providers to connect to its other servers to "route around the problem".
I believe this means they changed their DNS settings and waited for it to propagate. This implies that the internet in general was accessible, but some larger companies that bought part of a fiber cable were inaccessible over that line.
Back in 2008 when some submarine cables were cut[1], it had us frantic at work as origin stations had no way to transmit documents to us in the affected countries so we couldn't do our jobs before the freight landed. Until it was rectified they were having to take the physical documents and fly them to a country that did still have a reliable connection and scan them there to transmit to us.
Then in 2010 [2] when the volcano in Iceland grounded flights throughout parts of Europe we were similarly like "argh, sorry customers!"
Cut a few cables and knock out the 31 GPS satellites and the world would grind to a halt. It's terrifying how dependent we are on technology for virtually every facet of our lives. We also rely so much on air travel for delivering people and freight; modern society is incredibly fragile.
"Large power transformers are essential critical infrastructure to the electric grid, and are huge, weighing up to 820,000 pounds. If large power transformers are destroyed by a geomagnetic disturbance (GMD) electromagnetic pulse (EMP), cyber-attack, sabotage, severe weather, floods, or simply old age, parts or all of the electric grid could be down in a region for 6 months to 2 years. "
Have you heard the old gag about the most important piece of camping safety gear?
You need to have a meter of optical fibre with you. Then, if you get lost, you just bury it and then ask the backhoe driver who digs it up for a ride home.
"This is a picture of the Mackenzie Valley fibre optic cable. Since it was laid, it's been eaten at by wild animals, struck by lightning, and run over by construction workers."
I used to work at a military base in San Diego that lost all internet connectivity after construction workers replacing a sewer line accidentally cut the OC-48 line.
Yesterday I could not find a Pirate Bay mirror site that works. Now I assume that is connected to this issue, but the first RARBG mirror I tried worked, so I wonder if this is just an anecdote or if it gives some insight into the demographics of these two sites.
https://saigoneer.com/saigon-technology/11885-sharks,-anchor...
https://tuoitrenews.vn/news/society/20190528/vietnams-intern...
https://tuoitrenews.vn/news/society/20171016/vietnam-grapple...
https://tuoitrenews.vn/news/city-diary/20170125/save-vietnam...
https://www.independent.co.uk/life-style/gadgets-and-tech/ne...