AMA: just got symmetric Gigabit cable internet (high split DOCSIS 3.1)

w00key

Ars Praefectus
5,907
Subscriptor
the original CO was full, and since it was just copper for phone lines, they connected it to the further CO because it would work just fine for analog services
Nah it was some point to point trunk to a tiny street cabinet, and only pairs in use get patched through to the CO several km away. Those showed up as far far away on the TDR. There's also ADSL2 gear there but not VDSL2.

Then they rolled out FTTCabinet but unless you complain, they don't randomly unplug your pair and stick it in the local DSLAM. You also need a new modem.

But all in all we're pretty lucky that there is hefty competition and basically every address has Gb+ over coax available, fiber rolling out quickly and even VDSL2 isn't too bad with 200/60 Mbps over a bonded pair.
 

ikjadoon

Ars Scholae Palatinae
1,371
Apologies for the updates: filling in the answers here. Unfortunately, my USB-C → Ethernet adapter is not being cooperative, but I've ordered another, so I'll fill in the responsiveness test soon as well.
Those are good latency numbers for coax. You'll always have about 10-15ms extra over a pure Ethernet solution like FIOS or true Ethernet, but those increases are pretty much negligible. And yes, looks like no bufferbloat at all.
You can test the Google Drive upload with and without CAKE, btw, and see if that slow ramp-up was due to the codel algorithm or something inherent to Google's or Spectrum's networks. I'm not familiar with CAKE, but I remember that fq_codel could be tweaked to deal better with fast connections. The defaults that I've seen were better suited to 100-200Mb speeds.

Edit: just saw the latest speed updates you posted. Thank you. Those still look pretty good, considering that this is residential cable we're talking about. Even 443Mbps is still game-changing compared to the standard 35Mbps most cable services provide.

Edit2: and the latencies look fantastic for all of those tests, even the ones with the slowest uploads (congestion?). Spectrum could be up against limits at their edge, with their peering connections.

Ah, great point. CAKE does have some parameters, but I haven't had a chance to look again. Here's what I have (the Evenroute defaults from way back when) in OpenWrt at the moment:

qdisc (ingress):
Code:
nat dual-dsthost ingress docsis mpu 64

qdisc (egress):
Code:
nat ack-filter-aggressive dual-srchost mpu 64
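For anyone wanting to reproduce this: the strings above are argument lists to CAKE's tc qdisc, which OpenWrt's SQM scripts normally assemble for you. A minimal manual sketch, assuming a WAN interface of eth1 and shaped bandwidths of 950/900 Mbps (the interface name and rates are placeholders, not from the post):

```shell
# Sketch only -- luci-app-sqm generates equivalent commands on OpenWrt;
# eth1 and the bandwidth figures are assumptions, not from the post.

# Egress: shape uploads just below the DOCSIS rate so CAKE owns the queue
tc qdisc replace dev eth1 root cake bandwidth 950mbit \
    nat ack-filter-aggressive dual-srchost mpu 64

# Ingress: redirect inbound traffic through an IFB device and shape it there
ip link add ifb0 type ifb 2>/dev/null
ip link set ifb0 up
tc qdisc replace dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: matchall action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 900mbit \
    nat dual-dsthost ingress docsis mpu 64
```

The `docsis` keyword is CAKE's built-in overhead profile for cable, which is why it only appears on the shaper handling the cable-modem direction.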

I re-ran the file upload test with CAKE enabled and disabled (the whole package was disabled when disabled). This is with a 4.19GB video file (size reported by Google Drive after upload), just manually timed with a stopwatch. There is maybe a few seconds added per file while Google shows a "finalizing upload" message, but it was less than 10s each run. Just one run each, which is not very scientific, however.

4.19GB video file      Time to Complete        Estimated Upload Capacity
CAKE fully enabled     2 minutes, 18 seconds   243 Mbps
CAKE fully disabled    2 minutes, 59 seconds   187 Mbps
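As a rough sanity check on those capacity estimates (my arithmetic, not from the post): throughput is just file size over wall-clock time, treating Google Drive's GB as decimal gigabytes:

```shell
# Estimated upload capacity = size / time.
# 4.19 GB decimal = 4.19e9 bytes = 4.19 * 8e3 megabits.
# Stopwatch times from the runs above: 138 s (CAKE on), 179 s (CAKE off).
for secs in 138 179; do
    awk -v gb=4.19 -v s="$secs" \
        'BEGIN { printf "%d s -> %.0f Mbps\n", s, gb * 8e3 / s }'
done
```

Both work out to the 243 and 187 Mbps figures in the table, so the stopwatch numbers are at least self-consistent.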

There is a fair amount of ratcheting / sawtooth throughout the upload, CAKE on or off, but this is also on a "live" network. Nothing else showed up in Task Manager consuming more than 3 Mbps, however.

CAKE fully disabled:
2024-04-28_18-31-51.png

CAKE fully enabled:
2024-04-28_18-28-12.png


Installed the Speedtest CLI, so now we can see bufferbloat and the total upload capacity at once. Upload latency has increased, of note, but not egregiously (sans the 305ms max).


Code:
   Speedtest by Ookla

      Server: Spectrum - Columbus, OH (id: 63372)
         ISP: Spectrum
Idle Latency:    15.30 ms   (jitter: 0.68ms, low: 14.42ms, high: 15.92ms)
    Download:   918.87 Mbps (data used: 982.5 MB)
                 20.29 ms   (jitter: 3.07ms, low: 13.68ms, high: 39.60ms)
      Upload:   935.25 Mbps (data used: 1.1 GB)
                 42.10 ms   (jitter: 6.60ms, low: 13.61ms, high: 305.36ms)
 Packet Loss:     0.0%
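For anyone reproducing this: output in that shape comes from Ookla's official Speedtest CLI. A likely invocation (pinning the server is an assumption on my part; plain `speedtest` auto-selects the nearest one):

```shell
# Ookla Speedtest CLI; -s / --server-id pins the test to one server.
# 63372 is the Columbus, OH Spectrum server id shown in the output above.
speedtest -s 63372
```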

One dumb test I would want to see: what happens if you load like 50 torrent files with 1,000 connections? How does the router/modem they provide deal with it? Like, does the speed test's ping/jitter go downhill fast?

Ah, that might be interesting, too. I've not used torrents in many years now: 50 torrents seems doable, but 1000 connections is where I'm lost. Is there an easy way to verify how many connections get made and / or ensure I get ~1000?
 

Andrewcw

Ars Legatus Legionis
18,129
Subscriptor
Ah, that might be interesting, too. I've not used torrents in many years now: 50 torrents seems doable, but 1000 connections is where I'm lost. Is there an easy way to verify how many connections get made and / or ensure I get ~1000?
So by connection I mean what they call a "peer". You might have more connection attempts, but in a modern client you can limit how many you think your system can handle. It also might be really hard to get that many now: connections are so fast these days that the seeding pool doesn't need to be as large, so there's oversupply in most cases. And I also remember they might cripple torrent connections, so the upload requests could be blocked from their side.

Anyways, at the subscriber level they're at now, and with normal usage, I guess the jitter is fine. It was more of a curiosity whether pushing it really hard affected it.
 

malor

Ars Legatus Legionis
16,093
Sadly I am stuck with municipal internet, 1200/600, costing 1.84 hours of work at minimum wage.
After income tax it would be less than 3 hours.

It delivers around 900/600; I'm not at home, so I cannot test right now.
A 1200Mbps signal? That's pretty weird. If it's being delivered via gigabit, that'd be why you see the 900ish limit.
 

w00key

Ars Praefectus
5,907
Subscriptor

malor

Ars Legatus Legionis
16,093
For the last few years, in most markets, Comcast/Xfinity "gigabit" plans are now 1200Mbps down. Many modems have at least one 2.5G port.
Aha, thank you.
It's just a number in the settings file.
Given that the modems are coming with 2.5Gbps ports now, I guess it doesn't matter. It's just that most home networks are still gigabit, and upgrading to use that last 200Mbps could be pretty expensive.

I realize that the cable signal is not constrained by Ethernet limits, and that they could literally pick 1342Mbps if they wanted, it just seems weird to go to 1200 when it's a poor match for so many customers. But I guess selling something that most people can't easily use, but still have to pay for, is standard behavior for cable ISPs. Selling something you don't actually have to deliver is great for profit margins.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
I realize that the cable signal is not constrained by Ethernet limits, and that they could literally pick 1342Mbps if they wanted, it just seems weird to go to 1200 when it's a poor match for so many customers. But I guess selling something that most people can't easily use, but still have to pay for, is standard behavior for cable ISPs. Selling something you don't actually have to deliver is great for profit margins.

Because you're getting mega-fast AC1800 wireless so you absolutely need to have 1200 to fully take advantage of it or else you're going to get fragged in Candy Crush! (Do the kids still say fragged?)

But realistically, yes, a combination of all your wireless devices AND multiple gigabit LAN ports could swamp a 1200Mb uplink, even if they probably sell those speeds to a lot of people that don't actually make use of it. It's no different from a large office building with a core that has 10Gb links going to access switches that may only have gigabit links to devices, or an ISP with a 100Gb backbone serving hundreds of users at a range of speeds from 25Mb to 200Mb, or a cable network node serving 50 homes at 1Gb and having a 20Gb uplink. Oversubscribing is a core "technology" for ISPs as well as internal IT departments.
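The cable-node figures in that last example work out to a fairly modest ratio (my arithmetic; the 50-homes and 20Gb numbers are from the post above):

```shell
# Oversubscription ratio = total subscriber capacity / uplink capacity:
# 50 homes at 1 Gbps each, sharing a 20 Gbps node uplink
awk 'BEGIN { printf "%.1f:1\n", (50 * 1) / 20 }'
```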
 

BigLan

Ars Tribunus Angusticlavius
6,907
Yeah, technically multiple devices could swamp that 1,200, but most users will have a router/AP with a gigabit port connected to the cable modem, which would never see the max. Someone else said that routers with 2.5Gb ports carry a pretty steep premium over gigabit gear, which is harder to justify for a 20% faster internet link (1→1.2Gbit) than for a 2.5× faster one (1→2.5Gb).

But most users probably don't come close to maxing out 1Gbit, so for those who want to go faster, I guess they know they need to spend on the network equipment to do so.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
most users will have a router/AP with a gigabit port connected to the cable modem
The majority of users probably just use the gateway provided by the ISP, especially at the high end and with anything other than cable, and that gateway is very likely to have 4 LAN ports (most of the ones listed by Xfinity have 4), which is more than the majority of customers even need; others have at least two. These days wireless is what everyone expects to use, and most homes will have several wireless devices that individually can't come close to using the total available speed but combined might, and adding just one wired PC on top of that could produce periods where they exceed the available WAN service even at 1.2Gbps. It's enough that having 1.2Gbps can't automatically be assumed to be a waste, though in many cases it certainly is, just because that's what salespeople do.
 

KD5MDK

Ars Legatus Legionis
22,652
Subscriptor++
I realize that the cable signal is not constrained by Ethernet limits, and that they could literally pick 1342Mbps if they wanted, it just seems weird to go to 1200 when it's a poor match for so many customers.
If the competition is AT&T or FIOS advertising "1Gb! (940Mb actual)", being able to advertise "1200Mb" is a distinction they can't easily counter with numbers.