I don't understand the LGA2011 socket

No CPU socket has given me as much of a headache as the LGA2011 socket. Apparently there are three versions of this socket. The issue is discussed in articles such as these:


Moreover, the information out there is sometimes misleading. For example, the Xeon E7 CPUs are apparently incompatible with the LGA2011 socket, and yet when you look one up on Intel ARK, it does say that the socket is LGA2011:


It says that it is FCLGA2011, but it is actually LGA2011-1, which is a different socket.

I would assume that the "2011" in LGA2011 refers to the number of pins, or "lands", used by the socket. However, if you count the pins, you will find that 2011, 2011-1 and 2011-3 actually have different pin counts.

As far as I understand, all of these sockets physically fit the CPUs, i.e. you can put a 2011-1 or 2011-3 CPU into an "ordinary" LGA2011 socket, but it will not work because they are not electrically compatible.

The only circumstance in which it would make sense to keep the same socket name even though the pin counts differ is to emphasize compatibility. If you put an E7 into a single-socket motherboard, you may not care about all of its QPI links, and some motherboards do not route all memory channels to memory slots. In those cases the motherboard could simply "ignore" such pins and still benefit from a "high-end" CPU intended for multi-socket environments with higher memory specs. But this is not the case here.

So my question is: what were the engineers at Intel thinking when they designed this socket? It doesn't make any sense.
 

continuum

Ars Legatus Legionis
94,897
Moderator

Nevarre

Ars Legatus Legionis
24,110
It sounds like Intel's strategy is to bring Sapphire Rapids Xeons back into the workstation space. For the longest time, workstations were basically just 1-2 socket Xeons anyway. The "prosumer" HEDT parts were getting squeezed from above and below. AMD's competitive parts -- the 16-core Zen 2/3 chips and the lower-end Threadrippers -- made Intel's chips a hard sell. Intel never came out with any modern "Lake"-based socket-2011 chips. HEDT always lagged the consumer line, but the writing was definitely on the wall when it fell so far behind.

These last few moves from Intel might point to some hope for their core businesses: divesting themselves of whitebox servers, HEDT and now the NUC series (RIP). Those are all markets in which others are highly competitive and can move quickly to absorb Intel's exit. If it gives them enough room to really improve their fab process nodes quickly and keeps Arc alive, then I have to take that tradeoff.
 

mpat

Ars Praefectus
5,951
Subscriptor
Saving money on parts design, testing, and manufacturing of different things that would all have to be redone if you changed the physical package. They don't care if you have to buy 3 different CPUs to get one that works. They count it as a sale even if you return it to the retailer. ;)
Even if that is true, they could change the name. Call it Socket 2012 and 2013 if they want to. It's not as if we are going to complain that the number of contact pads is a little bit off. Note that Intel made Sockets 1156, 1155, 1150 and 1151 for this exact reason.
 

teubbist

Ars Scholae Palatinae
823
I would assume that the "2011" in LGA2011 refers to the number of pins, or "lands", used by the socket. However, if you count the pins, you will find that 2011, 2011-1 and 2011-3 actually have different pin counts.
Intel's design documents say it's 2011 because there are that many solder balls on the underside of the socket, rather than it being the used pin or land count. The land arrangement and spacing are identical between all three, so my guess is that either OEM pressure or cost savings resulted in the socket being kept the same.

That, and maybe the fact that none of them is a consumer platform.

As far as I understand, all these sockets fit physically with the CPUs, i.e. you can put a 2011-1 or 2011-3 CPU on an "ordinary" LGA2011 socket but it will not work as they are not electrically compatible.

They are keyed differently. LGA2011-0's keying is 12+15mm from the center point, 2011-1's (E7) is 15+14mm, and 2011-3's is 13+14mm. There are also some minor ILM differences, I believe, so even if you crunched a CPU in, the ILM may not close properly.
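Purely as an illustration of why the packages won't interchange, here is a minimal sketch. The offsets are the figures quoted above; representing each keying as a pair and requiring an exact match is my simplification, not anything from Intel's docs:

```python
# Socket keying as quoted above: two key offsets (mm) from the package center point.
# Treating "fits" as "both offsets match exactly" is a simplification for illustration.
KEYING = {
    "LGA2011-0": (12, 15),
    "LGA2011-1": (15, 14),  # the E7 variant
    "LGA2011-3": (13, 14),
}

def fits(cpu: str, socket: str) -> bool:
    """A package only drops in when its keying matches the socket's key posts."""
    return KEYING[cpu] == KEYING[socket]

for cpu in KEYING:
    for socket in KEYING:
        if cpu != socket and fits(cpu, socket):
            print(f"{cpu} package would seat in a {socket} socket")
# Prints nothing: every variant differs from the others in at least one offset.
```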

Intel design docs if you're curious:

2011-0
2011-1
2011-3: Intel's version is just a placeholder document. If you Google "i7 Processor Family for the LGA2011-3 Socket" "tmsdg", there are some third-party sites that carry it, if you're willing to do a risky click.
 
I managed to find pinouts for all sockets

Pinout for the LGA2011(-0) socket:

https://docs.google.com/spreadsheets/d/1IG9V5ZH7Mn_lBr395P7u6otuNUul27dUIXkX5WnYNA0/edit#gid=0


Pinout for LGA2011-1:

Pinout and thermal guide for LGA2011-3:

For the LGA2011 and LGA2011-1 sockets I counted 2011 pins each. I didn't count the LGA2011-3 because the table cells in that doc don't have a consistent height. If I had the time and energy, I would go through the pins of the different sockets and see what the actual differences between them are; something like the sketch below could do it.
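A minimal sketch of how such a comparison could work, assuming the pinouts above were exported as two-column CSV files (land coordinate, signal name). The filenames and the column layout are my assumptions, not anything from the docs:

```python
import csv

def load_pinout(path: str) -> dict[str, str]:
    """Map land coordinate (e.g. 'AA34') -> signal name from a two-column CSV."""
    with open(path, newline="") as f:
        return {land: signal for land, signal in csv.reader(f)}

# Hypothetical exports of the spreadsheets linked above.
a = load_pinout("lga2011-0.csv")
b = load_pinout("lga2011-1.csv")

print(f"lands: {len(a)} vs {len(b)}")
print("only in -0:", sorted(a.keys() - b.keys()))
print("only in -1:", sorted(b.keys() - a.keys()))
changed = {k for k in a.keys() & b.keys() if a[k] != b[k]}
print(f"{len(changed)} lands carry different signals")
```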

If Intel had consistently and explicitly specified LGA2011-1 in their ARK for all of the Xeon E7s using that socket, I wouldn't complain as much. A clear differentiation between these sockets in the official specs is not too much to ask.

Still, it is not as bad as AMD's vendor locking of their EPYC CPUs via AMD PSB. AMD says it is not their fault and that they deliver what their customers ask for, but they could just as well declare such features against their policy and refuse. I'm currently urging everyone I know who is responsible for purchasing HPC servers to boycott such hardware.
 
Cost savings; the socket is one of the most expensive components on the board.

(For a similar reason, AMD's Slot A was physically identical to Intel's Slot 1, just keyed differently so that you couldn't insert one into the other.)
It's hard to imagine that it is more expensive than the CPU itself. And yet it is only designed to handle a maximum of 30 insertions. That plastic cap that protects it is only designed for 15 insertions. That's what I call quality! Imagine a pair of shoes that could only handle that many insertions! I suspect that the same applies to the USB-C ports. I wonder what will happen to that plastic cap after the 16th insertion: will it turn to dust? Will I see magic smoke?
 

continuum

Ars Legatus Legionis
94,897
Moderator
And yet it is only designed to handle a maximum of 30 insertions. That plastic cap that protects it is only designed for 15 insertions. That's what I call quality!
Testing and certification still cost money. If you can reduce risk, and the existing socket with only the keying changed still does the job, I can see why they kept it. See Intel's LGA1700 bending issues for an example of what can go wrong with a new design!

If that bothers you, consider that the original SATA connectors were rated for only 50 cycles or so (modern ones are rated for up to 500 cycles).

Given that most people insert a CPU into a motherboard only once, a design life of only 15 or 30 cycles, as crazy as it may seem, is actually okay. Parts will often survive many, many more cycles than their design life.

I suspect that the same applies to the USB-C ports.
Trying not to go off-topic but since you are asking:


Standard USB has a minimum rated lifetime of 1,500 cycles of insertion and removal,[4] the mini-USB receptacle increases this to 5,000 cycles,[4] and the newer Micro-USB[4] and USB-C receptacles are both designed for a minimum rated lifetime of 10,000 cycles of insertion and removal.
 
Trying not to go off-topic but since you are asking:

That's shocking to me. I've had a lot of Micro-B cables go bad on me, but never really had an issue with full-sized A or B cables or sockets.

But then, I suppose mini/micro/C connectors do see many more insertions and removals (since they're used for charging and all).
 

teubbist

Ars Scholae Palatinae
823
If Intel had consistently and explicitly specified LGA2011-1 in their ARK for all of the Xeon E7s using that socket, I wouldn't complain as much. A clear differentiation between these sockets in the official specs is not too much to ask.
Eh, I think the thing to recognise is that Intel's ARK isn't meant to act as a motherboard compatibility guide; that's explicitly down to the motherboard manufacturer. They all provide CPU support lists, along with the relevant BIOS revisions. The same issue you have with the E7s also exists for some socketed laptop CPUs (e.g. rPGA988A/B), but even more so there, due to vendor BIOS stupidity like whitelisting adding additional constraints.

The mildly harsh reality is that inconvenience to the secondary market, 5+ years after relevance, for CPUs that were never intended to be DIY user-upgradable, isn't going to be a concern for Intel. These were all OEM/tray CPUs.
 
...
The mildly harsh reality is that inconvenience to the secondary market, 5+ years after relevance, for CPUs that were never intended to be DIY user-upgradable, isn't going to be a concern for Intel. These were all OEM/tray CPUs.
It will be a concern for them if they have an angry mob of neckbeards with pitchforks standing outside.

Another reality is sustainability goals. Part of that is that companies are required to do product lifecycle analysis and management. Maintaining reusability and recyclability is one such sustainability policy that I think will be reinforced. Right to repair is another such thing. I love Louis Rossmann's rants about different companies' anti-right-to-repair strategies.
 

Nevarre

Ars Legatus Legionis
24,110
It will be a concern for them if they have an angry mob of neckbeards with pitchforks standing outside.


At the dawn of the Core i (Nehalem) era, you had a pretty strong bifurcation between the i7 Nehalem/Bloomfield CPUs on S1366 and 'everything else'. Prosumer/enthusiast users wanted an i7-9xx if they could at all afford it. The Lynnfield i7s didn't appear until almost a year later and clearly weren't sporting the same feature set. That was really the last time the HEDT part was fully mainstream, and that "1st gen" was a mess, with lots of different generations of CPU being conflated under the same "product". As soon as Sandy Bridge rolled around, the lineup got much simpler, and the Sandy Bridge HEDT parts were no longer priced (or necessary) in a way that made them mainstream, not when the i7-2700(K) was good enough. Ever since then, the percentage of users who really needed the HEDT part has kept shrinking. I think enough time has passed between 2008 and now that the early 1366 parts have faded into obscurity, as have most of the early Core i generations. (Although I'd point out that PCIe 3.0 in Ivy Bridge and later did/does extend their lifespan, while the first two generations, with PCIe 2.0 only, are pretty rough to use in 2023.)

Meanwhile, the neckbeards can buy the lowest-end i3 today and it will outperform an old Nehalem/Sandy/Ivy HEDT system by a very wide margin.

The neckbeards who truly relied on 10th-gen HEDT parts are pretty small in number. They can probably go up or down the stack within Intel, or just go with a Threadripper system if they're in that Goldilocks zone where a desktop-class CPU isn't fast enough or doesn't have enough I/O, but they don't quite need the parallelization or advanced features of a Xeon/Epyc.
Another reality is sustainability goals. Part of that is that companies are required to do product lifecycle analysis and management. Maintaining reusability and recyclability is one such sustainability policy that I think will be reinforced. Right to repair is another such thing. I love Louis Rossmann's rants about different companies' anti-right-to-repair strategies.

I mean, it's dead. There's no point in encouraging them to do better next time; there is no next time. The whole S2011/HEDT platform is gone now, shuffled off this mortal coil. The first few iterations of used S2011 CPUs have aged out, or are aging out, of their support lifecycles. Intel is clear that at some point older CPUs stop getting errata updates and stop being regression-tested against.

Even though it's really helpful to have a stable platform and a stable socket (and to Intel's credit, they're about to hit three "generations" on S1700), new technologies do mean that you have to change platforms eventually. There's a used market; people who really care about keeping older systems can buy and sell them there.

Intel isn't perfect, but they're good at providing documentation and hosting docs/drivers for a very long time, giving you a reasonable chance to repair and reuse something that has aged out of official support. They're not restricting your right to repair and reuse anything, and if you want to point fingers, point them at the OSes that refuse to support older hardware. Windows 11 and Apple's move away from x86 drive e-waste at a scale that dwarfs the HEDT market.

If you want to be green and run HEDT, then performance per watt is not something to ignore. A brand-new system might pull just as much from the wall as a 10-year-old system, and replacing a 10-year-old system means e-waste and associated costs... but the new system can also do the same computation in a fraction of the time and then go back to idling. A rough energy comparison is sketched below.
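A back-of-the-envelope sketch of that argument. All the wattages, the 5x speedup, and the job length are made-up illustrative numbers, not measurements of any real system:

```python
# Energy for one batch job per day on each system: run at load, then idle the rest of the day.
def daily_energy_kwh(load_w: float, idle_w: float, job_hours: float, day_hours: float = 24) -> float:
    return (load_w * job_hours + idle_w * (day_hours - job_hours)) / 1000

old = daily_energy_kwh(load_w=900, idle_w=250, job_hours=10)      # 10-year-old HEDT box
new = daily_energy_kwh(load_w=900, idle_w=50, job_hours=10 / 5)   # same wall draw, ~5x faster

print(f"old: {old:.1f} kWh/day, new: {new:.1f} kWh/day")
# old: 12.5 kWh/day, new: 2.9 kWh/day -- same peak draw, far less energy per job.
```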
 
The LGA2011 CPUs may be a bit obsolete, but they are not entirely useless. Sure, a single one may struggle to compete with a high-end i3, but an 8-socket system of Xeon E7-8890 CPUs offers a level of performance that is not sexy for a gamer, yet useful for a hobbyist who wants to do some 3D rendering or other computational work. The downside is that it requires a special cabinet, is noisy, weighs about 450 pounds and may pull 2 kW in operation instead of 300 W. A new Ryzen 9 X3D may not be far off from such a system in terms of performance, but it doesn't come with much I/O. A good Threadripper will for sure beat the crap out of it, but those systems don't come cheap.

If you look at I/O alone, it is not cheap these days, and I don't know why that is. I remember paying $250 in 2013 for a high-end Intel motherboard with 7 full-length PCIe 3.0 x16 slots, and $150 for an AMD motherboard with the same number of PCIe 3.0 x16 slots. Now you have to pay at least $800-1000 for such a motherboard! What's funny is that those old motherboards sell on eBay/AliExpress for $400-500!

Say that you have a system with an LGA2011 socket and wish to upgrade: you could buy, for $50, a second-hand high-end CPU that cost $7000 new, and have OK performance 10 years later. That is a lot cheaper than replacing all of the hardware. I wrote "could", as in: if Intel had allowed LGA2011-1 chips on HEDT desktops.

Another reason for using Xeons is to be able to use ECC RAM, something that is standard on AMD platforms but on Intel's side somehow only came with Xeons. I would also say it is quite impressive that AMD managed to keep the AM3 socket alive for as long as seven years.

I can't say that I'm a big fan of LGA now that you have to use a torque wrench to install the CPU on those 4000+ land sockets. Maybe it would be better to go back to pins and use multiple levers to secure the CPU in the socket, but then you may not get the same pad density, I guess.

The EPYCs have now also integrated the southbridge chipset into the CPU, so you get SATA and USB directly on the pins of the CPU. I'm not sure I'm a big fan of this, but it will likely make it possible to produce motherboards that don't become obsolete as quickly, and it opens the door to high-performance I/O that can take advantage of the CPU cooler. I've also read about people who managed to get Threadrippers running in EPYC sockets, even though they are defined as different sockets...
 

teubbist

Ars Scholae Palatinae
823
2011-0 and 2011-3 E5 Xeons work fine in the majority of X79 and X99 motherboards, respectively. I currently have an E5-2667 v2 in a Rampage IV Gene and an E5-2697 v4 in an X99E-ITX/ac.

If you want lots of cores, I/O and ECC for cheap in a DIY build, pick up one of the Chinese X79 or X99 boards (depending on which 2011 generation you want to use) from AliExpress, or a refurb Dell/HP 2U on eBay if you're OK with rackmounts. eBay is flooded with cheap E5 Xeons in all manner of core-count/clock-speed arrangements to suit your tastes.

But buying an E7-based platform for anything beyond curiosity is stupid. They're a niche segment of the market, parts are more expensive, sparing is harder, and the machines are excessively large and chew up a stupid amount of power even at idle. I've looked after a number of 4S and 8S systems over the years, and even in a commercial setting they only made sense when you absolutely needed a single system image for a particular piece of software. And they were always massively outnumbered by the 2S systems.
 

bkaral

Ars Tribunus Militum
2,646
It's hard to imagine that it is more expensive than the CPU itself. And yet it is only designed to handle a maximum of 30 insertions. That plastic cap that protects it is only designed for 15 insertions. That's what I call quality! Imagine a pair of shoes that could only handle that many insertions! I suspect that the same applies to the USB-C ports. I wonder what will happen to that plastic cap after the 16th insertion: will it turn to dust? Will I see magic smoke?

Do any of the manufacturers make stuff that exceeds the design spec for insertions? (Dang, I feel like I'm talking about a porn video or sex toy.)
 

Doomlord_uk

Ars Legatus Legionis
24,892
Subscriptor++
It's not that old.

It's been an interesting thread for me to read, as I have a socket-2011-3 mobo sitting in the bottom of my wardrobe*, along with a 6950X CPU, waiting to be built. It feels kind of odd to think that this once top-of-the-line setup would be beaten by even an i3 today... Actually, I'm curious how it would fare against the 4790K it would replace.

*An Asus Rampage V Edition 10
 