
Sockets, CPUs, Cores and Threads: The History of x86 Processor Terms

By Carl Olafson, Staff Technical Account Manager (TAM)

Without diving too deeply into the details, the x86 architecture debuted commercially in the late 1970s with the 16-bit 8086. Back then, the relationship of Socket to CPU to Core to Thread was 1:1. Over time, the definitions have been blurred by advancements in chip technology. With real estate, it is Location, Location, Location. With transistors, it is Density, Density and more Density! Everyone in the industry should be aware of Moore’s Law, which observes that the number of transistors on a microchip doubles about every two years, with the cost per transistor falling accordingly. I’m happy to debate when we will reach the limit of Moore’s Law, but it will forever remain a profound observation about technology.

 

I am old enough to have owned an 8086 computer and used the CP/M operating system with 8” floppies! Over the years I’ve watched the terms get misused as the 1:1 relationship moved to a 1:many relationship. Most of the confusion centers on the term “CPU,” or Central Processing Unit. The goal of this article is to cover all the terms and hopefully clear up the confusion.

 

Processor Terms

 

Socket – At the most basic level, there is a motherboard, which can do nothing until a CPU chip is inserted into its socket. The more correct term is “CPU Socket”. Most common blades run a 2-socket motherboard, but I was in awe when testing an HP DL980 with eight sockets! It is beyond the scope of this article, but it is important to know that a NUMA node is not the CPU itself; it is the pairing of a CPU socket with its closest memory bank(s).
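
If you want to see these relationships on a running system, the topology is easy to inspect. Here is a minimal sketch in Python (Linux-specific, and it assumes the kernel exposes topology through sysfs; on an ESXi host you would look at the hardware details through vCenter instead) that counts the sockets and maps each NUMA node to the logical CPUs sitting closest to its memory:

# A minimal sketch (Linux-specific; relies on sysfs being mounted) that counts
# CPU sockets and maps each NUMA node to the logical CPUs nearest its memory.
from pathlib import Path

def socket_ids():
    """Collect the physical package (socket) id reported for every logical CPU."""
    ids = set()
    for pkg in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/topology/physical_package_id"):
        ids.add(int(pkg.read_text()))
    return ids

def numa_nodes():
    """Map each NUMA node to the logical CPUs closest to its memory bank(s)."""
    nodes = {}
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        nodes[node.name] = (node / "cpulist").read_text().strip()
    return nodes

if __name__ == "__main__":
    print("Sockets:", sorted(socket_ids()))
    for name, cpus in numa_nodes().items():
        print(f"  {name}: logical CPUs {cpus}")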

 

CPU – The Central Processing Unit is the most misused term in the industry; I have heard it used to mean Sockets, Cores and even Threads. Ultimately, the relationship between Socket and CPU remains 1:1 regardless of how the term is actually used. In some ways, I would like the term to be retired, because the same thing can be expressed as a “Socket”. At the end of the day, the most important thing to remember is that the CPU is a single piece of silicon and has a 1:1 relationship with the CPU socket on the motherboard.

 

Cores – It wasn’t until 2005/2006 that Intel and AMD began releasing CPUs with multiple processing units (processors). I have intentionally not used the word “Processor” until now because it represents the unit that executes a single instruction from the Operating System. If there were a second most misused IT word, it would be “processor”! Going back to the 8086 CPU, it handled 16 bits (two bytes) of data at a time; word size has since grown to 64 bits (64/8 = 8 bytes). With the advent of “Cores”, a single CPU with 10 cores can execute 10 simultaneous instructions. If we ignore performance, we could use CPU, Core and Processor interchangeably. However, CPUs have associated cores and memory, so “locality” does come into play. A set of instructions from the Operating System that is placed on different CPUs will not share the same CPU cache and will run into a host of other locality-specific performance issues.
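
To make the “simultaneous instructions” point concrete, here is a toy Python sketch (purely illustrative, not a benchmark; the loop size is an arbitrary number I picked): the same CPU-bound tasks finish noticeably faster when spread across cores with a process pool than when run one at a time on a single core.

# A toy sketch (not a benchmark) of the point above: with multiple cores,
# independent CPU-bound tasks really can run at the same time.
import multiprocessing as mp
import os
import time

def burn(n: int) -> int:
    """A CPU-bound task: sum the first n integers in a plain Python loop."""
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    tasks = [5_000_000] * (os.cpu_count() or 4)   # one task per logical CPU

    start = time.perf_counter()
    serial = [burn(n) for n in tasks]             # one core, one task at a time
    serial_time = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:                       # one worker process per logical CPU
        parallel = pool.map(burn, tasks)
    parallel_time = time.perf_counter() - start

    print(f"serial:   {serial_time:.2f}s")
    print(f"parallel: {parallel_time:.2f}s")      # roughly serial_time / cores, minus overhead

On a 10-core host you would expect the parallel run to land somewhere near one-tenth of the serial time, minus scheduling overhead.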

 

Thread – A thread is simply a queue for an Operating System instruction. The confusion comes with the term “Hyperthreading”. The underlying idea of hardware multithreading has been around since the 1980s, but Intel did not release a Hyper-Threading (HT) chip until the Pentium 4 era (circa 2002). Hyperthreading has been the bane of my existence due to a huge misunderstanding of the term, and vendors have made this worse with OS-level definitions that lean on words like CPU/Core/Processor. Simply stated, enabling HT allows two threads (think queues) per core. If I have HT enabled on a 10-core system, I have 10 cores and 20 threads. I can still only execute 10 OS instructions per cycle, but I can queue up 20 OS instructions. This provides a level of efficiency by allowing multiple instruction queue/dequeue events to occur. HT can provide upwards of a 30% performance improvement by hiding latency (improving efficiency). However, it is not performing any magic; it is just getting you closer to the theoretical maximum number of instructions that can be executed in a given timeframe on a CPU/Core.
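
You can check the core/thread split for yourself. The sketch below (again Linux-specific, and it assumes /proc/cpuinfo uses the usual field names) counts unique socket/core pairs as physical cores and every “processor” entry as a logical thread; on the 10-core example above it should report 10 cores and 20 threads when HT is enabled.

# A minimal sketch (Linux-specific) of the core-vs-thread distinction:
# count unique (socket, core) pairs as physical cores and every "processor"
# entry in /proc/cpuinfo as a logical thread.
def core_and_thread_counts(path="/proc/cpuinfo"):
    cores = set()        # unique (physical id, core id) pairs = physical cores
    threads = 0          # each "processor" entry = one logical thread
    physical_id = None
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                threads += 1
            elif key == "physical id":
                physical_id = value
            elif key == "core id":
                cores.add((physical_id, value))
    return len(cores), threads

if __name__ == "__main__":
    cores, threads = core_and_thread_counts()
    print(f"{cores} cores, {threads} threads "
          f"(hyperthreading appears to be {'on' if threads > cores else 'off'})")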

 

I mention everything above for one simple reason. I am a VMware Technical Account Manager, and another level of complexity is added as IT vendors and personnel bring in terms like “logical” and “virtual” that represent abstraction or emulation of physical entities. I am on calls with vSphere customers and start hearing things like “my processor utilization in vCenter is showing 50% of what the OS is showing”. And so begins my standard response: “Are we talking about Host or Workload metrics? And are you talking about cores or threads?”

 

Using esxtop as another example, the term “PCPU” represents a hardware execution context: with HT enabled, that is a thread (not a physical CPU package, and not even a whole core!). The core counters in esxtop really do refer to cores (okay, so the correct term does get used sometimes!). If the core utilization line does not appear alongside the PCPU lines, that is a sign that HT is not enabled in the BIOS of the host or in the ESXi hypervisor. The PCPU view is still helpful because it tells you the percentage of time each thread is executing on a core.

 

Even the configuration of a VM can be confusing. When you edit the CPU properties, you set the number of CPUs, and then the sub-option is “Cores per Socket” rather than “Cores per CPU”. Since the top-level setting is CPUs, why not make the sub-setting match? When it comes to setting a CPU/core relationship within a VMware workload/virtual machine, be careful using cores at all. For example, with vMotion I can move a VM from a host with 6 cores per CPU to a host with 12 cores per CPU and vice versa. If you do set a CPUs/cores layout, it should match the underlying physical infrastructure, and that may be hard when you have hosts with different core densities.
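
The arithmetic behind that dialog is worth spelling out, because it is exactly what changes when hosts have different core densities. Here is a tiny sketch (the function name is mine for illustration, not a vSphere API) of how the guest-visible socket count falls out of the two settings:

# A tiny sketch of the arithmetic behind the VM CPU dialog: the guest sees
# vCPUs / cores-per-socket virtual sockets. (Illustrative only, not a
# vSphere API call.)
def virtual_sockets(num_vcpus: int, cores_per_socket: int) -> int:
    if num_vcpus % cores_per_socket != 0:
        raise ValueError("vCPU count must be a multiple of cores per socket")
    return num_vcpus // cores_per_socket

print(virtual_sockets(12, 6))   # 12 vCPUs at 6 cores per socket -> 2 virtual sockets
print(virtual_sockets(12, 1))   # 12 vCPUs at 1 core per socket  -> 12 virtual sockets

If you do try to mirror the physical layout, that 6-versus-12 cores-per-CPU difference between hosts is exactly where the mismatch shows up after a vMotion.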

 

The key to success is remembering that the x86 architecture is built on the Socket-CPU-Core-Thread hierarchy. When you are dealing with a definition, the first thing to do is determine what it is actually referencing. You also need to be on the same page with your colleagues. And don’t feel disheartened that IT has made this difficult. I am reminded of a comedy act that had me ROFL (Rolling on the Floor Laughing). Although I don’t remember the entire rant, I do remember “We drive on a parkway and park on a driveway!” The nomenclature becomes less important once everyone agrees on what is being referenced.

 

 

About the author:

 

Carl Olafson is a Staff Technical Account Manager for VMware living in Southern California. Carl started with VMware in 2008. Prior to VMware, he was an independent consultant for 18 years, specializing in consulting services for SMBs. He holds VCP-DCV, VCP-NV and VCAP-DCV (Design) certifications and is working on his VCIX-DCV. Beyond his TAM duties, Carl serves on the VMUG Virtual Event Taskforce, is a vExpert (2017-2019), and is a member of the TAM Tech Lead – Operations Management team.

 
