Removed by mod
And how do you figure that? The Intel 80386DX did NOT have any 80-bit instructions at all; the built-in math co-processor only arrived with the i486. The only way an 80386DX system would get 80-bit instructions would be by adding an 80387 math co-processor.
But you obviously don’t count by a few extended instructions, but by the architecture of the CPU as a whole. And in that regard, the data bus is a very significant part, one that directly influences the speed and the number of clock cycles of almost everything the CPU does.
Removed by mod
Doesn’t mean it’s any less important, it’s just not a good marketing measure, because average people wouldn’t understand it anyway, and it wouldn’t be correct to measure by the data bus alone.
As I stated, it’s MORE complex today, not less, which the downvoters of my posts seem to refuse to acknowledge. The first Pentium had a 64-bit data bus on a 32-bit CPU, exactly because data transfer is extremely important. The first ARM CPU was designed around the fastest possible RAM access and management, and it beat the 386 by several factors with a tenth of the transistors.
Although true, this is a very simplistic way to view it, and not relevant to the actual overall bit width of the CPU, as I’ve tried to demonstrate, though people apparently refuse to acknowledge it.
But the bit width of the data bus is very important, and it was debated heavily whether it was even legal to market the Motorola 68008-based Sinclair QL as a 32-bit computer, because it only had an 8-bit data bus.
But as I stated, other factors are equally important, and the decoder is way more important than the core instruction set. Modern higher-end decoders operate at 256 bits or more, allowing them to decode multiple (4) instructions per cycle, which in turn lets each core execute multiple instructions per clock across 2 threads. Without that capability, each core would only be about a third as fast.
To claim that the instruction set determines bit width is simplistic, and you yourself argued against it, because that would mean an i486 would be an 80-bit CPU. And obviously today’s CPUs would be 512-bit, because they have 512-bit instructions.
Calling it 64-bit is exclusively meant to distinguish newer CPUs from older 32-bit CPUs, and we’ve done that since the 90s. Claiming that new CPU architectures haven’t increased in bit width for 30 years is simply naive and false, because they have, in many ways more significant than the base instruction set.
Still, I acknowledge that AArch64, AMD64, and Intel 64 CPUs are generally called 64-bit; it was never my point to refute that, only that it’s a gross simplification of what modern CPUs have become, and that it’s not technically correct.
Let me finish with a question:
With a multi-core CPU where each core is, let’s say, 64-bit, how many bits is the whole CPU package? That package is what we call the “CPU” today; when saying CPU we are generally not talking about the individual cores, because then it would have to be plural.
Removed by mod
https://en.wikipedia.org/wiki/64-bit_computing
It also lists the address bus, but as I mentioned before, a 64-bit address bus doesn’t exist. So it boils down to the instruction set as a whole requiring 64-bit processor registers and a 64-bit data bus.
Obviously 64-bit means the registers are 64-bit, and the addresses are therefore also 64-bit, otherwise it would require type casting every time you need to do calculations on them. But it’s the ability to handle 64-bit registers in general that counts, not the address registers, which are merely a byproduct.
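As a small illustration of that point (a sketch assuming an ordinary LP64 platform such as Linux on x86-64 or AArch64, not anything from this thread): a pointer value fits in the same 64-bit general-purpose registers used for ordinary integer arithmetic, so address calculations need no widening or narrowing casts.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int buf[4] = {1, 2, 3, 4};

    /* On an LP64 target, void* and uintptr_t are both 64 bits wide,
     * so an address can be manipulated with plain integer arithmetic. */
    uintptr_t base = (uintptr_t)&buf[0];
    uintptr_t last = base + 3 * sizeof(int);

    printf("sizeof(void*) = %zu, sizeof(uintptr_t) = %zu\n",
           sizeof(void *), sizeof(uintptr_t));
    printf("computed address matches &buf[3]: %s\n",
           (uintptr_t)&buf[3] == last ? "yes" : "no");
    return 0;
}
```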
Removed by mod
deleted by creator
Which is why it’s such a pain, because you have to do it manually:
https://lemire.me/blog/2021/10/21/converting-binary-floating-point-numbers-to-integers/
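A stripped-down sketch of that manual route (not the code from the linked post; it assumes a positive, finite double whose value fits in a uint64_t and skips every edge case the post actually deals with): pull the raw IEEE-754 bits apart and rebuild the integer from the exponent and mantissa.

```c
#include <stdint.h>
#include <string.h>

static uint64_t double_to_u64_manual(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                 /* raw IEEE-754 bits          */

    int exp = (int)((bits >> 52) & 0x7FF) - 1023;   /* unbiased exponent          */
    if (exp < 0)
        return 0;                                   /* |d| < 1 truncates to 0     */

    uint64_t mant = (bits & 0xFFFFFFFFFFFFFULL)     /* 52 stored mantissa bits    */
                  | (1ULL << 52);                   /* plus the implicit leading 1 */

    int shift = exp - 52;                           /* scale mantissa to an integer */
    return shift >= 0 ? mant << shift : mant >> -shift;
}
```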
Removed by mod
Where did you get that from? Because that’s false; please show me documentation for that.
64-bit always meant the ability to handle 64-bit-wide instructions, and because the architecture is 64-bit, the pointers INTERNALLY are 64-bit, but effectively they are only, for instance, 40-bit when accessing data.
Your claim about pointer width simply doesn’t make any sense.
That the CPU should be named after a single aspect it can’t even fully handle!!! That’s moronic.
Removed by mod
No, that’s not true; it’s way, way more complex than that. Some consider the data bus the best measure, another candidate could be the decoder. I could also define a normal CPU’s bit width by how many cores it has: with each core handling up to 4 instructions per cycle, a core could count as 256-bit, and an average 8-core CPU would then be 2048-bit.
There are several ways to evaluate it, such as the data bus, ALU, decoder, etc., but most reasonable ways of measuring it hover around 256 bits, and none land below 128 bits.
There is simply no reasonable way to argue a modern Ryzen CPU or Intel equivalent is below 128 bit.
There absolutely is, and the person you responded to made it incredibly clear: address width. Yeah, we only use 48-bit addresses, but addresses are 64-bit, and that’s the key difference that the majority of the market understands between 32-bit and 64-bit processors. The discussion around “32-bit compatibility” is all about address size.
And there’s also instruction size. Yes, the data it operates on may be bigger than 64-bit, but the instructions are capped at 64-bit. With either definition, current CPUs are clearly 64-bit.
But perhaps the most important piece here is consumer marketing. Modern CPUs are marketed as 64-bit (based on both of the above), and that’s what the vast majority of people understand the term to mean. There’s no point in coming up with another number, because that’s not what the industry means when they say a CPU is 64-bit or 32-bit.
Edited for clarity
You are stating the register width, which is irrelevant to the width of the address bus.

But that doesn’t make a shred of sense. It’s like claiming a road is 40,000 km long, running around the globe; it’s just not finished, so you can only drive on a few km of it. The registers are 64-bit, but “only” 40 bits can be used. That is still enough to address 1 terabyte of RAM.
If you want to measure by address width, we don’t have a single 64-bit CPU, because no 64-bit CPU exists with a 64-bit address bus.
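For what it’s worth, the widths the hardware actually implements can be read straight off the chip. A minimal sketch, assuming x86-64 and GCC/Clang’s <cpuid.h>: CPUID leaf 0x80000008 reports the physical and virtual address bits that are really wired up, and on current parts both come back well short of 64.

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000008: EAX[7:0]  = physical address bits,
     *                           EAX[15:8] = linear (virtual) address bits. */
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
        return 1;
    }
    printf("physical address bits: %u\n", eax & 0xFF);
    printf("virtual  address bits: %u\n", (eax >> 8) & 0xFF);
    return 0;
}
```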
Yes they have, and that’s what the vast majority of people mean when they say a CPU is 32-bit or 64-bit. It was especially important in the transition from 32-bit to 64-bit because of all the SW changes that needed to be made to support 64-bit addresses. It was a huge thing in the early 2000s, and that is where the nomenclature comes from.
Before that big switch, it was a bit more marketing than anything else and frequently referred to the size of the data the CPU operated on. But during and after that switch, it shifted to address sizes, and instructions (not including the data) are also 64-bit. The main difference w/ AVX vs a “normal” instruction is the size of the registers used, which can be up to 512-bit, vs a “normal” 64-bit register. But the instruction remains 64-bit, at least as far as the rest of the system is concerned.
Hence why CPUs are 64-bit, all of the interface between the CPU and the rest of the system is with 64-bit instructions and 64-bit addresses. Whether the CPU does something fancy under the hood w/ more than 64-bits (i.e. registers and parallel processing) is entirely irrelevant, the interface is 64-bit, therefore it’s 64-bit.
Nobody ever called the purely 8-bit Motorola 6800, MOS Technology 6502, Zilog Z80, or Intel 8080 16-bit computers for having a 16-bit address bus. They had 8-bit instructions and data buses, and were called 8-bit chips. The purely 16-bit Intel 8086 wasn’t called a 20-bit CPU for having a 20-bit address bus; it was called a 16-bit CPU for having a 16-bit instruction set and data bus. Nor was the Motorola 68000 called a 24-bit CPU for having a 24-bit address bus; it was a 32-bit CPU for having a 32-bit instruction set.
I have no idea how you are getting upvoted, because your claim that CPUs are named by their address bus bit length is decidedly false.
The most common is to use the data bus or the instruction set, and now also the instruction decoder and other things, because the complexity has grown. But no 64-bit CPU has a 64-bit address bus, because that would be ridiculous.
Back in the day it was mostly the instruction set, then it became instruction set / data bus. Today it’s way, way more complex; we may call it x86-64, but that’s the instruction set, and the modern x86-64 CPU is not simply 64-bit anymore. They are hybrids of many bit widths.
Show me just ONE example of a CPU that was called by its address bus.
https://people.ece.ubc.ca/edc/379.jan2000/lectures/lec2.pdf
Tell me when the 8086 and 8088 were ever called 20-bit CPUs!!
https://www.alldatasheet.com/datasheet-pdf/view/82483/MOTOROLA/MC6800.html
The 6800 was an 8-bit CPU with a 16-bit address bus, as were the 6502/6510.
https://en.wikipedia.org/wiki/Motorola_68000
The 68000 is here correctly called 16/32 because it has a 16-bit data bus and a 32-bit instruction set.
The address bus is 24-bit, but never has a CPU been called 20- or 24-bit because of its address bus, despite many 16-bit CPUs having had address buses of that length.
Incidentally, the MOS 6510 in the Commodore 64 had on-chip port lines used for bank switching, effectively an extra 17th address bit, enabling it to use ROM and cartridges together with the 64 KB of RAM. It would be absolutely ridiculous to call it either a 16- or 17-bit computer, and by no accepted standard would it be called that.
I guess you know more about hardware nomenclature than Linux kernel developers, because they call modern Intel/AMD and ARM CPUs amd64 and aarch64, respectively.
AMD64 is the name of the instruction set they program to; it has nothing to do with how many bits the CPU is. Obviously the core instruction set is 64-bit, but as I’ve tried to explain, a chip’s bit width is not realistically determined by the instruction set alone anymore.
Although they are almost identical, the Intel equivalent of AMD64 is called Intel 64 (formerly EM64T).
AArch64 is the 64-bit Arm architecture, again the instruction set you program for, and not the bit width of the CPU.
None of those describe the address bus width either. Intel 64, AMD64, and AArch64 implementations come with all sorts of different address bus widths, all of which are less than 64 bits wide.
https://www.tomshardware.com/reviews/processor-cpu-apu-specifications-upgrade,3566-2.html
Although this is a bit dated, the latest I heard was a 48-bit address bus, which would surpass the 2013 figures above by a factor of 256.
Obviously none of these 64-bit architecture CPUs are called 40-bit or 48-bit.
Sure, but that was a long time ago. Lithography marketing also used to make sense when it was actually based on real measurements, but times change.
All those chips you’re talking about were from >40 years ago. Times change.
Sure, yet when someone describes a CPU, we talk about the instruction set, so we talk about 32-bit vs 64-bit instructions. That’s how the terminology works.
I never denied that; what I denied was the ridiculous idea that the address bus is a meaningful measure. AMD64 is a 64-bit instruction set by definition, but a modern Ryzen CPU is so much more than just AMD64. And the same is true for the competition.
Originally an AMD64 CPU was single-core and single-threaded. That is far from true today, so obviously, since the CPU can handle multiple instructions on multiple cores, the “CPU package” is also necessarily wider.
I have no idea what has gone wrong here. I’m not denying that a modern Intel, AMD, or Arm CPU is generally called a 64-bit CPU.
I’m just stating that if they had to be measured by their actual capabilities, a modern Ryzen CPU, for instance, is actually closer to being a 256-bit CPU, and that’s per core! That is in part due to technologies that let them execute several instructions in a single clock cycle and operate on way wider buses than older CPUs, which ran only a single thread per core.
But there can be absolutely no doubt that the address bus was NEVER used to determine the bit width of a CPU; that would simply be ridiculous, as it ONLY determines the amount of addressable RAM and nothing else.
Those easy-to-understand examples were only meant to show how ridiculous it is to claim the address bus is a meaningful measure of a CPU’s bit width.
Also, AMD64 is only part of the instruction set of a modern Ryzen CPU, so although AMD64 definitely is a 64-bit instruction set, it only describes one part of the CPU. It also supports x87, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AES, CLMUL, AVX, AVX2, FMA3, CVT16/F16C, ABM, BMI1, BMI2, and SHA.
Many of these operate on data far wider than 64 bits; AVX2, for instance, works on 256-bit registers, and AVX-512 extends that to 512 bits.
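As a concrete sketch (assuming GCC or Clang on a CPU with AVX2, compiled with -mavx2): the single AVX2 instruction below adds four 64-bit integers packed into one 256-bit register.

```c
#include <stdio.h>
#include <immintrin.h>   /* AVX2 intrinsics */

int main(void) {
    /* Two 256-bit vectors, each holding four 64-bit integers. */
    __m256i a = _mm256_set_epi64x(4, 3, 2, 1);
    __m256i b = _mm256_set_epi64x(40, 30, 20, 10);

    /* vpaddq: one instruction, four 64-bit additions in parallel. */
    __m256i sum = _mm256_add_epi64(a, b);

    long long out[4];
    _mm256_storeu_si256((__m256i *)out, sum);
    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```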
That seems to be exactly what you’re arguing about, unless I have misread this entire thread.
If we want to highlight other capabilities, we should use different terminology than “X-bit”, because that has been pretty much universally agreed upon to refer to instruction sizes and addresses, not data pipelines. And we do that: product spec sheets refer to extensions to point out the unique capabilities they offer (e.g. Intel was pretty famous for supporting AVX-512 almost 10 years before AMD).
That said, now that 32-bit is essentially dead, the “X-bit” marker is essentially dead too, and saying something is 256-bit or whatever today is just going to confuse people. People have gotten into the habit of talking about specific capabilities when it’s relevant (which it isn’t for most people, who just care about “IPC”).