• Buffalox@lemmy.world · 2 months ago

    > You don’t even see it listed on spec sheets.

    That doesn’t mean it’s any less important; it’s just not a good marketing measure, because average people wouldn’t understand it anyway, and it wouldn’t be correct to measure by the data bus alone.
    As I stated, it’s MORE complex today, not less, as the downvoters of my posts seem to refuse to acknowledge. The first Pentium had a 64-bit data bus on a 32-bit CPU, precisely because data transfer is extremely important. The first ARM CPU was designed around the fastest possible RAM access/management, and it beat the 386 by several factors with a tenth of the transistors.

    > Go look at anything post-2000: 64 bit means that pointers take up 64 bits. 32 bits means that pointers take up 32 bits.

    Although true, this is a very simplistic way to view it, and not relevant to the actual overall bit width of the CPU, as I’ve tried to demonstrate, but people apparently refuse to acknowledge.
    But the bit width of the data bus is very important, and it was heavily debated whether it was even legal to market the 68008-based Sinclair QL as a 32-bit computer, because it only had an 8-bit data bus.
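    A minimal C sketch of that pointer point (assuming gcc or clang on a typical 64-bit Linux target): pointer size just follows the compilation target’s data model, and reveals nothing about bus or decoder width.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Pointer size follows the data model of the build target:
         * typically 4 bytes on a 32-bit build, 8 bytes on a 64-bit build. */
        printf("sizeof(void *)    = %zu bytes\n", sizeof(void *));
        printf("sizeof(uintptr_t) = %zu bytes\n", sizeof(uintptr_t));
        /* The same source compiled with -m32 vs -m64 prints 4 vs 8;
         * nothing about the data bus or decoder is visible here. */
        return 0;
    }
    ```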

    But as I stated, other factors are equally important, and the decoder is way more important than the core instruction set. Modern higher-end decoders operate at 256 bits or more, allowing them to decode multiple (4) instructions per cycle, which in turn allows each core to execute multiple instructions per clock, in 2 threads. Without that capability, each core would only be about a third as fast.
    To claim that the instruction set determines bit width is simplistic, and you yourself argued against it, because that would mean an i486 was an 80-bit CPU. And obviously today’s CPUs would be 512-bit, because they have 512-bit instructions.
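    To make the 512-bit point concrete, here’s a sketch using AVX-512F intrinsics (an assumption: an x86-64 CPU with AVX-512 support, compiled with e.g. -mavx512f). A single instruction here operates on 512 bits of data, yet the CPU is still marketed as 64-bit:

    ```c
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        /* Sixteen 32-bit floats = 512 bits per vector. */
        float a[16], b[16], out[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

        __m512 va = _mm512_loadu_ps(a);    /* one 512-bit load           */
        __m512 vb = _mm512_loadu_ps(b);
        __m512 vc = _mm512_add_ps(va, vb); /* one add, 16 floats at once */
        _mm512_storeu_ps(out, vc);

        printf("out[15] = %.1f\n", out[15]); /* 15 + 1 = 16.0 */
        return 0;
    }
    ```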

    Calling it 64-bit is exclusively meant to distinguish newer CPUs from older 32-bit CPUs, and we’ve done that since the ’90s. Claiming that new CPU architectures haven’t increased in bit width for 30 years is simply naive and false, because they have, in many ways more significant than the base instruction set.

    Still, I acknowledge that AArch64, AMD64, or IA-64 CPUs are generally called 64-bit; it was never my point to refute that, only that it’s a gross simplification of what modern CPUs have become, and that it’s not technically correct.

    Let me finish with a question:
    With a multi-core CPU where each core is, let’s say, 64-bit, how many bits is the whole CPU package? The package is what we call the “CPU” today; when saying CPU, we are generally not talking about the individual cores, because then it would have to be plural.

      • Buffalox@lemmy.world · 2 months ago

        > It means pointer width.

        https://en.wikipedia.org/wiki/64-bit_computing

        > 64-bit integers, memory addresses, or other data units are those that are 64 bits wide. Also, 64-bit central processing units (CPU) and arithmetic logic units (ALU) are those that are based on processor registers, address buses, or data buses of that size.

        It also states address bus, but as I mentioned before, that doesn’t exist. So it boils down to the instruction set as a whole requiring 64-bit processor registers and a 64-bit data bus.
        Obviously 64-bit means the registers are 64-bit, and the addresses are therefore also 64-bit, otherwise it would require type casting every time you needed to do calculations on them. But it’s the ability to handle 64-bit registers in general that counts, not the address registers, which are merely a byproduct.
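        A small sketch of that register point (assuming a standard LP64 target such as x86-64 Linux, where uintptr_t matches the 64-bit general-purpose registers): address math and plain integer math run through the same registers, so 64-bit addresses come along with 64-bit registers.

        ```c
        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            int data[4] = {10, 20, 30, 40};

            /* A pointer round-trips through a 64-bit integer without loss:
             * address arithmetic is just 64-bit integer arithmetic in the
             * same general-purpose registers. */
            uintptr_t addr = (uintptr_t)&data[0];
            addr += 2 * sizeof(int);      /* ordinary 64-bit integer add */
            int *p = (int *)addr;

            printf("*p = %d\n", *p);      /* prints 30 */
            return 0;
        }
        ```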

      • Buffalox@lemmy.world · 2 months ago

        > It means pointer width.

        Where did you get that from? That’s false; please show me documentation for it.
        64-bit has always meant the ability to handle 64-bit wide instructions, and because the architecture is 64-bit, the pointers are INTERNALLY 64-bit, but effectively they are only, for instance, 40 bits wide when accessing data.
        Your claim about pointer width simply doesn’t make any sense.
        That the CPU should be named after a single aspect it can’t actually handle!!! That’s moronic.
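        To make the “internally 64-bit, effectively fewer” point concrete: a sketch assuming x86-64 under Linux, where user-space virtual addresses are canonical and only the low 48 bits (or 57 with 5-level paging) are significant; the exact physical/virtual widths vary per CPU and are reported as “address sizes” in /proc/cpuinfo.

        ```c
        #include <stdio.h>
        #include <stdint.h>
        #include <inttypes.h>
        #include <stdlib.h>

        int main(void) {
            int *p = malloc(sizeof *p);
            uintptr_t v = (uintptr_t)p;

            /* The register holding the pointer is 64 bits wide, but on
             * x86-64 the upper bits are just a sign-extension of bit 47:
             * the usable address is narrower than the register. */
            printf("pointer     = 0x%016" PRIxPTR "\n", v);
            printf("top 16 bits = 0x%04"  PRIxPTR "\n", v >> 48);
            printf("low 48 bits = 0x%012" PRIxPTR "\n",
                   v & (((uintptr_t)1 << 48) - 1));

            free(p);
            return 0;
        }
        ```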