
joelb51

macrumors newbie
Original poster
Sep 21, 2005
18
0
just want to throw this out there...

is it possible that 10.5 could be a 128-bit OS, or are Intel processors only 64-bit?
 

PlaceofDis

macrumors Core
Jan 6, 2004
19,241
6
nope, not going to happen

the OS and apps aren't even taking full advantage of the 64-bit chips yet, and there are no 128-bit chips out there to be used. no real point for them at this time.
 

joelb51

macrumors newbie
Original poster
Sep 21, 2005
18
0
yeah, makes sense... just curious... also, what does the 64-bit actually mean - the processor can handle more commands???
 

Lacero

macrumors 604
Jan 20, 2005
6,637
3
128 bit?

64-bit already provides about 4 billion times the address space of 32-bit systems. If anything, a 128-bit OS would run slower than a 64-bit one.
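
To put rough numbers on that, here's a tiny C sketch (illustrative only, not from any real OS code) that prints the two address-space sizes and their ratio:

#include <stdio.h>

int main(void) {
    /* 32-bit pointers can address 2^32 bytes; 64-bit pointers can address 2^64 bytes. */
    double space32 = 4294967296.0;            /* 2^32 bytes = 4 GiB  */
    double space64 = 18446744073709551616.0;  /* 2^64 bytes = 16 EiB */
    printf("32-bit address space: %.0f bytes\n", space32);
    printf("64-bit address space: %.0f bytes\n", space64);
    printf("ratio: %.0f (about 4 billion)\n", space64 / space32);
    return 0;
}

2^64 / 2^32 = 2^32, which is where the "about 4 billion times" figure comes from.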
 

vasaz

macrumors member
Mar 23, 2004
51
0
MN
First of all, there are no mainstream 128-bit processors, and I doubt there will be any anytime soon. So there's no point in a 128-bit OS.

64-bit vs. 32-bit can refer to many things in terms of CPU parts: the bus width, the ALU (for doing additions, etc.), or the register sizes. Having more bits provides advantages such as the ability to access A LOT more memory, or to do arithmetic with large numbers natively (good for scientific stuff). But it also has downsides, such as using more chip area to implement, as well as requiring more bandwidth to move any data around (64 bits instead of 32).

So if you have an application (as most are these days) that does not use huge numbers or need more memory than 32 bits can address, having 64-bit abilities provides almost no advantage... and at the same time, unless you are clever, you'll be filling up 64-bit registers with 32 bits of data - a waste!
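
As a rough illustration of that waste (a hypothetical C sketch, not code from any real application), the same linked-list node roughly doubles in size when pointers grow from 4 to 8 bytes, so you move more bytes around to represent exactly the same information:

#include <stdio.h>

/* A node that stores a small value plus a pointer to the next node. */
struct node {
    int value;          /* 4 bytes on both 32-bit and 64-bit targets    */
    struct node *next;  /* 4 bytes on a 32-bit build, 8 bytes on 64-bit */
};

int main(void) {
    /* On a typical 32-bit build this prints 8; on a 64-bit build, 16   */
    /* (4-byte int + 8-byte pointer + 4 bytes of alignment padding).    */
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}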

Higher numbers don't necessarily mean the result is that much better. 128 bits would be complete overkill at this stage and would provide no advantage to 99.99% of users. A CPU's die area can be used for much more useful performance-enhancing features.
 

vasaz

macrumors member
Mar 23, 2004
51
0
MN
Josh396 said:
Could you explain a little more? Is that because the processors would only be 64-bit?

Yes, so you'd have to emulate any 128-bit operations on top of 64-bit hardware... which is going to kill your performance.
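
For example, here's a small C sketch (purely illustrative - no shipping OS does it exactly this way) of how a single 128-bit add has to be emulated as two 64-bit adds plus a manual carry, turning one hardware operation into several:

#include <stdio.h>
#include <stdint.h>

/* A 128-bit unsigned integer stored as two 64-bit halves. */
typedef struct { uint64_t lo, hi; } u128;

/* Emulated 128-bit add: add the low halves, detect overflow, carry into the high halves. */
static u128 add128(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry if the low half wrapped around */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };   /* 2^64 - 1 */
    u128 b = { 1, 0 };
    u128 sum = add128(a, b);      /* exactly 2^64: lo = 0, hi = 1 */
    printf("hi = %llu, lo = %llu\n",
           (unsigned long long)sum.hi, (unsigned long long)sum.lo);
    return 0;
}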
 

Kelmon

macrumors 6502a
Mar 28, 2005
725
0
United Kingdom
I'm a little iffy on the true benefits of 64-bit computing but here's what I do know:

1. The big benefit, as far as "normal" computing goes, is that the computer is able to address more than 4GB of RAM. 64 bits is the size of the registers on the processor and, as such, the biggest number the computer can work with natively is 64 bits long. A 32-bit register tops out at 4,294,967,295, so there are only 4,294,967,296 possible memory addresses (4GB) - anything beyond that can't be addressed directly. 64-bit registers allow 18,446,744,073,709,551,616 addresses, which effectively removes any practical memory limit. OS X, as you may be aware, loves RAM, so the more you can throw at it, within reason, the better, and 64-bit computing helps you out there.

2. Less beneficial for normal users but great for the scientific community: 64-bit computers can handle bigger numbers natively. My theory is a little iffy here, but a 32-bit machine has to "fake" any number bigger than the 32-bit limit in software, which costs speed and can cost accuracy. 64 bits allows much bigger "real" numbers, so scientific applications can work directly and exactly with values in the higher ranges (see the small sketch after this list).

3. Big databases, I believe, appreciate 64-bits for reasons similar to #2. I am not 100% sure about this...
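
Here's a small C sketch of point #2 above (illustrative only - the "accuracy" point is stated loosely in the thread, so this just uses whole numbers): 32-bit arithmetic silently wraps around once a result passes 4,294,967,295, while 64-bit arithmetic keeps the exact value natively:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t small = 3000000000u;            /* fits in 32 bits */
    /* 32-bit arithmetic wraps: 3e9 * 2 = 6e9, which exceeds 2^32 - 1. */
    uint32_t wrapped = small * 2u;           /* wraps to 1705032704 */
    /* Promoting to 64 bits keeps the exact value, natively on a 64-bit CPU. */
    uint64_t exact = (uint64_t)small * 2u;   /* 6000000000 */
    printf("32-bit result: %u\n", wrapped);
    printf("64-bit result: %llu\n", (unsigned long long)exact);
    return 0;
}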

Beyond this I am not sure of the benefits of 64 bits. Some 64-bit processors run faster than 32-bit ones, but I am unsure whether that is a direct result of 64 bits or just the opportunity to redesign the processors. The overall benefit for "regular" users is access to more RAM and possibly processors with an improved architecture. Beyond that, 64-bit computing means pretty much nothing, and it won't deliver, to use examples I've heard, better artificial intelligence in games or other such nonsense. 128-bit computing simply isn't necessary at this time.
 

russed

macrumors 68000
Jan 16, 2004
1,619
20
are the proposed Intel processors that may/will end up in Macs 64-bit?

is the processor that will go in the PowerMac/PowerBook 64-bit? otherwise won't it be a step backwards?
 

Will Cheyney

macrumors 6502a
Jul 13, 2005
701
0
United Kingdom
That question has been raised before... The truthful answer is "we don't know", but I don't see Apple back-pedaling and sticking older technology in machines they already have to convince people about - many are dubious over the Intel switch.
Me? I don't really think it will make much of a difference - except a PowerBook G5 equivalent, FINALLY!
 

russed

macrumors 68000
Jan 16, 2004
1,619
20
but are current Intel processors 64-bit? what about this Yonah processor that I have read so many mention?
 

bousozoku

Moderator emeritus
Jun 25, 2002
15,876
2,078
Lard
It's possible but unlikely.

If Apple considered making the operating system even more hardware-independent, they could craft a virtual machine like the one IBM has already implemented on PowerPC hardware as a 512-bit processor. This would allow applications to be written for the virtual machine and require only minimal changes to the executable application from time to time, but nothing would have to be re-compiled. For example, applications written for the original IBM System/38, which was released in 1979, are still able to run on the latest iSeries machines, even though the hardware has changed many, many times, even switching from CISC to RISC.

The trouble with this is the overhead. It takes a strong machine to put forth reasonable performance even when the virtual machine code is in tune with the actual processor instruction set.
 

JoeDeep

macrumors newbie
Nov 25, 2005
3
0
The current generation of Apple's PowerPC platform uses 64-bit RISC processors.

The next generation of Intel chips is all multi-core. The one Apple is looking at is a dual-core chip based on the Pentium M, from what I understand. I can't see them really taking a step backwards from the 64-bit standpoint they currently have, so it's reasonable to assume the new chips will be 64-bit, just as the PPC versions are now.
 

generik

macrumors 601
Aug 5, 2005
4,116
1
Minitrue
joelb51 said:
yeah, makes sense... just curious... also, what does the 64-bit actually mean - the processor can handle more commands???

Memory addressing.

And for AMD's chips, yeah, additional registers.
 

devman

macrumors 65816
Apr 19, 2004
1,242
8
AU
bousozoku said:
It's possible but unlikely.

If Apple considered making the operating system even more hardware-independent, they could craft a virtual machine like the one IBM has already implemented on PowerPC hardware as a 512-bit processor. This would allow applications to be written for the virtual machine and require only minimal changes to the executable application from time to time, but nothing would have to be re-compiled. For example, applications written for the original IBM System/38, which was released in 1979, are still able to run on the latest iSeries machines, even though the hardware has changed many, many times, even switching from CISC to RISC.

The trouble with this is the overhead. It takes a strong machine to put forth reasonable performance even when the virtual machine code is in tune with the actual processor instruction set.

Gee, there's so much to write about this it's hard to know where to start. It could be just the way you've written your post (meaning, you may well be aware of what I'm about to write).

With the S/38 and AS/400, up to the NPM/NMI (new programming model, new machine interface), there is a multi-layered OS architecture. The operating systems (CPF and OS/400) are written to a specification and API called the MI - the machine interface. Beneath the MI is another whole bunch of code (the lower half of an OS, you could say) called VMC on the S/38 and SLIC on the AS/400. The scheduler, resource management, etc. are all down there, as is the database code (which is what led to the S/38 claims of a "database in hardware" - which is stretching the truth to breaking point). But file management, for example, is up in CPF or OS/400. Beneath the VMC/SLIC is the microcode (which used to be called HMC on the S/38) and the hardware (which used to be 42-bit CISC chips).

When you compile an RPG program (for example), it is a multi-level compilation process. First, an MI representation is created, which has all the symbolic information as well. This is called the program template, and it is the "above the MI" part of the compile. Next, a VMC/SLIC program called the translator runs and compiles the MI representation down to the executable (a program object). This is the "beneath the MI" part of the compile. However, the full MI representation and all the symbolic information are stored with the program object in an associated space. This is what allowed any user S/38 or AS/400 program (prior to NPM) to be symbolically debugged without having to do a special compile - the symbolic info was always kept.

So, when the AS/400 came along and replaced the S/38, the AS/400 SLIC translator could compile from AS/400 MI or from S/38 MI. The same thing happened when the AS/400 switched from the OPM (original programming model) to the NPM (SLIC was rewritten from PL/MP to C++): S/38 programs (and OPM AS/400 programs) appeared to run without recompilation.

What actually happened was that when an S/38 program object was run for the first time on an NPM AS/400, the second half of the compiler - the SLIC translator - was re-run, taking the still-stored S/38 MI representation and compiling it into an NPM AS/400 program object. In other words, the NPM SLIC translator could compile NMI or OMI (the AS/400 or S/38 versions).

Now, all of this stored MI and symbolic stuff became known as a program's observability info. For a long time you have been able to strip the observability info from a program object (to save disk space or for confidentiality). If you do remove it, you lose the ability to debug symbolically (without specifically recompiling the source code for that option), and of course you lose the ability for the SLIC translator to do any cross-architecture compilation for you. For example, an OPM AS/400 program without observability info will NOT run on a PowerPC AS/400 without a full recompile from source code.
 