
rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
makeme said:
3. If you are just doing simple integer math, then there is no need for casting. Just do this: int i = int(1 / .005).
makeme - I knew about the overloaded constructor, but just out of curiosity I ran my test program with this too. On the intel machine, int(1.0/0.005) gives 199, not 200 (same as with the static_cast<int>()). On PPC, it gives 200.

The intel compiler is:
gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)

I have tried the 3.3 and 4.0 compilers on the PPC with no difference. So now the cast really is a side issue. What's going on here?
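For reference, the test boils down to something like this (a minimal sketch, not the actual project code):

#include <iostream>

int main()
{
    // 0.005 has no exact binary representation, so 1.0/0.005 can come out
    // a hair under 200 depending on the intermediate precision used;
    // a truncating conversion to int then gives 199 instead of 200.
    std::cout << static_cast<int>(1.0 / 0.005) << std::endl;
    std::cout << int(1.0 / 0.005) << std::endl;
    return 0;
}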
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
makeme said:
Read http://gcc.gnu.org/bugs.html for how to report bugs to the GNU C++ Compiler developers. I don't know what else it could possibly be.

http://gcc.gnu.org/bugs.html said:
Problems with floating point numbers - the most often reported non-bug.

In a number of cases, GCC appears to perform floating point computations incorrectly. For example, the C++ program

#include <iostream>

int main()
{
    double a = 0.5;
    double b = 0.01;
    std::cout << (int)(a / b) << std::endl;
    return 0;
}

might print 50 on some systems and optimization levels, and 49 on others.

This is the result of rounding: The computer cannot represent all real numbers exactly, so it has to use approximations. When computing with approximation, the computer needs to round to the nearest representable number.

This is not a bug in the compiler, but an inherent limitation of the floating point types. Please study this paper for more information.

So now we're back to this just being basic frustration. :rolleyes: I guess this will teach me.
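One way to actually see the rounding is to print the quotient with more digits than the default six (a quick sketch using the values from the gcc example above):

#include <iostream>
#include <iomanip>

int main()
{
    double a = 0.5;
    double b = 0.01;
    // With ~17 significant digits the quotient may show up as something
    // like 49.999999999999993 rather than 50, which is why the
    // truncating cast to int can give 49.
    std::cout << std::setprecision(17) << (a / b) << std::endl;
    std::cout << (int)(a / b) << std::endl;
    return 0;
}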
 

makeme

macrumors member
Jul 16, 2005
48
0
rinseout said:
So now we're back to this just being basic frustration. :rolleyes: I guess this will teach me.

Perhaps this is the time to bust out an industrial-strength math library. Perhaps Boost's numeric conversion library would help. Then again, it seems to just use floor and ceil to do its work, so you could just use floor and ceil directly. It's your call, it's your program, good luck!
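If you go the floor/ceil route, something along these lines would do (a rough sketch rather than a definitive fix):

#include <cmath>
#include <iostream>

int main()
{
    double x = 1.0 / 0.005;
    // Truncation is what bites you; rounding to the nearest integer
    // absorbs the tiny error left over from the division.
    int truncated = static_cast<int>(x);
    int rounded   = static_cast<int>(std::floor(x + 0.5));
    std::cout << truncated << " vs " << rounded << std::endl;
    return 0;
}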
 

jeremy.king

macrumors 603
Jul 23, 2002
5,479
1
Holly Springs, NC
This thread has mysteriously captured my interest. Are you sure you are comparing Apples to Apples (no pun intended)? Have you tried other variations of the same computation, such as explicitly casting the operands to double/float and/or casting the result? What about doing a sizeof() to make sure you are dealing with the same bit precision on each platform? How about different representations of the same problem, such as (1.0 / (1.0 / 200.0))...

Just a couple thoughts.
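Something like the following would check all of that in one go (a rough sketch):

#include <iostream>

int main()
{
    // Confirm both platforms use the same widths for the floating point types.
    std::cout << sizeof(float) << " " << sizeof(double) << " "
              << sizeof(long double) << std::endl;

    // A few variations of the same computation.
    std::cout << (int)(1.0 / 0.005) << std::endl;
    std::cout << (int)(1.0f / 0.005f) << std::endl;
    std::cout << (int)(1.0 / (1.0 / 200.0)) << std::endl;
    return 0;
}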
 

gekko513

macrumors 603
Oct 16, 2003
6,301
1
kingjr3 said:
This thread has mysteriously captured my interest. Are you sure you are comparing Apples to Apples (no pun intended)? Have you tried other variations of the same computation, such as explicitly casting the operands to double/float and/or casting the result? What about doing a sizeof() to make sure you are dealing with the same bit precision on each platform? How about different representations of the same problem, such as (1.0 / (1.0 / 200.0))...

Just a couple thoughts.
We did print out the binary representation of the "offending" double that was rounded differently, and as you can see it was identical on the two platforms except for the endianness.
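For anyone wanting to reproduce that, a byte dump along these lines does the job (a sketch, not necessarily the exact code we used):

#include <cstdio>

int main()
{
    double x = 1.0 / 0.005;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&x);
    // Print the raw bytes of the double; x86 stores them little-endian
    // and PPC big-endian, so the same bit pattern appears byte-reversed.
    for (unsigned int i = 0; i < sizeof x; ++i)
        std::printf("%02x ", p[i]);
    std::printf("\n");
    return 0;
}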
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
kingjr3 said:
This thread has mysteriously captured my interest. Are you sure you are comparing Apples to Apples (no pun intended)? Have you tried other variations of the same computation, such as explicitly casting the operands to double/float and/or casting the result?
We were comparing the same thing: the same source code compiled on two different platforms (unfortunately the intel platform was using a different point release - gcc 3.2 vs. gcc 3.3 or 4 on the apple). And yes the results as doubles are byte-for-byte the same, but casting them to ints gives different results. According to the gcc website, this is due to differences in how optimisation is performed. I basically stopped worrying about it since I have other work to do, but I was a bit peeved that it took me as long as it did to track down the cause of the differing behaviour of my project on the two platforms. Nestled deep in the project was this kludgey trick I used some years ago when I didn't know any better, and sho' nuff it was a terrible way of tackling the problem.
 