Why does my C program run faster on Linux than on Windows?

Posted by AWS (FPCH Owner; joined Nov 19, 2003; 10,976 messages; Florida U.S.A.)
Last November, I wrote a C program that can find large prime numbers (using the Lucas-Lehmer test for Mersenne primes). Today, I installed Ubuntu 6.10 in Microsoft Virtual PC 2007 and proceeded to install gcc, m4, autoconf and GMP. I also installed all of the patches available for Ubuntu and upgraded the kernel to the i686 version.

I downloaded the source code for my program from my university's Unix server using scp and compiled it with GCC. Imagine my surprise when I discovered that my program runs far faster inside Virtual PC than it does natively on Windows.

Here are some numbers:

Testing all of the Mersenne numbers ((2^x) - 1) with x between 0 and 2281 takes 1 second on Ubuntu and 3 seconds on Windows. Testing all of the Mersenne numbers between 0 and 3217 takes 5 seconds on Ubuntu and 10 seconds on Windows. Testing M21701 ((2^21701) - 1) takes 6 seconds on Ubuntu and 20 seconds on Windows.

You could say that my program runs twice as fast on Ubuntu, but that would actually understate it: Microsoft Virtual PC has no SMP support, so these tests run in a single thread on Ubuntu but in two worker threads on Windows (three threads if you count the main thread that waits for the others to terminate). The exception is the M21701 test, which uses a single thread on both.

Since I am using pthreads-win32 for multithreading on Windows, I decided to do another run that would eliminate shared resources between the threads as a source of overhead, by ensuring that each thread had a ton of work to do before needing to lock the mutex. So I tested all of the Mersenne numbers between 2281 and 3217. It took 4 seconds on Ubuntu and 7 seconds on Windows.

The variables here are the operating systems and the compilers. I am running Windows XP Media Center Edition 2005 with Visual Studio 2008 Professional, and, under Microsoft Virtual PC 2007, Ubuntu 6.10 with GCC 4.0.3. I am compiling my program in Visual Studio 2008 Professional in release mode with every optimization flag I could find set, including /Ox, /Ob2, /Ot, /Oy, /GL, /arch:SSE2, /fp:fast and /GS-. I am compiling my program on Ubuntu with the following command:

gcc -m32 -O2 -fomit-frame-pointer -mtune=k8 -march=k8 mersenne.c /usr/local/lib/libgmp.so

There are no background processes eating CPU, and I ran each test (particularly on Windows) several times to make sure the numbers were not a fluke.

The code is the same and the hardware is the same. The only other variable is Virtual PC itself, which should be hurting performance on Ubuntu, not Windows. So can anyone tell me exactly what is making my program run so much slower on Windows than it runs on Ubuntu?
