Some issues on floating-point precision under Linux LG #53


Linux Gazette…making Linux just a little more fun!


In this article I propose a practical exploration of how Linux behaves when performing single- or double-precision calculations. I use a chaotic function to show how the results of the same program can vary considerably between Linux and a Microsoft operating system.

It is intended for math and physics students and teachers, though the equations involved are accessible to just about everybody.

I use Pascal, C and Java as they are the main programming languages in use today.

This discussion focusses on the Intel architecture. Basic concepts are the same for other types of processor, though the details can vary somewhat.

May functions

These functions define a sequence of terms of the form:

x_0 is given in [0;1]
x_(k+1) = mu * x_k * (1 - x_k), where mu is a parameter

They were introduced by Robert May in 1976 to study the evolution of a closed insect population. It can be shown that:

  • for 0