OSG multi-card scaling problem

We have observed a problem with OpenSceneGraph (OSG) in which parallel, multi-threaded applications do not scale properly across multiple graphics cards. The problem has been isolated to the OSG software itself.

We provide a program and an input data file that demonstrate the performance problem.


The program can be compiled like any typical OSG program. (Our testing was done with OSG versions 2.9.10 and 3.0.)
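A typical compile line might look like the following. This is only a sketch: the source file name is hypothetical, and you should adjust the include and library paths for your OSG installation. The listed libraries (osg, osgViewer, osgDB, osgGA, OpenThreads) are the ones a viewer program of this kind usually links against.

```shell
# Hedged example compile line; osgMultiCardTest.cpp is a placeholder name.
g++ -O2 -o osgMultiCardTest osgMultiCardTest.cpp \
    -losg -losgViewer -losgDB -losgGA -lOpenThreads
```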

Before running, set the following environment variables. The first disables sync-to-vblank (on NVIDIA drivers) so the frame rate is not capped at the display refresh rate; the second selects OSG's CullThreadPerCameraDrawThreadPerContext threading model, which gives each camera its own cull thread and each graphics context its own draw thread.

export __GL_SYNC_TO_VBLANK=0
export OSG_THREADING=CullThreadPerCameraDrawThreadPerContext

Run the program from the command line as follows:

  ./osgMultiCardTest displayNumbers file

  • displayNumbers is a list of integer display numbers, e.g. 0 1 2 3
  • file is an OSG-loadable test data file, typically .osg or .ive

For example:
            ./osgMultiCardTest 0       testex.ive      # display 0 only
            ./osgMultiCardTest 0 1     testex.ive      # display 0 and 1
            ./osgMultiCardTest 0 1 2   testex.ive      # display 0,1,2
            ./osgMultiCardTest 0 1 2 3 testex.ive      # display 0,1,2,3

When you run these tests, a window should appear on each of the specified displays, each showing a 3D model made up of many small colored spheres. With your cursor in one of the windows, press the 's' key; this displays the frame rate in the upper left corner of the window(s).

The frame rate gives the rendering speed in frames per second (FPS). When we run these tests, we observe an unexpected drop in frame rate as more cards are used. For example, if the FPS is 100 using one card, it drops to 50 with two cards and 25 with four cards in use. If the application were scaling properly, each card would render its own window independently, and the FPS would remain roughly the same in all cases.
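The numbers above follow a simple inverse pattern, which is what you would expect if the per-card draw work were being serialized rather than running in parallel on the separate cards. As a quick arithmetic sketch using the example figures from the text (100 FPS is the illustrative base rate, not a measured constant):

```shell
# Frame rate halves each time the card count doubles: FPS = base / cards,
# consistent with serialized (not parallel) rendering across contexts.
base_fps=100
for cards in 1 2 4; do
  observed=$((base_fps / cards))
  echo "cards=$cards expected=$base_fps observed=$observed"
done
# prints:
# cards=1 expected=100 observed=100
# cards=2 expected=100 observed=50
# cards=4 expected=100 observed=25
```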