Pervasive environments use an increasing number of sensors including cameras, microphones and microphone arrays that tend to produce data at a high rate by taking advantage of recent technological improvements in networks and commodity hardware. Pervasive applications must acquire, process and fuse data from sensors in real-time, which is usually beyond the capabilities of single machines.
It is therefore necessary to distribute such applications over a cluster of computers. An application is thus represented as a data flow graph, with streaming media flowing between the computational components that perform logical tasks such as acquiring video, filtering audio, or detecting a person’s face, as shown in Figure 1 below.

Figure 1: The NIST Data Flow System II data flow map for a multimodal application tracking a speaker based on the detection of their face and voice. Data flow nodes monitoring audio and video data are also shown in that application.
Figure 2: The NIST Data Flow System II data flow map for an Ant Colony Optimization (ACO) simulation. The simulation is distributed over multiple host computers. Multiple displays, each representing a subspace unit, are shown to highlight the distribution.
Distributing computational components (also called client nodes) over a local network of computers is not an easy task and requires expertise in networking and systems programming. To facilitate the development of such applications, we provide a distributed data transport middleware, the NIST Data Flow System II, which offers network-transparent services for data acquisition and processing across a local network of computers.
The NIST Data Flow System II can be seen as a service transporting streams of data between the computational components. The transport is transparent to these components, i.e., they request access to data streams using their properties rather than their source locations. Consumer client nodes therefore do not need to know whether a stream of data is produced locally on the same physical machine or remotely on a different host. In the same way, a client node providing a stream of data does not need to know where the data blocks are sent; the data flow system handles the transport.
To use the data flow system services, client nodes use the C++ or Java API to create flows that provide or consume streams of data. The system also comes with tools to help develop and control this new kind of application.
Created on 2008-06-18 by Antoine Fillinger - Last updated on 2008-11-23 by Antoine Fillinger