
NIST Data Flow System II - User's Guide - Draft Version - Chapter 6


Table of Contents

Starting the data flow server
Starting client nodes
Overriding the connection ID of a flow
-- Overriding the client name
-- Additional client parameters
The Log System
-- Adding redirections to the log output
-- Changing the log severity
-- Enabling the Log redirection from threads
-- Using the log macro in your client node

Once the client nodes your application needs are built, you can decide how to allocate them to hosts. This section covers the management of running applications, which includes launching the data flow servers and the client nodes.

Starting the data flow server

Before starting any client node, an NDFS-II network needs to be established. You do that by launching data flow servers.

An NDFS-II network consists of at least one host running a server. If you want to use several machines in your application, you need to start a data flow server on each of them.

The system is dynamic: if you start a server and client nodes on one host and later need to launch more client nodes from a different host where no server is currently running, you just need to start a data flow server on the new host. The new server then joins the NDFS-II network, provided the servers share the same application domain. A server can join or leave the network at any time; the system has been designed to handle this behavior.

After the new server joins the network, you can start client nodes there; they can interact with the ones already running on the network by exchanging data if their flows are connected.

If a server quits the network, every client node running on its host is disconnected. Other client nodes are affected as well: they no longer receive data from producer client nodes that were running on the host that just left the NDFS-II network.

To establish an NDFS-II network, you only need to start the data flow server on each host. The server accepts an optional parameter specifying the application domain; if you don't specify one, the default application domain is used. Each domain is logically separated from the others, meaning that client nodes producing or consuming flows are not visible outside their domain. A producer client node therefore cannot feed a consumer in a different domain, even if the flows match.

Domain selection allows several applications to share the same subnet without any interaction between them, which is very useful when several people use the system with different applications. Be aware that a physical host can only have one server running at a time and can therefore only be part of one domain.

To start a server, launch the executable file called sf2d, located in the bin/admin folder if you built the system using qmake, or in the bin folder if you used the configure; make; make install method.

If you want to start the server in an application domain other than the default one, specify it as follows:

sf2d myDomain

Starting client nodes

Once a data flow server is started on a machine, client nodes can be launched. There are two ways to start client nodes and connect them together through flows: the Control Center provides a graphical way to connect clients together, and it is also possible to launch client nodes from the command line. When a client node is launched, it connects to its local server and usually creates flows. The NDFS-II then establishes the connections between client nodes through flows, based on the information hard-coded in the flow declarations.

Overriding the connection ID of a flow

Flow connections are hard-coded, so connections between client nodes can be established simply by launching the client nodes. This behavior is very convenient but can be too restrictive. We therefore provide a way to override flow connection IDs and thus change the connections between clients during flow creation.

For example, the producer client node creates an output flow as follows:

flowOut = makeOutputFlow("Flow_Audio_Array", "myFlowOut", "", "Audio1");

and let's say consumer1 creates an input flow:

flowIn = makeInputFlow("Flow_Audio_Array", "myFlowIn", "", "Audio1");

The input and output flows of both client nodes have the same flow type, Flow_Audio_Array, and the same connection ID, Audio1. So when both clients are launched, the system connects these clients together through the flow.

Let's take consumer2, which creates an input flow:

flowIn = makeInputFlow("Flow_Audio_Array", "myFlowIn", "", "Audio2");

Here, the flow type Flow_Audio_Array matches the producer's flow type, but the connection ID Audio2 of consumer2 does not match the connection ID Audio1 of the producer. So when consumer2 is launched, it won't be connected to the producer. It is however possible to override the connection ID when starting a client node by providing an argument on the command line, launching consumer2 as follows:

consumer2 --sf flow-myFlowIn=Audio1

The connection ID Audio2 of the flow named myFlowIn is overridden by the connection ID Audio1 provided on the command line. Consumer2 is now able to consume the data produced by the producer.

So to override the connection ID of a flow when launching from the command line, the option --sf flow- should be immediately followed by the name of the flow as specified in the code, then the '=' sign, and finally the newly assigned connection ID:

consumer_executable --sf flow-flow_name=newID

Overriding the client name

In an NDFS-II application, client nodes must have a unique name. It is however possible to run several instances of a client node at the same time, as long as they have different names. The name used is the one provided to the Smartflow::connect_to_application_server() method; if not specified, the name of the executable (argv[0]) is used. If several instances of the same client node are launched simultaneously without any parameter, only the first one to connect to the local data flow server is accepted. The remaining ones are denied access because they have the exact same name.

To start several instances of the same client node from the command line, each must register with the server under a different name. One of them can keep its original name, while the others have to register with new names. A client name can be overridden using command-line parameters. For example, consumer1 can be started like this:

consumer1 --sf clientname=new_client_name

The client node then registers itself with the data flow server as new_client_name. There is no need to override this parameter yourself when using the Control Center: it does so automatically if necessary.

Additional client parameters

Some additional parameters can be given when launching any client node. These parameters are used by the Control Center, but users can take advantage of them depending on their needs.

Specifying the client node path

It is possible to specify the path of the executable file of a client node. This information is used by the Control Center, which is able to discover and take control of an application. After discovering the running application on the NDFS-II network, the Control Center can stop the running client nodes, but it often cannot restart them, because it does not know where the client node executable file is located. To specify this path, launch a client node like this:

consumer1 --sf clientpath=/path/of/the/executable/consumer1

This parameter is automatically set by the Control Center when it starts a client node. If you want to be able to start a client node from the command line, discover it using the Control Center, stop it from the Control Center, and restart it from there, you need to specify the location of the client node executable when launching it.

Specifying flow information

The Control Center is able to make connections between client nodes through flows. It can request a description of the running application from the data flow server; every client node and its active flows are then displayed. A client node may also provide or consume flows that have not been created yet but will be later. In that case, the inactive flows are not displayed, because they are not yet registered with the data flow server. It is therefore possible to launch a client node from the command line and specify the flows that the client node will use. If a client node uses this option, the Control Center can display a representation of every flow of the client node, even those that have not been created yet. This parameter was originally introduced for the Control Center.

The --sf flowinfo parameter, automatically parsed by the Control Center, takes a string describing each flow by its direction, flow name, and flow type. The following example shows how to use it:

myClient --sf flowinfo=I%myFlowIn%Flow_Audio_Array%%O%myFlowOut%Flow_Audio_Array

The string describes every flow that the client node may eventually create, and it should be kept in sync with the flows created in the client node's source code. Within the string, the % character delimits the flow direction, flow name, and flow type of a single flow, while the %% delimiter separates the descriptions of different flows. It is used as follows:

--sf flowinfo=direction%flow_name%flow_type%%direction%flow_name%flow_type

The direction can be either:

  • I for an input flow
  • O for an output flow

The flow_name is the name of the flow as specified in the code as the second parameter of the makeInputFlow() or makeOutputFlow() method.

The flow_type is the type of the flow as specified in the code as the first parameter of the makeInputFlow() or makeOutputFlow() method.

The Log System

A convenient log system was necessary to monitor distributed applications.

Adding redirections to the log output

The NDFS-II provides log information from servers, duplicators, and client nodes. This information can be very useful to identify problems in applications or to display status information. By default, log output is sent to the standard output stream and to the Control Center, but it can also be redirected to:

  • a file: each server, client node, or duplicator has its own log file.
  • the system logger of your operating system.

Enabling these log redirections is done from the command line or the Control Center. From the command line, when launching a client node or a server, add parameters to specify the redirections:

  • To log to a file, add --sf logfile=/the/full/path/of/the/file. The log of the server or client node will be saved in the file indicated by this absolute path. When file logging is activated for a server, it is also activated for its duplicators. The log files for duplicators are stored in the same folder as the server's log and are named dup_IdOfTheFlow-TypeOfTheFlow.log. For example, if a client node creates a flow of type Flow_Audio_Array with the connection ID mic1, the log file of the duplicator handling this flow is named dup_mic1-Flow_Audio_Array.log.
  • To log to the system logger, add --sf syslog=1 when you launch a server or a client node. The log options for the server are propagated to duplicators running on the same host.

Note: If you don't want the Control Center to gather the logs, you need to disable this capability when launching data flow servers or client nodes. This is done by giving them this extra parameter:

--sf logforward=0

Changing the log severity

The log severity can also be set up when launching servers or client nodes. The severity level applied to a server will also be applied to the duplicator running on the same host. The severity of each client node can be set up individually.

Here are the three log severity levels, in ascending order of detail:

  • error: the default mode; only errors and important messages are displayed.
  • warning: this mode provides more output information.
  • debug: this mode provides very detailed information, intended for developers.

To change the default log severity, specify it when starting a server or a client node by adding the loglevel parameter on the command line. In the example below, the client is started with the debug severity.

consumer1 --sf clientname=new_name --sf loglevel=debug

Enabling the Log redirection from threads

By default, the log forwarding capabilities are only available within the thread where the Smartflow object was created. If a user creates a thread and wishes to have the log generated from that thread available for viewing in the Control Center, it is necessary to explicitly activate this capability. This can be done by calling the method Log_Manager::enableLogForwardForCurrentThread() within the thread context:

// sf is the Smartflow object
Log_Manager::enableLogForwardForCurrentThread();

Before leaving the thread, the log forward should be deactivated by calling Log_Manager::disableLogForwardForCurrentThread().

Note: The method needs to be invoked from each user thread in order to activate log forwarding for that thread.

Using the log macro in your client node

Users can also take advantage of the NDFS-II log capabilities for their own purposes within a client node. Logs generated this way are visible from the Control Center, and are stored in a file or sent to the system logger if required.

We use the ACE logging capabilities to implement the NDFS-II log system, hence the logging method is actually an ACE macro:

ACE_DEBUG( (USER_PREFIX ACE_TEXT("The message displays a string: %s and an integer: %d\n"),
      myString, myInteger) );

The formatting directives used (%d, %s, ...) are mostly similar to those used by the standard printf() function. For more specific formatting directives, please have a look at the ACE documentation.

Created March 3, 2016, Updated June 2, 2021