The Oxford BSP toolset manual pages

The manual pages are split into three sections: (1) the core BSPlib operations; (2) the collective communication operations; and (3) the user commands, such as compilation drivers and profiling visualisation tools.

Core BSPlib

Like many other communications libraries, BSPlib adopts a Single Program Multiple Data (SPMD) programming model. Writing an SPMD program typically involves mapping a problem that manipulates a data structure of size N onto p instances of a program that each manipulate an N/p-sized block of the original domain. The role of BSPlib is to provide the infrastructure the user needs to take care of the data distribution, and of any communication implied by manipulating parts of the data structure that reside on a remote process. Alternatively, BSPlib can serve as an architecture-independent target for higher-level libraries or programming tools that automatically distribute the problem domain among the processes.
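As a minimal sketch of the SPMD model, the following C fragment (assuming a BSPlib installation that provides the header "bsp.h") starts one process per processor and has each report its identifier; every process executes the code between bsp_begin and bsp_end:

```c
/* Minimal BSPlib SPMD program: a sketch, assuming a BSPlib
 * installation providing "bsp.h".  All processes execute the
 * region between bsp_begin and bsp_end. */
#include <stdio.h>
#include "bsp.h"

int main(void)
{
    bsp_begin(bsp_nprocs());   /* start one process per processor */
    printf("Hello from process %d of %d\n", bsp_pid(), bsp_nprocs());
    bsp_end();
    return 0;
}
```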

Initialisation     bsp_begin         Start of SPMD code
                   bsp_end           End of SPMD code
                   bsp_init          Simulate dynamic processes
Halt               bsp_abort         One process stops all
Enquiry            bsp_nprocs        Number of processes
                   bsp_pid           Find my process identifier
                   bsp_time          Local time
Superstep          bsp_sync          Barrier synchronisation
DRMA               bsp_push_reg      Make area globally visible
                   bsp_pop_reg       Remove global visibility
                   bsp_put           Copy to remote memory
                   bsp_get           Copy from remote memory
BSMP               bsp_set_tagsize   Choose tag size
                   bsp_send          Send to remote queue
                   bsp_qsize         Number of messages in queue
                   bsp_get_tag       Get the tag of a message
                   bsp_move          Move from queue
High Performance   bsp_hpput         Unbuffered communication
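As an illustration of the DRMA operations above, the sketch below (again assuming a BSPlib installation providing "bsp.h") registers an array, then has every process bsp_put its identifier into process 0's copy of that array; both the registration and the communication take effect only at the next bsp_sync:

```c
#include <stdio.h>
#include "bsp.h"

int main(void)
{
    bsp_begin(bsp_nprocs());

    int xs[64];                    /* one slot per process; assumes p <= 64 */
    bsp_push_reg(xs, bsp_nprocs() * (int)sizeof(int));
    bsp_sync();                    /* registration takes effect here */

    int me = bsp_pid();
    /* deposit my pid into slot `me` of process 0's copy of xs */
    bsp_put(0, &me, xs, me * (int)sizeof(int), (int)sizeof(int));
    bsp_sync();                    /* communication completes here */

    if (me == 0)
        for (int i = 0; i < bsp_nprocs(); i++)
            printf("xs[%d] = %d\n", i, xs[i]);

    bsp_pop_reg(xs);
    bsp_end();
    return 0;
}
```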

Collective communications: level 1 library

Some message-passing systems, such as MPI, provide primitives for the specialised communication patterns that arise frequently in message-passing programs: broadcast, scatter, gather, total exchange, reduction, prefix sums (scan), and so on. These structured patterns also arise frequently in the design of BSP algorithms, so it is important that they can be conveniently expressed and efficiently implemented in a BSP programming system, in addition to the more primitive put and get operations, which generate arbitrary and unstructured communication patterns. The collective operations can be implemented in terms of the core operations, or directly on the architecture if that is more efficient. For modularity and safety, every collective communication performs an implicit registration, within the routine, of any arguments that are required to be communicated.

Note: The operations defined here will change in the near future, when a more substantial MPI-like suite of collective operations will be provided.

bsp_bcast   Broadcast from one process to all
bsp_fold    Reduce data with an associative operator
bsp_s       Return s, the Mflop/s rate of the processor
bsp_l       Return l, the cost of a barrier synchronisation in flops
bsp_g       Return g, the communication cost in flops per word

User commands

Compilation            bspcc       Compilation driver for C programs
                       bspf77      Compilation driver for Fortran 77 programs
                       bspc++      Compilation driver for C++ programs
TCP/IP support         bsplibd     TCP/IP daemon
                       bspload     Load manager
Profiling              bspcgprof   Call-graph profiling visualisation tool
                       bspprof     Performance prediction tool
                       bspsig      prof(1)-style profiling
Misc. BSP commands     bsprun      Execute a BSPlib program
                       bsparch     Check the BSP architecture and communications device
                       ipcclean    Clean up the inter-process communication facilities
                       bspparam    List the BSP machine parameters
Literate programming   litToPgm    Convert a literate source file into program text
                       litToTex    Convert a literate source file into LaTeX
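A typical compile-and-run cycle with these commands might look as follows (a sketch only: the exact flags, in particular the option naming the number of processes for bsprun, vary between releases, so consult the individual manual pages):

$ bspcc -o primes primes.c      # compile a C BSPlib program
$ bsprun -npes 4 ./primes       # run it on 4 processes
$ bspparam                      # inspect the machine's s, l and g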

Jonathan Hill
Last updated: June 11th 1997