A novel language system has been developed, giving rise to promising alternatives to standard formal and processor-network models of computation, and to the systolic programming of reconfigurable arrays of Arithmetic Logic Units. A textual structure called the interstring is proposed: a string of strings of simple expressions of fixed length. Unlike a conventional tree language expression, an interstring linked with an abstract machine environment can represent both the sharing of sub-expressions in a dataflow, and a program incorporating data transfers and the spatial allocation of resources for the parallel evaluation of dataflow output.
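To make the contrast with tree expressions concrete, the following minimal sketch (in Python, with a hypothetical four-field expression format and operation names that are not drawn from the paper) models an interstring as a sequence of segments of fixed-length simple expressions, where a named environment slot lets a shared sub-expression be computed once and consumed twice:

    from operator import add, mul, sub

    OPS = {"add": add, "mul": mul, "sub": sub}

    # An interstring: a sequence of segments, each segment a sequence of
    # fixed-length simple expressions (op, in1, in2, out).  Expressions
    # within a segment are independent and may evaluate in parallel;
    # segments evaluate in order.  The slot "s" is written once and read
    # twice, expressing sharing that a tree expression for
    # (a+b)*(a+b) + (c-d) could only express by duplicating a+b.
    interstring = [
        [("add", "a", "b", "s")],                 # s = a + b
        [("mul", "s", "s", "p"),                  # p = s * s   } parallel
         ("sub", "c", "d", "q")],                 # q = c - d   } segment
        [("add", "p", "q", "out")],               # out = p + q
    ]

    def evaluate(interstring, env):
        for segment in interstring:
            # The comprehension reads env before the update commits, so
            # all expressions in a segment see the same prior state.
            env.update({out: OPS[op](env[x], env[y])
                        for (op, x, y, out) in segment})
        return env["out"]

    print(evaluate(interstring, {"a": 2, "b": 3, "c": 10, "d": 4}))  # 31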
Formal computation models called the α-Ram family are introduced, comprising synchronous, reconfigurable machines with finite and infinite memories, designed to support interstring-based programming languages (interlanguages).
Distinct from dataflow, visual programming, graph rewriting, and FPGA models, α-Ram machines’ instructions are bit-level and execute in situ, without fetch. They support high-level sequential and parallel languages without the space/time overheads associated with the Turing Machine and the λ-calculus, enabling massive programs to be simulated. The elemental devices of one α-Ram machine, called the Synchronic A-Ram, are fully connected and simpler than FPGA look-up tables. With the addition of a mechanism for expressing propagation delay, the machine may be seen as a formal model for sequential digital circuits and reconfigurable computing, capable of illuminating issues in massive parallelism.
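The following toy sketch suggests how in-situ, bit-level, synchronous execution differs from fetch-based execution; the instruction names (wrt0, wrt1, cond, fork) and the activation-set mechanics are simplified stand-ins for, not a definition of, the Synchronic A-Ram's actual repertoire:

    def step(memory, program, active):
        """One synchronous cycle: every active instruction executes in
        place on the shared bit memory; there is no fetch/decode phase,
        and all writes commit together at the end of the cycle.  This
        sketch assumes no two active instructions write the same bit."""
        writes, nxt = {}, set()
        for i in active:
            instr = program[i]
            if instr[0] == "wrt0":      # ("wrt0", bit, next): clear a bit
                writes[instr[1]] = 0; nxt.add(instr[2])
            elif instr[0] == "wrt1":    # ("wrt1", bit, next): set a bit
                writes[instr[1]] = 1; nxt.add(instr[2])
            elif instr[0] == "cond":    # ("cond", bit, if0, if1): branch
                nxt.add(instr[3] if memory[instr[1]] else instr[2])
            elif instr[0] == "fork":    # ("fork", a, b): spawn in parallel
                nxt.update(instr[1:])
            # ("halt",) activates nothing
        memory.update(writes)
        return nxt

    # Invert bit 0 into bit 1.
    program = {0: ("cond", 0, 1, 2),
               1: ("wrt1", 1, 3),       # taken when bit 0 was 0
               2: ("wrt0", 1, 3),       # taken when bit 0 was 1
               3: ("halt",)}
    memory, active = {0: 1, 1: 0}, {0}
    while any(program[i][0] != "halt" for i in active):
        active = step(memory, program, active)
    print(memory[1])                    # 0, i.e. NOT bit 0

Because activation sets may grow through forking, many instructions can run per cycle without any processors or instruction streams, which is the sense in which such a machine can express MIMD parallelism directly.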
A compiler for an applicative-style interlanguage called Space has been developed for the Synchronic A-Ram. Space can express coarse to very fine-grained MIMD parallelism, and is modular, strictly typed, and deterministic. Barring operations associated with memory allocation and compilation, Space modules are referentially transparent. A range of massively parallel modules have been simulated on the Synchronic A-Ram, with outputs as expected. Space is more flexible than, and has advantages over, existing graph, dataflow, systolic, and multi-threaded programming paradigms. At a high level of abstraction, modules exhibit a small, sequential state transition system, aiding verification.
Composable data structures and parallel iteration are straightforward to implement, and allocations of parallel sub-processes and communications to machine resources are implicit.
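The determinism claim can be pictured with a small Python analogue (not Space syntax, which is not reproduced here): when the iterated module is referentially transparent and instances touch disjoint data, parallel iteration is schedule-independent, so the whole construct can be reasoned about as one sequential input-to-output transition:

    from concurrent.futures import ThreadPoolExecutor

    def module(x):
        # A pure "module": no shared mutable state, so every physical
        # schedule of its parallel instances yields the same result.
        return x * x + 1

    data = list(range(8))
    with ThreadPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(module, data))

    # Observable behaviour collapses to a single sequential transition
    # from inputs to outputs, whatever the interleaving underneath.
    assert parallel == [module(x) for x in data]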
Space points towards a range of highly connected architectural models called Synchronic Engines, with the potential to scale in a globally asynchronous, locally synchronous fashion.
Synchronic Engines are more general purpose than systolic arrays and GPUs, and bypass the programmability and resource-conflict issues associated with processor networks. If massive intra-chip, wave-based interconnectivity with nanosecond reconfigurability becomes available, Synchronic Engines will be in a favourable position to contend for a place among the TOP500 parallel machines.