We look at Donald Knuth's concept of "Literate Programming," investigating exactly what it is and how it is used to assist conventional programmers. We then ask what lessons we can learn from this idea and whether it is possible to apply them to Forth. If so, what is required and what advantages do we gain by doing so? Finally, some examples of Forth coding techniques are given, allowing us to compare the more traditional techniques with Knuth's system.
All the original work revolves around a particular literate programming tool called WEB. Knuth [Knu92] says:
The philosophy behind WEB is that an experienced system programmer, who wants to provide the best possible documentation of his or her software products, needs two things simultaneously: a language like TeX for formatting, and a language like C for programming. Neither type of language can provide the best documentation by itself; but when both are appropriately combined, we obtain a system that is much more useful than either language separately.

The structure of a software program may be thought of as a web that is made up of many interconnected pieces. To document such a program we want to explain each individual part of the web and how it relates to its neighbours. The typographic tools provided by TeX give us an opportunity to explain the local structure of each part by making that structure visible, and the programming tools provided by languages such as C or Fortran make it possible for us to specify the algorithms formally and unambiguously. By combining the two, we can develop a style of programming that maximizes our ability to perceive the structure of a complex piece of software, and at the same time the documented programs can be mechanically translated into a working software system that matches the documentation.

Knuth's original WEB tool provided a meta-language that allowed the author to combine TeX typesetting instructions and PASCAL program instructions. Two tools were provided. The first converted the WEB file into a TeX file, which could in turn be typeset via Knuth's TeX typesetting software. The PASCAL instructions were printed stylistically, allowing the reader to quickly identify keywords, variables, etc. The order of the WEB file is not affected.
The second tool extracts the PASCAL instructions from the WEB file, automatically re-arranging and expanding the chunks into the correct order necessary for the compiler. The resulting file is compiler-ready and is not structured for human consumption.
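To make the two-tool flow concrete, here is a minimal sketch in the notation of the language-independent noweb tool mentioned below (the chunk names and the Forth fragment are our own, purely illustrative):

    <<*>>=
    <<reserve the memory buffers>>
    <<define the access words>>
    @

    <<reserve the memory buffers>>=
    CREATE ncbs   ncb_size 5 * ALLOT
    @

The weaving tool typesets the chunks in the order they appear in the document, while the tangling tool starts from the root chunk (<<*>>) and expands each reference in turn until a compiler-ready source file remains.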
There are now a large number of WEB tools for specific languages; these include Ada, APL2, C and C++, Fortran, Lisp, Maple, Modula-2, Pascal and Scheme. These tools use a number of different typesetting engines, including TeX and LaTeX, HTML, roff and the Unix 'man' page format. There are a few language/typesetter independent tools, most notably Norman Ramsey's noweb [Ram94] and the CLiP tool [vAK93]. Indeed Eric van Ammers claims [Tho95]:
I have proposed for many years that programming has nothing to do with programming languages, i.e., a good programmer makes good programs in any language (given some time to learn the syntax) and a bad programmer will never make a good program, no matter the language he uses.

The idea of a literate program as a text book should be extended even further. I would like to see a literate program as an (in)formal argument of the correctness of the program.

Thus a literate program should be like a textbook on mathematics. A mathematical textbook explains a theory in terms of lemmas and theorems. But the proofs are never formal in the sense that they are obtained by symbol manipulation of a proof checker. Rather the proofs are by so called "informal rigour", i.e. by very precise and unambiguous sentences in a natural language.

The CADiZ tool [YSE94] takes this idea further in that it allows one to formally specify and derive an Ada program by semi-automatic means, the final document being a full description of the process. A tool is provided to extract the Ada program code into a compiler-ready form.
The reordering is especially useful for encapsulating tasks such as input validation, error checking, and printing output fit for humans --- all tasks that tend to obscure "real work" when left inline.
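As a sketch of how this looks (noweb-style notation again; the chunk names and the NETBIOS word are invented for illustration), the error check can be invoked where it logically belongs but defined, and discussed, in a later section devoted to error handling:

    <<receive a datagram>>=
    name NETBIOS              \ hypothetical word: issues the call,
                              \ leaving the NCB address on the stack
    <<check the NCB for errors>>
    @

    <<check the NCB for errors>>=
    C@ ?DUP IF ." NetBios error: " . THEN
    @

The main line of the program reads as a single uninterrupted thought, while the tangling tool splices the checking code back in.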
Indexing is typically done automatically or 'semi-automatically', the latter meaning that identifier definitions are marked by hand. Diligently done semi-automatic indexes seem to be best, because the author can mark only the identifiers he or she considers important, but automatic indexing can be almost as good and requires no work. Some tools allow a mix of the two strategies.
Literate programming tools work with a number of different typesetting engines, normally TeX, roff or HTML, but rather unusually WYSIWYG systems are not generally supported. This is because the data formats used in WYSIWYG products are proprietary, and they tend to be documented badly, if at all. They are also subject to change at the whim of the manufacturer. These conditions make it nearly impossible to write tools, especially tools that provide automatic indexing and cross-reference support.
People use TeX, roff and HTML because free implementations of these tools are widely available on a variety of platforms. TeX and HTML are well documented, and TeX and roff are stable. TeX is the most portable. It was also developed by Donald Knuth as a literate program [Knu86], so it is not surprising that the first literate programming tools used TeX as their typesetting engine.
To improve this situation Knuth introduced a decomposition facility into his meta-language. This allows one to break a definition up into its constituent parts without the need for defining new functions. Thus we are able to decompose the algorithm, discussing both it and its implementation in a more natural way, with the various parts being defined and discussed in their natural place. A tool is provided to collect the various parts together and reconstitute them into the correct order.
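For example (an illustrative sketch only; the chunk names are invented), a single colon definition can be presented in pieces without introducing any new Forth words:

    : CHAT ( -- )
       <<initialise the application>>
       <<run the event loop>>
       <<release the buffers>> ;

Each chunk is then written out and explained in its own section, and the collecting tool splices the pieces back into the body of CHAT before the compiler sees it.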
We are thus able to use the decomposition capability built into the meta-language to provide multiple levels of abstraction, with the book being broken into parts or chapters. We could start with a highly abstract description of the software, working into greater detail as we move through the book [1].
Knuth believes that creating a program should be viewed as creating a work of art. The result will automatically be more readable, as the author's intentions are laid out in much more detail. The reader will have seen the development of the software through its description. Such descriptions should be of interest to any programmer. How often have you wanted to have a quick look at the code just to see how he did that little bit?
As the system has been fully described, we can read the intention of the original author. We are then required to re-write the relevant section of the book to reflect the desired alteration. This technique is not of much use for run-of-the-mill debugging, but is much more useful for long term maintenance.
With better factoring and documentation it is inevitable that we will be able to understand the software better. Along with this understanding comes improved maintainability. If the software is easier to maintain, better documented and better structured in the first place, this has to lead to better quality software. This has a knock-on effect in that a better quality program will command a better price, and thus your salary will improve.
To obtain the benefits of Knuth's system we must rely heavily on his change of emphasis: moving away from program development to the development of a maintenance manual for later use. The program just happens to appear as a side effect of developing the manual. It goes without saying that the vast majority of Forth code is very badly documented, if documented at all. Adopting Knuth's change of emphasis when writing Forth software will bring the same benefits, primarily readability and maintainability.
It is often said that Forth, like C, is a "write only language." The adoption of literate programming has done much to make C a readable language, and as a consequence made it more maintainable. If it can transform a cryptic notation such as C, it must surely be able to make Forth more readable, and thus more maintainable.
The text on the right of the index line is an indication of the last time the block was modified (10-May-1990) and by whom (PjK). This is placed on the block automatically by the Forth++ editor and is useful for maintenance and project management reasons.
One of the drawbacks of using blocks is that the space is limited and one is tempted to shoehorn the code in and not document it. For this reason the "shadow" block was developed to allow documentation of the code presented in the associated block (as shown in the example). Normally the two blocks are printed side by side, thus making it obvious which code is referred to in the shadow (or "comment") block. The shadow block provides us with an extremely useful documentation technique, yet it is surprising how many seasoned Forth programmers do not bother with the shadow block, thus increasing the difficulty of maintenance.
The file does release us from the space constraints of the block. This means that we no longer need to cramp the code and we can now include comments with our code much more readily. This can be seen quite clearly with the definition of ncbs, where a single line of code has been split into three lines, each with an associated comment.
Source files do, however, have their drawbacks. The limited space of the block has been removed, thus one is now able to write a definition that would not fit on a single source block, losing the good factoring that is a hallmark of a well written Forth program. As with other file-based languages it is up to the programmer to include documentation; he may include anything from a large amount of comments to none at all. Just as I find the limited use of shadow blocks surprising, it is just as surprising to see how comment-free source files are. This problem is shared with other languages and was one of the seeds that gave rise to the development of literate programming.
The comments, or commentary as we should now call it, can be clearly distinguished from the program code. This also allows us to distinguish a reference to a variable or definition from a general concept in the commentary. For example, NCB (set in small capitals) refers to the concept, while NCB (set in the code font) refers to the Forth definition, and ncb refers to the stack item.
As with the previous systems, we are able to collect together all the code relating to a single topic into a single section. Unlike the previous systems, we can also distribute code (say initialisation code) throughout the document (via the decomposition system), whichever is more useful to us.
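A sketch of such distribution (noweb-style; invented chunk names): each section that introduces a data structure appends its own line to a shared initialisation chunk.

    <<initialise the application>>=
    ncbs ncb_size 5 * ERASE   \ added in the section on the NCBs
    @

and later, in the section on the text buffers,

    <<initialise the application>>=
    buff 60 5 * ERASE         \ added in the section on the buffers
    @

When the code is extracted the two fragments are concatenated, in document order, into the single chunk.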
As we are using files, we still have the drawback that a very long definition could be given. This is overcome by the philosophy of literate programming, which promotes the good factoring of problems. The decomposition mechanism was provided explicitly to help with the factoring of complex algorithms. It should be noted that we have not used the decomposition capabilities in this extract, as a well written Forth program should not need them.
Can we apply these ideas to Forth? Yes, although not by simply adopting the change of emphasis. In order to assist in this change of emphasis Knuth had to introduce a decomposition mechanism into his system, via a meta-language. As Forth is well disposed to decomposition, via factoring, it would appear that we do not need to adopt this side of the system.
There is another aspect of the meta-language that is just as important to the success of the system: the ability to move the definition and discussion of code to a more relevant part of the description. Adding such a facility to Forth would free us from the bottom-up approach imposed by the 'define before use' nature of the dictionary. We would be allowed to use the top-down or middle-out approaches to software development.
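As an illustration (a sketch only; the word and chunk names are invented), the document could open with the top-level definition, with its factors collected in a chunk that later sections fill in; the extraction tool restores the compilation order the dictionary demands:

    <<chat.fs>>=
    <<supporting definitions>>
    : CHAT ( -- )  SETUP  EVENT-LOOP  CLEANUP ;
    @

SETUP, EVENT-LOOP and CLEANUP can each be defined and discussed in their own later sections, appending to <<supporting definitions>>, while the reader meets the program top-down.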
With the reordering aspect of the meta-language implemented we are now in a position to make the change of emphasis proposed by Knuth. There is no reason to suppose the advantages of using Knuth's approach would not apply to Forth. Thus it should be possible to improve the readability, maintainability and quality of Forth programs. This would naturally lead on to an increased pleasure in programming and, as the quality of the software has improved, an increase in our salary.
It is interesting to note that one of the major claims for literate programming is that it provides imperative languages with a usable factoring mechanism --- something that is fundamental to the basic design of the Forth language.
[1] The comp.programming.literate Usenet newsgroup.
 1 ( NETCHAT - Reserve Memory                    PjK 10MAY90)
 2
 3 : STRING   WORD C@ 1+ ALLOT ;
 4 CREATE NET-CHAT   BL STRING NET-CHAT
 5
 6 CREATE ncbs   ncb_size 5 * ALLOT   ncbs ncb_size 5 * ERASE
 7 CREATE buff   60 5 * ALLOT   buff 60 5 * ERASE
 8
 9 : NCB  ( n -- a )   ncb_size * ncbs + ;
10 : BUFF ( n -- a )   60 * buff + ;
11
12 : name ( n -- a n NCB )   DUP BUFF SWAP 60 SWAP NCB ;
13
14
15
16
 1 This screen will reserve the memory buffers to be used.
 2
 3 It first defines the word STRING to compile a counted string.
 4 The word NET-CHAT is then defined as a counted string.
 5
 6 Memory is reserved for 5 NCBs (one outgoing, four incoming)
 7 Memory is also reserved for 5 buffers (of 60 chars each)
 8
 9 NCB returns the address of NCB number n.
10 BUFF returns the address of text buffer n.
11 A precondition for these operations is that 0 <= n < 5
12
13 The word name is defined to provide the parameter information
14 for a Datagram Receive on a given buffer number.
15
16
\ Page 3: NETCHAT - Reserve Memory
\ Last Modified by Peter J. Knaggs 10-May-90

\ In this page we reserve the memory buffers used by the NetChat application.
\ We also define words to provide easy access to these buffers.

\ We start by defining a word (STRING) that will allow us to compile a
\ counted string literal, which we go on to use to define the common name
\ used in the application.

: STRING ( -- )  WORD C@ 1+ ALLOT ;

\ The word NET-CHAT is then defined as a counted string.

CREATE NET-CHAT  BL STRING NET-CHAT

\ Memory is reserved for 5 NCBs (one outgoing, four incoming)
\ Memory is also reserved for 5 buffers (of 60 chars each)

CREATE ncbs              \ Define the base area
ncb_size 5 * ALLOT       \ Reserve space for five NCBs
ncbs ncb_size 5 * ERASE  \ Clear the memory like a good boy

CREATE buff              \ Define the base address
60 5 * ALLOT             \ Reserve space for five buffers
buff 60 5 * ERASE        \ Clear the memory

\ We now define two words NCB and BUFF to return the address of the given
\ ncb or buffer. They take one item from the stack which must be between
\ 0 and 4 (inclusive), giving access to five ncbs or buffers.

: NCB ( n -- a )
   ncb_size * ncbs +     \ NCBs are ncb_size characters long
;

: BUFF ( n -- a )
   60 * buff +           \ Buffers are 60 characters long
;

\ Finally (for this page) we define a new word name to provide the
\ parameter information for a Datagram Receive request on a given buffer.

: name ( n -- a n NCB )  DUP BUFF SWAP 60 SWAP NCB ;
C. Literate Program
The first part of the application is to reserve the memory buffers that are going to be used. This section not only reserves the memory but also defines words that allow easy access to this memory.
We are going to require five NCBs and buffers. We first reserve the space for the five NCBs (one outgoing, four incoming). The number of bytes to reserve is calculated by multiplying the number of bytes required for an NCB (ncb_size) by five. We then initialise this memory to zeros using the ERASE word.
CREATE ncbs              \ Define the base area
ncb_size 5 * ALLOT       \ Reserve space for five NCBs
ncbs ncb_size 5 * ERASE  \ Clear the memory like a good boy

Thus the word ncbs will return the start address of a block of memory large enough to hold five NCBs. We now define a word NCB that takes an NCB number (n) and returns the address (a) of the indicated NCB from our table.

: NCB ( n -- a )
   ncb_size * ncbs +     \ NCBs are ncb_size characters long
;

Now we do the same for the data buffers. This time the buffers are 60 bytes long; the memory is given the name buff, while the accessing word is called BUFF.

CREATE buff              \ Define the base address
60 5 * ALLOT             \ Reserve space for five buffers
buff 60 5 * ERASE        \ Clear the memory

: BUFF ( n -- a )
   60 * buff +           \ Buffers are 60 characters long
;

We now define the word name that takes an NCB number (n) and initialises the stack ready for a NetBios call to the Datagram Receive function, placing the corresponding buffer address (buff), the maximum size of the buffer (60) and the indicated NCB (NCB) on the stack.

: name ( n -- buff 60 NCB )
   DUP BUFF SWAP NCB 60 SWAP
;