Retro 12: Initial Support for Prefixes

A big part of Retro 11 is prefixes. As with Parable, these are single-character prefixes that can be tacked onto a token to tell the compiler how to deal with it. Unlike Parable, Retro has traditionally allowed the user to implement custom prefixes as desired.

This is implemented through a custom error handler. If a token is not in the dictionary and is not a number, the first character is split off, appended to a string of two underscores, and the result is searched for in the dictionary. If it exists, we have a prefix handler, which then receives the rest of the token as a string on the stack to process. If not, an error is reported.
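The flow of that error handler reads naturally as a short Python sketch. Everything here (the dictionary argument, the handler functions) is a hypothetical stand-in for illustration, not part of the actual Retro kernel:

```python
# A rough Python sketch of the error-handler flow described above.
# 'dictionary' and the handler functions are hypothetical stand-ins,
# not the actual kernel structures.
def process_token(token, dictionary):
    if token in dictionary:
        return dictionary[token](token)
    handler = '__' + token[0]                  # e.g. '@foo' looks for '__@'
    if handler in dictionary:
        return dictionary[handler](token[1:])  # handler gets the rest of the token
    raise NameError('word not found: ' + token)
```
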

create prefix
95 , 95 , 95 , 0 ,

: prefix:set
prefix 2 + ! ;

: prefix:seek
prefix find 0 <> [ tib 1 + swap dup d->xt @ swap d->class @ withClass ] [ drop default: notFound ] if ;

: <notFound>
tib @ prefix:set prefix:seek ;

' <notFound> is notFound

The above is not yet complete. In Retro 11, there was a special class of prefixes which could take over the parser (these were used to build strings, do vocabulary searches, etc). For now though this will make it possible to start building prefixes into the language again, while not requiring anything other than a minimal notFound handler in the kernel.

If you want to test this, try these snippets, which implement a couple of prefixes used heavily in the existing Retro sources:

: __@
find 0 <> [ d->xt @ .data ` @ ] [ drop default: notFound ] if ; immediate

: __!
find 0 <> [ d->xt @ .data ` ! ] [ drop default: notFound ] if ; immediate

: __'
@ .data ; immediate

Remapping Character Input in Retro 12

In Retro 11, input can be remapped at the character level. This is used by the listener to provide support for treating enter (cr and/or lf) and tabs as spaces, as well as supporting backspace under OS X. The new kernel, being more minimal, has only a fixed set of remappings (to support cr and lf at the end of source lines). Since it's rather useful to be able to support additional remappings, this is one of the first things I wanted to add into Retro 12.

Here's a simple bit of code to enable Retro 11 compatible input remapping:

-1 variable: remapping       ( Allow extended whitespace? )
-1 variable: tabAsWhitespace ( Treat TAB as a space?     )

: remapKeys ( c-c ) ;
: remap:whitespace ( c-c )
dup 127 = [ drop 8 ] ifTrue
dup 13 = [ drop 10 ] ifTrue
remapping @ 0; drop
dup 10 = [ drop 32 ] ifTrue
tabAsWhitespace @ 0; drop dup 9 = [ drop 32 ] ifTrue ;

: getc:unfiltered 1 1 out wait 1 in ;
[ repeat getc:unfiltered remapKeys dup 0 <> [ remap:whitespace 0 ] [ -1 ] if 0; drop drop again ] is getc

This provides the remapping and tabAsWhitespace variables for controlling the overall remapping, and a remapKeys function ready to be replaced with your custom key remappings (which should look similar to the remap:whitespace implementation). It also provides getc:unfiltered for those times when you want non-remapped key input.
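For reference, the remap:whitespace logic translates into a short Python sketch (an illustration only; character codes: 127=DEL, 13=CR, 10=LF, 9=TAB, 8=BS, 32=space):

```python
# Python translation of the remap:whitespace logic above, for illustration.
def remap_whitespace(c, remapping=True, tab_as_whitespace=True):
    if c == 127: c = 8     # DEL becomes backspace
    if c == 13:  c = 10    # CR becomes LF
    if not remapping:      # extended whitespace handling disabled?
        return c
    if c == 10: c = 32     # LF becomes a space
    if tab_as_whitespace and c == 9:
        c = 32             # TAB becomes a space
    return c
```
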

Looking at this exposes a potential bug: I think that remapping should be checked by getc and remapKeys should only be called if it is true. I'll take a look back at this later and change it to ensure the appropriate behavior if it does prove to be a bug.

Retro 12: Fixing Bugs

I'm presently in upstate Pennsylvania visiting my in-laws. (We'll be up here, traveling between the Wellsboro, Coudersport, and Elmira, NY areas for the next few days.) Since I have time off work, I can devote a bit more time to working on my personal projects.

Once we finished the drive and got settled in, I set out to find and fix several big issues with the new Retro kernel. While I've been able to build a mostly functional image for a while, it ran into problems with the second stage of higher level extensions. This would result in some parts working, but others crashing the VM.

After some careful searching, comparisons with working code from the older metacompiled branch, and a lot of testing, I'm pleased to say that the most significant bugs are fixed now. Now testing and refinement should be able to proceed more quickly. (It was very difficult to debug the image when several forms of output [including numbers] caused the VM to segfault).

Tomorrow I'll begin spending a bit of time on moving more to the higher level portion and start working on the pieces needed to bring this to feature parity with the currently stable Retro branch.

A Simple Preprocessor for Parable

One of the biggest shortcomings in the minimalistic parser I use in Parable is that functions are limited to a single line in length. This can be problematic since fitting everything in one line makes readability suffer. The original prototype didn't suffer from this, since it required editing a single definition at a time. But how to handle this now?

At present I'm using a tiny preprocessor. This is just a few lines of Python:

import os, sys

def refine(code):
    s = ''
    r = []
    for l in code:
        if l != '\n':
            s = s + ' ' + l.strip()
        else:
            r.append(s)
            s = ''
    r.append(s)
    return r

for source in sys.argv:
    if not os.path.exists(source):
        sys.exit('ERROR: source file "%s" was not found!' % source)
    if source != sys.argv[0]:
        for line in refine(open(source).readlines()):
            print line

It allows for multi-line definitions, with a blank line separating each function definition. It can be used at a command line:

python source1 ... sourceN > program.p

And then I can pass the program.p that's created to Parable. It's pretty handy, and lets me write code in a more readable fashion.

A Quick Update

I haven't done much coding the last two days. Mostly I spent a little time revisiting Parable. I updated the Parable syntax overview, fixed a bug in the C implementation, and cleaned up a couple of routines in the Python implementation.

I expect to have a few larger blocks of time starting Thursday. I have a long (five day) weekend off work, so will be traveling a bit and doing some writing and programming. So maybe I'll finally be able to get some things done.

Palindrome Detection in Retro and Parable

Since this is a palindromic week, I decided to implement routines to detect palindromes in Retro and in Parable.

To decide if a string is a palindrome, we need to do a couple of things. First, remove all non alphanumeric characters. Then we can reverse it, and compare it to the original. If they match, it's a palindrome.
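As a quick reference, those steps fit in a couple of lines of Python:

```python
# The steps above in Python: keep only alphanumeric characters (lowercased),
# then compare the result against its reversal.
def is_palindrome(s):
    t = [c for c in s.lower() if c.isalnum()]
    return t == t[::-1]
```
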

So first up, an implementation in Retro.

create buf  1024 allot
: valid? dup 'a 'z within over '0 '9 within or ;
: (filter) repeat @+ 0; valid? [ ^buffer'add ] [ drop ] if again ;
: filter ^strings'toLower buf ^buffer'set (filter) nip ;
: palindrome? filter ^buffer'start dup tempString ^strings'reverse compare ;

Breaking this down briefly:

The first line creates a buffer that will contain the filtered string. The second compares a character to the range of valid characters (a-z and 0-9) and returns a flag. The third is a helper loop for the filtering function. It'll read each character from the source string, call valid? and either append it to the new string in buf, or discard it. The top level filter function converts the input string to lowercase, sets up the buffer, and then calls (filter) to actually build the string. And finally, the top level palindrome? function calls filter, then makes a copy of the filtered string, reverses one of them, and compares the two.

And then, in Parable:

'buf' variable
[ &buf slice-set [ dup alphanumeric? [ slice-store ] [ drop ] if ] for-each-character ] 'filter' define
[ to-lowercase filter &buf :s dup reverse = ] 'palindrome?' define

This follows the same basic approach, but is actually shorter since the Parable standard library provides some useful stuff out of the box. We create a buffer (buf), and a filter function which uses the handy for-each-character combinator to handle the loop over the source string. So the top level just needs to filter the input string, get a pointer to the filtered one, duplicate it, and compare the two. (Parable's awareness of types saves a bit of hassle here.)

And that's it: easy palindrome detection for Retro and Parable.

Adding I/O to Parable

By design, the core Parable language does not provide any means of communicating with the host operating system. This means that interacting with the user needs to be provided by some other layer that sits above the core. There are a few examples of this in the repository. (These include pre, a script runtime; legend, a full screen console interface; and purple, a browser based UI). All of these provide a basic means of getting input into Parable, and displaying the stack after execution completes.

But none of them suffice in allowing real interaction with the underlying host. Without this, Parable is not very useful. Since I don't want to mandate a specific I/O model, a solution was needed. And that solution is to allow custom byte codes to be defined for specific purposes.

The initial example I wrote for this is called ika. It's similar to pre in that it is intended to run scripts at the command line, but it also adds byte codes for output and file I/O operations. You can look at the source to see how this works, but I'll present a smaller example here. We'll look at a simple application that defines and uses an output function.

So here's the code:

import sys, os
from parable import *

def display_value():
    global stack, types
    i = len(stack) - 1
    if types[i] == TYPE_NUMBER:
        sys.stdout.write(unicode(stack[i]))
    elif types[i] == TYPE_CHARACTER:
        sys.stdout.write(unichr(stack[i]))
    elif types[i] == TYPE_STRING:
        sys.stdout.write(slice_to_string(stack[i]))
    elif types[i] == TYPE_FUNCTION:
        sys.stdout.write('&' + unicode(stack[i]))
    elif types[i] == TYPE_FLAG:
        if stack[i] == -1:
            sys.stdout.write("true")
        elif stack[i] == 0:
            sys.stdout.write("false")
        else:
            sys.stdout.write("malformed flag")

def opcodes(slice, offset, opcode):
    if opcode == 1000:
        display_value()
    return offset

if __name__ == '__main__':
    interpret(compile("[ `1000 ] 'display' define", request_slice()))
    interpret(compile("#45 display", request_slice()), opcodes)

So adding new byte codes is done by defining a function to handle them, and passing this to the interpret() call. In the example above, the byte code handler is named opcodes(). This function should be set up like this:

def opcodes(slice, offset, opcode):
  return offset

interpret() will pass the slice, offset, and byte code to the provided routine. The routine needs to return the new offset (if it changes), or the same one if not. The new byte codes can be anything not used by Parable itself.

As with the core language, map the custom byte codes to Parable functions using the backtick prefix. So for our example, we define a display function using `1000.

With this, it's possible to add custom byte codes for I/O and other things. As mentioned earlier, ika provides an example of file I/O (which is incidentally compatible with the model used in Ngaro and Retro).

Of my two VM models, Ngaro is better known and documented than Parable, so I'll offer a brief contrast of the two approaches.

Ngaro defines a set of simulated I/O ports for different devices, and allows the code running to communicate with the devices by reading, writing, and waiting for I/O events. This has a nice benefit of standardizing the I/O model, but comes at a cost of forcing implementations to pretend that certain things may exist, even when they don't make sense. (Not all devices are keyboard driven, and Ngaro heavily assumes the existence of a traditional console environment.)

Parable's approach is to leave I/O to the user. It defines a core language, but no I/O. This allows the user to define the I/O model that makes sense for a specific application. So it's easier to embed Parable into larger applications, and it's easier to build interfaces that don't assume the user is on a traditional TTY console (or semblance thereof).

At this point I slightly prefer the approach I used in building Parable. The Ngaro model was easy to setup, but has proven a bit restrictive in terms of how Retro was built (and makes moving towards more modern platforms a bit more difficult). It'd certainly be possible to build Ngaro style I/O into a Parable, but I have no plans to do so at this time.

(Though I am somewhat tempted to implement Ngaro as an extended byte code set for Parable someday...)

Parable Updates

I have done a little work on Parable this month. Briefly: migrated the repo to git, fixed a bug in letter?, and added support for Unicode strings.

The move to git should help make the sources a bit more accessible. I still like bzr, but git seems much more widespread, and there's more hosting options for git repositories. So this should be a net benefit. I'll be going with git for my future projects as well.

While working on some string processing code recently I discovered that the letter? function was totally broken. This was a major mistake, and it has been rectified. (This highlights the need for a more comprehensive test suite. This will be coming soon.)

The other change so far is that I am beginning work to support Unicode strings. This is taking place in the Python implementation at present, but I'll try to improve things in the other implementations in the future.

Get the latest sources from GitHub, or download a snapshot.

Retro, via IRC

Last night Tangentstorm mentioned that he'd like an IRC bot for Retro. I thought this sounded like a cool idea, so I whipped up a quick and dirty bot in Python.

This uses the Python implementation of Ngaro. To reduce chances of abuse some functionality is disabled. (Specifically, this VM has no file system access.) Apart from this though, it gives a full Retro system, with one instance per query.

To use it, edit in the bot name and channel, launch it, and then, in the specified IRC channel, say 'retrobot: ...' (replacing the ... with the code you want Retro to run). The output will be dumped into the channel.

There are several improvements that should be made before this is generally useful. First off, configuration is currently hard coded. This should definitely be done better. Secondly, a way to quit cleanly should be added. And the child process for the VM should be killed properly if run times get too long. 

The bot would benefit from a slightly customized image. Ideally it wouldn't repeat the user input, and it would also not display the ok prompt and image startup banner. These are pretty easy to implement.

Slightly longer term, it'd be useful to store prior requests and outputs, and perhaps allow dumping longer outputs into a pastebin service. 

I have uploaded the initial code to GitHub.  Check it out.


Pangram Detection in Retro and Parable


pangram (noun): A sentence that contains every letter of the alphabet, such as "The quick brown fox jumps over the lazy dog" in English.

- wiktionary

I enjoy writing little routines to solve small puzzles or problems. Detection of a pangram is one such thing. The easy approach is to setup a buffer that will contain the letters from the string being checked, iterate over the string, adding the letters to this buffer, and then comparing the string to one known to contain all the letters in the target alphabet.
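The buffer-based approach described above can be sketched in Python for comparison:

```python
# The same buffer-based approach: store each letter at its alphabet offset
# in a 26-cell buffer, then compare the buffer against the full alphabet.
def is_pangram(s):
    buf = [' '] * 26
    for c in s.lower():
        i = ord(c) - ord('a')
        if 0 <= i <= 25:       # ignore anything outside a-z
            buf[i] = c
    return ''.join(buf) == 'abcdefghijklmnopqrstuvwxyz'
```
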

In Retro I implemented this:

: isPangram? ( $-f )
heap [ 27 allot ] preserve
[ @ 'a - dup 0 25 within [ [ 'a + ] [ here + ] bi ! ] &drop if ]
^types'STRING each@ here "abcdefghijklmnopqrstuvwxyz" compare ;

And breaking it down:


First, the source string is converted to lower case. Then we can allocate a buffer:

heap [ 27 allot ] preserve

We mark out (and zero out) 27 characters (26 for the alphabet and one for the termination character). To make this temporary, I used the preserve combinator to reset heap once the allocation is done.

[ @ 'a - dup 0 25 within [ [ 'a + ] [ here + ] bi ! ] &drop if ]
^types'STRING each@ 

This sequence is where all the work is done. For each address in the string it will fetch the value (@), convert it to a number representing its position in the alphabet ('a -), and check to ensure that it is an alphabetic value (dup 0 25 within). Then there's a conditional block that's called if the character matches. This has three parts. First, it converts the number back to a character ([ 'a + ]), then it maps the number to an offset in the temporary buffer ([ here + ]). Then ! is used to store it into the buffer. If the character doesn't map to a letter, it is discarded by the &drop.

And then the final bit is to simply compare the temporary buffer contents to a known string (here "abcdefghijklmnopqrstuvwxyz" compare). If they match, then all the characters in the target alphabet are in both.

I've also done a quick implementation of this approach in Parable. This is longer and a bit less easy to follow at present.

'scratch' variable
'source' variable
[ &source @ swap fetch :c $a - ] 'pangram:obtain' define
[ dup #0 #25 between? ] 'pangram:match?' define
[ [ dup $a :n + swap &scratch swap store ] [ drop ] if ] 'pangram:process' define
[ request &scratch copy length swap &source ! ] 'pangram:begin' define
[ 'abcdefghijklmnopqrstuvwxyz' &scratch :s = ] 'pangram:check' define
[ pangram:begin [ [ pangram:obtain pangram:match? pangram:process ] sip #1 - dup #-1 <> ] while-true drop pangram:check ] 'pangram?' define

In this quick implementation, I used a couple of variables. scratch is used as the temporary buffer; source holds the original string. pangram:begin clears out scratch, stores the original string pointer into source, and gets the length of the source string.

The loop obtains a character, matches it against the range we are interested in, and processes it. These are factored into separate routines. The final piece is a check to compare the newly built string against the alphabet reference string.

But this is really messy. It'd be a bit better with a combinator for running a quote against the characters in the string. So something like this:

"Given a string and a quote, run the quote against each character in the string"
[ swap reverse length [ dup-pair #1 - fetch swap [ swap ] dip [ [ :c over invoke ] dip ] dip #1 - dup #0 > ] while-true drop-pair drop ] 'for-each-character' define

This would be used something like:

'hello, world' [ :s report-error ] for-each-character

So with this, the pangram? function could be written as:

'scratch' variable
[ 'scratch' variable [ dup letter? [ dup $a - :n &scratch swap store ] [ drop ] if ] for-each-character 'abcdefghijklmnopqrstuvwxyz' &scratch :s = ] 'pangram?' define

So down to one variable and a single function. Still not as readable as the Retro code, but much better than the quick and dirty attempt. A slight refactoring should make things much cleaner:

'scratch' variable
[ dup $a - :n &scratch swap store ] 'record' define
[ 'scratch' variable [ dup letter? [ record ] [ drop ] if ] for-each-character 'abcdefghijklmnopqrstuvwxyz' &scratch :s = ] 'pangram?' define

One note here: notice that I redefine scratch in the pangram? function. Doing this accomplishes the same thing as the request &scratch copy in the original, but is shorter and a bit more straightforward. The ability to do things like this is one area in which Retro and Parable greatly differ.

I'll stop at this point. It satisfies the basic requirements, and yields a new combinator that might be useful in other areas.

An Overview of Paipera

One of the Ent services that received more use than I expected was a query tool to search the text of the Bible. This was one of the messiest pieces of code in Ent, involving the top level PHP wrapper with grep, awk, sed, and Perl scripts below that. But it worked reasonably well, so I never bothered to improve it.

After I shut down Ent, I wanted to revisit this, hopefully building something a bit more robust. The first stage was to break down the initial text files containing the translations I was interested in into a more suitable format. I have chosen to store the broken-down texts in a simple sqlite3 database.

In the public release of Paipera there are three translations: the Authorized Version (AV) / King James Version, the World English Bible (WEB), and Young's Literal Translation (YLT). I have texts for some other versions prepared, but cannot release them due to copyright restrictions.

The database has a table for each translation. These are currently set up like:

CREATE TABLE av (id integer primary key, book blob, chapter blob, verse blob, text blob);

The full database currently sits at around 15 MB. This is larger than I had hoped, so I'll be splitting each translation into a separate file shortly.

The next part of the process involves building new tools to search and return results in a reasonable format. Most of the original code was in Perl. Since I really don't like using Perl, I'm writing a fresh implementation in Python.
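As a rough illustration of what a search against that schema might look like (the table and column names come from the CREATE TABLE above; the search function itself is hypothetical, not taken from the actual tool):

```python
import sqlite3

# Hypothetical sketch: find verses containing a phrase in one translation.
# The table name is interpolated directly; this is only reasonable because
# the translation names are a small fixed set ('av', 'web', 'ylt').
def search(conn, translation, phrase):
    cur = conn.execute(
        'SELECT book, chapter, verse, text FROM %s WHERE text LIKE ?'
        % translation, ('%' + phrase + '%',))
    return cur.fetchall()
```

Note that the phrase itself goes through a bound parameter, so user input never touches the SQL string.
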

I have a few ideas for small applications using this, but nothing pressing. So it's mostly on a very slow development cycle, giving me a change of pace from my work on Retro and learning Objective C.

Sources for this will be released on GitHub as I work on them.  For the time being this is just the raw text sources, the SQL, and the sqlite3 database. I'll have the first of the modules for performing searches and returning results added in over the next two days.


Building Retro 12: Reducing the Function Classes

Function classes are an important part of Retro's inner workings. The token processing loop looks up each symbol in the dictionary, and passes the contents of the function pointer field (d->xt) to the class handler (d->class). The class handler decides what to do with the function pointer.

In prior versions of Retro there were numerous function classes in the core image. We had .word, .macro, .data, .parse, .compiler, and .primitive. For most purposes though only three are needed. These are:


The first, .function, is used for all normal functions. If compiler is true, the handler lays down a call instruction referring to the function pointer and advances here by one cell. If it is false, it calls the function directly. (In Retro 11, this was called .word.) This is basically:

: .function ( a- ) compiler @ [ , ] [ do ] if ;


The second, .immediate, is used primarily for compiler macros. All functions with this class are called directly. (In Retro 11, this was called .macro.) This is basically just:

: .immediate ( a- ) do ;


The third, .data, is used for data structures. If compiler is true, the handler lays down a lit instruction followed by the value of the function pointer. If false, it pushes the function pointer to the stack.

: .data ( v- || v-v ) compiler @ [ 1 , , ] ifTrue ;

The base image will only provide these class handlers. Additional ones are easy enough to add in later, as needed, so there's no reason to have more in the kernel.
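Putting the pieces together, the dispatch can be sketched in Python. These are hypothetical stand-ins for the kernel structures, meant only to show the shape of the mechanism:

```python
# Hypothetical Python stand-ins for the kernel structures, showing how a
# class handler decides between compiling and executing a function pointer.
compiled = []        # stand-in for the image being compiled
compiler = [False]   # the compiler mode flag

def dot_function(xt):
    # .function: compile a call when compiling, execute otherwise
    if compiler[0]:
        compiled.append(xt)
    else:
        xt()

def dot_immediate(xt):
    # .immediate: always execute, even while compiling
    xt()

def interpret_token(entry):
    # the token loop never inspects the xt itself; the class handler decides
    entry['class'](entry['xt'])
```
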

I have already pushed a commit to do this to the repository.

Building Retro 12: Dictionary Structure

The existing Retro dictionary headers are set up as a linked list. Each entry contains a pointer to the previous one, a pointer to the class handler, a pointer to the function definition, a pointer to a documentation string, and a zero-terminated string containing the symbolic name. This isn't complex, but since the introduction of the metacompiler, adding new fields (or removing or reordering them) has been troublesome.

For the next major version of Retro this will be simplified. It'll still be a linked list, but will contain a simplified structure: link to previous, pointer to class handler, pointer to definition, pointer to symbolic name, and a pointer to an extended attributes structure. The two big changes are how names are recorded and the extended attributes.

Currently a variable length string is the last field in a header. By making the name a pointer, we can reposition the headers (and discard the names completely if desired) into higher memory, making it a bit easier to build an image for memory constrained targets. It also makes the dictionary headers a consistent size, which may have other benefits down the road.

The second change is the extended attributes field. This will allow for building in additional things (like documentation strings, compiled code length, source references, etc) that are not essential to the core language. I'm not sure what will be defined in this initially, but this will at least make it possible to build a more flexible dictionary without having to worry about making major modifications to the core kernel.
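The revised header can be pictured as a fixed five-field record. This is an illustrative Python sketch, not the actual cell-level layout:

```python
# A sketch of the fixed five-field header described above (illustrative
# Python, not the actual cell-level layout in the image).
class Header:
    def __init__(self, link, class_handler, xt, name, attributes=None):
        self.link = link              # previous header in the linked list
        self.class_handler = class_handler
        self.xt = xt                  # pointer to the definition
        self.name = name              # name now lives outside the header
        self.attributes = attributes  # extended attributes, or None

def find(head, name):
    # walk the linked list from the newest entry back
    h = head
    while h is not None:
        if h.name == name:
            return h
        h = h.link
    return None
```
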

This change is already in the repository.

Building Retro 12: General Plans

As I have been working on the new assembler and updated kernel for Retro, I feel an increasing desire to begin work on rebuilding Retro. The current kernel contains a lot that should have been added in via hooks, and leaves out some pieces that would make debugging and troubleshooting much easier.

And so I'll begin work now. I'm starting with the initial 11.6 kernel from the ongoing rewrite, but will be significantly revising and simplifying it. There'll be a cleaner (and more flexible) dictionary implementation, simplified compiler (with a focus on quotations), basic string and numeric processing, and the minimal essential I/O functionality.

At a minimum I'm removing the non-essential classes, compiler functions, key maps, whitespace remapping, I/O, and UI elements. The kernel will be much smaller and more straightforward, and the more complex pieces implemented later, in high level Retro as part of a set of standard extensions.

I'll be posting more details on what I'm thinking of doing and the directions I plan to take in the next few days. I have begun work, and have uploaded the sources as they stand to the retro-12 GitHub repository.

Review: Knock App

For the last few years I have been using an iPad as my primary computer. When I decided to get into iOS development a bit I needed a Mac, so I purchased a MacBook Air. (I also have an aging 2007-era Mac Mini, but this is stuck on Snow Leopard, so I needed something newer.) This has been a great tool, but there are still some areas that I find a bit frustrating.

One of these is logging in. I have a fairly lengthy password, so typing it in each time I go to unlock and begin use is a bit of a bother. This is especially critical to me as I routinely login for short sessions of writing or coding and then lock it. But I found a possible solution in Knock. This is a pair of Mac and iOS applications that allow me to use a BLE connection to unlock the MacBook by tapping twice on my iPhone.

So far it's been working pretty well. I open the MacBook, wait a couple of seconds, then quickly tap twice on the phone and my account is logged in and ready to use. There are a few occasional issues. If the iOS app gets closed in the background, I have to relaunch it (not a big deal), and sometimes it takes a minute or so for the MacBook and iPhone to see each other. But it's still nicer than entering the password frequently as I start and stop.

If, like me, you find yourself frequently locking and unlocking your Mac, this might be worth the $3.99 cost. I personally find it incredibly convenient; the occasional longer login is worth not having to enter my password manually multiple times each day.

Check it out on the Knock website, or on iTunes.

Recent Developments in Retro

Over the last couple of days I have been rewriting the Retro kernel using the new assembler. I'm about 2/3 of the way into this, and am hoping to have a functional image working within the next week.

Once this is done I will be able to retire the current metacompiler and update the documentation on the kernel sources. Getting rid of the current metacompiler is a priority: it's been a frequent source of bugs and support headaches, so I'm really looking forward to returning to something simpler, if a bit less theoretically elegant.

I would like to get Retro 11.6 out by early May. It won't be drastically changed from 11.5, but should provide a better base for building on in the future.

Working with Ngaro Assembly Language

Ngaro is a pretty simple virtual machine. It's intended to be fully understandable by a single individual, and easy enough to implement from scratch within a few hours. There are 31 instructions (opcodes 0 through 30). Most of these are a single cell in length, but a few (for conditionals and pushing literals) are two cells (the byte code and a parameter). The full list of instructions, with the assembler names, follows:

opcode  name        assembler  two cell  stack
======  ==========  =========  ========  =====
0       NOP         nop,                 -
1       LIT         lit,       y         -n
2       DUP         dup,                 n-nn
3       DROP        drop,                n-
4       SWAP        swap,                xy-yx
5       PUSH        push,                n-
6       POP         pop,                 -n
7       LOOP        loop,      y         n-n
8       JUMP        jump,      y         -
9       RETURN      return,              -
10      LT_JUMP     <jump,     y         xy-
11      GT_JUMP     >jump,     y         xy-
12      NE_JUMP     !jump,     y         xy-
13      EQ_JUMP     =jump,     y         xy-
14      FETCH       @,                   a-n
15      STORE       !,                   na-
16      ADD         +,                   xy-z
17      SUBTRACT    -,                   xy-z
18      MULTIPLY    *,                   xy-z
19      DIVMOD      /mod,                xy-rq
20      AND         and,                 xy-z
21      OR          or,                  xy-z
22      XOR         xor,                 xy-z
23      SHL         <<,                  xy-z
24      SHR         >>,                  xy-z
25      ZERO_EXIT   0;,                  n-?
26      INC         1+,                  x-y
27      DEC         1-,                  x-y
28      IN          in,                  p-n
29      OUT         out,                 np-
30      WAIT        wait,                -

For those unfamiliar with Ngaro, note that there is no CALL instruction. In Ngaro, calls are implicit. Any opcode greater than 30 is assumed to be a subroutine call to the address that matches the opcode.
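A minimal sketch of that dispatch rule (illustrative Python, not taken from any actual Ngaro implementation):

```python
# Sketch of Ngaro's implicit-call rule: opcodes 0..30 are instructions,
# anything above is a call to the address that matches the opcode.
def step(memory, ip, address_stack):
    opcode = memory[ip]
    if opcode > 30:
        address_stack.append(ip)   # save the return address
        return opcode              # continue execution at the call target
    # ... decode and execute one of the 31 real instructions here ...
    return ip + 1
```
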

The assembler is intended mostly for use in building a new image file, so it's not really useful for application developers. There's actually no need to use assembly with a Retro application, unless you are building custom compilation functions (in which case, the raw opcodes are generally used). The functions that map directly to instructions are assigned to the .primitive class which will inline the instruction rather than a subroutine call.

If you want to write an assembly application, the basic process is to create a skeleton like this:

include ngaro-asm.rx
beginApplication

label: main

main setEntryPoint
"imageName" saveImageAs

You can then implement your variables, data structures, and supporting functions between the beginApplication and label: main. The primary code that should be run by the application should be added between label: main and main setEntryPoint.

Ok, so with the skeleton created, there are a few general things that come into play. First off, labels. These are symbolic names that refer to addresses within the image. They aren't saved as part of the image, so don't worry about the length. You create one with label: followed by the symbol name. All named elements are created with label:.

Variables need a name and space. We can allocate space by using , (a comma) to inline some value. Normally I use a zero, unless the variable should be initialized to something else. So, to create a single cell variable named foo:

label: foo
0 ,

The value to be inlined into the newly allocated space should come before the comma. If you need more space (say a simple string), use more commas:

label: foo
$a , $b , $c , 0 ,

For strings, there is also a $, function which saves a bunch of trouble:

label: foo
"abc" $,

So that's labels, and simple allocations of space for variables and other data structures. If you want to reference an address (or numeric value), you need to do one of the following:

lit, foo ,
foo #

These compile to identical code. For readability purposes, I prefer the # form.

Next up, calling a function. First, a simple function for testing:

label: addTenToValue
10 #
+,
return,

And then we have two options.

addTenToValue ,
call: addTenToValue

Either option will work, but I prefer to use call: as it better indicates at a glance that a function call is being created.

Moving on to unconditional jumps, there are again two forms: 

jump, addTenToValue ,
jump: addTenToValue

And once again I prefer to use the jump: form, though both compile to the same machine code. For the conditional jumps (=jump, >jump, <jump, and !jump,) and the loop, instruction, only the first form will work.

For conditionals, the assembler provides if/then helpers, which greatly simplify things. These look like:

10 #
20 #
=if -1 # then

Without these, you would need to do something like:

10 #
20 #
!jump, target @ 0 , -1 # here swap !

This drops back into the underlying Retro system more heavily and is far messier to read. Just use the conditional helpers; they're much less troublesome!
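What the manual sequence does is the classic forward-reference patch: compile a conditional jump with a placeholder target, then backpatch the target once the end of the conditional body is known. Here's a rough Python sketch of the idea; the opcode number and helper names are illustrative assumptions, not Retro's actual internals.

```python
JUMP_IF_NOT_EQUAL = 7   # hypothetical opcode
image = []

def here():
    return len(image)

def comma(v):
    image.append(v)

def eq_if():
    # compile the conditional jump with a placeholder target,
    # remembering where the target cell lives
    comma(JUMP_IF_NOT_EQUAL)
    patch_at = here()
    comma(0)            # placeholder target
    return patch_at

def then(patch_at):
    # backpatch: the jump target is the current location
    image[patch_at] = here()

# equivalent of:  =if  -1 #  then
p = eq_if()
comma(-1)               # body of the conditional (stand-in for -1 #)
then(p)

print(image)            # [7, 3, -1]; cell 1 was patched to point past the body
```

The if/then helpers hide exactly this bookkeeping, which is why the raw form needs here, swap, and ! to fix up the target by hand.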

Apart from these, the instructions map directly to their higher level equivalents, so it's pretty much just like Forth at that point. It's easy enough to see this with a small example:

: hello-string "hello, world!" ;
: say-hello hello-string puts cr ;

And in assembly:

label: hello-string
"hello, world!" $,

label: say-hello
hello-string #
call: puts
call: cr

The assembly is longer and more verbose, but maps directly to the Forth.

On the whole the assembler isn't complex or difficult to use. But as I mentioned earlier, there's not much point in using it for most people. Unless you are trying to make a new image, or hacking the Retro kernel, none of this will be needed. For those who do need it, I hope this helps make things a bit clearer.

Nock Hightower

Last year I backed the Nock KickStarter campaign. I've been listening to the Pen Addict podcast and liked the looks of the cases, so decided to contribute (and get a case that I thought would meet my needs). I received my Nock Hightower a while back, and have been using it for a couple of months now.

Nock Hightower, Peacock Exterior

When I ordered this, I chose the limited edition peacock exterior / midnight blue interior. I'm pleased with the color: it's not too dark, and contrasts nicely. It's also easy to spot on my desk at the end of the day, saving a few valuable moments when I'm trying to leave work.

The Hightower has two sections. The left has three slots under a cover flap. The right has a pocket big enough to hold a couple of Field Notes or other small notebooks. The slots are big enough to hold any of the pens I currently own. My wife also has one; she uses thinner pens and can fit a total of six.

It's made of ballistic nylon. It seems very durable, and has a nice overall feel. The stitching is very solid, with no signs of fraying despite being carried and used daily. I really have no complaints about the construction.

Nock Hightower, Midnight Blue interior.

My only problem is that since I ordered this, my pen collection has grown too large to fit in it. I'm awaiting the opening of their store so I can order a Brasstown for storing the pens I don't use daily.

Overall this is a great pen case, and I highly recommend it for anyone who carries a few pens (or other writing instruments) and notebooks with them.

Lamy Joy 1.9mm

I ordered this pen from Amazon on March 28, and it arrived today. This is a short review, based on my initial impressions after a couple of hours of use.

When I ordered this, I fully expected it to feel similar to my Lamy Safari in terms of construction. And it does. It's glossy, but still feels good. I like the contrast between the red clip and accent on the end and the black body.

This is easily the longest pen I've ever owned. When capped it is 7" long. Uncapped, it is 6-1/2". But despite this, it's still comfortable to hold.

Writing sample with Lamy Blue ink cartridge. This is a bit light since I didn't wait for the feed to fully dry out after flushing it.

The grip seems identical to my Lamy Safari. I have no problems with it. The cap is identical in design to the Safari. I'll have to watch and see if it develops the same loosening over time.

The nib is broad. At 1.9mm, it's far wider than anything I've used up to now. And, for the most part, I like it. At present it takes a moment to start, but writes well until I stop. There might be a bit of a baby's bottom, but I'll need to obtain a loupe to confirm this. In spite of the hard starts, I like the line variation this affords. For everyday writing, a 1.1mm or 1.5mm might be a bit better though.

On the whole I like this. I think it'll improve a bit once I get more comfortable with the stub nib. It's definitely tempting me to look at other stub nib options for my future pens.

Incision: a pastebin in Retro

For about a year, I hosted a small pastebin written in Retro on my server. I eventually dropped it in favor of using another service, but have kept the code for the old one around so that I could still access things if I needed them.

The pastebin was named Incision. As I mentioned in my post on Naming Projects, many of my projects are given names with ties to pain and suffering. Incision is a pastebin, which stores pastes as files called Cuts. Since this was primarily used by me and a few close friends, it was never given a friendlier name.

Retro provides a simple web CGI framework called Casket. There is also an associated library for generating HTML named Casket::HTML. This tool uses both. We start with a bit of preamble to load the necessary libraries and set up helpers for a few paths.

needs casket'
needs casket::html'
with| casket' casket::html' |

: CUTS ( -$ ) casket:root "cuts/" ^strings'append ;
: CURRENT ( -$ ) casket:root "current" ^strings'append ;

Cuts are stored in a cuts subdirectory, and the number of the current cut is stored in a file named current. Each cut is given a numeric file name, and the contents of current are updated afterwards.
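The numbering scheme is simple enough to sketch in a few lines of Python. This is a hedged illustration of the logic described above, not the Retro code; the save_cut name and the temporary directory are stand-ins.

```python
import os
import tempfile

ROOT = tempfile.mkdtemp()            # stand-in for casket:root

def save_cut(text):
    """Store a paste under the next number and update 'current'."""
    current_file = os.path.join(ROOT, "current")
    with open(current_file) as f:
        n = int(f.read()) + 1        # next cut number
    with open(os.path.join(ROOT, "cuts", str(n)), "w") as f:
        f.write(text)                # cuts/<n> holds the paste
    with open(current_file, "w") as f:
        f.write(str(n))              # remember the highest cut number
    return n

# setup + demo
os.makedirs(os.path.join(ROOT, "cuts"), exist_ok=True)
with open(os.path.join(ROOT, "current"), "w") as f:
    f.write("0")

print(save_cut("first paste"))       # 1
print(save_cut("second paste"))      # 2
```

Keeping the counter in a plain file means the CGI process needs no database: each request just reads, bumps, and rewrites one small file.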

create scratch
32 allot

create query
32768 allot

: getCurrent ( - )
scratch CURRENT ^files'slurp drop scratch toNumber !current ;

There are two data structures. scratch is a little buffer used for various temporary things, and query will hold the cut being displayed. I also have a getCurrent function that reads the current cut number and stores it in the current variable for later use.

The content of a cut file will end up being stored in the query buffer, so pastes are limited to 32k in size. I never implemented any checks to prevent a buffer overflow so actually opening this to the world would have proven problematic.

And now we go on to the pages. With Casket, we define a function for each page we want to display. So first up is /cut:

: /cut
Content-type: text/html
[ [ [ "incision: pastebin" ] title
[ "%u/css" ] stylesheet ] head
[ [ "incision: a pastebin in retro" ] "header" :class p
[ [ "new paste" ] "%u/index" :href a ] "nav" :class p
[ query 0 32768 fill
query CUTS casket:options ^strings'append ^files'slurp query "%S" ] pre ] body
] html ;

As you can see here, Casket::HTML is used to build the pages. It provides combinators corresponding to HTML tags, allowing the document to be built up programmatically. Casket also provides a template system, but I didn't use it for most of Incision.

This reads in the requested cut (the file number returned by casket:options), and displays it in a pre tag. 
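The combinator style is easy to mimic in other languages: each tag becomes a function that wraps the output of a quotation. Here's a loose Python analogue using closures in place of Retro quotations; none of these names are Casket::HTML's actual API.

```python
def tag(name, attrs=""):
    """Return a combinator: call the quotation, wrap its text in the tag."""
    def combinator(quot):
        return f"<{name}{attrs}>{quot()}</{name}>"
    return combinator

p   = tag("p", " class='header'")
a   = tag("a", " href='/index'")
pre = tag("pre")

# building a fragment by nesting quotations, as the /cut page does
fragment = p(lambda: a(lambda: "new paste"))
print(fragment)   # <p class='header'><a href='/index'>new paste</a></p>
```

Nesting the quotations mirrors the nested [ ... ] blocks in the /cut definition: the structure of the code matches the structure of the generated document.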

: /post
Content-type: text/html
[ [ [ "incision: pastebin" ] title
[ "%u/css" ] stylesheet ] head
[ [ "incision: a pastebin in retro" ] "header" :class p
[ [ "new paste" ] "%u/index" :href a ] "nav" :class p
@current 1+ "<p><a href='%u/cut/%d'>permalink</a>" tputs
getFormData 8 + [ "<pre>%s</pre>" puts ] sip
withLength CUTS @current 1+ toString ^strings'append ^files'spew drop
@current 1+ toString withLength CURRENT ^files'spew drop
] body
] html ;

The /post page gets the form data, stores it in the cuts/<current> file, increments the value in the current file, and returns the page with a permalink to it. If this had ever been made visible to a wider audience, I would have refactored it a bit, splitting each piece into its own support routine for easier maintenance.

serve: incision.css as text/css
: /css /incision.css ;

This is a little different. Casket provides a simple server for static content. You can say:

serve: filename as mime-type

And then wrap it in another name if you want.

: /index
Content-type: text/html
[ [ [ "incision: pastebin" ] title
[ "%u/css" ] stylesheet ] head
[ [ "incision: a pastebin in retro" ] "header" :class p
[ [ "new paste" ] "%u/index" :href a ] "nav" :class p
"index.erx" withTemplate ] body
] html ;

The final page is /index. This is set as the default page for the application, and provides a minimal interface. The form markup is in templates/index.erx and is just:

  <form action="%u/post" method="get">
<textarea rows="10" cols="60" name="content"></textarea><br><br>
<input type="submit">
</form>

I kept it in a template since I never added form creation to Casket::HTML. I probably should have defined a function to return this as text, so the template wouldn't be necessary.

[ /index ] is /
[ ( -$ ) "/full/path/to/incision" ] is casket:root
[ ( -$ ) "http://domain/path/to/incision" ] is casket:url
&getCurrent is doBeforeDispatch
&dispatch is boot
save bye

The final bits are just the configuration for the application. The first line sets /index as the default page. The second sets the physical root directory for Casket. The third sets the expected URL. The fourth overrides the Casket doBeforeDispatch function with our getCurrent function, and the fifth overrides the Retro boot function with Casket's dispatch function. The final line saves the image and quits.

The dispatch function in Casket processes the URL request, and calls the proper page handler function. Casket also provides a doBeforeDispatch which is called before any page handler. And Retro provides a boot function which is called once the image loads. So by overriding these, we make a standalone CGI application that serves as a pastebin.
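The overall control flow can be sketched as a tiny dispatcher in Python. All names here are hypothetical; the real Casket maps URL paths to Retro page functions, but the shape is the same.

```python
# Minimal model of Casket's dispatch: a table of path -> handler,
# a before-dispatch hook, and a default page for unmatched paths.
handlers = {}

def page(path):
    def register(fn):
        handlers[path] = fn
        return fn
    return register

@page("/index")
def index():
    return "form page"

@page("/cut")
def cut():
    return "a stored paste"

def do_before_dispatch():
    pass                      # e.g. read the current cut number

def dispatch(path):
    do_before_dispatch()      # runs before every page handler
    handler = handlers.get(path, handlers["/index"])  # /index is the default
    return handler()

print(dispatch("/cut"))       # a stored paste
print(dispatch("/nope"))      # form page (falls back to the default)
```

Overriding boot with dispatch in the saved image is what turns the whole thing into a self-contained CGI binary: the web server runs the image, the image routes the request.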

I haven't used this in years, but it was nice to have around. It's also one of the few things I can show as a demonstration of using the Casket framework to process form data. It's certainly not as easy as in a more modern language, but it is doable with some patience.