The Four Fundamental Operations of Definite Action
==================================================
All definite actions (computer programs) can be defined by four
fundamental patterns of combination: sequence, branch, loop, and
parallel.

Sequence
--------
Do one thing after another. In Joy this is represented by putting two
symbols together, juxtaposition:

::

    foo bar
Operations have inputs and outputs. The outputs of ``foo`` must be
compatible in "arity", type, and shape with the inputs of ``bar``.
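
For example (a small sketch assuming the usual Thun arithmetic words
``+``, ``dup``, and ``mul``):

::

    2 3 + dup mul

The ``+`` leaves ``5``, whose output (one integer) matches the input of
``dup``, and so on; the result is ``25``.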
Branch
------

Do one thing or another.
::

    boolean [F] [T] branch

       t [F] [T] branch
    ----------------------
              T

       f [F] [T] branch
    ----------------------
              F
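
For instance (a small sketch, assuming the comparison word ``<``):

::

    2 3 < [100] [200] branch

Here ``2 3 <`` leaves ``true``, so ``branch`` executes the second quote
and the result is ``200``.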
The ``branch`` combinator can be defined as:

::

    branch == unit cons swap pick i

       boolean [F] [T] branch
       boolean [F] [T] unit cons swap pick i
       boolean [F] [[T]] cons swap pick i
       boolean [[F] [T]] swap pick i
       [[F] [T]] boolean pick i
Given some branch function ``G``:

::

    G == [F] [T] branch

Used in a sequence like so:

::

    foo G bar
The inputs and outputs of ``F`` and ``T`` must be compatible with the
outputs of ``foo`` and the inputs of ``bar``, respectively.
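
As a concrete sketch (this definition of ``G`` is hypothetical, just
for illustration):

::

    G == [1 -] [1 +] branch

    10 true G dup mul

The Boolean selects ``[1 +]``, giving ``11``, which is then squared to
``121``.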
Often it will be easier on the programmer to write branching code with
the predicate specified in a quote. The ``ifte`` combinator provides
this (``T`` for "then" and ``E`` for "else"):

::

    [P] [T] [E] ifte
Defined in terms of ``branch``:

::

    ifte == [nullary not] dipd branch
In this case, ``P`` must be compatible with the stack and return a
Boolean value, and ``T`` and ``E`` both must be compatible with the
preceding and following functions, as described above for ``F`` and
``T``. (Note that in the current implementation we are depending on
Python for the underlying semantics, so the Boolean value doesn't *have*
to be Boolean: Python's rules for "truthiness" will be used to evaluate
it. I reflect this in the structure of the stack effect comment of
``branch``, which accepts only Boolean values, and in the definition of
``ifte`` above by including ``not`` in the quote, which also has the
effect that the subject quotes are in the proper order for ``branch``.)
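
For example, an absolute-value word could be sketched with ``ifte``
(assuming the ``neg`` word; this definition is illustrative, not part
of the library):

::

    abs == [0 <] [neg] [] ifte

    -5 abs

The nullary predicate tests the number without consuming it; ``-5`` is
negative, so the "then" quote runs and leaves ``5``.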
Loop
----

Do one thing zero or more times.

::

       t [Q] loop
    ----------------
       Q [Q] loop

       f [Q] loop
    ----------------
The ``loop`` combinator generates a copy of itself in the true branch.
This is the hallmark of recursive definitions. In Thun there is no
equivalent to conventional loops. (There is, however, the ``x``
combinator, defined as ``x == dup i``, which permits recursive
constructs that do not need to be directly self-referential, unlike
``loop`` and ``genrec``.)
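
For reference, the rewriting of ``x`` follows directly from its
definition:

::

    [F] x
    [F] dup i
    [F] [F] i
    [F] F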
The ``loop`` combinator can be defined in terms of ``branch``:

::

    loop == [] swap [dup dip loop] cons branch

       boolean [Q] [] swap [dup dip loop] cons branch
       boolean [] [Q] [dup dip loop] cons branch
       boolean [] [[Q] dup dip loop] branch
In action the false branch does nothing while the true branch does:

::

    f [] [[Q] dup dip loop] branch
    [] i

::

    t [] [[Q] dup dip loop] branch
    [[Q] dup dip loop] i
    [Q] dup dip loop
    [Q] [Q] dip loop
    Q [Q] loop
Because ``loop`` expects and consumes a Boolean value, the ``Q``
function must be compatible both with the preceding stack and with
itself, and it must leave a Boolean flag on the stack for the next
iteration.
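
A concrete sketch (assuming the comparison word ``>``): a countdown in
which ``Q`` itself replenishes the Boolean flag each time around:

::

    5 dup 0 > [1 - dup 0 >] loop

Each pass decrements the counter, copies it, and tests it, leaving
``0`` on the stack when the loop finishes.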
The ``while`` combinator keeps doing ``B`` *while* some predicate ``P``
is true. This is convenient as the predicate function is made nullary
automatically and the body function can be designed without regard to
leaving a Boolean flag for the next iteration:
::

       [P] [B] while
    --------------------------------------
       [P] nullary [B [P] nullary] loop
::

    while == swap [nullary] cons dup dipd concat loop

       [P] [B] swap [nullary] cons dup dipd concat loop
       [B] [P] [nullary] cons dup dipd concat loop
       [B] [[P] nullary] dup dipd concat loop
       [B] [[P] nullary] [[P] nullary] dipd concat loop
       [P] nullary [B] [[P] nullary] concat loop
       [P] nullary [B [P] nullary] loop
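
As a concrete sketch (assuming the ``tuck``, ``mod``, and ``>`` words;
this ``gcd`` definition is illustrative), Euclid's algorithm fits
``while`` naturally:

::

    gcd == [0 >] [tuck mod] while pop

    12 8 gcd

The predicate tests the item on top without consuming anything; the
body replaces the pair with the next remainder pair, and ``pop``
discards the final ``0``, leaving ``4``.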
Parallel
--------

The *parallel* operation indicates that two (or more) functions *do not
interfere* with each other and so can run in parallel. The main
difficulty in this sort of thing is orchestrating the recombining
("join" or "wait") of the results of the functions after they finish.
The current implementations and the following definitions *are not
actually parallel* (yet), but there is no reason they couldn't be
reimplemented in terms of e.g. Python threads. I am not concerned with
the performance of the system just yet, only the elegance of the code
it permits us to write.
Joy has a few parallel combinators, the main one being ``cleave``:

::

       ... x [A] [B] cleave
    ---------------------------------------------------------
       ... [x ...] [A] infra first [x ...] [B] infra first
    ---------------------------------------------------------
       ... a b
The ``cleave`` combinator expects a value and two quotes. It executes
each quote in a "separate universe", such that neither can affect the
other, then it takes the first item from the stack in each universe and
replaces the value and quotes with their respective results.
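
For example (a small sketch using arithmetic words):

::

    10 [1 +] [dup mul] cleave

Both quotes see the same ``10``; the results ``11`` and ``100`` replace
the value and the two quotes.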
(I think this corresponds to the "fork" operator, the little
upward-pointing triangle, that takes two functions ``A :: x -> a`` and
``B :: x -> b`` and returns a function ``F :: x -> (a, b)``, in Conal
Elliott's "Compiling to Categories" paper, et al.)
Just a thought: if you ``cleave`` two jobs and one requires more time to
finish than the other, you'd like to be able to assign resources
accordingly so that they both finish at the same time.
There are also ``app2`` and ``app3``, which run a single quote on more
than one value:

::

       ... y x [Q] app2
    ---------------------------------------------------------
       ... [y ...] [Q] infra first [x ...] [Q] infra first

       ... z y x [Q] app3
    ---------------------------------
       ... [z ...] [Q] infra first
           [y ...] [Q] infra first
           [x ...] [Q] infra first
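
For instance (a sketch squaring two values with one quote):

::

    2 3 [dup mul] app2

Each value is squared in its own universe, leaving ``4 9``.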
Because the quoted program can be ``i``, we can define ``cleave`` in
terms of ``app2``:

::

    cleave == [i] app2 [popd] dip
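
A quick sketch of why the trailing ``[popd] dip`` is needed: ``app2``
leaves the original value ``x`` beneath the two results, and ``popd``
under ``dip`` removes it:

::

    x [A] [B] [i] app2 [popd] dip
    x a b [popd] dip
    x a popd b
    a b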
(I'm not sure why ``cleave`` was specified to take that value; I may
make a combinator that does the same thing but without expecting a
value.)
The common ``map`` function in Joy should also be thought of as a
parallel operator: the quoted program ``Q`` is applied to each item of
the list independently of the others.
There is no reason why the implementation of ``map`` couldn't distribute
the ``Q`` function over e.g. a pool of worker CPUs.
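
For example (a small sketch):

::

    [1 2 3] [dup mul] map

Each item is squared independently of the others, giving ``[1 4 9]``.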
One of my favorite combinators, the ``pam`` combinator, is just:

::

    pam == [i] map
This can be used to run any number of programs separately on the current
stack and combine their (first) outputs in a result list:

::

       [[A] [B] [C] ...] [i] map
    -------------------------------
       [a b c ...]
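
For instance (a sketch running three arbitrary quotes on the same
stack):

::

    10 [[1 +] [1 -] [dup mul]] pam

Each quote sees the ``10``; the list of quotes is replaced by their
first outputs, giving ``10 [11 9 100]``.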
Handling Other Kinds of Join
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``cleave`` operators and the others above all have pretty brutal
join semantics: every sub-computation runs to completion and we always
wait for all of them. We can imagine a few different potentially useful
patterns of "joining" results from parallel combinators.
Thinking about variations of ``pam``, there could be one that returns
only the result of the first sub-program to finish, or one where the
stack is replaced by that sub-program's output stack. The other
sub-programs would be cancelled.
The runtime could be amended to permit "thunks" (also known as
"Futures" or "Promises" by *everybody* else; "Fulinators" is what I was
going to call them when I was thinking about implementing them)
representing the results of in-progress computations to be left on the
stack and picked up by subsequent functions. These would themselves be
able to leave behind more "thunks", the values of which depend on the
eventual resolution of the values of the previous thunks.
In this way you could create "chains" (and more complex shapes) out of
normal-looking code, consisting of a kind of call-graph interspersed
with "asynchronous" events.
In any case, until I can find a rigorous theory that shows that this
sort of thing works perfectly in Joy code I'm not going to worry about
it. (And I think the Categories can deal with it anyhow? Incremental