The Four Fundamental Operations of Definite Action
==================================================

All definite actions (computer programs) can be defined by four
fundamental patterns of combination:
Do one thing after another. In Joy this is represented by putting two
symbols together, juxtaposition:
Operations have inputs and outputs. The outputs of ``foo`` must be
compatible in "arity", type, and shape with the inputs of ``bar``.
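To make this concrete, here is a minimal Python sketch (not the Thun implementation) that models a Joy function as a callable from stack to stack, with the top of the stack at the end of a list. Juxtaposition is then ordinary left-to-right composition; ``dup`` and ``add`` here are toy stand-ins for real Joy functions:

```python
def compose(foo, bar):
    """Juxtaposition "foo bar": run foo, feed its output stack to bar."""
    return lambda stack: bar(foo(stack))

# Toy stack functions; the stack is a list with its top at the end.
def dup(stack):            # ... a -- ... a a
    return stack + [stack[-1]]

def add(stack):            # ... a b -- ... a+b
    return stack[:-2] + [stack[-2] + stack[-1]]

double = compose(dup, add)   # the Joy program "dup add"
print(double([3]))           # [6]
```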
Do one thing or another.
boolean [F] [T] branch
----------------------
branch == unit cons swap pick i
boolean [F] [T] branch
boolean [F] [T] unit cons swap pick i
boolean [F] [[T]] cons swap pick i
boolean [[F] [T]] swap pick i
[[F] [T]] boolean pick i
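The behavior this derivation arrives at can be sketched in Python under a hypothetical stack-as-list model (top of the stack at the end of the list); this is an illustration, not the actual implementation:

```python
def branch(stack):
    """Pop a Boolean and two quotes; run one of them on the rest."""
    T = stack.pop()       # the "true" quote
    F = stack.pop()       # the "false" quote
    flag = stack.pop()
    return T(stack) if flag else F(stack)

# Toy quotes for illustration.
inc = lambda s: s[:-1] + [s[-1] + 1]
dec = lambda s: s[:-1] + [s[-1] - 1]

print(branch([10, True, dec, inc]))    # [11]
print(branch([10, False, dec, inc]))   # [9]
```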
Given some branch function ``G``:
Used in a sequence like so:
The inputs and outputs of ``F`` and ``T`` must be compatible with the
outputs of ``foo`` and the inputs of ``bar``, respectively.
Often it will be easier on the programmer to write branching code with
the predicate specified in a quote. The ``ifte`` combinator provides
this (``T`` for "then" and ``E`` for "else"):
Defined in terms of ``branch``:
ifte == [nullary not] dip branch
In this case, ``P`` must be compatible with the stack and return a
Boolean value, and ``T`` and ``E`` both must be compatible with the
preceding and following functions, as described above for ``F`` and
``T``. (Note that in the current implementation we are depending on
Python for the underlying semantics, so the Boolean value doesn't *have*
to be Boolean: Python's rules for "truthiness" will be used to evaluate
it. I reflect this in the stack effect comment of ``branch``, which
accepts only Boolean values, and in the definition of ``ifte`` above by
including ``not`` in the quote, which also has the effect of putting the
subject quotes in the proper order for ``branch``.)
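A hedged sketch of these semantics in the same toy stack-as-list model, where the predicate runs on a copy of the stack (making it effectively nullary) and Python truthiness stands in for a strict Boolean, as noted above; the quote arguments are passed as Python callables for simplicity:

```python
def ifte(stack, P, T, E):
    """Run predicate P "nullary" (on a copy), then choose T or E."""
    flag = P(list(stack))[-1]    # P sees a copy; only its result is used
    return T(stack) if flag else E(stack)

# Toy quotes for illustration.
small = lambda s: s + [s[-1] < 10]
inc = lambda s: s[:-1] + [s[-1] + 1]
dec = lambda s: s[:-1] + [s[-1] - 1]

print(ifte([5], small, inc, dec))     # [6]
print(ifte([50], small, inc, dec))    # [49]
```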
Do one thing zero or more times.
The ``loop`` combinator generates a copy of itself in the true branch.
This is the hallmark of recursive definitions. In Thun there is no
equivalent of conventional loops. (There is, however, the ``x``
combinator, defined as ``x == dup i``, which permits recursive
constructs that do not need to be directly self-referential, unlike
``loop`` and ``genrec``.)
loop == [] swap [dup dip loop] cons branch
boolean [Q] [] swap [dup dip loop] cons branch
boolean [] [Q] [dup dip loop] cons branch
boolean [] [[Q] dup dip loop] branch
In action the false branch does nothing while the true branch does:
t [] [[Q] dup dip loop] branch
Because ``loop`` expects and consumes a Boolean value, the ``Q``
function must be compatible both with the previous stack and *with
itself*, plus a Boolean flag for the next iteration:
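The recursive behavior of ``loop`` can be sketched in Python (again a toy stack-as-list model, not the actual implementation); note how ``Q`` must leave a fresh Boolean on top for the next pass:

```python
def loop(stack):
    """Pop a quote and a Boolean; if true, run the quote and repeat."""
    Q = stack.pop()
    flag = stack.pop()
    if flag:
        stack = Q(stack)    # Q must leave a new Boolean on top
        stack.append(Q)     # the "copy of itself" in the true branch
        return loop(stack)
    return stack

# Toy body: decrement the counter and leave the flag n > 0 behind.
body = lambda s: s[:-1] + [s[-1] - 1, s[-1] - 1 > 0]

print(loop([3, True, body]))    # [0]
```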
Keep doing ``B`` *while* some predicate ``P`` is true. This is
convenient as the predicate function is made nullary automatically and
the body function can be designed without regard to leaving a Boolean
flag for the next iteration:
   [P] [B] while
--------------------------------------
   [P] nullary [B [P] nullary] loop
while == swap [nullary] cons dup dipd concat loop
[P] [B] swap [nullary] cons dup dipd concat loop
[B] [P] [nullary] cons dup dipd concat loop
[B] [[P] nullary] dup dipd concat loop
[B] [[P] nullary] [[P] nullary] dipd concat loop
[P] nullary [B] [[P] nullary] concat loop
[P] nullary [B [P] nullary] loop
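A Python sketch of this while pattern, assuming the same toy stack-as-list model with ``P`` and ``B`` passed as callables: the predicate runs on a copy of the stack (so it is effectively nullary), and the body never handles the flag:

```python
def while_(stack, P, B):
    """Keep running B while predicate P (run nullary) yields true."""
    while P(list(stack))[-1]:    # P sees a copy; the stack is unchanged
        stack = B(stack)
    return stack

# Toy predicate and body for illustration.
positive = lambda s: s + [s[-1] > 0]
dec = lambda s: s[:-1] + [s[-1] - 1]

print(while_([3], positive, dec))    # [0]
```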
The *parallel* operation indicates that two (or more) functions *do not
interfere* with each other and so can run in parallel. The main
difficulty in this sort of thing is orchestrating the recombining
("join" or "wait") of the results of the functions after they finish.
The current implementations and the following definitions *are not
actually parallel* (yet), but there is no reason they couldn't be
reimplemented in terms of e.g. Python threads. I am not concerned with
performance of the system just yet, only the elegance of the code it
permits.
Joy has a few parallel combinators, the main one being ``cleave``:
---------------------------------------------------------
... [x ...] [A] infra first [x ...] [B] infra first
---------------------------------------------------------
The ``cleave`` combinator expects a value and two quotes, and it
executes each quote in "separate universes" such that neither can affect
the other, then it takes the first item from the stack in each universe
and replaces the value and quotes with their respective results.
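A sketch of these semantics in Python (a toy model; the quotes are passed as callables rather than taken from the stack): each quote sees its own copy of the stack, and the first item of each result replaces the value and quotes:

```python
def cleave(stack, A, B):
    """Run A and B in "separate universes"; keep the top of each."""
    a = A(list(stack))[-1]      # A runs on its own copy of the stack
    b = B(list(stack))[-1]      # so does B; neither affects the other
    return stack[:-1] + [a, b]  # the value x is replaced by a and b

# Toy quotes: double and square the value.
dup_add = lambda s: s + [s[-1] + s[-1]]
dup_mul = lambda s: s + [s[-1] * s[-1]]

print(cleave([5], dup_add, dup_mul))    # [10, 25]
```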
(I think this corresponds to the "fork" operator, the little
upward-pointing triangle, that takes two functions ``A :: x -> a`` and
``B :: x -> b`` and returns a function ``F :: x -> (a, b)``, in Conal
Elliott's "Compiling to Categories" paper, et al.)
Just a thought: if you ``cleave`` two jobs and one requires more time to
finish than the other, you'd like to be able to assign resources
accordingly so that they both finish at the same time.
There are also ``app2`` and ``app3``, which run a single quote on more
than one value:
---------------------------------------------------------
... [y ...] [Q] infra first [x ...] [Q] infra first
---------------------------------
... [z ...] [Q] infra first
    [y ...] [Q] infra first
    [x ...] [Q] infra first
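In the same toy stack-as-list model, ``app2`` can be sketched as popping the quote and two values, then running the quote once per value over the shared context:

```python
def app2(stack):
    """Run one quote Q on each of two values over the shared context."""
    Q = stack.pop()
    x = stack.pop()
    y = stack.pop()
    ry = Q(stack + [y])[-1]     # each run sees the rest of the stack
    rx = Q(stack + [x])[-1]
    return stack + [ry, rx]

sqr = lambda s: s[:-1] + [s[-1] ** 2]    # toy quote

print(app2([2, 3, sqr]))    # [4, 9]
```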
Because the quoted program can be ``i`` we can define ``cleave`` in
terms of ``app2``:

cleave == [i] app2 [popd] dip
(I'm not sure why ``cleave`` was specified to take that value; I may
make a combinator that does the same thing but without expecting a
value.)
The common ``map`` function in Joy should also be thought of as a
parallel operator.
There is no reason why the implementation of ``map`` couldn't distribute
the ``Q`` function over e.g. a pool of worker CPUs.
One of my favorite combinators, the ``pam`` combinator, is just ``[i] map``:
This can be used to run any number of programs separately on the current
stack and combine their (first) outputs in a result list.
[[A] [B] [C] ...] [i] map
-------------------------------
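A sketch of ``pam`` in Python under the toy stack-as-list model: run each quoted program on a copy of the current stack and collect the top of each result in a list:

```python
def pam(stack, quotes):
    """Run each quote on a copy of the stack; collect the tops."""
    return stack + [[Q(list(stack))[-1] for Q in quotes]]

# Toy quotes for illustration.
inc = lambda s: s[:-1] + [s[-1] + 1]
dbl = lambda s: s[:-1] + [s[-1] * 2]
sqr = lambda s: s[:-1] + [s[-1] ** 2]

print(pam([7], [inc, dbl, sqr]))    # [7, [8, 14, 49]]
```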
Handling Other Kinds of Join
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``cleave`` operators and others all have pretty brutal join
semantics: every sub-computation is expected to succeed, and we always
wait for all of them. We can imagine a few different potentially useful
patterns of "joining" results from parallel combinators.
Thinking about variations of ``pam``, there could be one that returns
only the result of the first sub-program to finish, or one where the
stack is replaced by that sub-program's output stack. The other
sub-programs would be cancelled.
Also known as "Futures" or "Promises" (by *everybody* else; "Fulinators"
is what I was going to call them when I was thinking about implementing
them).
The runtime could be amended to permit "thunks" representing the results
of in-progress computations to be left on the stack and picked up by
subsequent functions. These would themselves be able to leave behind
more "thunks", the values of which depend on the eventual resolution of
the values of the previous thunks.
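As a hedged illustration only, Python's standard ``concurrent.futures`` module already behaves this way: a ``Future`` can sit "on the stack" as a thunk and be resolved by whatever later step actually needs its value:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # Stands in for an in-progress computation.
    return n * n

with ThreadPoolExecutor(max_workers=2) as pool:
    thunk = pool.submit(slow_square, 6)   # leave a "thunk" behind
    # ... other work could proceed here while it runs ...
    stack = [thunk.result() + 1]          # a later step picks it up

print(stack)    # [37]
```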
In this way you can create "chains" (and more complex shapes) out of
normal-looking code that consists of a kind of call-graph interspersed
with "asynchronous" ... events?
In any case, until I can find a rigorous theory that shows that this
sort of thing works perfectly in Joy code, I'm not going to worry about
it. (And I think the Categories can deal with it anyhow? Incremental