The Four Operations
===================

Functions are composed simply by writing their symbols together,
juxtaposition:

::

    foo bar

Operations have inputs and outputs. The outputs of ``foo`` must be
compatible in "arity", type, and shape with the inputs of ``bar``.

Branch
------

Do one thing or another.

::

    boolean [F] [T] branch

       t [F] [T] branch
    ----------------------
              T

       f [F] [T] branch
    ----------------------
              F

    branch == unit cons swap pick i

    boolean [F] [T] branch
    boolean [F] [T] unit cons swap pick i
    boolean [F] [[T]] cons swap pick i
    boolean [[F] [T]] swap pick i
    [[F] [T]] boolean pick i
    [F-or-T] i

Given some branch function ``G``:

::

    G == [F] [T] branch

Used in a sequence like so:

::

    foo G bar

The inputs and outputs of ``F`` and ``T`` must be compatible with the
outputs of ``foo`` and the inputs of ``bar``, respectively.

::

    foo F bar

    foo T bar

``ifte``
~~~~~~~~

Often it will be easier on the programmer to write branching code with
the predicate specified in a quote. The ``ifte`` combinator provides
this (``T`` for "then" and ``E`` for "else"):

::

    [P] [T] [E] ifte

Defined in terms of ``branch``:

::

    ifte == [nullary not] dipd branch

In this case, ``P`` must be compatible with the stack and return a
Boolean value, and ``T`` and ``E`` must both be compatible with the
preceding and following functions, as described above for ``F`` and
``T``. (Note that in the current implementation we are depending on
Python for the underlying semantics, so the Boolean value doesn't *have*
to be Boolean: Python's rules for "truthiness" will be used to evaluate
it. I reflect this in the stack effect comment of ``branch``, which
only accepts Boolean values, and in the definition of ``ifte`` above by
including ``not`` in the quote, which also has the effect of turning a
merely "truthy" result into an actual Boolean for ``branch``.)
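For example, given a number on the stack we can double it or square it
depending on a flag. These little traces are illustrative sketches and
assume the ``dup``, ``+``, ``*``, and ``>`` words from the Thun library:

::

    3 t [dup +] [dup *] branch
    ----------------------------
    3 dup *
    ----------------------------
    9

With ``ifte`` the flag comes from running the predicate quote
``nullary`` against the same stack, so the ``3`` is still there for the
chosen branch:

::

    3 [2 >] [dup *] [dup +] ifte
    ------------------------------
    9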
Loop
----

Do one thing zero or more times.

::

    boolean [Q] loop

       t [Q] loop
    ----------------
       Q [Q] loop

    ... f [Q] loop
    --------------------
    ...

The ``loop`` combinator generates a copy of itself in the true branch.
This is the hallmark of recursive definitions. In Thun there is no need
to write that recursion out by hand: recursive combinators like
``loop`` capture the pattern, so we can build recursive constructs that
do not need to be directly self-referential:

::

    loop == [] swap [dup dip loop] cons branch

    boolean [Q] loop
    boolean [Q] [] swap [dup dip loop] cons branch
    boolean [] [Q] [dup dip loop] cons branch
    boolean [] [[Q] dup dip loop] branch

In action the false branch does nothing while the true branch does:

::

    t [] [[Q] dup dip loop] branch
    [Q] dup dip loop
    [Q] [Q] dip loop
    Q [Q] loop

Because ``loop`` expects and consumes a Boolean value, the ``Q``
function must be compatible with the previous stack *and with itself*,
and it must leave a Boolean flag on the stack for the next iteration:

::

    Q == G b

    Q [Q] loop
    G b [Q] loop
    G Q [Q] loop
    G G b [Q] loop
    G G Q [Q] loop
    G G G b [Q] loop
    G G G

``while``
~~~~~~~~~

The ``while`` combinator factors the flag out of the body: it accepts a
predicate quote ``[P]`` which is run ``nullary`` to supply the Boolean
flag for the next iteration:

::

    [P] [B] while
    --------------------------------------
    [P] nullary [B [P] nullary] loop

    while == swap [nullary] cons dup dipd concat loop

    [P] [B] while
    [P] [B] swap [nullary] cons dup dipd concat loop
    [B] [P] [nullary] cons dup dipd concat loop
    [B] [[P] nullary] dup dipd concat loop
    [B] [[P] nullary] [[P] nullary] dipd concat loop
    [P] nullary [B] [[P] nullary] concat loop
    [P] nullary [B [P] nullary] loop
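For example, counting a number down to zero (an illustrative sketch
assuming the library's decrement word ``--`` and the comparison word
``>``): the predicate supplies the flag and the body never has to
mention it:

::

    3 [0 >] [--] while
    ----------------------
    0

Written directly with ``loop`` the body has to produce the flag itself:

::

    3 dup 0 > [-- dup 0 >] loop
    -----------------------------
    0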
Parallel
--------

The *parallel* operation indicates that two (or more) functions *do not
interfere* with each other and so can run in parallel. The main
difficulty is orchestrating the recombining ("join" or "wait") of the
results of the functions after they finish.

The current implementations and the following definitions *are not
actually parallel* (yet), but there is no reason they couldn't be
reimplemented in terms of e.g. Python threads. I am not concerned with
the performance of the system just yet, only the elegance of the code
it allows us to write.

``cleave``
~~~~~~~~~~

Joy has a few parallel combinators, the main one being ``cleave``:

::

    ... x [A] [B] cleave
    ---------------------------------------------------------
    ... [x ...] [A] infra first [x ...] [B] infra first
    ---------------------------------------------------------
    ... a b

The ``cleave`` combinator expects a value and two quotes. It executes
each quote in a "separate universe", such that neither can affect the
other, then takes the first item from the stack in each universe and
replaces the value and quotes with their respective results.

(I think this corresponds to the "fork" operator, the little
upward-pointing triangle that takes two functions ``A :: x -> a`` and
``B :: x -> b`` and returns a function ``F :: x -> (a, b)``, in Conal
Elliott's "Compiling to Categories" paper, et al.)

Just a thought: if you ``cleave`` two jobs and one requires more time
to finish than the other, you'd like to be able to assign resources
accordingly so that they both finish at the same time.

"Apply" Functions
~~~~~~~~~~~~~~~~~

There are also ``app2`` and ``app3``, which run a single quote on more
than one value:

::

    ... y x [Q] app2
    ---------------------------------------------------------
    ... [y ...] [Q] infra first [x ...] [Q] infra first

    ... z y x [Q] app3
    ---------------------------------
    ... [z ...] [Q] infra first
        [y ...] [Q] infra first
        [x ...] [Q] infra first

Because the quoted program can be ``i`` we can define ``cleave`` in
terms of ``app2``:

::

    cleave == [i] app2 [popd] dip

(I'm not sure why ``cleave`` was specified to take that value; I may
make a combinator that does the same thing but without expecting a
value.)

::

    clv == [i] app2

    [A] [B] clv
    ------------------
        a b

``map``
~~~~~~~

The common ``map`` function in Joy should also be thought of as a
*parallel* operation:

::

    [a b c ...] [Q] map

There is no reason why the implementation of ``map`` couldn't
distribute the ``Q`` function over e.g. a pool of worker CPUs.

``pam``
~~~~~~~

One of my favorite combinators, the ``pam`` combinator is just:

::

    pam == [i] map

This can be used to run any number of programs separately on the
current stack and combine their (first) outputs in a result list.

::

    [[A] [B] [C] ...] [i] map
    -------------------------------
    [ a b c ...]
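For example (an illustrative trace assuming the usual arithmetic words,
and that, as with ``map`` generally, whatever lies beneath the list of
quotes is left in place):

::

    10 [[1 +] [2 *] [dup *]] pam
    ------------------------------
    10 [11 20 100]

Each quoted program sees the ``10`` (and the rest of the stack) in its
own universe, and only the first item of each result stack is kept.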
The other sub-programs would be cancelled. -"Fulminators" +“Fulminators” ^^^^^^^^^^^^^ -Also known as "Futures" or "Promises" (by *everybody* else. "Fulinators" +Also known as “Futures” or “Promises” (by *everybody* else. “Fulinators” is what I was going to call them when I was thinking about implementing them in Thun.) -The runtime could be amended to permit "thunks" representing the results +The runtime could be amended to permit “thunks” representing the results of in-progress computations to be left on the stack and picked up by subsequent functions. These would themselves be able to leave behind -more "thunks", the values of which depend on the eventual resolution of +more “thunks”, the values of which depend on the eventual resolution of the values of the previous thunks. -In this way you can create "chains" (and more complex shapes) out of +In this way you can create “chains” (and more complex shapes) out of normal-looking code that consist of a kind of call-graph interspersed -with "asyncronous" ... events? +with “asyncronous” … events? In any case, until I can find a rigorous theory that shows that this -sort of thing works perfectly in Joy code I'm not going to worry about +sort of thing works perfectly in Joy code I’m not going to worry about it. (And I think the Categories can deal with it anyhow? Incremental evaluation, yeah?)