#+TITLE: Software Foundations 558
#+OPTIONS: toc:2 num:nil ^:t TeX:t LaTeX:t
#+EXPORT_EXCLUDE_TAGS: hide
#+STARTUP: hideblocks

- Office Hours:
  - Tue: 10:00 - 11:00
  - Thu: 12:15 - 1:15
- http://digamma.cs.unm.edu/~darko/classes/2009f-558/index.html
- syllabus.pdf

* meta stuff
** optional reading
- /Programming in Haskell/ can read in one sitting, intro to prog
- /Introduction to Functional Programming/ part math, part prog more
  in depth, prove correctness etc...
- /Practical Foundations for Programming Languages/ online notes, can
  serve as alternative to textbook
- /Programming Language Pragmatics/ undergrad PL and compiler textbook
- /Real World Haskell/ O'Reilly, in case we actually wanted to write
  real programs in Haskell

** homework hand in
Turn In
- There is a turn-in mechanism through cssupport.
- electronic, through the CS machines (requires a CS account)

** exams
- 2 midterms
  - 1 after Haskell
  - 1 half way through content
- 1 cumulative final

*** DONE 1st midterm <2009-10-01 Thu>
    :PROPERTIES:
    :ID:       4DF93D57-F52F-4670-A3E7-8E8D8BED5390
    :END:

covers programming and reasoning in Haskell
- present this data structure or algorithm in Haskell
- may take the whole period or may just be part of the period

preparation
- work through examples

*** DONE 2nd midterm <2009-11-19 Thu>
    :PROPERTIES:
    :ID:       8456FC7F-7630-456F-A0CE-787501B4B85D
    :END:
non-cumulative

*** DONE Final Exam <2009-12-15 Tue 12:30>--<2009-12-15 Tue 14:30>
    :PROPERTIES:
    :ID:       F7688401-8F34-453E-B92C-4DD7BA65387E
    :END:

- final.pdf
- selected-solutions.pdf

* class notes

** 2009-08-25 Tue [2/2]

- intro to Haskell
- start with core of lambda calculus and begin adding features
  - OO
  - imperative
  - most features of current programming languages
  - possibly more

- [X] sign up for mailing list
- [X] get text book

** 2009-08-27 Thu
*** sum integers
    :PROPERTIES:
    :ID:       8E9A12E3-B863-460D-AC44-C1CA84FE6982
    :END:
C: creates the location =sum= in memory, then modifies it 11 times
#+begin_src c
  sum = 0;
  for (i = 1; i < 11; i++)
    {
      sum = sum + i;
    }
#+end_src

Haskell: no store, initialization, variables, etc...
#+begin_src haskell
  sum [1..10]
#+end_src

sum over all the natural numbers in Haskell (legal to write, though it
never finishes evaluating)
#+begin_src haskell
sum [0..]
#+end_src

*** QuickSort
    :PROPERTIES:
    :ID:       9C83CEA9-EAC0-4D64-A53D-5CCA4588201A
    :END:
#+begin_src haskell
  f [] = []
  f (x:xs) = f ys ++ [x] ++ f zs
          where
            ys = [a | a <- xs, a <= x]
            zs = [b | b <- xs, b > x]
#+end_src
this is much clearer in a functional language, but it would be
difficult to specify an in-place QuickSort.  see "persistent data
structures vs. ephemeral data structures" for more on this

*** expressions
Prelude: contains the basic functions and operators (sqrt, +, etc...),
and is defined in Prelude.hs

common functions in the prelude
#+begin_example 
Prelude> head [2,3,5,1]
2
Prelude> tail [2,3,5,1]
[3,5,1]
Prelude> [2,3,5,1] !! 2
5
Prelude> take 3 [2,3,5,1]
[2,3,5]
Prelude> drop 3 [2,3,5,1]
[1]
Prelude> length [2,3,5,1]
4
Prelude> sum [2,3,5,1]
11
Prelude> product [2,3,5,1]
30
Prelude> [1,2,3] ++ [2,3,5,1]
[1,2,3,2,3,5,1]
Prelude> reverse [2,3,5,1]
[1,5,3,2]
#+end_example

*** function application
- in Java f(x)
- in math f(x)
- in Haskell f x

=f x + b= -> =f(x) + b=

style in Haskell is to minimize parentheses

*Curried functions*

functions *always* take a *single* argument, so in =take 2 [2,5,3]=
the portion =take 2= returns a function which is then applied to the
list.

=f x y= -> =(f(x))(y)=
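
for example, a quick sketch (the names =add= and =addFive= are made up
for illustration):
#+begin_src haskell
  add :: Int -> Int -> Int
  add x y = x + y

  addFive :: Int -> Int
  addFive = add 5     -- partial application: add 5 is itself a function

  -- addFive 3 == 8, and (add 5) 3 == add 5 3 == 8
#+end_src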

*** program
    :PROPERTIES:
    :ID:       B7A50485-B474-41E5-947A-E43154F5509B
    :END:
#+begin_src haskell :file example.hs
  double x = x + x
  quadruple x = double (double x)
#+end_src

in the interactive session you'd load a file with
: Prelude> :l example.hs

*** pattern matching
    :PROPERTIES:
    :ID:       75100DF2-5537-4428-A47A-ED1E0D8F182E
    :END:
pattern matching happens from top to bottom
#+begin_src haskell
fac 0 = 1
fac n = n * fac (n - 1)
#+end_src
so in the above example 0 matches first.

also guards (same as such that) can be used to be more explicit
#+begin_src haskell
  fac n
      | n > 0 = n * fac (n - 1)
      | n == 0 = 1
#+end_src

** 2009-09-01 Tue
*** technical questions
    :PROPERTIES:
    :ID:       CA56E1B8-ADF8-49A9-9582-C8932F92CC68
    :END:
=undefined= is a valid token in Haskell code.
#+begin_example 
  Prelude> undefined
  *** Exception: Prelude.undefined
#+end_example

if your guards for a function don't cover some argument, then the
result for that argument is undefined

#+begin_src haskell
  fact n
      | n >= 1 = n * fact(n - 1)
      | n == 0 = 1
  -- is equal to
  fact n
      | n >= 1 = n * fact(n - 1)
      | n == 0 = 1
      | n < 0  = undefined
#+end_src

write all function definitions in files

*** more Haskell features / syntax
    :PROPERTIES:
    :ID:       9C979C33-221A-405E-8DE0-3818350DE15D
    :END:
- when you want to use a named prefix function as an infix operator,
  you must surround it in back quotes
  #+srcname: average
  #+begin_src haskell 
    average xs = sum xs `div` length xs
    -- or
    average xs = div (sum xs) (length xs)
  #+end_src

- all names (functions, variables) must start with lower-case letters
  #+srcname: funny
  #+begin_src haskell 
    funny ns = sum ns + length ns
    -- or
    funny ns = (+) (sum ns) (length ns)
  #+end_src

- indentation matters, code is 2-D (like Python); /however/ there are
  1-D options for all expressions, and in fact these 1-D forms are the
  /basic/ or /core/ Haskell that the complex 2-D layout syntax is
  translated to.

- the =!!= notation is used to index into a list

- comments
  - single line starts with =--=
  - multi-line comments use ={-= =-}= notation

*** types
    :PROPERTIES:
    :ID:       DCEEFE6E-8005-4097-BDD2-0E8410F6F10E
    :END:
- can be explicit about types
- =::= means /has the type/
- all types are capitalized
  #+begin_src haskell
    False :: Bool
    '5' :: Char
    5 :: Int
    "a" :: String
    "a" :: [Char]
    [False, True] :: [Bool]
  #+end_src

we will use type declarations for *every* function that we write

types
- Int :: machine size integers
- Integer :: integers of arbitrary size
- Float :: numbers with decimals
- Bool :: boolean
- Char :: characters
- String :: is the same as a list of characters =[Char]=

#+srcname: uppcase
#+begin_src haskell 
  uppcase :: String -> String
  uppcase str = map f str
      where
        f :: Char -> Char
        f 'a' = 'A'
        f 'b' = 'B'
        -- ... and so on for the remaining letters
        f c = c
#+end_src

*** reference
- http://haskell.org contains the Prelude documentation, etc...
- Hoogle provides a search over Haskell functions (by name or by type)

** 2009-09-03 Thu
   :PROPERTIES:
   :ID:       3B4D9FE6-2D23-4A5F-8D95-89CF623EE0B4
   :END:
*** question: executable with main
#+srcname: main
#+begin_src haskell :file main 
  module Main where

  fac 0 = 1
  fac n = n * fac (n - 1)

  main = putStrLn "hello"
#+end_src

*** types
#+begin_src haskell
  [False, False] :: [Bool]
  [1..] :: [Integer]
#+end_src

- tuple :: is like a record, can contain multiple values of different
           types.  tuples can be any size, but the size is fixed.
           #+begin_src haskell
             (False, True) :: (Bool, Bool)
             (False, 'a', 1) :: (Bool, Char, Int)
           #+end_src
- =div 5 9= ::
  - =div 5= returns a function which divides 5 by its argument.  the
    type of div is =div :: Integer -> (Integer -> Integer)=
- =->= :: we just say that the =->= constructor associates to the
          right, so we don't have to write the parenthesis as above.

by having *every* function take *exactly* one argument, Haskell and ML
avoid the complexity of needing =apply= operators like in Lisp

when passing complex arguments to functions you can use pattern
matching to automatically deconstruct the complex argument
: f (b,n) = b + n

- polymorphic functions :: can take multiple types of arguments
     #+begin_src haskell
       length :: [Integer] -> Integer
       -- or
       length :: [Char] -> Integer     
     #+end_src

- type variables :: can be used to represent multiple types
                    #+begin_src haskell
                      length :: [a] -> Integer
                      -- or
                      take :: Integer -> [a] -> [a]
                      fst :: (a,b) -> a
                      id :: a -> a
                    #+end_src
- overloading :: when you want a function to work at multiple types,
                 but not at all types, you are overloading the
                 function (defining it for multiple argument types)
- type classes :: Haskell manages overloading by grouping types into
                  classes.  see the prelude for the standard type
                  classes.  type class constraints are written in
                  front of the rest of the type (a small sketch of
                  writing such constrained functions follows this
                  list)
                  #+begin_src haskell
                    product :: Num a => [a] -> a
                     (+) :: Num a => a -> a -> a
                  #+end_src
                    - Num :: numerical types
                    - Eq :: equality types
                    - Ord :: ordered types
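
a small sketch of writing our own functions with these class
constraints (the names here are mine):
#+begin_src haskell
  -- Eq lets us compare for equality; Ord lets us compare for ordering
  memberOf :: Eq a => a -> [a] -> Bool
  memberOf x ys = any (== x) ys

  smallest :: Ord a => [a] -> a
  smallest = foldr1 min

  -- memberOf 3 [1, 2, 3] == True;  smallest "hello" == 'e'
#+end_src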

** 2009-09-08 Tue
*** pattern matching

#+srcname: abs
#+begin_src haskell 
  abs :: Int -> Int
  abs n
      | n < 0 = -n
      | otherwise = n
#+end_src

#+srcname: not
#+begin_src haskell 
  not :: Bool -> Bool
  not False = True
  not True = False
#+end_src

wildcard =_=
#+srcname: and-no-short-circuit
#+begin_src haskell 
  and :: Bool -> Bool -> Bool
  True `and` True = True
  _ `and` _ = False
#+end_src

note that the above =and= definition is not short-circuiting (it
pattern matches on both arguments), while the next one is

#+srcname: and
#+begin_src haskell
  and :: Bool -> Bool -> Bool
  True `and` b = b
  False `and` _ = False
#+end_src

*** lazy evaluation
allows the use of infinite data structures

=[1..]= is the infinite list of integers counting up from 1

lists use the =:= cons operator

=[1, 2, 3]=

#+begin_src ditaa
  +-----+-----+       +-----+-----+      +-----+-----+
  |     |   --------> |     |   -------> |     |     |    
  |  a  |     |       |  b  |     |      |  c  |  [] |    
  +-----+-----+       +-----+-----+      +-----+-----+    
#+end_src
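
for example
#+begin_src haskell
  -- the bracket syntax is shorthand for consing onto the empty list
  [1, 2, 3] == 1 : (2 : (3 : []))
  -- laziness lets us take a finite prefix of an infinite list
  take 3 [1..]   -- [1, 2, 3]
#+end_src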

*** anonymous functions lambda (written =\=)
#+begin_src haskell
  (\ x -> x + 2) 4 -- is equal to 6
#+end_src

useful when combined with map
#+begin_src haskell
firstOdds n = map (\ x -> 2*x + 1) [0..n-1]
#+end_src

*** sections (partial infix operators)
shorthand for lambda abstractions
#+begin_src haskell
  -- the following are all equivalent
  1 + 2
  (+) 1 2
  (1+) 2
  (+2) 1
#+end_src

can be handy
#+begin_src haskell
map (*2) [1, 2, 3, 4]
#+end_src

*** list comprehension
#+begin_src haskell
  [x^2 | x <- [1..5]]
  -- is equal to
  map (^2) [1..5]
#+end_src

more interesting list comprehension
#+begin_src haskell
  -- multiple generators
  [(x,y) | x <- [1, 2, 3], y <- [4,5]]
  -- triangular
  [(x,y) | x <- [1, 2, 3], y <- [x..3]]
#+end_src

- concat :: flattening function for lists
            #+srcname: concat
            #+begin_src haskell
              concat xss = [x | xs <- xss, x <- xs]
            #+end_src

guards
#+begin_src haskell
  [x | x <- [1..10], even x]
#+end_src

*** finding primes
#+begin_src haskell
  factors n = [x | x <- [1..n], n `mod` x == 0]
  prime n = factors n == [1, n]
  primes n = [x | x <- [2..n], prime x]
#+end_src

** 2009-09-10 Thu
continue where we left off (more examples)

*** zip
combines lists

#+srcname: zip
#+begin_src haskell :tangle scraps/zip
  zip :: [a] -> [b] -> [(a, b)]
  zip [] _  = []
  zip _ []  = []
  zip [a] [b] = [(a,b)]
  zip (a:as) (b:bs) = (a,b) : (zip as bs)
#+end_src

turn a list into a list of pairs
#+srcname: pairs
#+begin_src haskell 
  pairs :: [a] -> [(a, a)]
  pairs xs = zip xs (tail xs)
#+end_src

#+srcname: sorted
#+begin_src haskell 
  sorted :: Ord a => [a] -> Bool
  sorted [x] = True
  sorted xs = foldl (&&) True (map (\ (x, y) -> x <= y) (pairs xs))
  -- more concisely
  sorted xs = and [x <= y | (x,y) <- pairs xs]
#+end_src

*** count lower-case letters in string
#+srcname: numlower
#+begin_src haskell 
  numlower :: String -> Int
  numlower str = length [x | x <- str, isLower x]  -- isLower is from Data.Char
#+end_src

*** recursive list functions
#+srcname: product
#+begin_src haskell 
  product :: Num a => [a] -> a
  product [] = 1
  product (x : xs) = x * (product xs)
#+end_src

#+srcname: length
#+begin_src haskell 
  length :: [a] -> Int
  length [] = 0
  length (_:xs) = 1 + (length xs)
#+end_src

#+srcname: reverse
#+begin_src haskell 
  reverse :: [a] -> [a]
  reverse [] = []
  reverse (x:xs) = (reverse xs) ++ [x]
#+end_src

how fast is reverse... I'd think it's =O(n)=
: reverse [1, 2, 3]
: (reverse [2,3]) ++ [1]
: ((reverse [3]) ++ [2]) ++ [1]
: (((reverse []) ++ [3]) ++ [2]) ++ [1]
: (([] ++ [3]) ++ [2]) ++ [1]
: ([3] ++ [2]) ++ [1]
: [3, 2] ++ [1]
: [3, 2, 1]

=++= must traverse its left argument, so it doesn't run in constant
time; as a result this reverse runs in n^2 time

maybe a linear-time version
#+begin_src haskell
  reverse :: [a] -> [a]
  reverse xs = foldl (flip (:)) [] xs
#+end_src

#+srcname: concat
#+begin_src haskell 
  concat :: [[a]] -> [a]
  concat xss = [x | xs <- xss, x <- xs]
  -- or recursively
  concat [] = []
  concat (x:xs) = x ++ (concat xs)
#+end_src

#+srcname: replicate
#+begin_src haskell 
  replicate :: Int -> a -> [a]
  replicate 0 _ = []
  replicate n x = x : (replicate (n - 1) x)
#+end_src

#+srcname: !!
#+begin_src haskell 
  (!!) :: [a] -> Int -> a
  (x:xs) !! 0 = x
  (x:xs) !! n = xs !! (n - 1)
#+end_src

#+srcname: elem
#+begin_src haskell 
  elem :: Eq a => a -> [a] -> Bool
  elem _ [] = False
  elem x (y:ys) = (x == y) || (elem x ys)
#+end_src

What types would not be comparable? Functions

#+srcname: map
#+begin_src haskell
  map :: (a -> b) -> [a] -> [b]
  map f xs = [f x | x <- xs]
  -- now recursively
  map f [] = []
  map f (x:xs) = f x : map f xs
#+end_src

#+srcname: filter
#+begin_src haskell 
  filter :: (a -> Bool) -> [a] -> [a]
  filter f xs = [x | x <- xs, f x]
  -- recursively
  filter f [] = []
  filter f (x:xs)
             | f x = x : filter f xs
             | otherwise = filter f xs
#+end_src

** 2009-09-17 Thu
*** our own types, and their classes
How to define types and make them instances of type classes

In standard Haskell it is not possible to have a type which is
dependent on a value.

- Type Synonym :: do not introduce new types
     #+begin_src haskell
       type Point = (Float, Float)     
     #+end_src
- New Type :: does introduce a new type
     #+begin_src haskell
       data Point2D = Point2D (Float, Float)     
     #+end_src
     this actually does introduce a new type, which we can use
     #+begin_src haskell
       pointA, pointB :: Point2D
       pointA = Point2D(2.0, 3.0)
       pointB = Point2D(3.0, 4.0)
       -- we can then define functions on these types
       distance (Point2D (x1, y1)) (Point2D (x2, y2)) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)
     #+end_src
     the new =Point2D= syntactic element is a *tag* which has the type
     of a function from a pair of =Floats= to the type =Point2D=.  It
     is not necessary for the tag and the type to have the same name.
- types w/variants :: a type which has multiple constructors
     #+begin_src haskell
       data Point2D = Rect (Float, Float)  -- (x,y)
                    | Polar (Float, Float) -- (theta, r)
       -- we can then use these types
       pointC :: Point2D
       pointC = Polar (2.0, 1.5)
       -- and define functions
       distance (Rect (x1, y1)) (Rect (x2, y2)) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)
       -- one way to handle the mixed and polar cases: convert to Rect first
       distance (Polar (theta, r)) p = distance (Rect (r * cos theta, r * sin theta)) p
       distance p (Polar (theta, r)) = distance p (Rect (r * cos theta, r * sin theta))
     #+end_src
- recursive data types :: for real fun, we need to recurse
     #+begin_src haskell
       data BinIntTree = Leaf Int
                       -- normally in Haskell curried constructors are preferred to tuples
                       | Node BinIntTree BinIntTree
     #+end_src
     and to use said structure...
     #+begin_src haskell
       tree1 :: BinIntTree
       tree1 = Leaf 1
       tree123 :: BinIntTree
       tree123 = Node (Node (Leaf 1) (Leaf 2)) (Leaf 3)
     #+end_src
     for more stuff...
     #+begin_src haskell
       -- lets count our leaves
       countLeaves :: BinIntTree -> Int
       countLeaves (Leaf n) = 1
       countLeaves (Node t1 t2) = (countLeaves t1) + (countLeaves t2)
       -- lets add our leaf values
       addLeaves :: BinIntTree -> Int
       addLeaves (Leaf n) = n
       addLeaves (Node t1 t2) = (addLeaves t1) + (addLeaves t2)
       -- left to right traversal
       traverseTreeLeft :: BinIntTree -> [Int]
       traverseTreeLeft (Leaf n) = [n]
       traverseTreeLeft (Node t1 t2) = (traverseTreeLeft t1) ++ (traverseTreeLeft t2)
       -- right to left traversal
       traverseTreeRight :: BinIntTree -> [Int]
       traverseTreeRight (Leaf n) = [n]
       traverseTreeRight (Node t1 t2) = (traverseTreeRight t2) ++ (traverseTreeRight t1)
       -- map some function across a tree
       mapBinIntTree :: (Int -> Int) -> BinIntTree -> BinIntTree
       mapBinIntTree f (Leaf n) = Leaf (f n)
       mapBinIntTree f (Node t1 t2) = Node (mapBinIntTree f t1) (mapBinIntTree f t2)
       -- height
       heightBinIntTree :: BinIntTree -> Int
       heightBinIntTree (Leaf _) = 0
       heightBinIntTree (Node t1 t2) = 1 + (heightBinIntTree t1 `max` heightBinIntTree t2)
     #+end_src
- new tree data type :: abstract arithmetic syntax tree
     #+begin_src haskell
       data ArExp = Mul ArExp ArExp
                  | Add ArExp ArExp
                   | Number Int
     #+end_src
     now to evaluate said expressions
     #+begin_src haskell
       evaluate :: ArExp -> Int
       evaluate (Mul e1 e2) = evaluate e1 * evaluate e2
       evaluate (Add e1 e2) = evaluate e1 + evaluate e2
       evaluate (Number n) = n
     #+end_src
     pretty print
     #+begin_src haskell
       -- just do the pattern above only construct/concat strings
     #+end_src
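     a sketch of that pretty printer (the name =toString= is my own):
     #+begin_src haskell
       toString :: ArExp -> String
       toString (Mul e1 e2) = "(" ++ toString e1 ++ " * " ++ toString e2 ++ ")"
       toString (Add e1 e2) = "(" ++ toString e1 ++ " + " ++ toString e2 ++ ")"
       toString (Number n)  = show n
     #+end_src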
     
** 2009-09-22 Tue
parameters to type constructors

another way to define =Point2D=
#+begin_src haskell
  type Pair a = (a,a)
  type Point2D = Pair Float
#+end_src

can also do this with data (remember =data= actually defines a new
structure while =type= defines something more like an alias)
#+begin_src haskell
  data BinIntTree = Leaf Int
                  | Node BinIntTree BinIntTree
  -- or can be written as
  data BinTree a = Leaf a
                 | Node (BinTree a) (BinTree a)
  type BinIntTree = BinTree Int
#+end_src

if we wanted to define lists
#+begin_src haskell
  data MyList a = Empty
                | Nonempty a (MyList a)
  -- to construct values of this type
  list1 :: MyList Char
  list1 = Nonempty 'r' (Nonempty 'e' (Nonempty '1' Empty))
#+end_src

lists in the prelude (same functionality, different syntax)
#+begin_src haskell
  -- this isn't really legal haskell
  data [] a = []
            | a : ([] a)
  -- to construct values of this type
  list2 :: [Char]
  list2 = 'r' : ('e' : ('1' : []))
#+end_src

*** =foldr= abstracts the structure of recursive list traversal
=foldr= is a higher order function that captures this common form
#+begin_src haskell
  f [] = v
  f (x:xs) = x `op` f xs
#+end_src
so
#+begin_src haskell
  sum = foldr (+) 0
  product = foldr (*) 1
  and = foldr (&&) True
#+end_src

type of =foldr=
#+begin_src haskell
  foldr :: (a -> b -> b) -> b -> [a] -> b
#+end_src

when a list is thought of as a series of cons cells, then foldr can be
thought of as
- replacing each cons (=:=) with its function
- replacing the empty list with its base value
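
for example
#+begin_src haskell
  -- foldr (+) 0 replaces each (:) with (+) and the final [] with 0
  foldr (+) 0 [1, 2, 3]  ==  1 + (2 + (3 + 0))
#+end_src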

- length using foldr
     #+begin_src haskell 
        length = foldr (\ _ a -> 1 + a) 0
       -- or more satisfyingly
       length = foldr (const (1+)) 0
     #+end_src
- reverse using foldr
     #+begin_src haskell
       reverse = foldr (\ x r -> r ++ [x]) []
     #+end_src

*** function composition
#+begin_src haskell
  even :: Int -> Bool
  not :: Bool -> Bool
  -- lets use the above to define the below
  odd :: Int -> Bool
  odd n = not (even n)
  -- even better (using function composition)
  odd = not . even
#+end_src

this last example is an instance of the /point free/ style of definition

to define =.=
#+begin_src haskell
  (f . g) a = f (g a)
  (f . g) = (\ a -> f (g a))
#+end_src

** 2009-09-24 Thu
*** some more higher order functions...
- map :: applies function to list
- filter :: filters list on some function

express list comprehension using map and filter
#+begin_src haskell
  [f x | x <- xs, p x]
  -- in terms of map and filter
  (map f . filter p) xs -- could be point free
#+end_src

#<<map-through-foldr-point-free>>
#+begin_src haskell
  map :: (a -> b) -> [a] -> [b]
  map f = foldr (\ x accum -> (f x) : accum) []
  -- or more point free
  map f = foldr (\ x -> ((f x):)) []
  -- can the above be simplified to eliminate all lambdas
#+end_src

#+begin_src haskell
  -- my first guess
  filter p = foldr (\ x -> ((if (p x) then [x] else []) ++)) []
  -- better
  filter p = foldr (\ x -> (if (p x) then (x :) else id)) []
  -- if we hate if-then-else
  cond p f g x = if (p x) then (f x) else (g x)
  filter p = foldr (cond p (:) (\ _ -> id)) []
  filter p = foldr (cond p (:) (const id)) [] -- lookup const
#+end_src

algebra on functions like the above is an example of "eta-expansion"
or "eta-reduction"

back to map
#+begin_src haskell
  map f = foldr (\ x -> ((f x):)) []
  map f = foldr ((:) . f) []
#+end_src

*** equational reasoning (of the map redefinition above)
for algebraic program transformation

1) first we calculate, for any $y$
   | (y:)                 | by definition of sections                 |
   | (:) y                | by two eta expansions                     |
   | (\a -> \b -> a:b) y  | by function application (beta-reduction)  |
   | \b -> y:b            |                                           |
2) in particular =\r -> f x:r = (f x:)= where y is =f x=
   | (\x -> (f x:)) a b | by function application         |
   | (f a:) b           | by def of sections              |
   | (:) (f a) b        | making parentheses explicit     |
   | ((:) (f a)) b      | by def. of function composition |
   | (((:) . f) a) b    | dropping parens                 |
   | ((:) . f) a b      |                                 |

** 2009-09-29 Tue
a Haskell program is a set of defining equations

*** lazy evaluation
(for now) we treat evaluation as a stand-in for a formal notion of meaning

- infinite list :: (e.g. =[6..]= is list of 6 to infinity).  to
     practically use an infinite list we must only deal with some
     finite portion of the list, this can be done using the =take=
     prefix operator, or the =!!= index operator.
     #+begin_src haskell
       -- create an infinite list of factorials
       factorials = map fact [0..]
       -- first 5 factorial values
        take 5 factorials
       -- factorial of 9
       factorials !! 9
     #+end_src
- w/list comprehension :: can create infinite lists with list
     comprehension
     #+begin_src haskell
       -- squares
       squares = [x^2 | x <- [0..]]
       -- powers of an integer
       powers :: Integer -> [Integer]
        powers n = [n^x | x <- [0..]]
        -- using a nice built-in function `iterate'
        powers n = iterate (n*) 1
     #+end_src
- iterate :: how to write iterate
     #+begin_src haskell
        iterate :: (a -> a) -> a -> [a]
       iterate f x = x : iterate f (f x)
     #+end_src
- primes ::
     #+begin_src haskell
       primes :: [Integer]
        primes = [p | p <- [2..], prime p]
           where
             prime :: Integer -> Bool
             prime n = null (nonTrivialFactors n)
                 where
                    nonTrivialFactors n = filter (\ x -> n `mod` x == 0) [2..n-1]
     #+end_src
- so now what is this :: it is an infinite list of infinite lists.
     #+begin_src haskell
       map powers primes  
     #+end_src
     - how do we access elements of this list w/o evaluating the
       entirety of its predecessors?
- example :: example working with infinite lists
     #+begin_src haskell
       ints = 1 : map (1+) ints
     #+end_src

*** odds and ends
**** literate programming
     :PROPERTIES:
     :CUSTOM_ID: literate-programming
     :END:
a program is a work of literature

lhs2tex-Manual

comments
#+begin_src haskell
  -- comment
  {- literate comment -}
#+end_src

=.lhs= files can be ingested by all Haskell compilers.  These files
can be written in one of the following styles
- Bird-style :: all text is considered to be a comment; to write code,
     leave an empty line, prefix each code line with "> ", and end the
     code with another empty line
- LaTeX-style :: main body is Latex code, to write code wrap it in a
     =\begin{code}...\end{code}= LaTeX environment
- lhs2TeX :: this is a preprocessor for the /LaTeX-style/ of =.lhs=
     files, that can be used to make the haskell code look more like
     math if desired (i.e. converting -> to arrows, Greek letters to
     Greek, etc...).

From now on we will be required to submit our code in a literate
programming style.

**** fold for generic data types
Arithmetic Data Type
#+begin_src haskell
  data Expr = Num Int
            | Mul Expr Expr
  eval :: Expr -> Int
  eval (Num n) = n
  eval (Mul e1 e2) = eval e1 * eval e2
  
  -- now to define fold for the Expr type
  foldExpr :: (Int -> a) -> (a -> a -> a) -> Expr -> a
  foldExpr f g (Num n) = f n
  foldExpr f g (Mul e1 e2) = foldExpr f g e1 `g` foldExpr f g e2
  -- or using case
  foldExpr fNum fMul e =
      case e of
        Num n -> fNum n
        Mul e1 e2 -> foldExpr fNum fMul e1 `fMul` foldExpr fNum fMul e2
  
  -- evaluating with our combining function
  eval = foldExpr id (*)
  
  -- printing with the combining function
  to_string = foldExpr show (\ s1 s2 -> "("++s1++" * "++s2++")")
#+end_src

note that this can be done automatically in *generic Haskell* (feel
free to investigate extensions that provide this, but don't use them
as they're not part of the standard)

using /data genericity/ certain functions are defined automatically
whenever a data structure is defined.

** 2009-10-06 Tue
*** homework/midterm grades
homework
| median  | 100 |
| average |  94 |

midterm
| median  | 40 |
| average | 42 |

*** notes/topics from homework/midterm

**** homework problem 2
#+begin_src haskell
  data Expr = Num Integer
            | Let {var :: String, value :: Expr, body :: Expr}
  
  -- same as
  data Expr = Num Integer
            | Let String Expr Expr
#+end_src

**** midterm problem 1
#+begin_src haskell
  j j
      where
        j x = j x
#+end_src
- type is =a=
- value =diverges=

#+begin_src haskell
  f (2, [1, 2, 3])  
    where
      f (n, x:xs) = f (n-1, xs)
      f (0, xs) = xs
#+end_src
- type is =[Int]=
- value is =fails=, because eventually the list is empty and that case
  isn't matched (because when the list is empty n doesn't equal 0)

**** midterm problem 3

my solution which was wrong
#+begin_src haskell
  inits [] = [[]]
  inits (x:xs) = (\ rest -> rest ++ (head (reverse rest) : [x])) (inits xs)
#+end_src

in general on tests, you should evaluate your function by hand on a
small input to ensure that the mechanics work

**** midterm problem 4
keep around the depth information in the same way that you would have
in C?

** 2009-10-08 Thu
*** homework questions
- question 2.2.4 (the assembler) :: turn a large string (including
     newlines) into a program, the large string will look like the
     example in 2.2.5 (i.e. the output of "the disassembler" from
     question 2.2.3)

the homework introduces some new concepts
- IO :: we need to learn a little bit about IO, but don't need to
     understand monads
- PRINT :: the PRINT statements will return a type =IO()= all of which
     must be accumulated during the course of executing the program

*** reasoning about functions & data structures
**** pairs =(a,b)=
#+begin_src haskell
  -- take the first of a pair
  fst :: (a, b) -> a
  fst (x, _) = x
  -- take the second of a pair
  snd :: (a, b) -> b
  snd (_, y) = y
#+end_src

- pair :: no special function is needed to create a pair, rather the
     function =pair= will be used to do something more subtle (apply a
     pair of functions to a value).
     #+begin_src haskell
       pair :: (a -> b, a -> c) -> a -> (b, c)
       pair (f, g) x = (f x, g x)
     #+end_src
- cross :: like pair but applies a pair of functions to a pair of
     values
     #+begin_src haskell
       cross :: (a -> c, b -> d) -> (a, b) -> (c, d)
       cross (f, g) (x, y) = (f x, g y)
     #+end_src


can we define =cross= in terms of =pair=? of course we can.
#+begin_src haskell
  cross (f, g) = pair (f . fst, g . snd)
#+end_src

now lets prove some properties of =pair= and =cross=

- prove =fst . pair (f, g) = f= :: in the calculational proof style
     #+begin_src haskell
       -- being painfully verbose
       (fst . pair (f, g)) x
       == {- definition of function composition -}
       fst (pair (f, g) x)
       == {- definition of pair -}
       fst (f x, g x)
       == {- definition of fst -}
       f x
     #+end_src
     we have established, by calculation, that for any arbitrary =x=
     #+begin_src haskell
        (fst . pair (f, g)) x == f x
     #+end_src
     By extensionality
     #+begin_src haskell
       fst . pair (f, g) == f
     #+end_src

**** pair exercises
     :PROPERTIES:
     :CUSTOM_ID: pair-exercises
     :END:
- prove that =snd . pair (f, g) = g= :: we know that \forall =x=
     #+begin_src haskell
       -- snd . pair (f, g) = g
       pair (f, g) x == (f x, g x)  -- by the definition of pair
       snd (f x, g x) == g x -- by the definition of snd
       -- so for any x
       (snd . pair (f, g)) x == g x
     #+end_src
- prove that =pair (f, g) . h = pair (f . h, g . h)= :: \forall =x=
     #+begin_src haskell
       -- pair (f, g) . h = pair (f . h, g . h)
       (pair (f, g) . h) x == pair (f, g) (h x)    -- by the definition of (.)
        pair (f, g) (h x) == (f (h x), g (h x))         -- by the definition of pair
        (f (h x), g (h x)) == ((f . h) x, (g . h) x)    -- by the definition of (.)
        ((f . h) x, (g . h) x) == pair (f . h, g . h) x -- by the definition of pair
     #+end_src
- prove that =cross (f, g) . pair (h, k) = pair (f . h, g . k)= ::
     \forall =x=
     #+begin_src haskell
        -- cross (f, g) . pair (h, k) = pair (f . h, g . k)
        (cross (f, g) . pair (h, k)) x
        == cross (f, g) (pair (h, k) x)  -- definition of (.)
        == cross (f, g) (h x, k x)       -- definition of pair
        == (f (h x), g (k x))            -- definition of cross
        == ((f . h) x, (g . k) x)        -- definition of (.)
        == pair (f . h, g . k) x         -- definition of pair
     #+end_src
     
  
- prove that =cross (f, g) . cross (h, k) = cross (f . h, g . k)=
- what are the functions pair and cross good for?

**** laws of map
     :PROPERTIES:
     :CUSTOM_ID: laws-of-map
     :END:
Laws of =map=
1) =map id = id=, we must be careful here as the second =id= is an
   =id= over lists
2) =map (f . g) = map f . map g= so for example
   #+begin_src haskell
     map (square . succ) [1, 2] == [4, 9]
   #+end_src
   so the left hand side of this rule is one list traversal of a
   complex function (=square . succ=) and the right hand side is two
   list traversals applying two simpler functions =succ= and
   =square=.  Note that the compiler may prefer the left hand side to
   the right hand side because the right hand side requires 2 list
   traversals (time) and an intermediate data structure (space).
3) =map f . tail = tail . map f= so =map f= and =tail= commute
4) =f . head = head . map f=
5) =map f . reverse = reverse . map f=
6) =map f (xs ++ ys) = map f xs ++ map f ys=
7) =map f . concat = concat . map (map f)=

**** filter
     :PROPERTIES:
     :CUSTOM_ID: filter-properties
     :END:

1) =filter p . concat = concat . map (filter p)=
2) =filter p . filter q = filter (\ x -> p x && q x)=

**** typing exercise
what are these types?

#+begin_src haskell
  map (map square) -- [[1, 2], [3, 4]] -> [[1, 4], [9, 16]]
  -- we need to know the type of square
  square :: Int -> Int
  -- so
  map square :: [Int] -> [Int]
  map (map square) :: [[Int]] -> [[Int]]
#+end_src

#+begin_src haskell
  map map :: [a -> b] -> [[a] -> [b]]
#+end_src
to explain the above just cram the entire type of =map= into the =(a
-> b)= portion of the type of map

** 2009-10-13 Tue
Depak is lecturer

*** unification

Unification (or equation solving, or constraint solving): this topic
comes from Herbrand's PhD thesis -- proving that first order predicate
calculus is complete -- and was then "rediscovered" by Prawitz and
Robinson (the resolution principle; initially an exponential
algorithm, eventually quadratic, and by others down to n log(n) or n \alpha(n)).

- \alpha(n) :: inverse of the Ackermann function

Universe
- constants
- variables
- function symbols (making no assumptions of properties)
- terms/expressions (combination of the above)

given finite set of simultaneous equations (composed of terms) we will
1) determine if it is solvable (unifiable): whether \exists a
   substitution \sigma (values for the variables) s.t. when \sigma is
   applied the two sides of each equation become equal (a unifier)
2) if so find the most general solution (most general unifier)

possibilities
1) not solvable
2) solvable with single solution
3) solvable with \infty solutions

depending on the nondeterministic choices (which equation/variable to
solve first) it is possible (in the case of \infty solutions) to
arrive at *syntactically* different solutions which will still be
equivalent.

operations on the system of equations must be both
- sound :: no operation adds new solutions
- complete :: no operation removes possible solutions
basically you should be operating equally on either side of the
equality relation in the equations

*** solvability when dealing with terms and expressions

No Solutions
1) $a() = b()$ two functions which are not equal
2) $f(x, y) = g(a(), x, y)$ two functions which are not equal
3) $x = f(x, y)$ can't be solved since $x$ occurs on both sides (the "occurs check")

Solutions
1) $x = y$
2) $x = f(...)$
3) $t = t$
4) $f(s_1,...,s_k) = f(t_1,...,t_k)$ solvable iff $\forall i \in
   [1..k]$ $s_i = t_i$ is solvable

*** unification algorithm

at every point the state of the system will include
| $E = \{s_1 = t_1, ..., s_k = t_k\}$        | set of equations |
| $[x_1 \leftarrow g_1, x_2 \leftarrow g_2]$ | partial solution |

at each step there are two possibilities (a Haskell sketch of the full
algorithm follows this list)
1) no solution if "function clash" (2 above) or "occurs check" (3 above)
2) can progress by removing equations from system and possibly adding
   terms to the partial solution
   - can remove equation of the form $t = t$
   - can remove equation of the form $x = y$ and add $x \leftarrow y$
     to partial solution
   - you have $x = f(...)$ like 2 in the solutions above and add $x
     \leftarrow f(...)$ to the partial solution
   - you have case 4 above in the solutions section
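
a minimal Haskell sketch of the algorithm above; the =Term= type, the
use of =Maybe=, and all of the names here are my own choices, not a
required interface:
#+begin_src haskell
  data Term = Var String
            | Fun String [Term]           -- constants are 0-ary functions
            deriving (Eq, Show)

  type Subst = [(String, Term)]           -- the partial solution [x <- g, ...]

  apply :: Subst -> Term -> Term
  apply s (Var x)    = maybe (Var x) (apply s) (lookup x s)
  apply s (Fun f ts) = Fun f (map (apply s) ts)

  occurs :: String -> Term -> Bool        -- does x occur in the term?
  occurs x (Var y)    = x == y
  occurs x (Fun _ ts) = any (occurs x) ts

  unify :: [(Term, Term)] -> Maybe Subst
  unify [] = Just []
  unify ((s, t) : es) = case (s, t) of
      (Var x, Var y) | x == y -> unify es                     -- drop x = x
      (Var x, _)
        | occurs x t          -> Nothing                      -- occurs check fails
        | otherwise           -> bind x t                     -- x = t: record it
      (_, Var _)              -> unify ((t, s) : es)          -- flip f(...) = x around
      (Fun f ss, Fun g ts)
        | f == g && length ss == length ts -> unify (zip ss ts ++ es)
        | otherwise                        -> Nothing         -- function clash
    where
      -- add x <- t to the partial solution, substituting into the rest
      bind x t' = do
          rest <- unify [ (sub p, sub q) | (p, q) <- es ]
          return ((x, apply rest t') : rest)
        where sub = apply [(x, t')]
#+end_src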

** 2009-10-20 Tue
when checking whether a program has a type we will be implementing
unification-type algorithms

*thus begins the second part of the course*
- from textbook
- more interactive

#+begin_example 
  t ::=
    true
    false
    if t then t else t
    0
    succ t
    pred t
    iszero t
#+end_example

- in the above =t= is a placeholder for any term
- the above is shorthand for an inductive definition of a set of terms
- context free grammars define sets of strings
- the above defines terms (tree-like structures) rather than strings,
  these kinds of grammars may be called /abstract/ grammars, as
  opposed to the context free grammars which are concerned with the
  string representations of the programs as written by the programmer
- for now we won't worry about the parsing issues, but only with the
  abstract trees
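
as a concrete illustration, the abstract grammar above can be
transcribed into a Haskell data type (a sketch; the constructor names
are my own):
#+begin_src haskell
  data Term = TTrue
            | TFalse
            | TIf Term Term Term
            | TZero
            | TSucc Term
            | TPred Term
            | TIsZero Term
            deriving (Eq, Show)
#+end_src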

type systems will help to divide terms into those that are definitely
meaningless and those that might have meaning

the standard notation for these inductive set definitions of the form
if $t_1 \in T$ then $succ(t_1) \in T$ is

$$\frac{t_1 \in T}{succ(t_1) \in T}$$

#<<concrete-set-construction>>
a more concrete method of constructing the set of terms
- $S_0 = \emptyset$
- $S_{i+1} =$
  - $\{true, false, 0\} \cup$
  - $\{succ(t_1), pred(t_1), iszero(t_1) | t_1 \in S_i \} \cup$
  - $\{if t_1 then t_2 else t_3 | t_1, t_2, t_3 \in S_i\}$
- $S = \cup_i S_i$

then applying these rules
- S_0 = empty-set
- S_1 = {true, false, 0}
- S_2 = ...

*** from syntax to semantics
evaluation is moving from sets of terms to sets of values

#+begin_example 
  -- terms
  t ::=
    true
    false
    if t then t else t
  
  -- values
  v ::=
    true
    false
#+end_example

evaluation rules (a Haskell sketch follows the list)
- if true then t_1 else t_2 $\rightarrow$ t_1
- if false then t_1 else t_2 $\rightarrow$ t_2
- $$\frac{t_1 \rightarrow t_1'}{if\,t_1\,then\,t_2\,else\,t_3 \rightarrow if\,t_1'\,then\,t_2\,else\,t_3}$$
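
a sketch of the single-step relation for this boolean fragment in
Haskell, reusing the =Term= type sketched above (again, the names are
mine):
#+begin_src haskell
  eval1 :: Term -> Maybe Term
  eval1 (TIf TTrue  t2 _ ) = Just t2                    -- the first rule
  eval1 (TIf TFalse _  t3) = Just t3                    -- the second rule
  eval1 (TIf t1 t2 t3)     =                            -- the congruence rule
      fmap (\t1' -> TIf t1' t2 t3) (eval1 t1)
  eval1 _                  = Nothing                    -- no rule applies

  -- repeated evaluation: apply eval1 until no rule applies
  evalStar :: Term -> Term
  evalStar t = maybe t evalStar (eval1 t)
#+end_src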

** 2009-10-22 Thu
*** homework2 notes
- definitely use LaTeX, lhs, etc...
- include test code in the resulting pdf
- include discussion in the resulting pdf
- consider effort on part of the reader/grader
- could even include the test log into the resulting pdf file
- need to be able to print a single pdf file
- minimum font is 11pt
- use the following LaTeX font =\usepackage{mathpazo}=
- don't use ellipses in a proof
- don't use a narrative style in a proof (as much as possible the
  proof should be a system of equations)
- these calculational proofs are intended to be mechanically checkable
  (in style at least if not in practice)

here is an example =.lhs= file excerpt-CircuitDesigner.lhs

*** current homework notes
- we can use our own types if we prefer
- fail with cycle is just an occurs check failure

*** operational semantics
- =if t1 then t2 else t3= is *strict* in =t1= (meaning it must be
  known/evaluated) but is *lazy* in =t2= and =t3= in that they may not
  be evaluated
- rules for if/then/else
  - (E-IFTRUE) $if\,true\,then\,t_2\,else\,t_3\,\rightarrow\,t_2$
  - (E-IFFALSE) $if\,false\,then\,t_2\,else\,t_3\,\rightarrow\,t_3$
  - (E-IF) $$\frac{t_1 \rightarrow t'}{if\,t_1\,then\,t_2\,else\,t_3\,\rightarrow\,if\,t'\,then\,t_2\,else\,t_3}$$
- repeat evaluation =->*= is the transitive, reflexive closure of the
  single-step evaluation relation
- the /normal form/ of a term is what it ultimately evaluates to

actually stepping-through/evaluating a program
1) by E-IF, with E-IFTRUE ($if\,true\,then\,false\,else\,true \rightarrow false$) as its premise:
   $$if\,(if\,true\,then\,false\,else\,true)\,then\,true\,else\,(if\,false\,then\,false\,else\,true)$$
   $$\rightarrow if\,false\,then\,true\,else\,(if\,false\,then\,false\,else\,true)$$
2) by E-IFFALSE: $\rightarrow if\,false\,then\,false\,else\,true$
3) by E-IFFALSE: $\rightarrow true$

*** now adding numbers to our simple boolean language
sadly we now have /normal forms/ that we don't like, for example
=iszero false=.  Adding a type system to our system will allow us to
find these /stuck/ situations before evaluating.

- well typed :: evaluation will not result in a /stuck/ term

** 2009-10-27 Tue
homework2 back today average score is ~90

recall the language of arithmetic expressions
#+begin_example 
  t ::= true
        false
        if t then t else t
        0
        succ t
        pred t
        iszero t
#+end_example

inductive function definition examples
- size :: returns the "size" of a term
     #+begin_src haskell
       size true  = 1
       size false = 1
       size 0     = 1
       size (succ t)   = 1 + size t
       size (pred t)   = 1 + size t
       size (iszero t) = 1 + size t
       size (if t1 then t2 else t3) = (size t1) + (size t2) + (size t3)
     #+end_src
- consts :: returns the set of constants present in a term
     #+begin_src haskell
       consts 0 = [0]
       consts true = [true]
       consts false = [false]
       consts (succ t) = consts t
       consts (pred t) = consts t
       consts (iszero t) = consts t
       consts (if t1 then t2 else t3) =
           consts t1 `union` consts t2 `union` consts t3
     #+end_src

*** Theorem
$|consts(t)| \leq size(t)$

- by induction on the structure of t
- base cases are $t \in \{true, false, 0\}$:
  - $|consts(t)| = |\{t\}| = 1 = size(t)$
- inductive step
  - $t \in \{succ(t_1), pred(t_1), iszero(t_1)\}$:
    - $|consts(t)| = |consts(t_1)| \leq size(t_1) < size(t)$
  - $t = if\, t_1 \, then \, t_2 \, else \, t_3$
    - $|consts(t)| = |consts(t_1) \cup consts(t_2) \cup consts(t_3)|$
    - $\leq |consts(t_1)| + |consts(t_2)| + |consts(t_3)|$
    - $\leq size(t_1) + size(t_2) + size(t_3)$
    - $< size(t)$

*** Operational Semantics
what if we want to describe a language on booleans where both cases of
the conditional are evaluated and in addition we'd like to fix the
order of evaluation to t_2 then t_3 then t_1?

| E-IF-THEN      | $$\frac{t_2\,->\,t_2'}{if\,t_1\,then\,t_2\,else\,t_3\,->\,if\,t_1\,then\,t_2'\,else\,t_3}$$     |
| E-IF-ELSE T    | $$\frac{t_3\,->\,t_3'}{if\,t_1\,then\,true\,else\,t_3\,->\,if\,t_1\,then\,true\,else\,t_3'}$$   |
| E-IF-ELSE F    | $$\frac{t_3\,->\,t_3'}{if\,t_1\,then\,false\,else\,t_3\,->\,if\,t_1\,then\,false\,else\,t_3'}$$ |
| E-IF-GUARD TT  | $$\frac{t_1\,->\,t_1'}{if\,t_1\,then\,true\,else\,true\,->\,if\,t_1'\,then\,true\,else\,true}$$ |
| and three more | ...                                                                                             |
| E-TRUE TT      | $if\,true\,then\,true\,else\,true\,->\,true$                                                    |
| and eight more | ...                                                                                             |

we should really have a metavariable =v= over values

| E-IF-THEN  | $$\frac{t_2\,->\,t_2'}{if\,t_1\,then\,t_2\,else\,t_3\,->\,if\,t_1\,then\,t_2'\,else\,t_3}$$ |
| E-IF-ELSE  | $$\frac{t_3\,->\,t_3'}{if\,t_1\,then\,v_2\,else\,t_3\,->\,if\,t_1\,then\,v_2\,else\,t_3'}$$ |
| E-IF-GUARD | $$\frac{t_1\,->\,t_1'}{if\,t_1\,then\,v_2\,else\,v_3\,->\,if\,t_1'\,then\,v_2\,else\,v_3}$$ |
| E-TRUE     | $if\,true\,then\,v_2\,else\,v_3\,->\,v_2$                                                   |
| E-FALSE    | $if\,false\,then\,v_2\,else\,v_3\,->\,v_3$                                                  |

*** Lets type our language of arithmetic
the goal here being to eliminate normal terms which are not values
(e.g. =succ(false)=).

we'd prefer types to be a relation rather than a partition, so that
terms can have multiple types (e.g. 1 is a natural number and an int.)

=:= will be our type assignment operator; =t:T= means =t= has type =T=

#+begin_example
  T := Nat
       Bool
#+end_example

rules for values
- $false : Bool$
- $true : Bool$
- $0 : Nat$

now rules for compound terms (a Haskell sketch of the resulting
checker follows the list)
- $$\frac{t_1 : Nat}{ succ(t_1) : Nat }$$
- $$\frac{t_1 : Nat}{ pred(t_1) : Nat }$$
- $$\frac{t_1 : Nat}{ iszero(t_1) : Bool }$$
- $$\frac{t_1 : Bool \quad t_2 : T \quad t_3 : T}{if\,t_1\,then\,t_2\,else\,t_3 : T}$$
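
a sketch of a checker for these rules in Haskell, reusing the =Term=
type sketched in the 2009-10-20 notes (the =Ty= type and the names are
my own):
#+begin_src haskell
  data Ty = TyNat | TyBool deriving (Eq, Show)

  typeof :: Term -> Maybe Ty
  typeof TTrue          = Just TyBool
  typeof TFalse         = Just TyBool
  typeof TZero          = Just TyNat
  typeof (TSucc t1)     = check t1 TyNat >> Just TyNat
  typeof (TPred t1)     = check t1 TyNat >> Just TyNat
  typeof (TIsZero t1)   = check t1 TyNat >> Just TyBool
  typeof (TIf t1 t2 t3) = do
      check t1 TyBool                       -- the guard must be Bool
      ty2 <- typeof t2
      ty3 <- typeof t3
      if ty2 == ty3 then Just ty2 else Nothing

  -- succeed only when t has exactly the expected type
  check :: Term -> Ty -> Maybe ()
  check t ty = typeof t >>= \ty' -> if ty' == ty then Just () else Nothing
#+end_src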

** 2009-10-29 Thu
lambda calculus -- there are only functions and function definitions.

what special things can you do in a language where you know that all
terms will terminate

*** lambda calculus
:  t ::= x
:        \x . t
:        t t

Reduction rule -- $\beta$-reduction rule
$$(\lambda x . t_{12}) t_2 \rightarrow [x \mapsto t_2] t_{12}$$

- will not get more than one normal form
- is possible to end up in an infinite re-write loop

- normal order strategy -- always work with the outermost redex
  - $$(\lambda x . x) ((\lambda x . x) (\lambda z . (\lambda x . x) z))$$
  - $$(\lambda x . x) (\lambda z . (\lambda x . x) z)$$
  - $$(\lambda z . z)$$
- call by name strategy -- don't dive into a lambda
  - $$(\lambda x . x) ((\lambda x . x) (\lambda z . (\lambda x . x) z))$$
  - $$(\lambda x . x) (\lambda z . (\lambda x . x) z)$$
  - $$(\lambda z . (\lambda x . x) z)$$
- call by value -- only values can be substituted into a lambda
  (/abstractions/ are considered to be values)
  - $$(\lambda x . x) ((\lambda x . x) (\lambda z . (\lambda x . x) z))$$
  - $$(\lambda x . x) (\lambda z . (\lambda x . x) z)$$
  - $$(\lambda z . (\lambda x . x) z)$$

note that these strategies can result in different normal forms (here
normal order reaches $\lambda z . z$ while the other two stop at
$\lambda z . (\lambda x . x) z$) and define different lambda
calculi.  /call by name/ can vaguely be
thought of as a strategy in which parts of the program are set aside
which can be compiled into machine code (because they won't serve as
data at any point).  /call by name/ is the flavor of lambda calculus
which lives in the core of Haskell -- with the addition of laziness
which makes it /call by need/.

- call by need :: haskell
- call by value :: java, C, ML, etc...

from here on out we will restrict ourselves to the /call by value/
form of lambda calculus

*** untyped, /call-by-value/, $\lambda$-calculus
terms
: t ::= x
:       \x . t
:       t t

values
: v ::= \x . t

evaluation relation
- E-APPABS :: $(\lambda x . t_{12})\, v_2 \rightarrow [x \mapsto v_2] t_{12}$

staging rules
- E-APP1 :: $\frac{t_1 \rightarrow t_1'}{t_1 t_2 \rightarrow t_1' t_2}$
- E-APP2 :: $\frac{t_2 \rightarrow t_2'}{v_1 t_2 \rightarrow v_1 t_2'}$

tricky spot (what does $[x \mapsto t]$ mean): only /free/ occurrences
of $x$ are replaced.  the /free/ occurrences are those that are not
bound by an enclosing $\lambda x$.

** 2009-11-03 Tue
the next homework will be out soon...

*** lambda calculus (Church constants)
- $\lambda t. \lambda f. t$ -- true
- $\lambda t. \lambda f. f$ -- false

the above are like conditionals
- true A B --> A
- false A B --> B

#+begin_quote 
  Church could not discover an encoding of the predecessor function,
  it was discovered by Kleene while he was having a tooth extracted.

  -- Legend
#+end_quote

- $\lambda b . \lambda c. b c (\lambda t . \lambda f . f)$ -- and
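
these encodings can be transcribed into Haskell to play with
(illustrative only; Haskell will infer ordinary rank-1 types for these
definitions, and the names are mine):
#+begin_src haskell
  true'  t f = t
  false' t f = f
  and'   b c = b c false'

  -- and' true' false' "yes" "no" == "no"   -- i.e. the result acts like false'
#+end_src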

*** y combinator
$\Omega = (\lambda x. x x)(\lambda x. x x)$

#+begin_src haskell
  om = (\x -> x x)(\x -> x x)
#+end_src

results in

#+begin_example 
      Occurs check: cannot construct the infinite type: t = t -> t1
      Probable cause: `x' is applied to too many arguments
      In the expression: x x
      In the expression: (\ x -> x x) (\ x -> x x)
#+end_example

$\Omega$ never converges, it has no normal form

** 2009-11-05 Thu
*** formal definition of our lambda calculus / notation
very formal definition of λ-terms
- infinite sequence of expressions called variables:
  $$V_0, V_{00}, V_{000}, ...$$
- finite, infinite or empty sequence of expressions called atomic
  constants which will be different from variables. (if there are none
  of these we have a /pure/ λ calculus)

given the variables and atomic constants we will define λ terms
- all variables and atomic constants are λ terms and we will call
  these /atoms/
- if M and N are λ-terms, then (M N) is a λ term, this term will
  be called an application
- if M is a λ-term and x is a variable, then (λ x . M) is a λ term,
  this term will be called an abstraction

Notation
- Capital letters will denote arbitrary items
- Letters will denote variables

Parenthesis will be omitted as follows
- =MNPQ= denotes =(((MN)P)Q)=
- =λ x . P Q= denotes (λ x (P Q))
- =λ x_1 x_2 ... x_n . M= denotes =(λ x_1 .(λ x_2 . (... (λ x_n . M) ...)))=

*** definitions
P,Q λ-terms

P occurs in Q, or P is a subterm of Q, or Q contains P

inductive definition:
- P occurs in P
- if P occurs in M or in N, then P occurs in (MN)
- if P occurs in M or P==x, then P occurs in (λx.M)

Exercise:
- ((xy)(λx.(xy)))
  - three occurrences of x
  - two occurrences of (xy)
- (λxy.xy)
  - only one occurrence of (xy)

/scope/
- Def :: for a particular occurrence of λx.M in a term P, the
     occurrence of M is called the /scope/ of the occurrence of λx.
- Example :: in P==(λy.yx (λx.y(λy.z)x))vw
     - the scope of the leftmost λy is yx (λx.y(λy.z)x)
     - the scope of λx is y(λy.z)x

An occurrence of a variable x in a term P is:
- bound if it is in the scope of a λx in P
- bound and binding if it is the x in λx
- free otherwise

/bound/ and /free/ in a term
- x is a bound variable of a term P if x has at least one binding
  occurrence in P
- x is a free variable of a term P if x has at least one free
  occurrence in P
- FV(P) is the set of free variables of the term P
- the term P is closed if it has no free variables

** 2009-11-10 Tue
*** homework notes
- is there something more elegant than string comparison for the
  highlighting? yes -- there is an easier way if the function
  understands what the highlighting means
- when asking questions we should employ the phrasing "why is _ _"
- scanning should happen in two phases
  1) all identifiers taken to be identifiers
  2) see if these presumed identifiers *are* identifiers or if they
     are keywords
- although we may not alter/remove the given function, we may create
  our own additional functions
- note that the parenthesis *are* part of the grammar and *must* be
  included (the same is true of =else= and =term= for the =if=
  construct)
- the only place we really have leeway is in our acceptable
  identifiers
- we don't want to return *any* identifiers to the parser

*** formal definitions
- substitution :: for any M,N,x we define [N/x]M to be the result of
     substituting N for every free occurrence of x in M, and changing
     bound variables to avoid clashes. other notations include:
     - [x $\mapsto$ N]M
     - [x/N]M
     - [N/x]M
- induction on M :: the defining rules (a Haskell sketch follows this list)
     - [N/x]x == N
     - [N/x]a == a \forall atoms a != x
     - [N/x](PQ) == ([N/x]P [N/x]Q)
     - [N/x](λx.P) == (λx.P)
     - [N/x](λy.P) == (λy.P) if x $\not\in$ FV(P)
     - [N/x](λy.P) == (λy.[N/x]P) if x $\in$ FV(P) and y $\not\in$ FV(N)
     - [N/x](λy.P) == (λz.[N/x][z/y]P) if x $\in$ FV(P) and y $\in$ FV(N) where z "is fresh"
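
a Haskell sketch of capture-avoiding substitution following these
rules (the =Lam= type and the =fresh= helper are my own names, not the
homework's interface):
#+begin_src haskell
  data Lam = V String
           | App Lam Lam
           | Abs String Lam
           deriving (Eq, Show)

  fv :: Lam -> [String]                    -- free variables, FV(P)
  fv (V x)     = [x]
  fv (App m n) = fv m ++ fv n
  fv (Abs x m) = filter (/= x) (fv m)

  -- a crude way to get a "fresh" z: keep priming until the name is unused
  fresh :: [String] -> String -> String
  fresh used x = head [ x' | x' <- iterate (++ "'") x, x' `notElem` used ]

  subst :: Lam -> String -> Lam -> Lam     -- subst n x m  is  [N/x]M
  subst n x (V y)
    | y == x           = n
    | otherwise        = V y
  subst n x (App p q)  = App (subst n x p) (subst n x q)
  subst n x (Abs y p)
    | y == x           = Abs y p                               -- x is bound here
    | y `notElem` fv n = Abs y (subst n x p)
    | otherwise        = Abs z (subst n x (subst (V z) y p))   -- rename y to z first
    where z = fresh (fv n ++ fv p ++ [x]) y
#+end_src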

** 2009-11-12 Thu
*** homework5
simple extension of homework4, λ calculus with reductions etc...

*** language of Booleans and Numbers
- avoiding non-meaningful(untyped) normal terms
- terms which *do* have types/meaning *will* evaluate and will do so
  to their type

 we would like to find types w/o evaluating the term, so what do we do?
- type checking should be guaranteed to terminate
- should be quick (not so true in ML or Haskell)

new syntactic type
#+begin_example 
  T ::= Bool
        Nat
#+end_example

type of our syntactic elements
- true : Bool
- false : Bool
- 0 : Nat
- $$\frac{t_1 : Nat}{succ t_1 : Nat}$$
- $$\frac{t_1 : Nat}{pred t_1 : Nat}$$
- $$\frac{t_1 : Nat}{iszero t_1 : Bool}$$
- $$\frac{t_1 : Bool \, t_2 : T \, t_3 : T}{if t_1 then t_2 else t_3 : T}$$

so the typing system will exclude many terms which would evaluate w/o
problem

goals
- well typed term will not get stuck, meaning it can be further
  evaluated or it is a value
- Preservation is the property that a well typed term will not
  evaluate in a single step to an ill-typed term
- Canonical forms lemma: if a value has type Bool then it is either
  true or false

** 2009-12-01 Tue
*** homework
- many people's fresh-variable generators weren't producing truly fresh variables
- overuse of isValue function
- average grade is around 70

Chapter 12 introduces a /fixed point combinator/ which allows
recursion and makes the language much more usable.  this works around
the limitation that recursion cannot be written directly in the typed
λ-calculus, since fixed point combinators (which may not terminate)
have no type there.

*** lecture
would be nice to have...
- let bindings :: which are equivalent to function applications
- pairs, tuples, records :: which can also be nicely typed

** 2009-12-03 Thu

*** getting back the midterms
- typing went well
- some evaluation uncertainties
- class average 216
- look up the normal order reduction solution in the book

- first question :: in cases where a rule has no predicates like
     E-APPABS we can write the derivation tree sideways for clarity
  - initial expression : (λx. (λz. λx. x z) x) (λx.xx)
  - applying E-APPABS : [x |-> λx. x x] ((λz. λx. x z) x)
  - resolving application : (λz. λx. x z) (λx. x x)
  - applying E-APPABS : [z |-> λx. x x] (λx. x z)
  - resolving application : λx. x (λx. x x)
- last question :: I'm not going to latex this one out...
  - Nat -> Nat -> Nat != (Nat -> Nat) -> Nat, because the convention
    is that -> associates to the right

*** λ-calculus
(bridging the gap between /core/ λ-calculus and what we need in a language)

so far we have talked about
- pairs
- tuples
- records

Today we will talk about /variant types/

when several types are all injected into a single type with labels --
these are like the =data= types in Haskell
#+begin_example 
  T ::= T+T

  t ::= inject_left t
        inject_right t
        case t of { inject_left x => t | inject_right x => t }
#+end_example

or in Haskell

#+begin_src haskell
  data T = A T1 | B T2

  x1 :: T1
  x1 = ...
  x2 :: T2
  x2 = ...
  t1 = A x1
  t2 = B x2

  -- using these types
  case t3 of
    A x1 -> ... x1 ...
    B x2 -> ... x2 ...
#+end_src

typing rules

$$\frac{\tau \vdash t_1 : T_1}{\tau \vdash inject\_left t_1 : T_1 + T_2}$$

$$\frac{\tau \vdash t_1 : T_2}{\tau \vdash inject\_right t_1 : T_1 + T_2}$$

and also a much longer one for the case rule

$$
\frac{\tau \vdash t_0 : T_1 + T_2 \quad \tau, x_1 : T_1 \vdash t_1 : T \quad \tau, x_2 : T_2 \vdash t_2 : T}
     {\tau \vdash case \, t_0 \, of \, \{ inject\_left \, x_1 => t_1 \, | \, inject\_right \, x_2 => t_2 \} : T}
$$

in our Haskell implementation of this concept we will have no idea
what T_2 will be for an expression like =inject_left x= where x has
type T_1, so we may have to force the programmer to type annotate
these expressions s.t.
- =inject_left x= becomes =inject_left x as T1 + T2=
- =inject_right x= becomes =inject_right x as T1 + T2=
and the rules above must be changed similarly

the =data= keyword in Haskell introduces both the /records/ and
/variant types/ that we have discussed thus far as well as /recursive
data types/ which we will not have time to address this year.

*** recursive evaluation (in 4 minutes)
fact n in the λ-calculus

we can't say
: fact = λn. if n == 0 then 1
:                      else n * fact (n - 1)

but in λ-calculus the =fact= above doesn't refer to the =fact= being
described, so instead we abstract over the recursive call
: λf. λn. if n == 0 then 1
:                   else n * f (n - 1)

so we end up saying...
: fact = Y (λf. λn. if n == 0 then 1
:                             else n * f (n - 1))

where =Y= is some Y-combinator, however we can't write this in our
type system, so we must add the new syntactic form =fix= s.t.
#+begin_example 
  t ::= fix t
#+end_example

with typing rule
$$
\frac{\tau \vdash t_1 : T_1 \rightarrow T_1}
     {\tau \vdash fix \, t_1 : T_1}
$$

and the evaluation rules

$$
fix (\lambda x: T1 . t2) \rightarrow [x \mapsto fix (\lambda x:T1 . t2)]t2
$$

$$
\frac{t_1 \rightarrow t_1'}
     {fix t_1 \rightarrow fix t_1'}
$$

so to perform the factorial of 6 we could do...
:    (fix  (λf. λn. if n == 0 then 1
:                             else n * f (n - 1))) 6

as these expand, =fix(...)= is inserted in place of =fact= in such a
way that the exposed λ-expressions are applied to their arguments
(initially 6) on every other step, and on the remaining steps the
=fix= substitution takes place.  basically evaluation alternates
between /expanding/ and /applying/.
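
the same trick is available directly in Haskell; a small sketch using
=fix= from =Data.Function=:
#+begin_src haskell
  import Data.Function (fix)

  fact :: Integer -> Integer
  fact = fix (\f n -> if n == 0 then 1 else n * f (n - 1))

  -- fact 6 == 720
#+end_src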

** 2009-12-10 Thu
*** type reconstruction algorithm
typing rules are similar to the evaluation rules, but in addition to
requiring assumptions about the types of subterms, there may also be
type constraints generated while typing those subterms.

$\tau \vdash t_1 : T_1 | C_1$ $\tau \vdash t_2 : T_2 | C_2$ $\tau \vdash t_3 : T_3 | C_3$

first, constraints C_1, C_2, and C_3 are generated while typing the
three parts of the if statement, and two new constraints are added
which are required by the if statement itself.

$$
\frac{C' = \{T_1 = Bool\} \cup \{T_2 = T_3\} \cup C_1 \cup C_2 \cup C_3}
     {\tau \vdash if\,t_1\,then\,t_2\,else\,t_3 : T_2 | C'}
$$

for a simpler typing constraint

$$
\frac{x:T \in \tau}
     {\tau \vdash x:T | \emptyset }
$$

while running a type constraint algorithm we will occasionally need to
generate a fresh type variable which has no constraints (e.g. when
calculating the type constraints on an application).

more formally the above /fresh type variable/ creation requires a
slightly more complicated side condition stating that the new variable
(say =X=) is not an element of any of the existing sets of variables
used in our subexpressions.

one valid output of our type reconstruction is a polymorphic function.
one subtlety of polymorphic functions is that they may be applied to
arguments of different types.

for example
: let id = λn.n in
: (id (λk.k)) (id 5)

=id= is used here with different types.

the work around for this issue -- discovered during the implementation
of ML -- is called /let polymorphism/: the type of a let-bound
variable is generalized before the body of the let is checked.  so
while the above works, the following previously equivalent expression
would be rejected
: (λid.((id(λk.k))(id 5)))(λn.n)
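
the same distinction shows up in Haskell; a small sketch (the names
here are mine):
#+begin_src haskell
  ok :: Int
  ok = let id' n = n                        -- let-bound: id' generalizes to a -> a
       in (id' (\k -> k)) (id' (5 :: Int))  -- so it can be used at two types

  -- the "equivalent" lambda-bound version is rejected by the type checker:
  -- bad = (\id' -> (id' (\k -> k)) (id' (5 :: Int))) (\n -> n)
#+end_src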

*** what we haven't covered

- meta-theory :: analysis/proofs of the properties and relationships
     of/between our types and terms
- language extensions :: we skipped many possible extensions
     - references :: clean way of adding mutable state to a core λ
          calculus
     - exceptions :: chapter 14, models for raising and handling exceptions
     - subtyping :: type classes -- complex interactions with type
          reconstruction, fundamental to Object Oriented programming.
          Type classes in Haskell have *nothing* to do with subtyping
     - recursive types :: this is not hard, but we didn't cover it.
          here's a quick introduction to recursive types in Haskell.
          #+begin_src haskell
            data Tree = Leaf | Fork Tree Tree
          #+end_src
          If we think of types as sets of possible tokens, then we can
          recursively build the set(type) of type =Tree=.  Chapters 20
          and 21 discuss how we can handle recursing into types.
     - universal polymorphism :: system F, module system, existential
          polymorphism
     - some really interesting stuff :: quantification over terms and
          types
          - dependent types
          - types over types

* questions / topics
** BFS
Note the solution shown earlier today wasn't actually correct; the
following does work, however

#+begin_src haskell
  data Tree a = Leaf a
              | Fork [Tree a]
  
  tree = (Fork [(Fork [(Leaf 1), (Fork [(Leaf 2), (Leaf 3)])]), (Leaf 4)])
  
  dfs :: Tree a -> [a]
  dfs (Leaf l) = [l]
  dfs (Fork ts) = foldr (\ t rest -> dfs t ++ rest) [] ts
  
  -- fixed version -- Thanks to Sunny
  bfs :: Tree a -> [a]
  bfs (Leaf l) = [l]
  bfs (Fork []) = []
  bfs (Fork xs) = (concat (map bfs ([y | y <- xs, isLeaf y])))++(bfs (Fork (collapseForks [y | y <- xs, not (isLeaf y)])))
         where
                 isLeaf :: Tree a -> Bool
                 isLeaf (Leaf l) = True
                 isLeaf _ = False
                 collapseForks :: [Tree a] -> [Tree a]
                 collapseForks [] = []
                 collapseForks ((Fork a):xs) = a++(collapseForks xs)
  
  -- from George
  bfs (Fork xs) = concatMap bfs ([y | y <- xs, isLeaf y] ++ [Fork (collapseForks [y | y <- xs, not (isLeaf y)])])
  bfs (Fork xs) = [l | Leaf l <- xs] ++ bfs (Fork (concat [ts | Fork ts <- xs]))
  
  -- broken version -- don't use
  bfs :: Tree a -> [a]
  bfs (Leaf l) = [l]
  bfs (Fork ts) = foldr ((++).bfs) [] ([y | y <- ts, isLeaf y] ++ [y | y <- ts, not (isLeaf y)])
      where
        isLeaf :: Tree a -> Bool
        isLeaf (Leaf l) = True
        isLeaf (Fork ts) = False
#+end_src

** Haskell IO
a list of IO actions
#+begin_src haskell
  todoList :: [IO ()]
  
  todoList = [putChar 'a',
              do putChar 'b'
                 putChar 'c',
              do c <- getChar
                 putChar c]
#+end_src

processes the IO actions with
#+begin_src haskell
  sequence_ todoList
#+end_src

** Haskell source code
http://www.cs.chalmers.se/pub/haskell/report/src/

| Prelude.hs   |
| PreludeIO.hs |

** partial lists
Haskell has three kinds of lists: finite, infinite, and partial

a partial list has an undefined tail
#+begin_src haskell
  a = 1 : 2 : 3 : undefined
#+end_src

** =null= operator
take a look at =null=

** Array
create and access an array
#+begin_src haskell
import Data.Array  -- provides array and (!)

squares = array (1,10) [(i, i*i) | i <- [1..10]]
-- then to get at a value
squares!2
#+end_src

** Monads
- http://www.haskell.org/tutorial/monads.html
- wiki:Monads_in_functional_programming

definition (outside Haskell)
- monad :: a term for God or the first being, or the totality of all
           being

#+begin_quote
There are many decent monad tutorials on haskell.org. I like Wadler's
tutorial from the 1995 Advanced Functional Programming summer school.
#+end_quote

- wadler-effects-and-monads.pdf
- haskellwiki-monad

** Macros, Meta-Haskell
they exist, but are different from those in Lisp (see meta-haskell.pdf)