## Functors, Vectors, and Quantum Circuits

Vectors are dang useful things, and any energy you put into them seems to pay off massive dividends.

Vectors and Linear Algebra are useful for:

• 2D, 3D, 4D geometry stuff. Computer graphics, physics etc.
• Least Squares Fitting
• Solving discretized PDEs
• Quantum Mechanics
• Analysis of Linear Dynamical Systems
• Probabilistic Transition Matrices

There are certain analogies between Haskell Functors and Vectors that correspond to a style of computational vector mathematics that I think is pretty cool and don’t see talked about much.

Due to the expressivity of its type system, Haskell has a first class notion of container that many other languages don’t. In particular, I’m referring to the fact that Haskell has higher kinded types * -> * (types parametrized on other types) that you can refer to directly without first applying them to a type. Examples in the standard library include Maybe, [], Identity, Const b, and Either b. Much more vector-y feeling examples can be found in Kmett’s linear package: V0, V1, V2, V3, V4. For example, consider the 4-dimensional vector type V4.
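In code, a simplified version of the definition from the linear package (the real one adds strictness annotations and a pile of instances) looks like this:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A 4-dimensional vector as a plain product type, one slot per component.
-- This mirrors V4 from the linear package (simplified: no strictness
-- annotations or the many extra instances linear provides).
data V4 a = V4 a a a a deriving (Show, Eq, Functor)

-- A concrete vector of Doubles: the scalar type fills the hole.
v :: V4 Double
v = V4 1 2 3 4
```

The higher kinded type V4 by itself, with the scalar slot left open, is the "shape" we'll be manipulating below.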

This really isn’t such a strange, esoteric thing as it may appear. You wouldn’t blink an eye at the type

in some other language. What makes Haskell special is how compositional and generic it is. We can build thousand element structs with ease via composition. What we have here is an alternative to the paradigm of computational vectors ~ arrays. Instead we have computational vectors ~ structs. In principle, I see no reason why this couldn’t be as fast as arrays, although with current compiler expectations it probably isn’t.

Monoidal categories are a mathematical structure that models this analogy well. They were designed by mathematicians for aesthetic elegance, and it seems plausible that following their example leads us to interesting, useful, and pleasant vector combinators. And I personally find something that tickles me about category theory.

So to get started, let’s talk a bit about functors.

## The Algebra of Functors

Functors in Haskell are a typeclass for containers. They allow you to map functions over all the items in the container. They are related to the categorical notion of functor, which is a mapping between categories.

You can lift the product and sum of types to the product and sum of Functors which you may find in Data.Functor.Product and Data.Functor.Sum. This is analogous to the lifting of ordinary addition and multiplication to the addition and multiplication of polynomials, which are kind of like numbers with a “hole”.
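A small demonstration of these two liftings, using the wrappers from base:

```haskell
import Data.Functor.Product (Product (..))
import Data.Functor.Sum (Sum (..))

-- Product pairs up two containers holding the same element type.
both :: Product Maybe [] Int
both = Pair (Just 1) [2, 3]

-- Sum holds one container or the other.
oneOf :: Sum Maybe [] Int
oneOf = InL (Just 1)

-- A single fmap maps over both halves of the pair at once.
bumped :: Product Maybe [] Int
bumped = fmap (+ 10) both
```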

Functors also compose. A container of containers of a is still a container of a. We can form composite containers by using the Compose newtype wrapper.

When you use this Compose newtype, instead of having to address the individual elements by using fmap twice, a single application of fmap will teleport you through both layers of the container.
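For instance, compare mapping through the wrapper with mapping through the bare nested container:

```haskell
import Data.Functor.Compose (Compose (..))

nested :: Compose [] Maybe Int
nested = Compose [Just 1, Nothing, Just 3]

-- One fmap reaches through both layers at once...
bumped :: Compose [] Maybe Int
bumped = fmap (+ 1) nested

-- ...whereas without the wrapper you have to fmap twice.
bumped' :: [Maybe Int]
bumped' = fmap (fmap (+ 1)) [Just 1, Nothing, Just 3]
```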

Product, Sum, and Compose are all binary operators on functors. Each of these type constructors has kind (* -> *) -> (* -> *) -> (* -> *).

Some other important functors from the algebra of types perspective are Const Void a, Const () a, and Identity a. These are the identity elements for Sum, Product, and Compose respectively.

You can define mappings between containers that don’t depend on the specifics of their contents. These mappings can only rearrange, copy and forget items of their contained type. This can be enforced at the type level by the polymorphic type signature
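That signature is the usual natural transformation type (the operator name `~>` is a common convention, not forced by anything):

```haskell
{-# LANGUAGE RankNTypes, TypeOperators #-}

-- A mapping between containers that cannot inspect the elements:
-- it may only rearrange, copy, or forget them.
type f ~> g = forall a. f a -> g a

-- Forgetting all but the first element.
safeHead :: [] ~> Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Copying is allowed; inspecting the element is not.
diag :: Maybe ~> []
diag Nothing  = []
diag (Just x) = [x, x]
```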

These mappings correspond in categorical terminology to natural transformations between the functors f and g. There is a category where the objects are Functors and the morphisms are natural transformations. Sum, Product, and Compose all obey the laws necessary to be a monoidal product on this category.

How the lifting of functions works for Compose is kind of neat.

Because the natural transformations require polymorphic types, when you apply ntf to fg, the polymorphic variable a in the type of ntf restricts to a ~ g a'.
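Concretely, this is a sketch of the lifting (repeating the `~>` synonym so the snippet stands alone):

```haskell
{-# LANGUAGE RankNTypes, TypeOperators #-}
import Data.Functor.Compose (Compose (..))

type f ~> g = forall a. f a -> g a

-- Apply a natural transformation to the *outer* layer of a composite.
-- Inside, the polymorphic variable in the transformation's type is
-- instantiated at (g a'), which is why the same function can act on a
-- container of containers.
liftOuter :: (f ~> h) -> Compose f g ~> Compose h g
liftOuter ntf (Compose fg) = Compose (ntf fg)

safeHead :: [] ~> Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x
```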

Product and Sum have a straightforward notion of commutativity ((a, b) is isomorphic to (b, a)). Compose is more subtle. sequenceA from the Traversable typeclass can swap the ordering of composition. sequenceA . sequenceA may or may not be the identity depending on the functors in question, so it has some flavor of a braiding operation. This is an interesting post on that topic: https://parametricity.com/posts/2015-07-18-braids.html
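A minimal demonstration of both claims, treating a list of Maybes as a composition:

```haskell
-- sequenceA swaps the order of two composed containers.
swapped :: Maybe [Int]
swapped = sequenceA [Just 1, Just 2]

-- Going back does not always return to where we started:
-- a Nothing anywhere collapses the whole structure on the way over,
-- so the round trip is not the identity for this pair of functors.
roundTrip :: [Maybe Int]
roundTrip = sequenceA (sequenceA [Just 1, Nothing])
```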

Combinators of these sorts arise in at least the following contexts:

• Data types a la carte – A systematic way of building extensible data types
• GHC Generics – A system for building generic functions that operate on data types that can be described with sums, products, recursion, and holes.
• In and around the Lens ecosystem

Also see the interesting post by Russell O’Connor on functor oriented programming http://r6.ca/blog/20171010T001746Z.html. I think the above is part of what he is referring to.

### Vector Spaces as Shape

Vector spaces are made of two parts, the shape (dimension) of the vector space and the scalar.

Just as a type of kind * -> * can be thought of as a container modulo its held type, it can also be a vector modulo its held scalar type. The higher kinded type for a vector gives an explicit slot to place the scalar type.

The standard Haskell typeclass hierarchy gives you some of the natural operations on vectors if you so choose to abuse it in that way.

• Functor ~> Scalar Multiplication: smul s = fmap (* s)
• Applicative ~> Vector Addition: vadd x y = (+) <$> x <*> y
• Traversable ~> Transposition. sequenceA has the type of transposition and works correctly for the linear style containers like V4.

The linear library does use Functor for scalar multiplication, but defines a special typeclass for addition, Additive. I think this is largely for the purpose of bringing Map-like vectors into the fold, but I’m not sure.

Once we’ve got the basics of addition and scalar multiplication down, the next thing I want is operations for combining vector spaces. Two important ones are the Kronecker product and the direct sum. In terms of indices, the Kronecker product is a space that is indexed by the cartesian product (,) of its input space indices, and the direct sum is a space indexed by the Either of its input space indices. Both are very useful constructs. I use the Kronecker product all the time when I want to work on 2D or 3D grids, for example. If you’ll excuse my python, here is a toy 2-D finite difference Laplace equation example. We can lift the 1D second derivative matrix $K = \partial_x^2$ using the Kronecker product: $K_2 = K \otimes I + I \otimes K$. The direct sum is useful as a notion of stacking matrices.

The following is perhaps the most important point of the entire post. Compose of vector functors gives the Kronecker product, and Product gives the direct sum (this can be confusing, but it’s right. Remember, the sum in direct sum refers to the indices).

We can form the Kronecker product of vectors given a Functor constraint. Notice we have two distinct but related things called kron: Kron and kron. One operates on vector spaces and the other operates on vector values.

Building vector spaces out of small combinators like V2, V4, DSum, Kron is interesting for a number of reasons.

• It is well typed. Similar to Nat indexed vectors, the types specify the size of the vector space.
We can easily describe vector spaces as powers of 2, as V16 = Kron V2 (Kron V2 (Kron V2 (Kron V2 V1))), similarly in terms of their prime factors, or we can do a binary expansion (least significant bit first) V5 = DSum V1 (Kron V2 (DSum V0 (Kron V2 V1))), or other things. We do it without going into quasi-dependently typed land or GADTs.
• It often has better semantic meaning. It is nice to say Measurements, or XPosition or something, rather than just denote the size of a vector space in terms of a nat. It is better to say a vector space is the Kron of two meaningful vector spaces than to just say it is a space of size m*n. I find it pleasant to think of the naturals as a free Semiring rather than as the Peano naturals, and I like the size of my vector space defined similarly.
• Interesting opportunities for parallelism. See Conal Elliott’s paper on scans and FFT: http://conal.net/papers/generic-parallel-functional/

#### What do linear operators look like?

In the vectors-as-shape methodology, vectors look very much like Functors. I have been tempted to lift the natural transformation type above to the following for linear operators. In a sense this works: we could implement kron, because many of the container types (V1, V2, V3, etc.) in the linear package implement Num. However, choosing Num is a problem. Why not Fractional? Why not Floating? Sometimes we want those. Why not just specifically Double? We don’t really want to lock away the scalar in a higher rank polymorphic type. We want to ensure that everyone is working in the same scalar type before allowing things to proceed.

Note also that this type does not constrain us to linearity. Can we form the Kronecker product of linear operators? Yes, but I’m not in love with it. This is not nearly so beautiful as the little natural transformation dance. This was a nice little head scratcher for me. Follow the types, my friend! I find this particularly true for uses of sequenceA.
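To make the earlier correspondences concrete, here is a self-contained sketch of scalar multiplication, vector addition, and the kron of vector values, using a hand-rolled V2 in place of linear's:

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Data.Functor.Compose (Compose (..))

data V2 a = V2 a a deriving (Show, Eq, Functor)

instance Applicative V2 where
  pure x = V2 x x
  V2 f g <*> V2 x y = V2 (f x) (g y)

-- Functor gives scalar multiplication.
smul :: (Functor f, Num a) => a -> f a -> f a
smul s = fmap (s *)

-- Applicative gives elementwise vector addition.
vadd :: (Applicative f, Num a) => f a -> f a -> f a
vadd x y = (+) <$> x <*> y

-- Kron of *spaces* is Compose; kron of *values* multiplies every pair.
kron :: (Num a, Functor f, Functor g) => f a -> g a -> Compose f g a
kron xs ys = Compose (fmap (\x -> fmap (x *) ys) xs)
```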
I find that if I want the containers swapped in ordering, sequenceA is usually the right call. It could be called transpose.

Giving the vector direct access to the scalar feels a bit off to me. I feel like it doesn’t leave enough “room” for compositionality. However, there is another possibility for a definition of morphisms that I think is rather elegant. Does this form actually enforce linearity? You may still rearrange objects. Great. You can also now add and scalar multiply them with the Additive k constraint. We also expose the scalar, so it can be enforced to be consistent.

One other interesting thing to note is that these forms allow nonlinear operations. fmap, liftU2 and liftI2 are powerful operations, but I think if we restricted Additive to just a correctly implemented scalar multiply, vector addition, and zero, we’d be good.

We can recover the previous form by instantiating k to V1. V1, the 1-d vector space, is almost a scalar and can play the scalar’s role in many situations. V1 is the unit object with respect to the monoidal product Kron.

There seems to be a missing instance to Additive that would be useful. There is probably a good reason it isn’t there, but I need it.

## Monoidal Categories

The above analogy can be put into mathematical terms by noting that both vectors and functors are monoidal categories. I talked quite a bit about monoidal categories in a previous post http://www.philipzucker.com/a-touch-of-topological-computation-3-categorical-interlude/.

Categories are the combo of a collection of objects and arrows between the objects. The arrows can compose as long as the head of one is on the same object as the tail of the other. On every object, there is always an identity arrow, which when composed will do nothing. We need a little extra spice to turn categories into monoidal categories.
One way of thinking about it is that monoidal categories have ordinary category composition and some kind of horizontal composition, putting things side to side. Ordinary composition is often doing something kind of sequential: applying a sequence of functions, or a sequence of matrices. The horizontal composition is often something parallel feeling, somehow applying the two arrows separately to separate pieces of the system.

### Why are they called Monoidal?

There is a funny game category people play where they want to lift ideas from other fields and replace the bits and pieces in such a way that the entire thing is defined in terms of categorical terminology. This is one such example.

A monoid is a binary operation that is associative and has an identity. Sometimes people are more familiar with the concept of a group. If not, ignore the next sentence. Monoids are like groups without requiring an inverse. Numbers are separately monoids under addition, multiplication, and minimization (and more), all of which are associative operations with identity (0, 1, and infinity respectively). Exponentiation is a binary operation that is not a monoid, as it isn’t associative. The Monoid typeclass in Haskell demonstrates this: http://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Monoid.html. A common example of a monoid is the list, where mempty is the empty list and mappend appends the lists.

There are different set-like intuitions for categories. One is that the objects in the category are big opaque sets. This is the case for Hask, Rel and Vect. A different intuition is that the category itself is like a set, and the objects are the elements of that set. There just so happens to be some extra structure knocking around in there: the morphisms. This is more often the feel for the examples of preorders or graphs. The word “monoidal” means that there is a binary operation on the objects.
But in the category theory aesthetic, you also need that binary operation to “play nice” with the morphisms that are hanging around too.

Functors are the first thing that has something like this. They have other properties that come along for the ride. A Functor is a map that takes objects to objects and arrows to arrows in a nice way. A binary functor takes two objects to one object, and two arrows to one arrow, in a way that plays nice (commutes) with arrow composition.

### String diagrams

String diagrams are a graphical notation for monoidal categories. Again, I discussed this more here. Morphisms are denoted by boxes. Regular composition is shown by plugging arrows together vertically. Monoidal product is denoted by putting the arrows side to side.

When I was even trying to describe what a monoidal category was, I was already using language evocative of string diagrams. You can see string diagrams in the documentation for the Arrow library. Many diagrams that people use in various fields can be formalized as the string diagrams for some monoidal category. This is a big chunk of Applied Category Theory.

This is the connection to quantum circuits, which are after all a graphical notation for very Kroneckery linear operations.

There is an annoying amount of stupid repetitive bookkeeping with the associative structure of Kron. This can hopefully be largely avoided with coerce, but I’m not sure. I was having trouble with roles when doing it generically.

### Bits and Bobbles

• Woof. This post was more draining to write than I expected. I think there is still a lot left to say. Sorry about the editing, everyone! Bits and pieces of this post are scattered in this repo.
• How would you go about this in other languages? C, Rust, OCaml, C++, Agda
• The discussion of Vect = * -> * is useful for discussion of 2-Vect, coming up next. What if we make vectors of Vect? Wacky shit.
• Metrics and dual vectors: type Dual f a = f a -> a. type Dual1 f a = forall k.
Additive k => Kron f k a -> k a
• Adjunction diagrams have cups and caps. Since we have been using representable functors, they actually have a right adjoint that is tupling with the vector space index type. This gives us something that almost feels like a metric, but a weirdly constrained metric.
• LinOp1 form is Yoneda? CPS? The universally quantified k is evocative of forall c. (a -> c) -> (b -> c).

### References

## Appendix

### Representable/Naperian Functors

Containers that are basically big product types are also known as representable, Naperian, or logarithmic. Representable places emphasis on the isomorphism between such a container type and the function type (->) i, which by the algebra of types corresponds to $a^i$ (i copies of a). They are called Naperian/Logarithmic because there is a relationship similar to exponentiation between the index type a and the container type f. If you take the Product f g, this container is indexed by (a + b) = Either a b. Compose f g is indexed by the product (a, b). (f r) ~ r^a. The arrow type is written as an exponential b^a because if you have finite enumerable types a and b, that is the number of possible tabulations available. The Sum of two representable functors is no longer representable. Regular logarithms of sums Log(f + g) do not have good identities associated with them. See the Gibbons article.

There is a good argument to be made that representable functors are a good match for vectors/well typed tensor programming. But note that there is a reasonable interpretation for container types with sum types in them. These can be thought of as subspaces, different bases, or as choices of sparsity patterns. When you define addition, you’ll need to say how these subspaces reconcile with each other (e.g. two bases at 45 degrees to each other).

### Monoidal Products on Hask

Hask is a name for the category that has objects as Haskell types and morphisms as Haskell functions.
Note that it’s a curious mixing of the type/value layers of Haskell. The objects are types, whereas the function morphisms are Haskell values. Composition is given by (.) and the identity morphisms are given by id.

In Haskell, you can compose functions, but you can also smash functions together side by side. These combinators are held in Control.Arrow. You can smash together types with the tuple (,) or with Either. Both of these are binary operators on types. The corresponding mappings on morphisms are (***) and (+++) respectively. These are binary operators on morphisms that play nice with the composition structure of Haskell.

### Monoidal Combinators of Functors

A monoidal category also has unit objects. This is given by the Identity functor. There is also a sense of associativity. It is just newtype rearrangement, so it can also be achieved with a coerce (although not polymorphically?). Similarly, we can define a monoidal category structure using Product or Sum instead of Compose. These are all actually just newtype rearrangements, so they should all just be instances of coerce, but I couldn’t get the roles to go through generically.

## Concolic Weakest Precondition is Kind of Like a Lens

That’s a mouthful.

Lenses are described as functional getters and setters, bundled together into a single type. That type does not constrain lenses to obey the usual laws of getters and setters, so we can use/abuse lens structures for nontrivial computations that have forward and backwards passes that share information. Jules Hedges in particular seems to be a proponent of this idea.

I’ve described before how to encode reverse mode automatic differentiation in this style. I have suspicions that you can make iterative LQR and Gauss-Seidel iteration have this flavor too, but I’m not super sure. My attempts ended somewhat unsatisfactorily a while back, but I think it’s not hopeless. The trouble was that you usually want the whole vector back, not just its ends.
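For reference, the simplest lens encoding (one common presentation; an assumption on my part about which form is meant here, since the lens library itself uses the van Laarhoven form) bundles the getter with the setter:

```haskell
-- A lens: from the whole s, extract the focus a together with a
-- function that writes a new focus back into the whole.
type Lens s a = s -> (a, a -> s)

-- Focus on the first component of a pair.
_fst :: Lens (a, b) a
_fst (x, y) = (x, \x' -> (x', y))

get :: Lens s a -> s -> a
get l = fst . l

set :: Lens s a -> a -> s -> s
set l a s = snd (l s) a
```

Nothing forces get and set to interact lawfully, which is exactly the freedom being exploited here.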
I’ve got another example in imperative program analysis that kind of makes sense and might be useful though. Toy repo here: https://github.com/philzook58/wp-lens

In program analysis it sometimes helps to run a program both concretely and symbolically. Concolic = CONCrete / symbOLIC. Symbolic stuff can slowly find hard things, and concrete execution just sprays super fast and can find the dumb things really quick.

We can use a lens structure to organize a DSL for describing a simple imperative language. The forward pass is for the concrete execution. The backward pass is for transforming the post condition to a pre condition in a weakest precondition analysis. Weakest precondition semantics is a way of specifying what is occurring in an imperative language. It tells how each statement transforms post conditions (predicates about the state after the execution) into pre conditions (predicates about before the execution). The concrete execution helps unroll loops and avoid branching if-then-else behavior that would make the symbolic stuff harder to process. I’ve been flipping through Dijkstra’s book on this. Interesting stuff, interesting man.

I often think of a state machine as a function taking s -> s. However, this is kind of restrictive. It is possible to have heterogeneous transformations s -> s'. Why not? I think I am often thinking about finite state machines, which we really don’t intend to have a changing state size. Perhaps we allocated new memory, or brought something into or out of scope. We could model this by assuming the memory was always there, but it seems wasteful and perhaps confusing. We would need to know a priori everything we will need, which seems like it might break compositionality.

We could model our language by making some data type like data Imp = Skip | Print String | Assign String Expr | Seq Imp Imp | ... and then building an interpreter. But we can also cut out the middle man and directly define our language using combinators.
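Here is a minimal sketch of what such combinators might look like. This is my own reconstruction with hypothetical names, not the repo's exact code: a statement is a forward state transformer paired with a backward predicate transformer.

```haskell
-- A predicate is (roughly) a set of states.
type Pred s = s -> Bool

-- A statement: run the state forward, transform postconditions backward.
type Stmt s = (s -> s, Pred s -> Pred s)

skip :: Stmt s
skip = (id, id)

-- Sequencing composes forward one way and backward the other.
seq' :: Stmt s -> Stmt s -> Stmt s
seq' (f1, b1) (f2, b2) = (f2 . f1, b1 . b2)

-- Assignment, given a whole-state update function u:
-- wp(x := e, Q) = Q[e/x], i.e. precompose the predicate with the update.
assign :: (s -> s) -> Stmt s
assign u = (u, \q -> q . u)
```

Running the forward components executes the program concretely; running the backward components pushes a postcondition back to a precondition.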
To me this has some flavor of a finally tagless style.

Likewise for expressions. Expressions evaluate to something in the context of the state (they can look up variables). And, confusingly (sorry), I think it makes sense to use Lens in its original getter/setter intent for variables, so the Lens structure is playing double duty: type Var s a = Lens' s a.

With that said, here we go. Weakest precondition can be done similarly; instead we start from the end and work backwards. Predicates are roughly sets. A simple type for sets is s -> Bool. Now, this doesn’t have much deductive power, but I think it demonstrates the principles simply. We could replace Pred with perhaps an SMT solver expression, or some data type for predicates, for which we’d need to implement things like substitution. Let’s not today.

A function is equivalent to its CPS form. This is some kind of CPS / Yoneda transformation thing. A state transformer becoming a predicate transformer is somewhat evocative of that. I’m not being very precise here at all.

Without further ado, here’s how I think a weakest precondition looks, roughly. Finally, here is a combination of the two above that uses the branching structure of the concrete execution to aid construction of the precondition. Although I haven’t expanded it out, we are using the full s t a b parametrization of lens, in the sense that states go forward and predicates come back. Neat. Useful? Me dunno.

## Flappy Bird as a Mixed Integer Program

Mixed Integer Programming is a methodology where you can specify convex (usually linear) optimization problems that include integer/boolean variables.

Flappy Bird is a game about a bird avoiding pipes.

We can use mixed integer programming to make a controller for Flappy Bird. Feel free to put this as a real-world application in your grant proposals, people.

While thinking about writing a MIP for controlling a lunar lander game, I realized how amenable to mixed integer modeling flappy bird is.
Ben and I put together the demo on Saturday. You can find his sister blog post here.

The bird is mostly in free fall, on parabolic trajectories. This is a linear dynamic, so it can directly be expressed as a linear constraint. It can discretely flap to give itself an upward impulse. This is a boolean force variable at every time step. Avoiding the ground and sky is a simple linear constraint. The bird has no control over its x motion, so that can be rolled out as concrete values. Because of this, we can check what pipes are relevant at time points in the future, and putting the bird in the gap is also a simple linear constraint.

There are several different objectives one might want to consider and weight. Perhaps you want to save the poor bird’s energy and minimize the sum of all flaps cvx.sum(flap). Or perhaps you want to really be sure it doesn’t hit any pipes by maximizing the minimum distance from any pipe. Or perhaps minimize the absolute value of the y velocity, which is a reasonable heuristic for staying in control. All are expressible as linear constraints. Perhaps you might want a weighted combo of these. All things to fiddle with.

There is a pygame flappy bird clone which made this all that much more slick. It is well written and easy to understand and modify. Actually figuring out the appropriate bounding boxes for pipe avoidance was finicky. Figuring out the right combo of bird size and pipe size is hard, combined with computer graphics and their goddamn upside down coordinate system.

We run our solver in a model predictive control configuration. Model predictive control is where you roll out a trajectory as an optimization problem and re-solve it at every action step. This turns an open loop trajectory solve into a closed loop control, at the expense of needing to solve a perhaps very complicated problem in real time. This is not always feasible.

My favorite MIP modeling tool is cvxpy. It gives you vectorized constraints and slicing, which I love.
More tools should aspire to achieve numpy-like interfaces. I’ve got lots of other blog posts using this package, which you can find in my big post list in the side-bar 👀.

The github repo for the entire code is here: https://github.com/philzook58/FlapPyBird-MPC

And here’s the guts of the controller. I think it is largely self explanatory, but here are some notes. The dt = t//10 + 1 thing is about decreasing our time resolution the further out we get from the current time step. This increases the time horizon without the extra computation cost. Intuitively, modeling accuracy further out in time should matter less. The last_solution stuff is for in case the optimization solver fails for whatever reason, in which case it’ll follow open-loop the last trajectory it got.

### Bits and Bobbles

• We changed the dynamics slightly from the python original to make it easier to model. We found this did not change the feel of the game. The old dynamics were piecewise affine though, so they are also modelable using mixed integer programming. http://groups.csail.mit.edu/robotics-center/public_papers/Marcucci18.pdf. Generally check out the papers coming out of the Tedrake group. They are sweet. Total fanboy over here.
• The controller as is is not perfect. It fails eventually, and it probably shouldn’t. A bug? Numerical problems? Bad modeling of the pipe collision? A run tends to get through about a hundred pipes before something gets goofy.
• Since we had access to the source code, we could mimic the dynamics very well. How robust is flappy bird to noise and bad modeling? We could add wind, or inaccurate pipe data.
• Unions of convex regions. Giving the flappy bird some x position control would change the nature of the problem. We could easily cut up the allowable regions of the bird into rectangles, and represent the total space as a union of convex regions, which is also MIP representable.
• Verification. The finite difference scheme used is crude.
It is conceivable for the bird to clip a pipe. Since we basically know the closed form of the trajectories, we could verify that the parabolas do not intersect the regions. For funzies, maybe use sum of squares optimization?
• Realtime MIP. Our solver isn’t quite realtime. Maybe half real time. One might pursue methods to make the mixed integer program faster. This might involve custom branching heuristics, or early stopping. If one can get the solver fast enough, you might run the solver in parallel and only query a new path plan every so often.
• 3d flappy bird? Let the bird rotate? What about a platformer (Mario) or lunar lander? All are pretty interesting piecewise affine systems.
• Is this the best way to do this? Yes and no. Other ways to do this might involve doing some machine learning, or hardcoding a controller that monitors the pipe locations and has some simple feedback. You can find some among the forks of FlapPyBird. I have no doubt that you could write these quickly, fiddle with them, and get them to work better and faster than this MIP controller. However, for me there is a difference between could work and should work. You can come up with a thousand bizarre schemes that could work. RL algorithms fall in this camp. But I have every reason to believe the MIP controller should work, which makes it easier to troubleshoot.

## Linear Algebra of Types

It gives my brain a pleasant thrum to learn new mathematics which mimics the algebra I learned in middle school. Basically this means that the new system has operations with properties that match those of regular numbers as much as possible. Two pretty important operations are addition and multiplication, with the properties of distributivity and associativity. Roughly this corresponds to the mathematical notion of a Semiring.

Some examples of semirings include:

• Regular multiplication and addition
• And-Or
• Min-plus
• Matrices
• Types

I have written before about how types also form a semiring, using Either for plus and (,) for times. These constructions don’t obey distributivity or associativity “on the nose”, but instead are isomorphic to the rearranged type, which when you squint is pretty similar to equality.

Matrices are grids of numbers which multiply by “row times column”. You can form matrices out of other semirings besides just numbers. One somewhat trivial but interesting example is block matrices, where the elements of the matrix are themselves also matrices. Another interesting example is that of relations, which can be thought of as matrices of boolean values. Matrix multiplication using the And-Or semiring on the elements corresponds to relational composition.

What if we put our type peanut butter in our matrix chocolate and consider matrices of types, using the Either-(,) semiring? The simplest implementation to show how this could go can be made using the naive list based implementation of vectors and matrices. We can directly lift this representation to the type level, and the appropriate value-level functions to type families. This was just for demonstration purposes. It is not my favorite representation of vectors. You can lift a large fraction of possible ways to encode vector spaces at the value level up to the type level, such as the linear package, or using dual vectors type V2 a = a -> a -> a. Perhaps more on that another day.

### What is the point?

Ok. That’s kind of neat, but why do it? Well, one way to seek an answer to that question is to ask “what are matrices useful for anyway?” One thing they can do is describe transition systems. You can write down a matrix whose entry $a_{ij}$ describes something about the transition from state $i$ to state $j$.
For example the entry could be:

• The cost of getting from $i$ to $j$ (min-plus gives shortest path)
• The count of ways to get from $i$ to $j$ (combinatorics of paths)
• The connectivity of the system from $i$ to $j$ using boolean values and the and-or semiring
• The probability of transition from $i$ to $j$
• The quantum amplitude of going from $i$ to $j$ if we’re feeling saucy

If we form a matrix describing a single time step, then multiplying this matrix by itself gives 2 time steps, and so on. Lifting this notion to types, we can build a type exactly representing all the possible paths from state $i$ to $j$.

Concretely, consider the following humorously bleak transition system: You are going between home and work. Every 1 hour period you can make a choice to do a home activity, commute, or work. There are different options of activities at each. This is described by the following transition diagram. The transitions are described by the following matrix type. What is the data type that describes all possible 4-hour days? You’ll find the appropriate data types in the following matrix.

Now, time to come clean. I don’t think this is necessarily the best way to go about this problem. There are alternative ways of representing it. Here are two data types that describe an indefinite number of transition steps. Another style would hold the current state as a type parameter in the type using a GADT. We could construct types that are to the above types as Vec n is to [] by including an explicit step size parameter. Still, food for thought.

### Further Thoughts

The reason I was even thinking about this is because we can lift the above construction to perform a linear algebra of vector spaces. And I mean the spaces, not the vectors themselves. This is a confusing point. Vector spaces also have two natural operations on them that act like addition and multiplication: the direct sum and Kronecker product.
These operations do form a semiring, although again not on the nose. This is connected to the above algebra-of-types picture by considering the index types of these vector spaces. The simplest way to denote this in Haskell is using the free vector space construction as shown in this Dan Piponi post. The Kronecker product makes tuples of the indices, and the direct sum has an index that is the Either of the original index types.

This is by far not the only way to go about it. We can also consider using the Compose-Product semiring on functors (Compose is Kron, Product is DSum) to get a more index-free feel and work with dense vectors.

Going down this road (plus a couple layers of mathematical sophistication) leads to a set of concepts known as 2Vect. Dan Roberts and James Vicary produced a Mathematica package for 2Vect which is basically incomprehensible to me. It seems to me that typed functional programming is a more appropriate venue for this kind of pursuit, given how evocative / well modeled by category theory it can be.

These mathematical ideas are applicable to describing anyonic vector spaces. See my previous post below. It is not a coincidence that the Path data type above is so similar to the FibTree data type. The root type variable takes the place of the work/home state, and the tuple structure takes the place of a Vec-like size parameter n. More on this to come, probably, as I figure out how to explain it cleanly.

Edit: WordPress, your weird formatting is killing me.

Edit: Hoo Boy. This is why we write blog posts. Some relevant material was pointed out to me that I was not aware of. Thanks @DrEigenbastard.
https://link.springer.com/chapter/10.1007/978-3-319-19797-5_6

http://blog.sigfpe.com/2010/08/constraining-types-with-regular.html

http://blog.sigfpe.com/2010/08/divided-differences-and-tomography-of.html

## Relational Algebra with Fancy Types

Last time, I tried to give a primer of relations and relational algebra using the Haskell type type Rel a b = [(a,b)]. In this post we're going to look at these ideas from a slightly different angle. Instead of encoding relations using value-level sets, we'll encode relations in the type system. The Algebra of Programming Agda repo and the papers quoted therein are very relevant, so if you're comfortable wading into those waters, give them a look. You can find my repo for fiddling here.

At this point, depending on what you've seen before, you're either thinking "Yeah, sure. That's a thing." or you're thinking "How and why the hell would you do such a ridiculous thing." Most of this post will be about how, so let's address why first:

1. Examining relations in this style illuminates some constructions that appear around the Haskell ecosystem, particularly some peculiar fellows in the profunctors package.
2. A more syntactic approach to relations allows discussion of larger/infinite domains. The finite enumerations of the previous post are nice for simplicity, but it seems you can't get far that way.
3. Mostly because we can – it's a fun game. Maybe a useful one? TBD.

With that out of the way, let's go on to how.

### Translating Functions to Relation GADTs

We will be using some Haskell extensions in this post, at the very least GADTs and DataKinds. For an introduction to GADTs and DataKinds, check out this blog post. DataKinds is an extension that reflects every data constructor of data types to a type constructor. Because there are values True and False, there are corresponding types created 'True and 'False. GADTs is an extension of the type definition mechanism of standard Haskell.
They allow you to declare refined types for the constructors of your data, and they infer those refined types when you pattern match out of the data as well, such that the whole process is kind of information preserving. We will use the GADT extension to define relational datatypes whose kind has a slot a for the "input" and a slot b for the "output" of the relation. What goes in these type slots will be DataKind-lifted types like 'True, not ordinary Haskell types like Int. This is a divergence from the uses of similar kinds you see in Category, Profunctor, or Arrow. We're doing a more typelevel-flavored thing than you'll see in those libraries. What we're doing is clearly a close brother of the singleton approach to dependently typed programming in Haskell.

Some examples are in order for what I mean. Here are two simple boolean functions, not and and, defined as ordinary Haskell functions, and their equivalent GADT relation data types. You can already start to see how mechanical the correspondence between the ordinary function definition and our new fancy relation type is. A function is often defined via cases. Each case corresponds to a new constructor of the relation, and each pattern that occurs in that case is the pattern that appears in the GADT. Multiple arguments to the relations are encoded by uncurrying everything by default. Any function calls that occur on the right-hand side of a function definition become fields in the constructor of our relation. This includes recursive calls and external function calls. Here are some examples with a Peano-style natural number data type.

We can also define things that aren't functions. Relations are a larger class of things than functions are, which is part of their utility. Here is a "less than or equal" relation LTE.

You can show that elements are in a particular relation by finding a value of that relational type. Is ([4,7], 11) in the relation Plus?
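A reconstructed sketch of what these GADTs look like. The constructor names for Plus follow the PZ/PS used below; the rest are my own guesses at the flavor:

```haskell
{-# LANGUAGE DataKinds, GADTs #-}

data Nat = Z | S Nat

-- An ordinary function ...
not' :: Bool -> Bool
not' True  = False
not' False = True

-- ... and its relation: one constructor per defining equation.
data Not a b where
  NotTF :: Not 'True 'False
  NotFT :: Not 'False 'True

-- Addition as a relation on uncurried Peano naturals.
-- Plus '(a, b) c is the proposition a + b = c.
data Plus ab c where
  PZ :: Plus '( 'Z, b) b
  PS :: Plus '(a, b) c -> Plus '( 'S a, b) ('S c)

-- A witness that (2, 1) is Plus-related to 3.
twoPlusOne :: Plus '( 'S ('S 'Z), 'S 'Z) ('S ('S ('S 'Z)))
twoPlusOne = PS (PS PZ)
```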
Yes, and I can show it with the value PS (PS (PS (PS PZ))) :: Plus (4,7) 11. This is very much the Curry-Howard correspondence. The type R a b corresponds to the proposition/question is $(a,b) \in R$.

### The Fun Stuff : Relational Combinators

While you need to build some primitive relations using new data type definitions, others can be built using relational combinators. If you avoid defining too many primitive relations like the above and build them out of combinators instead, you expose a rich high-level manipulation algebra. Otherwise you are stuck in the pattern matching dreck. We are traveling down the same road we did in the previous post, so look there for less confusing explanations of the relational underpinnings of these constructions, or better yet some of the references below.

Higher order relational operators take in type parameters of relational kind and produce new types of a similar kind. The types appearing in these combinators are the AST of our relational algebra language.

The first two combinators of interest are the composition operator and the identity relation. An element $(a,c)$ is in $R \cdot Q$ if there exists a $b$ such that $(a,b) \in R$ and $(b,c) \in Q$. The fairly direct translation of this into a type follows. The type of the composition is the same as that of Profunctor composition found in the profunctors package.

Alongside a composition operator, it is a knee-jerk reaction to look for an identity relation, and we do have one. This is also a familiar friend: the identity relation in this language is the Equality type.

We can build an algebra for handling product and sum types by defining the appropriate relational combinators. These are very similar to the combinators in the Control.Arrow package.

The converse of relations is a very interesting operation and is the point where relations really differ from functions. Inverting a function is tough. Taking the converse of a relation always works.
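Reconstructed sketches of these combinators (composition, identity, converse); the names are mine, chosen to echo the prose above:

```haskell
{-# LANGUAGE GADTs #-}

-- Composition: (a, c) is in (RCompose r q) iff some intermediate b
-- exists with (a, b) in r and (b, c) in q. The shape matches
-- Profunctor composition from the profunctors package.
data RCompose r q a c where
  RCompose :: r a b -> q b c -> RCompose r q a c

-- The identity relation is the equality type.
data Id a b where
  Refl :: Id a a

-- Converse: just flip the arguments.
data Converse r a b where
  Converse :: r b a -> Converse r a b

-- e.g. the converse of the identity relation is still reflexive:
exConv :: Converse Id Char Char
exConv = Converse Refl
```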
This data type has no analog in the profunctors package to my knowledge, and probably shouldn't.

Relations do not have a notion of currying. The closest thing they have is the following.

### Lattice Operators

For my purposes, lattices are descriptions of sets that trade away descriptive power for efficiency. Most operations you'd perform on sets have an analog in the lattice structure, but it isn't a perfect matching, and you're forced into approximation. It is nice to have the way you perform these approximations be principled, so that you can know at the end of your analysis whether you've actually shown anything or not about the actual sets in question.

The top relation holds all values. This is represented by making no conditions on the type parameters; they are completely phantom. Bottom is a relation with no inhabitants. The meet is basically the intersection of the relations; the join is basically the union.

A lattice has an order on it. This order is given by relational inclusion. This is the same as the :-> combinator found in the profunctors package. Relational equality can be written as back-and-forth inclusion, a natural isomorphism between the relations. There is also an interesting indirect form.

#### Relational Division

If we consider the equation (r <<< p) :-> q with p and q given, in what sense is there a solution for r? By analogy, this looks rather like r*p = q, so we're asking a kind of division question. Unfortunately, this equation may not necessarily have a solution (neither do linear algebraic equations, for that matter), but we can ask for the best under-approximation instead. This is the operation of relational division. It also appears in the profunctors package as the right Kan extension. You'll also find the universal property of the right division under the names curryRan and uncurryRan in that module.

One formulation of Galois connections can be found in the adjunctions file.
Galois connections are very slick, but I'm running out of steam, so let's leave that one for another day.

### Properties and Proofs

We can prove many properties about these relational operations. Here's a random smattering of the ones we showed using QuickCheck last time.

### Odds and Ends

• Recursion Schemes – Recursion schemes are a methodology for talking about recursion in a point-free style, and they're where the rubber meets the road in the algebra of programming. Here is an excellent series of articles about them. Here is a sample of how I think they go:
• Higher Order Relations?
• Examples of use. Check out the examples folder in the AoP Agda repo. These are probably translatable into Haskell.
• Interfacing with Singletons. Singletonized functions are a special case of relations. Something like?
• A comment to help avoid confusion. What we've done here feels confusingly similar to profunctors, but it is in fact distinct, I think. Profunctors are described as a categorical generalization of relations, but to be honest, I kind of don't get it. Despite many of our constructions appearing in the profunctors package, the Profunctor typeclass itself appears to not play a role in our formulation. There just isn't a good way to dimap under our relations as written, unless you construct free profunctors. Converse, at the least, is a wrench in the works.
• Star and graphs. Transition relations are a powerful methodology. A transition relation is in some respects the analog of a square matrix. We can iteratively compose it with itself.

### References

## Notes on Getting Started in OCaml

I know a bit of Haskell. It's the functional programming language I have the strongest background in. OCaml is very similar to Haskell, which is why I haven't felt a strong need to learn it. Having had to delve into it out of necessity because of work, I think that was an oversight.
The biggest thing for me is being able to more easily read a new set of literature and see familiar things in a new light, which is very refreshing.

### Getting OCaml

opam is the package manager. Follow the instructions to install it and get your environment variables set up. It'll tell you some extra commands you have to run to do so. You use it to install packages via opam install packagename. You can also use it to switch between different OCaml compiler versions via commands like opam switch 4.08.1.

Dune is a build tool. You can place a small config file called dune in your folder, and it can figure out how to appropriately call the compiler. Dune is in flux, so check out the documentation. What I write here may be wrong. https://dune.readthedocs.io/en/stable/

Here's an example execution. Note that even though the file is called main.ml in this example, you call build with main.exe. And exec requires the ./ for some reason. Weird.

Here's a dune file with some junk in it. You make executables with executable blocks. You include a list of the files (without the .ml suffix) required by the executable in the modules line. You list libraries needed in the libraries line.

You want to also install merlin: opam install merlin. Merlin is a very slick IDE tool with autocomplete and type information. dune will set up a .merlin file for you.

The ReasonML plugin is good for VS Code. Search for it on the marketplace; it is the one for OCaml also. ReasonML is a syntactic facelift intended for the web, btw. I don't particularly recommend it to start. There are also emacs and vim modes if that is what you're into.

The enhanced repl is called utop. Install it with opam install utop. Basic repl usage: every line has to end with ;;. That's how you get stuff to be run. Commands start with #. For example #quit;; is how you end the session. #use "myfile.ml";; will load a file you've made. Sometimes when you start you need to run #use "topfind";; which loads a package finder.
You can load libraries via the require command, like #require "Core";;. #help;; for more.

### A Quick Look at the Language

With any new language I like to check out Learn X in Y Minutes if one is available. https://learnxinyminutes.com/docs/ocaml/

Here are some shortish cheat sheets with a quick overview of syntax:

https://github.com/alhassy/OCamlCheatSheet

https://ocaml.org/docs/cheat_sheets.html

### More In Depth Looks

This is a phenomenal online book built for a Cornell course: https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/

Real World OCaml is also quite good, but denser. Very useful as a reference for usage of Core and other important libraries.

The reference manual is also surprisingly readable: https://caml.inria.fr/pub/docs/manual-ocaml/. The first 100 or so pages are a straightforward introduction to the language.

https://github.com/janestreet/learn-ocaml-workshop Pretty basic workshop. Could be useful for getting you up and running though.

### Useful libraries

Core – a standard library replacement. Becoming increasingly common. https://github.com/janestreet/core It is quite a bit more confusing for a newcomer than the standard library IMO. And the way they have formatted their docs is awful.

Owl – a numerical library. Similar to Numpy in many ways. https://ocaml.xyz/ These tutorials are clutch: https://github.com/owlbarn/owl/wiki

Bap – Binary Analysis Platform. Neato stuff.

Lwt – https://github.com/ocsigen/lwt asynchronous programming

Graphics – gives you some toy and not-toy stuff. Lets you draw lines and circles and get keyboard events in a simple way.

OCamlGraph – a graph library

Jupyter Notebook – Kind of neat. I've got one working on binder. Check it out here. https://github.com/philzook58/ocaml_binder

Menhir and OCamlLex – useful for parser and lexer generators. Check out the OCaml book for more.

fmt – for pretty printing

### Interesting Other Stuff (A Descent into Lazy Writing)

https://discuss.ocaml.org/ – The discourse. Friendly people.
They don't bite. Ask questions.

https://github.com/ocaml-community/awesome-ocaml The Awesome-OCaml list. A huge dump of interesting libraries and resources. An excerpt of cool stuff:

• Coq – Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.
• Why3 – Why3 is a platform for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions.
• Alt-Ergo – Alt-Ergo is an open-source SMT solver dedicated to the proof of mathematical formulas generated in the context of program verification.

http://ocamlverse.github.io/ – A pretty good set of beginner advice and articles. Seems like I have a lot of accidental overlap. Would've been nice to find earlier.

https://www.cl.cam.ac.uk/teaching/1617/L28/materials.html – advanced functional programming course. Interesting material.

TAPL – https://www.cis.upenn.edu/~bcpierce/tapl/ Has implementations in OCaml of different lambda calculi. Good book too.

It is not uncommon to use a preprocessor in OCaml for some useful features. There is monad syntax, list comprehensions, deriving and more available as ppx extensions. https://whitequark.org/blog/2014/04/16/a-guide-to-extension-points-in-ocaml/ is a guide to ppx preprocessors. camlp4 and camlp5 are both preprocessors too. https://tarides.com/blog/2019-05-09-an-introduction-to-ocaml-ppx-ecosystem.html

https://blog.janestreet.com/archive/ The Jane Street blog. They are very prominent users of OCaml. https://opensource.janestreet.com/standards/ Jane Street style guide.

Oleg Kiselyov half works in Haskell, half in OCaml, so that's cool. https://arxiv.org/pdf/1905.06544.pdf Oleg on effects without monads. The Oleg MetaOCaml book. MetaOCaml is super cool.
http://okmij.org/ftp/ML/MetaOCaml.html And the switch functionality of opam makes it easy to install!

Oleg on tagless final: http://okmij.org/ftp/tagless-final/index.html

https://github.com/ocamllabs/higher

Cohttp, Lwt and Async

https://github.com/backtracking/ocamlgraph OCaml graphs

https://mirage.io/ MirageOS. What the hell is this?

https://github.com/ocamllabs/fomega

https://github.com/janestreet/hardcaml

ppx_let – monadic let bindings

Some of the awesome deriving capabilities are given by ppx_jane. Sexp seems to be a really good one. It's where generic printing is?

dune build lambda.bc.js will make a javascript file. That's pretty cool. It uses js_of_ocaml. The js_of_ocaml docs are intimidating: https://ocsigen.org/js_of_ocaml/dev/manual/overview http://ocsigen.org/js_of_ocaml/dev/api/ Note you need to install both the js_of_ocaml-compiler AND the library js_of_ocaml and also js_of_ocaml-ppx. Go digging through your _build folder and you can find a completely mangled, incomprehensible file jsboy.bc.js. But you can indeed import and use it like so. dune build --profile release lambda.bc.js – putting it in the release profile makes an insane difference in build size: 10mb -> 100kb.

There is also BuckleScript for compiling to javascript. It outputs readable javascript. Old compiler fork?

Check out J.T. Paasch's snippets. Helpful Dune: https://gist.github.com/jtpaasch/ce364f62e283d654f8316922ceeb96db Z3 OCaml: https://gist.github.com/jtpaasch/3a93a9e1bcf9cae86e9e7f7d3484734b

OCaml's new monadic let syntax: https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html

#require "ppx_jane";; in utop in order to import a thing using ppx.

An argument could be made for working from a docker image.

ocamllex and menhir are weird DSLs that generate lexers and parsers. Also oddly stateful. Took a bit of fiddling to figure out how to get dune to handle them.
Otherwise it was a pretty straightforward encoding experience. Report: using F-omega as a teaching language.

Because they aren't hidden behind a monadic interface (for better or for worse), OCaml has a lot more of an imperative feel. You could program in a subset of the language and have it not feel all that different from Java or Python or something. There are for loops and while loops, objects and classes, and mutable variables if you so choose. I feel like the community is trying to shy away from these features for most purposes, however, sticking to the functional core. Still, it does let you do for loops, and it has an interesting community and different areas of strength. Maybe more importantly, it lets you access a new set of literature and books. Slightly different but familiar ideas.

I think Core is bewildering for a newcomer.

### Rando Trash Poor Style OCaml Snippets

lex_lisp.mll : simplistic usage of ocamllex and menhir

parse_lisp.mly

Doinking around with some graphics

A couple of Advent of Code 2018 problems

A little Owl usage

## The Classical Coulomb Gas as a Mixed Integer Quadratic Program

The Coulomb gas is a model of electrostatics where you take the discreteness of charge into account. That is what makes it hard compared to the continuous charge problem. Previously, I've used mixed integer programming to find lowest energy states of the Ising model. This is an even more obvious application of mixed integer programming to a physics model.

We ordinarily consider electric charge to be a continuum, but it isn't. It comes in chunks of the electron charge. Historically, people didn't even know that for quite a while. It is usually a reasonable approximation for most purposes to consider electric charge to be continuous. But if you consider a network of capacitors cooled to the level where there is not enough thermal energy to borrow to get an electron to jump, the charges on the capacitors will be observably discretized.
With a sufficiently slow cooling to this state, the charges should arrange themselves into the lowest energy state. The Coulomb gas model is also of interest due to its connections to the XY model, which I've taken a stab at with mixed integer programming before. The Coulomb gas models the energy of vortices in that model. I think the connection between the models actually requires a statistical or quantum mechanical context though, whereas we've been looking at classical energy minimization.

We can formulate the classical Coulomb gas problem very straightforwardly as a mixed integer quadratic program. We can easily include an externally applied field and a charge conservation constraint if we so desire within the framework. We write this down in Python using the cvxpy library, which has a built-in free MIQP solver, albeit not a very good one. Commercial solvers are probably quite a bit better.

The results seem reasonable. It makes sense for charge to go in the direction of the electric field. Going to the corners makes sense because then like charges are far apart. So this might be working. Who knows.

### Interesting places to go with this:

Prof. Vanderbei shows how you can embed an FFT to enable making statements about both the time and frequency domain while keeping the problem sparse. The low time/memory $N \log(N)$ complexity of the FFT is reflected in the sparsity of the resulting linear program. https://vanderbei.princeton.edu/tex/ffOpt/ffOptMPCrev4.pdf Here's a sketch of what this might look like. Curiously, looking at the actual number of nonzeros in the problem matrices, there are way too many. I am not sure what is going on. Something is not performing as I expect in the following code. The equivalent dense DFT:

It would be possible to use a frequency domain solution of the interparticle energy rather than the explicit Coulomb law form. Hypothetically this might increase the sparsity of the problem.
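For reference, the underlying optimization problem sketched in this post is roughly the following (my notation, with fixed sites $r_i$, integer charges $q_i$, an applied potential $\phi$, and total charge $Q_{\text{tot}}$):

```latex
\min_{q \in \mathbb{Z}^n} \quad \frac{1}{2}\sum_{i \neq j} \frac{q_i q_j}{|r_i - r_j|}
  \;+\; \sum_i \phi(r_i)\, q_i
\qquad \text{subject to} \qquad \sum_i q_i = Q_{\text{tot}}
```

The quadratic term is the pairwise Coulomb energy, the linear term is the coupling to the external field, and the integrality of $q$ is what makes it a mixed integer rather than ordinary quadratic program.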
It seems very possible to me to embed a fast multipole method or Barnes-Hut approximation within a MIQP in a similar manner. Introducing explicit charge summary variables for blocks would create a sparse version of the interaction matrix. So that's fun.

## Doing Basic Ass Shit in Haskell

Haskell has lots of fancy weird corners, but sometimes you just want to get rippin' and runnin'.

The Haskell Phrasebook is a new useful thingy. Nice and terse. https://typeclasses.com/phrasebook

This one is also quite good: https://lotz84.github.io/haskellbyexample/

I also like what FP Complete is up to. Solid set of useful stuff, although with a bit more emphasis towards their solutions than is common. https://haskell.fpcomplete.com/learn

I was fiddling with making some examples for my friends a while ago, but I think the above do a similar job better. https://github.com/philzook58/basic-ass-shit Highlights include:

Making a JSON request

Showing a plot of a sine function

Doing a least squares fit of some randomly created data

#### Not too complicated stuff to get you excited about Haskell:

I love Power Serious. https://www.cs.dartmouth.edu/~doug/powser.html Infinite power series using the power of laziness, in something like 20 lines.

https://blog.plover.com/prog/haskell/monad-search.html Using the list monad to solve the SEND+MORE=MONEY puzzle.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.8903&rep=rep1&type=pdf Jerzy Karczmarczuk doing automatic differentiation in Haskell before it was cool. Check out Conal Elliott's stuff after.

A very simple symbolic differentiation example. When I saw this in SICP for the first time, I crapped my pants.

https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf Why Functional Programming Matters by John Hughes

https://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf John Backus emphasizing escaping the imperative mindset in his 1978 Turing Award speech.
A call to arms of functional programming.

https://www.cs.tufts.edu/~nr/cs257/archive/richard-bird/sudoku.pdf Richard Bird defining sudoku solutions and then using equational reasoning to build a more efficient solver

https://wiki.haskell.org/Research_papers/Functional_pearls – Functional Pearls

#### Here's how I find useful Haskell packages:

I google. I go to Hackage (if I'm in a subpage, click on "contents" in the upper right hand corner). Click on a category that seems reasonable (like "web" or something) and then sort by Downloads (DL). This at least tells me what is popular-ish. I look for tutorials if I can find them. Sometimes there is a very useful getting-started snippet in the main module itself. Some packages are overwhelming, others aren't.

The Real World Haskell book is kind of intimidating, although a lovely resource.

The wiki has a pretty rockin' set of tutorials. Has some kind soul been improving it? https://wiki.haskell.org/Category:Tutorials

I forgot: Learn You a Haskell has a chapter on basic IO. http://learnyouahaskell.com/input-and-output

#### Learn more

When you're ready to sit down with Haskell more, the best intro is currently the Haskell Book.

You may also be interested in this MOOC https://www.edx.org/course/introduction-functional-programming-delftx-fp101x-0 or this Data61 course https://github.com/data61/fp-course

Then there is a fun infinitude of things to learn after that.

______

More ideas for simple examples? This post is intentionally terse.

IO is total infective poison.

Standard output IO. Mutation & loops. You probably don't want these. They are not idiomatic Haskell, and you may be losing out on some of the best lessons Haskell has to offer.
file IO

web requests http://www.serpentine.com/wreq/tutorial.html

web serving – scotty

image processing

basic data structures

command line arguments

plotting

Parallelism and Concurrency https://nokomprendo.frama.io/tuto_fonctionnel/posts/tuto_fonctionnel_25/2018-08-25-en-README.html

## CAV 2019 Notes: Probably Nothin Interestin' for You. A bit of noodling with Liquid Haskell

I went to the opening workshops of CAV 2019 in New York this year (on my own dime, mind you!) after getting back from joining Ben on the long trail for a bit. The tutorials on Rosette and Liquid Haskell were really fun. Very interesting technology. Also got some good ramen and mochi pudding, so it's all good. Big Gay Ice Cream was dece.

## Day 1 workshops

Calin Belta http://sites.bu.edu/hyness/calin/. Has a new book: control of temporal logic systems. Automata. Optimized. Partition space into abstraction. Bisimulation. https://www.springer.com/gp/book/9783319507620

Control Lyapunov Function (CLF) – guarantees you are going where you want to go

Control Barrier Function – somehow controls regions you don't want to go to.

Lyapunov function based trajectory optimization. You somehow have (Ames 2014) http://ames.gatech.edu/CLF_QP_ACC_final.pdf Is this it?

Differential flatness, input-output linearization

Sadraddini worked there. Temporal logic with

#### Rise of Temporal Logic

Linear Temporal Logic vs CTL

Fixpoint logic. Büchi automata – visit accepting state infinitely many times. Equivalency to first order logic. Monadic logic: propositions only take 1 argument. Decidable. Löwenheim. Quantifier elimination. Bounded model property.

Languages: ForSpec, SVA, LDL, PSL, Sugar

Monadic second order logic (MSO). Method of tableau.

#### Sadraddini

Polytopic regions. Can push forward the dynamics around a trajectory and the polytope that you lie in. RRT/LQR polytopic tree. Pick random point. Run.

Evaluating branching heuristics. Branch and prune ICP. dReal. Branch and prune. Take set.
Propagate constraints until none fire. Branching heuristics on variables: largest first, smearing, lookahead. Try different options, see which has the most pruning. Not clear that helped that much. QF_NRA. dReal benchmarks: Flyspeck, control, robotics, SMT-LIB. http://capd.sourceforge.net/capdDynSys/docs/html/index.html

#### Rosette

Commands of solver aided programming:

verify – find an input on which the assertions fail. exists x. not safe

debug – minimal unsat core if you give an unsat query. x = 42 /\ safe(s, P(x)). We know this is unsat because of the previous step.

solve – exists v s.t. safe(v)

synthesis – exists e forall x safe(x,P(x))

define-symbolic, assert, verify, debug, solve, synthesis

Rosette. Alloy is also connected to her. Z Method. Is related to relational logic?

https://homes.cs.washington.edu/~emina/media/cav19-tutorial/index.html

http://emina.github.io/rosette/

Building solver-aided programming tools.

symbolic compiler. Reduces all possible paths of a program to a constraint.

Klee – symbolic execution engine for LLVM

implement interpreter in rosette

Symbolic virtual machine

layering of languages. DSL. library (shallow) embedding. interpreter (deep) embedding.

deep embedding for synthesis.

I can extract coq to rosette?

how does it work?

reverse and filter keeping only positive queries.

symbolic execution vs bounded model checking

symbolic execution checks every possible branch of the program. Cost is exponential.

CBMC.

type driven state merging. Merge instances of primitive types (like BMC), value types structurally.

instance Merge Int, Bool, Real — collect up SMT context

vs. Traversable f => Merge (f c) – do using Traversable

symbolic union: a set of guarded values with disjoint guards.

merging union. at most one of any shape. bounded by number of possible shapes.

puts some branching in rosette and some branching (on primitives) in SMT.

symbolic profiling. Repair the encoding.

tools people have built.

strategy generation. That’s interesting. builds good rewrite rules.

serval.

CertiKOS, Komodo, Keystone. Finite programs.

IS rosette going to be useful for my work? coooooould be

https://ranjitjhala.github.io/

another thing we could do is galois connections between refinements. Pos, Zero, Neg <-> Int

Liquid Haskell uses SMT solvers to resolve its type checking requirements.
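A tiny example of the flavor, using standard Liquid Haskell refinement annotation syntax (the function itself is my own toy):

```haskell
-- Refinement types live in special {-@ ... @-} comments; the SMT
-- solver discharges the obligation that x + 1 > 0 whenever x > 0.

{-@ type Pos = {v:Int | v > 0} @-}

{-@ incr :: Pos -> Pos @-}
incr :: Int -> Int
incr x = x + 1
```

The annotations are ordinary comments to GHC, so the code still compiles and runs as plain Haskell; only the liquid checker reads them.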

Agda et al also work very much via unification. Unification is a broad term but it’s true.

It also has a Horn clause solver for inference. Every language needs some kind of inference or you'd go insane. Also it is piggybacking on Haskell.

It's not as magical as I thought? Like seeing the magician's trick. It really does understand Haskell code; it isn't interpreting it. When it knows facts about how (+) works, that is because the refined type was put in by hand in the prelude, connecting it to SMT facts. What is imported by Liquid Haskell?

The typing environment is clutch. You need to realize what variables are in scope and what their types are, because that is all the SMT can use to push through type checking requirements.

Installing the stack build worked for me. It takes a while. I couldn't get cabal install to work, because I am not l33t.

Uninterpreted functions. Unmatchability?

It wouldn't be Haskell without a bunch of compiler directives. It is somewhat difficult to find in a single cohesive place what all the syntax and directives of Liquid Haskell are. Poking around is best.

• ple
• reflection
• no-termination
• higherorder – what is this?

https://github.com/ucsd-progsys/230-wi19-web course notes

https://github.com/ucsd-progsys/liquid-sf some of software foundations

https://nikivazou.github.io/publications.html niki vazou’s pubs. Check out refinement reflection

https://arxiv.org/pdf/1701.03320 intro to Liquid Haskell. Interesting to see a different author's take

http://goto.ucsd.edu/~nvazou/presentations/ presentations. They are fairly similar to one another.

Liquid Haskell gives us the ability to put types on stuff that wasn't possible before.

Linearity :: f :: {a -> b | f (s ^* a) == s ^* (f a) }
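That linearity refinement can at least be spot-checked over finite samples. A Python property test, not a proof, and all names here are mine:

```python
# Finite spot-check of the linearity law f (s * a) == s * (f a) over
# sampled scalars s and values a. Illustrative only; the refinement
# type would demand this for all inputs.
def is_linearish(f, samples):
    return all(f(s * a) == s * f(a) for s, a in samples)

scale3 = lambda x: 3 * x
samples = [(s, a) for s in range(-2, 3) for a in range(-2, 3)]
is_linearish(scale3, samples)           # True: scaling commutes
is_linearish(lambda x: x + 1, samples)  # False: affine, not linear
```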

Pullback. {(a,b) | f a == g b}

Equalizer

Many things in category theory rely on the exists-unique. Do we have functional extensionality in Liquid Haskell?

product : {(a,b) | f q = x, g q = y, => }

Pushing the boundaries on what liquid haskell can do sounds fun.

Equalizer. The equalizer seems prominent in sheaves. Presheaves are basically functors. Sheaves require extra conditions. Restriction maps have to work? Open covers seem important.

type Equalizer f g a b = {(e :: a , eq :: a -> b) | f (eq e) = g (eq e) }

I think both the type a and eq are special. e is like an explicit parametrization.

type Eq f g a = {e :: a | f e = g e} I think this is more in the spirit. Use f and g both as measures.
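Over a finite domain the equalizer is easy to compute directly, which matches that "use f and g both as measures" reading. A Python sketch with my own names:

```python
# The equalizer of two functions over a finite domain: the subset of
# elements on which f and g agree. A finite stand-in for the
# refinement type {e :: a | f e = g e}.
def equalizer(f, g, domain):
    return [e for e in domain if f(e) == g(e)]

sq = lambda x: x * x
lin = lambda x: x + 2
equalizer(sq, lin, range(-5, 6))  # [-1, 2]: exactly where x*x == x + 2
```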

A presheaf is a functor. But then a sheaf is a functor that satisfies the extra gluing conditions.

(a, Eq (F a) (G a)). typelevel equalizer? All types a that F and G agree on.

https://ncatlab.org/nlab/show/equalizer

Records are sheaves – Jon Sterling. Records have subtyping. This gives you a topology-feeling thing.

{foo | a} {bar | a} -> intersection = {foo bar | b} can inhabit either

Or do you want closed records? Union is union of fields. Intersection is intersection of fields.

In this case a cover would be a set of records with possibly overlapping fields whose combined labels cover the whole space we want to talk about. The consistency condition of the sheaf/equalizer is that overlapping record fields have to match. I guess { q.foo = r.foo }? There is a way to combine all the stuff up. This is exactly what Ghrist was getting at with tables. Tables with shared columns.

data R1 = R1 {foo :: Int, bar :: Int}

{ (r1 :: R1, r2 :: R2) | (foo r1) = (foo r2) } — we maintain duplicates across records.
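The gluing/consistency condition for records-as-dicts can be sketched directly in Python (the function name is mine): records glue iff overlapping fields agree, and the glued record is the union of fields.

```python
# Glue a family of records (dicts): succeed with the union of fields iff
# every overlapping field agrees, mirroring the sheaf consistency
# condition { q.foo = r.foo } above.
def glue(records):
    out = {}
    for r in records:
        for k, v in r.items():
            if k in out and out[k] != v:
                return None  # overlap disagrees: not a matching family
            out[k] = v
    return out

glue([{"foo": 1, "bar": 2}, {"bar": 2, "fred": 3}])
# -> {'foo': 1, 'bar': 2, 'fred': 3}
glue([{"foo": 1}, {"foo": 2}])  # -> None: inconsistent overlap
```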


If you have a "cover" {foo bar} {bar fred} {gary larry} whose combined fields cover the whole space…

https://www.sciencedirect.com/science/article/pii/S1571066108005264

Sheaves. As a model of concurrency? Goguen paper.

Sheaves as constraint satisfaction? Sheafification. Constraint solving as a way of fusing the local constraints to be globally consistent.

sheaves as records

sheaves as data fusion

Escardó. Compact data types are those that are finitely searchable.

Continuous functions are ~computable? Productive?

http://www.paultaylor.eu/

http://www.paultaylor.eu/ASD/foufct/

http://www.paultaylor.eu/~pt/prafm/

Typed recursion theory, topology.

Typed computability theory.

Topological notions in computation. A dictionary of related terms: decidable, searchable, semi-decidable.

cs.ioc.ee/ewscs/2012/escardo/slides.pdf

https://en.wikipedia.org/wiki/Computable_topology

Through NDArray overloading, a significant fragment of numpy code is probably verifiable.

Need to inspect function annotations to know how to build input type.

@verify() tag
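One way the hypothetical @verify tag could build input types from annotations (a sketch: instead of real Z3 variables it just records a placeholder per argument):

```python
import inspect

# Hypothetical @verify decorator: inspect the signature and build one
# "symbolic" input description per annotated argument. A real version
# would construct Z3 variables of the right sort; a placeholder tuple
# stands in here.
def verify(f):
    sig = inspect.signature(f)
    f.sym_args = {name: ("sym", p.annotation.__name__)
                  for name, p in sig.parameters.items()}
    return f

@verify
def clamp(x: int, lo: int) -> int:
    return x if x > lo else lo

clamp.sym_args  # {'x': ('sym', 'int'), 'lo': ('sym', 'int')}
```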

If statements are branching. We are again approaching inspecting functions via probing. But what if we lazily probe? At every __bool__ point, we run a Z3 program to determine whether there is a bool value still possible (we don't need to inspect dead code regions; it would also be cool to report that a region is dead). Curious. We're reflecting via Z3.
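The probing idea can be mocked up without Z3 by scripting branch decisions: comparisons on a symbolic stand-in consult an oracle, and running out of script forks exploration on both outcomes. All names are mine; a real version would ask Z3 whether each branch is feasible instead of blindly enumerating.

```python
class Oracle:
    """Replays scripted branch outcomes, recording what was asked."""
    def __init__(self, script):
        self.script, self.trace = script, []
    def decide(self, cond):
        b = self.script[len(self.trace)]  # IndexError = undecided branch
        self.trace.append((cond, b))
        return b

class Sym:
    """Symbolic stand-in: comparisons defer to the branch oracle."""
    def __init__(self, oracle):
        self.oracle = oracle
    def __gt__(self, other):
        return self.oracle.decide(("gt", other))
    def __eq__(self, other):
        return self.oracle.decide(("eq", other))

def explore(f, max_depth=10):
    """Enumerate paths by extending decision scripts one choice at a time."""
    results, stack = [], [[]]
    while stack:
        script = stack.pop()
        oracle = Oracle(script)
        try:
            results.append((oracle.trace, f(Sym(oracle))))
        except IndexError:  # hit a fresh branch: fork on both outcomes
            if len(script) < max_depth:
                stack += [script + [True], script + [False]]
    return results

def prog(x):
    if x > 0:
        return "pos"
    elif x == 0:
        return "zero"
    return "nonpos"

paths = explore(prog)  # three paths: pos, zero, nonpos
```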

Loops present a problem. Fixed loops are fine, but what about loops that depend on the execution? for i in range(n). I guess again we can hack it…? Maybe. range only takes an integer; we don't have overload access.

Maybe we need to go into a full eval loop, utterly deconstructing the function and evaluating it statement by statement.

(compare :: a -> a -> Comparison). We could select a choice based on whether there is a new one available. Requires access to an external store. We lose the thread. How can we know a choice was made? How can we know what the choice was? Did it ask var1 or var2? We can probably do it in Python via access to a global store. But in Haskell?

While loops take invariant annotations.

It would be cool to have a program that takes

preconditions and postconditions, but then also a Parameter keyword to declare const variables as derivable. Exists parameter: forall x, precondition x => postcondition x.

Parameter could be of a type that takes a DSL of reasonable computations, perhaps with complexity predicates; then interpreting the parameter defines the computation.

Or the simpler case: the parameter is an integer. A magic number.
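A brute-force toy of that exists/forall shape (a real tool would hand the query to an SMT solver; the names here are mine): search for a magic integer p such that every x satisfying the precondition satisfies the postcondition.

```python
# exists p. forall x. pre(x) => post(p, x), brute-forced over small
# finite ranges. A stand-in for an SMT exists/forall query.
def synthesize(pre, post, params, domain):
    for p in params:
        if all(post(p, x) for x in domain if pre(x)):
            return p
    return None  # no parameter in range works

pre = lambda x: x >= 0
post = lambda p, x: x + p > 10
synthesize(pre, post, range(0, 20), range(-5, 6))  # 11, the smallest magic number
```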