RC Airplane

We decided to give making an RC airplane a shot. We have lots of parts left over from our drone projects. It’s a little surprising we haven’t tried before, since drones appear to be more popular than RC planes if one goes by the internet.

We are going for the simplest thing we can think of. We bought some foamboard and cut it into the wing, tail, and body. We cut halfway through it to make the control surfaces.

We ripped a crappy small brushless motor and ESC from our oldest drone project. The ESC back-powers the radio receiver through the control line.

We had a bag of mini servos and a Turnigy 9x. The servos connect directly to the receiver unit. I don’t see why you’d need more than 3 servos to get started: one for the ailerons, connected on opposite sides of the servo horn so that they flap in opposite directions (we used paper clips as push rods), one for the rudder, and one for the elevator. Putting the controller into an airplane mode is a bit confusing, but Ben figured it out eventually.

We sized things to look roughly plane-like. I’ve heard a longer wingspan is better for trainers and gliders. We also dug up a calculator for wing loading. All our stuff comes in at about 10 oz.

We didn’t follow this, but maybe we should have.

Solving the Ising Model using a Mixed Integer Linear Program Solver (Gurobi)

I came across an interesting fact: finding the minimizer of the Ising model is encodable as a mixed integer linear program.

The Ising model is a simple model of a magnet: a lattice of spins that can each be either up or down. They want to align with an external magnetic field, but also with their neighbors (or anti-align, depending on the sign of the interaction). At low temperatures they can spontaneously align into a permanent magnet. At high temperatures, they are randomized. It is a really great model that contains the essence of many physics topics.

Linear Programs minimize linear functions subject to linear equality and inequality constraints. It just so happens this is a very solvable problem (polynomial time).

MILPs also allow you to add the constraint that variables take on integer values. This takes you into NP territory. Through fiendish tricks, you can encode very difficult problems. MILP solvers use LP solvers as subroutines, giving them clues where to search, letting them stop early if the LP solver returns integer solutions, or bounding branches of the search tree.

How this all works is very interesting (and very, very roughly explained), but barely matters practically since other people have made fiendishly impressive implementations of this that I can’t compete with. So far as I can tell, Gurobi is one of the best available implementations (Hans Mittelman has some VERY useful benchmarks here http://plato.asu.edu/bench.html), and they have a gimped trial license available (2000 variable limit. Bummer.). Shout out to CLP and CBC, the Coin-Or Open Source versions of this that still work pretty damn well.

Interesting Connection: Quantum Annealing (like the D-Wave machine) is largely based around mapping discrete optimization problems to an Ising model. We are traveling that road in the opposite direction.

So how do we encode the Ising model?

Each spin is a binary variable s_i \in \{0,1\}.

We also introduce a variable for every edge, which we will constrain to actually be the product of the spins: e_{ij} \in \{0,1\}. This is the big trick.

We can compute the And/multiplication (they coincide for 0/1 binary variables) of the spins using a couple of linear constraints. You can check that these do the right thing for all 4 cases of the two spins.

e_{ij} \ge s_i +s_j - 1

e_{ij} \le s_j

e_{ij} \le s_i

The xor is usually what we care about for the Ising model: we want aligned vs. anti-aligned spins to have different energies. It has value 1 if they are anti-aligned and 0 if they are aligned. It is a linear function of the spins and the And.

s_i \oplus s_j = s_i + s_j - 2 e_{ij}

Then the standard Hamiltonian is

H=\sum B_i s_i + \sum J_{ij} (s_i + s_j - 2 e_{ij})

Well, modulo some constant offset. You may prefer making spins \pm 1, but that leads to basically the same Hamiltonian.

The Gurobi python package actually lets us directly ask for AND constraints, which means I don’t actually have to code much of this.

We are allowed to use spatially varying external field B and coupling parameter J. The Hamiltonian is indeed linear in the variables as promised.
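As a rough sketch (the grid size, field values, and sign of J below are placeholders of mine, not the post’s), the encoding in gurobipy might look something like this, with addGenConstrAnd supplying the AND constraints described above:

```python
import gurobipy as gp
from gurobipy import GRB
import numpy as np

# Hypothetical sketch: Ising ground state on an N x N grid as a MILP.
N = 10
np.random.seed(0)
B = np.random.randn(N, N)   # spatially varying external field (made up)
J = -1.0                    # negative J favors anti-aligned neighbors in this convention

m = gp.Model("ising")
s = m.addVars(N, N, vtype=GRB.BINARY, name="s")

H = gp.LinExpr()
for i in range(N):
    for j in range(N):
        H += float(B[i, j]) * s[i, j]
        for i2, j2 in [(i + 1, j), (i, j + 1)]:           # right and down neighbors
            if i2 < N and j2 < N:
                e = m.addVar(vtype=GRB.BINARY)            # e = s[i,j] AND s[i2,j2]
                m.addGenConstrAnd(e, [s[i, j], s[i2, j2]])
                H += J * (s[i, j] + s[i2, j2] - 2 * e)    # xor term: 1 when anti-aligned

m.setObjective(H, GRB.MINIMIZE)
m.optimize()
ground_state = np.array([[s[i, j].X for j in range(N)] for i in range(N)])
```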

After already figuring this out, I found this chapter where they basically do what I’ve done here (and probably more). There is nothing new under the sun. The spatially varying fields B and J are very natural in the field of spin glasses.

https://onlinelibrary.wiley.com/doi/10.1002/3527603794.ch4

For a while I thought this was all we could do, find the lowest energy solution, but there’s more! Gurobi is one of the few solvers that support iterating over a pool of the best solutions found, which means we can start to talk about a low energy expansion. https://www.gurobi.com/documentation/8.0/refman/poolsearchmode.html#parameter:PoolSearchMode
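Continuing the sketch above, turning on the solution pool is just a couple of parameters (the parameter names are Gurobi’s; the rest is my guess at usage):

```python
# Continuing the sketch above: ask Gurobi for a pool of the lowest-energy solutions.
m.setParam("PoolSearchMode", 2)      # systematically search for the n best solutions
m.setParam("PoolSolutions", 10000)   # keep up to 10,000 of them
m.optimize()
for k in range(m.SolCount):
    m.setParam("SolutionNumber", k)                      # select the k-th pool solution
    spins = [int(round(v.Xn)) for v in s.values()]       # Xn reads values from the pool
```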

With that basic functionality in place, getting 10,000 solutions takes about a minute. This is somewhat discouraging when I can see that we haven’t even gotten to very interesting ones yet, just single spin and double spin excitations. But I’ve got some ideas on how to fix that. Next time, baby-cakes.

(A hint: recursion with memoization leads to some brother of a cluster expansion.)

Here’s the antiferromagnetic ground state. Cute.

OSQP and Sparsegrad: Fast Model Predictive Control in Python for an inverted pendulum

OSQP is a quadratic programming solver that seems to be designed with control problems in mind. It has the ability to warm start, which should make it faster when solving a sequence of closely related problems.

I heard about it in this Julia talk.

They have some really cool technology over there in Julia land.

The problem is set up as a sequential quadratic program. I have a quadratic cost that tries to bring the pendulum to an upright position.

The equations of motion are roughly

\ddot{\theta}I=-mgL\sin(\theta)+mfL\cos(\theta)

\ddot{x}=f

I=\frac{1}{3}mL^2

We don’t include the back reaction of the pole on the cart. It is basically irrelevant for our systems and overly complicated for no reason. The interesting bit is the nonlinear dynamics and influence of the cart acceleration.

I encode obeying the equations of motion as linear constraints between timesteps. I use sparsegrad to get a sparse Jacobian of the equations of motion. The upper and lower bounds (l and u) are set equal to make an equality constraint.

Our setup has a finite track about a meter long. Our motors are basically velocity controlled and can go about ±1 m/s. Both of these can be encapsulated as linear constraints on the position and velocity variables: l \le Ax \le u.

\phi(x)=0

\phi(x) \approx \phi(x_0) + \partial \phi(x_0) (x - x_0)

A= \partial \phi(x_0)

l=u=\partial \phi(x_0) x_0 - \phi(x_0)

Similarly, to find the quadratic cost function in terms of the setpoint x_s: \frac{1}{2}x^T P x + q^Tx = \frac{1}{2}(x-x_s)^T P(x-x_s) + \text{const}. Expanding this out we get

q = - Px_s

I run for multiple iterations to hopefully converge to a solution (it does). The initial linearization might be quite poor, but it gets better.
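For concreteness, here’s a minimal sketch of that relinearize-and-resolve loop (eom and eom_jacobian are hypothetical stand-ins for the sparsegrad-generated dynamics residual and its Jacobian, and the track/velocity inequality rows are left out):

```python
import numpy as np
import scipy.sparse as sp
import osqp

def solve_mpc(x0, x_setpoint, P, eom, eom_jacobian, iters=5):
    """Sequential QP sketch: linearize the dynamics, solve the QP, repeat."""
    x = x0.copy()
    q = -P @ x_setpoint                      # quadratic cost centered on the setpoint
    A = sp.csc_matrix(eom_jacobian(x))
    l = u = A @ x - eom(x)                   # equality constraint from the linearization
    prob = osqp.OSQP()
    prob.setup(P=sp.csc_matrix(P), q=q, A=A, l=l, u=u, warm_start=True, verbose=False)
    for _ in range(iters):
        res = prob.solve()
        x = res.x
        A = sp.csc_matrix(eom_jacobian(x))   # relinearize about the new trajectory
        l = u = A @ x - eom(x)
        prob.update(Ax=A.data, l=l, u=u)     # sparsity pattern assumed unchanged
    return x
```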

The OSQP program seems to be able to run in under 1ms. Nice! Initial tests of our system seem to be running at about 100Hz.

Could probably find a way to make those A matrices faster than constructing them entirely anew every time. We’ll see if we need that.

I very inelegantly smash all the variables into x. OSQP and scipy sparse don’t support multi-index objects well, as best as I can figure: vectors should be one-dimensional and matrices two-dimensional.

Still to be seen if it works on hardware. I was already having infeasibility problems. Adjusting the regularization on P seemed to help.


Approximating Compiling to Categories using Type-level Haskell: Take 2

The previous episode is here.

Summary: I’m trying to use typelevel programming in Haskell to achieve some of the aims of Conal Elliott’s compiling to categories GHC plugin. The types of highly polymorphic tuple functions are enough to specify the implementation. We aren’t going to be able to piggy-back off of GHC optimizations (a huge downside), but we can reify lambdas into other categories and avoid the scariness of plugins.

The current implementation’s GitHub source is here.


JESUS CHRIST.

http://okmij.org/ftp/Haskell/de-typechecker.lhs

Of course, Oleg already did it. This is a file where he builds the implementation of a polymorphic function from the type signature. Instead of tuples, he is focusing on higher order functions with deep nesting of (->).

The trick that I was missing is in the IsFunction typeclass at the very end, which is only achievable with an Incoherent instance.

I would never have had the courage to use an Incoherent instance if I hadn’t seen a higher authority do it first. It has turned out in my typelevel journey that many instances that I’ve been tempted to make overlapping or incoherent don’t actually have to be, especially with the availability of closed type families. I think you really do need Incoherent in this application because type variables stay polymorphic forever.

To the best of my knowledge, if you need to differentiate between a tuple type (a,b) and an uninstantiated polymorphic value a’ like we do when deconstructing the input type of our lambda, you need to use an Incoherent instance. Since a’ could hypothetically eventually be unified to some (a,b), we should not be able to do different things for the two cases without stepping outside the usual laws of typeclass resolution.

New features of the implementation:

  • The new implementation does not require the V annotation of the input like the previous version did, thanks to the previously mentioned trick. This is super cool because now I can throw the stock Prelude.fst into toCcc.
  • I changed how the categorical implementation is built, such that it short circuits with an ‘id’ if a large structure is needed from the input, rather than deconstructing all the way to every piece of the input. Lots of other optimizations would be nice (vital?), but it is a start.
  • I also implemented a FreeCat GADT so that we can see the implementation in ghci.
  • I switched from using Data.Proxy to the type annotations extensions, which is a huge readability win.
  • I added a binary application operator binApp, which is useful for encapsulating categorical literals as infix operators into your lambda expressions.
  • General cleanup, renaming, and refactoring into library files.

A couple typelevel tricks and comments:

You often have to make helper typeclasses, so don’t be afraid to. If you want something like an if-then-else in your recursion, it is very likely you need a form of the typeclass that has slots for ‘True or ‘False to help you pick the instance.

If possible, you often want the form

rather than

The type inference tends to be better.

Here are some usage examples of toCcc.

My partially implemented version of some of Conal’s category typeclasses. Should I switch over to using the constrained-categories package?

 

The actual implementation of toCcc


Variational Method of the Quantum Simple Harmonic Oscillator using PyTorch

A fun (and useful!) method for solving for the ground state of the Schrödinger equation is to minimize the energy integral \int dx\, \psi^\dagger H \psi while keeping the total probability 1. PyTorch is a big ole optimization library, so let’s give it a go.

I’ve tried two versions: using a stock neural network with ReLUs, and making it a bit easier by using a Gaussian with variable width and shift.

We can mimic the probability constraint by dividing by the total normalization \int dx\, \psi^\dagger \psi. A Lagrange multiplier or penalty method may allow us to access higher wavefunctions.
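As a rough illustration (my own sketch of the Gaussian-ansatz version, on a fixed grid rather than the random sampling used in the post), minimizing that Rayleigh quotient with PyTorch might look like:

```python
import torch

# Gaussian ansatz psi(x) = exp(-(x - mu)^2 / (2 sigma^2)); mu and sigma are the parameters.
x = torch.linspace(-5.0, 5.0, 400)
dx = x[1] - x[0]
mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    sigma = torch.exp(log_sigma)
    psi = torch.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx ** 2             # finite-difference kinetic term
    H_psi = -d2psi + x[1:-1] ** 2 * psi[1:-1]
    energy = torch.sum(psi[1:-1] * H_psi) / torch.sum(psi[1:-1] ** 2)  # Rayleigh quotient
    energy.backward()
    opt.step()

print(float(energy))   # should approach 1 for H = -d^2/dx^2 + x^2 in these units
```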

SGD seems to do a better job getting a rounder gaussian, while Adam is less finicky but makes a harsh triangular wavefunction.

The ground state solution of -\frac{d^2\psi}{dx^2} + x^2\psi=E\psi is e^{-x^2/2}, with an energy of 1 in these units (the familiar 1/2 shows up for the convention H=-\frac{1}{2}\frac{d^2}{dx^2}+\frac{1}{2}x^2). We may not get it exactly, because we’re not sampling a very good total domain. Whatever, that’s for further investigation.

Very intriguing is that pytorch has a determinant in it, I believe. That opens up the possibility of doing a Hartree-Fock style variational solution.

Here is my garbage

Edit: Hmm, I didn’t compensate for the fact I was using randn sampling. That was a goof. I started using uniform sampling, which doesn’t need compensation.


Shit Compiling to Categories using Type level Programming in Haskell

So I’ve been trying to approximate Conal Elliott’s compiling to categories in bog standard Haskell, and I think I’ve got an interesting chunk of it working.

His approach of using a GHC plugin is better for a couple of reasons. One really important thing is that he gets to piggyback on GHC optimizations for the lambdas. I have only implemented a very bad evaluation strategy. Perhaps we could recognize shared subexpressions and so on, but it is more work. I seem somewhat restricted in what I can do, and sometimes type inference needs some help. Not great. However, GHC plugins definitely bring their own difficulties.

What I’ve got, I think, still has roots in my thinking from this previous post.

There are a couple of insights that power this implementation:

  1. A fully polymorphic tuple-to-tuple converter function is uniquely defined by its type. For example, swap :: (a,b) -> (b,a), id :: a -> a, fst :: (a,b) -> a, snd :: (a,b) -> b are all unique functions due to the restrictions of polymorphism. Thus typelevel programming can build the implementation from the type.
  2. Getting a typeclass definition to notice polymorphism is hard though. I haven’t figured out how to do it, if it is even possible. We get around it by explicitly newtyping every pattern match on a polymorphic variable like so \(V x, V y) -> (y,x). Two extra characters per variable. Not too shabby.
  3. You can abuse the type unification system as a substitution mechanism. Similar to HOAS, you can make a lambda interpreter at the type level that uses polymorphism as way of generating labels for variables. This is probably related to Oleg Kiselyov’s type level lambda calculus, but his kind of confuses me http://okmij.org/ftp/Computation/lambda-calc.html#haskell-type-level
  4. You can inject a categorical literal morphism using a wrapping type to be extracted later using an apply operator and type App f a. An infix ($$) makes it feel better.

 

So here is the rough structure of what the typelevel programming is doing:

You can do a depth-first traversal of the input tuple structure; when you hit V, unify the interior type with a Nat-labelled Leaf. At the same time, you can build up the actual value of the structure so that you can apply the lambda to it and get the output, which will hold a tree that has the morphism literals you want.

Then inspect the output type of the lambda which now has Nat labels, and traverse the labelled tree again to find the appropriate sequence of fst and snd to extract what you need. If you run into an application App, you can pull the literal out now and continue traversing down.

At the moment I’ve only tested with the (->) Category, which is a bit like demonstrating a teleporter by deconstructing a guy and putting him back in the same place, but I think it will work with others. We’ll see. I see no reason why not.

At the moment, I’m having trouble getting GHC to not freak out unless you explicitly state what category you’re working in, or explicitly state that you’re polymorphic in the Category k.

Some future work thoughts: My typelevel code is completely awful spaghetti, written like I’m a spastic trapped animal. It needs some pruning. I think all those uses of Proxy can be cleaned up by using TypeApplications. I need to implement some more categories. Should I conform to the Constrained-Categories package? Could I use some kind of hash consing to find shared structures? Could Generic or Generic1 help us autoplace V or locate polymorphism? Maybe a little Template Haskell spice to inject V?

Here are the most relevant bits, with my WIP git repository here.


Deducing Row Sparse Matrices from Matrix Vector Product Samples

So as part of getting the banded Hessian out of PyTorch, I’ve been trying to think about what it is I am doing, and I’ve come up with an alternative way of talking about it.

If every row of a matrix has fewer than N nonzero entries, you can back out that matrix from N matrix-vector samples of it. You have many choices for the possible sampling vectors. Random ones work well.


The unknown in this case is the row of the matrix, represented in green. We put a known set of inputs into it and get outputs. Each row of the output, represented in red, can tell us the matrix row. We have to invert the matrix consisting of the sampling entries that the nonzero elements of that row touch, represented in blue. That black T is a transpose. To turn those row vectors into column vectors, we have to transpose everything.
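A minimal sketch of that per-row solve (a banded example with random sampling vectors; all the names and sizes are mine):

```python
import numpy as np

n, B = 20, 5                 # matrix size and max nonzeros per row
half = B // 2
rng = np.random.default_rng(0)

# Build a random banded "unknown" matrix to recover.
A_true = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - half), min(n, i + half + 1)
    A_true[i, lo:hi] = rng.standard_normal(hi - lo)

S = rng.standard_normal((n, B))      # B random sampling vectors as columns
Y = A_true @ S                       # the matrix-vector samples we observe

# Row i of Y only involves the nonzero entries of row i of A:
# Y[i, :] = A[i, lo:hi] @ S[lo:hi, :], a small system we solve after transposing.
A_rec = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - half), min(n, i + half + 1)
    A_rec[i, lo:hi] = np.linalg.lstsq(S[lo:hi, :].T, Y[i, :], rcond=None)[0]

print(np.allclose(A_rec, A_true))    # True: the band is recovered exactly
```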

For a banded matrix, we have a shifting sample matrix that we have to invert, one for each row.

A cute trick we can use to simplify the edge cases where we run off the ends is to extend the sample matrix with some random trash. That will actually put the entries in the appropriate place and will keep the don’t cares close to zero also.

In my previous post I used a stack of identity matrices. These are nice because a shifted identity matrix is a circular permutation of the entries, which is very easy to undo. That was what the loop that used numpy.roll was doing. Even better, it is easy to at least somewhat vectorize the operation and you can produce those sampling vectors using some clever use of broadcasting of an identity matrix.

An alternative formulation that is a little tighter. I wanted the previous version because the samples aren’t actually always random. Often they won’t really be under our control.

 

This all still doesn’t address the Hermitian problem. The constraint that A is Hermitian hypothetically allows you to take about half the samples, but the constraints make it tougher to solve. I haven’t come up with anything much better than vectorizing the matrix and building a matrix out of the samples in the appropriate spots.

I think such a matrix will be banded if A is banded, so that’s something at least.

Pytorch Trajectory Optimization Part 4: Cleaner code, 50Hz

Cleaned up the code more and refactored some things.

Added backtracking. It will backtrack on the dx until the function is actually decreasing.
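In sketch form (not the actual code, just the idea):

```python
def backtrack(cost, x, dx, alpha=0.5, max_halvings=20):
    """Shrink the step until the cost actually decreases; alpha matches the 0.5 factor mentioned below."""
    c0 = cost(x)
    t = 1.0
    for _ in range(max_halvings):
        if cost(x + t * dx) < c0:
            return x + t * dx
        t *= alpha
    return x   # no decrease found; keep the old iterate
```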

Prototyped the online part with shifts. Seems to work well with a fixed penalty parameter rho ~ 100. Runs at ~50Hz with pretty good performance at 4 optimization steps per time step, faster or slower depending on the number of Newton steps per time step we allow ourselves. Still to be seen whether the thing will control an actual cartpole.

The majority of the time is still spent in the backwards passes that calculate the Hessian (~50%).

I’ve tried a couple different schemes (direct projection of the delLy terms or using y = torch.eye). None particularly seem to help.

The line search is also fairly significant (~20% of the time) but it really helps with both stability and actually decreasing the number of hessian steps, so it is an overall win. Surprisingly during the line search, projecting out the batch to 0 doesn’t matter much. How could this possibly make sense?

What I should do is pack this into a class that accepts new state observations and initializes with the warm start. Not clear if I should force the 4 newton steps on you or let you call them yourself. I think if you use too few it is pretty unstable (1 doesn’t seem to work well. 2 might be ok and gets you up to 80Hz maybe.)

The various metaparameters should be put into the __init__: the stopping cutoff (1e-7), the starting rho (~0.1), the rho increase factor (x10), the backtracking alpha decrease factor (0.5 right now), and the online rho (100). Hopefully none of these matter too much. I have noticed that going too small with the cutoff leads to endless loops.

Could swapping the ordering of time step vs variable number maybe help?

For inequality constraints like the track length and forces, exponential barriers seem like a more stable option compared to log barriers. Log barriers at least require me to check whether they are going NaN.

I attempted the pure Lagrangian version where lambda is just another variable. It wasn’t working that great.

 

 

Pytorch Trajectory Optimization 3: Plugging in the Hessian

I plugged in the Hessian extraction code for taking Newton steps.

When I profiled it using the oh-so-convenient https://github.com/rkern/line_profiler I found almost all of my time was spent in the delLy.backwards step. For each Hessian I needed to run this B (the bandwidth) times, and each run cost ~0.6ms. For the entire run to converge took about 70 iterations and 1000 runs of this backwards step, which came out to 0.6 seconds. It is insane, but even calculating the band of the Hessian actually costs considerably more time than inverting it.

So to speed this up, I did a bizarre thing. I replicated the entire system B times. Then I can get the entire Hessian band in a single call to backwards. Remarkably, although B ~ 15, this only slowed backwards down by 3x, which is a huge savings even if it is obviously inefficient. The entire program has gone down from 1.1s to 0.38s, roughly a 3x improvement. All in all, this puts us at 70/0.38 ~ 185 Hz for a Newton step. Is that good enough? I could trim some more fat. The Fast MPC paper http://web.stanford.edu/~boyd/papers/fast_mpc.html says we need about ~5 iterations to tune up a solution, which would mean running at 40Hz. I think that might be ok.
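My reading of the trick, as a rough sketch (loss_fn here is a hypothetical batched version of the objective that evaluates each replicated copy independently):

```python
import torch

def hessian_band(loss_fn, x, B):
    """Get B Hessian-vector products in one backwards call by replicating the system B times."""
    n = x.numel()
    X = x.detach().clone().repeat(B, 1).requires_grad_(True)   # (B, n): B independent copies
    loss = loss_fn(X).sum()                                    # same objective on each copy
    (g,) = torch.autograd.grad(loss, X, create_graph=True)     # (B, n) gradients, one per copy
    V = torch.zeros(B, n)
    for k in range(B):
        V[k, k::B] = 1.0            # shifted "stacked identity" probe vectors
    (Hv,) = torch.autograd.grad((g * V).sum(), X)              # one backwards for all B products
    return Hv                       # row k holds H @ V[k]; unscramble the shifts to get the band
```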

Since the Hessian is symmetric, it should be possible to use roughly half the calls (~B/2), but then actually extracting the Hessian is not so simple. I haven’t figured out a way to do that comfortably yet. I think I could figure out the first column and then subtract (roughly some kind of Gaussian elimination procedure).

It has helped stability to regularize everything with a surprising amount of weight in the cost. I guess since I anticipate all values being in the range of -10,10, maybe this makes sense.

Now I need to try not using this augmented Lagrangian method and just switching to a straight newton step.

Edit: Ooh. Adding a simple backtracking line search really improves stability.


Cartpole Camera System – OpenCV + PS EYE + IR

We tried using colored tape before. It was okay after manual tuning, but kind of sucked. Commercial motion tracking systems use IR cameras and retroreflectors.

We bought some retroreflective tape and put it on the pole. http://a.co/0A9Otmr

We removed our PS EYE’s IR filter. The PS EYE is really cheap (~$7) and has a high framerate mode (100+ fps). People have been using it for computer vision projects for a while.

http://wiki.lofarolabs.com/index.php/Removing_the_IR_Filter_from_the_PS_Eye_Camera

We followed the instructions, but did not add the floppy disk, and we sanded down the base of the lens to bring the image back into focus.

We bought an IR LED ring light, which fit over the camera with the plastic cover removed, and rubber-banded it in place.

http://a.co/2sGUY08

If you snip the photoresistor, the light is always on, since the photoresistor is high resistance in the dark. We powered it from a spare 12V power supply that we soldered a connector onto.

We had also bought an IR pass filter on amazon, but it does not appear to help.

Useful utilities: qv4l2, v4l2-ctl, and v4l2-utils. You can change lots of stuff.

qv4l2 -d 1 is very useful for experimentation.

Useful options to v4l2-ctl: -d selects the camera, -p sets the framerate, and -l gives a list of changeable options. You have to turn off the automatic stuff before a setting becomes changeable. Counterintuitively, auto-exposure seems to use 1 for off.

There has been a recent update to OpenCV to let the v4l2 buffer size be changed. We’re hoping this will really help with our latency issues.

A useful blog. We use v4l2-ctl for controlling the exposure programmatically.

http://www.jayrambhia.com/blog/capture-v4l2

Oooh. The contour method + rotated rectangle is working really well for matching the retroreflective tape.

https://docs.opencv.org/3.3.1/dd/d49/tutorial_py_contour_features.html

You need to reduce the video size to 320×240 if you want to go to the highest framerate of 187 fps.
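Putting those pieces together, a minimal tracking loop might look roughly like this (the camera index, threshold value, and so on are guesses of mine rather than our actual settings):

```python
import cv2

cap = cv2.VideoCapture(1)                        # assuming the PS Eye shows up as the second camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_FPS, 187)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)          # IR-lit tape is very bright
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        (cx, cy), (w, h), angle = cv2.minAreaRect(biggest)              # rotated rectangle fit
        print(cx, cy, angle)                                            # tape position and angle
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:                                            # Esc to quit
        break
```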

 

Regarding the frame delay problem from before, it’s not clear that we’re really seeing it? We are attempting both the screen timestamp technique and also comparing to our rotary encoder. In the screen timestamp technique, it is not so clear that what we measure there is latency, and if it is, it includes the latency of the monitor itself, which is irrelevant to us.
