Data Wrangling with Pandas

Multi-index column headers

Multi-index columns restrict what you can do in Pandas. For instance, DataFrame.query doesn't accept the tuple column labels, and writing functions for DataFrame.apply and DataFrame.groupby becomes complicated. Moving forward I would opt to temporarily join the column levels with an appropriate delimiter and expand them back when exporting, or just select one header row when possible, as in the sketch below.
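A minimal sketch of the join-then-expand approach; the column names and the "_" delimiter are placeholders (pick a delimiter that doesn't appear in your labels):

import pandas as pd

# Toy frame with two-level column headers, e.g. the output of groupby().agg()
df = pd.DataFrame({("price", "mean"): [1.0, 2.0],
                   ("price", "std"): [0.1, 0.2]})

# Temporarily join the levels so query/apply/groupby see flat string labels
df.columns = ["_".join(levels) for levels in df.columns]
cheap = df.query("price_mean < 1.5")

# Expand back to a MultiIndex before exporting
df.columns = pd.MultiIndex.from_tuples(
    tuple(name.split("_")) for name in df.columns)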

Read More

Model quantization with direct feedback alignment

In this experiment we compare MNIST training performance of the LeNet-5 model under three different training algorithms, with and without 4-bit quantization. The three training algorithms are backpropagation (BP), feedback alignment (FA), and direct feedback alignment (DFA). We chose 4-bit quantization to establish a baseline reference for the proposed work on binarization, namely quantized DFA (QDFA). The network consists of two convolutional layers followed by three fully-connected layers.
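To make the BP/DFA distinction concrete, here is a minimal NumPy sketch of a single DFA update on a toy fully-connected layer. The layer sizes and dummy data are placeholders, not the LeNet-5 configuration used in the experiment:

import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, lr = 784, 256, 10, 0.01
W1 = rng.standard_normal((n_hid, n_in)) * 0.01
W2 = rng.standard_normal((n_out, n_hid)) * 0.01
# DFA replaces the transposed forward weights in the backward pass
# with a fixed random projection of the output error
B1 = rng.standard_normal((n_hid, n_out)) * 0.01

x = rng.standard_normal(n_in)   # one dummy input
t = np.eye(n_out)[3]            # dummy one-hot target

a1 = W1 @ x                     # forward pass: tanh hidden, linear output
h1 = np.tanh(a1)
y = W2 @ h1

e = y - t                       # output error
# BP would compute: d1 = (W2.T @ e) * (1 - h1**2)
d1 = (B1 @ e) * (1 - h1**2)     # DFA: fixed random feedback path instead
W2 -= lr * np.outer(e, h1)
W1 -= lr * np.outer(d1, x)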

Read More

Training Binarized Neural Networks with binarized weights

Currently, training BNNs still requires weights in full/half-precision floating point. Training a BNN with its weights kept in binarized form would accelerate training and boost ML applications at the edge. This approach contrasts with the lottery ticket hypothesis, in which one starts with a larger network, finds a winning ticket (a sparse set of model weights), and prunes away the rest.
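For reference, standard BNN training keeps a latent full-precision copy of every weight and binarizes it on the fly; eliminating that latent copy is the point of the proposal. A rough sketch of the status quo, with placeholder shapes and a stand-in gradient:

import numpy as np

w_real = np.random.randn(128, 64) * 0.01   # latent full-precision weights

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)     # forward: sign(w) in {-1, +1}

w_bin = binarize(w_real)                   # binarized weights used in forward pass

# Backward: the straight-through estimator passes the gradient through
# sign() unchanged where |w| <= 1, then the *real-valued* copy is updated
grad_w_bin = np.random.randn(*w_real.shape)   # stand-in for a real gradient
ste_mask = (np.abs(w_real) <= 1.0).astype(w_real.dtype)
w_real -= 0.01 * grad_w_bin * ste_mask
w_real = np.clip(w_real, -1.0, 1.0)        # keep latent weights bounded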

Read More

TensorFlow Lite Benchmarks on Android/Galaxy Note

To understand the current state of machine learning on the edge, a friend and I ran benchmarks across devices, including the Intel Movidius NCS2 and several Android phones. This post summarizes my experience benchmarking TensorFlow Lite on the Samsung Galaxy Note 9. I will update this post with details and results from the experiment.
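Until the full results land, here is a minimal sketch of the kind of latency measurement involved, using the TensorFlow Lite Python interpreter; the model path, thread count, and run count are placeholders:

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Feed random data of the right shape/dtype and time repeated invocations
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()                        # warm-up run

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
print(f"mean latency: {(time.perf_counter() - start) / runs * 1e3:.2f} ms")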

Read More