Hike News

MQTT K8s Setup on Ubuntu 20.04

Install the k8s cluster

microk8s enable dns storage helm3
microk8s status

Install the MQTT helm chart

microk8s helm repo add truecharts https://charts.truecharts.org/
microk8s helm pull truecharts/mosquitto --version 8.0.11

microk8s helm install my-mosquitto truecharts/mosquitto --version 8.0.11

Check the status of the installed application.

microk8s status
microk8s kubectl get pods
microk8s kubectl logs <pod-name>
microk8s kubectl describe pods

Service YAML file

We need to expose the service to the outside world.
Thankfully microk8s has a built-in load balancer called metallb, which can be enabled with microk8s enable metallb.

Replace Y with the MQTT port number. Default 1883
Replace Z with the MQTT WebSocket port number. Default 9001

apiVersion: v1
kind: Service
metadata:
  name: mqtt-service
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: mosquitto
  ports:
    - name: http
      protocol: TCP
      port: Y
      targetPort: Y
    - name: https
      protocol: TCP
      port: Z
      targetPort: Z
  externalIPs:
    - X.X.X.X

More information here

Apply the service

microk8s kubectl apply -f ./mqtt-service.yaml 

Confirm the service is active

microk8s kubectl describe services mqtt-service

Local test client

You can test the connection locally on the server with this simple CLI MQTT client.
Test client

Firewall Rules

This assumes you are using ufw.
ufw is basically a wrapper around iptables. If you have ever used iptables directly, you understand why ufw exists.

sudo ufw default allow routed 
sudo ufw allow from X.X.X.0/X to any port Y proto tcp
sudo ufw status

Test external Connections

Once again try to connect to port Y with https://mqttx.app/cli



Microscope Camera

Camera Selection

A 5 MP camera is the minimum. A cellphone adapter is also available, but I want a dedicated camera.


This was a similar project to what I am trying to build. Very nice blog as well


Raspberry Pi High Quality Camera 12 MP C mount

The framerate on a Raspberry Pi is very low, so I purchased an Arducam camera-to-USB board.
I got the idea to use the Arducam adapter from https://www.briandorey.com/post/raspberry-pi-high-quality-camera-on-the-microscope

User Manual for the Pi camera adapter.

Arducam-UVC-Camera-Adapter-Board-for-12MP-IMX477-Raspberry-Pi-HQ-Camera-b0278.pdf UC-733_DIM.pdf

Raspberry Pi High Quality Camera 12 MP C mount

Components: Raspberry Pi High Quality Camera 12 MP C mount, ribbon cable, and Arducam UVC Camera Adapter Board.

Connecting the Arducam UVC Camera Adapter Board to the Raspberry Pi High Quality Camera


Camera Adapter Mount

I originally went with mounting the camera without any lenses, using this 3D-printed Raspberry Pi HQ camera adapter

The adapter mounted to the camera port of my AmScope T720B Scope



Using the above setup I was able to take this image with the camera. This is a plant leaf from the [Codiaeum variegatum](https://en.wikipedia.org/wiki/Codiaeum_variegatum) at ~40x

Camera Issues

The camera sensor size causes an inherent magnification effect called crop factor. The field of view is reduced by this magnification effect. You can see this in the image above.
More information here

In order to correct for the inherent magnification effect, an adapter was purchased.
From these sources I found that a 0.3x to 0.5x reduction lens is about right for my sensor size.
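As a rough sanity check, the needed reduction factor can be estimated by comparing the sensor diagonal to the eyepiece field number. The numbers below (a ~7.9 mm IMX477 sensor diagonal and a typical 18 mm widefield field number) are my assumptions, not figures from the sources above:

```python
# Rough estimate of the reduction lens needed to fill the sensor.
# Assumed values: IMX477 sensor diagonal ~7.9 mm, a typical widefield
# eyepiece field number of 18 mm. Neither figure comes from this post.
sensor_diagonal_mm = 7.9
eyepiece_field_mm = 18.0

# A reduction lens should shrink the intermediate image down to roughly
# the sensor diagonal, so the target factor is the ratio of the two.
reduction = sensor_diagonal_mm / eyepiece_field_mm
print(round(reduction, 2))  # ~0.44, inside the 0.3x-0.5x range
```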

Adapter lens selection

AmScope RU050 0.5X reduction lens for C-mount cameras and FMA050 C-mount to 23mm adapter. This fits my AmScope T720B scope.


Picture of dust taken with the 0.5x adapter. It's hard to tell from the images above, but the field of view is noticeably larger.


To protect the hardware I designed an enclosure.

Microscope_Camera_Enclosure-Lid.stl Microscope_Camera_Enclosure-Enclosure.stl Microscope_Camera_Enclosure.FCStd Microscope_Camera_Enclosure.FCStd1


I had problems with Cheese constantly changing the exposure. Thankfully many have suffered from this issue before. I found this link that is specific to the Raspberry Pi but still useful. https://hackernoon.com/polising-raspberry-pi-high-quality-camera-3z113u18
I had to use a more advanced piece of software called qv4l2.

To install on debian linux

sudo apt-get install qv4l2



Microscope Selection




From the research above I was able to settle on the following requirements

  • Greater than 1000X
  • Kohler light source
  • infinity corrected optics
  • Camera port


Amscope T720B https://amscope.com/products/t720b-hc2

Infinity Corrected Optical System with High Resolution
Fully Coated Optics with Crystal Clear & Sharp Images
Precise Mechanical Control System
Reversed Nosepiece Design
Kohler Illumination System with Field Diaphragm for Lighting Control
30-Degree Inclined, 360-Degree Swiveling, Compensation Free Trinocular Head
Eight Magnification Levels: 40X, 80X, 100X, 200X, 400X, 800X, 1000X, 2000X
Intensity-Variable Transmitted LED Lighting System
Abbe Condenser with Iris Diaphragm and Filter Holder
Rack and Pinion Adjustment for Condenser
Low Position Coaxial Stage Movement Controlling Knobs
Dual Side Coaxial Coarse and Fine Focusing Control
Adjustable Interpupillary Distance
Adjustable Diopter on Eyepieces
Durable Cast Alloy Frame with Stain Resistant Enamel Finish
Four Infinity Plan Objectives Included
Two Pairs of Extreme Widefield Eyepieces Included (EWF10X & WF20X)
Quadruple, Reversed, Extra-Large Nosepiece with Wide, Knurled Grip for Easy Operation
Large Double Layer Mechanical Stage with Stain Resistant Coating
Upward Stage Limit Stop to Protect Objectives and Slides
Manufactured under ISO 9001 Quality Control Standards
Excellent Five (5) Year Factory Warranty

Specifications :
Optical System: infinity corrected
Nosepiece: reversed, ball bearing quadruple
Head: gemel type trinocular head, 30-degree inclined
Eyepiece: high eye-point eyepieces, WF10X22mm, WF20X
Objectives: infinity plan objective 4X, 10X, 40X (spring), 100X (spring, oil)
Focusing: low position coaxial focus system
Focusing Range: 1-3/16” (30mm)
Interpupillary Adjustment Range: 2-3/16” - 3” (55-75mm)
Mechanical Tube Length: 6-5/16” (160mm)
Mechanical Stage: 8.5” x 5.9” (216mm x 150mm)
Stage Traveling Range: 2.9” x 2” (75x50mm)
Focusing Range: 0.95” (24mm)
Division of Fine Focusing: 0.00003935” (0.001mm)
Illuminator: Built-in Kohler LED illumination system
Condenser: N.A. 1.25 achromatic condenser
Illumination: Kohler, LED
Power Supply: 90V-240 wide voltage, CE certified
Built in measurement capabilities
Weight: 28 lbs

Packing List :
One Trinocular Compensation-Free Head
One Microscope Body with Frame, Base, and Kohler Illumination System
Four High Quality DIN Plan Achromatic Objectives: 4X, 10X, 40X and 100X
One Pair of Widefield Eyepieces: WF10X
One Pair of Widefield Eyepieces: WF20X
One Dust Cover
One HDMI camera
One HDMI cable
Immersion Oil
User’s Manual


There is no reason to pay top dollar for a microscope. You can find massive discounts on refurbished microscopes on eBay!



WLED Degchi Lamp

While traveling in India I came across these nice looking tin lamps.

It is sometimes called a Degchi lamp. Since I am not a fan of open flames, LEDs will have to fill the role of a candle. This build will be a bit rough. I will refine the lamp in later posts.




Esp8266 Huzzah

Esp8266 Huzzah documentation

Level shifter

GeeekPi 6Pack TXS0108E 8 Channel Logic Level Converter Bi-Directional High Speed Full Duplex Shifter 3.3V 5V for Arduino Raspberry Pi

LED Strip

Led Strip

Power supply

5V 10A power supply with barrel plug


Barrel plug





Wled binary


You will need a 3.3V USB-to-serial adapter. An FTDI-based USB-to-serial adapter is preferred.

Method 2 https://kno.wled.ge/basics/install-binary/


Connection between the usb to serial adapter and the ESP8266.

  • Logic Level set to 3.3V
  • ESP -> FTDI
  • TX -> RX
  • RX -> TX
  • VCC -> 5V
  • GND -> GND

Operating system permission workarounds

In order to write data to ttyUSB or other serial ports you must be a member of the dialout group.

sudo usermod -a -G dialout your_user_name

Log out and log back in for the group change to take effect.

sudo esptool.py -p /dev/ttyUSB0 write_flash 0x0 ./WLED_0.13.1_ESP8266.bin



Electronics Rough Fit

Playing around with the electronics to confirm that the design works as expected.

Everything is hooked up but the lights are not working.

Discovered that the OE pin on the TXS0108E needs to be pulled high to VA (the ESP logic-level high, 3.3V).

Physical Construction


Test Fit

Let's see how everything could fit inside the lamp. Who cares how it looks at the moment; we will polish it later.

Electronics Enclosure

FreeCad Model


First attempt at creating a quick enclosure.

View of the enclosure lid.

Enclosure STLs


Enclosure Main Body


Enclosure Main Body

Enclosure Assembly

1 amp Fuse

All the electronics seem to fit. A more professional version will be created when I get the parts.

Forcing all the electronics into the enclosure.

Everything is coming together.

Power on testing

Led Mounts

The inside of the lamp is coated in a thick non-conductive coating. For the time being the LED strip is just placed inside the lamp body.

Connecting to WIFI

WLED 13.1 has some trouble connecting to wifi networks.

  • Set the wifi control channel to a fixed channel, e.g. 1

  • Change the channel width to 20 MHz

  • Bind to a static IP in the router

  • Add the same static IP in the WLED wifi settings


I am not a fan of this very large black power cable, so I will replace the power cord with USB-C.
After measuring the power usage of the lamp at peak load, USB-C looks like a good option. Peak load: 0.6 A.
This will be covered in a follow up article.
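The arithmetic behind that conclusion is simple; a sketch, assuming the USB-C default of 5 V at up to 3 A (the post only states the measured 0.6 A peak):

```python
# Peak power draw of the lamp versus what basic USB-C can deliver.
voltage_v = 5.0
peak_current_a = 0.6                       # measured peak load from the post
peak_power_w = voltage_v * peak_current_a  # 3.0 W

usb_c_current_a = 3.0                      # assumed USB-C default current at 5 V
headroom = usb_c_current_a / peak_current_a
print(peak_power_w, headroom)              # 3.0 W, with 5x current headroom
```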

TP Link U3T Ubuntu

This device has the rtl8812bu chipset, and you will need to do a little more work to get it working.
Thankfully there is a working driver available for it here: https://github.com/cilynx/rtl88x2bu

To get it working, you will need to first install some packages and check out the Git repo:

sudo apt-get install build-essential dkms git
git clone https://github.com/cilynx/rtl88x2bu.git

Then follow the instructions here to install the driver:

cd rtl88x2bu
VER=$(sed -n 's/PACKAGE_VERSION="\(.*\)"/\1/p' dkms.conf)
sudo rsync -rvhP ./ /usr/src/rtl88x2bu-${VER}
sudo dkms add -m rtl88x2bu -v ${VER}
sudo dkms build -m rtl88x2bu -v ${VER}
sudo dkms install -m rtl88x2bu -v ${VER}
sudo modprobe 88x2bu


Ask Ubuntu

Rigol DS1052E Oscilloscope Encoder Repair

Broken encoder

I dropped my trusty Rigol scope off of the table while testing. The trigger encoder knob broke off.

Replacement part

After a bit of googling I was able to locate the part number. This is a very popular scope with hobbyist so this information was not all that hard to find.

Opening the case

The screws that hold the case on are Torx or star drive. There are six screws: two on the bottom near the feet, two under the handle, and two on either side of the power socket.

WARNING: Do not forget to remove the power button. If you try to remove the case with the power button still in place, the switch will snap off. The power button can be removed by pulling it upwards.

You will need an extension bit to get at the screws under the handle

Do not forget about the screws on the side

Back case removed

Once the case is removed, unscrew the standoffs on either side of the serial interface (DB9).
Lift off the metal RF shield.

You will need to remove all the screws inside the case. The power supply board must be removed.
Disconnect the power supply board. Watch out for the LCD lamp power cable (red/white cable with JST connector).

You will need to disconnect the white ribbon cable from the board at the bottom of the unit.

The front case panel can now be removed.

Power supply board. Power switch

Front case panel removed. Picture of the 3 screws holding on the user control board.
These will need to be removed.

Replacing the encoder

Broken encoder next to replacement encoder.

Bottom of the user control board. Unsoldering required.
WARNING: Rigol uses lead-free solder. Only use lead-free solder. If you mix leaded and lead-free solder, a new alloy with a higher melting point will form. Good luck removing that!

Desoldered encoder

Replacement encoder

Put everything back together

Grub Boot Loader Not Found

Insert Grub console picture here

Run the following commands:

  • ls

Will show you all the drive partitions


  • ls (hd0,gpt1)/

Will give you a listing of all the files on the drive

  • Find the drive that contains /boot


  • locate the grub.cfg

configfile /PathToFile/grub.cfg

configfile (hd0,gpt1)/boot/grub/grub.cfg

  • Grub boot menu should start.

  • Kernel is now running

  • Fix any broken packages
    dpkg --configure -a

  • Check systemctl

  • Look for any errors that may have occurred.

  • Fix any filesystem errors that may have occurred

  • look in /dev/disk/by-uuid

  • Perform fsck on any drives that require a repair.
    fsck /dev/disk/by-uuid/abc456

  • Often the GRUB loader cannot find the EFI file

  • sudo apt install grub-efi-amd64

Numpy LeNet 5 with ADAM

John W Grun


In this paper, a manually implemented LeNet-5 convolutional neural network with an Adam optimizer written in Numpy will be presented. This paper will also cover a description of the data used to train and test the network, technical details of the implementation, the methodology of training the network and determining hyperparameters, and the results of the effort.


LeNet-5 was created by Yann LeCun and described in the paper “Gradient-Based Learning Applied to Document Recognition”. LeNet-5 was one of the first convolutional neural networks used on a large scale to automatically classify hand-written digits on bank checks in the United States. Prior to LeNet, most character recognition was done by hand-engineering features, followed by a simple machine learning model such as K-nearest neighbors (KNN) or a support vector machine (SVM). LeNet made hand-engineered features redundant, because the network learns the best internal representation from the training images automatically.

This paper will cover some of the technical details of a manual Numpy implementation of the LeNet-5 convolutional neural network, including details about the training set, the structure of the LeNet-5 CNN, weight and bias initialization, the optimizer, gradient descent, the loss function, and speed enhancements. The paper will also cover the methodology used during training and selecting hyperparameters, as well as the performance on the test dataset.

Related work

There are numerous examples of numpy implementations of LeNet-5 across the internet, none of more significance than any other. LeNet-5 is now a common architecture used to teach new students the fundamental concepts of convolutional neural networks.

Data Description

The MNIST database of handwritten digits contains a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28 x 28 pixel grayscale image.
All training and test examples of MNIST were converted from grayscale images to a bilevel representation to simplify the function the CNN needed to learn. Only pixel positional information is required to correctly classify digits; grayscale offers no useful additional information and only adds complexity. The labels of both the test and training examples were converted to one-hot vectors to make them compatible with the softmax output and cross-entropy loss function. The indices of both the training and test sets were further randomized to ensure each batch was a random distribution of all 10 classes.
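A minimal sketch of that preprocessing; the 128 threshold and the function names are my assumptions, since the post only says the images were made bilevel and the labels one-hot:

```python
import numpy as np

def binarize(images, threshold=128):
    """Convert 8-bit grayscale images to a 0/1 bilevel representation."""
    return (images >= threshold).astype(np.float32)

def one_hot(labels, num_classes=10):
    """Convert integer labels to one-hot vectors for the softmax output."""
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out

# Shuffle the example indices so each mini-batch mixes all 10 classes.
rng = np.random.default_rng(0)
train_order = rng.permutation(60000)
```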

Model Description


The model is an implementation of LeNet-5 with the following structure:

  • Input 28 x 28
  • Convolutional layer (Pad = 2, Stride = 1, Activation = ReLU, Filters = 6, Size = 5)
  • Max Pool (Filter = 2, Stride = 2)
  • Convolutional layer (Pad = 0, Stride = 1, Activation = ReLU, Filters = 16, Size = 5)
  • Max Pool (Filter = 2, Stride = 2)
  • Convolutional layer (Pad = 0, Stride = 1, Activation = ReLU, Filters = 120, Size = 5)
  • Fully Connected (Size = 120, Activation = ReLU)
  • Fully Connected (Size = 84, Activation = ReLU)
  • Soft Max (10 Classes)
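Walking the 28 x 28 input through those layers confirms the shapes line up. The 5 x 5 filter size assumed below for the later convolutional layers follows the original LeNet-5 paper rather than this post:

```python
def conv_out(size, filt, pad, stride):
    """Output edge length of a square convolution."""
    return (size + 2 * pad - filt) // stride + 1

s = 28
s = conv_out(s, 5, 2, 1)   # conv1, 6 filters   -> 28
s = s // 2                 # 2x2 max pool       -> 14
s = conv_out(s, 5, 0, 1)   # conv2, 16 filters  -> 10
s = s // 2                 # 2x2 max pool       -> 5
s = conv_out(s, 5, 0, 1)   # conv3, 120 filters -> 1
print(s)  # 1: a 1x1x120 volume feeding the fully connected layers
```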

Weight and bias initialization

Since the original LeNet-5 predates many of the more optimal weight-initialization schemes such as Xavier or He initialization, the weights were initialized with numpy random.randn while the biases were zero-filled with numpy zeros.
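In code form that initialization is just the following; the 5 x 5 x 1 x 6 shape matches the first convolutional layer, and the array layout is my choice, not necessarily the post's:

```python
import numpy as np

# conv1: 5x5 filters, 1 input channel, 6 output filters
weights = np.random.randn(5, 5, 1, 6)  # plain randn, predating Xavier/He schemes
biases = np.zeros(6)                   # biases zero-filled
```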


At first a constant-learning-rate optimizer was used for this network, but stable convergence required a very small learning rate, which in turn required a very long training time to achieve reasonable accuracy on the test set. The constant-learning-rate optimizer was therefore replaced with a numpy implementation of the ADAM optimizer. ADAM allowed a higher learning rate, resulting in quicker and smoother convergence. The formulas that describe ADAM are shown below:
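A single Adam update step in numpy, using the standard formulation and default decay rates from the Adam paper; the variable names are mine, not the post's:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=2e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running moment estimates, t >= 1."""
    m = beta1 * m + (1 - beta1) * grad         # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2    # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)               # bias-corrected moment estimates
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```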

Gradient Descent

This implementation of LeNet-5 uses mini-batch gradient descent, a trade-off between stochastic gradient descent (training on one sample at a time) and batch gradient descent (training on the entire training set). In mini-batch gradient descent, the cost function (and therefore the gradient) is averaged over a small number of samples. Mini-batch gradient descent was selected for its increased convergence rate and its ability to escape local minima.
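A sketch of the batching itself; X and y are placeholders, and the batch size of 32 matches the hyperparameters reported in this post:

```python
import numpy as np

def minibatches(X, y, batch_size=32):
    """Yield shuffled (inputs, labels) mini-batches; the gradient is then
    averaged over each batch before the weight update."""
    order = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        pick = order[start:start + batch_size]
        yield X[pick], y[pick]
```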

Loss function

LeNet-5 produces a 10-class categorical output representing the digits 0 to 9. The original LeNet-5 used maximum a posteriori (MAP) as the loss function. Cross-entropy was chosen as the loss function in this implementation instead of MAP, since cross-entropy appears to be the dominant loss function for similar classification problems and source code was available to check against. The formula for cross-entropy loss is given below:
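In numpy, the batch-averaged cross-entropy over softmax outputs looks like this (a standard formulation, not the post's verbatim code):

```python
import numpy as np

def cross_entropy(probs, one_hot_labels, eps=1e-12):
    """L = -(1/N) * sum_i sum_k y_ik * log(p_ik); eps guards against log(0)."""
    return -np.mean(np.sum(one_hot_labels * np.log(probs + eps), axis=1))
```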

Speed Enhancements

To train the CNN in a reasonable amount of time several performance enhancements had to be made.

The python profiler was used to identify locations in the code that would have the largest effect on performance. The convolutional and max pooling layers consumed the majority of the running time. The running time of the convolutional and max pool layers was decreased by first converting the single threaded functions into multithreaded functions. Processing was divided up equally across the number of threads. Once threading was confirmed to be working properly, the Numba Just in Time compiler (JIT) was employed to convert python functions into native code. Numba JIT was then liberally applied throughout the code. These enhancements reduced the training time from over 1 day to a few hours, constituting a 6-8x speed up on average.
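As an illustration of the row-splitting approach (not the post's actual code), here is a "valid" convolution whose output rows are computed by a thread pool; the post then layers Numba's @njit on top of functions like this:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def conv2d_valid_threaded(image, kernel, workers=4):
    """Single-channel 'valid' convolution with rows split across threads."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))

    def do_row(i):  # each task fills one output row
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(do_row, range(oh)))  # force completion of all rows
    return out
```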

Method Description And Experimental Procedure

The LeNet 5 model implementation was trained on the MNIST dataset. After each training, the training loss versus epoch was plotted. The learning rate was decreased until the training loss vs epochs was a monotonically decreasing function. The number of epochs was selected to minimize the training loss while the training loss continued to decrease with every training epoch. Adjustments to the epochs sometimes also required adjustments to the learning rate to keep the training loss vs epoch a monotonically decreasing function.
In addition to the training loss, the prediction accuracy was computed. The accuracy was computed by the following method:
The input images were forward propagated through the network with the weights and biases learned during training. The class with the largest magnitude was selected as the prediction. The predicted class was compared to the label for a given input image. The percentage of correct predictions was computed across all input images forward propagated through the network.
The prediction accuracy was computed for both the training and test sets. In a well-trained network (one not underfitting or overfitting) the test prediction accuracy should be close to the training prediction accuracy. If the training prediction accuracy is far greater than the test prediction accuracy, it is a sign the network is overfitting on the training data and failing to generalize.
The batch size was selected primarily based on the cache limitations of the processor. A batch size of around 32 was determined to be small enough to fit in cache while also large enough to reduce overhead from thread context switching.
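The accuracy computation described above reduces to an argmax comparison:

```python
import numpy as np

def accuracy(probs, one_hot_labels):
    """Fraction of examples whose largest-output class matches the label."""
    predictions = np.argmax(probs, axis=1)     # class with the largest magnitude
    truth = np.argmax(one_hot_labels, axis=1)  # index of the one-hot label
    return float(np.mean(predictions == truth))
```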


Hyperparameters

The hyperparameters for this numpy implementation of LeNet-5 are as follows:

  • Epochs = 20
  • Learning rate = 0.0002
  • Batch = 32

Training time

After applying the speed enhancements, the total training time on the full training set of 60,000 examples was brought down from 26 hours to only 2.75 hours.

Training loss

The training loss of LeNet-5 plotted over 20 epochs. The training loss is monotonically decreasing, indicating the network is effectively learning to differentiate between the ten classes in the MNIST dataset.


Accuracy on test set = 95.07%
Accuracy on train set = 94.90%
The LeNet-5 implementation achieved high accuracy on both the test and train sets, without the significant gap between the two that would indicate overfitting.


A LeNet-5 convolutional neural network has been implemented using only Numpy, yielding prediction accuracies over 95% on the test set. The network was trained on all 60,000 examples in the MNIST training set and tested against the 10,000 examples in the MNIST test set. The network used the standard LeNet architecture with modifications where required. To decrease convergence time, a numpy ADAM optimizer was written. Several speed enhancements such as multithreading and just-in-time compilation were employed to bring training time down to a reasonable period.


[1] Lavorini, Vincenzo. “Speeding up Your Code (4): in-Time Compilation with Numba.” Medium, Medium, 6 Mar. 2018, medium.com/@vincenzo.lavorini/speeding-up-your-code-4-in-time-compilation-with-numba-177d6849820e.
[2] “Convolutional Neural Networks.” Coursera, www.coursera.org/learn/convolutional-neural-networks.
[3] LeCun, Yann. MNIST Demos on Yann LeCun’s Website, yann.lecun.com/exdb/lenet/.
[4] Lecun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. doi:10.1109/5.726791
[5] “MNIST Database.” Wikipedia, Wikimedia Foundation, 11 Apr. 2019, en.wikipedia.org/wiki/MNIST_database.
[6] “Cross Entropy.” Wikipedia, Wikimedia Foundation, 8 May 2019, en.wikipedia.org/wiki/Cross_entropy.
[7] “Stochastic Gradient Descent.” Wikipedia, Wikimedia Foundation, 29 Mar. 2019, en.wikipedia.org/wiki/Stochastic_gradient_descent.


BumbleBee Quad Copter



Frame Configuration

Quad X


Readytosky Pixhawk PX4 Flight Controller Autopilot PIX 2.4.8 32 Bit Flight Control Board+Safety Switch+Buzzer+I2C Splitter Expand Module+16GB SD Card

Readytosky M8N GPS Module Built-in Compass Protective Case with GPS Antenna Mount for Standard Pixhawk 2.4.6 2.4.8 Flight Controller

Remote Control

Turnigy TGY-I6

PWM To PPM Conversion

usmile PPM Encoder With 10pin Input & 4pin Output Cable For Pixhawk/PPZ/MK/MWC/Pirate Flight Control

Motor Controllers

Turnigy MultiStar V.20

The internal BEC provides 5V to the rest of the system.
WARNING: If using the ESC BECs to power your system, you may need to disconnect all but one of the 5V connections from the ESC BECs. Only one power source!


3300 mAh Battery





Mission Planner ( Ground Control )

APM Planner V2.0





Motor Connections

Front Left

Front Right

Rear Left

Rear Right