Reading a Seneye using a Raspberry Pi – coding

In my previous post I discussed how to discover information about the Seneye device; here I will describe some simple code to read values from it and push them to an MQTT broker. I have this running on a small Raspberry Pi Zero W, on which I also run the Motion software and a small streaming webcam.

Having got lots of good information from the Linux commands, you start by finding out whether the device is attached:

    dev = usb.core.find(idVendor=9463, idProduct=8708)
    assert dev is not None, 'Seneye SUD not found'

Next, ensure that the operating system does not have control of the device:

    interface = 0
    if dev.is_kernel_driver_active(interface):
        dev.detach_kernel_driver(interface)

Then set the first configuration and claim the interface – and it needs to be done in that order, apparently!

    dev.set_configuration()
    usb.util.claim_interface(dev, interface)
    cfg = dev.get_active_configuration()
    intf = cfg[(0, 0)]

Alternate settings may be ignored as most devices do not have them, so we move straight to the endpoints and search for the first IN and OUT:

    epIn = usb.util.find_descriptor(
        intf,
        custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_IN)
    epOut = usb.util.find_descriptor(
        intf,
        custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_OUT)

Then send the READING message to the device, read the immediate response, and read again with a longer timeout so that the measurements can be taken and returned.
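As a rough sketch, that exchange might look like the following – the 64-byte frame size and the timeout values are my assumptions from experimentation, not official documentation:

```python
# Send the READING command and collect the two responses. The command
# string comes from the Seneye SUD protocol; the frame size and the
# timeout values here are assumptions, so adjust as needed.
def read_sud(epOut, epIn):
    epOut.write(b'READING')               # ask the device to take a reading
    ack = epIn.read(64, timeout=500)      # immediate acknowledgement
    data = epIn.read(64, timeout=10000)   # wait for the measurement frame
    return bytes(data)
```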


Once that is done, the fun of picking the bit flags and integers out of the results starts. I have decoded just six of them, as they were the ones I could confirm from the Seneye C++ program. At the end you need to send a closing message called “BYESUD” – I guess to tell the device to go into sleep mode or whatever.
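For illustration only, the unpacking looks something like this – every offset, scale factor and flag bit below is a placeholder, not the real Seneye layout, so check them against the Seneye C++ sample code before trusting a reading:

```python
import struct

# Unpack a few example values from a response frame. The offsets and
# scale factors here are HYPOTHETICAL - verify against the C++ sources.
def parse_frame(frame):
    flags = frame[2]
    return {
        'in_water':    bool(flags & 0x01),   # bit flag: probe submerged
        'slide_ok':    bool(flags & 0x02),   # bit flag: slide present
        'temperature': struct.unpack_from('<i', frame, 8)[0] / 1000.0,
        'ph':          struct.unpack_from('<H', frame, 12)[0] / 100.0,
    }
```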


Finished code

All this code is available in my GitHub repository and I’d encourage you to read it, try it on one of your systems, and, if you want to improve it, fork the repo and submit pull requests to me. I’m not looking to make it complex with control functions and displays – just something simple and lightweight.


Reading a Seneye using a Raspberry Pi – USB devices

I recently purchased the excellent Seneye aquarium monitor. It is a small device that sits in a fish tank and monitors various parameters.


I’d been looking for a simple device like this for some time – not too complex, not too expensive. It may not sense some parameters needed in a marine tank; however, it does do the basics such as temperature, in/out of water, ammonia and pH. As I mentioned before, water chemistry is difficult and you shouldn’t knock those who have tried. Other devices are more complex, more expensive, or never actually made it to market. In essence it reads the colour change of a litmus strip by comparing it to a sealed reference, and also measures temperature and water refraction.

The challenge came about because, although they have a cloud solution that transmits readings every 30 minutes, the device itself has to connect to a Windows PC and thence to the cloud. I’d looked at running a low-powered Windows SBC, or even purchasing Seneye’s always-on connection device, but the easiest option seemed to be to read the USB device directly.

USB devices

USB devices are a wonder to behold. They do a lot of work internally, and the USB protocol spec runs to many pages. I looked for libraries to help me read the device and found a couple that work in Python, my language of choice. USB also defines a special class of device known as the ‘HID’, or Human Interface Device. These are things such as USB mice, track-pads and keyboards. They have a simpler interface and use only two of the transfer modes: control and interrupt.

Walking the USB tree

As USB devices have a hierarchical protocol, we need to find out some things about ours at each level of the hierarchy:

  1. Device
  2. Configuration
  3. Interface
  4. Alternate setting
  5. Endpoint

We start by looking at the device itself using Linux commands, principally udevadm. It may not be installed on your system by default; if not, install it using your distro’s package manager.

Firstly, using lsusb we get to see where our device is sitting on the bus – and note that this changes every time we plug it in.

Bus 002 Device 089: ID 24f7:2204

… so our device is on the bus number 2, and is at device number 89. Now, using udevadm we can test this and get the device characteristics:

udevadm info -a -p $(udevadm info -q path -n /dev/bus/usb/002/089)
looking at device '/devices/pci0000:00/0000:00:14.0/usb2/2-1':
    ATTR{bNumInterfaces}==" 1"
    ATTR{manufacturer}=="Seneye ltd"
    ATTR{product}=="Seneye SUD v 2.0.16"
    ATTR{version}==" 2.00"

From this we can see that the device has one configuration and one interface. The full output also contains the device’s serial number; other notable items are the maximum packet size and the power requirements. That leading space in the bNumInterfaces value is a little odd, though.
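One small gotcha when moving between tools: lsusb reports the vendor and product IDs in hexadecimal, while pyusb’s find() takes ordinary integers, so 24f7:2204 has to be read as hex:

```python
# lsusb shows 'ID 24f7:2204' - both numbers are hexadecimal.
vendor_id = 0x24f7   # 9463 in decimal
product_id = 0x2204  # 8708 in decimal

# Either form works with pyusb:
#   usb.core.find(idVendor=0x24f7, idProduct=0x2204)
```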

Why is everything on Windows?

On the trail of my aquarium monitor I came across the excellent Seneye range of aquarium sensors. They seem to provide all that I need:

  • Acidity (pH) reading
  • Temperature
  • Light
  • Ammonia (NH3)

… and are USB connected and powered. So far so good. They seem to use a replaceable litmus test strip which is read by what appears to be a colour-detecting LDR, so I understand the need to replace them periodically to maintain calibration and readings. What is a little frustrating is that the software only runs on Windows.

Now, I think Windows is a capable OS and useful in a variety of contexts, but one of those isn’t the area of small and embedded devices. These have very limited memory models and typically require ultra-low power levels when sleeping – something I’m certain that Windows doesn’t really understand. To run Windows locally I could actually buy a LattePanda board with Windows 10 IOT, but at around £100 the sticker shock has set in and I am thinking of taking another route.

Off-the-Shelf Devices

I think I have two options at this point: Continue with the Seneye and try to get their SUD Driver working on a Linux box, or roll my own:

Seneye and SUD driver

Temperature and light are fairly easy to get with sensors like the DS18B20 and light-dependent resistors (LDRs). So the extras which the Seneye solution gives are the pH reading (a probe isn’t easy to calibrate, nor to leave in a solution too long) and the ammonia. I appreciate that these are difficult things to measure, for the reasons below.

Roll my own

What I’d ideally like is a single-board computer that is tiny, runs off a battery forever, reads from the sensors outlined above, and can communicate with a local WiFi access point. The rest I can do myself, including piping MQTT messages to a remote server, setting up a firewall and whatnot. I think all of this could conceivably almost fit onto an ESP8266 device, but the voltage handling for the pH probe would most likely not go too well, and I don’t know about the inputs for the temperature probe. Measuring the chemicals is far more difficult; until that is solved in a way that prevents fouling of the sensor and leaching of the indicator dyes, I think I will let others do the pathfinding.
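The MQTT piping really is the simple part. As a sketch – the topic layout and payload fields here are my own invention, and the actual publish would use a client library such as paho-mqtt:

```python
import json

# Build an MQTT topic and JSON payload for one sensor reading.
# The 'tank/<sensor>' topic scheme and the payload fields are
# assumptions, not any standard - adjust to taste.
def make_message(sensor, value, unit):
    topic = 'tank/{}'.format(sensor)
    payload = json.dumps({'value': value, 'unit': unit})
    return topic, payload

# With paho-mqtt, publishing would then be roughly:
#   client.publish(*make_message('temperature', 24.1, 'C'))
```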

So it is likely that something more capable, like at least a WiPy or RasPi Zero W, would work – or something larger still, like a full-blown Odroid C2 or RasPi 3. But why stop there? Why not go the full LattePanda and £100, if you need that much power? Mind you, none of the bigger boards runs well on batteries.

Chemistry – measuring soup

What I really need is a neat, innovative way to read water chemistry. My son who is a chemistry student at university tells me that lab equipment to measure individual molecules is terribly expensive, and that mostly they are used in a pure environment where only one type of molecule is present in a solution. What I need is a way to look in a non-invasive way at ‘soup’, and tell how much of one chemical it contains.

Not easy.

This is likely the reason many of these devices use rare-earth metals and fancy rotating face-plates to discourage fouling. They still need replacing every so often because they sit in the aquarium water, and their dyes and inks leach away. The Seneye handles this with a covered reference litmus strip, while another strip is exposed to the aquarium water and provides the measurements.


My son spoke of IR spectroscopy, which is used in organic chemistry, and UV spectroscopy, which is used elsewhere. IR spectroscopy is likely what devices like the Mindstream are using, comparing reference ranges with what comes back from their IR sensors, and using clever materials and rotation to avoid excessive fouling. Measuring such a chaotic solution would likely mean that multiple wavelengths would be returned; however, if you just want very coarse-grained information like ‘no ammonia/ammonia’, it may be possible.

My guess is that these real-world issues have hampered a single device that tells you what is in your aquarium water for some time, and that until some clever thinking is applied or new material science used, we will have to continue either replacing our litmus strips periodically or buying expensive reference disks.

Aquarium monitor

Not satisfied by not finishing my first project, I boldly go where I have gone before. This time I approach my other hobby of fish keeping with all the unbridled enthusiasm of the wandering electronics geek. I’m going to build a fish tank monitor!

This was all started by building a small tropical fish tank for my in-laws. It will be sited away from where I live, and rather than the hard-landscaped gravel aquarium with plastic plants which it started as, I have suggested a more living aquarium with shrimp, snails, a few fish and easy plants such as ‘Cuba’ and lilies. In addition, a much more capable external Eheim canister filter will scrub the water well, and I am recommending little or no water changes, as that works very well for my 120l fish tank.

But all this will happen a long distance away from here and I want to know how things are going remotely – hence the felt need for a monitoring station. While there are ones like the Seneye, they need a replaceable slide every month, and others like the Mindstream or Apex are truly expensive – plus they are built to control dosing or other schedules like lighting. I can do all of that using a cheap timer, and don’t need the expensive gear for what needs to be a simple tank.

Ideally I’d like to hook up a couple of sensors to a single-board computer such as an Odroid or Raspberry Pi, and connect to the local WiFi to transmit readings through to somewhere else – none of that worries me at all, and using things like MQTT makes it all very simple. Apparently for the Seneye you don’t even need the branded web server, as you can use another server to do it – see here. But the sensors are the right pain, as I’ll explain below. I’d like to be able to read:

  • temperature probe – these are simple
  • pH probe – much more complex; needs both calibration and removal from the water to limit fouling. I could use something like this to read the probe
  • NH3 as this affects the fish badly
  • light levels – likely simple as well
  • water levels – no water = bad! A conduction strip or similar would do
  • … anything else I dream up.
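For the temperature probe at least, the code side is straightforward: on a Raspberry Pi a DS18B20 shows up as a text file under the 1-wire sysfs tree (assuming the w1-gpio and w1-therm kernel modules are loaded), and parsing it takes a few lines:

```python
import glob

# A DS18B20 on the Pi's 1-wire bus exposes readings as text in
# /sys/bus/w1/devices/28-*/w1_slave; the second line ends 't=<millidegrees C>'.
def parse_w1_slave(text):
    lines = text.strip().split('\n')
    if not lines[0].endswith('YES'):              # CRC check failed
        return None
    return int(lines[1].split('t=')[1]) / 1000.0  # degrees Celsius

def read_ds18b20():
    path = glob.glob('/sys/bus/w1/devices/28-*/w1_slave')[0]
    with open(path) as f:
        return parse_w1_slave(f.read())
```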

I don’t need a display nor a GUI front-end, as I am happiest treating my SBCs as remote and headless – I find it tunes the mind towards not trying everything at once, and towards understanding how to recover remotely without a keyboard.

Weather calendar: constrained memory

I am learning a whole lot more about the constrained environment of the microcontroller. For a start, you don’t have a Linux command line! And, perhaps more insidiously, you do not have much memory.

Platforms such as Arduino or Pyboard have very restricted memory spaces, and in essence you ‘sit’ within the Python interpreter and run commands there. It is not quite that simple – some implement a REPL, some a server that takes FTP or Telnet commands – but it is very different from working on even a small single-board computer such as a Raspberry Pi or Odroid, which has full Linux, command-line shells such as Bash, and even desktops and browsers.

What happened was that I was getting along well with my project displaying weather patterns on the e-ink device, retrieving more and more detail on the controller, when after a few iterations I started getting memory errors. Thinking this was just a problem of garbage collection, I called the collector manually and got a few more iterations – but the code still eventually crashed. It seems that the JSON parser and friends leak memory and no amount of garbage collection will help – and making direct calls to the weather-service URLs from the board surely isn’t helping either!
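For reference, the manual collection looked something like this – gc.mem_free() exists on MicroPython but not on desktop Python, hence the guard:

```python
import gc

# Run a fetch/display cycle, forcing garbage collection between passes.
def fetch_loop(fetch, iterations):
    for _ in range(iterations):
        fetch()
        gc.collect()                    # reclaim whatever we can each pass
        if hasattr(gc, 'mem_free'):     # MicroPython-only diagnostic
            print('free bytes:', gc.mem_free())
```

It staved off the crash for a few more iterations, but it was no cure for the underlying leak.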

So, refactor.

I’ve now split the code into two components: a server component most likely running under NodeRED on one of my SBCs, and the display component which will simply take the weather readings from the server and display them, concentrating on what it does best.

I may also gain from this method, as the display component can run on smaller embedded devices such as the ESP8266, which do low power much better than fuller devices such as the WiPy2. Who knows – maybe months of display without replacing the batteries?


I started musing about the complexity of modern computer systems after talking with a colleague about recent computer crashes. Those not familiar with modern computer systems expect them to work better than they do and seem to take delight in handing out blame where none lies. Having worked for decades in computers both large and small I am astounded by a couple of things: that complex computer systems actually work pretty well, and that they don’t crash more.

Humans seem to naturally enjoy reducing things to black and white, or two or three choices. Listen to the evolutionary biologists and they’d claim all sorts of things to prove this, such as living in caves and having only a few foods to eat or whatnot. I don’t entirely believe them either – they sound too much like looking for the hypothesis in the evidence, rather than the other way around.

In my professional life I have been through perhaps 3 or 4 major incidents where massive numbers of people could have been impacted and each time a very focused and dedicated technical team have averted disaster. These things are not easy and take real hard work. I’d rather rely on people who have experienced real complex issues than those armchair generals that never saw a battle.

Case in point: you’d perhaps expect that major companies know exactly what systems they have installed, how many applications run on them, and where they are? You’d be wrong. Time and time again I hear stories, or know from personal experience, that they do not know entirely where things are. This varies from the incidental (we didn’t know that we were running that much software) through to the monumental: a major corporation thinking it had about 8,000 systems and finding that it had … 14,000. That is quite a difference from what they expected.

Putting aside our human tendency to over-simplify, what happens as systems get even more complex? Can we get computers to manage other computers, can we adopt methods and architectures which are much more emergent rather than the procedural and directive that we currently use? What happens when the system becomes more than the sum of its parts and can’t be governed except by itself?


Welcome to the future, it looks a lot like today

Interesting discussion on an internet forum about job redundancies and how heartless large corporations are in dealing with employees. It does make me wonder who is wrong here: the employees’ expectation of a ‘job for life’, or capitalism, which rides on the tides of the market – a fundamental aspect of which is that supply answers demand. Redundancies are built into the system.

Of course there are reams of tomes written to describe the good and bad of economics, and I don’t think command-and-control economies are any better, nor those built on class systems or corruption. Drug- or oil-economies aren’t much better (unless you are Norway and put it into a long-term state fund) and I don’t admire those economies which allow some members to have wealth because of their family connections. No, I am not a monarchist.

What I do admire is a level playing field where all are given the same start to life, the same opportunities, and the same incentives to develop their full potential. Offering to let them become part of the ruling elite, for example, is a straw-man argument: privileging the few at the expense of the many simply extends the injustice into the next generation.

Egalitarianism has benefits, both for those at the top of society and for those at the bottom. My belief is that societal expectations have to change into a more fluid model that sees expert plumbers at the same level as philosophers (or bankers, lawyers, doctors) and rewards effort and dedication rather than connection and background. Organisational models need to change as well, so that companies no longer have the protection of ‘personhood’ under the law, and plain old hierarchical organisations adapt to a more fluid view of entering and leaving the organisation.

I’m reading “Reinventing Organisations” by Frederic Laloux and taking an interest in the concepts of Holacracy – although I’d have to admit that a lot of things get hyped as the new silver bullet and touted around the management training circles simply as a way of earning money for those management consultants. I’d wait for a couple of centuries of real-world testing before suggesting that any of these are more than just fads.

Running Plex on ARM

I’ve recently been trying to get the Plex media centre running on an ARM machine. I had previously experimented with Kodi as a media server, and while I could get it running and serving from a variety of devices, including a Raspberry Pi 2 as the playback machine, making it all work for my family seemed too Heath Robinson-esque, with computers stuck to the walls of my daughter’s bedroom and long lectures on complex controls.

It may be helpful at this point to understand some of the main differences between the two common software alternatives for home-theatre PCs: Kodi and Plex. While there are others, these are the usual choices. Although they had a similar beginning and may share some code, they operate in different ways: Kodi is a client, while Plex runs on a server. With both, the actual recordings can be held anywhere, and neither will record TV without extra plugins.

A comparison table may help:

                                   Kodi                            Plex
Database of movie information      On the client                   On the server
Interface                          An app on your client device    Any browser
Transcoding for playback           Handled by the client           Needs a beefy server
Client hardware choices            Lots                            Lots and lots
Add-ons                            Lots                            Limited
Customisation                      Very flexible, and necessary!   Limited
Help                               Community                       Professional
Cost                               Free                            Free and subscription models
Remote streaming                   Difficult                       Easy

So the choice comes down to the open-source, configurable Kodi or the simple to install and standardised Plex. More than their development models I appreciate the difference in their approach: one works out of the box, the other takes some configuring but is very flexible.

Having got Plex running under Exagear on my Odroid C2 quad-core server, I then ran into the problem of the CPU power required for ‘transcoding’. Transcoding is essentially converting the recorded film into a format that can be displayed on the client device, whether a mobile phone, web browser, or TV. The formats in which films are recorded are a whole subject in themselves, and changing one format into another is a demanding process. My little single-board computer could not cope.

Even the old Intel Atom system I used to run could not cope – apparently I need a PassMark score of over 2,000, and although my quad-core server could perhaps do the work, the Plex software sometimes decided that it couldn’t. So I’ve started looking for a low-powered, small and silent system with enough processing power to happily transcode all my movies. Apparently some NAS (network-attached storage) boxes can do it; however, they come at a price point I don’t want to reach for entertainment. I’ve looked at AliExpress, and there are some very good i5 mini-PCs available, so I may go that route.

House sensors using LoRa

I’m interested in converting my house sensor network into one running on LoRa.

The background to this is the sad demise of the Nottingham company WirelessThings from whom I had purchased a number of house sensors. These were small devices transmitting over 868MHz with a variety of ‘personalities’ including temperature, flood, magnetic, on/off, light and so on. I had them scattered around the home, including a water detector on my back fence where the seasonal flooding from the local river first enters my property.

The head end was a simple USB stick plugged into one of my SBCs running NodeRED. This collected incoming radio signals using a text protocol called ‘LLAP’ and converted them to MQTT, sending them on to a variety of endpoints, such as alerting my mobile phone when floods were coming in. As the devices went into a configurable sleep mode, the button batteries lasted years without replacement. I even had a separate NodeRED flow that monitored battery levels and alerted me when batteries needed replacing. They were useful in a number of ways:

  • knowledge of battery state
  • ability to ‘talk back’ to a sensor
  • transmission over relatively large distances
  • simple, configurable and multiple sensors using a small form factor enclosure
  • different sensors encoded values into a similar-length text string (12 bytes)
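From memory, an LLAP message is a fixed 12 characters: a lower-case ‘a’, a two-character device ID, then the payload padded out with ‘-’ characters. A decoder sketch (based on my recollection of the protocol, so verify against the LLAP documentation):

```python
# Decode a 12-character LLAP message such as 'aABTMPA23.5-':
#   'a' + device ID ('AB') + payload ('TMPA23.5') padded with '-'.
def parse_llap(msg):
    if len(msg) != 12 or not msg.startswith('a'):
        return None
    return msg[1:3], msg[3:].rstrip('-')
```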


All that has finished with the closing of that company, so I have been toying with using a LoRa gateway from someone like Pycom, or purchasing a full LoRa gateway and connecting this to The Things Network and thence back to my servers. Cost is playing a significant part as I can’t really afford three- or four-figure sums for my electronics hobby. The Pycom LoPy running MicroPython looks like a good alternative (even if it cannot act as a true LoRa gateway due to lack of multiple channels) so I may wait for that to stabilise and then work from there.

Brexit causes towel prices to rise

I think we are going to hear an oft-repeated phrase in the next five years. That phrase will be “because we left the EU…”. I’ve just encountered it at a soft-furnishings store where the price increase on Egyptian towels was explained by the salesman who said “because we’ve left the EU the prices of cotton have risen, so we had to put up the price of towels”.

  1. But the UK hasn’t yet left the EU,
  2. suppliers in Europe cannot charge the UK different prices than other EU countries before the UK leaves the Common Market,
  3. … and Egypt, where the ‘Egyptian cotton’ towels come from, was never part of the EU anyway.

What’s going on here? It can’t be only the exchange rate as cotton and other commodities are purchased on a futures market so as to avoid vagaries in price.

I’ve a feeling that this will be just the start of this saga. Retailers by their very nature often have very thin margins and are at the mercy of the whole supply chain. Disintermediation in the form of eBay or Amazon, who sell direct to the consumer, has cut their revenues, and the Common Market meant that I could buy parts for my aquarium filters at one-third to a half of local UK prices – and that was while we were in the EU! With the upheaval in politics represented by the EU vote, a whole lot of things will now be blamed on it, rather than the more prosaic explanation that sellers make profit whenever they can.