Raspberry Pi 7 Segment display

Introduction

One of my #RaspberryPi Zeros is called PiClock, and has an 8-digit seven-segment LED display. The program it runs displays the time, and sends it to two other Pis, which display it on Unicorn HD HATs. Between midnight and 8 am, it also flashes the message “SLEEP” every five minutes. The software library that it uses can display numbers and most upper and lower case letters, but not all of them. I rather liked the idea of animating sequences of single segments on it, because, well, you know, blinkenlights. So I had a look at the software library, “7seg.py”, to see if I could get it to do that.

It turns out that the library uses a Python dictionary to look up the byte to send to the display for each of the characters it can display. Decoding the hexadecimal bytes took a few minutes, working from the code for the digits from 1 to 5.

The first bit is always a 0. The remaining seven are the seven segments, in the order abcdefg, which are laid out like this…

So, the codes for illuminating single segments are as follows…

Now to amend the library! I needed some typeable characters to put in the dictionary, ready to be used in strings in the Python code. For no obvious reason, I chose a selection of brackets and the tilde character, and amended the library file. The selection of brackets didn’t work!

After trying characters until they did work, I ended up with #][£<$~ as the symbols for the segments abcdefg.

I’m only showing the amended part of the file, where the pattern to send to the display is looked up. The arrangement of the brackets and tilde for the segments is as follows…
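To make that concrete, here is a sketch of the single-segment lookup entries. The hex values follow from the bit layout described above (a leading 0 bit, then segments a to g in order), and the dictionary name is just illustrative rather than what 7seg.py actually calls its table.

# Single-segment codes: leading bit 0, then bits for segments a b c d e f g.
# The dictionary name is illustrative; 7seg.py's own lookup table may differ.
single_segments = {
    '#': 0x40,  # segment a
    ']': 0x20,  # segment b
    '[': 0x10,  # segment c
    '£': 0x08,  # segment d
    '<': 0x04,  # segment e
    '$': 0x02,  # segment f
    '~': 0x01,  # segment g
}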

Now I’m ready to program PiClock to do silly animations, which will be fun, and a lot easier than using the WordPress editor. Note to self: See if you can find a WYSIWYG editor for WordPress.

Python and SQL with matplotlib.

# Quick hack to graph last 500 greenhouse temperatures from weather database.
import mariadb
import matplotlib.pyplot as plt
conn = mariadb.connect(user="pi",password="password",host="localhost",database="weather")
cur = conn.cursor()
tempIN     = []
tempOUT    = []
timestamps = []
# Get the most recent 500 records.
cur.execute("SELECT greenhouse_temperature, ambient_temperature, created "
            "FROM WEATHER_MEASUREMENT ORDER BY created DESC LIMIT 500")
for i in cur:
    tempIN.append(i[0])
    tempOUT.append(i[1])
    timestamps.append(i[2])  
conn.close()
plt.figure(figsize=(14, 6))
plt.title(label="Greenhouse and outside temperature up to "+str(timestamps[0]))
plt.xlabel("Date and time")
plt.ylabel("Temperature in Celsius")
plt.plot(timestamps, tempIN, label='Greenhouse temperature')
plt.plot(timestamps, tempOUT, label='Outside temperature')
# Dash-dot red reference line at 5 degrees Celsius, the greenhouse heater threshold.
plt.axhline(y=5.0, color='r', linestyle='-.')
plt.legend()
plt.savefig("/var/www/html/GHtemp.jpg")
plt.show()

Python on Raspberry Pi, a note about structure, or something.

I’ve been struggling with a problem with a Pi camera for a couple of days. Instead of being able to start up the camera, I just had error messages about MMAL running out of resources.

Now, I knew I’d seen it before, and sure enough, Stack Overflow had quite a lot of questions about it. But I’d seen them before. And then I remembered that I never found out why the problem went away before.

As an experiment, I tried something that I thought couldn’t possibly work, and suddenly everything worked. All it took was moving the camera instantiation from the top of the program to just below all the function declarations.
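In case anyone else hits the same MMAL error, this is a minimal sketch of the structural change that fixed it for me; the function names are made up purely for illustration.

from picamera import PiCamera

def take_picture(camera, path):
    # Hypothetical helper, only here to show where the declarations sit.
    camera.capture(path)

def another_function():
    pass

# This line used to be at the very top of the file. Moving it here,
# below all the function declarations, made the MMAL error go away.
camera = PiCamera()

take_picture(camera, "/var/tmp/test.jpg")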

At a guess, the camera startup can’t get the resources it needs, because the Python interpreter is chewing its way through all the function declarations, and using up something the camera software wanted.

It’s an age or so since I wrote a language interpreter (it was for a simple language, Pilot), but I know interpreters have reasons for liking programs in a particular order, so that’s my guess…

#MMALresources

A Python time-lapse program.

A free program…

This is the Python code I cobbled together to make a time-lapse movie of my rather exciting flowering cactus. I’m sure this has been done better by lots of people. It runs on a Raspberry Pi Zero, with not much memory, and no online storage, so it sends the pictures to another Pi Zero, called PiBigStore, which happens to have a 2 Terabyte USB drive. Help yourself to a copy, if you like. Change the server name, and password, obviously. If you know ways this can be improved, feel free to comment.

# Time lapse pictures
import os
import time
import ftplib
from picamera import PiCamera
import schedule

def send_to_PiBigStore():
    # Only take pictures between 07:00 and 21:59.
    hour = int(time.strftime("%H"))
    if hour < 7 or hour > 21:
        time.sleep(250)
        return

    file_name = "cactus" + time.strftime("%Y%m%d-%H%M%S") + ".jpg"
    camera.capture("/var/tmp/" + file_name)

    connected = True
    ftp = ftplib.FTP()
    try:
        ftp.connect("PiBigStore")
    except ftplib.all_errors:
        connected = False
        print("Couldn't connect to PiBigStore.")

    if connected:
        try:
            ftp.login("pi", "password goes here")
        except ftplib.all_errors:
            connected = False
            print("Failed to log in to PiBigStore server.")

    if connected:
        ftp.cwd("/media/pidrive/data/cactus/")
        with open("/var/tmp/" + file_name, "rb") as picture:
            ftp.storbinary("STOR " + file_name, picture)
        print("Sent to PiBigStore", file_name)
        ftp.quit()
    os.remove("/var/tmp/" + file_name)

# Main loop
schedule.every(5).minutes.do(send_to_PiBigStore)
camera = PiCamera()
camera.rotation = 90

while True:
    schedule.run_pending()
    time.sleep(10)
A foot-tall cactus on a windowsill, with a Raspberry Pi Zero with camera, mounted on a Lego tower.

Curse you, munmap_chunk()!

 I still haven’t spotted a working solution to the problem where weather station programs in Python on Raspberry Pi fail, with no traceback details, after a couple of days.

I think it must be some resource in either the operating system, or the Python interpreter, running out, with very poor error reporting. I will leave it to people more familiar with the OS and interpreter to find out what it is, and fix it, in the fairly certain knowledge that everyone who could fix it has better things to do.

I found out that a Python program can actually restart itself, and changed mine to restart once a day. If that doesn’t fix it, I’ll let you know…
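The restart itself is only a couple of lines; this is a sketch of the usual approach rather than my exact weather station code.

import os
import sys

def restart_program():
    # Replace the current process with a fresh copy of this script,
    # keeping the same command-line arguments.
    os.execv(sys.executable, [sys.executable] + sys.argv)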

#RaspberryPi #Python 

My Stack Overflow comment on this.

Greenhouse computer improvement

New sensor!

I was using a DHT22, hanging on wires outside the case, for temperature readings on my greenhouse computer. I wasn’t happy with it, as it isn’t really compatible with the connections on the Raspberry Pi, and it has a habit of giving occasional absurd readings for no obvious reason.

So, I got myself a Microdia TEMPer-2, from PiHut, which plugs into a USB port. It has a fancy button on it, which activates the sending of text messages or emails, which I shall never be using. It also has an external plug in sensor, which is waterproof, a handy thing in a greenhouse!

It comes with a software mini-disc, which may possibly be useful if you’re using it on a PC, whatever they are. (Kidding. I’m writing this on my PC.) There are several web sites that tell you how to program Python to read from it, and it didn’t take me long to install the appropriate library on the greenhouse computer, and run the test command, sudo temper-poll. That worked, but then I ran into one of those programming blockages that can send you crazy. None of the various pieces of example code would work, mostly due to my inability to get the necessary permissions set correctly. It didn’t matter, I realised, after a lot of head scratching. Instead, I just used Python’s subprocess library to run the command that worked…
import subprocess

rv = str(subprocess.check_output("sudo temper-poll", shell=True))
# Split the string, keep the fifth word (index 4), chop the last five characters, make float.
temperature = float(rv.split()[4][:-5])

I’m hoping I won’t need to write any more software for the greenhouse for a while. The Raspberry Pi now monitors the temperature, switching the fan heater on if the temperature is below 5°C, uses its fish-eye camera to take pictures at set times for a time-lapse series, and takes a picture if it spots movement. Eventual improvements under consideration are a soil moisture detection sensor, automated watering… Nothing’s ever really finished, is it?
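For what it's worth, the frost protection part is just a threshold check on the temperature read as above. The sketch below shows the idea; the GPIO pin number is invented for illustration, since the actual relay wiring isn't described here.

import RPi.GPIO as GPIO

HEATER_PIN = 17        # Hypothetical GPIO pin driving the fan heater relay
FROST_THRESHOLD = 5.0  # Degrees Celsius

GPIO.setmode(GPIO.BCM)
GPIO.setup(HEATER_PIN, GPIO.OUT)

def control_heater(temperature):
    # Switch the heater on below the frost threshold, off otherwise.
    GPIO.output(HEATER_PIN, GPIO.HIGH if temperature < FROST_THRESHOLD else GPIO.LOW)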



Much improved Pi Assistant message reader.

The most important improvement is that this program now works… 😎 

I just leave it running on the Pi Assistant, along with the Google-provided assistant_library_with_button_demo.py program that I like to use when I want to ask the Google Assistant something…

#!/usr/bin/python3
#
# Program to run on PiAssistant, to watch for newly arrived text files in
# /home/pi/Messages that it should speak, and delete them once it has.
import os
import subprocess
import time

path = "/home/pi/Messages/"

while True:
    with os.scandir(path) as entries:
        for entry in entries:
            with open(entry, "r") as f:
                content = f.read()
            command = ("python ~/AIY-projects-python/src/aiy/voice/tts.py \"" +
                       content + "\" --lang en-GB --volume 10 --pitch 60 --speed 90")
            os.system(command)
            os.remove(path + entry.name)
            # Now wait for tts.py to finish. This runs on PiAssistant itself,
            # so a local process check is all that's needed.
            command = "ps -ef | grep -v grep | grep 'voice.tts' | wc -l"
            while True:
                answer = subprocess.check_output(command, shell=True).decode().strip()
                if answer == "0":
                    break
                time.sleep(0.5)
    time.sleep(0.5)

And here is a program to run on a cluster of Raspberry Pi computers, as say_who.py, which will send a message to the Pi Assistant from each core of each processor. The assistant will read them out separately. It does still seem to lose some messages, and if I work out why, I will let you know! I have been searching the internet for simple example programs for Raspberry Pi clusters for ages, and have not found very many. Here’s my little gift…

#!/usr/bin/python3
from mpi4py import MPI
import os
import time
import subprocess as sp

# Attach to the cluster and find out who I am and what my rank is
comm = MPI.COMM_WORLD
my_rank = comm.Get_rank()
cluster_size = comm.Get_size()
my_host = os.popen('cat /etc/hostname').read().strip()

# Make something for the assistant to say
speech = "Host is {} ".format(my_host)
speech = speech + "Rank {} ".format(my_rank)
# Make a unique filename and path
message = ("/home/pi/" + my_host + str(my_rank) +
           time.strftime("%H%M%S") + ".txt")
# Put the speech in the file
fp = open(message, "x")
fp.write(speech)
fp.close()
# Put a copy of the file on PiAssistant
cmd = "scp " + message + " pi@PiAssistant:/home/pi/Messages/"
sp.call(cmd, shell=True)
os.remove(message)

You would load that program on each of the Pi machines in your cluster, and then set it running using the command

mpiexec -n 16 -hostfile myhostfile python3 say_who.py

I’m assuming you have four Pi 3 machines in the cluster, hence the -n 16 parameter. 
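If you haven't made a hostfile before, it might look something like the one below. The hostnames are made up, and the exact syntax depends on your MPI implementation: OpenMPI uses slots=4, while MPICH would use node1:4 instead.

# myhostfile - one line per Pi, four MPI slots each (hypothetical hostnames)
node1 slots=4
node2 slots=4
node3 slots=4
node4 slots=4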

#RaspberryPi #Cluster #Python

A new program for Google Voice AIY

My earlier post, about a function that could send things to the Google AIY Raspberry Pi Assistant, and supposedly prevent more than one message being spoken at a time, was, errmmm, wrong. That’s a technical term that we programmers use.

I wrote a test program for my Pi cluster, whereby each of the sixteen cores would announce its hostname and rank. It’s not all that often you get the opportunity to use the word cacophony, but…

Basically, several processes could all think the Assistant was not busy, and they all sent messages in the time it took for the first message to start being spoken.

I spent a while looking at how to program mutual exclusivity for a resource, and was impressed by how complex such an apparently simple thing can get.

I decided that what was needed was a simple program, running on the Assistant, that would watch a directory, notice when a file to be spoken arrived, and speak the text in that file. Python makes it easy to deal with more than one file in the directory. Here’s what I wrote…

#!/usr/bin/python3
#
# Program to run on PiAssistant, to watch for newly arrived text files in
# /home/pi/Messages that it should speak, and delete them once it has.
import os
import time

path = "/home/pi/Messages/"

while True:
    with os.scandir(path) as entries:
        for entry in entries:
            with open(entry, "r") as f:
                content = f.read()
            command = ("python ~/AIY-projects-python/src/aiy/voice/tts.py \"" +
                       content + "\" --lang en-GB --volume 10 --pitch 60 --speed 90")
            os.system(command)
            time.sleep(0.5)
            os.remove(path + entry.name)
    time.sleep(0.5)

When it spots one or more files in the Messages directory, it reads the text, and sends it out to be spoken. It can supposedly only do one file at a time, but… Still the cacophony!

#RaspberryPi #GoogleAIY #Python            

Using Google AIY voice.

Using Google AIY voice, part 94.

I wrote a useful function, so that any of my Raspberry Pi machines could send a string to the Google Assistant as a message to be spoken aloud, and it worked very well. Here it is…
# A function to send a string to PiAssistant for output as speech
import paramiko

ssh = paramiko.SSHClient()
ssh.load_host_keys(filename='/home/pi/.ssh/known_hosts')
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

server, username, password = ('PiAssistant', 'pi', 'your_password-goes_here')

def say_this_with_assistant(say):
    ssh.connect(server, username=username, password=password)
    command = ("python ~/AIY-projects-python/src/aiy/voice/tts.py \"" + say +
               "\" --lang en-GB --volume 10 --pitch 60")
    ssh.exec_command(command)
    ssh.close()
    return

print("Starting...\n")
say_this_with_assistant("Normal service will be resumed as soon as possible. ")
print("Finished.\n")

The snag occurs when two, or more, machines want to say something at the same time. The PiAssistant calmly multitasks, and says the strings at the same time, which sounds interesting, but tends to be incomprehensible. I hoped that voice/tts.py would have some way of letting other programs know whether it was busy, but it doesn’t. However, the low-level Linux ps command can tell when voice/tts.py is running, so I added a check for that as a Paramiko command, like this…

# A function to send a string to PiAssistant for output as speech when it's free
import paramiko
import time

ssh = paramiko.SSHClient()
ssh.load_host_keys(filename='/home/pi/.ssh/known_hosts')
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

def say_when_free(say):
    server, username, password = ('PiAssistant', 'pi', 'your_password-goes_here')
    ssh.connect(server, username=username, password=password)
    # Keep checking until no copy of voice/tts.py is running on PiAssistant.
    command = "ps -ef | grep -v grep | grep 'voice.tts' | wc -l"
    while True:
        ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        answer = ssh_stdout.read().decode().strip()
        if answer == "0":
            break
        time.sleep(0.5)

    command = ("python ~/AIY-projects-python/src/aiy/voice/tts.py \"" + say +
               "\" --lang en-GB --volume 10 --pitch 60")
    ssh.exec_command(command)
    ssh.close()
    return

print("Starting...\n")
say_when_free("I waited until I was allowed to say this. ")
print("Finished.\n")

I’m not going to claim any of this is properly written Python, the best way to do what is required, or even particularly original, but it does currently seem to be the only description of how to do this on the internet.

Google Assistant

Google Assistant updates. 

Once I had my Google Assistant running Raspbian Buster, and verified that it could not only answer spoken questions, but also speak text sent to it by other computers, I made a bad mistake.


As a rule, it’s a good idea to keep a Raspberry Pi’s operating system up to date. Not on this system, it turns out!

I found that doing so stops the microphone working. Other people have also found this, according to the internet, dating back to 2018.

It was simple enough to fix, once I knew that the update was the cause of the trouble. I just flashed the micro SD card again, and copied my configuration files back onto it. Just like that, it was all working again. The files you need to copy back on are assistant.json, client_secrets.json, and credentials.json, if you were wondering.

In order not to let the same unpleasantness happen again, I disabled the update programs apt and apt-get, using the commands

sudo chmod a-x /usr/bin/apt
sudo chmod a-x /usr/bin/apt-get