YetiBorg v2 - Examples - Follow Me


Play chase with your YetiBorg v2

Robots can make really fun pets and YetiBorg v2 is no exception. The best thing about pets is being able to play games and teach tricks, so we thought we would teach YetiBorg v2 how to follow us like a dog :)

This example shows how to get your robot to follow you around using only the camera.

Parts

All we need for this script to work is:

  • A fully built YetiBorg v2 (the camera it uses for detection is part of the standard build)

How does it work?

The script works by taking images from the camera and checking for motion. It does this by comparing a pair of images to see how they differ. For example:

The left image is first, followed by the middle image. The right image is what we get when we take the difference between the two images. We see black where there are no changes.
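
If you want to try this step on its own, here is a minimal sketch of the same idea using OpenCV. The file names are just placeholders for two consecutive camera frames:

# Minimal frame-differencing sketch (file names are placeholders)
import cv2

# Two consecutive frames from the camera, saved as image files
first  = cv2.imread('frame1.png')
second = cv2.imread('frame2.png')

# Per-pixel absolute difference: areas with no change come out black
difference = cv2.absdiff(first, second)
cv2.imwrite('difference.png', difference)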

When we see enough change we get YetiBorg v2 to move forward. To decide which way to turn, we take the same difference image and slice it up into sections.

On the left we have the difference image. In the middle we have sliced up the image into our sections. On the right we have chosen the section with the most differences and decided to aim for that. The further it is from the center the faster we will turn left or right.
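
The slicing itself is only a few lines of numpy indexing. This sketch mirrors the maths used in the full listing below: cut the difference image into vertical slices, find the busiest slice, then map its position to a steering value (the frame file names are placeholders again):

# Slice the difference image into zones and pick the busiest one
import cv2

imageWidth    = 320                     # Same camera width as the full listing
autoZoneCount = 80                      # Number of vertical slices
steeringGain  = 4.0                     # Scales how hard we turn

difference = cv2.absdiff(cv2.imread('frame1.png'), cv2.imread('frame2.png'))

# Left-hand edge of each vertical slice
zones     = range(0, imageWidth, imageWidth // autoZoneCount)
zoneWidth = zones[1]
zoneCount = len(zones)

# Average change level for each slice
zoneDetections = [difference[:, zone : zone + zoneWidth, :].mean() for zone in zones]

# The slice with the most change is the one to aim for
largestZone = zoneDetections.index(max(zoneDetections))

# Map the slice position to -1 (far left) ... +1 (far right) and scale
steering = ((2.0 * largestZone) / float(zoneCount - 1)) - 1.0
steering *= steeringGain
print('Steer towards %.2f' % steering)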

This all sounds good, but what happens once we are already moving?

As you can see, we get lots of movement and YetiBorg v2 will get confused :(

We have a way to fix this though. If you look at the first image you can see movement in only some sections. When YetiBorg v2 is driving we see movement in all sections. What we do then is take the average amount of change across all sections and use that as our baseline for no movement. With that baseline he will not drive from the images above.
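
Continuing the sketch above, the fix is a single subtraction: the robot's own driving raises every slice by roughly the same amount, so we only treat the busiest slice as motion when it stands out from the average (the same threshold logic as the full listing):

# Use the average change level as the baseline for "no movement"
autoMinimumMovement = 20

largestDetection = max(zoneDetections)
averageDetection = sum(zoneDetections) / float(zoneCount)

# Driving raises every zone equally, so subtracting the average
# leaves only movement which stands out from the background
detection = largestDetection - averageDetection
if detection > autoMinimumMovement:
    print('Movement detected, start chasing')
else:
    print('Just background motion, stay put')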

But does this still work when both moving and following someone?

It does: the robot still sees a much larger difference where someone is moving than it sees from the movement of the background.

Tweaking the behaviour

Like any good dog, YetiBorg v2 will often become distracted by other things and forget whom to follow. We can improve the behaviour by tweaking some of the settings used for detecting movement in the "Auto drive settings" section.

  • autoZoneCount
    This is the number of slices the image is split into for detection. Either too high or too low can cause detection problems. We have found a value somewhere between 40 and 120 works best.
  • autoMinimumMovement
    Changing this value will adjust how much motion is needed before YetiBorg v2 starts chasing. If the robot does not follow you at all try turning this down. If the robot chases inanimate objects a lot (chairs, tables, et cetera) you may need to turn it up.
  • steeringGain
    This controls how fast YetiBorg v2 will turn to face movement. Smaller values may not react fast enough and larger values will often cause the robot to wag instead of follow :) Turn this up if you cannot get the robot to keep up with your movement.
  • flippedImage
    If the robot seems to be running away from movement he may have his camera flipped upside-down. Swap this setting between True and False to change which way up the images are to fix the problem.
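
All four values live together in the "Auto drive settings" block near the top of yeti2FollowMe.py (see the full listing below). For example, a slightly more sensitive setup with gentler steering might look like this; the changed values are purely illustrative:

# Auto drive settings
autoZoneCount = 80                      # Number of detection zones, higher is more accurate
autoMinimumMovement = 15                # Lowered so smaller movements start a chase
steeringGain = 3.0                      # Lowered to reduce wagging
flippedImage = True                     # True if the camera needs to be rotated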

There are also some other things which cannot be fixed by tweaking some numbers:

  • The camera has a narrow field of view, making it easy to hide from YetiBorg v2 by standing too far left or right.
  • Like a T-Rex, YetiBorg v2 only sees movement. If you stand perfectly still you will be invisible :)
  • Walking pace is best; very fast movements can also be missed.
  • May not play well with other pets. You have been warned :D
  • Works best in a well lit area. The camera cannot see much change with someone dressed in black in a dark room :(
  • Very tall objects may still attract attention, such as lamp posts...

Get the example

The example is part of the standard set of YetiBorg v2 examples installed during the getting started instructions:

bash <(curl https://www.piborg.org/installer/install-yetiborg-v2.txt)

Run once

Go to the YetiBorg v2 code directory:

cd ~/yetiborgv2

and run the script using:

./yeti2FollowMe.py

Run at startup

Open /etc/rc.local to make an addition using:

sudo nano /etc/rc.local

Then add this line just above the exit 0 line:

/home/pi/yetiborgv2/yeti2FollowMe.py &

Finally press CTRL+O, ENTER to save the file followed by CTRL+X to exit nano. Next time you power up the Raspberry Pi it should start the script for you :)

Full code listing - yeti2FollowMe.py

#!/usr/bin/env python
# coding: Latin-1

# Load library functions we want
import time
import os
import sys
import ZeroBorg
import io
import threading
import picamera
import picamera.array
import cv2
import numpy

# Re-direct our output to standard error, we need to ignore standard out to hide some nasty print statements from pygame
sys.stdout = sys.stderr
print 'Libraries loaded'

# Global values
global running
global ZB
global camera
global processor
global motionDetected
running = True
motionDetected = False

# Setup the ZeroBorg
ZB = ZeroBorg.ZeroBorg()
#ZB.i2cAddress = 0x44                  # Uncomment and change the value if you have changed the board address
ZB.Init()
if not ZB.foundChip:
    boards = ZeroBorg.ScanForZeroBorg()
    if len(boards) == 0:
        print 'No ZeroBorg found, check you are attached :)'
    else:
        print 'No ZeroBorg at address %02X, but we did find boards:' % (ZB.i2cAddress)
        for board in boards:
            print '    %02X (%d)' % (board, board)
        print 'If you need to change the I²C address change the setup line so it is correct, e.g.'
        print 'ZB.i2cAddress = 0x%02X' % (boards[0])
    sys.exit()
#ZB.SetEpoIgnore(True)                 # Uncomment to disable EPO latch, needed if you do not have a switch / jumper
# Ensure the communications failsafe has been enabled!
failsafe = False
for i in range(5):
    ZB.SetCommsFailsafe(True)
    failsafe = ZB.GetCommsFailsafe()
    if failsafe:
        break
if not failsafe:
    print 'Board %02X failed to report in failsafe mode!' % (ZB.i2cAddress)
    sys.exit()
ZB.ResetEpo()

# Power settings
voltageIn = 8.4                         # Total battery voltage to the ZeroBorg (change to 9V if using a non-rechargeable battery)
voltageOut = 6.0                        # Maximum motor voltage

# Camera settings
imageWidth  = 320                       # Camera image width
imageHeight = 240                       # Camera image height
frameRate = 10                          # Camera image capture frame rate

# Auto drive settings
autoZoneCount = 80                      # Number of detection zones, higher is more accurate
autoMinimumMovement = 20                # Minimum movement detection before driving
steeringGain = 4.0                      # Use to increase or decrease the amount of steering used
flippedImage = True                     # True if the camera needs to be rotated
showDebug = True                        # True to display detection values

# Setup the power limits
if voltageOut > voltageIn:
    maxPower = 1.0
else:
    maxPower = voltageOut / float(voltageIn)

# Calculate the nearest zoning which fits
zones = range(0, imageWidth, imageWidth / autoZoneCount)
zoneWidth = zones[1]
zoneCount = len(zones)

# Image stream processing thread
class StreamProcessor(threading.Thread):
    def __init__(self):
        super(StreamProcessor, self).__init__()
        self.stream = picamera.array.PiRGBArray(camera)
        self.event = threading.Event()
        self.lastImage = None
        self.terminated = False
        self.reportTick = 0
        self.start()
        self.begin = 0

    def run(self):
        # This method runs in a separate thread
        while not self.terminated:
            # Wait for an image to be written to the stream
            if self.event.wait(1):
                try:
                    # Read the image and do some processing on it
                    self.stream.seek(0)
                    self.ProcessImage(self.stream.array)
                finally:
                    # Reset the stream and event
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()

    # Image processing function
    def ProcessImage(self, image):
        # Flip the image if needed
        if flippedImage:
            image = cv2.flip(image, -1)
        # If this is the first image store and move on
        if self.lastImage is None:
            self.lastImage = image.copy()
            return
        # Work out the difference from the last image
        imageDiff = cv2.absdiff(self.lastImage, image)
        # Build up the zone change levels
        zoneDetections = []
        for zone in zones:
            # Grab the zone from the differences
            zoneDiff = imageDiff[:, zone : zone + zoneWidth, :]
            # Get an average for the zone
            zoneChange = zoneDiff.mean()
            zoneDetections.append(zoneChange)
        # Set drives or report motion status
        self.SetSpeedFromDetection(zoneDetections)
        # Save the previous image
        self.lastImage = image.copy()

    # Set the motor speed from the motion detection
    def SetSpeedFromDetection(self, zoneDetections):
        global ZB
        global motionDetected
        # Find the largest and average detections
        largestZone = 0
        largestDetection = 0
        averageDetection = 0
        for i in range(zoneCount):
            if zoneDetections[i] > largestDetection:
                largestZone = i
                largestDetection = zoneDetections[i]
            averageDetection += zoneDetections[i]
        averageDetection /= float(zoneCount)
        # Remove the baseline motion from the largest zone
        detection = largestDetection - averageDetection
        # Determine if the motion is strong enough to count as a detection
        if detection > autoMinimumMovement:
            # Motion detected
            motionDetected = True
            if showDebug:
                if self.reportTick < 2:
                    print 'MOVEMENT   %05.2f [%05.2f %05.2f]' % (detection, largestDetection, averageDetection)
                    print '           Zone %d of %d' % (largestZone + 1, zoneCount)
                    self.reportTick = frameRate
                else:
                    self.reportTick -= 1
            # Calculate speeds based on zone
            steering = ((2.0 * largestZone) / float(zoneCount - 1)) - 1.0
            steering *= steeringGain
            if steering < 0.0:
                # Steer to the left
                driveLeft = 1.0 + steering
                driveRight = 1.0
                if driveLeft <= 0.05:
                    driveLeft = 0.05
            else:
                # Steer to the right
                driveLeft = 1.0
                driveRight = 1.0 - steering
                if driveRight <= 0.05:
                    driveRight = 0.05
        else:
            # No motion detected
            motionDetected = False
            if showDebug:
                if self.reportTick < 2:
                    print '--------   %05.2f [%05.2f %05.2f]' % (detection, largestDetection, averageDetection)
                    self.reportTick = frameRate
                else:
                    self.reportTick -= 1
            # Stop moving
            driveLeft  = 0.0
            driveRight = 0.0
        # Set the motors
        ZB.SetMotor1(-driveRight * maxPower) # Rear right
        ZB.SetMotor2(-driveRight * maxPower) # Front right
        ZB.SetMotor3(-driveLeft  * maxPower) # Front left
        ZB.SetMotor4(-driveLeft  * maxPower) # Rear left

# Image capture thread
class ImageCapture(threading.Thread):
    def __init__(self):
        super(ImageCapture, self).__init__()
        self.start()

    def run(self):
        global camera
        global processor
        print 'Start the stream using the video port'
        camera.capture_sequence(self.TriggerStream(), format='bgr', use_video_port=True)
        print 'Terminating camera processing...'
        processor.terminated = True
        processor.join()
        print 'Processing terminated.'

    # Stream delegation loop
    def TriggerStream(self):
        global running
        while running:
            if processor.event.is_set():
                time.sleep(0.01)
            else:
                yield processor.stream
                processor.event.set()

# Startup sequence
print 'Setup camera'
camera = picamera.PiCamera()
camera.resolution = (imageWidth, imageHeight)
camera.framerate = frameRate
imageCentreX = imageWidth / 2.0
imageCentreY = imageHeight / 2.0

print 'Setup the stream processing thread'
processor = StreamProcessor()

print 'Wait ...'
time.sleep(2)
captureThread = ImageCapture()

try:
    print 'Press CTRL+C to quit'
    ZB.MotorsOff()
    # Loop indefinitely
    while running:
        # Change the LED to show if we have detected motion
        # We do this regularly to keep the communications failsafe test happy
        ZB.SetLed(motionDetected)
        # Wait for the interval period
        time.sleep(0.1)
    # Disable all drives
    ZB.MotorsOff()
except KeyboardInterrupt:
    # CTRL+C exit, disable all drives
    print '\nUser shutdown'
    ZB.MotorsOff()
except:
    # Unexpected error, shut down!
    e = sys.exc_info()[0]
    print
    print e
    print '\nUnexpected error, shutting down!'
    ZB.MotorsOff()
# Tell each thread to stop, and wait for them to end
running = False
captureThread.join()
processor.terminated = True
processor.join()
del camera
ZB.SetLed(False)
print 'Program terminated.'
Last update: Nov 05, 2017
