First impressions of the new Zumi robot car

First of all, it’s ADORABLE! It’s so small it fits in the palm of your hand, yet it has a Raspberry Pi Zero W, an Arduino, a screen, a camera, and loads of sensors and LEDs all inside.

Zumi started as a Kickstarter project and just shipped to its first backers (I was one) last week. You can now buy it directly from its creators, Robolink, for $139. Details are here.

My first impressions were very positive, despite the usual first-batch problems with Kickstarter projects (a few missing bits in the kit and some instructions that aren’t quite accurate). It was easy to put together and once I fixed a boneheaded mistake (if you plug in the screen connector backwards, the car won’t boot at all — yikes), it booted up fine and coached me through the process of connecting via wifi and updating its firmware.

So what can it do? The initial Kickstarter pitch focused a lot on computer vision and AI, but it’s clear that many of those videos were done on beefier computers than the Raspberry Pi Zero inside Zumi. The current package is more limited, with no real machine learning at all.

Nevertheless, the initial lessons are a lot of fun and show off the solid Python functions Robolink has shipped with Zumi. Examples start with how to use a Jupyter notebook, then move on to the sensors (IR distance/obstacle, gyro and accelerometer), the LEDs and buzzer, and of course motion. Although Zumi’s motors don’t have encoders, it uses the gyro to drive straight and turn by set amounts, which works well.
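To give a flavor of the API, here’s roughly what a lesson snippet looks like. (These function names are my recollection of Robolink’s Zumi Python library and may not match the current release exactly, so treat them as approximate and check the lessons for the real calls.)

from zumi.zumi import Zumi    # Robolink's Zumi library (names approximate; see note above)

zumi = Zumi()
zumi.forward(speed=40, duration=1)   # gyro-assisted straight driving
zumi.turn_left(90)                   # turn by a set number of degrees using the gyro
print(zumi.get_all_IR_data())        # read the IR distance/obstacle sensors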

Once you get to computer vision, the limits of the Raspberry Pi Zero, hobbled by the overhead of the Jupyter server and wifi streaming, show up. The lessons on object tracking, face and smile detection, and color detection all suffer from terrible lag, and several cannot finish at all.

Although Robolink promises more is coming with updates, it’s hard to see how they can squeeze much more out of the RPi Zero while using Jupyter. Jupyter is a great way to combine documentation, code and graphical output on one screen, but it’s really designed to run on faster computers and slows to a crawl on Zumi when the camera is used. Robolink intends to release a different IDE in the future, perhaps one more like their Blockly interface, and that may run faster.

Once Robolink fixes the performance issues (perhaps by swapping Jupyter for a faster IDE), I think computer vision, including lane following and object detection, should be doable. Basic machine learning (with models trained in the cloud and only run locally) should also be possible. And who knows — maybe the Raspberry Pi Foundation will release a faster Pi Zero someday?

But for now, Zumi is a really nicely designed, super-cute robot car at a reasonable price, with a lot of room to grow.

Experiments with new all-in-one motor/motor controller/encoder

DF Robot has an interesting new robotics motor ($19), which has both the motor controller and the encoder integrated into the body. That means that you can drive it straight from an Arduino, with no separate motor controller required. That’s pretty cool and leads the way to very simple rover setups.

The motor is 12v (a good match for a 3-cell, 11.1v lipo battery), geared to a maximum of 159 rpm. It’s smaller than the usual robot motors, but has enough power for a small rover.

Because it’s so well integrated with controller and encoder, it’s a good way to demonstrate proper PID motor control. So here’s a demo! The code is here:

Connect it to an Arduino like this (your wire colors will probably differ; just get the red and black power wires right and connect the others in the order shown). Important: you must also connect the Arduino ground to the 12v power supply ground, as shown:

Full details are in the DF Robot wiki

Now try it with a PID controller, which will maintain a desired RPM within a reasonable range (80-140 rpm). Here’s some Arduino code that will do that.

Try playing around with the Kp (proportional), Ki (integral) and Kd (derivative) terms in this code and see what happens when you enter a different speed (80, 100, 120, etc). With the stock settings below, the Ki term will slowly build up so the RPMs approach the desired speed. A higher Kp term will speed up that convergence, but too high a value will overshoot. Here’s a short video that gives the basics, but the key point is that it uses feedback.

You want the speed of the motor to be a certain value, but just commanding it to run at a certain power doesn’t mean it will reach exactly the desired speed (because of variation in battery voltage, etc). So rather than just sending a command and hoping for the best, you measure the actual speed the motor is turning with the encoder, then keep changing the motor control input until the measured speed is close to the desired one — that’s what a PID controller does. The various settings (P, I, D) are just there to tune it so it does that well.
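If it helps to see the idea without any hardware, here’s a minimal Python sketch of that feedback loop (illustrative only; the Arduino sketch below uses the PID library to do the same thing):

def pid_step(setpoint, measured, state, kp=0.5, ki=3.0, kd=0.0, dt=0.1):
    error = setpoint - measured                       # how far we are from the desired speed
    state["integral"] += error * dt                   # I term: error accumulated over time
    derivative = (error - state["prev_error"]) / dt   # D term: how fast the error is changing
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative   # motor command

state = {"integral": 0.0, "prev_error": 0.0}
print(pid_step(100, 80, state))   # want 100 rpm, measuring 80 rpm, so the output is a positive correction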

// Make sure you've added the official Arduino PID library to your libraries
 // https://playground.arduino.cc/Code/PIDLibrary/
#include <PID_v1.h>
 int i = 0;
 int rpm;
 unsigned long time = 0;
 bool direction = HIGH;
 int speed;
 double Setpoint, Input, Output;  // Setpoint is going to be the desired speed. Input will be encoder reading. Output will be motor command
 double Kp = 0.5; 
 double Ki = 3;
 double Kd = 0;
 //Specify the links and initial tuning parameters
 PID myPID(&Input, &Output, &Setpoint,Kp,Ki,Kd, REVERSE);
 void setup() {
   Serial.begin(115200);
   pinMode(10, OUTPUT); //PWM PIN 10  with PWM wire
   pinMode(11, OUTPUT);//direction control PIN 11 with direction wire
   digitalWrite(10, direction);
   Setpoint = 100;
   Input = 100;
   myPID.SetMode(AUTOMATIC);
 } 
 void loop() {
   if (Serial.available())  {
     speed = Serial.parseInt();
     speed = 255 - speed;
     delay(200); 
   }
  for (int j = 0; j < 8; j++) {
    i += pulseIn(9, HIGH, 500000);   // SIGNAL OUTPUT PIN 9 with white wire; cycle = 2*i, 1s = 1000000us, signal cycle pulse number: 272
  }
  i = i >> 3;
  rpm = 111111 / i;                  // speed in r/min  (60*1000000/(45*6*2*i))
   i = 0;
 //  Serial.print(rpm);                      
 //  Serial.println("  r/min");
  Setpoint = 255 - speed;
   Input = rpm;
   myPID.Compute();                       // calculate the right motor control to make rpm equal desired speed
   if (Output > 220) {Output = 220;}
   if (Output < 20) {Output = 20;}
   Serial.print("Setpoint, Input, Output: ");
   Serial.print(Setpoint);
   Serial.print("  ");
   Serial.print(Input);
   Serial.print("  ");
   Serial.println(Output);
   analogWrite(11, Output);
 }

If you enter a desired RPM between 80 and 130 in the Arduino Serial Monitor, it will try to hit that number. Output should look something like this (“Setpoint” is PID-speak for the desired speed, “Input” is the reported speed from the motor encoder, and “Output” is the motor control signal [0-255] that the code sends to the motor):

Official Oakland Race Rules (2019 edition)

Now that the DIY Robocars quarterly races in Oakland have gotten big and the cars fast, it’s time to evolve the rules to reflect what we’ve learned over the past two years.

As of the Sept 21 2019 race, these are the new rules:

There are two tracks, one for beginners and a main one for returning racers:

  • There are 12″ orange cones on the inside of every curve, as shown above.
  • There will be one “obstacle cone” placed at a random location (changed with every race) inside the track.
  • The Beginners Track has the same shape, but is about 25% smaller

If this is your first race or your car’s first race, you must start on the Beginners Track. There will be one judge on that track to schedule official runs. If you can successfully complete a lap in under 40 seconds, you may advance to the Main Track and place your team name on the lineup board (example from a recent race shown below).

Main Track Rules:

Cars:

  • Cars must start with a single binary interaction. This could be a button on the car, a button on a controller, a key on a keyboard, or equivalent. No other intervention can happen until after the race is over; otherwise the car gets a “did not finish” (DNF). An emergency stop button is recommended but not required. A dead man’s switch is also acceptable, where a button is pushed and held for the duration of the race.
  • There are no rules governing where the computing needs to take place.  Cars may have onboard computing, trackside computing or leverage remote or cloud resources.  
  • GPS and other similar positioning systems are not allowed except for specified outdoor races. 
  • We hope this will be an open source league, with all designs posted on GitHub and readily copyable after the conclusion of every race. That said, if you are prevented from open-sourcing your designs or prefer not to, there is no obligation to do so.

There are two categories of racers: Stock and Unlimited

1) Stock:

  • This includes all the standard platforms (DonkeyCar, JetRacer, DeepRacer) as well as any custom vehicles that satisfy the criteria below
  • Cars are 1/16th scale or smaller: No more than a 7.5″ (190mm) wheelbase, axle to axle.
  • Cost no more than $400 all told (car, computer, sensors, etc)
  • Batteries must be firmly attached with velcro or other straps so they can’t come loose during the race

2) Unlimited:

  • This is for cars larger and/or more expensive than Stock
  • Cars may be up to two feet long and may weigh up to 10lbs
  • No limit on cost (although we do encourage DIY economics — if you’ve spent $10,000 on a Lidar sensor, this may not be the right event for you. Not only is it out of reach for others to follow, but in the obligatory Demolition Derby at the end it may very well get damaged)
  • Batteries must be firmly attached with velcro or other straps so they can’t come loose during the race

Races:

  • There are three heats, followed by a “ladder” race-off of the top six cars, paired by closest times, ending with a final race between the top two cars for the winning trophy (not actually a trophy!).
  • All races will be “wheel-to-wheel” with two cars on the track at the same time
  • Every car will have at least three opportunities to race. Only those in the top six will move on to the ladder.
  • The first heat pairings are random, within the class they have entered. After that, the second and third heat pairings are based on matching closest times within their class in the first heat.
  • Racers may choose to agree on track starting position (inside or outside lane). If they do not agree, the judge will flip a coin to decide.
  • The final race is an all-cars Demolition Derby. If you brought a car, you will race, even if it didn’t qualify for the Main Track or doesn’t work autonomously at all. There will be crashes. It will be fun. It’s the best 30 seconds of mayhem of the day.

Scoring:

  • Each heat is three laps. Both the first lap and the three-lap times will be recorded for each car, but only the first-lap time will determine ranking for the next heat (or ladder position)
  • Passing any “curve cone” on the inside is immediate disqualification for that heat
  • There is no penalty for going outside the white lines, as long as you don’t violate the cone rule above
  • The random “obstacle cone” may be passed on either side. Hitting the obstacle cone imposes a two-second penalty
  • Touching any other cone is a one-second penalty

Adding challenges to DIY Robocars tracks

Now that our computer and software platforms have improved to the point that we’re reaching the maximum physical speed of cars on the regular “kidney-shaped” track without resorting to crazy tires and motors, it’s time to slow things down by adding more challenge on the CV/AI side.

There are three standard ways to do this:

1) Always race with at least one other car (“wheel-to-wheel” racing). These “moving obstacles” introduce both randomness and the need for smart race tactics to win. Those include:

  • Spotting other cars
  • Deciding to pass on the inside or outside
  • Picking the right time to pass (straightaway, before a curve, after a curve)
  • Using AprilTags or other standard computer-readable markers on the back of each car to make them easier to identify

2) Place a cone or other obstacle in the track at some random location before each race. This introduces the following challenges:

  • It breaks naive “cone trackers” that assume that cones are stationary and always at the outside of the track
  • It requires cars to differentiate between “outside” cones (which mark the outside of the curves and must always be passed on the inside, without being touched) and “inside” cones (which can be passed on either side)
  • It requires cars to do more than just track the road lines

3) Static obstacles, such as gates, ramps or simulated pedestrians (like the example above, from the Sparkfun AVC 2018). These can introduce a number of unique challenges and fun scoring opportunities:

  • Having to differentiate by kind of obstacle, such as “pass to the right of blue cones but to the left of red cones”, or “some blue ramps add points if jumped, but others have spikes and will stop your car”
  • Offering extra points (or subtracted seconds) if the car successfully passes through a hoop or between two markers in a tight bottleneck
  • More advanced path-finding and decision-making, such as “stop for two seconds if a stop sign is detected” or “stop for detected pedestrians but just steer away from other obstacles”

Starting with the September 21 race in Oakland we’ll do at least #1 and #2; #3 may come later.

How to display your Linux computer’s IP address on a LCD (and a few ways not to)

In a “headless” configuration (no screen or keyboard), as is typical for the Raspberry Pi and other embedded computers in small robocars, one of the biggest hassles is figuring out what IP address the computer has been assigned over wifi on the network you’re connecting to. There are some tools, such as Adafruit’s PiFinder, that will search a network to find connected RPis, but in my experience they usually fail on large corporate networks.

A better way is for the computer to display its own IP address once it boots and connects. But on small cars you don’t have room for an HDMI monitor, so you’ve got to connect a small LCD or OLED screen, such as these I2C or SPI displays from Adafruit, or just a serial display like these. That’s easy enough to hook up, but how do you get the computer to automatically display its IP address on one of these screens when it boots up?

Google around and you will find many suggestions for autorunning programs at boot, especially for the Raspberry Pi, which is a constant source of headless headaches. The problem with most of these is that they don’t work anymore. The main reason is that Debian (and, by extension, Raspbian, the Linux variant that the RPi runs) has changed some key functions in recent releases, breaking the old methods. So, for example, putting the startup script in rc.local doesn’t work in current distros. Nor does putting it in /etc/network/interfaces. All of these are ignored in the latest Debian/Raspbian/Ubuntu releases.

Even worse, the hard part about this particular startup problem is that you can’t just run the program at boot; you have to wait until the computer connects to a network and *then* display the IP address.

The solution is a method that uses the Linux “cron” function, which still works in current distros. Here’s what to do, using a serial LCD screen connected via USB on ttyUSB0:

  1. Edit your startup cron file by entering “sudo crontab -e” from the command line.
  2. That will ask you to choose an editor; pick nano. In nano, type “@reboot /home/[your username]/startup.sh”, then press control-o and control-x to save and exit. (That line means “at reboot, run the startup script startup.sh in the user’s home directory”.)
  3. Now go to your home directory and type “sudo nano startup.sh”, which will create that file
  4. If you’re using a serial LCD as shown above, type or paste this into that startup script, then press control-o and control-x again to save and exit:
#!/bin/bash
sleep 10
ip addr | grep 'state UP' -A2 | tail -n1 | awk -F'[/ ]+' '{print $3}' > /dev/ttyUSB0
  5. That last line waits 10 seconds for the network connection, then uses the “ip addr” command to get the IP address. It finds the network connection that is up (“state UP”), extracts its IP address, and pipes it to the serial port at ttyUSB0.
  6. Now make sure startup.sh is executable by typing this on the command line:
sudo chmod +x startup.sh

Now it should start at next boot, wait 10 seconds, and then send the assigned IP address to a serial LCD screen.

There are better ways to do this, including a script that keeps looping until “state UP” returns true, thus avoiding the fragile fixed delay, which will fail if the network connection hasn’t come up within 10 seconds. But this works for me.
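For reference, here’s a minimal Python sketch of that loop-until-connected approach (it assumes the same serial LCD on /dev/ttyUSB0 and the standard “ip” tool; adapt as needed):

#!/usr/bin/env python3
# Poll "ip addr" until an interface reports "state UP", then send its IP address to the serial LCD.
import subprocess, time

def get_ip():
    lines = subprocess.run(["ip", "addr"], capture_output=True, text=True).stdout.splitlines()
    for i, line in enumerate(lines):
        if "state UP" in line:
            for follow in lines[i + 1:i + 4]:     # the "inet" line follows a couple of lines later
                if "inet " in follow:
                    return follow.split()[1].split("/")[0]
    return None

ip = None
while ip is None:          # keep looping until the network is actually up
    ip = get_ip()
    time.sleep(1)

with open("/dev/ttyUSB0", "w") as lcd:
    lcd.write(ip + "\n")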

BTW, if you’re using an I2C or SPI screen, you’ll be better off calling a Python program that has a library to control it, rather than the simple serial LCD I used. But that’s more complicated, so I’ll leave it to another day.

Updated Minimal Viable Racer using latest OpenMV linear regression code

OpenMV continues to be the best and easiest way to get started with DIY Robocars and with the forthcoming H7 version (twice the speed and memory) it’s increasingly capable, too.

Our entry-level code for the “Minimum Viable Racer” uses a basic computer vision technique known as “blob tracking”: it treats track lines of a given color as rectangles with a center point, which tells the car how to steer to stay on the line. This is easy to code but is a pretty blunt instrument, since the rectangles cover a large area of track, and if that area includes a curve, the center point is only a rough approximation.
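For a sense of what that looks like in practice, here’s a stripped-down sketch of the blob-tracking idea in OpenMV’s MicroPython (this is not the actual Minimum Viable Racer code, and the threshold values are placeholders):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

THRESHOLDS = [(94, 100, -27, 1, 20, 127)]   # placeholder LAB thresholds for the line color

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs(THRESHOLDS, pixels_threshold=100, merge=True)
    if blobs:
        biggest = max(blobs, key=lambda b: b.pixels())   # track the largest matching blob
        img.draw_rectangle(biggest.rect())
        error = biggest.cx() - img.width() // 2          # steering error: blob center vs. image center
        print(error)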

A more sophisticated computer vision approach is to do a linear regression on all the detected points of the track line, which provides a more reliable “racing line” and keeps the car closer to the actual line on the road (shown below). This is the technique used by the OpenMV-based Donkeycar, which has reliably placed in the top five at DIY Robocar races.
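On OpenMV, the core of the linear regression approach is a single call to get_regression(); here’s a stripped-down sketch (placeholder thresholds again; the steering and throttle logic is in the full code linked below):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

THRESHOLDS = [(94, 100, -27, 1, 20, 127)]   # placeholder LAB thresholds for the line color

while True:
    img = sensor.snapshot()
    line = img.get_regression(THRESHOLDS, robust=True)   # best-fit line through all matching pixels
    if line and line.magnitude() > 4:                    # magnitude() is a rough confidence measure
        img.draw_line(line.line(), color=127)
        print(line.theta(), line.rho())                  # angle and offset of the detected racing line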

I’ve now created a version of that code that runs on the Minimum Viable Racer, too, using the stock OpenMV Motor Shield. No changes to the hardware are needed.

Get the code here

Notes: 

It defaults to following a yellow line. To tell it to follow a different color, change the color thresholds to values that work for your line, using the IDE’s Tools/Machine Vision/Threshold Editor:

  1. Ensure “BINARY_VIEW = False”, so you can see the actual colors of the track and not just the black-and-white results. You can switch this to “True” when you’re done if you want to see how well it’s working, which will generate an image like the above.
  2. Change “COLOR_THRESHOLDS = [( 94, 100, -27, 1, 20, 127)]” to the correct thresholds for the colors of the line you want to track.

These are some other things you can tweak:

cruise_speed = 50
steering_direction = 1 # use this to reverse the steering if your car goes in the wrong direction
steering_gain = 1.0 # calibration for your car's steering sensitivity
steering_center = 0 # set to your car's steering center point


First impressions of Slamtec SDP Mini Lidar/SLAM development rover

I’ve had a chance to try out the new Slamtec SDP Mini development platform, from the maker of the popular RP-series low-cost Lidar scanners, and it’s nothing short of amazing. For $499, you get an RP-Lidar A2 2D lidar (which normally costs $320 just by itself), a Slamware hardware SLAM processing module along with a carrier board with wifi (another $300+ package), and a very competent rover chassis with wheel encoders.

Putting aside the value of nearly $1,000 worth of gear for half that price, what’s amazing about the SDP Mini is that it works. SLAM is notoriously hard and usually requires a full PC’s worth of computing power and complex software like ROS (although you can roll your own for less than $200 if you’re willing to settle for lower resolution, as described in the post here).

What the Slamtec SDP Mini does is combine a capable 8m-range, 8,000-samples/second 2D Lidar unit with the hardware needed to interpret its data and turn it into an accurate map of the space it’s in. It builds the map as it drives around and localizes itself within that map, hence the name: Simultaneous Localization and Mapping (SLAM).

The magic all happens on the Slamware Core module, which is a single-board computer + IMU running Slamtec’s SLAM software. It fits on a carrier board, which adds wifi, motor controllers for the robot, and connectors for wheel encoders and other sensors.

Most important is the SDK, which supports Android, iOS, Windows and Linux. Much of the documentation is still in Chinese, but Chrome can auto-translate it quite well into English. Along with all the usual Lidar functions, the SDK has full support for maps, pathfinding, wall-avoidance and encoder reading. This is an absolute nightmare in ROS (five Linux terminal windows, if you can even get it running), but is super easy with Slamware.

Here’s a sample of navigation, using the Android app (which communicates with the SDP Mini via Wifi). (2020 Update: Slamtec has now removed the RoboHome app and replaced it with the similar RoboStudio, which is available as both Windows and Android apps. The Android app on their website won’t run on Android 11 in my testing, but they do have a more recent version that will. It’s here.) It automatically made a map of my room in a few seconds and then can navigate to any point in the room with just a click.

Even cooler, you can create “virtual walls” by just drawing a line in the app. Then, when you tell it to navigate, it will find a path that avoids the wall.

Slamtec also has a Roomba-style larger SDP platform (shown below) that does the same thing, but can carry larger payloads and also includes automatic return to charging base and bumpers/sensors to avoid obstacles and stairs. It’s only available via special order now, but will soon be available more broadly at a price to be announced. Overall, the Slamtec range is super impressive.  If you want SLAM, look to China!

Lidar SLAM without ROS for less than $200

Until recently, if you wanted to do SLAM (Simultaneous Localization and Mapping) with LIDAR without a huge amount of coding work, you really only had one choice: ROS on a beefy computer.

This had two problems: 1) the beefy computer (cost, size), and 2) ROS (complexity, overhead, crazy difficult UI).

Good news: you don’t need either of them anymore. 

Simon Levy has recently updated his very efficient BreezySLAM python code (paper describing it is here) to support the new generation of cheap and powerful LIDAR and single-board computers, including:

  • The Slamtec series of LIDARs (A1, A2, A3), using the RPLidar Python library. I recommend the A1, which is just $99 and has a range of 12m at 8,000 samples per second. Run the rpslam.py example to see it working
  • Single-board computers. Although it will work on a Raspberry Pi 3, I recommend the Odroid XU4, which is just $80 and easily twice as fast and otherwise works the same under Linux
  • Mini PCs. Atom-based x86 mini PCs cost just a little more than single-board computers and are easier to expand. BreezySLAM has been tested on this one ($116, including Windows, which you should ignore and run Linux instead!) and works fine.
  • Alternatively, get a full-fledged Intel NUC, which is more expensive but can run BreezySLAM at higher resolution with plenty of processing headroom left over for other things like machine learning and computer vision. If you get this one ($339), you’ll need to add memory and storage
  • It will of course also work on any PC running Linux

If you combine an Odroid and the RP-Lidar A1, you’ve got a powerful full Lidar SLAM solution for less than $200!

A few notes:

  • Follow the BreezySLAM instructions here. If you’re using an RP Lidar device (recommended!), don’t forget to install the RPLidar library first: “sudo pip3 install rplidar”
  • Depending on the speed of your computer, you may need to sample fewer data points per update, which you can do by modifying the “MIN_SAMPLES” line in the code. For example, on the Atom Mini PC above I can only get 195 samples before I get the “no map screen of death” shown here:

  • On the Intel NUC, I can get to 210 points per pass.
  • On the Odroid I can get 200 points
  • On the Atom Mini PC, I can get 195
  • On the Raspberry Pi 3, I can only get 75 at one update/sec, with lots of artifacts, so not really useful yet.

My recommendation: go with the RP Lidar A1 and the Odroid. That’s less than $200 and it works great.
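To make that concrete, here’s a minimal sketch of a BreezySLAM loop with an RP-Lidar A1, based on my reading of the rpslam.py example (class and function names are from memory, so check the BreezySLAM repo for the current API):

from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import RPLidarA1
from rplidar import RPLidar

MAP_SIZE_PIXELS = 500
MAP_SIZE_METERS = 10
MIN_SAMPLES = 200                      # lower this on slower computers, as discussed above

lidar = RPLidar("/dev/ttyUSB0")
slam = RMHC_SLAM(RPLidarA1(), MAP_SIZE_PIXELS, MAP_SIZE_METERS)
mapbytes = bytearray(MAP_SIZE_PIXELS * MAP_SIZE_PIXELS)

for scan in lidar.iter_scans():
    distances = [dist for (_, _, dist) in scan]              # distances in mm
    angles = [angle for (_, angle, _) in scan]               # angles in degrees
    if len(distances) > MIN_SAMPLES:
        slam.update(distances, scan_angles_degrees=angles)   # feed one 360-degree scan to SLAM
        x_mm, y_mm, theta_degrees = slam.getpos()            # current pose estimate
        slam.getmap(mapbytes)                                # occupancy map as a byte array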


Comparing three low-cost integrated computer vision boards for autonomous cars

 

(Above, from left: Pixy 2, Jevois, OpenMV)

Computer vision used to be hard. No more. Now you can run OpenCV on a Raspberry Pi, or better yet use a dedicated computer vision camera/computer combo that costs less than $60. It’s kind of amazing, and perfect for small and cheap autonomous cars.

I’ve tested three of these camera/CV combo products with a DIY Robocar as the use case.

Here’s a quick review of the three and how they compare. I’ve previously compared Jevois and OpenMV, but both have improved since then — Jevois on the software side, and OpenMV on both the software and the hardware side with the forthcoming H7.  But now Charmed Labs has come out with the Pixy 2, which adds line following along with much improved hardware and software, so I’m expanding the comparisons.

For the OpenMV car, I tested the “Minimum Racer” configuration. For the other two, I used the Adafruit rover chassis, an Arduino and a motor driver board. My code for the Pixy 2 with the Adafruit motor driver shield is here.

Winner: OpenMV M7/H7

Why: Best IDE, copious I/O and shields mean no extra parts to drive a car, easy to program. Still the one to beat, and with the forthcoming H7 (to be launched via Kickstarter in September) it will have the computing power to do amazing frame rates and neural networks.

Fun Feature: The way it handles AprilTags (like QR codes, but super fast for robots to recognize) is miraculous. It can recognize them from 20 feet away when integrated with blob tracking.

Second Place: Pixy 2

Why: This is a big improvement over the original Pixy, which could just do color blob tracking. The new version has a fun line-following app and good Arduino/RaspberryPi integration. But it’s not really programmable and you’re limited to the four apps already loaded. And you need to add an Arduino and motor driver shield to drive a car. It doesn’t really do anything that OpenMV doesn’t do better.

Fun Feature:  The three white LEDs make for great headlights!

Third Place: Jevois

Why: The Jevois hardware is amazing, but it has been hamstrung by overly complex software (no IDE)  (UPDATE 6/20: It now has one and it looks great) and no I/O. The I/O problem remains, but the software has been much improved and now includes some good TensorFlow deep learning demos and some basic Linux command line support. But like the Pixy 2, to drive a car it needs an Arduino (or equivalent) and motor driver board to be connected.

Fun Feature: It can recognize 1,000 objects out of the box thanks to TensorFlow.


Here’s how they compare in features

| Feature | OpenMV M7/H7 | Pixy 2 | Jevois |
|---|---|---|---|
| Cost | $65 (M7), $59 (H7, coming in Sept) | $59 | $59 |
| IDE | Full-featured, cross-platform (Mac, Win, Linux) | Just a viewer app | Full-featured, cross-platform (Mac, Win, Linux) |
| Language | MicroPython | Not really programmable | Python |
| Processing power (QVGA) | 60 FPS (M7), 120 FPS (H7) | 60 FPS | 60 FPS |
| Memory | 0.5MB RAM, 2MB Flash (M7); 1MB RAM, 2MB Flash (H7) | N/A | 256MB |
| Add-on boards available | Lots: servo, Wifi, motor controller, LCD | None | None |
| Interchangeable lenses | Yes | No | No |
| Interchangeable sensors | No (M7), Yes (H7) | No | Yes |
| Sample apps | ~30, covering a wide range including deep learning | 4 (color and line tracking); no deep learning | ~30, covering a wide range including deep learning |
| I/O | USB, SPI, CAN, I2C, 3x PWM, Serial, ADC, DAC | USB, SPI, 2x PWM | USB |
| SD card | Yes | No | Yes |
| Lights | 2x IR | 3x White | None |
| Cost to make a car | $100 (OpenMV + motor shield + car) | $115 (Pixy 2 + Arduino + Adafruit Motor Shield + car) | $115 (Jevois + Arduino + Adafruit Motor Shield + car) |

Using a cheaper motor driver with the Minimum Rover

If you want to use a cheaper motor driver board ($7) for the OpenMV-based “Minimum Rover”, here’s how to hook it up:

You’ll need some female-to-female hookup wires ($6).

First, plug four wires into the OpenMV’s GND, VIN, P7 and P8 pins as shown

Then connect those four wires to the motor controllers as shown in the picture below:

  • P7 goes to In1
  • P8 goes to In4
  • GND goes to GND
  • VIN goes to +5V

Then connect the motor wires to the terminals on each side of the motor controller: right motor wires to the right-side terminals, and left motor wires to the left-side terminals, as shown. The wires from the battery go to the GND and 12V terminals (cut off the connector, put the bare wire in the terminal, and screw it down).


The code requires no modification to use this motor controller. It should work exactly the same as the OpenMV motor controller shield.