Manipulate virtual objects with Kinect and data glove

Here’s an interesting use of a data glove and an Arduino with the Kinect to manipulate virtual objects. Sebastian fabricated the data glove by attaching a few resistors to the fabric; touching the thumb to any of the other four fingers produces a different resistance value, which allows for simple but effective gesture recognition. Watch Sebastian’s demo video to learn the details:

Visit his site www.3rd-eye.at for more.
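
I have not seen Sebastian’s code, but the idea is easy to prototype: the Arduino reads the voltage across the glove’s resistor network and streams the raw analog value over serial, and a small script on the PC maps value ranges to gestures. Here is a rough Python sketch; the serial port and thresholds are made up and would need tuning for a real glove:

import serial  # pyserial; assumes the Arduino prints one raw analog reading per line

# Made-up analog ranges for each thumb-to-finger contact (tune these for a real glove)
GESTURES = [(100, 200, "index"), (300, 400, "middle"),
            (500, 600, "ring"), (700, 800, "pinky")]

def classify(reading):
    """Map a raw ADC value to a gesture name, or None if no finger is touching."""
    for low, high, name in GESTURES:
        if low <= reading <= high:
            return name
    return None

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().strip()
        if line.isdigit():
            print(classify(int(line)))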

Android, Cloud-based Apps and Robots

I recently switched jobs and now work at Motorola Mobility as a Test Engineer for cloud-based services. In my new role, I work intimately with Motorola phones/devices such as the Droid Bionic, the Android OS, custom software that our software team develops on top of Android, and a host of Web services on the cloud. Given this change of environment (I worked previously at a 3D mapping technology startup), I decided to expand my project focus to leverage the rich technology that I have access to at work. Expect more posts on these topics in the future.

Android no doubt is on a meteoric rise and now captures 50% of the US smartphone market (38% worldwide), although competition with the iPhone is fierce:

Worldwide smartphone market. Courtesy: CNN, IDC

At the same time, "apps on the cloud" is a sizzling field with the major players such as Google, Apple, Microsoft and Amazon jockeying for position. This got me asking: what is the synergy between robots, the Android OS and cloud services? As it turns out, this topic is fertile ground. Let’s look at a couple of examples.

Cloud Robotics

Google recently formed a new Cloud Robotics group headed by Ryan Hickman. His team is developing technology that lets a robot offload computationally heavy tasks such as object recognition and path planning to powerful cloud servers, freeing up its local processor to focus on immediate problems such as avoiding obstacles and getting out of messy situations. This is needed because robots typically run on embedded processors with limited processing power (and memory, disk storage, etc.). Furthermore, by deploying a remote processing service, expensive resources can be efficiently shared among the robots that need them. One can imagine more advanced scenarios as well: robots can individually upload locally acquired data to the cloud to build collective knowledge, and the cloud service can coordinate multiple robot agents to work as a group.
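
As a thought experiment, the offloading pattern can be as simple as the robot POSTing a camera frame to a recognition service and acting on the reply. The endpoint and JSON fields below are invented for illustration; they are not Google’s actual API:

import requests  # third-party HTTP client; pip install requests

# Hypothetical recognition endpoint, for illustration only (not a real Google/ROS service)
RECOGNIZE_URL = "http://example.com/api/recognize"

def identify_objects(jpeg_bytes):
    """Ship one camera frame to the cloud and return the labels it recognizes."""
    reply = requests.post(RECOGNIZE_URL,
                          files={"image": ("frame.jpg", jpeg_bytes)},
                          timeout=5.0)
    reply.raise_for_status()
    return reply.json().get("labels", [])

# The robot keeps obstacle avoidance and other reflexes local, and only calls
# identify_objects() when it needs heavyweight recognition.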

Here’s a video of their talk at Google I/O 2011 describing how cloud robotics works.

And Ryan and team’s Google I/O presentation slides:  Cloud Enabled Robots

As part of this initiative, Google teamed up with Willow Garage to develop rosjava, a Java implementation of ROS (Robot Operating System). The beauty of rosjava is that it brings a complete robot software stack to Android. Anyone with an Android device now has a powerful tool for creating robot applications!

CellBots

CellBot on a Lego Mindstorms robot

CellBots is a site started by a group of hobbyists to promote mobile phones as a low-cost robotic control platform.

To get started, download and install the app on your Android smartphone. Once installed, you can use your phone in two modes: (1) as a remote control that sends commands to the robot, or (2) as the robot’s actual "brain". The CellBots app supports several robot platforms including the iRobot, Lego Mindstorms and Vex Pro, while others have created custom robots using Arduino microcontrollers with R/C model tanks/trucks as the robot base.

Check out the CellBots site for more info.

Of course, all of this means more opportunity to learn about robots, Android and cloud apps. What fun times!

Calibrating the Lynxmotion AL5D Robot Arm

Robot arm reaching chessboard
With the Lynxmotion robot arm assembled as described in Part 2, the next step is to calibrate it. Calibration is needed for precise movement, so that when we tell the arm to go to a specific (x,y,z) position, the gripper lands on that spot within some small margin of error. My goal is to get within +/- 0.25 inches of tolerance in each of the X and Y directions, which should be good enough for many tasks, including playing a game of chess.

A Brief Survey of Existing Methods

I reviewed others’ work before devising my own arm calibration technique. After all, there’s little gain in reinventing the wheel, right? Here are the ones I looked at:

  • Lynxmotion offers free Arm Control and Calibration software. I have not used it, but from reading the Lynxmotion manual, the software lets you train the robot arm to perform complex actions: it takes a list of target positions as input, then tells the arm to faithfully trace through the trajectory, much like connecting the dots. (It can also perform other actions such as opening/closing the gripper, rotating the wrist, etc.) Before you can program the arm, though, you must go through a calibration procedure. This involves matching the arm’s position to a set of poses as shown:

Training poses in Lynxmotion's calibration software. Courtesy: Lynxmotion.com

After the matching step, the software learns the calibration parameters. Unfortunately, the software does not let you do advanced things like call an external program (e.g., a chess server) or wait for user input, so I decided not to use it.

  • Several people have posted Inverse Kinematics equations for the Lynxmotion arm. An IK formula computes the angle each joint needs to bend to so the gripper can reach a target position (a bare-bones two-link solver is sketched at the end of this survey). I used Mike Keesling’s formula in my code, but see Hun Hong and Laurent Gay for alternatives. What’s cool about the spreadsheets is that you can see a graphical simulation of the arm’s pose as you try different target positions. However, having the IK formula is not enough: we also need to know the exact pulse width to send to each servo to rotate it to the desired angle.
    Arm simulation in Excel

  • Thankfully the Tic-Tac-Toe Playing Robotic Arm project by Dr. Rainer Hessmer takes us further. Besides giving a great tutorial on Inverse Kinematics, Dr. Hessmer also explains his servo calibration method in detail and even provides source code. The basic approach is as follows:
    1. For the base rotate and wrist up/down servos, take several angle vs. pulseWidth observations and compute the slope-intercept parameters using linear regression
    2. For the gripper, take several opening distance vs. pulseWidth observations and compute the slope-intercept
    3. For the shoulder and elbow servos, move the gripper to several (x,y) locations on the surface of a paper grid. Use an Inverse Kinematics equation to solve for the shoulder and elbow joint angles. Take the derived angles and their associated servo pulse widths then compute the slope-intercepts.

Here is Dr. Hessmer’s calibration spreadsheet and demo of his Tic-Tac-Toe robot. To be clear, this is not my work, but I drew inspiration from it.
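
To make the IK idea concrete, here is a bare-bones two-link planar solver using the law of cosines. It covers only the shoulder and elbow in the vertical plane; the spreadsheets above also handle base rotation, the wrist and the gripper offset, and the default link lengths here are placeholders rather than measured values.

import math

# Generic two-link planar IK (shoulder + elbow only), using the law of cosines.
# This is NOT Mike Keesling's spreadsheet formula; link lengths are placeholders.
def two_link_ik(x, y, l1=5.75, l2=7.5):
    """Return (shoulder, elbow) angles in degrees that place the wrist at (x, y)."""
    cos_elbow = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                 # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return math.degrees(shoulder), math.degrees(elbow)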

Calibration, Take One

My first approach is to model each joint with a linear equation: 

Angle = slope*PulseWidth + ZeroPoint

where:

Angle : joint angle, in degrees
PulseWidth : servo pulse width, in microseconds

This is similar to Dr. Hessmer’s method, except that I measured all joint angles directly rather than deriving the shoulder and elbow angles using IK. To make measuring easier, I posed the arm in an inverted “L” stance: the shoulder link (humerus) points up and the elbow is bent at -90 degrees so that the forearm (ulna) points forward. Then I worked on each joint, sending different pulse widths to achieve the desired angles. After completing the measurements for one joint, I reverted the arm to the inverted “L” pose before repeating the process for the next. These are the numbers I got:

Servo angles vs. pulse width measurements

When plotted, the pattern stands out. Notice that except for the shoulder joint, the lines are pretty straight. This is good news because a linear model matches the data well.

Servo angles vs. pulse width chart

Computing the slope and zero point (aka “intercept”) parameters is straightforward using Python and NumPy:

import numpy as np

# Measured (angle, pulse width) pairs for this joint
angles = np.asarray(calib_data[joint]['angles'], dtype=float)
pulseWidths = np.asarray(calib_data[joint]['pulses'], dtype=float)
# Design matrix [pulseWidth, 1]; solve Angle = slope*PulseWidth + intercept
A1 = np.column_stack([pulseWidths, np.ones_like(pulseWidths)])
eqn, residuals, rank, s = np.linalg.lstsq(A1, angles)
slope, intercept = eqn

The first step just grabs all the (angle, pulse width) value pairs measured for a given servo from the calibration dictionary. For example, the values for the base rotation servo are:
"base": {
    "angles": [-90, 0, 90],
    "pulses": [600, 1500, 2400]
}
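
With the slope and intercept in hand, commanding a joint is just a matter of inverting the fitted line to get the pulse width for a desired angle. Here is a minimal sketch; the helper name is mine, not part of the calibration script:

def angle_to_pulse(angle_deg, slope, intercept):
    # Angle = slope*PulseWidth + intercept  =>  PulseWidth = (Angle - intercept)/slope
    return int(round((angle_deg - intercept) / slope))

# With the base servo numbers above, the slope is 0.1 deg/us and the intercept -150,
# so a request for 0 degrees maps back to a 1500 us pulse:
print(angle_to_pulse(0.0, 0.1, -150.0))   # prints 1500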

Testing the Model

Armed with the new calibration parameters, it’s time for action. To test the calibration accuracy, I made the robot arm touch the center of each tile of a wooden chessboard while I measured the position offsets. The chessboard I used is a non-folding type with 1-3/4″ tiles and a wood thickness of 5/8″. To simplify measuring, I made the hand touch the board at a right angle so that the wrist is directly above the gripper tip; this virtually eliminates any error contribution from the hand. Because of this requirement, a few tiles in the first and last rows (where the hand must slant to reach the tile) were excluded.

For each tile I measured the X (depth) and Y (horizontal) position offsets from its center as well as the Z (height) offset from the surface. To check for Z-axis overshoot, I instructed the arm to hover 1/2″ above the tile and measured the gap. A total of 48 tiles were reachable with the gripper touching the board at a right angle. I measured the horizontal and height position errors on these 48 tiles, yielding the statistics below.

                    Average   Stdev   Max
Horizontal Error    0.35      0.15    0.71
Height Error        0.74      0.40    1.38

(All values in inches.)
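
For anyone repeating the test, summary statistics like these are quick to tally with NumPy. The offsets array below is just a placeholder, not my raw measurements:

import numpy as np

# Placeholder values; in the real test this held the 48 measured offsets, in inches
horizontal_err = np.abs(np.array([0.20, 0.45, 0.35, 0.50]))

print("avg=%.2f  stdev=%.2f  max=%.2f" % (
    horizontal_err.mean(), horizontal_err.std(), horizontal_err.max()))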

This level of accuracy isn’t good enough. Let’s look at the error profile and note the warping effect:

Height error as a function of board position

I was surprised by the overall inaccuracy given how good the linear model fit was for each servo. What could be causing these position errors? I’m tempted to explore this further, but my practical side convinced me to leave this stone unturned and focus on completing the project. That said, I suspect:

  1. the tension of the two springs supporting the upper arm is non-linear, causing rotational distortion at the shoulder joint
  2. small errors in calibration and in the measurement of link lengths get magnified at the tip of the gripper
  3. the effect of gravity, especially when reaching far
  4. deadband in the servos

In Part 4, I’ll explain how I made the arm movement more precise.

A low-cost arm for a Personal Robot (part 2)

Building the Arm

While you can buy the complete kit from Lynxmotion, I ordered only the arm hardware (AL5D-NS), the optional wrist rotate assembly (LPA-01) and the servo controller (SSC-32) from their store. I bought the servo motors from ServoCity, which had a much wider selection and lower prices. Assembling the arm was straightforward and required just basic tools: a Phillips screwdriver and long-nose pliers. Anal as I am, I completed it in four hours, though I spent more time redesigning it as described below. Jim Frye, the owner of Lynxmotion, took great care in making their kits easy to build: each part was clearly labeled and neatly organized in plastic packages. You can read the step-by-step assembly instructions and other useful design tips on the Lynxmotion Web site. In fact, the company makes the instructions available online only, to save on printing costs and keep prices low.

The AL5D is quite functional “as is,” but I made some design tweaks to make the arm even more versatile. First, I replaced the 4.5-inch aluminum tube of the forearm (ulna) with an 8-inch tube to give the arm a longer reach. I made the longer tube by cutting down a 12-inch aluminum tube, which can be bought at hardware or hobby stores, then drilled the two holes at the ends. With this change, the robot arm can now reach objects up to 20 inches away and pick up pieces on any square of a 15×15-inch chessboard. The original arm could not reach the corners, as you can see from these before and after photos:

Original design of the Lynxmotion AL5D

AL5D modified design

Next, I installed the wrist rotate mechanism on top of the elbow joint rather than on the wrist. I did this to lessen the load (mechanical strain) on the elbow and shoulder servos. This design has the added advantage of being able to swing the gripper side to side, at the expense of losing the full twisting motion when the gripper is bent at an angle. I think this is a good tradeoff because the side-swinging dexterity could prove handy when grasping objects in cramped environments.

Wrist-rotate servo

I also shifted the wrist’s vertical sweep from [+90, -90] degrees to [+45, -135] to permit sharper downward bending of the wrist, which is useful for manipulating objects on the ground.

Finally, some finishing touches. I mounted the arm assembly on a 14×14-inch wood base and added ultra-thin vinyl feet underneath to prevent slipping. To keep the wiring neat, I fastened the wires to the forearm with plastic tie wraps and shielded them with a plastic spiral binding taken from a discarded notebook.

With these changes, the robot arm is now ready for action.

Part 3 – Taking Control (coming soon)

A low-cost arm for a Personal Robot (part 1)

Finding Bobby

In my quest to build a $1000 personal robot, I spent some time figuring out how to build a functional yet affordable mechanical arm that can pick up small objects, play chess and do other useful stuff. Just as important, I want a design that can be reproduced by anyone with basic mechanical and programming skills. To me, a repeatable design matters because it allows good ideas to be adopted quickly and helps advance the state of robot-building practice, especially for hobbyists.

Here is my wishlist:

  1. affordable, ideally within $300 USD
  2. can produce accurate and repeatable motion (to reliably pick up and manipulate objects)
  3. can reach objects up to 1.5 to 2 feet away
  4. lifts 0.5 to 1 pound of load
  5. easy to find parts and construct

When I started this project, I thought I could readily find existing designs on the Web, but that turned out not to be the case. Sure, you can find many videos of people showcasing their robot arms in action, such as here, here and here. But dig deeper and you’ll see that: (1) the authors offer no detailed building instructions, (2) you may need to handcraft parts out of sheet metal, plastic or wood, and (3) most projects focus on the arm assembly but not the motion control software.

What follows is a chronicle of my experience in exploring design alternatives and coming up with my chosen solution.

Choosing the Parts

A couple of hours of online research was all it took for me to realize that fabricating the individual parts for the arm would be too time consuming. This option only makes sense if you have the right shop tools as well as the skills, time and patience to make the pieces. Given that I probably possess only one of those, I decided that buying a commercial robot kit was the way to go.

I narrowed my choices to the Lynxmotion AL5D (~$400) or the Crustcrawler AX-12 Smart Robot Arm ($699). Here’s how they stack up:

Lynxmotion AL5D robot arm

Crustcrawler AX-12 Smart Robot Arm

                              Lynxmotion AL5D                           Crustcrawler AX-12
Spec Sheet                    link                                      link
Joints (DOF)                  5 (with wrist rotate)                     4
Lifting Capacity              13 oz.                                    2 to 3 lbs (32 to 48 oz.)
Max Forward Reach             17 in (approx.), unmodified               19 in (?)
Max Height Reach              19 in                                     19 in
Gripper Opening               1.25 in                                   4 in
Weight (no batteries)         31 oz.                                    unspecified
Accuracy of motion per axis   0.09 degrees (using SSC-32 controller)    300° range per joint in 1024 steps (0.29 degrees/step)
Repeatability                 unspecified                               2.5 mm
Accessories Needed            AC power supply, 9V battery,              AC power supply
                              USB-to-RS232 converter (*)

(*) Needed if your computer has no serial port.

As you can see, the Crustcrawler is the bigger brother of the two, but it also costs almost twice as much and its design leaves little room for modification. On the other hand (no pun intended), the Lynxmotion, dubbed the “Erector set of robotics”, has a modular design that lets you try out different configurations. For me this is a definite plus because I want to see which combination of servos and joint designs works best. This feature swayed me to buy the Lynxmotion.

I looked into other potential solutions but nixed them. Lego Mindstorms is a popular educational kit that lets you rapidly prototype designs, and I’ve seen many impressive projects built with it, including the Rubik’s cube solver by Hans Andersson, Muranushi’s auto book scanner, Taylor Veltrop’s Stair Climber and Mario Ferrari’s robot projects. Another favorite is the kit line from Vex Robotics. The problem with these kits, however, is that: (1) you are buying a generic kit which may not have all the pieces you need for your design (and buying additional parts tends to get expensive), (2) the assembled project can easily come apart, and (3) in the case of the Mindstorms, it comes with only 3 motors, which makes building a versatile arm impossible.

Part 2 – Assembling the Robot Arm

Robot arm for the Kinect

Lynxmotion AL5D robot arm

For the past month or so, I stopped working on the Kinect to dabble in another project. “But you can’t do that! What could possibly be better than hacking Microsoft’s hottest gadget?” you may ask. Well, I will tell you.

All along I had planned to use the Kinect as part of a larger, more ambitious goal: to build a personal robot. That’s right, my own little minion who will do chores for me. I don’t mean one of those entertainment bots that passively carry drinks on a tray. No, I want one that does something useful with little guidance, like the iRobot Roomba, only cooler. Its first mission will be to perform tasks in place: maybe play a game of chess or tic-tac-toe and pick up objects within arm’s reach. Over time I will add wheels for locomotion and make him do more.

I had dreamt of building a robot like this for years, ever since the HERO 1 robot kit came out in the ’80s. Back then it cost around $2500 USD and I couldn’t afford it. It was a blessing in disguise, because the HERO 1 was not that intelligent and could not really do what I wanted. In fact it was but a fancy remote-controlled robot whose primary value was to teach basic robotics skills to its builder. Mind you, I am not knocking the HERO 1. It was a pioneer, bringing robots into people’s homes, but unfortunately it was well ahead of its time. The technology in the 80s was simply too immature to make affordable, smart robots.


HERO 1 robot. Courtesy: Wikipedia.

Fast forward to 2011. With cheaper hardware and a number of impressive open-source software tools with capabilities previously only available to researchers, I believe we now have the magic ingredients to build an affordable personal robot. For my yet-to-be-named robot, I plan to use:

Of course other good alternatives exist, but I chose these because (1) they are affordable for the independent researcher or hobbyist and (2) they play nice together. Last week I finished assembling the Lynxmotion AL5D robot arm and writing a Python program to control each servo motor individually and, better yet, command the arm to move to a specific location using Inverse Kinematics. I’ll write more about it real soon.

Experiment to remove noise in Kinect depth maps

Following a suggestion from ChrisS on my previous blog post, I tried to see whether I could reduce the noise in Kinect depth map images by averaging data from multiple image frames. Just to be clear, we’re talking about the wavy appearance of what ought to be smooth surfaces such as floors, walls, furniture panels, etc. Here’s a familiar example taken from my living room:

So how do we get rid of the wavy texture? If the noise is really due to random measurement errors, we can in theory reduce it by averaging several measurements for each point in the image. This works because a point distance that is overestimated in one frame gets offset when it is underestimated in another. As it turns out, I saved 3 frame captures of a few scenes I took last month, including the example above. The frame sets were taken approximately 33 ms apart with no detectable camera movement between shots. This is convenient because we can compute statistics for each pixel location (x,y) without having to align the images first.
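
Here is a minimal NumPy sketch of the per-pixel averaging and median. The file names, array dtype and image size are assumptions; adjust them to match whatever your capture program actually wrote:

import numpy as np

def load_depth(path, width=640, height=480):
    # Assumes the .bin file holds raw 16-bit disparity values, row by row
    return np.fromfile(path, dtype=np.uint16).reshape(height, width)

# Three frames of the same scene, taken ~33 ms apart with the camera held still
frames = np.array([load_depth("scene_%03d.bin" % i) for i in (1, 2, 3)], dtype=float)

mean_depth = frames.mean(axis=0)            # per-pixel average over the 3 frames
median_depth = np.median(frames, axis=0)    # per-pixel median over the 3 frames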

 

The Results

 

Hmmm… very interesting and the opposite of what I originally expected. I tried (1) taking the average value for each pixel, and (2) taking the median value. In both cases the resulting depth map still contains the noise! In fact, simple averaging actually increases the noise because ranging measurements at the deep end (i.e., raw disparity values > 800) tend to vary wildly and skew the mean. In the screenshot below, you can see the effect of the skewing as wisps of pixels.

Resulting depth map by averaging 3 image frames

 

Resulting depth map by taking the median pixel value from 3 frames

For those of you who want to examine the data, the Zip file below contains the 3 raw binary images plus the 3D point clouds for one raw image, the averaged result and the median. I saved the clouds in PLY format; use MeshLab to view them.

What do the results say? My conclusion is that the noise is not random but is an inherent characteristic of the PrimeSense 3D imaging technology that powers the Kinect. Whether the noise is due to ranging error in the imaging sensor, distortion in the IR diffusion filter, an artifact of the depth interpolation algorithm or something else requires more investigation. Fortunately, this limitation has only a minor impact on many (most?) potential applications of the Kinect.

Create editable 3D Point Clouds from Kinect depth images

The recent release of OpenKinect, the open-source drivers for the Microsoft Kinect, has unleashed a creative tsunami. Just days after the OpenKinect announcement, scores of digital artists, researchers and hobbyists started to demonstrate a dazzling array of projects including Radiohead-like point clouds, skeletal tracking, multi-touch interfaces, etc. No doubt there is much more to come.

While I too have been buoyed by this creative wave, I have a different need. As a robotics hobbyist, I want to use the depth maps produced by the Kinect to simplify a robot’s task of recognizing objects in its view. To do that, I need to convert the Kinect’s depth map output into a 3D point cloud that can be stored, manipulated and analyzed. In other words, I want to be able to study a point cloud, make annotations if need be (e.g., mark out surfaces or put bounding boxes around objects) and, more importantly, try out various algorithms on the dataset. Doing this type of analysis is virtually impossible(?) with streaming video.

While searching online for solutions, I used these criteria for screening:

  • the tool should be simple to use and “lightweight” (i.e., no huge software bundle to download and install)
  • modular… a Unix-tools approach
  • leverage readily available tools, open-source if possible

What I came up with is the process described below. Source code is at the end of this post.

1-2-3 Step Process

1. Get a snapshot or sequence of snapshots from the depth camera.

glgrab  <prefix>

I created the glgrab program using the OpenKinect sample glview.c code as a starting point and added code to dump the depth image. Presently, the program is bare bones. It waits ~3 seconds before grabbing an image frame from the depth camera and saves it to a file called <prefix>_nnn.bin, where <prefix> is the output name prefix that you specify and nnn is the file index starting at 001. Make sure to move your previously saved images before rerunning glgrab or they will be overwritten!

Use the -c flag to change how long the program will wait before saving depth images, and the -n flag to change how many frames to save (default=1). Type: glgrab -h to see the help info.

UPDATE 1/24/2011: I modified my glgrab.c program to work with the latest OpenKinect git build (2ea3ebb4b2be5d0472a8).

2. Convert the depth image into a point cloud file. I wrote a Python script that takes as input the filename of the saved image from Step 1 and the name of the output 3D point cloud file.

python  depth2cloud.py   <image_bin_file>   <cloud_name.ply>

The cloud is saved in the PLY file format, which is very easy to understand and can be loaded directly into MeshLab.

NOTE: depth2cloud.py requires the pylab module. See this instruction page on how to install pylab.
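
To give a flavor of what the conversion involves, here is a stripped-down sketch of turning a depth array into an ASCII PLY file. It is not the actual depth2cloud.py code: the focal length and optical center constants are rough community approximations, and it assumes the raw disparity has already been converted to metric depth.

import numpy as np

def depth_to_points(depth, fx=594.2, fy=591.0, cx=339.5, cy=242.7):
    # Back-project an (H, W) array of depth values in meters to an Nx3 point array
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - cx) * depth / fx
    y = (ys - cy) * depth / fy
    pts = np.dstack((x, y, depth)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop pixels with no depth reading

def write_ply(points, path):
    # Minimal ASCII PLY: header plus one "x y z" line per point; loads in MeshLab
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\nelement vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))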

3. Load the point cloud. Launch MeshLab and click on File -> Open, then select the PLY file.  For navigation tips, click on Help -> On screen quick help.

Here are examples of point clouds from images I took inside my house. Note that even though some details are lost, the flat surfaces are well-defined. This should make image segmentation and surface extraction easier.

Point cloud showing shoes, exercise equipment, a ball and a toy car on the living room floor.

Point cloud showing a globe and various items on top of a shelf.

Here’s the source code. To build glgrab:

  1. Unzip and copy the contents to the libfreenect/examples folder. The CMakeLists.txt adds glgrab as a build target.
  2. cd ../build
  3. make

You should now see the glgrab program in the build/bin folder along with glview and the other OpenKinect binaries.

To use the depth2cloud.py, you’ll need Python and the pylab module installed.

depth2cloud.zip

If you don’t have your Kinect yet or simply want to see sample point clouds, here are the raw images and PLY files for the two examples above. I included the raw images so you can try to generate the PLY files yourself.

examples.zip

Hope you find this useful.

Reconstructing the Western Han Dynasty — cool vid

I’ve been playing around with capturing depth map images using the Microsoft Kinect and am now trying to render them as point clouds. To work with still images rather than streaming video, the first thing I did was modify the glview sample program that comes with the OpenKinect library (these guys are awesome!) so that I can grab an image from the video stream and save it as a binary file. Then I load and display the images using pyplot. I’ll post some results soon.

In my search for tools to visualize point clouds, I came across some very interesting projects on digital archaeology. Here’s an example of a collaborative research effort to reconstruct the Western Han Dynasty.

The video excerpt says:

During the summers of 2008 and 2009, the University of California Merced with the Virtual Heritage Lab, Xian Jaotong University, and CNR (Italian National Center of Research) had the great opportunity, unique for a western research group, to access archeological Chinese sites in the Xian region (China). Using Differential Global Positioning System (DGPS), laser scanners, and 3D data processing software, those three institutions worked together to obtain important telemetric surveys, a very rich digital data collection of the most representative monuments and artifacts of the Western Han Dynasty, and also 3D reconstructions of four tombs and part of the ancient capital, Changan.

Enjoy!

Mac port of Kinect driver working on OSX 10.5

I couldn’t resist the urge to get my hands on a Microsoft Kinect after open-source developers announced that they had written drivers for the device to run on Linux and, yes, the Mac OS. This is an amazing feat, given that they got the alternative drivers working just days after the Kinect product launch. So yesterday I bought one from a GameStop store in downtown Berkeley. Interestingly, the sales guy commented that I made the right decision to buy it now because they expect to sell out as the Christmas holidays draw near and were out of stock last week. Whether this is true or merely a ploy to coax me to open my wallet, I wouldn’t know. But I just gotta, gotta have my Kinect now.

Kinect for Xbox 360 with free Kinect Adventures game

As you may have guessed, I’m not much into the Dance Revolution thing. In fact, I don’t even own an Xbox 360 to play games on. Yeah, that free Kinect Adventures game that came with the box will probably sit on my shelf collecting dust… unless Santa thinks I was nice this year and hands me an Xbox.

I plan to use this neat gadget to play around with depth maps (images with depth information for each pixel) as a means to simplify finding objects and other interesting features in a scene. In other words, stuff for robot vision.

Getting the driver working on the Mac was straightforward. I followed Sean Nicholls’s easy-to-follow instructions and got it working on the first try. Although his blog mentions OSX 10.6.5, I can confirm that it works on OSX 10.5.8 as well. Here is my obligatory depth map sample. The object on the bottom right is my daughter’s stuffed dog.

Depth map sample

Overall I’m impressed with the initial results. I do see some dropped frames being reported while running the glview demo; see the sample output below. I think I am clocking about 20-24 frames/second rather than the full 30 fps. But it is still early days, and a little birdie told me that this issue is being worked on. Besides, I won’t be needing full-motion video for my projects, so for me any capture improvements are icing on the cake. For the record, I like icing.

GOT RGB FRAME, 307200 bytes
DROPPED DEPTH FRAME 397940 bytes
GOT DEPTH FRAME, 422400 bytes
GOT RGB FRAME, 307200 bytes

As a side note, OpenCV 2.2 is due out on Nov 20. We’ve got some very exciting times ahead.