Thursday 17 November 2016

Computer Vision - IPcam + Gstreamer adventure = new Camera

Intro

This started as a re-run of my Face Tracking project, but with updated software. New software list:
  • OpenCV v3.1
  • Gstreamer v1.8
  • V4l2loopback v0.8
  • On Ubuntu 16.04 - x64
But this proved to be harder than the first time. Except for OpenCV, I used the default package repositories. Building OpenCV + Gstreamer 1.0 + V4l2 + Python + OpenGL was easy enough; there are plenty of how-tos to be found...

But then I had this problem:
(Sorry, the video turned sideways. But of the three frames, one is very clearly falling behind...)

Things to try

Here is a list of the methods I used to read the IPcam stream in OpenCV:

Gstreamer to Loopback
cv2.VideoCapture("v4l2src device='/dev/video1' ! videoconvert ! appsink sync=false max-buffers=2 drop=true name=sink emit-signals=true")
Gstreamer in OpenCV
cv2.VideoCapture("souphttpsrc location=http://192.168.50.107/videostream.cgi?user=admin&pwd=12345 do-timestamp=true is_live=true ! queue ! appsink")
Http directly in OpenCV
cv2.VideoCapture("http://192.168.50.107/videostream.asf?user=admin&pwd=12345&resolution=64&rate=0")
Then, getting frustrated, I had another idea. As the default web interface had no delay, I tried to capture the browser window and send that to the loopback device with Gstreamer:
ximagesrc use-damage=false xid=0x3a00689 ! ffmpegcolorspace ! videoscale ! v4l2sink device='/dev/video1'
+
cv2.VideoCapture(1)
But this also resulted in an ioctl failure...
Here are some v4l2loopback issues that seemed related: #97 - #93 - #83
This is the script I used for testing, which I compared against the IPcam web interface.
Code Snip
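In case the code snip above does not load, the idea of the test is simple: grab frames, stamp them with the wall-clock time, and hold the OpenCV window next to the IPcam web interface. A stripped-down sketch of that (not the exact script; the URL is the same style as above, and the window name is arbitrary):

import time
import cv2

# Any of the capture methods above can go here; plain HTTP is the simplest.
cap = cv2.VideoCapture("http://192.168.50.107/videostream.asf?user=admin&pwd=12345&resolution=64&rate=0")

while True:
    ok, frame = cap.read()
    if not ok:
        print("no frame, giving up")
        break
    # Draw the current time on the frame, then compare this window
    # side by side with the camera's own web interface.
    stamp = time.strftime("%H:%M:%S")
    cv2.putText(frame, stamp, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("opencv capture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()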

Solution

So finally I concluded that the (cheap) IPcam stream would not work. But I did want to keep the pan/tilt freedom. I could rip everything out and replace the camera and the board with an Arduino, like this.
Back
Front
But my choice was to keep the original hardware, so I can still send web requests for the pan/tilt movement, and to upgrade the camera to the PlayStation 3 Eye, which has some impressive image quality!
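For the pan/tilt part, the original board still answers plain HTTP CGI calls, so OpenCV never needs to touch it. Below is a minimal sketch of how that control can look with pycurl; the decoder_control.cgi path and the command numbers are what these Foscam-style clone boards typically use, so treat them as assumptions and check the CGI documentation of your own camera (IP, user and password are the same placeholders as above).

import io
import pycurl

CAM_IP = "192.168.50.107"
USER, PWD = "admin", "12345"

def send_command(command):
    # 0 = up, 2 = down, 4 = left, 6 = right on typical Foscam-style boards (assumption).
    url = "http://%s/decoder_control.cgi?command=%d&user=%s&pwd=%s" % (CAM_IP, command, USER, PWD)
    buf = io.BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEFUNCTION, buf.write)   # capture and ignore the reply body
    c.perform()
    c.close()

send_command(4)   # one pan step to the left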
Main Board
Motor Wiring
The Head Taken Apart
Replaced the Camera

The PlayStation 3 Eye also has a microphone array, which is a nice extra.





If you want to do the same, I cannot stress enough how much attention you should pay to the order in which the camera goes back together!! Because some parts are too small to fit past the USB port, I was forced to cut the PS-Eye wire... But I forgot the ring that holds the base and head together, so I had to take it all apart again!

The Result

Friday 28 October 2016

Building Robot DIY Power Supply

Goal

Having multiple off-the-shelf devices can make some things easier, but a 12 volt car battery doesn't power all of them directly. I needed to convert 12 volt into a number of required voltages:
  • +5v for the IP cameras and Arduinos
  • +5v positive and -5v negative for the robot arm (link to project?)
  • +7.5v for the network switch
  • +12v (no conversion) for the motors (shoulders, feet, neck) and for the Pico PSU of the secondary computer.
  • +19v for the laptop power supply (eBay car adapter)
The last few were simple: either direct (the motors use 12 volt) or a converter from eBay that runs off 12 volt.
The first four posed a challenge, as they draw real current: a quick count came to 8 amps!! And overload must be avoided, especially since the arms can use a good part of that. So I went for a 10 A and a 5 A circuit to be safe. The motors will draw directly from the batteries.

Drawing Board

First I found a 10 A positive and a 10 A negative schematic for converting 12 to 5 volt, then a simple regulator (7.5 volt) schematic. All thanks to CircuitOnline, a great Dutch electronics website.
The positive side
To get both a positive and a negative rail, two 12 volt batteries are required. With only one battery you can use a voltage divider, but this lowers the voltage.
The negative side.
And with heavy loads you would need heavy resistors! So when using two batteries, just remember that combined they can deliver double the current. As the robot has enough space for two batteries and the extra battery weight increases stability, the choice was easy.
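To put a number on that: with a midpoint divider on a single battery, the divider has to carry roughly ten times the load current to keep the midpoint anywhere near stiff (a common rule of thumb). A quick back-of-the-envelope calculation, with 1 A as an example load:

# Single 12 V battery, midpoint divider made of two equal resistors R.
V_BAT = 12.0                      # volts
I_LOAD = 1.0                      # amps drawn from the -5 V side (example value)
I_BLEED = 10 * I_LOAD             # rule-of-thumb bleeder current
R = (V_BAT / 2) / I_BLEED         # value of each resistor
P = (V_BAT / 2) * I_BLEED         # power burned in EACH resistor

print("R = %.2f ohm, P = %.0f W per resistor" % (R, P))
# -> R = 0.60 ohm, P = 60 W per resistor: big, hot and wasteful,
#    which is why the two-battery solution wins.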

The additional 5 Amp
5A print

I chose to use an older PC power supply as the case: it has a fan, lots of holes and plenty of space.
Everything inside
Running some tests
Ready for first test!




For details on the first attempt see below, but it is fine to skip that.
After the first attempt, measuring the damage...

First Attempt

All apart again!
When using the MJ2955 and MJ3055 I forgot the mica (isolation) washers... If you only use the same type they can share the ground (which is the metal body on the MJ2955), but mixing in the MJ3055 this way causes a short circuit!
The MJ3055, which got fried
Then, by accident, I forgot to wire the ground first... This resulted in a lot of smoke and me ordering the same parts list... again!!!



Result

So this cost some additional time, but it also made me redesign and clean up the wiring.
In the end everything was fine. All measurements were in the green and the power supply was good to go!
Here it is finished and closed up, not yet installed in the body.

Wednesday 6 January 2016

Robosapien V2 rewiring

RoboSapien V2 Wire Fix and other Mods.




Some time ago I found this old but still very cool powered toy.
I wanted it as a gift for my son... and a bit for myself.
I got it second hand and it showed: it had some cracks and a heavy nicotine odor...
But it worked and was complete with the manual, and there was no battery leakage damage.

Soon I noticed some uncontrollable movements and battery drain.
While searching the internet for tricks, hacks and modifications I found something disappointing.
The wires running through the legs are of very bad quality, and many V2 models have this problem!
Somehow the other wires are fine, but the leg ones are not...
A foot connector cable
Total break down!
Here are some examples of how bad they become, to the point of causing short circuits.
We (my son and I) had a clear to-do: rewire the legs.

Gathering information

Luckily we were one of many who committed to this job, as it is a common problem.
First of all, taking the robot apart is a challenge: its covers are very fragile, and so both upper leg covers were lost and I am now looking for a 3D-printable replacement.
A second point is that the arms can fall off after removing the chest covers (unlike on the V1 model).
Once the covers are off, untangle the wires, as they cross over halfway up the body.
Write down how all the wires fit into place.
All that info can be found on the internet (see the references), but it saves time to have it written down already.
Then remove the legs from the body.

We did not have the same size cables, so we used wires from an old computer PSU.
They come in many colors and lengths; just remember not to make them too long (they might jam once the covers are back on) or too short!!
The foot board
We also needed to recycle the plugs, so we had to carefully take them apart and fit the new wires into them.

Unfortunately the wires at the base of the feet are soldered, which takes more time.


Left Foot Connector
Despite the time it consumed, the job did not take more than a day (thanks to my little helper!).
Right Foot Connector
Foot still with the old wires
The Motor Board

Finally the wires are fixed and everything is working again.

References:

Here are a few websites that helped me or can be helpful.
  1. Very useful -> for info on the mainboard; it also has links to other internal parts.
  2. In case you (like me) forgot to write down the foot wiring information.
  3. Java is not my thing, but I found Robosapien Java code, which might be useful.
  4. Some basic Robosapien information.
  5. For building your own remote: the IR codes.
  6. Info on connecting external power came from here.
  7. Then I found RoboCommunity, but this site doesn't seem to work correctly or host very useful info. Still, have a look for yourself.

Last point, some extra mods:

After the wire update I wanted to look at the possibility of an external power source.
As described on one of the reference websites (#) it is possible (2 pictures)...

The Ground (GND) on Mainboard

On the left you can see where the GND is connected.
 
Closeup of the Ground on Mainboard
And a close-up of it. This GND can be used for both the 6 and 9 volt inputs.
6 volt (orange)

Next are the two input points for the 6 and 9 volt. Remember that the 9 volt line can draw quite a few amps; I did no measurements, but I have a 5 A supply hooked up.
9 volt (brown)

As an upgrade on this, I plan to have the batteries charge when external power is connected.
However, this will require some additional wires (3 extra) to the batteries.

The idea of wireless charging in the feet did cross my mind, but you would lose weight down there, which can make the body unstable!




Something else I still need to look into is replacing the IR communication between the remote and the robot with RF, like on an RC car.
That way there is no dependency on a clear line of sight.
But I will document that upgrade in a different post.

Friday 11 April 2014

Making 3D glasses from a phone

How to make 3D glasses with a phone and a 3D printer.


To be honest this is just a copy of a project by another guy who did all the real work; here I just show off my version with some tips and tricks on software and other hardware you can use.

I did this already some time ago, and I have now found out that the amount of software has heavily increased!
Which is very nice, for I was under the impression that the Oculus Rift would have more support and thus more software available.
Here are a few points that make the OpenDive better (in my opinion):
  1. Cheaper
  2. Full HD (it depends on your phone).
  3. Mobile (the Oculus needs a computer to connect to, which can be a laptop, but that is not the same)
While I was writing this (it did take me some time...) a new project appeared on Kickstarter that is very similar to the OpenDive. Only it looks like they just want to make money (sorry to say): they offer nice-looking glasses, but at a higher cost, with less freedom (it is fixed to a specific phone size) and, most of all, they have not published any software (yet)...

Printing the 3D object 


For the 3D object I used the one from the site, from the how-to part. Printing it at my local 3D print shop cost about 40 euros, in high density.

Here is the final result of the print job, first the front and then the back. I made it green because that is my son's favorite color.
Afterwards I reinforced the top and fixed the phone holders (two bars going from top to bottom).
I also added some soft, thin isolation tape so the plastic does not scratch the face. And of course the lenses and the band to strap it to your face.

Doing an OpenDive test run

Here are the first results of running the OpenDive test app, which you can get from here.

Other applications

Of course that test run is boring, and I wanted to play Quake! For the full game you still need to provide your own texture files; you can search Google for those details.
There are a few others that you can enjoy with these glasses:
  • Wings - a kind of sky diving game
  • RTPhiscics RT3DApp - A simple, more test-like app
  • The Height - No idea how to describe this one...
  • Bubblecards - Some stupid racing game
  • FOV2GO Minus Lab - strange looking
  • RollerCoast - Cool to show the folks the glasses!
  • Go Show Free - A real home theater
  • DiveCityCoaster - Same as the Rollercoaster
  • Virtual Reality FPS - Difficult to see FPS
  • Dive Launcher - Dive interface...
  • Jet Sprint - A flight simulator
  • Dive Deep - Underwater game
  • VR Scene - Nice looking FPS, Unreal flavor
If you plan on making your own, take a look here. You can also try to play existing games with it, see the video. This is called Kainy and seems very promising.

Useful add-on devices

Once the games were installed, I soon found out that controlling them with my Bluetooth keyboard is not an option. The best option is a Bluetooth gamepad; I did it with a PS3 gamepad, but this required some extra tweaking.
Just remember that when you use these glasses, you are wearing your controls.

The Next step: Leapmotion

First of all, I came across the Leap Motion a year before it was released, and I was already thinking in this direction. Then the Oculus Rift came along, and thus the OpenDive. Now somebody has already combined the Leap with the Oculus.
But Oculus was just sold to Facebook, and I remember the comment Markus "Notch" Persson gave when pulling back his Minecraft port: Facebook creeps him out!
So let's now work on a way to make it work with the OpenDive! I did see Minecraft for Android, so...

I am working on some ideas, but nothing concrete yet...

Installing Open Biometric Recognition

Installing OpenBR

Introduction

Open Source Biometric Recognition (OpenBR) is an advanced framework that can act as an add-on to OpenCV; it does not work with SimpleCV. Also, if you are interested in more robot vision programs, keep reading!

Installation

Because GCC 4.7.3 is required, Ubuntu 13.04 is advised. See here for the official installation instructions. A few reminders when following those instructions: there is a fair amount of data to be downloaded... and remember that wherever it says:
make -j 4
it means you use 4 cores; perhaps you have more or fewer (my test machine is virtual and only has 2). You will also need a lot of disk space, and downloading the data-sets will consume a lot of bandwidth.
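If you are not sure how many cores your (virtual) machine really has, a one-liner tells you what to put after -j:

import multiprocessing
print(multiprocessing.cpu_count())  # use this number for make -j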

If you run into the error message: libpng error: Incompatible libpng version in application and library
this is a fairly well-known problem with self-compiled programs, but nothing serious.
Just get the matching libpng version and compile it [./configure, make, sudo make install].

The OpenBR SDK.

If you run into segmentation faults, the most conclusive information I could find is this, which shows that getting it to run correctly is not that easy... I tried it 3 times, with the same Ubuntu and GCC, using OpenCV 2.4.5 and 2.4.8. It is not yet clear whether 2.4.8 is simply not supported; I will update this article as soon as the cause is clearer.

Other Projects

Meanwhile, while getting OpenBR to work and hunting for useful pages, I came across a few more interesting programs that I do not want to write complete articles about, but that are useful to mention here! Here is a list of similar and handy projects. I will expand this part when I have more information on the software.

Object Recognition Kitchen


O.R.K. is based on Ecto, which is a C++/Python framework. It has strong ROS support but, like ROS, it drags in a lot of dependencies.

Eulerian Magnification

Eulerian Magnification is a method that applies spatial decomposition and temporal filtering to amplify subtle changes in video. Take a look at the website; the only hard part is getting the right input footage to use. But give it a try!

Gamera

Gamera, besides being a fictional monster, is also a toolkit for building document recognition systems, which can be useful when you want your system to read documents with special symbols.

ccv

ccv offers gesture recognition, something that speaks to my personal robot needs.

PyVision

PyVision is an object-oriented computer vision toolkit. I did not test it because it is Windows and Mac only! But if anyone has tried it, let me know.

RoboVision

RoboVision is a software stack; besides that, I found some useful information and links for stereo vision there. Plus, the blog has some CUDA examples, but I have not tried those yet.

Qualia-Smile

Qualia is a little Python script that uses OpenCV to detect smiles. I did not benchmark it against a list of different faces; feel free to do so.
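If you want to try something similar without digging into that script: OpenCV ships Haar cascades for faces and smiles, which give you a quick-and-dirty version. This is just a sketch along the same lines, not Qualia's own code, and the cascade file paths are placeholders for wherever your OpenCV data files live:

import cv2

# Paths depend on your OpenCV installation.
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier("haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Only search for a smile in the lower half of each detected face.
        roi = gray[y + h // 2:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        color = (0, 255, 0) if len(smiles) > 0 else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    cv2.imshow("smile test", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()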

ofxIpVideoGrabber

ofxIpVideoGrabber is not so much for recognition, but it is useful for making overlays.

MFTracker

MFTracker is based on TLD and uses Python. Very simple, but it has real potential! Do not confuse it with the MFtracker for the financial sector... which I did not look at.

Others...

Besides these, there are some projects like face authentication for desktop applications, but I did not use those. Perhaps somebody else can use them...

Tuesday 25 March 2014

Installing OpenTLD on Ubuntu 13.04

Getting OpenTLD working on Ubuntu 13.04

- And so I see, and that is good!

Introducing OpenTLD

TLD is an algorithm started by Dr. Zdenek Kalal, who made it open source under the name OpenTLD. I found this work a few years ago; back then I just got it to run on Windows, but I did not have a real application to apply it to. Now I am working on my robot and want it to recognize things well! If you just want to see it work (fast and easy), try the Windows version or, better yet, the Android one (see below).

Requirements

You should have a look at the installation page before continuing. On the website there is an alternative to MatLab called Octave. In Ubuntu 13.04 the packaged version is 3.6, but we need 3.8. So here is a quick list of what you need before getting started:
  • Ubuntu 13.04 installed
  • Download/Install OpenCV
  • Install Python development packages
  • Install build essentials
  • Download the OpenTLD source
  • Octave 3.8 source, download, ./configure, make, sudo make install 
You can compile Octave with make -j 4 (4 = number of cores); check here for more tips.
But first a few packages: sudo apt-get install python2.7-dev python-gst0.10-dev libeigen3-dev libwebcam0-dev libgstreamer0.10-dev libgstreamer-plugins-good0.10-dev libv4l-dev python-gtk2-dev libgtk2.0-dev gnuplot libjasper-dev bison++ python-pycurl libcurl-dev

Octave also needs a few packages that come from here, such as: general, control, signal, image, miscellaneous, io, statistics, image-acquisition
You can download the packages and install them as in the example on the site. But if you enabled the cURL library when compiling Octave, you can use its built-in package manager:
octave -q
 pkg install -forge [package name]

OpenTLD Modifications

For the compile.m file to run correctly you need to adjust a few things. I am not an Octave expert, so perhaps these things have changed since, but I had to do this to make it work.

In the compile.m script, go to the if isunix part (for anyone with some Python or other programming knowledge, it helps). The script looks for OpenCV libraries of version 2.2, while we are using 2.4, so change that line. The next line to change is inside the i loop. Make it:
lib = [lib ' -o ' libpath files(i).name];
where the -o is new. This is required because the i loop builds up a list of files, and without the -o argument it would fail (I guess the capital -O was meant as "all"). Then, a few lines down, change every -O argument (which is unknown) to -o.
Also make sure cv.h is on the include path; that makes the compile step work correctly. I was still getting a problem with the cv.h and highgui.h headers (a simple gcc include flag should fix this, but I did it differently): I just put the full path to "cv.h" (and highgui.h) in lk.cpp in the mex directory. The final problem was a permission issue, so I ran compile.m with sudo. And presto!

Then we move on to run_TLD.m.
If you run into an error about videoinput being an undeclared variable, it means OpenCV did not compile correctly against Octave. Do not bother searching the internet for "how to compile opencv with octave", because that only works with older versions of OpenCV.
 

OpenTLD derivatives

  • Detecting multiple objects, from here, or a C++ implementation: motld.
  • A Python version here, but this one was abandoned... sadly

The Result

For the moment I do not have anything to show yet. I am working on a way to combine the multiple-object version and the Georg N. one. More on this later...

Tuesday 14 January 2014

Face Tracking and IP camera control with Python

How to read IP camera stream with OpenCV 2.4.6, track the face and adjust the camera to keep the face in center.

In this project I wanted to detect a face and then track it.
For this I use OpenCV 2.4.6.
I also use Python 2.6, gstreamer 0.10 and the Python module pycurl.

For the IP camera I use the Wanscam FR4020A2 PT.
The computer hardware I have is an i5 laptop with 8 GB of RAM.
All of it runs on Ubuntu 10.04 x64, but I also used Ubuntu 12.04 x64; probably anything that runs OpenCV and Python will do.
Just remember that gstreamer can (also) take a lot of CPU, which can stall the Python script while it adjusts the view, and then you lose face focus!

In OpenCV you can adjust how hard the detector works to recognize a face.
This can help if slowness is an issue (please read the OpenCV documentation for the details).
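In practice these are the detectMultiScale parameters. A tiny sketch of the trade-off (the image name and cascade path are just placeholders):

import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The three parameters below trade accuracy for speed.
faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.3,    # bigger scale steps = faster, but small faces may be skipped
    minNeighbors=4,     # higher = fewer false positives, but also fewer detections
    minSize=(60, 60))   # ignore anything smaller than 60x60 pixels
print(len(faces), "face(s) found")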

It all started when I came across this tutorial on controlling your webcam with servos.
I just needed to swap the servo control for IP camera control, which I got from here.

A few things you need to do yourself:
Set up a loopback device 
 modprobe v4l2loopback devices=2
and a gstreamer stream.
 gst-launch-0.10 souphttpsrc location="http://192.168.123.142:99/videostream.cgi?user=admin&pwd=&resolution=64&rate=0"  ! multipartdemux ! jpegdec !  v4l2sink device=/dev/video1
I needed to compile and install v4l2loopback, which I got from here.
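The gist of the Python side is then: read frames from the loopback device, find the face, and nudge the camera toward whichever side the face drifted off to. A compressed sketch of that idea (not the exact code from the repository; the cascade path and the DEADBAND value are just illustrative):

import cv2

# The gstreamer pipeline above feeds the IP camera into /dev/video1,
# so OpenCV can open it like a normal webcam.
cap = cv2.VideoCapture(1)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
DEADBAND = 40  # pixels around the image centre where we do not move the camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 4)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        dx = (x + w // 2) - frame.shape[1] // 2   # face offset from centre
        dy = (y + h // 2) - frame.shape[0] // 2
        # Here the pycurl-based camera control (from the link above) would be
        # called: pan left/right when abs(dx) > DEADBAND, tilt when abs(dy) > DEADBAND.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break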

All this combined will show you:
At some point the script crashed because the connection with the IP camera was lost.

You can get my code from github.

It is still a work in progress, because I would like to add error catching for when the connection with the camera is lost. I am also looking into using Python threading, so perhaps some speed increase!
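For the error catching, the direction I am thinking of is a small wrapper that simply reopens the capture once reads keep failing. Just a sketch of the idea, nothing that is in the repository yet:

import time
import cv2

def open_camera():
    return cv2.VideoCapture(1)   # the v4l2loopback device from above

def read_with_retry(cap, max_failures=25):
    # Returns (capture, frame); reopens the device if reads keep failing.
    failures = 0
    while True:
        ok, frame = cap.read()
        if ok:
            return cap, frame
        failures += 1
        if failures >= max_failures:
            print("connection lost, reopening capture...")
            cap.release()
            time.sleep(2)
            cap = open_camera()
            failures = 0

cap = open_camera()
cap, frame = read_with_retry(cap)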

Finally, I have another one of the same IP cameras, so I want to try to enable a stereo version.

Thanks for reading!