
Duration: Jul 2018 - Dec 2018

Purpose: This project was part of my internship at L&T Defense, a partner office for the Center for Artificial Intelligence and Robotics.

Skills: Legacy OpenGL, OpenCV 3.0, Python 3, C++11, Robot Operating System (ROS), Kinematics, Motion Planning, Image Processing


This was the project that sparked my interest in Robotics. Before this, I had no concept of robotics as a field and was still uncertain about what I wanted to pursue in my career.

This project exposed me to some of the most fundamental aspects of robotics, including kinematics, image processing, object recognition, state estimation and motion planning. And the more I learnt, the deeper I wanted to go. Looking back, I didn't achieve much with this, but what the results don't show is how much I learnt and the drastic change it brought about in my career path, giving me a purpose and direction.

Aim of the project

This was part of a larger project to mount a 6-dof manipulator on a mobile robot and use it to explore buildings, including picking up and moving objects, turning door knobs, etc. I was tasked with creating an intuitive interface for visualizing the manipulator in its environment and using a simple point-and-click with a cursor to pick up objects.


The project required a custom application where more features and integrations could be added, so stock applications such as RViz that ship with ROS were not suitable. I started by trying to visualize the motion of a manipulator in 3D using OpenGL; once that worked, I could feed it joint angles to visualize continuous motion.

I started with a 1-dof system, then moved on to 2- and 3-dof manipulators, and was able to correctly visualize the forward kinematics of the 3-dof system.
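The forward kinematics behind that visualization can be sketched for a planar revolute arm. This is a minimal illustration, not the actual code or manipulator parameters from the project; the link lengths and angles below are placeholders:

```python
import numpy as np

def forward_kinematics(lengths, angles):
    """Return the (x, y) position of each joint of a planar revolute arm.

    lengths: link lengths l1..ln
    angles:  joint angles q1..qn in radians, each measured relative
             to the previous link.
    """
    points = [(0.0, 0.0)]
    theta = 0.0          # accumulated absolute orientation
    x = y = 0.0
    for l, q in zip(lengths, angles):
        theta += q
        x += l * np.cos(theta)
        y += l * np.sin(theta)
        points.append((x, y))
    return points

# A fully stretched 3-dof arm along the x-axis: end effector at (3, 0)
pts = forward_kinematics([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

Feeding a stream of joint-angle tuples through a function like this, and drawing line segments between the returned points each frame, is essentially what the OpenGL visualization did.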

I then moved on to inverse kinematics, attempting to control the end effector of the visualized manipulator with the keyboard arrow keys. This worked well for a 2-dof manipulator, but could not handle singularities in the 3-dof system.
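For the 2-dof case, the inverse kinematics has a closed-form solution, which is one plausible way to implement the arrow-key control described above (again a sketch with placeholder link lengths, not the project's actual code):

```python
import numpy as np

def ik_2dof(x, y, l1, l2):
    """Analytic inverse kinematics of a planar 2-link arm.

    Returns (q1, q2) for one of the two elbow solutions,
    or None if the target is out of reach.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None                  # target outside the workspace
    q2 = np.arccos(c2)
    # Shoulder angle: direction to target minus the offset from the elbow
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                       l1 + l2 * np.cos(q2))
    return q1, q2
```

Each arrow-key press would nudge the target (x, y) slightly and re-solve; near the workspace boundary (arm fully stretched, q2 = 0) the solution becomes ill-conditioned, which is the same class of singularity that broke the 3-dof version.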

In parallel, I was working on the second aspect of the project: 3D visualization of the environment.

For this, I used a stereo camera to compute a depth map, from which I attempted to generate a point cloud in RViz.

As a proof of concept, I also used the MoveIt! motion planning library with an ABB manipulator model to visualize the arm's motion in RViz.


  1. OpenGL visualization was unable to handle singularities. Better knowledge of kinematics (which I didn't have at the time) would have allowed proper visualization.

  2. RViz visualization could not be customized to specifications. However, it worked well as a proof of concept, and being open source, it could be used to build a custom interface on similar lines.

  3. Depth estimation was inaccurate. The two cameras in the stereo rig were not well calibrated, and the algorithm used, semi-global block matching (SGBM), was not accurate enough on its own. Additional filtering techniques could have improved the results.
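The singularity problem in lesson 1 comes down to the arm's Jacobian losing rank. For a planar 2-link arm the determinant works out to l1·l2·sin(q2), which vanishes exactly when the arm is fully stretched or folded; a quick rank check before inverting the Jacobian is the kind of guard that was missing at the time. A sketch, with unit link lengths as placeholders:

```python
import numpy as np

def jacobian_2dof(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of the planar 2-link arm's end-effector position
    with respect to the joint angles (q1, q2)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

def near_singularity(q1, q2, l1=1.0, l2=1.0, tol=1e-3):
    # det J = l1 * l2 * sin(q2): zero when the arm is straight or folded
    return abs(l1 * l2 * np.sin(q2)) < tol
```

A Jacobian-based IK loop can use `near_singularity` to fall back to a damped least-squares step (or simply freeze motion) instead of blowing up, which is how the visualization could have handled the configurations that broke it.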

