
How to stream depth image from a basic ToF camera module with Point Cloud Library (PCL)

By : user2956519
Date : November 22 2020, 10:56 AM
1) Changing the OpenNI grabber to use your own ToF camera would be much more work than simply filling a cloud from the camera's own frames in a loop, as shown below.
2) Yes, PCL can display a point cloud and its depth without going through a .pcd file. All the .pcd loader does is parse the file and place the values into the cloud structure; you can fill that structure directly from your camera data, as shown below.
code :
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
cloud->is_dense = true;
cloud->width = widthOfTOFsensor;
cloud->height = heightOfTOFsensor;
cloud->points.resize (cloud->width * cloud->height); // size() is a method, not a field; resize the point storage instead

//Create some loop

    //grabNewFrame from TOFsensor

    for(int pointIndex=0;pointIndex<cloud->size();pointIndex++)
    {
        cloud->points[pointIndex].x = tofSensorData[pointIndex].x; //Don't know the tofData format, so I just guessed something.
        cloud->points[pointIndex].y = tofSensorData[pointIndex].y;
        cloud->points[pointIndex].z = tofSensorData[pointIndex].z;
    }

    // Plot the data using pcl visualizer or cloud viewer; see the sketch below.
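For that plotting step, here is a minimal sketch using pcl::visualization::CloudViewer (the simpler of the two options named above); it assumes the cloud variable filled in the loop:
code :
#include <pcl/visualization/cloud_viewer.h>

// Minimal sketch: display the cloud filled above with CloudViewer.
// CloudViewer manages its own window; keep pushing each newly
// grabbed frame until the user closes the window.
pcl::visualization::CloudViewer viewer ("ToF cloud");
while (!viewer.wasStopped ())
{
    // grab a new frame from the ToF sensor and refill `cloud` here
    viewer.showCloud (cloud);
}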


How can I compute optical flow from a depth image stream from a depth camera?
By : Rory
Date : March 29 2020, 07:55 AM
I worked on a Kinect depth camera and implemented a patient tracking algorithm. The algorithm itself is commercial and I cannot disclose the details, but I can give my two cents here.
The depth feed from the Kinect should not be used directly for optical flow (motion tracking), because it contains pixels with no depth value. You can use inpainting to fill those gaps in the depth image; if you are using OpenCV, you can refer to the implementation here. A sketch of the idea follows.
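As a rough sketch of that inpainting step (assuming OpenCV's cv::inpaint and an 8-bit depth map where zero marks the missing pixels; depth8u is a placeholder name, not from the original answer):
code :
#include <opencv2/photo.hpp>   // cv::inpaint

// Sketch: fill the holes (zero-valued pixels) in an 8-bit depth map
// before running optical flow on it. depth8u is assumed to be a
// CV_8UC1 image already scaled down from the raw sensor depth.
cv::Mat holes = (depth8u == 0);   // mask of pixels with no depth
cv::Mat filled;
cv::inpaint (depth8u, holes, filled, 5.0, cv::INPAINT_TELEA);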
Tango: Why Intrinsic data from RGB camera and not from depth camera for point cloud projection?
By : AshGh
Date : March 29 2020, 07:55 AM
The answer depends on what you want as the depth image. The accepted answer assumes the desired depth map is viewed through the color camera.
The easy way to think about it is that a point cloud is just a bunch of points in 3D space; how it projects onto an image plane depends on the viewing camera. If you want the viewing camera to be the depth camera, use the depth camera's intrinsics. More commonly, though, people want the color camera, because it makes color lookup and similar operations easier. The projection itself is sketched below.
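As an illustration of that choice (a generic pinhole-model sketch, not Tango API code), projecting a point that has already been transformed into the viewing camera's frame uses that camera's intrinsics fx, fy, cx, cy:
code :
// Pinhole projection sketch: a 3D point (x, y, z), expressed in the
// viewing camera's frame, lands at pixel (u, v) given that camera's
// intrinsics. Passing the color camera's intrinsics (with the cloud
// transformed into the color camera frame) projects onto the color
// image; the depth camera's intrinsics project onto the depth image.
struct Intrinsics { double fx, fy, cx, cy; };

void projectPoint (const Intrinsics& k, double x, double y, double z,
                   double& u, double& v)
{
    u = k.fx * x / z + k.cx;
    v = k.fy * y / z + k.cy;
}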
Can I create point cloud from depth and rgb image?
By : ildaronline
Date : March 29 2020, 07:55 AM
To solve this problem I went to the three.js examples and searched for "point". I checked each matching sample for one that had different colors for each particle, then clicked the "view source" button to check out the code. I ended up starting with this example and looking at its source, which made it pretty clear how to make a set of points with different colors.
After that I just needed to load the two images, RGB and depth, make a grid of points, and for each point set the Z position to the depth and the color to the color of the RGB image.
code :
'use strict';

/* global THREE */

function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = "anonymous";
    img.onload = (e) => { resolve(img); };
    img.onerror = reject;
    img.src = url;
  });
}

function getImageData(img) {  
  const ctx = document.createElement("canvas").getContext("2d");
  ctx.canvas.width = img.width;
  ctx.canvas.height = img.height;
  ctx.drawImage(img, 0, 0);
  return ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
}

// return the pixel at UV coordinates (0 to 1) in 0 to 1 values
function getPixel(imageData, u, v) {
  const x = u * (imageData.width  - 1) | 0;
  const y = v * (imageData.height - 1) | 0;
  if (x < 0 || x >= imageData.width || y < 0 || y >= imageData.height) {
    return [0, 0, 0, 0];
  } else {
    const offset = (y * imageData.width + x) * 4;
    return Array.from(imageData.data.slice(offset, offset + 4)).map(v => v / 255);
  }
}

async function main() {
  const images = await Promise.all([
    loadImage("https://i.imgur.com/UKBsvV0.jpg"),  // RGB
    loadImage("https://i.imgur.com/arPMCZl.jpg"),  // Depth
  ]);
  const data = images.map(getImageData);
  
  const canvas = document.querySelector('canvas');
  const renderer = new THREE.WebGLRenderer({canvas: canvas});

  const fov = 75;
  const aspect = 2;  // the canvas default
  const near = 1;
  const far = 4000;
  const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
  camera.position.z = 2000;

  const controls = new THREE.OrbitControls(camera, canvas);
  controls.target.set(0, 0, 0);
  controls.update();

  const scene = new THREE.Scene();

  const rgbData = data[0];
  const depthData = data[1];
  
  const skip = 20;
  const across = Math.ceil(rgbData.width / skip);
  const down = Math.ceil(rgbData.height / skip);
  
  const positions = [];
  const colors = [];
  const color = new THREE.Color();
  const spread = 1000;
  const depthSpread = 1000;
  const imageAspect = rgbData.width / rgbData.height;
  
  for (let y = 0; y < down; ++y) {
    const v = y / (down - 1);
    for (let x = 0; x < across; ++x) {
      const u = x / (across - 1);
      const rgb = getPixel(rgbData, u, v);
      const depth = 1 - getPixel(depthData, u, v)[0];
      
      positions.push( 
         (u *  2 - 1) * spread * imageAspect, 
         (v * -2 + 1) * spread, 
         depth * depthSpread,
      );
      colors.push( ...rgb.slice(0,3) );
    }
  }
  
  const geometry = new THREE.BufferGeometry();
  geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
  geometry.addAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
  geometry.computeBoundingSphere();
  const material = new THREE.PointsMaterial( { size: 15, vertexColors: THREE.VertexColors } );
  const points = new THREE.Points( geometry, material );
  scene.add( points );

  function resizeRendererToDisplaySize(renderer) {
    const canvas = renderer.domElement;
    const width = canvas.clientWidth;
    const height = canvas.clientHeight;
    const needResize = canvas.width !== width || canvas.height !== height;
    if (needResize) {
      renderer.setSize(width, height, false);
    }
    return needResize;
  }

  function render(time) {
    time *= 0.001;

    if (resizeRendererToDisplaySize(renderer)) {
      const canvas = renderer.domElement;
      camera.aspect = canvas.clientWidth / canvas.clientHeight;
      camera.updateProjectionMatrix();
    }

    renderer.render(scene, camera);

    requestAnimationFrame(render);
  }

  requestAnimationFrame(render);
}

main();
body {
  margin: 0;
}
canvas {
  width: 100vw;
  height: 100vh;
  display: block;
}
<canvas></canvas>
<script src="https://threejsfundamentals.org/threejs/resources/threejs/r94/three.min.js"></script>
<script src="https://threejsfundamentals.org/threejs/resources/threejs/r94/js/controls/OrbitControls.js"></script>
Get a 3D point cloud from depth image
By : Chichi Papicha
Date : March 29 2020, 07:55 AM
I suggest that you check out the PCL library. It is an open project for 3D point cloud processing, and it is quite well documented. There are many tutorials, but for your task you should take a look at:
the Kinect OpenNI tutorial (fast access to the point cloud using the PCL and OpenNI libraries); its grabber pattern is sketched below.
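The heart of that tutorial is the grabber/callback pattern sketched below (assuming an OpenNI-compatible device such as the Kinect):
code :
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <chrono>
#include <thread>

// Sketch of the OpenNI grabber pattern: register a callback that
// receives a new point cloud for every frame the device delivers.
pcl::visualization::CloudViewer viewer ("OpenNI cloud");

boost::function<void (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&)> f =
    [&viewer] (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
        if (!viewer.wasStopped ())
            viewer.showCloud (cloud);
    };

pcl::Grabber* grabber = new pcl::OpenNIGrabber ();
grabber->registerCallback (f);
grabber->start ();
while (!viewer.wasStopped ())
    std::this_thread::sleep_for (std::chrono::seconds (1));
grabber->stop ();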
Generate point cloud from depth image
By : Diogo Outerelo
Date : September 19 2020, 04:00 PM
You can solve this easily using the open3d package. Install it with sudo pip install -U open3d-python (not just open3d -- that's another package).
Once installed:
code :
from open3d import *

# Load the color and depth frames (the file names here are placeholders).
color = read_image("color.png")
depth = read_image("depth.png")

# Camera intrinsics are required for the projection; open3d ships a
# PrimeSense/Kinect-style default that fits many RGB-D sensors.
pinhole_camera_intrinsic = PinholeCameraIntrinsic(
    PinholeCameraIntrinsicParameters.PrimeSenseDefault)

rgbd = create_rgbd_image_from_color_and_depth(color, depth, convert_rgb_to_intensity = False)
pcd = create_point_cloud_from_rgbd_image(rgbd, pinhole_camera_intrinsic)

# flip the orientation, so it looks upright, not upside-down
pcd.transform([[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,1]])

draw_geometries([pcd])    # visualize the point cloud