3D point cloud data (fwd)
Mon, 21 Apr 2003 17:24:36 -0400
This is a pet peeve of mine.
> I got this message from some guys in the BCRA Cave radio and Electronics
> group, asking about skinning point clouds, now that they have a dataset.
This won't help them too much. As long as you are only dealing with a
single viewpoint, the point cloud representation isn't as bad as I imply
below. All representations are more or less equivalent, as long as you know
what the viewpoint *is* (zero is popular).
Also, your problem will be much easier if your points were recorded in a
particular order. For instance, a raster scan can be converted into
polygons very simply: just connect adjacent points. Several overlapping
raster scans (which the email implied) is tougher, but not too bad if you
can get a good registration between them.
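The "connect adjacent points" idea for a single raster scan can be sketched as code. This is a minimal illustration, not anything from the Wakulla software; the grid layout and function name are my own assumptions.

```python
# Turn an ordered raster scan of 3D points into triangles by connecting
# grid-adjacent samples. Each grid cell becomes two triangles; cells
# with missing samples (no return) are skipped.

def raster_to_triangles(grid):
    """grid[r][c] is an (x, y, z) sample, or None if there was no return."""
    tris = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            a, b = grid[r][c], grid[r][c + 1]
            d, e = grid[r + 1][c], grid[r + 1][c + 1]
            if None not in (a, b, d):
                tris.append((a, b, d))
            if None not in (b, e, d):
                tris.append((b, e, d))
    return tris
```

The point is that the scan order does all the work: adjacency in the grid is adjacency on the surface, so no search or surface fitting is needed.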
> I know that the Wakulla II project did some software for dealing with point
> clouds, but I haven't seen any skinning done
One problem is that a "point cloud" is not a particularly good representation
of 3D sensor data. Once you look at the data as a point cloud, the battle
is half lost. There is almost always much more information known about a
point than its location.
Consider one pixel of a range image. It gives the location of one surface
point (one point of the cloud), but that isn't the only, or even most
important, thing it tells you. It also tells you that all the space between
the sensor and that point is *empty*. That reduces the number of possible
worlds much more than the single point does (so it represents more
information).
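The "empty space" information can be made concrete on a voxel grid: one range reading marks every voxel between the sensor and the hit point as known-empty, not just the hit voxel as surface. This is only a sketch of the idea; the labels, grid representation, and step count are assumptions of mine.

```python
# Carve the free space implied by a single range reading into a sparse
# voxel grid (a dict from integer (x, y, z) voxels to a label).

UNKNOWN, EMPTY, SURFACE = 0, 1, 2

def carve_ray(grid, sensor, hit, steps=100):
    """Mark voxels between sensor and hit as EMPTY, the hit as SURFACE."""
    for i in range(steps):
        t = i / steps
        # Point part-way along the ray from the sensor to the return.
        p = tuple(int(round(s + t * (h - s))) for s, h in zip(sensor, hit))
        if grid.get(p) != SURFACE:
            grid[p] = EMPTY          # everything before the return is empty
    grid[tuple(int(round(h)) for h in hit)] = SURFACE  # the return itself
```

A point cloud keeps only the SURFACE voxels and throws the EMPTY ones away, which is exactly the information loss complained about above.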
The Wakulla II data is an extreme case of this. They were using sonar with
a fairly wide beam (I don't remember how wide, but not 0). Not only is the
range more accurate than the direction, but if they recorded the *first*
return (which is usual) then what they really have is the *minimum* range
within that cone, not the point at the end of an ideal ray.
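The first-return geometry can be stated as a predicate: a reading with beam half-angle α and first return at range r proves that everything in the solid cone closer than r is empty, and only bounds the true ranges inside the beam from below. A small sketch, with names and the unit-axis convention as my own assumptions:

```python
import math

def in_empty_cone(point, apex, axis, half_angle, first_return_range):
    """Is `point` inside the region a first return proves empty?

    apex: sensor position; axis: unit vector along the beam center;
    half_angle: beam half-width in radians; first_return_range: the
    recorded (minimum) range within the cone.
    """
    v = [p - a for p, a in zip(point, apex)]
    dist = math.sqrt(sum(c * c for c in v))
    if dist == 0 or dist >= first_return_range:
        return False                      # at or beyond the first return
    dot = sum(c * u for c, u in zip(v, axis))
    angle = math.acos(max(-1.0, min(1.0, dot / dist)))
    return angle <= half_angle            # within the beam
```

Treating the return as a single point at the end of the beam axis discards both the emptiness of the cone and the fact that the true hit could be anywhere on its far cap.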
Extracting a point cloud and then fitting surfaces to that is not the best
way to get a solid model from such data. You really should be asking what
each sensor reading tells you about the world.
In the Wakulla case that might mean taking the union of all the solid
cones, and then removing "islands", places that were never looked at, but
have no connection (or too long or thin a connection?) to the walls. This
is ignoring errors for simplicity, which would not work in real life; you
also need to keep track of probabilities.
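The island-removal step amounts to a connected-components pass over the empty voxels: keep only the empty region reachable from a known seed (say, the cave entrance) and drop isolated pockets. A pure-Python flood-fill sketch; the set representation, 6-connectivity, and seed idea are assumptions for illustration, not the probabilistic version a real system would need.

```python
from collections import deque

def remove_islands(empty, seed):
    """empty: set of (x, y, z) voxels marked empty by the union of cones.
    Returns only the voxels connected (6-neighborhood) to the seed."""
    if seed not in empty:
        return set()
    keep, queue = {seed}, deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if n in empty and n not in keep:
                keep.add(n)     # reachable from the seed: genuine passage
                queue.append(n)
    return keep
```

Deciding when a thin neck of voxels is a real connection versus noise is exactly where the error model and probabilities mentioned above would come in.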
Does anyone know how the raw sensor data (pre point cloud) for Wakulla II
was recorded, or whether it is still available?
> and I've been told that the
> 'skinning' problem is mathematically extremely difficult
More ill-posed than difficult. But with some simplifying assumptions, it
becomes quite tractable.
> Does anyone know of any existing, available solutions?
No. There might be packages that do this generically, but they may not be
compatible with your data. The important thing is what kind of errors your
sensor has, and how often errors of a given kind and magnitude occur.
> Or have any suggestions as to how it might be done?