Error in loop closure algorithm?

John Halleck John.Halleck at utah.edu
Thu Aug 12 00:01:08 BST 2004


  What you have stumbled onto here is a vaguely religious issue.
  But I'll try to be somewhat agnostic about it.

  For those who play this game frequently, I'll start with the disclaimer
  that I'm about to do some serious simplification below...

> I've been using Survex as a means of generating a reasonably accurate
> architectural drawing. The technique is to assume the builder got it
> right and that the walls of a rectangular room are aligned at 0, 90,
> 180 or 270 degrees. You then just go round each room with a tape measure,
> producing a loop per room, connected at the doorways. This works really
> well but seems to demonstrate a possible error in the Survex loop-closure algorithm.

> To demonstrate this, take a simple survey of two perfect loops, joined by a single leg:-

> [...]

> Now introduce an error into one of the loops:-

> [...]

> Survex moves not only the survey legs in the "bad" loop but also adjusts the connecting leg (i.e. test.4 - test.5) and
> moves the station in the "good" loop (i.e. test.4) where the connecting leg joins. The "error" is only 1cm, which wouldn't
> show in most cave surveys (with me using the instruments!). However, the error is still 1cm if you scale the loops down
> to, say, 1m per leg - which becomes quite noticeable when the loops are small and generally well-closed.

> Is this an error in the algorithm or just the result of me using Survex in a way that wasn't intended?

  Survex uses (basically) a "Least Squares" survey adjustment.
  This is a wonderful piece of mathematics, invented by Gauss, which,
  *** SUBJECT TO THE CONSTRAINT THAT THE ERRORS ARE RANDOMLY DISTRIBUTED ***
  (and that there are only random errors, not systematic errors, but
  that restriction doesn't play into the following discussion),
  produces the mathematically "most probable" positions given the data.
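
  To make that concrete, here is a minimal sketch (in Python with numpy,
  nothing taken from Survex itself) of a one-dimensional least-squares
  adjustment of a three-leg network.  The station names, lengths and the
  0.10 disagreement are invented purely for illustration.

    import numpy as np

    # Unknowns: 1-D positions of stations B and C; station A is held fixed at 0.
    # Each leg gives one observation equation:  x_to - x_from = measured length.
    D = np.array([[ 1.0,  0.0],   # leg A -> B
                  [-1.0,  1.0],   # leg B -> C
                  [ 0.0,  1.0]])  # leg A -> C
    b = np.array([10.00, 5.00, 15.10])   # A -> C disagrees with the other two by 0.10

    x, _, _, _ = np.linalg.lstsq(D, b, rcond=None)   # least-squares solution
    print("adjusted B, C:", x)            # ~[10.033, 15.067]
    print("leg residuals:", D @ x - b)    # the 0.10 misclosure spread over all legs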

  If there are "Blunders" in the data (I.E. errors that are not remotely likely
  as random error) then the assumption is violated, and the naive adjustment would
  smear this error all over the survey.   Now, in defense of Least Squares, the
  adjustment also produces error statistics.  This would allow the problem to
  be addressed (Form a new weight matrix based on the statistics, and rerun it.)

  This is what is happening in your case.   If (and I'm not recommending this, see
  below) you were to compute new weights at this point and do the standard
  relinearization, the output would be much more like what you expect.   But this
  isn't done.  Formally (academically), while that might produce a perfectly
  acceptable map, the result has lost some of its claim to be the "most probable"
  mathematical result.  The mathematical problem is that the "random errors"
  model has been violated.   The statistics of the adjustment should allow
  one to identify the area of the problem, and to identify the bad data for
  removal...
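
  As a sketch of that reweight-and-rerun idea (one flavour among many; the
  weight function and cutoff below are arbitrary choices of mine, and none of
  this is what Survex actually does), an iteratively reweighted version of
  the toy adjustment above might look like:

    import numpy as np

    # Toy 1-D network: A fixed at 0, unknowns B and C.  The single A -> C leg
    # contains a deliberate 1 m blunder; the doubled-up legs give redundancy.
    D = np.array([[ 1.0, 0.0],    # A -> B
                  [ 1.0, 0.0],    # A -> B (repeated)
                  [-1.0, 1.0],    # B -> C
                  [-1.0, 1.0],    # B -> C (repeated)
                  [ 0.0, 1.0]])   # A -> C  (blundered)
    b = np.array([10.00, 10.02, 5.00, 4.98, 16.00])

    w = np.ones(len(b))                      # start with equal weights
    for _ in range(5):                       # a few reweighting passes
        W = np.diag(w)
        x = np.linalg.solve(D.T @ W @ D, D.T @ W @ b)   # weighted normal equations
        r = D @ x - b                        # leg residuals
        w = 1.0 / (1.0 + (r / 0.05) ** 2)    # shrink the weight of large residuals
    print("adjusted B, C:", x)
    print("final weights:", w)   # the blundered A -> C leg ends up with by far the lowest weight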

  From the point of view of the geometer, the problem is that Least Squares
  doesn't "know" about loops; it is (in some sense) forming appropriately
  weighted averages to get locations.

  Before computers, actual surveyors (for entire country surveys) would do
  loop analysis, and do the network adjustment by the method of "closing
  your best loops first".  In your case this would have produced exactly
  what your intuition suggested.  (And this is what some
  cave survey programs do.)
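
  A sketch of that loop-at-a-time style (a compass-rule distribution of one
  loop's misclosure over its own legs; the leg vectors are invented, and this
  is not Survex's algorithm):

    import numpy as np

    legs = np.array([[ 4.00,  0.00],   # planar (dx, dy) for each leg of one loop
                     [ 0.00,  3.00],
                     [-4.01,  0.00],
                     [ 0.00, -3.00]])
    misclosure = legs.sum(axis=0)               # would be (0, 0) for a perfect loop
    lengths = np.hypot(legs[:, 0], legs[:, 1])
    share = lengths / lengths.sum()             # compass rule: share by leg length
    adjusted = legs - np.outer(share, misclosure)
    print("misclosure:", misclosure)                         # (-0.01, 0.00)
    print("adjusted loop closes to:", adjusted.sum(axis=0))  # ~ (0, 0)

  Because each loop's misclosure is absorbed entirely within that loop, a
  well-closed neighbouring loop is never disturbed.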


  Vaguely religious diatribe follows:


  What adjustment technique a cave survey program uses is, to a large degree,
  a judgement call.  There are a lot of competing factors.   (And some of
  the factors are different from those in land surveying, since land surveyors,
  if they identify a bad point, will go and resurvey it and throw the old survey out.)


  Loop Oriented methods:

  Loop-oriented methods are not as intrinsically general or flexible
  as Least Squares, but they do have some abilities that Least Squares
  does not.    For example: there are well-known published formulas
  that, given a loop known to have an angular blunder, will give you
  the distance from each point to the likely blunder.  There is another
  procedure that can, in the same case, give you the direction from
  each point to the likely blunder.
  There are tried and true methods that deal with blunders in length.
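
  I won't reproduce the published formulas here, but a much-simplified sketch
  of the length-blunder case gives the flavour: a single tape blunder pushes
  the loop's closure error along (or against) the blundered leg, so you can
  rank the legs by how nearly parallel they are to the misclosure.  The loop
  below is invented for illustration.

    import numpy as np

    legs = np.array([[ 5.0,  0.0],    # planar (dx, dy) per leg of one loop
                     [-2.5,  5.0],    # this leg was booked too long (should be about (-2, 4))
                     [-3.0, -4.0]])
    mis = legs.sum(axis=0)                         # closure error, here (-0.5, 1.0)
    mis_bearing = np.arctan2(mis[1], mis[0])
    leg_bearings = np.arctan2(legs[:, 1], legs[:, 0])
    diff = np.abs(np.angle(np.exp(1j * (leg_bearings - mis_bearing))))
    diff = np.minimum(diff, np.pi - diff)          # parallel and anti-parallel both count
    print("most suspect leg index:", int(np.argmin(diff)))   # 1 here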

  This is not to say that "loop oriented" programs all do this, but
  only that the method allows it.   Loop-oriented analysis of blunders is
  often covered in beginning books on survey network adjustments,
  along with Least Squares.



  Least Squares: (My method of choice if I have to choose)

  It is the most general method of handling errors.
  Any kind of network, and almost any kind of constraint (such
  as GPS'd points), can be thrown together meaningfully.
  If the assumptions are met, the result is the most probable set of values
  for the adjustment.  But it is also computationally intensive.
  So it is the most commonly "simplified" of the methods, and most
  simplifications mean it no longer carries the original mathematical
  assurances.
  Surprisingly, it does about the best job around of identifying
  that there are problems, and something of their nature.  (Although
  the final error statistics are often not printed, because people
  seem not to want to wade through them....)
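
  For instance, a constraint such as a GPS'd point can be folded in as just
  one more (heavily weighted) observation row.  A sketch, with invented
  coordinates and weights, and again nothing to do with Survex's internals:

    import numpy as np

    # 1-D toy again: A fixed at 0, unknowns B and C.
    D = np.array([[ 1.0, 0.0],    # leg A -> B,      weight 1
                  [-1.0, 1.0],    # leg B -> C,      weight 1
                  [ 0.0, 1.0]])   # "GPS" fix of C,  weight 100
    b = np.array([10.00, 5.00, 15.20])
    W = np.diag([1.0, 1.0, 100.0])

    x = np.linalg.solve(D.T @ W @ D, D.T @ W @ b)   # weighted normal equations
    print("adjusted B, C:", x)        # C is pulled hard toward the GPS value
    print("residuals:", D @ x - b)    # these are the error statistics worth reading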
 
  If the error statistics show serious problems, there are well-known
  (studied since the 1800s) methods of addressing this (relinearization,
  for example) and producing something that is not far from what your
  intuition leads you to expect.
  But such techniques roughly double the amount of programming needed
  to get it all running.  (And you still lose some of the mathematical
  assurances that presumably led you to choose Least Squares in the first place.)

  This is not to say that "Least Squares" oriented programs all do
  everything the method allows, but only that the method allows this.



  Obviously, one could also do a Least Squares adjustment to detect
  that there are problems, and then use the traditional loop-oriented
  methods to locate them.



  And even within Least Squares there are lots of ways to go.
  To the best of my knowledge (someone correct me if I'm wrong),
  everyone in cave surveying does Least Squares starting from
  the connectivity matrix, with a problem size related to the
  number of points.   Some programs "trim" dead ends off to
  try to keep this number down.
  (But this number can easily be many thousands in the cave surveys
  that I have seen.)

  This is but one of the methods of doing Least Squares for a survey.
  Another method involves forming the loops (outside of the least
  squares program), and setting up a problem matrix (or matrices) that
  has a size proportional to the number of independent loops.  (And
  since the number of loops is *usually* less than a hundred or so
  in a survey of thousands of shots, this has some advantages.)
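
  As a rough back-of-the-envelope comparison of the two problem sizes (the
  station and leg counts below are hypothetical):

    # For one connected survey, the number of independent loops is legs - stations + 1.
    stations = 5000                              # unknowns in a station/connectivity formulation
    legs = 5080
    independent_loops = legs - stations + 1
    print("station-based unknowns:", stations)           # thousands
    print("loop-based unknowns   :", independent_loops)  # 81 here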


  And regardless of which way the Least Squares problem is set up,
  you can solve the system in a number of different ways.  "Normal
  Equations" are popular in survey circles, and "orthogonalization
  methods" have been popular with mathematicians for the last 20 or so years.


  And even if you have picked a method of setup, and the method of
  (for example) Normal Equations, there are MANY ways to solve
  such a system, from Helmert blocking (used by the NGS, for example),
  to various decomposition methods, to various gradient methods,
  etc.
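
  Two of the simpler options, sketched on the same tiny normal equations
  (Helmert blocking itself is too involved to show here):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve
    from scipy.sparse.linalg import cg

    D = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0]])
    b = np.array([10.00, 5.00, 15.10])
    N, t = D.T @ D, D.T @ b                  # the normal equations  N x = t

    x_direct = cho_solve(cho_factor(N), t)   # a decomposition method (Cholesky)
    x_iter, info = cg(N, t)                  # a gradient method (conjugate gradients)
    print(x_direct, x_iter)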




  Bottom line: choice of method is not an easy one.  All methods
  have good and bad points.  After people have done a lot of research into
  "their" method of choice, it becomes a religious war to get them
  even to look at other methods.






