Loop Closures

Olly Betts olly at survex.com
Tue Nov 11 00:02:30 GMT 2014


On Thu, Nov 06, 2014 at 05:33:40PM +0000, Wookey wrote:
> +++ Peter Smart [2014-11-04 17:12 -0500]:
> > 1) Is it moved a horizontal or sloping distance ie does it take
> > account of both plan and elevation loop closure? How much of
> > the error is then vertical versus horizontal?
> 
> The least squares analysis is done in 3D. The err file splits out
> vertical and horizontal misclosures:
> 
> so
> 161.twintubs.7 - 161.twintubs.3
> Original length   5.10m (  1 legs), moved   0.11m ( 0.11m/leg). Error   2.19%
> 1.237954
> H: 0.233058 V: 2.007189
> 
> means that the segment (1 leg) between 161.twintubs.7 and
> 161.twintubs.3 has just over 2% error which is largely a vertical
> offset. I can't actually remember what units those numbers are in:
> Standard deviations, percent or metres. Neither of the last two seem
> right.

They are in standard deviations.

Looking at the percentage error seems popular amongst cave surveyors,
but it's a somewhat problematic indicator - it doesn't consider the
claimed accuracy of the survey data involved in the closure, and a
longer loop should be expected to have a smaller percentage error (as
errors tend to cancel).  Both of these mean that you can't set a
consistent threshold on what an acceptable percentage error is.
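The scaling with loop length can be seen in a quick simulation (hypothetical numbers, nothing to do with Survex's code): if each leg contributes an independent random error, the misclosure grows roughly with the square root of the number of legs while the loop length grows linearly, so the percentage error shrinks as loops get longer.

```python
import random

# Hypothetical simulation: loops of n legs, each 5 m long with an
# independent 0.1 m Gaussian error; average percentage misclosure.
random.seed(42)

def simulated_percent_error(n_legs, leg_len=5.0, leg_sd=0.1, trials=2000):
    total = 0.0
    for _ in range(trials):
        # Sum of independent per-leg errors (one dimension, for simplicity).
        err = sum(random.gauss(0.0, leg_sd) for _ in range(n_legs))
        total += abs(err) / (n_legs * leg_len)
    return 100.0 * total / trials

for n in (4, 16, 64):
    print(n, round(simulated_percent_error(n), 3))
```

Quadrupling the number of legs roughly halves the percentage error, so a fixed percentage threshold is stricter on long loops than short ones.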

The other three numbers express the misclosure as a multiple of the
error that would be expected based on the measurements involved and
the specified accuracy of the instruments used.  Assuming errors are
normally distributed (even if the individual readings aren't, this
assumption becomes increasingly accurate as you combine them), a
value over 3 means a 99.7% likelihood that there's an error, and a
value over 2 means a 95% likelihood that there's an error:

http://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule

Those percentage likelihoods do assume that the standard deviations
specified for the instruments are correct.
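Those figures come straight from the normal distribution, and can be reproduced from the error function (this is just the standard 68-95-99.7 rule, not Survex code):

```python
import math

# Probability that a standard normal variable lies within k SDs of zero;
# one minus this is the chance a misclosure that large is "just noise".
def prob_within(k):
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(k, round(100.0 * prob_within(k), 1))
# 1 -> 68.3, 2 -> 95.4, 3 -> 99.7
```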

The "H" and "V" numbers are standard deviations looking only at
the horizontal and vertical components respectively - these can show
if the misclosure is largely horizontal or vertical.  In the case
above the error is mostly vertical, which suggests there may be
a clino error (such as a reversed sign on a clino reading, or a
misreading on a non-steep leg), or a tape error on a plumbed (or very
steep) leg.  This can be helpful for locating transcription errors,
or working out where to break a survey which doesn't seem to fit
the sketches.
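As a rough sketch of the idea (this is a simplification, not Survex's actual computation, and it assumes independent per-component expected SDs), normalising the horizontal and vertical parts of a misclosure vector separately gives values comparable to the H: and V: numbers:

```python
import math

# Hypothetical sketch: given a loop misclosure (dx, dy, dz) in metres and
# the expected SDs of each component, express the horizontal and vertical
# misclosures as multiples of their expected errors.
def misclosure_sds(dx, dy, dz, sd_x, sd_y, sd_z):
    horizontal = math.hypot(dx, dy) / math.hypot(sd_x, sd_y)
    vertical = abs(dz) / sd_z
    return horizontal, vertical

h, v = misclosure_sds(0.01, 0.02, 0.10, 0.05, 0.05, 0.05)
# v is much larger than h here, hinting at a clino (or plumbed-tape) error
```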

Aven's "colour by error" is based on the number of SDs (so 1.237954
above).  You can't (currently) colour by the V or H values, but it
wouldn't be hard to add if people felt it was useful.

> This stuff could be much better documented in the manual. Anyone keen
> to do that?

I thought it was documented, but it seems not - I guess I've just
explained it before but not added that explanation to the manual.
I'll add the text above.

> > is the loop closure routine progressive, starting from small and
> > working up to large loops, or is it an error minimisation
> > optimisation type scheme?
> 
> It is a least squares analysis, one for each segmentable section of
> the data. The network can be chopped into sections at 'articulation
> points', that is, points where there is only one edge (series of legs)
> between two more complex bits of network. Each of those can be solved
> separately to reduce the memory usage and make things faster.

That's true, but really an implementation detail - the results are the
same as if everything were solved simultaneously.
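Leaving Survex's actual implementation aside, the graph-theoretic notion involved is the articulation point (cut vertex), which a depth-first search can find in linear time (Tarjan's method); a minimal sketch, with a made-up survey network:

```python
# Find articulation points (cut vertices) of an undirected graph with a
# depth-first search.  Removing such a station disconnects the network,
# so each side can be adjusted separately.
def articulation_points(adj):
    """adj: dict mapping station -> list of connected stations."""
    visited, depth, low, result = {}, {}, {}, set()

    def dfs(u, parent, d):
        visited[u] = True
        depth[u] = low[u] = d
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in visited:
                # Back edge: u can reach an ancestor without its parent.
                low[u] = min(low[u], depth[v])
            else:
                dfs(v, u, d + 1)
                children += 1
                low[u] = min(low[u], low[v])
                # No descendant of v reaches above u: u is a cut vertex.
                if parent is not None and low[v] >= depth[u]:
                    result.add(u)
        if parent is None and children > 1:
            result.add(u)

    for s in adj:
        if s not in visited:
            dfs(s, None, 0)
    return result

# Two loops joined by a single traverse c-d: c and d are articulation points.
adj = {
    'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'],
    'd': ['c', 'e', 'f'], 'e': ['d', 'f'], 'f': ['d', 'e'],
}
print(sorted(articulation_points(adj)))  # -> ['c', 'd']
```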

The exception is that if you explicitly use "*solve" then loop
closure is done and all stations seen so far are fixed, and survey
data after that is solved as a separate operation.  The use-case for
"*solve" is that you have an existing drawn up survey and want to be
able to plot extensions without having to redraw the existing survey -
less relevant these days as there are computerised drawing approaches
which can morph an existing drawn survey.

> > Survex is capable of including different standard deviations for
> > individual survey lines or legs. Does the loop closure routine take
> > this into account eg by forcing more movement on a BCA grade 3 line
> > compared to BCRA Grade 5?
> 
> Yes. Every leg/segment gets SDs assigned, either defaults due to the
> data type or BCRA grade info (just a set of SDs for readings). Errors
> are distributed according to the SDs. This is just a feature of the
> least squares algorithm.

The error model includes covariances as well as the SDs - these express
dependencies between the expected errors in the x, y, and z components
of a leg.
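To illustrate where those covariances come from (a hedged sketch using standard error propagation, not Survex's actual source), the SDs of the raw tape, compass, and clino readings can be pushed through the leg's coordinate formulae dx = L·cos(g)·sin(b), dy = L·cos(g)·cos(b), dz = L·sin(g) via the Jacobian:

```python
import math

# Propagate instrument SDs into a 3x3 covariance matrix for one leg's
# (dx, dy, dz).  Angles in radians; bearing b from grid north, gradient g
# from horizontal.  C = J diag(var) J^T with J the Jacobian of (dx, dy, dz)
# with respect to (length, bearing, gradient).
def leg_covariance(length, bearing, gradient, sd_tape, sd_compass, sd_clino):
    cb, sb = math.cos(bearing), math.sin(bearing)
    cg, sg = math.cos(gradient), math.sin(gradient)
    J = [
        [cg * sb,  length * cg * cb, -length * sg * sb],
        [cg * cb, -length * cg * sb, -length * sg * cb],
        [sg,       0.0,               length * cg],
    ]
    var = [sd_tape ** 2, sd_compass ** 2, sd_clino ** 2]
    return [[sum(J[i][k] * var[k] * J[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

The off-diagonal entries of the result are exactly the covariances: for a leg that isn't aligned with an axis, a compass or clino error moves two or three coordinates together, so the expected errors in x, y, and z are not independent.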

> > Issue is if you have a reliable survey how
> > do you force this to have more say in the final positions after loop
> > closure compared to the lower grade survey without simply fixing a
> > point at the end of the ‘good’ survey?
> 
> You use the *SD command to allocate relatively low standard deviations
> to the data that is more accurate, and/or higher SDs to the data that
> is worse. 

You can also put the good data first, and then *solve after it before
you include the rest of the data.  In general I wouldn't recommend doing
this (it's easy to make wrong assumptions about which data is good and
which isn't), but it might be sensible if you have some data you really
want to treat as a "ground truth" - e.g.  if someone's surveyed a trunk
route with a theodolite.
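As a concrete sketch in Survex's own data format (the station names, readings, SD values, and included filename are all invented for illustration):

```
; hypothetical example: treat an accurately surveyed trunk as ground truth
*begin trunk
*sd tape 0.02 metres
*sd compass 0.25 degrees
*sd clino 0.25 degrees
*data normal from to tape compass clino
1 2 10.23 045 -02
2 3  8.91 132 +01
*end trunk

*solve

; lower-grade data read after *solve is adjusted around the
; already-fixed trunk stations rather than moving them
*include sidepassages
```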

Cheers,
    Olly


