Map Accuracy Theory and Terminology #2282
    Date first written: 2009.06.05   Review Date: 2009.06.05

Table of Contents
1. Preface
2. How Accuracy Is Stated
3. Density of Data
4. Specs for TRIM Data
5. Accuracy for Survey Heights in NTS Maps
6. Comparing DEM with Topo Maps
7. Topo! DEM versus 1:24 Spot Heights
8. Confidence Intervals
9. How is Root Mean Square Error Computed?

1. Preface
This is a consolidated document where I have the latest on map accuracy for all packages. See also Peak Height Variances between TRIM and NTS Maps, which just talks about the variance between NTS and TRIM heights for specific peaks.

2. How Accuracy Is Stated
There are several different ways to state the accuracy of a set of measurements. The crudest way to state the error is to just give a single number. Eg: "This data is accurate to 5m". But what does that mean? One manufacturer might mean that the average error is 5m. Another might mean that 50% of the measurements are within that range. Another might mean that 90% or 95% are within the range. And another might mean that the maximum error is 5m.

Just having one number is not sufficient; we need to know the probability that any given measurement falls within that number. Eg: A statement of the form "90% of the errors are within 5 m". Another statistic you'll see in a lot of cases is the RMS (Root Mean Square) error. It is computed by averaging the squares of each error and then taking the square root of that average. It is closely related to the standard deviation (the square root of the variance). See Wikipedia.

Note that RMS error is different from "average" error, because the squaring of the error more heavily weights a large error. It thus gives a measure of how spread out the values are. Compare the following:

  Error Square
  5     25
  10    100
  5     25
  20    400
  ----- -----
  40    550
In the above, the average error is 40/4 = 10, but the RMS error is sqrt(550/4) = 11.7. The data below is less dispersed: it still has an average error of 10, but an RMS error of only 10.7.

  Error Square
  7     49
  10    100
  7     49
  16    256
  ----- -----
  40    454

In other words, if the values are really scattered, the RMS error will be higher. The advantage of the RMS statistic is that it is directly related to the standard deviation, and there are standard ways to use the standard deviation to say just how unlikely any given error is.
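The difference between average error and RMS error can be sketched in a few lines of Python (a minimal sketch; the error lists are the hypothetical values from the tables, and note that the mean of the squares is taken before the square root):

```python
import math

def average_error(errors):
    """Plain mean of the absolute errors."""
    return sum(errors) / len(errors)

def rms_error(errors):
    """Square each error, average the squares, then take the square root."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

scattered = [5, 10, 5, 20]   # one big 20 m error
tighter   = [7, 10, 7, 16]   # less dispersed

print(average_error(scattered), round(rms_error(scattered), 1))  # 10.0 11.7
print(average_error(tighter),   round(rms_error(tighter), 1))    # 10.0 10.7
```

Both samples have the same average error, but the more scattered one has the larger RMS error.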

Here are some commonly quoted methods of specifying error:

 50%: Half the results have an error greater than the stated number, half smaller
 Mean: The average error of all the results
 RMS: The Root Mean Square error
 95%: 95% of the time the error will be within this number
 Max: The biggest error seen in the test sample
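All five of these statistics could be computed from one sample of absolute errors. A minimal sketch (the percentile handling is deliberately crude, picking the nearest sorted value rather than interpolating):

```python
import math

def error_summary(errors):
    """Summarize a sample of absolute errors using the common conventions."""
    s = sorted(errors)
    n = len(s)
    return {
        "median (50%)": s[n // 2],                                # half above, half below
        "mean":         sum(s) / n,                               # plain average error
        "rms":          math.sqrt(sum(e * e for e in s) / n),     # root mean square
        "95%":          s[min(n - 1, math.ceil(0.95 * n) - 1)],   # crude 95th percentile
        "max":          s[-1],                                    # worst error seen
    }

print(error_summary([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
```

For the same data, the five numbers can differ substantially, which is why a bare "accurate to 5 m" is ambiguous.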

3. Density of Data
Accuracy isn't the only thing that matters with data - the other is how many data points are included. Eg: The TRIM data says they have a data point every 25m. This is a different statistic than the accuracy of any given point.

4. Specs for TRIM Data
I read the following specs somewhere for TRIM:

- there is a data point every 25 meters.

1. 90% of all well-defined planimetric features are coordinated to within 10 metres of their true position.

2. 90% of all discrete spot elevations and DEM points are accurate to within 5 metres of their true elevation.

3. 90% of all points interpolated from the TRIM (including contour data) are accurate to within 10 metres of their true elevation.

True position/elevation is defined as the coordinates that are obtained from positioning with high order ground methods.

5. Accuracy for Survey Heights in NTS Maps
I haven't yet found any specification which states the accuracy of this data. However, I have sampled a dozen peaks and compared the TRIM height with the NTS survey heights, and found the variance to be less than 10m. See Peak Height Variances between TRIM and NTS Maps.

Therefore, I think it is safe to assume that the NTS survey heights are within 10m of the TRIM heights, 90% of the time. Therefore, it seems reasonable to use 10m as the 90% accuracy for NTS Survey.

6. Comparing DEM with Topo Maps
In Memory-Map, you can turn on the elevation, and then as you move the mouse around, it gives you instantaneous elevations. These can then be compared with the spot elevations on contour maps.

So I tested the variance between the surveyed spot elevations on the map versus the DEM.

  Spot          DEM   Variance
  -----------------------------
  Lewis Hills 811

7. Topo! DEM versus 1:24 Spot Heights
Topo! displays the elevation as you move the mouse around. How well does this compare with the 1:24,000 scale spot heights?

8. Confidence Intervals
The simplest way you hear people talking about map accuracy is just a single number. Eg: This data is accurate to within 5 meters. Another simple thing to understand is a statement like "this data is accurate to within 5m, 90% of the time". That statement is a probability statement: it means that if you took a big enough sample, 90% of the values would be within 5m of their "true" value. This is a statement about the population as a whole.

However, for any given sample, we would expect some deviation from the stated probability. Eg: With a sample size of 10 peaks, it wouldn't be unusual to have 2 peaks (instead of 1 peak) that had an error greater than 5m. You couldn't immediately say: I'm confident that the probability of error is more than 10%. In order to be confident in making such a statement, you'd have to take a much larger sample. If you took a sample of 1000 peaks, and found that 200 of them were outside the tolerance, then you would be more confident in saying that the error rate was higher than stated. Many people mix up the concept of "confidence" and "probability".

Thus you can often see garbled statements where they use the word confidence, not probability. However a good statistics textbook is quite clear that a probability STATEMENT such as "accurate within 5 meters 95% of the time" is entirely different from the statistical concept of "95% confidence". They are two different things. The first is just a statement that only 5% of the data is out by more than 5 meters. (I'll call that an "error" rate of 5%). The second is how confident you might be in rejecting such a hypothesis, based on a given sample of the population.

Confidence is all about samples. It is all about how confident you would be in rejecting someone else's hypothesis based on your sample data. If we take only a small sample, and the "error" rate in our sample is higher than the stated, we are not in a position to reject the hypothesis with much confidence. The larger the sample, the more confidence one might have in rejecting a given hypothesis.

For example, suppose the BC Government makes the statement that their TRIM data is accurate to within 5 meters, 95% of the time. Our job as statisticians is to try and reject their hypothesis by taking random samples. So we might take a sample of 40 readings and see how many are out by more than 5m. Suppose that 4 of them are out. That's a 10% error rate, double the stated 5%. Can we reject the original hypothesis? It looks suspicious, but we could reject the hypothesis with only limited confidence. To be really confident, we need a larger sample. We could have just gotten a bad batch. What statistics is all about is calculating how big of a sample we'd need to REALLY BE CONFIDENT that the error rate was higher than stated.

If we take larger and larger samples, and still find that the error rate is 10%, we become more and more confident that the error rate is not 5%. Eg: if you took a sample of 2000 readings, and found that 200 of them were out by more than 5 meters, then you'd be pretty confident that the TRIM people were overstating the accuracy of their data. You could reject the original hypothesis with quite a bit of confidence, probably 99% confidence, because it is very unlikely that you could take such a large sample and find so many errors.
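This effect of sample size can be illustrated with a binomial tail probability: the chance of seeing that many failures if the claimed 5% error rate were actually true. A minimal sketch (the sample sizes and failure counts are the hypothetical ones from the discussion above):

```python
from math import comb, exp, log

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    out-of-tolerance readings in a sample of n, if the true error rate is p.
    Works in log space so that large n doesn't overflow floats."""
    total = 0.0
    for i in range(k, n + 1):
        log_term = log(comb(n, i)) + i * log(p) + (n - i) * log(1 - p)
        total += exp(log_term)
    return total

# Claimed: within tolerance 95% of the time (a 5% error rate).
# Observed: a 10% failure rate, at two different sample sizes.
print(tail_prob(40, 4, 0.05))      # about 0.14: quite plausible under the claim
print(tail_prob(2000, 200, 0.05))  # vanishingly small: reject with near certainty
```

With 40 readings there is roughly a 14% chance of seeing 4 or more failures even if the claim is true, so we can't reject it with much confidence; with 2000 readings, 200 failures is essentially impossible under the claim.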

You still haven't proved it is truly 10%; all you can do with statistics is reject hypotheses.

In summary, statistics is all about sample size, and trying to reject a hypothesis. It is not probability.

A slightly better way to specify accuracy is to use the root mean square method to compute a single number, and then talk about confidence intervals. Eg: The RMS of the error is 5 m, with 95% confidence. (Meaning: you can't reject the hypothesis that 5 m is the accuracy by taking a random sample.)

The way map accuracy is stated is to talk about the variance of samples of measurements from the "true" (or higher level) measurements. Eg: Trim heights are accurate to within 5 meters, 90% of the time.

The way of computing the "accuracy" number is to square each of the differences, average the squares, then take the square root of that average.

However, since we don't have a database of "higher order" measurements, another way to measure map accuracy is to compute the variance between two different sets of map data. Eg: Compare the heights for various peaks in both TRIM and NTS survey heights. I did this; see Peak Height Variances between TRIM and NTS Maps.

9. How is Root Mean Square Error Computed?
A generally accepted standard for stating the accuracy of a given set of map data values is stated below (from the US Federal Geographic Data Committee):

3.2.1 Spatial Accuracy: The NSSDA uses root-mean-square error (RMSE) to estimate positional accuracy. RMSE is the square root of the average of the set of squared differences between dataset coordinate values and coordinate values from an independent source of higher accuracy for identical points. Accuracy is reported in ground distances at the 95% confidence level. Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value. The reported accuracy value reflects all uncertainties, including those introduced by geodetic control coordinates, compilation, and final computation of ground coordinate values in the product.

My interpretation of this is that if someone says something is accurate to within 5 m, they mean that 95% of the points will be within the stated error. To calculate the stated error, you need to come up with a "higher accuracy" set of at least 20 points. Eg: If someone measured 20 peaks using the same set of GPS units, and on multiple days, and found that their results were always within 1 meter of each other, then I would accept that set of data as being of "higher accuracy". (I call these the "reference" measurements.)

The table below shows how you would compute the "error" if you had such a reference dataset. (I only used 3 points, for simplicity). Each row compares the TRIM value with the reference value, and computes the difference.

  TRIM  Reference  Diff  SqDiff  SqRt
  2000  2010       10    100     10
  2100  2105       5     25      5
  2100  2085       15    225     15
  ----- -----      ----  ------
  Avg              10    350 (sum)

 - in the above, the average of the squared differences is 350/3, which is 116.7. The square root of that is 10.8. Note that this is different from simply the average difference. In this case, we could say that the RMSE accuracy was 10.8 meters, whereas the average difference would be 10 meters.
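The computation can be sketched in a few lines (the heights are the hypothetical check points used here, with differences of 10, 5 and 15 m between TRIM and the reference set):

```python
import math

# Hypothetical check points: TRIM heights vs "higher accuracy" reference heights.
trim      = [2000, 2100, 2100]
reference = [2010, 2105, 2085]

diffs = [t - r for t, r in zip(trim, reference)]
mean_abs = sum(abs(d) for d in diffs) / len(diffs)              # plain average difference
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))        # sqrt of mean squared diff

print(mean_abs)        # 10.0
print(round(rmse, 1))  # 10.8, i.e. sqrt(350/3)
```

As in the table, the RMSE (10.8 m) comes out larger than the plain average difference (10 m) because the squaring weights the 15 m outlier more heavily.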

So what does it mean when someone says "accurate to 5 m"? Or that the average error was 5 m?

Question: What is meant by 95% confidence? Why can't we just quote the average or RMSE?

Answer: The "confidence" tells you what chance there is that a given measurement is out by more than the stated value. The Federal document says: "Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value."

In other words, 95% of peaks will have a height within 5 meters of the true height. Note that this is different than "plus or minus 5 meters", because 1 point in 20 will have an error greater than 5 meters.

The Federal document says:

A minimum of 20 check points shall be tested, distributed to reflect the geographic area of interest and the distribution of error in the dataset. When 20 points are tested, the 95% confidence level allows one point to fail the threshold given in product specifications.