Lutz–Kelker bias

The Lutz–Kelker bias is a systematic bias affecting trigonometric parallaxes: because the number of stars in a spherical shell around the observer grows roughly with the square of the shell's distance, measured parallaxes tend to be larger than their true values. For a given parallax measurement with an accompanying uncertainty, stars both closer and farther than the implied distance can, because of that uncertainty, yield the measured value; but there are more stars in the shells at greater distances, so the true parallax is more likely to be smaller than the measured one. Distances and luminosities inferred from such parallaxes are consequently underestimated, which is a problem for astronomers using parallaxes to calibrate distances. The existence of this bias, and the need to correct for it, became prominent with the precision parallax measurements made by the Hipparcos satellite.
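The effect can be illustrated with a small Monte Carlo experiment. The following Python sketch is not from the article, and its numbers (volume cutoff, measurement uncertainty, parallax bin) are arbitrary illustrative choices; it only shows that, when stars fill a volume uniformly, the true parallaxes of stars selected by their measured parallax are systematically smaller than the measured value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw stars uniformly in a sphere of radius d_max (pc), so the number
# per distance shell grows as d^2 (the uniform-density condition).
n_stars = 2_000_000
d_max = 1000.0
d = d_max * rng.random(n_stars) ** (1.0 / 3.0)   # CDF ~ d^3 for uniform density
true_parallax = 1000.0 / d                        # parallax in mas for d in pc

# Add Gaussian measurement noise to the true parallaxes.
sigma = 1.0                                       # assumed uncertainty (mas)
measured = true_parallax + rng.normal(0.0, sigma, n_stars)

# Select stars whose *measured* parallax lands in a narrow bin.
bin_center, half_width = 5.0, 0.1                 # mas
in_bin = np.abs(measured - bin_center) < half_width

print(f"measured parallax bin: {bin_center} +/- {half_width} mas")
print(f"mean true parallax of selected stars: {true_parallax[in_bin].mean():.3f} mas")
# The mean true parallax comes out below the bin center: more distant
# stars (smaller true parallax) scatter into the bin than nearby ones,
# which is the Lutz-Kelker bias.
```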

The correction method due to Lutz and Kelker is valid only if three conditions hold. The parallax uncertainty must be much smaller than the measured parallax itself, because otherwise the method can yield negative distances. The observed objects must be uniformly distributed in space, so that their number at distance d is proportional to d². And the objects must be bright enough that all of them are potentially detectable by the observing instrument over the entire distance range in question.[1]
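The uniform-density condition is what fixes the weighting over true parallax that drives the correction. A short sketch of that step, under the assumptions stated above (the notation π for true parallax and d for distance is introduced here for illustration and is not taken from the article):

```latex
% Stars in a thin shell of thickness dd at distance d, for uniform
% space density, scale with the shell volume:  N(d) dd ~ d^2 dd.
% Changing variables to the true parallax \pi = 1/d (so dd = d\pi/\pi^2)
% gives a weighting that strongly favours small true parallaxes, i.e.
% large distances -- the weighting behind the Lutz-Kelker correction.
\[
  N(d)\,\mathrm{d}d \propto d^{2}\,\mathrm{d}d,
  \qquad d = \frac{1}{\pi}
  \;\Longrightarrow\;
  N(\pi)\,\mathrm{d}\pi \propto \pi^{-4}\,\mathrm{d}\pi .
\]
```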

History

The phenomenon was originally described by Thomas E. Lutz and Douglas H. Kelker in a 1973 paper, "On the Use of Trigonometric Parallaxes for the Calibration of Luminosity Systems: Theory", published in the Publications of the Astronomical Society of the Pacific, Vol. 85, No. 507, p. 573.[2]

References

  1. Paterson, David A. "Topics in Astronomy: Topic 8. Inappropriateness of the Lutz-Kelker equation for brown dwarfs". Retrieved 22 September 2015.
  2. Lutz, Thomas E.; Kelker, Douglas H. (1973). "On the Use of Trigonometric Parallaxes for the Calibration of Luminosity Systems: Theory". Publications of the Astronomical Society of the Pacific 85 (507): 573.

