The first X-rays were produced by accident, as a by-product emission from gas discharge tubes. The high voltage applied between the anode and cathode of the tube accelerates ionized gas atoms into the metal cathode. This releases high-energy electrons, and when these electrons strike the glass wall of the tube, X-rays are emitted.
X-ray tubes exploit exactly the same physical processes, but here the cathode is specially shaped to direct a focused beam of electrons onto a metal target, and the X-rays produced there leave the tube through a window. Metal targets produce both a continuum of X-rays and a line spectrum. The "K lines" of this spectrum are formed by outer-shell electrons falling into the innermost K-shell. Metals such as molybdenum, copper, cobalt, iron, chromium, and tungsten are used for the target, each producing K lines of a different wavelength.

William Coolidge (1873-1975), an inventor and physicist working for General Electric in Schenectady, New York, had become interested in the properties of ductile tungsten while trying to improve the filaments of lightbulbs. As X-rays were another of his interests, he decided to try using ductile tungsten as the target anode of a discharge tube. The Coolidge tube was gas-free and had a high-temperature tungsten cathode (tungsten melts at 6,170°F/3,410°C). These new tubes produced an intense, stable, and controllable beam of X-rays, making the use of X-rays in medical diagnosis both safe and convenient. The tube was patented in 1913 and was used extensively in field hospitals during World War I. Doctors and dentists were introduced to the device, and the subject of radiology blossomed.
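Why each target metal produces K lines of a different wavelength can be illustrated with Moseley's law, a standard approximation not named in the text: an electron falling from the n = 2 to the n = 1 shell sees the nuclear charge screened by roughly one remaining inner electron. The sketch below applies this rough estimate to the target metals listed above; the figures are illustrative, not tabulated experimental values.

```python
# Rough K-alpha wavelength estimate via Moseley's law (an assumption of this
# sketch; the text itself only states that each metal gives different K lines).

RYDBERG_EV = 13.6   # hydrogen-like energy scale, eV
HC_EV_NM = 1239.84  # hc in eV*nm, converts photon energy to wavelength

def k_alpha_wavelength_nm(z):
    """Approximate K-alpha wavelength (nm) for atomic number z.

    Models the n=2 -> n=1 transition with the nuclear charge screened
    by one electron: E = 13.6 eV * (z - 1)^2 * (1 - 1/4).
    """
    energy_ev = RYDBERG_EV * (z - 1) ** 2 * (1 - 1 / 4)
    return HC_EV_NM / energy_ev

# Target metals mentioned in the text, with their atomic numbers
targets = {"Cr": 24, "Fe": 26, "Co": 27, "Cu": 29, "Mo": 42, "W": 74}
for name, z in targets.items():
    print(f"{name}: K-alpha ~ {k_alpha_wavelength_nm(z):.4f} nm")
```

For copper this gives about 0.155 nm, close to the measured Cu K-alpha line near 0.154 nm, while heavier targets such as molybdenum and tungsten yield progressively shorter, more penetrating wavelengths.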