Lords of light
This year's Nobel Prize in Physics has been awarded to three scientists who played a decisive role in the development of information technology. Charles Kuen Kao received half of the prize for discoveries that laid the foundations of optical fiber technology, today the most efficient medium for almost every kind of communication in the world. The other half was shared by Willard Sterling Boyle and George Elwood Smith for the invention of the digital image sensor, the CCD (Charge-Coupled Device), which is now a basic component of almost every digital camera and camcorder.
When the winners of the Nobel Prize in Physics were announced in Stockholm, much of the world learned of it almost instantly. The message spread around the globe at roughly the speed of light, the fastest speed possible. Text, images, and sound travelled through optical fibers and through space to the devices people use at home to receive information. Such rapid transfer of information is a convenience many now take for granted, so much so that they could hardly imagine life without it.
The main precondition for communications as we know them today was the optical fiber, which Charles Kao pioneered about forty years ago.
Only a few years later, Willard Boyle and George Smith revolutionized photography: thanks to their invention, cameras no longer need film, because images can be captured with the electronic chip (the CCD) they designed. This discovery paved the way for the digital transmission of images, which now circulate en masse over the Internet, through optical cables around the world.
How to capture light?
Sunlight is what lets us see the world. Yet it took a long time before people learned to control light well enough to use it for sending coded messages to many recipients at once. Laying the foundations of the modern information society required numerous inventions, large and small. Fiber optics, for one, demanded entirely new ways of making glass.
A reliable light source was also needed, which came with the development of semiconductors. Finally, a huge network had to be built, composed of transistors, amplifiers, switches, transmitters, receivers, and many other elements working together. The telecommunications revolution whose consequences we are discussing here was made possible by the work of thousands of scientists and inventors from all over the world.
The great world exhibition held in Paris in 1889 celebrated the centenary of the French Revolution. The Eiffel Tower became its most famous landmark, but a less remembered spectacle was the extraordinary play of light prepared for the occasion: a fountain whose jets of water were illuminated in various colors. The inspiration came from experiments in the middle of the 19th century, which showed that when a jet of water is lit by sunlight, the light travels inside the jet and follows its curve.
Of course, fascination with the play of light in water and glass goes back much further. Glass was being made in Mesopotamia and Egypt as long as 4,500 years ago. Venetian glassmakers used various combinations of light and form to create their decorations, cut and ground glass went into candlesticks and chandeliers, and the mystery of the rainbow challenged the imagination of many men and women long before the laws of optics provided answers in the 17th century. Yet only about a hundred years separate us from the moment when these ideas matured and people began thinking seriously about putting trapped light to use.
Rays of sunlight entering water bend at the surface because the refractive index of water is greater than that of air. If a light beam travels the other way, from water toward air, it may not leave the water at all but be reflected back instead, a phenomenon known as total internal reflection. This is the basis of waveguide technology: light is trapped inside a fiber whose refractive index is higher than that of its surroundings. A beam launched into an optical fiber is reflected off the glass wall and carried forward, because the refractive index of the glass is higher than that of the air around it.
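To make the geometry concrete, here is a minimal sketch of the critical angle that follows from Snell's law; the refractive index values are typical illustrative numbers, not figures from the article.

```python
import math

def critical_angle_deg(n_inner: float, n_outer: float) -> float:
    """Angle of incidence (from the normal) beyond which light is totally
    reflected back into the denser medium: sin(theta_c) = n_outer / n_inner."""
    if n_inner <= n_outer:
        raise ValueError("total internal reflection requires n_inner > n_outer")
    return math.degrees(math.asin(n_outer / n_inner))

# Illustrative refractive indices (assumed values, not from the article):
print(critical_angle_deg(1.33, 1.00))  # water to air: about 49 degrees
print(critical_angle_deg(1.48, 1.46))  # a typical fiber core against its cladding: about 81 degrees
```

Any ray striking the boundary at an angle steeper than this, measured from the normal, is reflected back into the denser medium, which is exactly how a fiber keeps the light inside.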
Short, simple optical fibers have been used in medicine since the 1930s. With thin glass tubes, doctors could peer into a patient's stomach or illuminate a tooth during surgery. But wherever the fibers touched, light leaked from one into another, and they wore out quickly. Cladding each fiber in a layer of glass with a lower refractive index brought a marked improvement, so by the 1960s a big step had been made toward instruments for gastroscopy and other medical uses.
For long-distance communication these fibers were still useless, and to make matters worse, few scientists were interested in light at all, because it was the age of electronics and radio waves. In 1956 the first transatlantic telephone cable was laid, with a capacity of 36 simultaneous calls. After that, satellites took over much of the remaining demand, as telephony grew dramatically and television transmissions required ever greater capacity. Compared with radio waves, infrared and visible light can carry tens of thousands of times more information, so the potential of light could no longer be ignored.
The invention of the laser in the early 1960s was the decisive step for fiber optics. The laser was a stable light source emitting an intense, highly directed beam, ideal for feeding into thin optical fibers. The first lasers emitted infrared light and required cooling, but more practical lasers that could operate at room temperature were built during the 1970s. This was the technological breakthrough that made optical communication practical: all information could now be encoded into extremely fast flashes of light representing digital ones and zeros. There was still no answer, however, to how such a signal could be carried over longer distances, because after 20 meters only 1% of the light that entered the fiber could still be detected.
Reducing this loss of light became a challenge for visionaries such as Charles Kao. Born in Shanghai in 1933, he moved with his family to Hong Kong in 1948, trained as an electronics engineer, and defended his doctoral thesis in London in 1965. By then he was already working at Standard Telecommunication Laboratories, where he and his colleague George Hockham studied glass fibers in detail. Their goal was for at least 1% of the light entering an optical fiber to remain after 1 kilometer.
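To see how demanding that goal was, the figures can be expressed as attenuation in decibels per kilometer, the standard measure of fiber loss. The sketch below uses only the numbers quoted above and the usual decibel definition.

```python
import math

def attenuation_db_per_km(fraction_remaining: float, distance_km: float) -> float:
    """Fiber loss in dB/km: -10 * log10(P_out / P_in) divided by the distance."""
    return -10 * math.log10(fraction_remaining) / distance_km

# Early fibers: only 1% of the light survived 20 meters (0.02 km).
print(attenuation_db_per_km(0.01, 0.02))  # 1000 dB/km
# Kao's goal: at least 1% of the light surviving a full kilometer.
print(attenuation_db_per_km(0.01, 1.0))   # 20 dB/km
```

The early fibers thus lost light at roughly 1,000 dB/km, while Kao was asking for 20 dB/km, a fifty-fold improvement in the decibel measure.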
In January 1966, Kao presented his conclusions. The main problem was not any fundamental flaw in the fiber itself but the glass, which had to be purer and more perfect to reach the goal. That, he added, was feasible, though it would take great effort. The next task was therefore to make glass of a transparency that had never been achieved before. Kao's enthusiasm inspired other researchers to share his vision of what fiber optics could become.
Glass is made from quartz, one of the most abundant minerals on Earth. To make the purest glass in the world, however, Kao suggested drawing very thin fibers from quartz melted at 2,000 degrees Celsius. Four years later, in 1971, scientists at Corning Glass Works in the United States, a company with over 100 years of experience in glassmaking, used chemical processes to produce an optical fiber 1 kilometer long.
Very thin fibers of glass may seem brittle and fragile, but when glass is drawn properly into a thin thread its properties change. It becomes strong, light, and flexible, a prerequisite if the fiber is to be buried, laid under water, or bent around corners. Unlike copper cables, glass fibers are not affected by lightning, nor are they disturbed by bad weather the way radio links are.
It took quite some time for the world to be wired with optical fibers. In 1988, the first optical cable was laid along the bottom of the Atlantic Ocean between America and Europe; it was 6,000 kilometers long. Today, telephone signals and data travel through a network of optical glass fibers whose total length is about 1 billion kilometers. Wrapped around our planet, that length would make roughly 25 thousand loops, and the amount of fiber in use keeps growing by the minute.
Even in the purest glass fibers the signal is slightly attenuated along the way and must be amplified if it is to cross great distances. This task, which once required electronics, is now handled by optical amplifiers, removing the needless losses that occurred when light had to be converted into electronic signals and back again. Today, 95% of the light remains in the fiber after a full kilometer of travel, a figure worth comparing with Kao's ambition of getting just 1% of the light through the same distance.
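Using the same decibel formula as before, today's figure can be put alongside Kao's target; the following is illustrative arithmetic only.

```python
import math

def attenuation_db_per_km(fraction_remaining: float, distance_km: float) -> float:
    """Fiber loss in dB/km from the fraction of light surviving the distance."""
    return -10 * math.log10(fraction_remaining) / distance_km

modern = attenuation_db_per_km(0.95, 1.0)      # about 0.22 dB/km today
kao_target = attenuation_db_per_km(0.01, 1.0)  # 20 dB/km, Kao's 1966 goal

# Distance over which a modern fiber falls to the 1% level (20 dB of loss):
print(modern)               # ~0.22
print(kao_target / modern)  # ~90 (kilometers)
```

At roughly 0.2 dB/km, a modern fiber travels on the order of 90 kilometers before dropping to the 1% level that early fibers reached after just 20 meters.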
Nor is there just one type of optical fiber today. Which fiber to choose depends on communication needs and cost, and fibers differ in size, material properties, and the wavelengths of light they carry. Semiconductor lasers and tiny, grain-sized light-emitting diodes feed the optical networks that stream the world's telephony and Internet traffic. Infrared light with a wavelength of 1.55 micrometers is used for all kinds of long-distance communication, because losses at that wavelength are the smallest.
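For orientation, the carrier frequency behind that 1.55-micrometer wavelength follows from f = c / wavelength. In the sketch below the 4 GHz microwave carrier is an assumed value used only for comparison, not a figure from the article.

```python
# Carrier frequency of 1.55-micrometer infrared light, f = c / wavelength.
c = 3.0e8              # speed of light in m/s (rounded)
wavelength = 1.55e-6   # 1.55 micrometers

f_optical = c / wavelength   # about 1.9e14 Hz, i.e. roughly 193 THz
f_microwave = 4.0e9          # an assumed 4 GHz microwave carrier, for comparison

print(f_optical)                 # ~1.94e14
print(f_optical / f_microwave)   # ~48,000 times higher carrier frequency
```

This enormous carrier frequency is what gives light the capacity advantage over radio waves mentioned earlier.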
The capacity of optical networks continues to grow at an astonishing rate, and transmitting thousands of gigabits per second is no longer a dream. Technology is moving toward ever more interactive communication, with optical cables becoming more widely available and eventually reaching right into our homes. The technology already exists; what we do with it is another question entirely.
Digital eye
Some inventions turn out to be surprising, and the CCD sensor is certainly one of them. Without CCD chips, the development of digital cameras and camcorders would have been much slower, and we would not have the stunning photographs of space taken by the Hubble Space Telescope, or the images of the red desert on our neighboring planet, Mars.
None of this is what the inventors of the CCD, Willard Boyle and George Smith, had in mind when they began their work. One September day in 1969 they sketched the basic design of an image sensor on a board in Boyle's office. Taking photographs was not on their minds at the time; their goal was a better electronic memory. As a memory medium the CCD is now largely forgotten, but it ended up somewhere quite unexpected instead, as the basic component of digital still and video cameras. The story of the CCD sensor is another success story of our electronic era.
Like many other electronic devices, the digital image sensor, the CCD, is made of silicon. On a silicon plate the size of a postage stamp sit millions of light-sensitive photocells. Digital photography relies on the photoelectric effect, first explained by Albert Einstein, who received the 1921 Nobel Prize for it. The effect occurs when light striking the silicon knocks electrons loose; the released electrons are collected in the photocells, which act as tiny containers. The greater the light intensity, the more electrons fill these containers.
When an electrical voltage is applied across the CCD array, the contents of the containers can be read out: line by line, the electrons are shifted off the array into a read-out register. A CCD matrix of 10 x 10 elements is thus transformed into a string of 100 elements. In this way the CCD chip converts the optical image into electronic signals, which are later translated into digital ones and zeros. Each cell corresponds to one picture element, a pixel. Multiplying the width of the CCD chip in pixels by its height gives the image capacity of the whole sensor: a CCD chip of 1280 x 1024 pixels has a capacity of 1.3 megapixels (1.3 million pixels).
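As a toy illustration of this read-out idea (a sketch of the principle, not real sensor driver code), the following snippet flattens a small matrix of charge values row by row into a serial stream and repeats the pixel arithmetic from the example above.

```python
# Toy model of CCD read-out: charge wells are shifted off the array
# row by row into a single serial stream (a sketch of the principle only).
def read_out(sensor):
    stream = []
    for row in sensor:       # each row is moved toward the read-out register...
        stream.extend(row)   # ...and leaves the chip one charge packet at a time
    return stream

# A 10 x 10 matrix of charge values becomes a string of 100 values.
sensor = [[10 * r + c for c in range(10)] for r in range(10)]
assert len(read_out(sensor)) == 100

# Pixel capacity = width x height, as in the article's example:
width, height = 1280, 1024
print(width * height)                  # 1310720 pixels
print(round(width * height / 1e6, 1))  # 1.3 megapixels
```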
The CCD records the image only in black and white, so filters must be used to obtain information about the colors of the light. A filter that passes one of the primary colors, red, green, or blue, is placed above each photocell of the sensor. Because of the way the human eye's sensitivity works, there are twice as many green pixels as red or blue ones. More elaborate filter arrangements can be used when more accurate color reproduction is required.
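The most common such arrangement is the Bayer mosaic, in which each 2 x 2 block of photocells carries one red, two green, and one blue filter. Below is a minimal sketch of that pattern; the "RGGB" layout shown is one common convention.

```python
from collections import Counter

# One common Bayer arrangement: each 2 x 2 block holds one red, two green,
# and one blue filter, matching the eye's greater sensitivity to green.
BAYER_BLOCK = [["R", "G"],
               ["G", "B"]]

def bayer_color(row: int, col: int) -> str:
    """Filter color over the photocell at (row, col) when the block is tiled."""
    return BAYER_BLOCK[row % 2][col % 2]

# Count filters over a small 4 x 4 patch of the sensor:
counts = Counter(bayer_color(r, c) for r in range(4) for c in range(4))
print(counts)  # Counter({'G': 8, 'R': 4, 'B': 4})
```

Counting the filters over any tiled region confirms the two-to-one ratio of green to red or blue described above.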
How did Boyle and Smith come up with the CCD during that brief brainstorming session some forty years ago? The answer lies in the internal politics of their employer, Bell Labs. Their boss was pushing them to develop a better version of the so-called bubble memory, another invention of the laboratory where they worked. Once the basic CCD design was finished, it took the technicians only a week to build the first prototype. As a memory the CCD has long since been forgotten, but it became the heart of photographic technology.
Bell Labs hired the American George Smith in 1959, and he was granted some 30 patents during his career there. When he retired in 1986, he devoted himself to his life's passion, ocean sailing, and has since sailed around our planet several times.
By 1969, Willard Boyle had already made several important discoveries, including work on the world's first continuously operating ruby laser. Boyle was born in a remote part of Canada, where until the age of 15 he was schooled at home by his mother. Bell Labs became his workplace in 1953, and in the 1960s he joined the roughly 400,000 people in America working on the project that sent the first man to the Moon on July 20, 1969.
The benefits of the digital image sensor soon became apparent. Just a year after the invention, in 1970, Smith and Boyle demonstrated a CCD in a video camera for the first time. In 1972, the American company Fairchild built the first image sensor measuring 100 x 100 pixels, which went into serial production a few years later. In 1975, Boyle and Smith themselves built a digital video camera with resolution high enough to carry a television picture.
In 1981, the first camera with a built-in CCD sensor appeared on the market. Extremely primitive by the standards of today's models, these cameras nevertheless marked the start of the digitalization of photography: digital photography was born. Five years later, in 1986, the first 1.4-megapixel sensor was made, and nine years after that, in 1995, the first fully digital camera reached the market. Photographic equipment manufacturers around the world quickly embraced the technology, and cheaper, better products followed.
With cameras carrying digital sensors instead of film, an era in the history of photography came to an end, one that began in 1839, when Louis Daguerre presented his photographic process to the French Academy of Sciences.
When it comes to everyday photography, digital still and video cameras have proved a commercial success. Recently, however, CCD sensors have been challenged by another technology, CMOS (Complementary Metal Oxide Semiconductor), which was developed at about the same time as the CCD. Both work on the principle of the photoelectric effect, but while in a CCD chip the photocells are read out line by line, in a CMOS sensor each cell is read out individually.
CMOS consumes less energy, so batteries last longer, and it has long been cheaper to manufacture. Its drawbacks, however, have been higher noise and some loss of image quality, and CMOS is still not sensitive enough for certain applications of photography. Today it is most often found in mobile phone cameras and some other kinds of imaging. Both technologies, though, continue to evolve for a growing range of purposes.
Three years ago, CCD chips reached a resolution of 100 megapixels. Image quality does not depend on pixel count alone, but reaching this resolution marked another step in the development of digital photography. Some predict that the future belongs to CMOS rather than the CCD; others think the two technologies will keep overtaking each other for a long time to come.
No one initially predicted that the CCD would become a basic tool of modern astronomy, yet thanks to digital technology the wide-field camera of the Hubble Space Telescope can send beautiful pictures of space back to Earth. Its sensor originally had only 0.64 megapixels, but when four such sensors were combined the camera delivered images of 2.56 megapixels, a remarkable figure in the 1980s, when the telescope was designed. Today the Kepler satellite carries a 95-megapixel mosaic sensor with which scientists hope to discover Earth-like planets orbiting other stars.
Astronomers were among the first to recognize the advantages of the digital image sensor. It covers a huge part of the electromagnetic spectrum, from X-rays to infrared light, and is thousands of times more sensitive than photographic film: of 100 incoming photons a CCD can capture up to 90, where a photographic emulsion or the human eye would barely capture one. In just a few seconds the CCD gathers light from distant celestial objects that previously required hours of photographic exposure.
In 1974, a digital image sensor was used for the first time to photograph the Moon, and astronomers adopted the technology with lightning speed. As early as 1979, a digital camera with a resolution of 320 x 512 pixels was mounted on one of the telescopes of the Kitt Peak National Observatory in Arizona.
Today, wherever photography, video, or television is in use, digital image sensors are almost certainly involved. They serve in surveillance, both on Earth and in space, and in medicine, where CCDs image internal organs for diagnosis. The digital image sensor has become a widespread technology in science and in everyday life, from the ocean floor to outer space, helping us uncover the fine details of the very distant and the very small, and to preserve the memorable moments of our own lives.