Chapter 6: California Dreaming

In 1533, Fortún Ximénez became the first European to reach the huge fabled island believed to exist off the west coast of the Americas. The island took its name from a popular novel, “The Adventures of Esplandián”, by the Spanish writer Garci Rodríguez de Montalvo. The novel described an island rich in gold and populated by black Amazons (female warriors). In the novel, these warriors and their trained army of flying griffins had joined with the Muslim forces in attacking the Christian stronghold of Constantinople. It was Queen Calafia[1] who ruled over the island caliphate. The name of the fabled island was, of course, California.

Fortún Ximénez died on the island, killed by the local inhabitants. But word of the discovery had got back to Hernán Cortés, the infamous Spanish conquistador responsible for the ruthless conquest of the Aztec empire just twelve years earlier (1521). Cortés sent several exploratory expeditions and was the first to actually found a colony. The colony failed and was abandoned within two years, but there were many subsequent expeditions that continued the aggressive exploration and settlement of that fabled island of California.

Of course, there were never any Amazons, nor griffins, nor a Queen Calafia, and no gold (though gold did finally show up some three centuries later and 2000 km further north). And even the island was not actually an island. It turned out to be the tip of a very long narrow peninsula. However, Montalvo’s name stuck: Baja California (lower California) refers to the long peninsula that is now part of Mexico. Alta California (upper California) was destined to become California, the 31st state admitted to the Union (the USA). The state of California is the focus for this chapter. It is about 1200 km (800 miles) north to south and 400 km (250 miles) east to west, so it qualifies well for this one-million-meter chapter.

The exploration and conquest of Alta California started in earnest in 1769. Under the direction of the Spanish government, Franciscan Catholic priests built a series of 21 missions[2] spaced one day apart by horseback (~30 km or 20 miles). These missions stretched from Mission San Diego de Alcalá (Saint Didacus) in the south and included Mission Santa Barbara (Saint Barbara), Mission Santa Clara de Asís (Saint Clare), Mission San José (Saint Joseph), and Mission San Francisco de Asís (Saint Francis) towards the northern end – lending their names to many modern California cities. The native population was forcibly relocated to live around these missions, ‘civilized’ with European customs, and indoctrinated with the Catholic religion. As a result of the missionaries’ efforts, combined with diseases such as measles and smallpox, and following many decades of mismanagement and mistreatment by first the Spanish, then the Mexican, then the US governments, the native population dwindled from an estimated 300,000 to a low of about 12,000 by 1900. According to 2017 census estimates[3], there are now close to 40 million people living in California. California has the greatest ethnic diversity of any state in the US, with large populations of European, Hispanic, Asian, and African-American heritage, but with only 1.6% claiming Native-American ancestry.

Today, five centuries after Fortún Ximénez set foot on Baja California and two and a half centuries after the Franciscan monks’ misguided efforts to civilize the natives, California boasts the largest economy and the largest population of any of the 50 United States. If it were a separate country, it would rank 6th in GDP, between Great Britain and France. California has been the world leader in the development of semiconductor technology and data storage technology and the central force in creating the modern information age. There is also another area of endeavor in which California leads the world, and that will be the topic of the next section. That topic will motivate the subsequent discussion of how broadcast video and audio information gets distributed in analog or digital form. Until recently, most of this data was distributed in a physical format, and even today much of it still is. But, increasingly, wide-bandwidth optical and radio links are supplanting physical forms of distribution such as the DVD. However, before we discuss the technologies employed, let us pay a visit to the source of much of this broadcast content – to what, in the year 1900, was a sleepy little suburb of Los Angeles with a population of 500 people.

Hollywood

The Motion Picture Patents Company (MPPC), founded in 1908 by Thomas Edison, controlled all the important patents required for film-making[4]. Headquartered in New Jersey on the east coast of the US, it maintained strict control and enforcement of the key patents covering the nascent film industry. California, however, was on the opposite side of the continent, some 4,000 kilometers (2,500 miles) away and several days’ journey by train – well away from the watchful eyes of Edison’s patent enforcers. In particular, the city of Los Angeles lay only a couple of hours away from the Mexican border – offering a hasty retreat should the need arise. In addition, the Los Angeles area offered a rich and varied landscape ranging from arid desert to lush irrigated orchards and gardens and from lofty mountains to spectacular coastlines. Also, at that time, the technology required very strong levels of lighting, so the many hours each day of bright sunlight in southern California were an added bonus. In the early 1900s, independent filmmakers began setting up studios in the Los Angeles area and, in particular, in Hollywood[5]. Most notably, Cecil B. DeMille’s studios were established there in 1914 and the Charlie Chaplin studios in 1917.

By the 1930s and 1940s (the MPPC’s patents having long since expired), the US movie industry was controlled by five major studios: Paramount, RKO, 20th Century Fox, Metro-Goldwyn-Mayer (MGM), and Warner Bros. The five studios were all based in or around Hollywood. This is sometimes called the golden age of Hollywood, when many famous actors made their names, including Ronald Reagan, Shirley Temple, John Wayne, Judy Garland, Cary Grant, and Katharine Hepburn. In 1939, Clark Gable and Vivien Leigh starred in “Gone with the Wind”, generally considered the highest-grossing movie of all time at about $3.4 billion when adjusted to 2014 dollars. Cinema audiences in the USA peaked during the golden age.

Over the years there have been major technology changes that have buffeted the industry and created winners and losers, so that the major players get repeatedly shuffled or replaced. These changes include the transitions from silent movies to ‘talkies’ with “The Jazz Singer” in 1927, and from black-and-white to color, established in 1937 when Snow White and the Seven Dwarfs, in “Glorious Technicolor” (a three-color scheme), became the highest-grossing movie of that year. The advent of television after World War II had, of course, a dramatic effect, much to the detriment of conventional movie making and to the advantage of television production. The difference between the two technologies was enormous, leading to very different production techniques. Movies, by that time, were large-screen, high-resolution, color productions with stereo high-fidelity sound. But they could only be viewed in a movie theater. In contrast, television quickly became immensely popular to the extent that most households in the US had a “TV” receiver in their living rooms. Television was ‘live’ and ‘instantaneous’ and ideal for quickly and widely disseminating (broadcasting) news and information. But it was small-screen, poor-resolution, black-and-white, and had mono sound. Worst of all, it was strictly live. At that time, there was no way of recording the information for later editing. It had to be right the first time and thus had more in common with live theatrical plays than with movies. Acting and directing and production for television was, at that time, a totally different animal – not that this was any inhibition for Hollywood and Los Angeles.

Technology never stands still. The situation changed again with the introduction of color pictures and stereo sound in television and then, in particular, with the invention of video tape recording by Ampex in Redwood City, California. The famous “Kitchen Debate” took place on July 24th, 1959, at the Moscow Trade Fair. Richard Nixon (then US Vice President) bragged to Nikita Khrushchev (USSR Premier) about the superiority of American technology, including repeated mentions of the Ampex[6] color video recorder upon which the exchange was recorded for posterity (the YouTube video makes fascinating viewing[7]). This invention removed the requirement for live television production and served to align the movie and television industries more closely again. In 1975, Sony introduced the “Betamax” consumer video cassette recorder (VCR), and the following year saw the introduction of JVC’s Video Home System (VHS), which eventually dominated the industry. The introduction of VCRs and subsequently optical discs (DVDs introduced in 1995 and then Blu-Ray in 2006) was a boon for both movie and television programming. The ability to do a secondary release of movies or television shows on tape or disk hurt ticket sales in cinemas but provided a wonderful after-market for the production studios.

Even today the movie industry is still largely centered around the Hollywood area in Los Angeles, and still, every February, hopeful actors and directors and screenwriters and the Hollywood ‘Glitterati’ gather for the ‘Oscars’ (Academy Awards), held on the famous Hollywood Boulevard. Similarly, though still separately, the “Emmy” Awards are an annual event in Los Angeles recognizing the leading contributors in creating shows for television. Of course, there are rivals to Hollywood. Other major film centers around the world include Mumbai, India (Bollywood) and Lagos, Nigeria (Nollywood), which both exceed Hollywood in sheer volume of films and theater tickets sold, as does the Chinese cinema industry based in Shanghai and Hong Kong. But these are far surpassed by Hollywood in worldwide influence and certainly in revenue, which exceeds $10 billion per year.

Early Photography

The technology required for movie-making was more an engineering challenge than something demanding a fundamental innovation or breakthrough. The formation of images with a ‘camera obscura’ (pinhole camera) and the operation of lenses and curved mirrors and basic optics were all known in antiquity in Europe and China. Polished metal mirrors and crystal lenses have been unearthed at archeological sites in the Middle East going back several thousand years. By the 14th century, lens-making for corrective eyeglasses was an established industry in Europe. In the early 1600s, Galileo and others were experimenting with telescopes and microscopes and making revolutionary discoveries. It was certainly well recognized that high-quality images could readily be formed (or even projected from a ‘magic lantern’) with the use of curved mirrors and lenses. Furthermore, if the scenes were moving, so were the images.

However, the task of actually capturing those images, even stationary images, was much more difficult. Given enough time, stencil images could be formed with sunlight on green leaves or on human skin, but there was nothing available that was sufficiently sensitive to capture the faint image in a camera obscura – even with the help of a sizable lens to gather light. Those images could certainly be used to guide the hand of a skilled artist and create the very accurate renditions painted by the Old Masters such as Vermeer in the 17th century. But ‘photography’, as we now call it, remained out of reach – that is, until 1839!

In 1839, the French Government unexpectedly published complete instructions for a working photographic process. This was presented generously as “un don au monde entier”, a gift to the whole world. The gift, however, excluded Great Britain, where a patent had been filed just a few days before the French announcement. It was Louis-Jacques-Mandé Daguerre who had developed the eponymous ‘daguerreotype’ process. The process was based on the properties of silver iodide. Silver compounds of chlorine, bromine, and iodine (the halides) are all strongly photosensitive. In the right chemical environment, light causes a rapid decomposition yielding an elemental silver precipitate that can be clearly distinguished. After exposure to the light image, it was necessary to ‘fix’ the image by soaking it in hot saline solution to remove the undecomposed silver iodide and render the metal plate insensitive to further light exposure. The oldest surviving photographic portrait, an image of the sister of the English-American scientist and philosopher John Draper[8], was taken by him in New York in 1840. The photograph required Miss Draper to sit very still for 65 seconds while the 4-inch aperture lens gathered enough light to capture the image.

Meanwhile, in England, perhaps prompted by the patent situation in Great Britain, Henry Fox Talbot went on to develop his competing ‘talbotype’ process. The talbotype or calotype process, though inferior in resolution, could be used to create a translucent negative on paper from which multiple positive copies could be made. This was the process that ultimately formed the basis for the commercial ‘wet’ photographic processing that was prevalent in the 20th century and epitomized by industry giants such as Kodak and Polaroid.

However, perhaps the most magical aspect of photography was the development of development. Early on, it was found that certain chemicals applied after exposure but prior to fixing could greatly strengthen a weak image. Over the years, this so-called development process was improved to the extent that images formed with extremely short exposures, initially completely invisible (latent images), could be strengthened to perfect full contrast. The mechanism for this huge chemical amplification did not become clear until much later. It turns out that, even with very short exposures, there are always a few silver halide molecules that get reduced to atomic silver. This tends to occur in special locations where defects or dislocations occur on the surface of the photographic grains. The latent image consists of these tiny invisible clusters of just a few silver atoms. But during the development process, a single tiny cluster can act as a nucleus to start the reduction of an entire grain, transforming it into a clearly visible silver particle. Accordingly, it became possible to reduce exposure times from hours to minutes to seconds to milliseconds and even to microseconds. Though it is now a scene from a bygone age, there was never anything more magical than seeing that hidden image slowly emerge from a blank white sheet lying in a tray of developer in the photographer’s darkroom.

Moving Pictures

So far we haven’t discussed how to deal with moving images. Again there was nothing new in the idea that a series of incrementally changing still images or drawings presented in rapid succession could fool the eye into believing it was viewing continuous motion. This is evident to any child who has drawn a series of cartoons on the top right corner of the pages of a book and watched the apparent motion as they riffle and release the pages in quick succession. ‘Flip books’ based on this principle were popular in Europe in the 19th century and created various moving scenes from a sequence of drawings or, later, of photographs. Ten images per second is sufficient to give a reasonable perception of motion. But taking photographs at this rate, with millisecond exposure times to capture actual motion in real time, was no simple task.

This brings us to the story of the first movie maker and the first movie star. The first movie maker was an Englishman called Eadweard Muybridge (one of his many names) and the name of the movie star was Sallie Gardner.

Mr. Muybridge was an interesting character. On the 17th of October 1874, Muybridge confronted his wife’s purported lover, Major Harry Larkyns, and shot him dead. Muybridge’s plea in the courtroom in Napa, California, was “innocent by reason of insanity” (the insanity having been brought on by a serious stagecoach accident years earlier). The jury, however, concluded that he was quite sane. Nevertheless, in explicit defiance of the judge’s instructions, the jury proceeded to acquit him on the basis that the murder was a reasonable and “justifiable homicide”. Four years later, Eadweard Muybridge was to be found in Palo Alto collaborating on a very special project with that immensely powerful and wealthy former Governor of California and longtime President of the Southern Pacific Railroad, a man often labelled as one of the major robber barons of the USA, Amasa Leland Stanford.

The Sallie Gardner movie was made by Muybridge at the specific request of Leland Stanford. The movie was shot at Leland Stanford’s Palo Alto farm (now the home of Stanford University). The movie comprises a mere 24 frames in total. Each of the 24 frames came from one of the 24 cameras that were set up equispaced along a straight line about 50 feet (15 meters) long. Extending in front of each camera was a thin tripwire that could open the shutter momentarily (for less than a millisecond). On June 15, 1878, everything was ready – the tripwires tripped and the shutters all clicked in quick succession as the racehorse Sallie Gardner came galloping through at 36 miles per hour (58 km/h). The whole sequence lasts one second. Leland Stanford had been determined to settle a major controversy that had been raging: for a horse in full gallop, is there a point in time when all four hooves are off the ground simultaneously? The movie answered the question with an unequivocal “Yes”. This historic movie can easily be found and viewed on the internet; it is typically shown as a loop of about 15 frames (one cycle of the horse’s gait) slowed down to about 10 frames per second (about 2.5x slow-motion).
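
As a rough check on those numbers (an illustrative calculation, not part of the original account), a horse moving at 36 mph crosses the 15-meter line of cameras in

$$\frac{15\ \text{m}}{36\ \text{mph}} \approx \frac{15\ \text{m}}{16.1\ \text{m/s}} \approx 0.93\ \text{s},$$

so the 24 shutters did indeed all fire within about one second, at an effective rate of roughly 25 frames per second.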

At that time, however, viewing or projecting the series of photographs in real time was quite difficult. There were a number of contraptions devised in Victorian times that allowed a series of drawings or photographs to be viewed in quick succession. One of the best known was the zoetrope (‘wheel of life’), where a series of images was pasted on the inside of a rotating cylinder. Opposite each image on the other side of the cylinder was a narrow slot which allowed the viewer a momentary glimpse of each image as the corresponding slot went past. The inside of the cylinder had to be brightly illuminated because only a small fraction of the light passed through the narrow slot to the viewer’s eye. An improvement on this, called the praxinoscope (‘action-viewer’), used mirrors rather than slots. The images were seen in reflection when viewed from slightly above the open cylinder. The mirrors (one per image) were placed exactly half-way between the center of rotation and the image on the inside of the cylinder. This clever geometry keeps the image stationary during rotation, and almost all the light reaches the person viewing the image. This method also largely eliminates flicker (even though successive images can be perceived by the human eye/brain as continuous at about 10 frames per second, the flicker associated with the blank or black interval between images can be perceived at up to about 50 ‘flashes’ per second under some conditions).
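
The half-radius mirror placement can be justified in one line (an illustrative derivation, not in the original text). A plane mirror forms a virtual image as far behind its surface as the object is in front of it. With the picture on the drum at radius $R$ and the mirror at radius $R/2$, the picture sits a distance $R - R/2 = R/2$ in front of the mirror, so its virtual image lies $R/2$ behind the mirror, at radius

$$\frac{R}{2} - \frac{R}{2} = 0,$$

i.e. on the rotation axis itself. Since the axis does not move as the drum spins, the reflected image appears to stand still.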

Eadweard Muybridge went on to create the zoopraxiscope (‘living-action-viewer’) in 1879. Some consider this to be the first movie projector. It operated on a similar principle to the zoetrope but with silhouettes or images on a rotating 16-inch diameter transparent glass disk. In this configuration the images could be backlit by a powerful lime-light (calcium oxide heated in an oxy-hydrogen flame) and projected onto a large screen. Because of the aspect-ratio distortion and keystone distortion inherent in the zoopraxiscope design, the images had to be corrected by pre-distorting the photographs. This was achieved by re-photographing with the originals placed on a specially angled and curved surface. Although the transfer to the glass disk was photographic, an artist would typically re-touch the images with paint to render them in high-contrast silhouette or to add color or other detail. Muybridge by this time was Professor Muybridge at the University of Pennsylvania and was renowned for his scientific studies of animal and human locomotion. This was the first time that rapid complex motions had been broken down quantitatively in time and space. Muybridge lectured widely in the US and Europe, entertaining his audiences with zoopraxiscope projections from his extensive collection of disks showing different kinds of animals walking and running and jumping as well as humans (including himself) performing various athletic feats.

Cinematography

By the end of the 19th century, there were a number of inventors, including Thomas Edison in New York, working to create practical devices for taking and showing moving pictures. In the end, it was the Lumière brothers in France with their cinématographe (movie camera/projector) who provided the technology and gave the name to the industry and to all the moving picture houses or ‘cinemas’. Their solution dispensed with the type of smooth motion and clever geometry seen in the praxinoscope and adopted more of a ‘brute force’ approach. The ‘intermittent mechanism’ found in sewing machines (to hold the cloth stationary while the needle and thread passed through the fabric) became the basis of the technology. The film was held stationary while one frame of the film was being exposed or projected. Then a shutter would close while the film was moved on to the next frame. Central to this was the availability, from Eastman Kodak in particular, of reels of continuous photographic film with carefully cut perforations or sprocket holes down both sides. The sprocket holes are used to advance the film but, in particular, they serve to accurately register the film in its stationary position and to ensure that successive frames don’t ‘jiggle’ around on the screen. Each frame must be registered to better than about 6 microns in both the camera and the projector (remembering that the projected image can be up to 1000 times bigger than the image on the film, so 6 microns of misregistration would become 6 mm of jitter on the screen).

There are 64 sprocket holes per foot on standard 35 mm (1 3/8 inch) wide film, as dictated by Thomas Edison. At four sprocket holes per frame (i.e. about 19 mm per frame) and with an available width between the holes on both sides of about 25 mm, this led to the adoption of the 4:3 aspect ratio (roughly the same as the human visual field) for early movies and subsequently for television. The frame rate eventually settled at 24 frames per second, giving a film speed of 18 inches per second (~45 cm/s). This frame-rate was more than adequate to capture motion realistically; however, with the introduction of talking movies, a higher film speed was desired in order to provide the bandwidth and quality for the audio channels that were optically encoded down the left side of the film. The higher speed also helped prevent the film from catching fire. Early film-stock was based on nitrocellulose (guncotton) and was highly flammable. An automatic ‘dowser’ blade is still an important feature of all projectors. The blade is immediately released if the film fails to advance correctly. The blade blocks the light source, thus preventing the intense light from striking the film. Still higher frame-rates were desirable to reduce flicker, but, of course, at increased consumption of the expensive film. Movies shown at 24 frames per second do exhibit noticeable flicker. To mitigate the flicker (at the expense of brightness), the shutter is closed two or three times per frame (48 or 72 Hz) and only during one of those closures is the film moved to the next frame.
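
As a quick sanity check on these figures, here is a minimal sketch (illustrative only; the numbers are those quoted above, the code itself is not from the original) that recomputes the frame pitch, the film speed, and the shutter flash rate:

```python
# Recomputing the 35 mm film figures quoted in the text (illustrative only).
FEET_TO_MM = 304.8

perf_pitch_mm = FEET_TO_MM / 64          # 64 sprocket holes per foot -> ~4.76 mm
frame_pitch_mm = 4 * perf_pitch_mm       # 4 holes per frame -> ~19 mm
fps = 24                                 # standard sound-era frame rate
film_speed_mm_s = fps * frame_pitch_mm   # ~457 mm/s, i.e. ~45 cm/s or ~18 in/s

print(f"frame pitch:  {frame_pitch_mm:.1f} mm")
print(f"film speed:   {film_speed_mm_s / 10:.1f} cm/s "
      f"({film_speed_mm_s / 25.4:.1f} in/s)")
for blades in (2, 3):                    # 2- or 3-bladed shutter
    print(f"{blades}-blade shutter: {fps * blades} flashes per second")
```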

The film standards above became ubiquitous and persisted for much of the 20th century, allowing almost any film to be viewed in almost any cinema worldwide. Various ingenious methods allowed the same 35 mm film-format to be used for color (Technicolor with three film-strips each recording one of three colors, later merged into three different color-sensitive emulsions on one film roll) and for wide-screen (Cinerama using three synchronized cameras/projectors, and CinemaScope using an ‘anamorphic’ lens to squish the photographic image sideways, reducing the aspect ratio down to 4:3 on the film and then expanding it back for widescreen projection). The later introduction of the 16:9 aspect ratio on more recent televisions and computers and smart-phones is a compromise between the original 4:3 aspect ratio and the various wide-screen formats, which often exceed a 2:1 aspect ratio. Hence the black bars at the sides of the screen for 4:3 source material or the black bars at the top and bottom of the screen (letterbox style) for wide-screen movies.
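
The compromise is easy to quantify. The sketch below (purely illustrative; the aspect ratios are the ones mentioned above) computes what fraction of a 16:9 screen ends up as black bars when content of a different shape is shown without cropping:

```python
# Fraction of a 16:9 screen left as black bars (pillarbox or letterbox).
def unused_fraction(screen_ar: float, content_ar: float) -> float:
    """Fraction of the screen area covered by black bars when content is fitted
    inside the screen without cropping or distortion."""
    if content_ar < screen_ar:            # narrower content -> bars at the sides
        return 1.0 - content_ar / screen_ar
    return 1.0 - screen_ar / content_ar   # wider content -> bars top and bottom

screen = 16 / 9
for name, ar in [("4:3 television material", 4 / 3),
                 ("2.39:1 widescreen movie", 2.39)]:
    print(f"{name}: {unused_fraction(screen, ar):.0%} of the screen is black")
```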

For still-photography, the same 35 mm film-stock (referred to as 135) was used but the standard image extended along 8 sprocket-holes rather than 4, changing the aspect ratio from 4:3 to 2:3 (i.e. 3:2 with a 90 degree rotation of the image). The negative size was 24 x 36 mm and the standard 4R print size was 4 x 6 inches in the US (or KG 10 × 15 cm in metric).

Projecting a bright, colorful, high-resolution image on a large screen requires a very intense, consistent source of white light. This remains a major technical challenge even in the age of digital projectors. Carbon arc lamps were very early inventions and produce a very intense, localized white light from the carbon plasma. Originally, operators had to adjust the separation of the carbon electrodes as they burned away during the movie. Today’s arc lamps are very different. The favorite high-intensity light-source in movie theaters is a high-pressure xenon arc contained in a fused silica envelope. These lamps are still limited by wear of the electrodes but last for about 1000 hours without any attention (i.e. two to three months in a typical cinema operation). The xenon plasma in the arc produces a broad-spectrum ‘white’ light with a color temperature of around 6200 K (compared with the Sun’s surface temperature of about 5800 K). Power consumption is in the range of one to ten kW depending on the size of the screen to be illuminated. A major part of the challenge is to dump the excess heat produced and to ensure that the film itself and other components are not damaged. In high-power lamps, the anode (positive electrode) is water-cooled.

Sadly, perhaps, almost all applications of wet-processed photographic film have disappeared. Still photographs are now almost universally digital and the vast majority are taken with smartphones rather than dedicated cameras. Since about 2010, almost all movies shown are from digital movie cameras, stored on digital devices, and presented on a digital projector. The classical cavernous movie theatres have been turned into multiplex cinemas (cineplex) showing multiple films simultaneously. The need for a skilled operator to make several seamless reel-changes during a feature film or synchronize the sound or adjust the equalization depending on the size of the audience or to stand ready to re-thread the projector if the film breaks (and reach for the fire extinguisher if necessary) – all this has disappeared in the face of automation. A single centralized operator can happily handle 16 or more simultaneous digital movies in a large Cineplex operation.

Movie film cameras and projectors have been almost totally replaced by their digital equivalents. The image from the lens in a camera now falls on a silicon chip comprising a rectangular array of photo-diode sensors with CMOS amplifiers and selection transistors. Photons striking the reverse-biased diode junction excite electron/hole pairs in the silicon, causing a current to flow proportional to the illumination. On top of this photo-sensor array is a corresponding micro-array of dye color filters. The outputs from three adjacent sensor elements (one for each primary color) are combined to create one color pixel. The actual silicon chip or die might be as small as 5 x 3 mm in a smartphone, ranging to several times that in a high-end camera.
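
As a toy illustration of that last step (a simplified sketch, not the chapter’s design – real sensors use a Bayer mosaic of filters and a demosaicing step), the outputs of three neighbouring single-color photodiodes can simply be stacked into one RGB pixel:

```python
import numpy as np

def combine_subpixels(red, green, blue):
    """Stack three equally sized single-color sample arrays into an H x W x 3 image."""
    return np.stack([red, green, blue], axis=-1)

# Toy 4 x 6 pixel 'sensor': one 8-bit sample per color filter per pixel position.
h, w = 4, 6
red   = np.random.randint(0, 256, (h, w), dtype=np.uint8)
green = np.random.randint(0, 256, (h, w), dtype=np.uint8)
blue  = np.random.randint(0, 256, (h, w), dtype=np.uint8)

image = combine_subpixels(red, green, blue)   # shape (4, 6, 3): 24 bits per pixel
print(image.shape, image.dtype)
```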

For viewing digital images on small screens or monitors, Liquid Crystal Displays (LCD) are the dominant technology. Liquid crystals contain long molecules that can polarize light. Critically, these molecules can be aligned in different ways in response to an applied voltage. With these properties, it is thus possible to build an electrically-controlled light-valve – exactly what is needed for a digital display. Again an array of color filters is needed positioned over the array of LCD elements. A white light source is required behind the LCD and is provided by light-emitting diodes (LED). This LCD technology can scale from tiny displays on smart-watches to the giant 100-inch-plus displays seen in shopping malls and other venues.

Movie projectors, however, are required to illuminate a really enormous screen and require a very high intensity of light at the source. Liquid crystals are very temperature sensitive and are easily damaged in this environment. The answer for digital movie projectors is back to mechanics, with an array of tiny micro-fabricated mirrors (one per pixel) and a rotating shutter/color-wheel. The acronym MOEMS, for Micro-Opto-Electro-Mechanical Systems, is used to describe the tiny mirrors. The entire array, millions of mirrors or pixels, is fabricated using a complex multi-step photolithographic process (see section later in this chapter). The mirror array is fabricated on top of a silicon substrate already containing all the necessary control electronics. Each tiny mirror is on a torsional mount that allows it to swing through a small angle (~10 degrees). It is essentially a binary device. At one end of its stroke, the mirror reflects light onto the movie screen and is ‘on’. At the other end of the stroke the light is deflected into a black sink and is ‘off’. The mirror is switched between the two positions using electrostatic attraction from electrodes closely spaced behind each tiny mirror. The pixel brightness (the fraction of light reaching the screen) is controlled by increasing or decreasing the fraction of time it spends in the ‘on’ position. The switching is done at about 10 times the frame rate, well beyond the persistence of human vision and the flicker threshold. Between the mirror and the light-source is a rapidly rotating color wheel with three primary colors. The mirrors are also controlled independently for each of the three successive colors as the wheel rotates. With mirrors, much less heat is dissipated in the optical switch and the materials employed (aluminum, for example) are intrinsically less heat sensitive. These are mechanical devices, but because they are scaled to such a small size they can operate remarkably fast (~10 microseconds to switch) and are also surprisingly robust mechanically.
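
A rough sketch of that brightness scheme is shown below (illustrative only; the 24 fps frame rate and ~10 µs switching time follow the text, while the 8-bit brightness levels and the simple duty-cycle scheme are assumptions – real DLP controllers use weighted bit-planes):

```python
# Duty-cycle ('fraction of time on') brightness control for a binary mirror.
frame_rate = 24                      # frames per second (as for film)
frame_time_us = 1e6 / frame_rate     # ~41,667 microseconds per frame
switch_time_us = 10                  # quoted mirror switching time, ~10 us

def on_time_us(level: int, bits: int = 8) -> float:
    """Microseconds the mirror spends 'on' in one frame for a given brightness level."""
    return frame_time_us * level / (2 ** bits - 1)

for level in (0, 64, 128, 255):
    t = on_time_us(level)
    print(f"level {level:3d}: on for {t:8.0f} us of {frame_time_us:.0f} us "
          f"(~{t / switch_time_us:.0f} switching times)")
```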

The conversion of the cinema industry from analog film to Digital Light Processing (DLP) technology was rapid. The technology was pioneered by Larry Hornbeck at Texas Instruments in the 1990s. The first full-length feature film released with DLP was “Star Wars: Episode I – The Phantom Menace” in 1999. By 2015, when Larry Hornbeck received the coveted Oscar for his technical contributions, almost all cinemas worldwide were using the technology.

Photographic film as a storage medium

Taking photographs and making movies obviously involves recording, storing, and retrieving huge amounts of information. Traditionally this has been in analog form, but the digital sound included in the later 35 mm formats makes it clear that photographic technology is equally capable of storing digital data. High-quality, small-grain (slow) photographic film is capable of a resolution of around 5 micrometers. This is about an order of magnitude poorer than the limit implied by the wavelength of light (less than 1 micrometer). The limitation in photographic film is due to the finite size of the silver halide grains and the associated dye polymer molecules (for color) that are used in the photographic emulsions (strictly speaking these are not emulsions of liquid droplets but ‘suspensions’ of relatively solid particles in a transparent plastic binder material).

On the standard Academy 35 mm format, a 5 micrometer resolution implies (22 x 16 mm) / (0.005 x 0.005 mm) ≈ 14 Megapixels. (For comparison, a high-definition television image or smartphone image contains about 2 Megapixels.) With three-color emulsions storing the equivalent of, say, 256 color intensities each, a single pixel carries 3 x log2(256) = 24 bits. Using the 14 Megapixel number gives a total of 24 x 14 ≈ 338 Megabits, or about 42 MegaBytes of information per image. A standard still-photograph, being over twice the size, could theoretically carry about 90 MBytes of information!
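
These figures are easy to reproduce. The short sketch below (illustrative arithmetic only, using the frame size and grain resolution quoted above) re-derives the per-frame capacity:

```python
# Re-deriving the film-capacity estimate for one Academy 35 mm frame.
frame_w_mm, frame_h_mm = 22, 16          # Academy frame dimensions
grain_mm = 0.005                         # ~5 micrometer film resolution
bits_per_pixel = 3 * 8                   # three colors x 256 levels = 24 bits

pixels = (frame_w_mm / grain_mm) * (frame_h_mm / grain_mm)
bits = pixels * bits_per_pixel
print(f"{pixels / 1e6:.1f} Mpixels per frame")
print(f"{bits / 1e6:.0f} Mbits = {bits / 8 / 1e6:.0f} MBytes per frame")
```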

A two-hour movie shot at 24 frames per second (172,800 frames occupying several reels of film) could carry over seven TeraBytes (TB) of information. As a reference point, the largest capacity Hard Disk Drive (HDD) in 2016 had a capacity of 10 TB. These very large numbers speak more to the capability of photographic film to store information than to the information actually contained in the movie. The human observer cannot really appreciate more than about 2 Megapixels per frame in a movie (4 Megapixels for a still image). Furthermore, powerful digital compression algorithms can reduce the number of bits required to represent a typical photograph by an order of magnitude without any perceptible change to the human eye. This can be done because photographs often contain large areas that require very little information to describe them (e.g. blue sky, blank walls, smooth surfaces). The human eye also has quite poor resolution for color (hue and saturation), which can thus be represented by far fewer dots or lines per inch. Mainly the algorithms have to accurately represent brightness (luminance) changes and must especially display any edges correctly and, of course, must not create any distracting artifacts. Single photographs vary wildly in the compression ratios that can be achieved. Much larger and more consistent compression ratios can be achieved in movies, which average over many individual images. The necessarily strong similarity between successive frames allows further huge compression gains – another order of magnitude or more. So instead of several TeraBytes, a two-hour movie can be compressed to about 4 GB in high definition or as little as 1 GB in standard definition (old-fashioned television quality) for recording and storage.
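
Extending the same arithmetic to a feature film (again purely illustrative; the 42 MBytes per frame comes from the calculation above and the compressed sizes are the rough figures just quoted):

```python
# From raw film capacity to typical compressed delivery sizes.
mbytes_per_frame = 42
frames = 24 * 60 * 60 * 2                 # 24 fps for two hours = 172,800 frames
raw_tb = mbytes_per_frame * frames / 1e6  # ~7.3 TB uncompressed
print(f"raw capacity of a two-hour film: {raw_tb:.1f} TB")

for label, size_gb in [("high definition", 4), ("standard definition", 1)]:
    ratio = raw_tb * 1000 / size_gb
    print(f"{label}: ~{size_gb} GB delivered, an overall reduction of ~{ratio:,.0f}:1")
```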

The longevity of photographically stored images is obvious from the existence of Miss Draper’s nearly 200-year-old photograph. The original daguerreotype images were formed from silver particles on a copper or brass substrate. The surface was easily damaged by scratches in handling, but otherwise daguerreotype photographs offered not only impressive resolution but also very good archival stability. The same is less true of the later negatives and prints, which involved plastic substrates and binders and complex color dye molecules. Nevertheless, black-and-white images and high-quality color photographs stored in a dark, cool environment with 30-40% relative humidity and protected from airborne acidic pollutants should last for a time-scale measured in centuries.

High-quality color photography is theoretically capable of recording almost one Mbit per square millimeter, or about 600 Mbits per square inch. These are certainly remarkable numbers but still over 1000 times below the areal densities that technologies such as the hard disk drive are able to achieve, and also very far below what magnetic tape recording can achieve in volumetric storage density. Of course, we will have much more to say about the capabilities of HDD and tape recording in the following chapters.

Today’s ‘hi-def’ cameras and projectors/displays are described as “4K”. For movies, “4K” refers to 4096 × 2160 pixels (8.8 Megapixels and ~17:9 aspect ratio) and for television it refers to 3840 × 2160 (8.3 Megapixels and 16:9 aspect ratio). At 24 bits per pixel and a frame-rate of 24 frames/s for movies or 30 frames/s for television (US), the raw uncompressed data-rates reach over 5 Gb/s. If no compression were used, a two-hour movie would consume over 40 Terabits or 5 TeraBytes. These numbers are similar to those quoted earlier for 35 mm film. The quality and resolution of “4K” technology is considered similar to that of 35 mm film.
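
Those data-rates follow directly from the pixel counts (an illustrative check; the resolutions, bit depth, and frame rates are the ones given above):

```python
# Raw (uncompressed) "4K" data rates and two-hour totals.
formats = {
    "4K cinema, 24 fps":     (4096, 2160, 24),
    "4K television, 30 fps": (3840, 2160, 30),
}
bits_per_pixel = 24
seconds = 2 * 3600                      # a two-hour movie

for name, (w, h, fps) in formats.items():
    rate_gbps = w * h * bits_per_pixel * fps / 1e9
    total_tb = rate_gbps * seconds / 8 / 1000   # TeraBytes for two hours
    print(f"{name}: {rate_gbps:.1f} Gb/s raw, ~{total_tb:.1f} TB for two hours")
```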

Film distribution has obviously changed very much too. Gone are the deliveries of the several large film canisters that would contain the latest blockbuster. Today’s movies are delivered by overnight mail on a hard disk drive or, more recently, by dedicated satellite or fiber-optic link. Feature films shown in cinemas use much less aggressive compression than DVDs or Blu-Ray disks. A movie might occupy 100 to 300 GBytes of data. For reference, the capacity of a DVD is 4.7 GBytes and of a Blu-Ray is 50 GBytes. The movies are encrypted to avoid unauthorized copying (piracy). A decryption key unique to the cinema and the show-time(s) is delivered separately.

Photolithography

The story of photolithography is very much intertwined with that of photography and has very much the same beginnings. It had long been recognized that some stones and metals could be etched with acid solutions. Furthermore, flat surfaces could be selectively etched into patterns by coating parts of the surface with a waterproof barrier of oil or tar to prevent etching. Words and even drawings could be etched into a surface in this manner. In particular, etching on a limestone tablet created rougher depressed areas that attracted and held ink in a manner very suitable for replicating images in a printing press. These limestone tablets were referred to as lithographic plates from the Greek for stone and drawing.

Nicéphore Niépce is another interesting character in this story. He and his older brother, Claude, created the Pyréolophore (fire – wind – bearer), the world’s first internal combustion engine. Not a mere laboratory curiosity, the engine was used to power a boat up the river Saône at Chalon in France. A patent for the Pyréolophore engine was signed by Emperor Napoleon Bonaparte on 20 July 1807. This was in essence a fuel-injection engine, the fuel being a mixture of lycopodium powder (the highly flammable spores of certain mosses or ferns) and coal-dust. The air-injected suspension of these fine particles was ignited by a smoldering fuse that was rapidly withdrawn mechanically behind a metal plate sealing the chamber. The pressure from the burning fuel then forced a column of water out of the back of the boat, thus propelling it forward. The explosions (better described as rapid burns) occurred about once every 5 seconds.

However, more importantly, Nicéphore Niépce is credited with the invention of photolithography. A substance called “bitumen of Judea”, a naturally occurring asphalt known since ancient times, was already being used as the acid-resistant coating for making etchings. Niépce became aware that bitumen exposed to light became more difficult to subsequently remove with the lavender-oil solvent. Using this property, he was able to create acid etchings of direct-contact stencil patterns and images simply through exposure to strong sunshine. But, most magically, by placing a bitumen-coated pewter plate in a camera obscura for several days, he was able to create a crude image of the view from the window of his house – an image that was permanently etched into the plate. The actual etched plate, dating from 1826 or 1827, was rediscovered in 1952. It is the oldest known surviving photograph. In 1829, Niépce formed a partnership with Louis Daguerre and was influential in helping create the daguerreotype process. Sadly, he died just four years later, financially ruined by the efforts of his brother, Claude, to commercialize the Pyréolophore engine. As part of Daguerre’s arrangement to publish the daguerreotype process, the French government awarded Niépce’s son a lifetime pension in recognition of his father’s contributions to both photolithography and photography.

Photolithography is absolutely central to the data storage business and, in fact, to the entire infrastructure of the modern information age. The imaging and chemical processes involved in photolithography have come a long, long way since Niépce’s first steps. Nowadays it is possible to create complex multi-layer structures with lateral dimensions of just a few tens of nanometers and thickness dimensions in the Angstrom regime (tenths of a nanometer). The very fine lateral resolution results from a combination of factors. First, the use of ultra-violet light with ~200 nm wavelength in air, which is decreased to ~140 nm with the use of a distilled-water or oil immersion interface. Second, the use of physically large precision lenses (recall that angular resolution is given roughly by wavelength over lens diameter) with a short focal length (to translate the high angular resolution into very fine spatial resolution). These lenses are referred to as high numerical-aperture (NA) lenses. The NA is defined as the sine of the half-angle subtended by the lens at its focal length. The lenses used in photolithography have NAs of around sqrt(½) ≈ 0.7 (i.e. the focal length is about half the lens diameter, making the half-angle about 45 degrees). Beyond these two factors are a number of techniques, such as phase-shift masks, that take advantage of destructive optical interference to produce clean sharp nulls in the illumination that reaches the wafer (thus creating very narrow sharp lines). The photo-resist itself must be thin (so as not to spoil the resolution) and must produce a strong response to the illumination and also differentiate strongly between exposed and unexposed regions in the subsequent lithographic steps. The response of the photoresist and of the subsequent processing steps is highly nonlinear, a factor which again can be taken advantage of in enhancing contrast and shifting edges and widths to help create these incredibly small features.
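
A rough feel for the numbers comes from the standard Rayleigh-type scaling, minimum feature ≈ k1 × λ/NA. The sketch below is illustrative only: the ~200 nm source is taken to be the 193 nm ArF laser, and the water-immersion index and the process factor k1 are assumptions rather than values from the text:

```python
import math

# Rayleigh-style estimate of the smallest printable feature: CD ~ k1 * lambda / NA.
wavelength_nm = 193       # ArF deep-UV source (the ~200 nm light mentioned above)
n_immersion = 1.44        # assumed refractive index of the water immersion layer
na = math.sqrt(0.5)       # NA ~ sqrt(1/2), i.e. a ~45 degree half-angle (as in the text)
k1 = 0.3                  # assumed process factor (phase-shift masks etc. push it down)

effective_wavelength = wavelength_nm / n_immersion   # ~134 nm, close to the ~140 nm quoted
cd_nm = k1 * effective_wavelength / na
print(f"effective wavelength: ~{effective_wavelength:.0f} nm")
print(f"estimated minimum feature: ~{cd_nm:.0f} nm")
```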

The complexity and tiny dimensions of structures that can be created by photolithography almost defy belief. In terms of mechanical complexity, the micro-mirrors in DLP digital movie projectors are a wonderful example. The several million mirrors on a chip are collectively referred to as a ‘pond’. Each mirror is etched to separate it from its neighbors by a small gap of less than a micron and then further etched to undercut the mirror and separate it from the underlying electrostatic motor electrodes and the silicon circuitry. This etching occurs all around the mirror so that it is completely free except for two areas that had been masked and now form the two narrow bridges that provide support and act as the torsional springs. The mirrors are 5 to 10 micrometers on a side, so, while they represent a complex, difficult-to-fabricate mechanical structure, they do not stress the resolution limits of the technology. Good examples of the limits in creating small structures are provided by the reading and writing transducers (heads) in a hard disk drive. These are also very complex structures. Several hundred steps and several months of processing are required in order to complete the entire structure. The full structure involves an inductive writer stacked on top of a magnetoresistive reader plus one or more thermal actuators and thermal contact sensors. The critical physical widths of the writer and reader are around 30 nm and 20 nm respectively. There will be much more discussion of the read and write heads in future chapters. Similarly, the tall vertical structures that support 3D NAND flash memory are truly remarkable in terms of the extreme aspect ratio (about 60:1) of the topology that must be created to interconnect through the vertical dimension – more about this later too.

Also deferred to the following chapters is so-called imprint technology. This is a technology that looks poised to play a role in nano-lithography (well below optical wavelengths) for integrated circuits. But we will look at it first in a different context. With imprint lithography, the pattern or information can be impressed directly from a single master inexpensively onto multiple daughters. Examples include vinyl phonograph records, Compact Disks (CD), Digital Versatile Disks (DVD), and, more recently, Blu-Ray Disks. The interplay and competition in the distribution of music and movies between these various imprint technologies and the magnetic recording technologies makes for a fascinating story that we’ll cover in the next chapter.

California High-Tech

Much of the story of photography and moving pictures and of photolithography and integrated circuits evolved in California. In 1956, William Shockley Jr., Nobel Prize-winning co-inventor of the transistor, moved to California to look after his ailing mother in Palo Alto. This move was the seminal event that led to the establishment of ‘Silicon Valley’ in the San Francisco Bay Area and to industry behemoths such as Fairchild, Intel, and AMD.

Meanwhile, Southern California (the Los Angeles and San Diego areas), aside from the fame of Hollywood, also developed a huge high-tech manufacturing industry. Much of this is focused on military and aerospace work. San Diego on the southern border has one of the highest concentrations of major defense contractors, and its natural deep-water harbor supports one of the largest naval fleets in the world. In the Los Angeles area, the traditional companies that were dominant towards the end of the 20th century included Lockheed in Burbank, Hughes in Culver City, Northrop Grumman in Redondo Beach, and Rocketdyne in Canoga Park. The newer generation that is leading a resurgence in the aerospace industry includes SpaceX in Hawthorne, Virgin Galactic in Long Beach, Scaled Composites (SpaceShipOne) in Mojave, and SpaceDev (Dream Chaser) in Poway. The Vandenberg Air Force Base on the coast between Santa Barbara and Santa Maria is the US West-coast launch site used by NASA and by these new commercial spaceflight companies. Vandenberg has open sea directly to the south and is especially useful for launching into polar orbits. Edwards Air Force Base in the California desert is a rocket testing site and was a landing site for the Space Shuttle. However, going forward in this book, we have finally left spaceflight and space-probes and space-telescopes behind, and we are now firmly down to Earth.

The state of California extends about 800 miles (~1300 km) from north to south and about 200 miles (~300 km) east to west. The next chapter and the next factor-of-ten in diminishing scale takes us down to ~100 km. We will focus specifically on the San Francisco Bay Area: birthplace of video-tape recording, birthplace of hard-disk-drives, home of “Silicon Valley”, and headquarters to many of the newer giants of the internet and of social media.

Further Reading

A. Rolle, A. Verge, California: A History, Harlan Davidson, Wheeling, IL, 2008

B. Newhall, The History of Photography: From 1839 to the Present, Bulfinch Press, Oct. 30, 1982

Edward Ball, The Inventor and the Tycoon, Knopf-Doubleday Publishing, Nov. 5, 2013

G.P. Williams, The Story of Hollywood, BL Press, Oct. 1, 2011


References

  1. https://en.wikipedia.org/wiki/Calafia
  2. http://californiamissionsfoundation.org/the-california-missions/
  3. https://www.census.gov/quickfacts/ca
  4. https://en.wikipedia.org/wiki/Motion_Picture_Patents_Company
  5. https://www.u-s-history.com/pages/h3871.html
  6. https://museumofmagneticsoundrecording.org/ManufacturersAmpex.html
  7. https://www.youtube.com/watch?v=D7HqOrAakco
  8. https://en.wikipedia.org/wiki/John_William_Draper