Digital photo. Medium format and other professional digital cameras

The history of invention is sometimes bizarre and unpredictable. Exactly 40 years have passed since an invention in the field of semiconductor optoelectronics that led to the emergence of digital photography.

In 2009, the inventors Willard Boyle (born in Canada in 1924) and George Smith (born in 1930) were awarded the Nobel Prize in Physics. While working at Bell Labs, in 1969 they invented the charge-coupled device: the CCD sensor (Charge-Coupled Device). At the end of the 1960s, scientists had found that the MOS (metal-oxide-semiconductor) structure is light-sensitive. The principle of operation of a CCD sensor, which consists of individual MOS photosensitive elements, is based on reading out the electrical potential that arises under the influence of light; the charge is shifted sequentially from element to element. The CCD matrix, made up of individual light-sensitive elements, became a new device for capturing an optical image.

Willard Boyle (left) and George Smith. 1974 Photo: Alcatel-Lucent/Bell Labs

CCD sensor. Photo: Alcatel-Lucent/Bell Labs

But to create a portable digital camera based on a new photodetector, it was necessary to develop its small-sized components with low power consumption: an analog-to-digital converter, a processor for processing electrical signals, a small high-resolution monitor, and a non-volatile information storage device. The problem of creating a multi-element CCD structure seemed no less urgent. It is interesting to trace some stages of digital photography creation.

The first CCD matrix, created 40 years ago by the future Nobel laureates, contained only seven photosensitive elements. On its basis, in 1970, scientists from Bell Labs created a prototype electronic video camera. Two years later, Texas Instruments received a patent for an "All-Electronic Device for Recording and Reproducing Still Images." Although the images were stored on magnetic tape and played back on a TV screen - that is, the device was in fact analog - the patent gave an exhaustive description of a digital camera.

In 1974, an astronomical electronic camera was built around a Fairchild CCD (black-and-white, with a resolution of 100x100 pixels). (Pixel is a contraction of the English words picture (pix-) and element (-el), i.e. picture element.) Using the same CCD sensors, a year later Kodak engineer Steve Sasson created the first more or less portable camera. A 100x100 pixel image took 23 seconds to record onto a magnetic cassette, and the camera weighed almost three kilograms.

1975, prototype of the first Kodak digital still camera in the hands of engineer Steve Sasson.

In the former USSR, similar developments were also carried out. In 1975, television cameras based on domestic CCDs were tested.

In 1976, Fairchild launched the first commercial electronic camera, the MV-101, which was used on assembly lines for product quality control. The image was transferred to a minicomputer.

Finally, in 1981, Sony Corporation announced the Mavica electronic camera (short for Magnetic Video Camera), based on an SLR camera with interchangeable lenses. For the first time in a consumer camera, the image receiver was a semiconductor matrix: a CCD measuring 10x14 mm with a resolution of 570x490 pixels. This is how the first prototype of a digital still camera (DSC) appeared. It recorded individual frames in analog form, in NTSC format, on a medium with a metallized surface - a two-inch floppy disk called the Mavipak - and was therefore officially called a "still video camera". Technically, the Mavica was a continuation of Sony's line of CCD television cameras: bulky cameras with cathode-ray tubes had already been replaced by a compact device based on a solid-state CCD sensor, another area in which the invention of the current Nobel laureates is used.

Sony Mavica

Since the mid-1980s, almost all leading photographic brands and a number of electronics giants have been working on digital cameras. In 1984, Canon created the Canon D-413 camcorder, with twice the resolution of the Mavica. Several companies developed digital camera prototypes: Canon launched the Q-PIC (or ION RC-250); Nikon, the QV1000C prototype DSC with analog data recording; Pentax demonstrated a prototype DSC called the PENTAX Nexa with a 3x zoom lens, whose CCD receiver also acted as the metering sensor. Fuji introduced the DS-1P digital still camera (DSC) at Photokina, although it was never commercially released.


Nikon QV1000C


Pentax Nexa


Canon Q-PIC (or ION RC-250)

In the mid-1980s, Kodak developed an industrial design for a 1.4-megapixel CCD sensor and coined the term "megapixel" itself.

The first camera to save images as digital files was the Fuji DS-1P (Digital Still Camera, DSC), announced in 1988 and equipped with 16 MB of built-in volatile memory.

Fuji DS-1P (Digital Still Camera, DSC)

At PMA in 1990, Olympus showed a prototype of the Olympus 1C digital camera. At the same exhibition, Pentax demonstrated its advanced PENTAX EI-C70 camera, equipped with an active autofocus system and an exposure compensation function. Finally, the amateur DSC Dycam Model 1, better known as the Logitech FotoMan FM-1, appeared on the American market. Its CCD matrix, with a resolution of 376x284 pixels, produced only black-and-white images. The pictures were stored in ordinary RAM (not flash memory), so when the two AA batteries were removed or discharged, they were lost forever. There was no display for reviewing frames, and the lens had manual focus.

Logitech FotoMan FM-1

In 1991, Kodak fitted the Nikon F3 professional camera with a digital back, calling the result the Kodak DCS 100. Images were recorded on a hard disk housed in a separate unit weighing about 5 kg.

Kodak DCS 100

In 1992, Sony, Kodak, Rollei and other companies introduced high-resolution cameras that could be classed as professional. Sony demonstrated the Seps-1000, whose photosensitive element was made up of three CCDs, providing a resolution of 1.3 megapixels. Kodak developed the DCS 200 based on a Nikon camera.

At the Photokina exhibition in 1994, the Kodak DCS 460 professional high-resolution digital camera was announced; its CCD contained 6.2 megapixels. It was developed on the basis of the Nikon N90 professional film SLR. The CCD itself, 18.4x27.6 mm in size, was built into an electronic back docked to the camera body. In the same year, 1994, the first flash cards in CompactFlash and SmartMedia formats appeared, with capacities from 2 to 24 MB.

Kodak DCS 460

The year 1995 was the starting point for the mass development of digital cameras. Minolta, together with Agfa, produced the RD175 camera (CCD matrix of 1528x1146 pixels). About 20 amateur DSC models were already on show at the Las Vegas exhibition: a compact digital camera from Kodak with a resolution of 768x512 pixels, 24-bit color depth and built-in memory holding up to 20 pictures; the pocket ES-3000 from Chinon with a resolution of 640x480 and removable memory cards; compact Epson Photo PC cameras with two possible resolutions, 640x480 and 320x240 pixels; the Fuji X DS-220 with an image size of 640x480 pixels; and the Ricoh RDC-1, capable of both still and video recording at the Super VHS resolution of 768x480 pixels. The RDC-1 was equipped with a 3x zoom lens with a focal length of 50-150 mm (35 mm equivalent), automatic focusing, exposure metering and white balance, and an LCD display for quick review of the captured frames. Casio also showed commercial samples of its cameras. The first consumer cameras were released: the Apple QuickTake 150, Kodak DC40, Casio QV-11 (the first digital camera with an LCD display and the first with a swivel lens) and Sony Cyber-shot.

So the digital race began to gain momentum. Today there are thousands of models of digital cameras, camcorders and phones with built-in cameras. The marathon is far from over.

Note that some digital cameras are equipped with a CMOS image sensor (CMOS stands for complementary metal-oxide-semiconductor). Without going into the topological features of CMOS and CCD matrices, we emphasize that they differ seriously only in the way the electronic signal is read out; both types of matrices are built on light-sensitive MOS (metal-oxide-semiconductor) structures.

1. The purpose of the work

To study analog and digital imaging technologies; the basic principles of operation, design, controls and settings of modern cameras; the classification and structure of black-and-white and color negative photographic films, the main characteristics of photographic films, and a method of choosing photographic materials for specific photographic tasks; and analog and digital photography technologies. To gain practical skills in operating the devices studied.

2. Theoretical information about the device of a film (analogue) camera

A modern autofocus camera is justifiably compared to the human eye. Fig. 1 (left) schematically shows the human eye. When the eyelid opens, the light flux forming the image passes through the pupil, whose diameter is regulated by the iris depending on the light intensity (limiting the amount of light); it then passes through the lens, is refracted in it and is focused on the retina, which converts the image into electrical signals and transmits them along the optic nerve to the brain.

Fig. 1. Comparison of the human eye with a camera

Fig. 1 (right) schematically shows the design of a camera. When a photograph is taken, the shutter opens (controlling the exposure time), and the light flux forming the image passes through an opening whose diameter is regulated by the aperture (controlling the amount of light); it then passes through the lens, is refracted in it and is focused on the photographic material, which registers the image.

A film (analogue) camera is an optical-mechanical device with which photographs are taken. The camera contains interconnected mechanical, optical, electrical and electronic components (Fig. 2). A general-purpose camera consists of the following main parts and controls:

- housing with a light-tight chamber;

- lens;

- diaphragm;

- photographic shutter;

- Shutter button - initiates the shooting of a frame;

- viewfinder;

- focusing device;

- photographic film;

- cassette (or other device for holding the photographic film);

- film transport device;

- photoexposure meter;

- built-in flash;

- camera batteries.

Depending on the purpose and design, photographic devices have various additional devices to simplify, clarify and automate the process of photographing.

Fig. 2. The design of a film (analog) camera

Body - the structural basis of the camera, combining its components and parts into an optical-mechanical system. The walls of the body form a light-tight chamber, with the lens mounted at the front and the film placed at the back.

Lens (from the Latin objectus, object) - an optical system enclosed in a mount, facing the subject and forming its optical image. A photographic lens is designed to produce a light image of the subject on the photosensitive material. The character and quality of the photographic image largely depend on the properties of the lens. Lenses are either permanently built into the camera body or interchangeable. Depending on the ratio of the focal length to the diagonal of the frame, lenses are usually divided into normal, wide-angle and telephoto lenses.

Lenses with variable focal length (zoom lenses) allow images of different scales to be taken from a constant shooting distance. The ratio of the longest focal length to the shortest is called the zoom ratio of the lens; thus, lenses with a focal length variable from 35 to 105 mm are called 3x zoom lenses.

Diaphragm (from the Greek diaphragma) - a device with which the beam of rays passing through the lens is limited to reduce the illumination of the photographic material at the time of exposure and change the depth of the sharply depicted space. This mechanism is implemented in the form of an iris diaphragm, consisting of several blades, the movement of which ensures a continuous change in the diameter of the hole (Fig. 3). The aperture value can be set manually or automatically using special devices. In the lenses of modern cameras, the aperture setting is performed from the electronic control panel on the camera body.

Fig. 3. The iris mechanism consists of a series of overlapping blades

Photographic shutter - a device that exposes the photographic material to light for a set time, called the shutter speed (exposure time). The shutter is opened at the photographer's command when the shutter button is pressed, or by a timing mechanism - the self-timer. Shutter speeds timed by the shutter itself are called automatic. There is a standard series of shutter speeds, measured in seconds:

30, 15, 8, 4, 2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000, 1/4000

Adjacent values in this series differ by a factor of two: moving from one shutter speed (for example, 1/125) to its neighbor either doubles (1/60) or halves (1/250) the exposure time of the photographic material.
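
As a quick illustration (a sketch in Python, not part of the original text), the ratios between adjacent nominal values of the series can be checked directly; they are all close to 2, i.e. one exposure value (EV):

```python
# Nominal shutter-speed series from the list above (seconds).
nominal = [30, 15, 8, 4, 2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60,
           1/125, 1/250, 1/500, 1/1000, 1/2000, 1/4000]
# Ratio of each value to the next one; nominal values are rounded,
# so the ratios are close to, but not always exactly, 2.0.
ratios = [a / b for a, b in zip(nominal, nominal[1:])]
print([round(r, 2) for r in ratios])
```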

By design, shutters are divided into central (leaf) shutters and curtain-slit (focal-plane) shutters.

The central shutter has light cut-off blades: several metal petals located concentrically right next to the optical unit of the lens, or between its elements, driven by a system of springs and levers (Fig. 4). A simple clockwork mechanism is most often used as the timer in central shutters; at short shutter speeds, the opening time is regulated by the spring tension. Modern central shutters have an electronic timing unit, and the petals are held open by an electromagnet. Central shutters automatically provide shutter speeds in the range from 1 to 1/500 second.

Shutter-diaphragm - a central shutter in which the maximum opening of the petals is adjustable, so that the shutter simultaneously acts as a diaphragm.

In the central shutter, when the release button is pressed, the blades begin to diverge and open the light opening of the lens from the center to the periphery, like an iris diaphragm, forming an aperture centered on the optical axis. A light image therefore appears simultaneously over the entire area of the frame. As the petals diverge the illumination increases, and then, as they close, it decreases. The shutter returns to its original position before the next shot.

Fig. 4. Some types of central shutters: left - with single-acting light cut-offs; center - with double-acting light cut-offs; right - with light cut-offs that act as both shutter and aperture

The principle of operation of the central shutter ensures highly uniform illumination of the resulting image, and it allows flash to be used over almost the entire range of shutter speeds. Its disadvantage is the limited ability to achieve very short shutter speeds, owing to the large mechanical loads on the cut-off blades as their speed increases.

The curtain-slit (focal-plane) shutter has cut-offs in the form of curtains (corrugated brass tape) or a set of movably fastened lamellae (Fig. 5) made of light alloys or carbon fiber, located in close proximity to the photographic material (in the focal plane). The shutter is built into the camera body and is driven by a system of springs; in modern cameras, electromagnets are used instead of the springs of the classic slit shutter, their advantage being the high accuracy of the timed exposures. In the cocked state, the photographic material is covered by the first curtain. When the shutter is released, this curtain moves under the spring tension, opening the way for the light flux. At the end of the set exposure time, the light flux is blocked by the second curtain. At shorter shutter speeds, the two curtains move together at a fixed interval, and the photographic material is exposed through the slit between the trailing edge of the first curtain and the leading edge of the second; the exposure time is controlled by the width of this slit. The shutter returns to its original position before the next shot.

Fig. 5. Curtain-slit shutter (movement of the curtains across the frame window)

The curtain-slit shutter allows various interchangeable lenses to be used, since it is not mechanically connected to the lens, and it provides shutter speeds as short as 1/12000 s. However, it does not always give uniform exposure over the entire surface of the frame window, yielding to central shutters in this respect. Pulsed light sources can be used with a curtain-slit shutter only at those shutter speeds (the sync speed) at which the slit width ensures full opening of the frame window; in most cameras these are 1/30, 1/60, 1/90, 1/125 or 1/250 s.

Self-timer- a timer designed to automatically release the shutter with an adjustable delay after pressing the shutter button. Most modern cameras are equipped with a self-timer as an additional component in the shutter design.

Exposure meter - an electronic device for determining the exposure parameters (shutter speed and aperture value) for a given brightness of the subject and a given sensitivity of the photographic material. In automatic systems, the search for such a combination is called program processing. After the nominal exposure has been determined, the shooting parameters (f-number and shutter speed) are set on the corresponding scales of the lens and the shutter. In cameras with varying degrees of automation, both exposure parameters, or only one of them, are set automatically. To improve the accuracy of exposure determination, especially when shooting with interchangeable lenses, attachments and other accessories that significantly affect the effective aperture of the lens, the photocells of the exposure meter are placed behind the lens. Such a system of measuring the light flux is called TTL (Through The Lens). One variant of this system is shown in the diagram of the reflex viewfinder (Fig. 6). The metering sensor, which receives the light energy, is illuminated by light that has passed through the optical system of the lens mounted on the camera, including any filters, attachments and other devices fitted to it at the moment.

Viewfinder - an optical system designed to accurately determine the boundaries of the space included in the image field (frame).

Frame (from the French cadre) - a single photographic image of the subject. The frame boundaries are set by framing at the stages of shooting, processing and printing.

Framing during photo, film and video shooting - the deliberate choice of the shooting point, angle, shooting direction and lens angle of view in order to obtain the required placement of objects in the field of view of the camera's viewfinder and in the final image.

Cropping when printing or editing an image - the choice of the boundaries and aspect ratio of the photographic image. It allows everything insignificant or accidental that interferes with the perception of the image to be left outside the frame, and creates a pictorial emphasis on the plot-important part of the frame.

Optical viewfinders contain only optical and mechanical elements and do not contain electronic ones.

Parallax viewfinders are an optical system separate from the taking lens. Because the optical axis of the viewfinder does not coincide with the optical axis of the lens, parallax occurs. The effect of parallax depends on the angles of view of the lens and the viewfinder: the longer the focal length of the lens and, accordingly, the smaller its angle of view, the greater the parallax error. In the simplest camera models, the viewfinder and lens axes are simply made parallel, limiting the error to linear parallax, whose effect is smallest when focus is set to infinity. In more sophisticated cameras, the focusing mechanism is fitted with parallax compensation: the optical axis of the viewfinder is tilted towards the optical axis of the lens, and the smallest discrepancy is obtained at the focused distance. The advantage of the parallax viewfinder is its independence from the taking lens, which gives a brighter, albeit small, image with clear frame boundaries.

Telescopic viewfinder (Fig. 6). It is used in compact and rangefinder cameras and has a number of variants:

Galilean viewfinder - an inverted Galilean telescope, consisting of a short-focus negative objective and a long-focus positive eyepiece;

Albada viewfinder - a development of the Galilean viewfinder. The photographer observes the image of a frame line located near the eyepiece and reflected from the concave surface of the viewfinder lens. The position of the frame line and the curvature of the lenses are chosen so that its image appears to lie at infinity, which solves the problem of obtaining a sharp image of the frame boundaries. This is the most common type of viewfinder on compact cameras;

Parallax-free viewfinders.

The mirror (reflex) viewfinder consists of an objective, a deflecting mirror, a focusing screen, a pentaprism and an eyepiece (Fig. 6). The pentaprism turns the image the right way round, as our vision is accustomed to. During framing and focusing, the deflecting mirror reflects almost 100% of the light entering through the lens onto the ground glass of the focusing screen (where automatic focusing and exposure metering are present, part of the light flux is diverted to the corresponding sensors).

Beam splitter. When using a beam splitter (translucent mirror or prism), 50–90% of the light passes through a mirror tilted at an angle of 45° onto the photographic material, and 10–50% is reflected at an angle of 90° onto the frosted glass, where it is viewed through the eyepiece part, as in a mirror camera. The disadvantage of this viewfinder is its low efficiency when shooting in low light conditions.

Focusing consists in positioning the lens, relative to the surface of the photographic material (the focal plane), at the distance at which the image on this plane is sharp. Obtaining a sharp image is governed by the relationship between the distance from the first principal point of the lens to the subject and the distance from the second principal point of the lens to the focal plane. Fig. 7 shows five different subject positions and the corresponding image positions:

Fig. 6. Diagrams of the telescopic and reflex viewfinders

Fig. 7. Relationship between the distance from the principal point of the lens O to the object K and the distance from the principal point of the lens O to the image of the object K′

The space to the left of the lens (in front of the lens) is called the object space, and the space to the right of the lens (behind the lens) is called the image space.

1. If the object is at "infinity", then its image will be obtained behind the lens in the main focal plane, i.e. at a distance equal to the main focal length f.

2. As the subject approaches the lens, its image moves farther and farther back, towards the point F′2 at twice the focal length.

3. When the object is at the point F2, i.e. at a distance equal to twice the focal length, its image is at the point F′2. Moreover, whereas up to this moment the object was larger than its image, they now become equal in size.

5. When the object is at the point F1 (the front focal point), the rays leaving the lens form a parallel beam and no image is formed.

In close-up (macro) photography, the object is placed at a short distance (sometimes less than 2f), and various devices are used to extend the lens farther from the focal plane than its own mount allows.
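
The cases listed above follow from the thin-lens relation 1/f = 1/d + 1/d′. A minimal sketch in Python (the 50 mm focal length is just an assumed example, not a value from the text):

```python
import math

def image_distance(f_mm: float, object_mm: float) -> float:
    """Distance behind the lens at which an object at object_mm is imaged sharply."""
    if math.isinf(object_mm):
        return f_mm                      # object at "infinity": image in the focal plane
    if object_mm == f_mm:
        return math.inf                  # object at the front focus: rays exit parallel, no image
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

f = 50.0                                 # hypothetical normal lens
print(image_distance(f, math.inf))       # 50.0  -> case 1
print(image_distance(f, 2 * f))          # 100.0 -> case 3: image at 2f, unit magnification
print(image_distance(f, f))              # inf   -> case 5: no image
```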

Thus, in order to obtain a sharp image of the object being photographed, it is necessary to set the lens at a certain distance from the focal plane before shooting, that is, to focus. In cameras, focusing is performed by moving a group of objective lenses along the optical axis using a focusing mechanism. Usually, focusing is controlled by turning the ring on the lens barrel (it may not be available on cameras with a lens set to a hyperfocal distance or in devices that only have an automatic focus mode - autofocus).

It is impossible to focus directly on the surface of the photographic material, so various focusing devices are used for visual control of sharpness.

Focusing by distance scale on the lens barrel provides good results with lenses that have a large depth of field (wide-angle). This method of aiming is used in a large class of scale film cameras.

Focusing with a rangefinder is highly accurate and is used with fast lenses that have a relatively shallow depth of field. The layout of a rangefinder combined with the viewfinder is shown in Fig. 8. When the subject is observed through the viewfinder-rangefinder, two images are visible in the central part of the field of view: one formed by the optical channel of the rangefinder and the other by the viewfinder channel. Moving the lens along the optical axis acts through the levers (7) to rotate the deflecting prism (6) so that the image it transmits shifts horizontally. When the two images in the viewfinder field of view coincide, the lens is in focus.

Fig. 8. Schematic diagram of a rangefinder focusing device: a: 1 - viewfinder eyepiece; 2 - cube with a semi-transparent mirror layer; 3 - diaphragm; 4 - camera lens; 5 - rangefinder lens; 6 - deflecting prism; 7 - levers connecting the lens mount with the deflecting prism; b - the lens is focused by bringing the two images in the viewfinder field of view into coincidence (two images - the lens is not set accurately; one image - the lens is set accurately)

Focusing in a reflex camera. The layout of the SLR camera is shown in Fig. 6. Rays of light passing through the lens fall on the mirror and are reflected onto the matte surface of the focusing screen, forming a light image on it. This image is corrected by the pentaprism and viewed through the eyepiece. The distance from the rear principal point of the lens to the matte surface of the focusing screen is equal to the distance from that point to the focal plane (the film surface). The lens is focused by turning the ring on the lens barrel while continuously monitoring the image on the matte surface of the focusing screen, seeking the position at which the image is sharpest.

To make focusing easier and more accurate, various autofocus systems are used.

Autofocusing of the lens is carried out in several stages:

Measuring a focus-sensitive parameter of the image in the focal plane (the distance to the subject, the maximum image contrast, the phase shift of the components of a selected beam, the arrival delay of a reflected beam, etc.) and its direction of change (to choose the sign of the error signal and to predict the focusing distance at the next instant when the subject moves);

Generation of a reference signal equivalent to the measured parameter and determination of the error signal of the autofocus automatic control system;

Sending a signal to the focus actuator.

These processes take place almost simultaneously.

The optical system is focused by an electric motor. The time needed to measure the selected parameter and the time needed for the lens mechanics to work off the error signal determine the speed of the autofocus system.

The operation of the autofocus system can be based on various principles:

Active autofocus systems: ultrasonic; infrared.

Passive autofocus systems: phase (used in SLR film and digital cameras); contrast (camcorders, non-mirror digital cameras).

Ultrasonic and infrared systems calculate the distance to the subject from the time taken by the infrared (or ultrasonic) wavefronts emitted by the camera to return from it. A transparent barrier between the subject and the camera causes these systems to focus erroneously on the barrier rather than on the subject.

Phase-detection autofocus. The camera body contains special sensors that receive fragments of the light flux from different points of the frame via a system of mirrors. Inside the sensor, two separator lenses project a double image of the subject onto two rows of photosensitive elements, which generate electrical signals whose character depends on the amount of light falling on them. When the subject is in precise focus, the two light fluxes lie at a specific distance from each other, defined by the sensor design and an equivalent reference signal. When the plane of focus K (Fig. 9) is closer than the subject, the two signals move closer together; when it is farther than the subject, they move farther apart. The sensor measures this distance, generates an equivalent electrical signal and, comparing it with the reference signal in a specialized microprocessor, determines the error and issues a command to the focusing drive. The focusing motors of the lens execute the commands, refining the focus until the sensor signal matches the reference. The speed of such a system is very high and depends mainly on the speed of the lens focusing drive.
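
The idea can be illustrated with a highly simplified sketch (Python with NumPy; an assumed toy model, not actual camera firmware): the two strips seen through the separator lenses are shifted copies of one another, and the lag that best aligns them, compared with the reference separation, gives the sign and magnitude of the defocus.

```python
import numpy as np

def strip_shift(strip_a: np.ndarray, strip_b: np.ndarray) -> int:
    """Lag (in photosites) at which strip_a best matches strip_b."""
    corr = np.correlate(strip_a, strip_b, mode="full")
    return int(np.argmax(corr) - (len(strip_b) - 1))

# Synthetic example: the same edge recorded by the two sensor rows,
# displaced by 3 photosites because the lens is defocused.
base = np.zeros(64)
base[20:30] = 1.0
print(strip_shift(np.roll(base, 3), base))   # 3 -> defocus; the lens must be driven
print(strip_shift(base, base))               # 0 -> the image is in focus
```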

Contrast-detection autofocus. Its principle is based on continuous analysis by the microprocessor of the degree of image contrast and the generation of commands to move the lens until a sharp image of the subject is obtained. Contrast autofocus is slow because the microprocessor has no initial information about the current state of lens focus (the image is initially assumed to be blurred): it must command a shift of the lens from its initial position and then analyze the resulting image for the change in contrast. If the contrast has not increased, the processor reverses the sign of the command to the autofocus drive, and the motor moves the lens group in the opposite direction until the maximum contrast is recorded. When the maximum is reached, autofocusing stops.
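
A toy hill-climbing sketch of this search (an assumed model; measure_contrast stands in for whatever contrast metric the camera computes from the sensor):

```python
def contrast_af(measure_contrast, position: float, step: float = 1.0,
                max_iters: int = 100) -> float:
    """Move the (model) focus position until the contrast metric stops rising."""
    best = measure_contrast(position)
    direction = +1
    for _ in range(max_iters):
        trial = position + direction * step
        value = measure_contrast(trial)
        if value > best:
            position, best = trial, value    # contrast still rising: keep moving
        elif direction == +1:
            direction = -1                   # first drop: try the opposite direction
        else:
            break                            # contrast falls both ways: maximum found
    return position

# Hypothetical lens model whose contrast peaks at focus position 12.
print(contrast_af(lambda p: -(p - 12.0) ** 2, position=0.0))   # -> 12.0
```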

The delay between pressing the shutter button and the moment the frame is actually taken is explained by the operation of passive contrast autofocus and by the fact that in mirrorless cameras the processor has to read the entire frame from the sensor (CCD) in order to analyze just the focus areas for contrast.

Photo flash. Electronic flashes are used as the main or an additional light source and come in different types: built-in camera flash, external self-powered flash, and studio flash. Although a built-in flash has become a standard feature of all cameras, the higher output of stand-alone flashes offers the added benefits of more flexible aperture control and advanced shooting techniques.

Fig. 9. Diagram of phase-detection autofocus

The main components of the flash:

A pulsed light source is a gas-discharge lamp filled with an inert gas - xenon;

Lamp ignition device - step-up transformer and auxiliary elements;

Accumulator of electric energy - high-capacity capacitor;

Power supply device (batteries of galvanic cells or accumulators, current converter).

The nodes are combined into a single structure, consisting of a body with a reflector, or arranged into two or more blocks.

Flash discharge lamps are powerful light sources whose spectral characteristics are close to natural daylight. The lamps used in photography (Fig. 10) are a glass or quartz tube filled with an inert gas (xenon) at a pressure of 0.1-1.0 atm, with electrodes of molybdenum or tungsten installed at its ends.

The gas inside the lamp does not conduct electricity. To start the lamp (ignition), a third, trigger electrode is provided in the form of a transparent layer of tin dioxide. When a voltage not lower than the ignition voltage is applied to the electrodes, together with a high-voltage (>10000 V) ignition pulse between the cathode and the trigger electrode, the lamp fires. The high-voltage pulse ionizes the gas in the bulb along the outer electrode, creating an ionized cloud that connects the positive and negative electrodes of the lamp and allows the gas between the two main electrodes to ionize. Because the resistance of the ionized gas is 0.2-5 Ohm, the electrical energy stored in the capacitor is converted into light in a very short time. The pulse duration - the time over which the intensity of the pulse falls to 50% of its maximum value - is 1/400 to 1/20000 s or shorter. Quartz flash-lamp envelopes transmit light with wavelengths from 155 to 4500 nm, glass ones from 290 to 3000 nm. The emission of pulsed lamps begins in the ultraviolet part of the spectrum, which requires a special coating on the bulb; it not only cuts off the ultraviolet region, acting as a UV filter, but also corrects the color temperature of the pulsed source to the photographic standard of 5500 K.

Fig. 10. Design of a flash gas-discharge lamp

The energy of flash lamps is measured in joules (watt-seconds) according to the formula:

E max = C · (U ign² - U ext²) / 2,

where C is the capacitance of the capacitor (farads), U ign is the ignition voltage (volts), U ext is the extinction voltage (volts), and E max is the maximum energy (W·s).

The flash energy depends on the capacitance and voltage of the storage capacitor.
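
A one-function sketch of this relation, using the stored-energy formula reconstructed above (the component values in the example are purely hypothetical):

```python
def flash_energy_j(capacitance_f: float, u_ignition_v: float, u_extinction_v: float) -> float:
    """Energy (joules = watt-seconds) released between ignition and extinction voltages."""
    return capacitance_f * (u_ignition_v ** 2 - u_extinction_v ** 2) / 2.0

# Example: a 1000 uF capacitor charged to 330 V, with the lamp extinguishing at 50 V.
print(round(flash_energy_j(1000e-6, 330.0, 50.0), 1))   # ~53.2 J
```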

Three ways to control flash energy.

1. Parallel connection of several capacitors (C = C1 + C2 + C3 + ... + Cn), with groups of them switched on and off to control the output power. With this method the color temperature remains stable, but the power can be adjusted only in discrete steps.

2. Changing the initial voltage on the storage capacitor allows the energy to be adjusted within 100-30%; at lower voltages the lamp does not fire. A further development of this technique introduces an additional low-capacity capacitor into the lamp start-up circuit, which is charged to a voltage sufficient to start the lamp while the remaining capacitors are charged to a lower value; this makes it possible to obtain any intermediate power from 1:1 to 1:32 (100-3%). In this mode the discharge is close in character to a glow discharge, which lengthens the burn time of the lamp, and the overall color temperature of the radiation approaches the standard 5500 K.

3. Interruption of the pulse duration when the required power is reached. If, at the moment of ionization of the gas in the lamp bulb, the electrical circuit leading from the capacitor to the lamp is broken, the ionization will stop and the lamp will go out. This method requires the use of special electronic circuits in the control of a flash lamp that monitor a given voltage drop across the capacitor, or take into account the luminous flux returned from the subject.

Guide number - the flash output expressed in arbitrary units, equal to the product of the distance from the flash to the subject and the f-number. The guide number depends on the flash energy, the angle of light dispersion and the design of the reflector. It is usually quoted for photographic material with a sensitivity of ISO 100.

Knowing the guide number and the distance from the flash to the subject, the aperture required for correct exposure can be found from the formula:

f-number = guide number / distance (m).

For example, with a guide number of 32: aperture 8 = 32/4 (m), aperture 5.6 = 32/5.7 (m), aperture 4 = 32/8 (m).

The amount of light is inversely proportional to the square of the distance from the light source to the object (the first law of illumination), therefore, to increase the effective flash distance by 2 times, with a fixed aperture value, it is necessary to increase the sensitivity of the photographic material by 4 times (Fig. 11).

Fig. 11. The first law of illumination

For example, with a guide number of 10 and aperture of 4, we get:

At ISO100 - effective distance =10/4=2.5 (m)

At ISO400 - effective distance = 5 (m)
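
A small sketch of the guide-number arithmetic above (assumptions: the guide number is quoted for the reference ISO, and quadrupling the sensitivity doubles the range, in line with the inverse-square law):

```python
import math

def aperture_for_flash(guide_number: float, distance_m: float,
                       iso: float = 100.0, ref_iso: float = 100.0) -> float:
    """f-number giving nominal flash exposure at the given distance."""
    return guide_number * math.sqrt(iso / ref_iso) / distance_m

def flash_range_m(guide_number: float, f_number: float,
                  iso: float = 100.0, ref_iso: float = 100.0) -> float:
    """Effective flash distance for a fixed aperture."""
    return guide_number * math.sqrt(iso / ref_iso) / f_number

print(aperture_for_flash(32, 4))        # 8.0  (guide number 32, subject at 4 m)
print(flash_range_m(10, 4, iso=100))    # 2.5 m
print(flash_range_m(10, 4, iso=400))    # 5.0 m
```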

Flash auto modes

A modern flash, in accordance with the film speed and aperture set on the camera, can dose the amount of light by interrupting the lamp discharge on command from the automation. The amount of light can be adjusted only downwards: either a full discharge is given, or a smaller part of it if the subject is close enough and maximum energy is not required. The automation of such units measures the light reflected from the subject, assuming it to be a medium-gray object with a reflectance of 18%, which can lead to exposure errors if the actual reflectance differs significantly from this value. To deal with this, flashes have an exposure compensation mode, which allows the flash energy to be adjusted, according to the lightness of the subject, either up (+) or down (-) from the level calculated by the automation. The mechanism of flash exposure compensation is similar to that discussed earlier.

It is very important to know at what shutter speeds manual or automatic flash may be used, because the duration of the flash pulse is very short (thousandths of a second). The flash must fire while the shutter is fully open, otherwise the shutter curtain will shade part of the image in the frame. This shutter speed is called the sync speed; it ranges from 1/30 to 1/250 s on different cameras. If a shutter speed longer than the sync speed is chosen, the moment at which the flash fires can also be selected.

First-curtain (opening) sync fires the pulse immediately after the frame window is fully opened; the moving subject is then lit by the constant (ambient) light for the rest of the exposure, leaving a blurred trail in the frame. In this case the trail appears in front of the moving subject.

Second-curtain (closing) sync fires the pulse just before the shutter begins to close the frame window. As a result, the trail from a moving subject is exposed behind it, emphasizing the dynamics of its movement.

The most advanced flash models have a mode that divides the energy into equal parts and emits them at a set frequency over a set time interval. This mode is called stroboscopic, and the frequency is specified in hertz (Hz). If the subject moves within the frame, the stroboscopic mode captures the individual phases of its movement, "freezing" them with light: all phases of the subject's movement can be seen in a single frame.

Red-eye effect. When shooting people with flash, their pupils may appear red in the picture. Red-eye is caused by the reflection of light emitted by a flash from the retina at the back of the eye, which is returned directly to the lens. This effect is typical for the built-in flash due to its close location to the optical axis of the lens (Fig. 12).

Ways to reduce red-eye

Using a compact camera to take pictures can only reduce the chance of red-eye. The problem is also subjective in nature - there are people who can experience red-eye even when shooting without a flash ...

Fig. 12. How the red-eye effect is formed

To reduce the likelihood of the "red eye" effect, there are a number of methods based on the property of the human eye to reduce the size of the pupil with increasing illumination. The eyes are illuminated with the help of a preliminary flash (lower power) before the main pulse or a bright lamp at which the subject must look.

The only reliable way to combat this effect is to use an external stand-alone flash on an extension cord, placing its optical axis approximately 60 cm from the optical axis of the lens.

Film transport. Modern film cameras are equipped with a built-in motor drive to transport the film inside the camera. After each shot, the film is automatically rewound to the next frame and the shutter is cocked at the same time.

There are two film transport modes: single frame and continuous. In single-frame mode, one shot is taken after pressing the shutter button. Continuous mode shoots a series of shots for as long as the shutter button is pressed. Film rewind is done automatically by the camera.

The film transport mechanism consists of the following elements:

Film cassette;

Take-up spool on which the film is wound;

A toothed (sprocket) roller engages the perforations and advances the film in the frame window by one frame. More advanced film transport systems use plain rollers instead of a sprocket, and one row of film perforations is read by a sensor system to position the film precisely for the next frame;

A lock for opening and closing the rear cover when changing the film cassette.

Cassette - a light-tight metal case in which the film is stored; it is loaded into the camera before shooting and removed after shooting. The cassette of a 35 mm camera is cylindrical, consists of a spool, a body and a cap, and holds film up to 165 cm long (36 frames).

Photographic film - a photosensitive material on a flexible transparent base (polyester, nitrate or cellulose acetate) coated with a photographic emulsion containing silver halide grains, which determine the sensitivity, contrast and optical resolution of the film. After exposure to light (or other forms of electromagnetic radiation, such as X-rays), a latent image forms on the film; subsequent chemical processing turns it into a visible image. The most common type is perforated film 35 mm wide with 12, 24 or 36 frames (frame format 24 × 36 mm).

Photographic films are divided into: professional and amateur.

Professional films are designed for more precise exposure and post-processing, come with tighter tolerances for key features, and typically require cold storage. Amateur films are less demanding on storage conditions.

Photographic film is either black-and-white or color:

Black-and-white film is designed for capturing black-and-white negative or positive images in the camera. Black-and-white film has a single layer of silver salts; on exposure to light and subsequent chemical processing, the silver salts turn into metallic silver. The structure of black-and-white photographic film is shown in Fig. 13.

Fig. 13. Structure of black-and-white negative film

Color film is designed for capturing color negative or positive images in the camera. Color film uses at least three layers. Sensitizing dyes adsorbed on the silver halide crystals make the crystals sensitive to different parts of the spectrum; this way of changing the spectral sensitivity is called sensitization. The layer sensitive only to blue, usually unsensitized, lies on top. Since all the other layers are sensitive to blue in addition to "their own" regions of the spectrum, they are separated from it by a yellow filter layer; then come the green- and red-sensitive layers. During exposure, clusters of metallic silver atoms form in the silver halide crystals, just as in black-and-white film. This metallic silver is then used to develop the color dyes (in proportion to the amount of silver), after which it is converted back into salts and washed out during bleaching and fixing, so the image in color film is formed by the color dyes. The structure of color photographic film is shown in Fig. 14.

Fig. 14. Structure of color negative film

There is a special monochrome film, it is processed using the standard color process, but produces a black and white image.

Color photography became widespread due to the appearance of various cameras, modern negative materials and, of course, the development of a wide network of mini-photo labs that allow you to quickly and accurately print pictures of various formats.

The photographic film is divided into two large groups:

Negative. On film of this type the image is inverted: the lightest parts of the scene correspond to the darkest parts of the negative, and on color film the colors are inverted as well. Only when printed on photographic paper does the image become positive (true) (Fig. 15).

Reversal, or slide, film is so called because the colors on the processed film correspond to the real ones - a positive image. Reversal film, often referred to as slide film, is used mainly by professionals and gives excellent results in color richness and fine detail. The developed reversal film is already the final product - a transparency (each frame is unique).

The term "slide" means a transparency mounted in a 50 × 50 mm frame (Fig. 15). Slides are used mainly for projection onto a screen with a slide projector and for digital scanning for printing purposes.

Selecting the film speed

Light sensitivity (speed) of a photographic material is its ability to form an image under the action of electromagnetic radiation, in particular light. It characterizes the exposure needed to render the photographed scene normally in the picture and is expressed numerically in ISO units (from the International Organization for Standardization), the universal standard for calculating and designating the sensitivity of all photographic films and digital camera sensors. The ISO scale is arithmetic: doubling the value corresponds to doubling the sensitivity of the photographic material. ISO 200 is twice as fast as ISO 100 and half as fast as ISO 400. For example, if a given scene requires an exposure of 1/30 s at f/2.0 for ISO 100, then at the same f/2.0 the shutter speed can be reduced to 1/60 s at ISO 200 and to 1/125 s at ISO 400.
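
The same reciprocity can be expressed as a tiny helper (a sketch, not from the source; the printed values correspond to the nearest standard shutter speeds):

```python
def equivalent_shutter_s(base_shutter_s: float, base_iso: float, new_iso: float) -> float:
    """Exposure time preserving the same exposure at a fixed f-number."""
    return base_shutter_s * base_iso / new_iso

print(round(1 / equivalent_shutter_s(1/30, 100, 200)))   # 60  -> 1/60 s
print(round(1 / equivalent_shutter_s(1/30, 100, 400)))   # 120 -> nominally 1/125 s
```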

Among general purpose color negative films, the most common are ISO100, ISO 200, and ISO 400. The most sensitive general purpose film is ISO 800.

A situation is possible when in the simplest cameras there is not enough range of exposure parameters (shutter speed, aperture) for specific shooting conditions. Table 1 will help you navigate the choice of sensitivity for the planned shooting.

Fig. 15. The analog photographic process

Fig. 16. Analog photography technology

Table 1

Evaluation of the possibility of shooting on photographic material of different photosensitivity

Columns: light sensitivity (ISO); shooting conditions - sun, cloudy weather, movement/sport, flash photography (entries indicate the conditions under which shooting is permissible).

The lower the ISO speed of a film, the less grainy it is, especially at high magnifications. Always use the lowest ISO speed film suitable for the shooting conditions.

Film graininess describes how visually noticeable it is that the image is not continuous but consists of individual grains (clumps) of dye. Graininess is expressed in relative granularity units (RMS in the English-language literature). The value is rather subjective, since it is determined by visual comparison of test samples under a microscope.

Color distortion. Color distortions related to film quality show up as reduced color differences between details in highlights and shadows (gradation distortion), reduced color saturation (color-separation distortion) and reduced color differences between fine image details (visual distortion). Most color films are universal and balanced for shooting in daylight with a color temperature of 5500 K (the kelvin is the unit of color temperature of a light source) or with flash (5500 K). A mismatch between the color temperature of the light source and that for which the film is balanced causes color distortion (unnatural tones) in the print. When shooting on daylight-balanced film, artificial lighting has a significant effect on image color: fluorescent lamps (2800-7500 K) and incandescent lamps (2500-2950 K).

Let's take a look at some of the most typical examples of shooting on universal film for natural light:

- Shooting in clear sunny weather. The color rendition in the picture is correct - real.

- Shooting indoors with fluorescent lamps. The color rendition in the picture is shifted towards the predominance of green.

- Shooting indoors with incandescent lights. The color rendition in the picture is shifted towards the predominance of a yellow-orange hue.

Such color distortions require the introduction of color correction during photography (correction filters) or during photo printing, so that the perception of prints is close to the real one.

Modern photographic films are packaged in metal cassettes. Photocassettes have a code on their surface containing information about the film.

DX coding - a way of designating the type of photographic film, its parameters and characteristics, so that these data can be read and processed automatically by the control system of an automatic camera during shooting, or by an automatic mini photo lab during printing.

DX coding uses a bar code and a checkerboard code. The bar code (for the mini photo lab) is a series of parallel dark stripes of different widths with light gaps, applied in a specific order to the surface of the cassette and directly to the film. It contains the data needed for automatic development and printing: the film type, its color balance and the number of frames.

The checkerboard DX code is intended for automatic cameras and is made up of 12 light and dark rectangles alternating in a specific order on the surface of the cassette (Fig. 17). Conductive (metal-colored) areas of the code correspond to "1" and insulated (black) areas to "0" of a binary code. For the camera, the film speed, the number of frames and the photographic latitude are encoded. Zones 1 and 7 are always conductive, corresponding to "1" (common contacts); zones 2-6 encode the film speed; zones 8-10 the number of frames; and zones 11-12 the photographic latitude of the film, i.e. the maximum permissible deviation of exposure from the nominal value (EV). A sketch of this zone layout is shown below.
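
The sketch below (Python) is only an illustration of the zone layout described above, not a decoder: the standardized bit-to-value lookup tables are not reproduced here, and the example pattern is arbitrary.

```python
def split_dx_zones(bits):
    """bits: 12 values, 1 = conductive, 0 = insulated, zone 1 first."""
    assert len(bits) == 12 and bits[0] == 1 and bits[6] == 1, "zones 1 and 7 are common contacts"
    return {
        "speed_code":    tuple(bits[1:6]),    # zones 2-6: film speed code
        "frames_code":   tuple(bits[7:10]),   # zones 8-10: number of exposures
        "latitude_code": tuple(bits[10:12]),  # zones 11-12: exposure latitude
    }

# Arbitrary illustrative pattern (not a real film's code).
print(split_dx_zones([1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0]))
```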


Fig. 17. DX coding with the checkerboard code

Dynamic Range - one of the main characteristics of photographic materials (photographic film, matrix of a digital photo or video camera) in photography, television and cinema, which determines the maximum range of brightness of the subject, which can be reliably transmitted by this photographic material at nominal exposure. Reliable transmission of brightness means that equal differences in the brightness of the elements of the object are transmitted by equal differences in brightness in its image.

Dynamic range is the ratio of the maximum permissible value of the measured quantity (brightness) to its minimum value (the noise level). It is measured as the ratio of the maximum and minimum exposures of the linear section of the characteristic curve. Dynamic range is usually measured in exposure values (EV) or f-stops and expressed as a base-2 logarithm (EV), or more rarely (in analog photography) as a decimal logarithm (denoted by the letter D); 1 EV = 0.3 D.

The photographic latitude can be written as L = log(Hmax / Hmin), where L is the photographic latitude and Hmax, Hmin are the exposures H bounding the linear section of the characteristic curve (Fig. 18); the base-2 logarithm gives L in EV, the decimal logarithm in density units D.
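
A minimal sketch of these unit conversions (Python; the 1000:1 ratio is just an example value):

```python
import math

def dynamic_range_ev(h_max: float, h_min: float) -> float:
    """Dynamic range in EV: base-2 logarithm of the exposure (brightness) ratio."""
    return math.log2(h_max / h_min)

def ev_to_density(ev: float) -> float:
    """Convert EV to density units using 1 EV = 0.3 D."""
    return 0.3 * ev

dr = dynamic_range_ev(1000, 1)                     # a 1000:1 brightness ratio
print(round(dr, 1), round(ev_to_density(dr), 1))   # ~10.0 EV, ~3.0 D
```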

To characterize the dynamic range of photographic films, the concept of photographic latitude is usually used: the range of brightness that the film can render without distortion, at uniform contrast (the brightness range of the linear part of the film's characteristic curve).

The characteristic curve of silver halide photographic materials (photographic film, etc.) is non-linear (Fig. 18). In its lower part lies the fog region; D0 is the optical density of the fog (for photographic film, the density of unexposed material). Between the points D1 and D2 there is a section, corresponding to the photographic latitude, over which blackening grows almost linearly with increasing exposure. At long exposures the blackening of the material reaches a maximum Dmax (for photographic film, the density of fully exposed areas).

In practice, the term "useful photographic latitude" Lmax is used, corresponding to a longer section of "moderate non-linearity" of the characteristic curve, from the threshold of least blackening D0 + 0.1 to a point near the maximum optical density of the photographic layer, Dmax - 0.1.

Photosensitive elements operating on the photoelectric principle have a physical limit known as the "charge quantization limit". The electric charge in one photosensitive element (sensor pixel) consists of electrons (up to about 30,000 in a saturated element; for digital devices this is the "maximum" pixel value, which limits the photographic latitude from above), while the element's own thermal noise is no lower than 1-2 electrons. Since the number of electrons roughly corresponds to the number of photons absorbed by the element, this sets the maximum theoretically achievable photographic latitude of the element at about 15 EV (the base-2 logarithm of 30,000).
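
The 15 EV figure is simply the base-2 logarithm of the full-well charge over the noise floor quoted above; a one-line check:

```python
import math
print(round(math.log2(30_000 / 1), 1))   # ~14.9 EV, i.e. about 15 EV
```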

Fig. 18. Characteristic curve of photographic film

For digital devices, the lower limit (Fig. 19) is set by the growth of "digital noise", whose sources include the thermal noise of the sensor, charge-transfer noise, and the error of analog-to-digital conversion (ADC), also called "sampling noise" or "quantization noise".

Fig. 19. Characteristic curve of a digital camera sensor

In ADCs of different bit depths (numbers of bits) used to quantize the signal into a binary code (Fig. 20), the greater the number of quantization bits, the smaller the quantization step and the higher the conversion accuracy. During quantization, the number of the nearest quantization level is taken as the sample value.

Quantization noise means that a continuous change in brightness is transmitted as a discrete, stepped signal, so different levels of subject brightness are not always rendered as different levels of the output signal. With a three-bit ADC, for example, any change in brightness within the range from 0 to 1 exposure stops is converted to the value 0 or 1, so all image details lying in this exposure range are lost. With a 4-bit ADC, detail in the 0 to 1 stop range can be conveyed, which in practice means an increase in photographic latitude of 1 stop (EV). Hence the photographic latitude of a digital camera (expressed in EV) cannot exceed the bit depth of its analog-to-digital conversion.
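
A toy sketch (NumPy; the ramp of values is synthetic) of why the ADC bit depth bounds the recordable latitude: an n-bit converter maps the signal to 2**n levels, and brightness differences smaller than one step are lost.

```python
import numpy as np

def quantize(signal: np.ndarray, bits: int) -> np.ndarray:
    """Round a normalized signal (0..1) to the nearest of 2**bits levels."""
    levels = 2 ** bits
    return np.round(signal * (levels - 1)) / (levels - 1)

shadow_ramp = np.linspace(0.0, 0.02, 5)        # faint gradations deep in the shadows
print(np.unique(quantize(shadow_ramp, 3)))     # 3-bit:  all collapse to a single level
print(np.unique(quantize(shadow_ramp, 12)))    # 12-bit: the gradations are preserved
```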

Fig. 20. Analog-to-digital conversion of brightness

The term photographic latitude is also understood as the permissible deviation of exposure from the nominal value for a given photographic material and given shooting conditions, while preserving detail in the light and dark parts of the scene.

For example, the photographic latitude of KODAK GOLD film is 4 stops (-1 EV ... +3 EV). This means that at the nominal exposure for a given scene of f/8, 1/60 s, details of acceptable quality will be obtained in the picture over the range that, at fixed aperture, would correspond to shutter speeds from 1/125 s to 1/8 s.

When using FUJICHROME PROVIA slide film, with a photographic latitude of 1 stop (-0.5 EV ... +0.5 EV), the exposure must be determined as accurately as possible: at the same nominal exposure of f/8, 1/60 s and fixed aperture, details of acceptable quality are obtained only within the range corresponding to shutter speeds from 1/90 s to 1/45 s.
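
The shutter-speed ranges in these two examples follow directly from the latitude expressed in EV; a short sketch:

```python
def exposure_range_s(nominal_s: float, ev_minus: float, ev_plus: float):
    """Shortest and longest exposure times covered by the stated latitude, at fixed aperture."""
    return nominal_s / 2 ** ev_minus, nominal_s * 2 ** ev_plus

short, long_ = exposure_range_s(1 / 60, ev_minus=1, ev_plus=3)
print(f"1/{1/short:.0f} s ... 1/{1/long_:.0f} s")   # 1/120 s ... 1/8 s (nominally 1/125 ... 1/8)
```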

Insufficient photographic latitude of the photographic process leads to the loss of image details in the light and dark parts of the scene (Fig. 21).

The dynamic range of the human eye is ≈15EV, the dynamic range of typical subjects is up to 11EV, the dynamic range of a night scene with artificial lighting and deep shadows can be up to 20EV. It follows that the dynamic range of modern photographic materials is insufficient to convey any scene of the surrounding world.

Typical figures for the dynamic range (useful photographic latitude) of modern photographic materials:

– color negative films: 9–10 EV;

– color reversal (slide) films: 5–6 EV;

– digital camera matrices: compact cameras 7–8 EV, SLR cameras 10–14 EV;

– photographic prints (reflective): 4–6.5 EV.

Fig. 21. Influence of the dynamic range of photographic material on the result of shooting

Camera batteries

Chemical current sources are devices in which the energy of the chemical reactions occurring inside them is converted into electricity.

The first chemical current source was invented by the Italian scientist Alessandro Volta in 1800. Volta's cell was a vessel of salt water into which zinc and copper plates, connected by a wire, were lowered. The scientist then assembled a battery of such cells, which later became known as the voltaic pile (Fig. 22).

Fig. 22. Voltaic pile

The basis of a chemical current source is a pair of electrodes (a cathode containing the oxidizing agent and an anode containing the reducing agent) in contact with an electrolyte. A potential difference is established between the electrodes: an electromotive force corresponding to the free energy of the redox reaction. The operation of chemical current sources is based on spatially separated processes in a closed external circuit: the reducing agent is oxidized at the anode, and the freed electrons travel through the external circuit, creating an electric current, to the cathode, where they take part in the reduction of the oxidizing agent.

Modern chemical current sources use:

– as the reducing agent (at the anode): lead (Pb), cadmium (Cd), zinc (Zn) and other metals;

– as the oxidizing agent (at the cathode): lead dioxide (PbO2), nickel oxyhydroxide (NiOOH), manganese dioxide (MnO2), etc.;

– as the electrolyte: solutions of alkalis, acids or salts.

According to whether they can be used repeatedly, chemical current sources are divided into:

– galvanic cells, which, because the chemical reactions in them are irreversible, cannot be reused (recharged);

– electric accumulators: rechargeable cells that can be restored with an external current source (charger) and used repeatedly.

A galvanic cell is a chemical source of electric current, named after Luigi Galvani. Its operation is based on the interaction of two metals through an electrolyte, which produces an electric current in a closed circuit. The EMF of a galvanic cell depends on the electrode materials and the composition of the electrolyte.

The most widely used today are salt (zinc–carbon) and alkaline cells in standard sizes, designated by ISO and IEC codes.

As the chemical energy is exhausted, the voltage and current drop and the cell stops working. Different galvanic cells discharge differently: salt cells lose voltage gradually, while lithium cells maintain their voltage throughout almost the entire period of operation.

An electric accumulator (rechargeable battery) is a reusable chemical current source. Accumulators are used for energy storage and for the autonomous power supply of various devices. Several cells combined into one electrical circuit are called a battery. Battery capacity is usually measured in ampere-hours. The electrical and performance characteristics of a battery depend on the electrode material and the composition of the electrolyte.
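As a rough illustration of what a capacity figure in ampere-hours means in practice, here is a tiny Python estimate; the capacity and current-draw numbers are hypothetical, not the specification of any particular battery or camera:

```python
# Capacity in ampere-hours divided by the average current draw gives the
# approximate runtime. Both figures below are hypothetical example values.
capacity_ah = 2.0          # e.g. a 2000 mAh battery
average_draw_a = 0.4       # average current drawn by the camera, in amperes

runtime_hours = capacity_ah / average_draw_a
print(f"Approximate runtime: {runtime_hours:.1f} h")   # 5.0 h
```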

The operation of an accumulator is based on the reversibility of its chemical reaction. As the chemical energy is used up, the voltage and current drop and the battery discharges. Its performance can be restored by charging it with a special device that passes current through it in the direction opposite to the discharge current.

Despite the abundance of photographers, many of them self-taught, few can tell the history of photography in any detail. That is what we will do today. After reading this article, you will learn what a camera obscura is, what material the first photograph was made on, and how instant photography appeared.

Where did it all begin?

People have known about the chemical properties of sunlight for a long time. Even in ancient times anyone could tell you that the sun's rays darken the skin, and people guessed at the effect of light on the taste of beer and the sparkle of precious stones. There are more than a thousand years of recorded observations of how various objects behave under the influence of ultraviolet radiation (the type of radiation characteristic of the sun).

The first analogue of photography came into real use as early as the 10th century AD.

It took the form of the so-called camera obscura: a completely dark room with a small round hole in one wall that lets in light. Through this hole a projection of the outside scene appeared on the opposite wall, which artists of the time traced and "finished" to obtain beautiful drawings.

The image on the wall was upside down, but that did not make it any less beautiful. The phenomenon was described by the Arab scientist Alhazen of Basra. He spent a long time studying light rays and first noticed the camera obscura effect on the darkened white wall of his tent. He used it to observe eclipses of the sun: even then it was understood that looking directly at the sun is very dangerous.

First photo: background and successful attempts.

The main prerequisite was Johann Heinrich Schulze's proof, in 1725, that it is light, not heat, that causes silver salts to darken. He did it by accident: trying to create a luminescent substance, he mixed chalk with nitric acid containing a small amount of dissolved silver, and noticed that the white mixture darkened under sunlight.

This prompted him to another experiment: he tried to obtain images of letters and numbers by cutting them out of paper and placing them against the illuminated side of the vessel. He obtained the images, but it did not even occur to him to try to preserve them. Building on Schulze's work, the scientist Grotthuss found that the absorption and emission of light occur under the influence of temperature.

Later, in 1822, the world's first image of a kind more or less familiar to the modern viewer was obtained. It was made by Joseph Nicéphore Niépce, but that frame was not properly preserved. He therefore continued working with great zeal and in 1826 obtained a full-fledged frame called "View from the Window". It went down in history as the first real photograph, although it was still far from the quality we are used to.

The use of metals is a significant simplification of the process.

A few years later, in 1839, another Frenchman, Louis-Jacques Daguerre, presented a new material for taking photographs: copper plates coated with silver. The plate was then treated with iodine vapour, which created a layer of light-sensitive silver iodide. This layer was the key to future photography.

After this treatment the plate was exposed for 30 minutes in a sunlit room. It was then taken into a dark room and developed with mercury vapour, and the image was fixed with table salt. Daguerre is considered the creator of the first more or less high-quality photograph. His method, although still out of reach of "mere mortals", was already much simpler than the first.

Color photography is a breakthrough of its time.

Many people think that color photography appeared only with the creation of film cameras. This is not true at all. The first color photograph is considered to date from 1861, when James Clerk Maxwell obtained the image later called the "Tartan Ribbon". It was created using the method of three-color photography, also known as the color-separation method.

To obtain the frame, three cameras were used, each fitted with a filter for one of the primary colors: red, green or blue. The result was three images that were combined into one, but the process could hardly be called simple or fast. To simplify it, intensive research into photosensitive materials was carried out.
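In modern terms, the color-separation method amounts to stacking three monochrome exposures into the channels of one RGB image. A minimal Python sketch of the idea, assuming three aligned grayscale files (the file names are hypothetical) and the third-party numpy and imageio libraries:

```python
import numpy as np
import imageio.v3 as iio   # third-party: pip install imageio

# Three monochrome exposures taken through red, green and blue filters
# are stacked into the channels of a single RGB image.
# The file names are hypothetical placeholders.
red   = iio.imread("ribbon_red_filter.png")
green = iio.imread("ribbon_green_filter.png")
blue  = iio.imread("ribbon_blue_filter.png")

rgb = np.dstack([red, green, blue])   # shape (H, W, 3)
iio.imwrite("ribbon_rgb.png", rgb)
```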

The first step towards simplification was the discovery of sensitizers by Hermann Vogel, a scientist from Germany. After some time he managed to obtain a layer sensitive to the green part of the spectrum. Later, his student Adolf Miethe created sensitizers sensitive to the three primary colors: red, green and blue. He demonstrated his discovery in 1902 at a scientific conference in Berlin, together with the first color projector.

One of the first photochemists in Russia, Sergei Prokudin-Gorsky, a student of Miethe, developed a sensitizer more sensitive to the red-orange part of the spectrum, which allowed him to surpass his teacher. He also managed to shorten exposure times and to make photographs easier to reproduce in quantity, laying the groundwork for the mass replication of images. Based on the inventions of these scientists, special photographic plates were created which, despite their shortcomings, were in high demand among ordinary consumers.

Instant photography is another step towards speeding up the process.

The year this type of photography appeared is generally considered to be 1923, when a patent was registered for an "instant camera". There was little use for such a device: the combination of a camera and a photo lab was extremely cumbersome and did not greatly reduce the time needed to obtain a picture. The real problem was understood a little later: it lay in the inconvenience of producing the finished negative.

It was in the 1930s that composite light-sensitive materials first appeared that made it possible to obtain a ready-made positive. Agfa was among the first to work on them, and Polaroid put them into mass production. The company's first cameras produced an instant print immediately after the picture was taken.

A little later, similar ideas were attempted in the USSR, where the "Moment" and "Photon" photo sets were created, but they never became popular. The main reason was the lack of the special light-sensitive films needed to obtain a positive. Yet it was the principle embodied in these devices that became one of the key and most popular ones at the end of the 20th and beginning of the 21st century, especially in Europe.

Digital photography is a leap forward in the development of the industry.

This type of photography originated quite recently, in 1981. The Japanese can safely be considered its founders: Sony showed the first device in which a matrix replaced the film. Everyone knows how a digital camera differs from a film camera, right? True, it could not be called a high-quality digital camera in the modern sense, but the first step was obvious.

Later the concept was developed by many companies, but the first digital camera as we are used to seeing it was created by Kodak. Serial production began in 1990, and the camera almost immediately became extremely popular.

In 1991, Kodak, together with Nikon, released the Kodak DCS 100 professional digital SLR camera, based on the Nikon F3. The device weighed about 5 kilograms.

It is worth noting that with the advent of digital technologies, the scope of photography has become more extensive.
Modern cameras, as a rule, are divided into several categories: professional, amateur and mobile. In general, they differ from each other only in the size of the matrix, optics and processing algorithms. Due to the small number of differences, the line between amateur and mobile cameras is gradually blurring.

Application of photography

Back in the middle of the last century, it was hard to imagine that clear images in newspapers and magazines would become a mandatory attribute. The boom in photography was especially pronounced with the advent of digital cameras. Yes, many will say that film cameras were better and more popular, but it was digital technology that made it possible to save the photographic industry from such problems as running out of film or overlaying frames on top of each other.

Moreover, modern photography is undergoing extremely interesting changes. If earlier, for example, to get a photo in your passport, you had to stand in a long queue, take a picture and wait a few more days before it was printed, now it’s enough just to take a picture of yourself on a white background with certain requirements on your phone and print the pictures on special paper.

Artistic photography has also come a long way. Previously it was difficult to get a highly detailed shot of a mountain landscape, to crop out unnecessary elements or to process a photo well. Now even mobile photographers get great shots that can easily compete with pocket digital cameras. Of course, smartphones cannot compete with full-fledged cameras such as the Canon 5D, but that is a topic for a separate discussion.


So, dear reader, now you know a little more about the history of photography. I hope this material will be useful to you. If so, why not subscribe to the blog updates and tell your friends about it? You will also find many other interesting materials here that will help you become more literate in matters of photography. Good luck and thank you for your attention.

Sincerely yours, Timur Mustaev.

Digital photography entered life gradually, step by step. The US National Aeronautics and Space Administration (NASA) began using digital signals in the 1960s in connection with flights to the Moon (for example, to map the lunar surface): as is well known, analog signals can degrade during transmission, while digital data is far less error-prone. The first high-precision image processing was developed in this period, as NASA used the full power of computer technology to process and enhance space images. The Cold War, with its wide variety of spy satellites and secret imaging systems, also helped accelerate the development of digital photography.

The first electronic camera without film was patented by Texas Instruments in 1972. The main disadvantage of that system was that the photographs could only be viewed on a television. A similar approach was implemented in Sony's Mavica, announced in August 1981 as the first commercial electronic camera. The Mavica could already be connected to a color printer. At the same time, it was not a real digital camera: it was more of a video camera with which individual still pictures could be taken and shown. The Mavica (Magnetic Video Camera) recorded up to fifty images on two-inch floppy disks using a CCD sensor with a resolution of 570x490 pixels and a sensitivity equivalent to ISO 200. Lenses: a 25 mm wide-angle, a 50 mm normal, and a 16-65 mm zoom. Such a system may seem primitive today, but do not forget that the Mavica was developed almost 25 years ago!

In 1991, Kodak announced the release of the first professional digital camera, the DCS 100, based on the Nikon F3. The DCS 100 was equipped with a 1.3-megapixel CCD image sensor and a portable hard-disk unit that stored 156 captured images. This unit weighed about 5 kg, the camera itself cost about $25,000, and the resulting images were only good enough for printing on the pages of newspapers. It therefore made sense to use such equipment only when the speed of obtaining images mattered more than their quality.

The prospects for digital photography became clearer with the introduction of two new digital cameras in 1994. Apple Computer released the Apple QuickTake 100, a camera with a strange sandwich-like shape capable of capturing 8 images at a resolution of 640 x 480 pixels. It was the first mass-market digital camera, sold at a price of $749. Its images were also of poor quality and could not be printed properly, and since the Internet was then at an early stage of its development, the camera did not find wide use.

The second camera, released in the same year by Kodak in conjunction with the Associated Press news agency, was intended for photojournalists. Its NC2000 and NC2000e models combined the look and functionality of a film camera with the instant image access and shooting convenience of a digital camera. The NC2000 was widely adopted by many newsrooms, prompting the move from film to digital.

Since the mid-1990s, digital cameras have become more advanced, computers faster and less expensive, and software more capable. Digital cameras have evolved from an exotic kind of device, dear only to its creators, into universal, easy-to-use photographic equipment that can be built even into ubiquitous cell phones, with technical capabilities approaching those of the latest full-frame (35 mm) digital camera models. In terms of the quality of the images obtained, such photographic equipment now surpasses film cameras.

The changes that are constantly taking place in digital camera technology are remarkable.

The rapid development of the digital photography industry is evidenced by the increase in the production of cameras, as well as the reduction in the production of film by all manufacturers, the departure of the pillars of the photo industry from the market or their complete transition to digital technologies. The development of photo inkjet printers also indicates the growth of the digital camera (DSC) market.

A digital photograph is a photograph taken with a digital still or video camera, or a photograph or slide taken with an ordinary camera and then digitized with a scanner.

Digital camera

The camera is one of the most amazing inventions of man. It preserves many moments of our lives forever.

The modern photographic industry began with Talbot's discovery 160 years ago. Now a new photographic era has begun - the era of digital photography.

A digital camera differs from an ordinary one in that instead of film it contains a photosensitive matrix. The matrix converts the image into an electrical signal, which is then processed and stored in digital form in the camera's memory.

The matrix of a digital camera consists of cells, each of which works like an exposure meter: it generates an electrical signal that depends on the intensity of the light falling on it. Different technologies are used to build such matrices, for example the Bayer pattern or the RGBE CCD technology developed by Sony.
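To make the Bayer-pattern idea concrete, here is a simplified Python sketch showing how each photosite under an RGGB filter records only one color channel, with the other two later interpolated (demosaiced). It is a toy illustration, not the sampling or demosaicing algorithm of any real camera:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Keep, for every pixel of an RGB image, only the channel its photosite
    would see under an RGGB Bayer filter (toy illustration)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R on even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G on even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G on odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B on odd rows, odd columns
    return mosaic

# Example: a random 4x4 "scene"
scene = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(bayer_mosaic(scene))
```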

With a digital camera, a computer and photo-editing software, there are almost limitless possibilities for realizing your creativity and ideas. Digital photography also lets you instantly share visual information with people regardless of their geographical location. If the image was taken with a digital camera, Adobe Photoshop CS5 supports a large number of Camera Raw formats.

Open the RAW file and save it in another format, such as TIFF, since printing services usually require images in that format.
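The same RAW-to-TIFF conversion can also be scripted outside Photoshop. A minimal sketch using the open-source rawpy (LibRaw) and imageio Python libraries; the file name is a hypothetical placeholder:

```python
import rawpy              # third-party: pip install rawpy
import imageio.v3 as iio  # third-party: pip install imageio

# Decode a RAW file, demosaic it into an RGB array, and save it as TIFF.
# "IMG_0001.NEF" is a hypothetical file name.
with rawpy.imread("IMG_0001.NEF") as raw:
    rgb = raw.postprocess()          # demosaic and convert to an RGB array

iio.imwrite("IMG_0001.tiff", rgb)    # save in TIFF format for printing
```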

Compact Flash memory card

Compact Flash (CF card, or flash card) is a high-tech electronic device designed to store information, such as the digital images produced by a digital camera.

Precautions when handling CF cards: do not bend them, apply force to them, or subject them to shock and vibration; do not disassemble or modify the card. Sudden changes in temperature may cause moisture to condense inside the card and make it malfunction. Do not use CF cards in very dusty or sandy places, or in places with high humidity and high temperatures.

Formatting a CF card erases all data, including protected images and other types of files. Formatting is performed both for a new CF card and for deleting all images and data from the CF card.

Principles of operation of a digital camera

A digital camera forms an image from light rays just as a film camera does, but it does not record them on film; instead it uses a photosensitive matrix, essentially a set of light-sensitive computer chips. There are currently two varieties of such chips: CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor).

When beams of light hit these devices, they generate electrical charges, which are then analyzed by the digital camera's processor and converted into digital image information. The more light, the more powerful the charge generated by the chip.

Once the electrical impulses have been converted into image information, the data is stored in the camera's memory, which can be either a built-in memory chip or a removable memory card or disk.

Typically, the camera uses a 1/3-inch CCD, which consists of elements that convert light waves into electrical impulses. The number of such elements depends on the brand of the camera.

For example, a 5-megapixel camera has about 5 million of these elements.
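The megapixel figure is simply the product of the sensor's pixel dimensions. A one-line Python illustration with an assumed example resolution, not tied to any specific camera model:

```python
# How pixel dimensions translate into megapixels; 2592 x 1944 is an assumed example.
width, height = 2592, 1944
megapixels = width * height / 1_000_000
print(f"{megapixels:.1f} MP")   # ~5.0 MP, i.e. about 5 million photosites
```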

To access the image recorded by the camera, it is enough to transfer the data to the computer's memory. Some cameras allow you to display recorded images directly on a TV screen or directly output them to a printer for printing, thus bypassing the stage of editing the received frames on a computer.

How light or dark the resulting frame is depends on the exposure: the amount of light acting on the film or on the photosensitive matrix. The more light, the brighter the resulting frame. With too much light the image is overexposed; with too little, it is too dark.

The amount of light hitting the film can be controlled in two ways:

– by setting the length of time the shutter remains open (i.e. by changing the shutter speed);

– by changing the aperture.

The aperture value is the size of the opening formed by the set of blades located between the lens and the shutter. Rays of light are directed through this opening to the shutter with the help of the lens elements, after which they fall on the film or matrix. Thus, if you want more light to reach the sensor, you make the opening larger (open the aperture); if you need less light, you make it smaller (stop the aperture down).

Aperture values are indicated by f-numbers, known in English-language literature as f-stops. The standard numbers are f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16 and f/22.

The shutter speed, or exposure time, is measured in more familiar units: fractions of a second. For example, a shutter speed of 1/8 means that the shutter stays open for 1/8 of a second.
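Aperture and shutter speed combine into a single exposure value through the standard formula EV = log2(N²/t). A short Python sketch showing that one step along the f-number series, or one halving of the exposure time, changes the exposure by about one stop:

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# f/8 at 1/60 s, the nominal exposure used in the examples above
print(round(exposure_value(8, 1 / 60), 2))    # ~11.9 EV

# One step along the f-number series (f/8 -> f/11), or roughly halving the
# shutter time (1/60 -> 1/125), each reduces the light by about one stop:
print(round(exposure_value(11, 1 / 60), 2))   # ~12.8 EV, about +1 EV (less light)
print(round(exposure_value(8, 1 / 125), 2))   # ~13.0 EV, about +1 EV (less light)
```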
