Digital photography. Maximum number of frames during continuous shooting. Setting the image quality

The history of inventions is sometimes very bizarre and unpredictable. Exactly 40 years have passed since the invention in semiconductor optoelectronics that led to the emergence of digital photography.

On November 10, 2009, inventors Willard Boyle (born in Canada in 1924) and George Smith (born in 1930) were awarded the Nobel Prize. While working at Bell Labs in 1969, they invented the charge-coupled device, or CCD sensor. At the end of the 1960s, scientists had discovered that the MOS (metal-oxide-semiconductor) structure is photosensitive. The operating principle of a CCD sensor, which consists of individual photosensitive MOS elements, is based on reading the electrical potential generated under the influence of light; the charge is shifted sequentially from element to element. The CCD matrix, composed of individual light-sensitive elements, became a new device for capturing optical images.

Willard Boyle (left) and George Smith. 1974 Photo: Alcatel-Lucent/Bell Labs

CCD sensor. Photo: Alcatel-Lucent/Bell Labs

But to create a portable digital camera based on the new photodetector, it was necessary to develop small-sized, low-power components: an analog-to-digital converter, a processor for handling the electrical signals, a small high-resolution monitor, and a non-volatile storage device. No less urgent was the problem of creating a multi-element CCD structure. It is interesting to trace some of the stages in the creation of digital photography.

The first CCD matrix, created 40 years ago by the newly minted Nobel laureates, contained only seven photosensitive elements. On its basis, in 1970, scientists from Bell Labs created a prototype of an electronic video camera. Two years later, Texas Instruments received a patent for "an all-electronic device for recording and subsequently reproducing still images." The images were stored on magnetic tape and could be reproduced on a TV screen, so the device was essentially analog; nevertheless, the patent provided a comprehensive description of a digital camera.

In 1974, an astronomical electronic camera was created using a Fairchild CCD matrix (black and white, with a resolution of 100x100 pixels). ("Pixel" is a contraction of the English words picture ("pix-") and element ("-el"), i.e. picture element.) Using the same CCD matrices, a year later Kodak engineer Steve Sasson created the first conditionally portable camera. A picture measuring 100x100 pixels was recorded onto a magnetic cassette in 23 seconds, and the camera weighed almost three kilograms.

1975: the prototype of the first Kodak digital camera in the hands of engineer Steve Sasson.

In the former USSR, similar developments were also carried out. In 1975, television cameras using domestic CCDs were tested.

In 1976, Fairchild launched the first commercial electronic camera, the MV-101, which was used on an assembly line for product quality control. The image was transferred to a mini-computer.

Finally, in 1981, Sony announced an electronic model, the Mavica camera (an abbreviation of Magnetic Video Camera), based on an SLR camera with interchangeable lenses. For the first time in a consumer camera, the image receiver was a semiconductor matrix: a CCD measuring 10x14 mm with a resolution of 570x490 pixels. This is how the first prototype of a digital camera appeared. It recorded individual frames in analog form in the NTSC format on a medium with a metallized surface, a flexible magnetic disk (this two-inch floppy disk was called the Mavipak), and therefore it was officially called a "still video camera." Technically, the Mavica was a continuation of Sony's line of CCD-based television cameras. Cumbersome television cameras with cathode-ray tubes had already given way to a compact device based on a solid-state CCD sensor: another area of application for the invention of the current Nobel laureates.

Sony Mavica

Since the mid-80s, almost all leading photo brands and a number of electronics giants have been working on digital cameras. In 1984, Canon created the Canon D-413 video camera, with twice the resolution of the Mavica. A number of companies developed prototypes of digital cameras: Canon launched the Q-PIC (or ION RC-250); Nikon, the prototype QV1000C DSC with data recording in analog form; Pentax demonstrated a prototype digital camera called the PENTAX Nexa with a 3x zoom lens, whose CCD receiver simultaneously served as the exposure metering sensor. Fuji presented the DS-1P Digital Still Camera (DSC) at the Photokina exhibition, although it never received commercial promotion.


Nikon QV1000C


Pentax Nexa


Canon Q-PIC (or ION RC-250)

In the mid-80s, Kodak developed an industrial prototype of a CCD sensor with a resolution of 1.4 megapixels and coined the term “megapixel”.

The first camera to save images as a digital file was the Fuji DS-1P (Digital Still Camera, DSC), announced in 1988 and equipped with 16 MB of built-in volatile memory.

Fuji DS-1P (Digital Still Camera, DSC)

Olympus showed a prototype of the Olympus 1C digital camera at PMA in 1990. At the same exhibition, Pentax demonstrated its improved PENTAX EI-C70 camera, equipped with an active autofocus system and an exposure compensation function. Finally, the amateur Dycam Model 1, better known as the Logitech FotoMan FM-1, appeared on the American market. Its CCD matrix, with a resolution of 376x284 pixels, formed only a black-and-white image. The information was written to ordinary RAM (not flash memory) and was lost forever when the batteries (two AA cells) were removed or discharged. There was no display for viewing frames, and the lens was focused manually.

Logitech FotoMan FM-1

In 1991, Kodak fitted the professional Nikon F3 camera with digital internals, calling the new product the Kodak DSC100. Recording took place on a hard drive housed in a separate unit that weighed about 5 kg.

Kodak DSC100

Sony, Kodak, Rollei and other companies introduced high-resolution cameras in 1992 that could be classified as professional. Sony demonstrated the Seps-1000, whose photosensitive element consisted of three CCDs, providing a resolution of 1.3 megapixels. Kodak developed the DSC200 on the basis of a Nikon camera.

At the Photokina exhibition in 1994, the Kodak DSC460 high-resolution professional digital camera was announced; its CCD matrix contained 6.2 megapixels. It was developed on the basis of the professional Nikon N90 film SLR. The CCD matrix itself, measuring 18.4x27.6 mm, was built into an electronic adapter docked to the camera body. In the same year, 1994, the first flash cards of the CompactFlash and SmartMedia formats appeared, with capacities from 2 to 24 MB.

Kodak DSC460

The year 1995 marked the start of the mass development of digital cameras. Minolta, together with Agfa, manufactured the RD175 camera (CCD matrix of 1528x1146 pixels). At the exhibition in Las Vegas, about 20 models of amateur digital cameras were demonstrated: a small digital camera from Kodak with a resolution of 768x512 pixels, 24-bit color depth and built-in memory for up to 20 pictures; the pocket ES-3000 from Chinon, with a resolution of 640x480 and removable memory cards; the small PhotoPC cameras from Epson with two possible resolutions, 640x480 and 320x240 pixels; the Fuji DS-220 with an image size of 640x480 pixels; and the RDC-1 camera from Ricoh, capable of both still and video recording at the Super VHS resolution of 768x480 pixels. The RDC-1 was equipped with a 3x zoom lens with a focal length of 50-150 mm (35 mm equivalent); focusing, exposure determination and white balance adjustment were automated, and there was an LCD display for quick viewing of captured footage. Casio also demonstrated commercial samples of its cameras. The first consumer cameras were released: the Apple QuickTake 150, the Kodak DC40, the Casio QV-11 (the first digital camera with an LCD display and the first with a rotating lens), and the Sony Cyber-Shot.

This is how the digital race began to gain momentum. Nowadays, thousands of models of digital cameras, video cameras and phones with built-in cameras are known. The marathon is far from over.

It is worth noting that some digital cameras use a CMOS photosensitive matrix instead. CMOS stands for complementary metal-oxide-semiconductor. Without going into the topological features of CMOS and CCD matrices, we emphasize that they differ seriously only in the method of reading out the electronic signal; both types of matrix are built on photosensitive MOS (metal-oxide-semiconductor) structures.

Teledermatology, that is, the storage, processing and transmission of digital images over a distance, is a topic that now occupies many dermatologists, both in clinics and in private practice. In this article we will try to present what we consider the most important possibilities of teledermatology. Along with improving the quality of diagnosis and treatment, teledermatology makes the doctor's work more cost-effective, which is especially important for private practitioners.

Saving digital images and studying skin pigment formations

Epiluminescence dermatoscopy was "rediscovered" in the early 70s for the preoperative diagnosis of pigmented skin lesions. At first, the method seemed quite complicated because it relied on stationary and rather bulky stereomicroscopes.

With the advent of portable, hand-held dermatoscopes, as well as a binocular dermatoscope with significantly higher magnification, epiluminescence dermatoscopy has taken a strong place among traditional examination methods.

Using a dermatoscope, just as with an illuminated magnifying glass, the surface of the skin can be examined quickly. During examination with a dermatoscope, a special disc of transparent material is placed on an area of skin to which an immersion liquid has been applied, making it possible to examine the deeper layers of the skin. Studies have shown that already at 10x magnification all significant structural and color components are identifiable.

Initially, during examinations with both a stereomicroscope and dermatoscopes of various types, photographs or transparencies were taken when necessary. This always entailed significant costs, because image quality could not be checked immediately: the result of the shooting was visible only after the film was developed. All this significantly limited the possibilities of documenting examination results. Later, technical solutions were found that allowed dermatoscopes to be mounted on a video camera connected to a computer. This method makes it possible to display images on a computer monitor or on a separate monitor and then save them (Fig. 1, Fig. 2).

This method is definitely superior to traditional photography in speed, in cost (due to the rapid decline in the price of high-quality computer equipment in recent years) and in the ability to control the quality of image storage. However, its use is limited by the fact that the optical resolution of a computer image obtained with today's "ordinary" video cameras and computer video cards is lower than that of classic transparencies.

In addition, computer images cannot be enlarged to the extent necessary for clinical presentations or lectures without a noticeable loss of quality. That said, when a stored dermatoscopic finding is viewed on a monitor or printed at photo size on a color or video printer (as is done in everyday practice for diagnostics and documentation), the image quality is practically indistinguishable from a regular photograph.

In both clinical photography and video photography, it is important that colors are conveyed naturally. Modern video cameras can use white as a reference sample and continuously monitor the color spectrum at every moment of shooting. In the matter of color perception, however, epiluminescence dermatoscopy remains a subjective method, since no standards for comparative color analysis are possible. For example, when assessing the color nuances of melanocytic formations, the researcher must rely on personal perception alone. When analyzing an image, one must remember that color can be affected not only by the camera and lighting but also by the computer components that process and transmit the image (monitor, graphics or video card, etc.). The diagnosis, as always, is made by the doctor, not by the system; expert systems and automatic screening systems are still only under development.

First, let's try to find out what "digital" actually is. Comparing the terms "film photography" and "digital photography", it is not hard to see that both are photography. But if the first is photography on film, the second is photography, firstly, without film and, secondly, "with numbers." Exactly so: the fundamental difference between digital and film cameras is that the image, the picture of the outside world, is stored not on film but in the camera's memory in digital form, like ordinary pictures on a computer.

This curious effect is achieved as follows: the light passing through the lens of a digital camera falls not on film, as we are accustomed to, but on the sensor. The sensor, the most important part of a digital camera, is a matrix of light-sensitive elements that produce different electronic signals in response to incident light. The signals are processed by a special microprocessor and converted into digital form. And that, in fact, is it: the photo is ready.
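That last conversion step can be illustrated with a minimal sketch (illustrative numbers only, not any real camera's electronics): an analog-to-digital converter clips the sensor's voltages and quantizes them into discrete pixel counts.

```python
import numpy as np

# Sketch of the ADC step: hypothetical analog readings from a 2x2 patch of
# the sensor (normalized to 0..1) are clipped and quantized to digital counts.
def quantize(analog_signal, bits=8, full_scale=1.0):
    """Convert analog sensor readings into digital counts."""
    levels = 2 ** bits - 1
    clipped = np.clip(analog_signal, 0.0, full_scale)
    return np.round(clipped / full_scale * levels).astype(np.uint16)

patch = np.array([[0.10, 0.50],
                  [0.75, 1.00]])    # simulated analog readings
print(quantize(patch))             # 8-bit counts: [[26 128] [191 255]]
```

With 8 bits the converter distinguishes 256 levels; real cameras typically digitize at higher bit depths before processing.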
All this clever technology turns out to be very simple for the user. Press the shutter release, wait a second, and the photographer sees the finished result on the camera screen. Extremely simple. There is no need to develop film (which also has to be shot to the end, otherwise it is wasteful), no need to print pictures and then throw away the failures: everything is visible at once. Perhaps it was this simplicity that became one of the main reasons for the popularization of digital photography, a popularization that is total and universal. It is not for nothing that the introduction spoke of the death of film: digital photography is increasingly displacing film photography and may soon replace it altogether. In Japan, over the past year, sales of digital cameras exceeded sales of traditional film cameras; in Europe and America, "digital" has come close to film, though predicting when it will completely replace film is a thankless task.
In addition to modern ideas and ease of use, digital cameras have other advantages over film:
Firstly, processing speed. As already mentioned, a digital image does not need to be developed or taken to a darkroom. Even in those distant times when digital cameras were still hard-to-find exotic beasts, journalists and reporters loved them: a fresh incriminating photograph of a local pop star could appear on the cover of a freshly printed newspaper immediately after the shooting, instead of making the long journey from the photographer to the darkroom, from there to the slide scanner, and only then to the designers.

Advantages

Get results quickly

The resulting image can be seen much sooner than in the traditional photographic process. As a rule, cameras allow you to view the image on the built-in or attached monitor immediately after shooting (and in non-SLR cameras, as well as some SLRs, even before shooting). In addition, the image can be quickly downloaded to a computer and examined in detail.

Fast results enable early detection of fatal errors (and reshoots) and make learning easier, which is convenient for beginners, amateurs, and professionals alike.

Ready for use on a computer

Digital photography is the fastest and cheapest way to obtain images for later use on a computer - in web design, uploading images (photos of people and objects) to databases, creating artwork based on photographs, measurements, etc.

For example, when preparing modern international passports, a person is photographed with a digital camera. His photo is printed on the passport and entered into the database.

In the traditional photographic process, images must first be digitized before they can be processed on a computer, which requires additional expense.

Cost-effective and simple

The digital shooting process requires no consumables (film) and no darkroom tools or materials (for developing the image on film). Therefore, unsuccessful shots, labor costs aside, cost the photographer almost nothing: digital media are mostly reusable, with a large rewrite capacity.

Moreover, the entire process from shooting to receiving prints (or previews) can be done from the comfort of your home or studio, and just requires a computer and a photo printer. The capabilities and quality of prints (compared to processing in a laboratory), in this case, will depend only on the capabilities of the equipment and the skill of the operator.

Instant-photography studios, consisting of a digital camera, a computer and a digital darkroom, are becoming increasingly widespread. Photos taken in such a studio are better in both image quality and durability than traditional Polaroid-type instant photos.

Some cameras and printers allow you to take prints without a computer (cameras and printers with direct connection or printers that print from memory cards), but this option usually excludes the ability to correct the image and has other limitations.

Flexible control of shooting parameters

Digital shooting allows flexible control of some parameters that in the traditional photographic process are strictly tied to the film material: light sensitivity and color balance (also called white balance).

Light sensitivity (in ISO units, by analogy with photographic materials) can be set manually or determined automatically by the camera for the scene being photographed.

In the traditional photographic process, two types of film with different color balance are used (for daylight and for electric lighting), along with corrective filters.

A digital camera can change the color balance very flexibly: you can choose it according to the lighting, let the camera determine it automatically, or fine-tune it against a gray target.
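A gray-target fine-tune of the kind just described can be sketched as follows (a simplified model with hypothetical values; real cameras apply such gains to raw sensor data):

```python
import numpy as np

# Hypothetical gray-target white balance: sample a patch known to be neutral,
# derive per-channel gains that equalize the channel means, apply to the image.
def white_balance_gains(gray_patch):
    means = gray_patch.reshape(-1, 3).mean(axis=0)   # average R, G, B
    return means.mean() / means                      # per-channel multipliers

def apply_white_balance(image, gains):
    return np.clip(np.round(image * gains), 0, 255).astype(np.uint8)

# A neutral patch recorded with a warm cast: R too high, B too low
patch = np.full((4, 4, 3), [140.0, 120.0, 100.0])
gains = white_balance_gains(patch)
balanced = apply_white_balance(patch, gains)
print(balanced[0, 0])   # [120 120 120]: the cast is removed
```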

Wide range of post-processing capabilities

Unlike the traditional photographic process, in digital photography there are very wide possibilities for correction and adding additional effects after shooting.

You can rotate, crop, edit, and change image parameters (over the whole image or a selected area), and perform manual or automatic correction of defects much more easily and better than when shooting on film.

Benefits of Digital Presentation

Since the original image in digital shooting is an array of numbers, storage, copying, or transmission over an arbitrary distance does not change it: any copy is identical to the original. If the data does become corrupted, this can be established quite simply, and the array or a fragment of it can be copied or transmitted again (or restored from redundant information). A copy from film, especially when copied repeatedly, will always differ from the original.

Of course, digital media can fail, but information, if stored correctly (with sufficient redundancy and periodic replacement of media), can be kept unchanged for an arbitrary period of time.
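The "any copy is identical to the original" property is exactly what checksums exploit; a small sketch using Python's standard hashlib:

```python
import hashlib

# Sketch: a digital image is just bytes, so a checksum verifies that a copy
# or transfer left it bit-for-bit identical to the original.
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = bytes(range(256)) * 100       # stand-in for an image file
copy = bytearray(original)
print(checksum(bytes(copy)) == checksum(original))   # True: identical copy

copy[5000] ^= 0x01                       # a single flipped bit...
print(checksum(bytes(copy)) == checksum(original))   # False: corruption detected
```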

Compactness

Most digital cameras are more compact than their film counterparts, since in their design there is no need to allocate space for film and film channel mechanics.

The ability to miniaturize the elements of digital cameras makes it possible to produce ultra-compact versions of cameras and cameras built into all kinds of devices that were not originally intended for photography - players, etc.

Of course, the reduced geometric dimensions (especially of the optics) leave their mark on the images:

  • high depth of field (built-in variants, as a rule, have no focusing mechanism at all)
  • low optical resolution ("softness") of images
  • more noise: a small sensor has lower sensitivity, so its signal needs extra amplification, which boosts the background noise along with the signal

Number of frames

Digital cameras, as a rule, allow you to take more frames than film cameras: battery capacity aside, they are limited only by the capacity of the digital media, and that capacity spans a wider range than film does. The actual number of photos that fit on the media, however, depends on the camera's characteristics (image resolution) and the recording format.

In addition, when shooting digitally, the number of shots can be increased, if desired or necessary, by reducing the image parameters: resolution, recording format and/or image quality.

  • Resolution can usually be reduced by a factor of 2-4, or set to one of the standard resolutions (640x480, 1024x768, 1600x1200)
  • Recording formats differ in the amount of information stored, the type of compression, etc.
  • Quality is usually understood as the degree of lossy compression (as a rule, when saving in the JPEG format): at low quality the image loses shades but takes up less space

If you have time, you can also delete unsuccessful frames from the media, making room for new ones, or download frames to a computer or to pocket mass-storage devices.

Of course, you can also use multiple media, but this option is also available for film cameras.
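The arithmetic behind frame counts can be sketched like this (the compression ratio and byte sizes are illustrative assumptions, not any specific camera's figures):

```python
# Back-of-the-envelope frame count: assume a JPEG occupies roughly
# width * height * 3 bytes divided by a quality-dependent compression ratio.
def frames_on_card(card_mb, width, height, bytes_per_pixel=3, compression=8):
    frame_bytes = width * height * bytes_per_pixel / compression
    return int(card_mb * 1024 * 1024 // frame_bytes)

print(frames_on_card(512, 1600, 1200))   # 745 frames under these assumptions
print(frames_on_card(512, 800, 600))     # 2982: ~4x more at half the resolution
```

Halving the linear resolution quarters the pixel count, which is why dropping to a lower resolution roughly quadruples the number of frames that fit on the same card.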

Problems

Image Resolution

When shooting digitally, the image is represented as a discrete array of points (pixels). Image details smaller than one pixel are not preserved. The resolution of the resulting image (the number or size of the pixel matrix) is determined by the base resolution of the camera's sensor, as well as its current settings.

At the same time, photographic film also has its own discreteness. The image on the film is formed by black or pigment domains (“grains”) of different sizes, deposited during the photographic process.

Based on the average grain size of photographic film, the equivalent resolution for a digital image is considered to be 12-16 megapixels per frame. Professional cameras have this resolution or greater.

However, the actual resolution of the resulting image (that is, the degree of discernibility of details), in addition to the pixel resolution of the sensor, depends on the optical resolution of the lens and the sensor design.

Lens optical resolution

The image resolution cannot be higher than that of the lens. Optical resolution sufficient for a clear 12-16 megapixel image is provided only by interchangeable semi-professional optics; the lenses of most compact cameras resolve 2-4 (sometimes 6) megapixels.

Compared to film cameras, digital cameras in the same class have the same lenses or smaller lenses (and therefore potentially lower resolution).

DSLR cameras use the same lenses, but models with partial-format sensors capture only part of the frame, and therefore have lower resolution relative to the frame size.
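The "captures only part of the frame" effect is easy to quantify; a small sketch with illustrative sensor dimensions (typical figures, not those of any specific model):

```python
# Illustrative crop-factor arithmetic: a partial-format sensor records only
# the central part of the 35 mm frame, narrowing the field of view.
full_frame_width = 36.0     # 35 mm film frame width, mm
partial_width = 23.6        # a typical partial-format sensor width, mm

crop_factor = full_frame_width / partial_width
print(round(crop_factor, 2))       # 1.53
print(round(50 * crop_factor))     # a 50 mm lens frames like a ~76 mm one
```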

Effect of sensor device

The sensor design can also limit image resolution (see the section on the design of a color sensor below).

Digital noise

Digital photographs, to one degree or another, contain digital noise. The amount of noise depends on the technological features of the sensor (linear pixel size, the CCD/CMOS technology used, etc.).

Noise appears to a greater extent in the dark areas of the image. It increases with the photosensitivity setting and with the exposure time.

Digital noise is somewhat analogous to film grain. Grain increases with film speed, and so does digital noise. However, grain and digital noise have different natures and differ in appearance:

How film grain and digital noise compare:

  • What it is. Grain: the factor limiting the resolution of the film; an individual grain follows the shape and size of a photosensitive crystal of the emulsion. Noise: deviations introduced by the camera electronics; it is formed by pixels (or spots of 2-3 pixels, when color planes are interpolated) of equal size.
  • How it appears. Grain: a nonlinear brightness (and, to a lesser extent, color) texture, with broken lines at sharp transitions of brightness and color. Noise: a texture of brightness and color deviations over the whole image, reducing the visibility of details and creating inhomogeneities in monochromatic areas.
  • What it conveys overall. Grain: accurate brightness and colors; the deviations are positional in nature. Noise: brightness and color with a statistical deviation toward gray; chromatic outliers take on colors unusual for the subject (which irritates the eye); the deviations are amplitude in nature.
  • With increased sensitivity. Grain: the maximum grain size increases. Noise: the noise level increases.
  • With increased exposure. Grain: does not change. Noise: the level (degree of deviation) increases.
  • In light areas. Grain: appears weakly. Noise: is less noticeable.
  • In dark areas. Grain: practically does not appear. Noise: manifests itself most strongly.

Unlike digital noise, which varies from camera to camera, the degree of film grain does not depend on the camera used - the most expensive professional camera and a cheap compact camera on the same film will produce an image with the same grain.

Digital noise begins to be suppressed already when reading from the sensor (by subtracting each pixel's "zero" level from the read potential), and suppression continues when the image is processed by the camera (or by a RAW file converter). If necessary, noise can be suppressed further in image processing programs.
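The "zero level" subtraction mentioned above can be sketched as a dark-frame subtraction (a simplified model: here the per-pixel offset is assumed to be perfectly static):

```python
import numpy as np

# Simplified dark-frame subtraction: a frame shot with the shutter closed
# records each pixel's static "zero" offset, which is then subtracted.
rng = np.random.default_rng(0)

true_scene = np.full((4, 4), 100.0)
zero_level = rng.uniform(0, 20, size=(4, 4))   # each pixel's static offset

photo = true_scene + zero_level                # what the sensor reads out
dark_frame = zero_level                        # same offsets, shutter closed

corrected = photo - dark_frame
print(np.allclose(corrected, true_scene))      # True
```

In reality only the static component of the noise can be removed this way; the random component remains and is attacked statistically by the later processing stages.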

Moire

When shooting digitally, the image is rasterized, so if the scene contains another raster close in pitch to the sensor raster (textured fabrics, linear patterns, monitor and TV screens), the beating of the two rasters can produce zones of increased and decreased brightness that merge into lines and textures not present on the subject.

Moire increases as the frequencies converge and as the angle between the rasters decreases. The latter property means moire can be reduced by shooting the scene from an angle chosen experimentally; the normal orientation can then be restored in a graphics editor (at the cost of cropped edges and some loss of clarity).

Moire is greatly weakened by defocusing, including with "softening" filters (which are used in portrait photography) or by optics of relatively low resolution that cannot focus a point as small as the pitch of the sensor raster.

Sensors that form a rectangular matrix of light-sensitive elements have at least two rasters: a horizontal one formed by the rows of pixels, and a vertical one perpendicular to it. Fortunately, in most modern cameras the optical resolution is low enough (or the sensor resolution high enough) that a raster of close frequency is not sharply focused, and the resulting moire is quite weak.

Static sensor defects

As a result of a manufacturing defect, individual light-sensitive elements of the sensor may have abnormal (reduced or increased) sensitivity or may not work at all. During operation, new defective elements may appear.

At the current level of development of sensor production technology, it is very difficult to avoid the appearance of defective elements, and sensors containing them in small quantities are not considered defective.

Statically "white" elements, or elements with increased sensitivity, are called "hot" pixels; statically black ones are called "dead" or "broken" pixels.

Image defects caused by sensor anomalies are usually eliminated by noise reduction filters.

The camera can also be programmed for the peculiarities of its particular sensor so that anomalous elements are ignored during readout and their values are filled in by interpolation. Such programming (remapping) is carried out during quality control; if new defective elements appear, the remapping can be repeated (either by the user or at a service center).
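A minimal sketch of what such remapping might look like (a simplified scheme using only horizontal neighbours; real firmware uses more elaborate interpolation):

```python
import numpy as np

# Simplified remapping: defective pixels listed in the defect map are ignored
# and their values estimated from the horizontal neighbours.
def remap_defects(image, defects):
    out = image.astype(float).copy()
    for r, c in defects:
        left = image[r, c - 1] if c > 0 else image[r, c + 1]
        right = image[r, c + 1] if c < image.shape[1] - 1 else image[r, c - 1]
        out[r, c] = (float(left) + float(right)) / 2
    return out

img = np.array([[10, 255, 12],          # a "hot" pixel stuck at 255
                [11, 10, 12]])
fixed = remap_defects(img, [(0, 1)])
print(fixed[0, 1])                      # 11.0: the average of 10 and 12
```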

Low photographic latitude

The photographic latitude of the sensor is lower than that of traditional photographic film (especially negative film). Therefore, when shooting a scene with a large range of brightness, "burnout" and/or "blackening" may be observed in digital photographs. In burnout, the pixel takes on the maximum brightness value; in blackening, the brightness approaches the minimum (and approaches, or falls below, the digital noise level).

Most amateur cameras can show "burnt-out" pixels when reviewing images, so the shot can be retaken if necessary.
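That overexposure warning can be sketched as a simple threshold test on the pixel values (illustrative code, not a real camera's algorithm):

```python
import numpy as np

# Illustrative burnout check: count the fraction of pixels at (or near)
# the maximum digital value, the way cameras flag clipped areas on playback.
def blown_fraction(image, max_value=255, margin=0):
    return float(np.mean(image >= max_value - margin))

img = np.array([[255, 255, 120],
                [90, 255, 40]], dtype=np.uint8)
print(blown_fraction(img))   # 0.5: half the pixels are clipped
```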

To combat highlight burnout, some sensors have additional photodiodes with reduced sensitivity.

Internal reflections

High power consumption

The entire process of obtaining a digital image, processing it and recording it to the medium is electronic. Because of this, the vast majority of digital cameras consume more electricity than their film counterparts. Compact cameras that use the display as a viewfinder are especially power-hungry.

Sensors made using CMOS technology have lower power consumption than CCD sensors.

Due to power consumption, as well as the desire for compactness, in most digital cameras manufacturers have abandoned the disposable batteries popular in film cameras in favor of more capacious and compact rechargeable batteries. Some models can use AA cells in optional battery packs.

Complex design and high price of digital cameras

Even the simplest digital camera is a complex electronic device, because when shooting, at a minimum, it must:

  • open the shutter for a specified time
  • read information from the sensor
  • write the image file to storage media

A simple film camera, by contrast, only needs to open the shutter, and for this (as well as for advancing the film) a few simple mechanical components suffice.

It is this complexity that explains why digital cameras cost 5-10 times more than comparable film models. Moreover, among simple models, digital cameras are often inferior to film in picture quality (mainly in resolution and digital noise).

Among other things, complexity increases the number of possible malfunctions and the cost of repairs.

The design of a color sensor and its disadvantages

The traditional color photographic process uses a multilayer emulsion whose layers are sensitive to different spectral ranges.

Most modern color digital cameras use a mosaic Bayer filter or its analogues for color separation. In a Bayer filter, each sensor element is covered by a light filter of one of the three primary colors and perceives only that color. This approach has a number of disadvantages.

Loss of resolution

The complete image is obtained by restoring (interpolating) the color of intermediate points in each of the color planes. Interpolation reduces the resolution (sharpness) of the image.
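To make the resolution cost concrete, here is a deliberately crude demosaicing sketch: it collapses each 2x2 RGGB cell into a single full-color pixel, halving the linear resolution. Real cameras instead interpolate the missing colors at every pixel with more elaborate algorithms, but some detail is still lost.

```python
import numpy as np

# Crude "superpixel" demosaic of an RGGB Bayer mosaic: each 2x2 cell
# (one R, two G, one B sample) becomes a single full-color pixel.
def demosaic_superpixel(raw):
    r = raw[0::2, 0::2]                                           # red samples
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2     # two greens
    b = raw[1::2, 1::2]                                           # blue samples
    return np.dstack([r, g, b])

raw = np.array([[100, 100],        # one RGGB cell from a uniform gray scene
                [100, 100]], dtype=np.uint8)
print(demosaic_superpixel(raw))    # one neutral pixel: [[[100. 100. 100.]]]
```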

The reduction in resolution is partly corrected by the “unsharp mask” method - increasing the contrast in the brightness transitions of the image. In the documentation, this operation is called “sharpness correction” or simply “sharpness”. Excessive use of an unsharp mask leads to the appearance of halos at the boundaries.
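The unsharp mask itself is simple to sketch in one dimension: blur the signal, subtract the blurred copy to isolate detail, then add the detail back amplified. The overshoot values in this example are exactly the halos the text warns about.

```python
import numpy as np

# One-dimensional unsharp mask: blur, subtract to isolate detail, add back.
def unsharp(signal, amount=1.0):
    padded = np.pad(signal, 1, mode="edge")              # repeat edge values
    blurred = np.convolve(padded, [0.25, 0.5, 0.25], mode="valid")
    return signal + amount * (signal - blurred)

edge = np.array([10.0, 10.0, 10.0, 200.0, 200.0, 200.0])
print(unsharp(edge))
# the undershoot (-37.5) and overshoot (247.5) around the step are the "halos"
```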

Often the sharpening is performed by the camera itself, but automatic sharpening frequently has too low a sensitivity threshold and amplifies digital noise. In amateur cameras, the unsharp mask can be disabled so that the necessary corrections can be made on a computer (in a RAW file converter or graphics editor) with the parameters best suited to each image, and applied in the required order.

Color artifacts

Interpolation may produce incorrect color at the edges and details of an image that are comparable in size to a pixel. Also, color artifacts can form moire patterns (see section ).

Improved interpolation algorithms that track color transitions are designed to prevent distortion at the boundaries. To suppress color artifacts in finished images, a “low-pass filter” algorithm is used, but its use makes small details of the image less contrasty and sharp.

RAW file converters and photo processing programs are responsible for preventing and suppressing color artifacts and moire. High-end cameras have built-in algorithms for this.

Alternative color schemes

The disadvantages of the Bayer filter force developers to look for alternative solutions. Here are the most popular ones.

Three-sensor circuits

These schemes use three sensors and a prism that separates the light flux into its component colors.

The main problem with a three-sensor system is combining the three resulting images into one. But this does not prevent it from being used in systems with relatively low resolution, such as video cameras.

Multilayer sensors

The idea of ​​a multilayer sensor, similar to modern color photographic film with a multilayer emulsion, has always occupied the minds of electronics developers, but until recently there were no methods for practical implementation.

Foveon developers decided to use the property of silicon to absorb light of different wavelengths (colors) to different depths crystal, placing the primary color sensors one below the other on various levels microcircuits. The implementation of this technology was sensors announced in 2005.

X3 sensors read the full gamut of colors at every pixel, so they are not prone to problems associated with color plane interpolation. They have their own problems - a tendency to noise, interlayer, etc. but this technology is still under active development.

Permission when applied to sensors, X3 has several interpretations based on various technical aspects. So for the top model Foveon “X3 10.2 MP”:

  • The final image has a pixel resolution 3,4 megapixel. This is how the user understands megapixel.
  • The sensor has 10,2 million sensors (or 3.4×3). The company uses this understanding for marketing purposes (these numbers are present in the markings and specifications).
  • The sensor provides image resolution (in a general sense) corresponding 7 -megapixel sensor with a Bayer filter (according to Foveon calculations), since it does not require interpolation and therefore provides a clearer image.

Comparative Features

Performance

Digital and film cameras generally have similar performance, determined by the delays before taking a picture in various modes. Although certain types of digital cameras may be inferior to film ones.

Shutter lag

However, most compact and budget digital cameras use slow but accurate contrasting autofocus (not applicable to film cameras). Film cameras in the same category use less accurate (relying on high) but fast focusing systems. SLR cameras (both digital and film) use the same system phase focusing with minimal delays.

To reduce the influence of autofocus on shutter lag (both in digital and in some types of film cameras), preliminary (including proactive, for moving objects) focusing is used, activated by the middle position of the three-position shutter button.

Viewfinder delay

Non-optical viewfinders used in non-DSLR digital cameras - LCD screen or electronic viewfinder(eyepiece with a CRT or LCD screen) may display an image with a delay, which, like shutter lag, can lead to a delay in shooting.

Ready time

Camera ready time is a concept that exists for electronic cameras and cameras with retractable elements. Most mechanical cameras are always ready to shoot, and there are no digital ones among them - all digital cameras and backs are electronic.

The readiness time of electronic cameras is determined by the time the camera starts initializing. For digital cameras, the initialization time can be longer, but it is quite short - 100-200 milliseconds.

Compact cameras with retractable lenses have significantly longer turnaround times, but both digital and film cameras have such lenses.

Continuous Shooting Delay

The delay during continuous shooting is due to the processing of the current frame and preparation for shooting the next one, which require some time. For a film camera, this processing would be to rewind the film to the next frame.

Before taking the next photo, the digital camera must:

  • Read data from the sensor;
  • Process the image - make a file of the desired format and size with the necessary corrections;
  • Write the file to digital media.

The slowest of the listed operations is writing to storage media (Flash card). To optimize it, it is used - writing a file to a buffer (AKA cache cache; region random access memory), with writing from a buffer to slow media, in parallel with other operations.

Processing includes a large number of operations for restoration, image correction, reduction to the required size and packaging into a file of the required format. To increase performance, in addition to increasing the operating frequency of the camera's processor, its efficiency is increased by developing specialized processors with hardware implementation of image processing algorithms.

Sensor reading speed usually becomes a performance bottleneck only in top models professional cameras, with high resolution sensors. Manufacturers eliminate all other types of delays in them. Usually, maximum speed The operation of a particular sensor is limited by physical factors that lead to sharp decreases in image quality at higher speeds. New types of sensors are being developed to work with greater productivity.

Also, the preparation time for shooting the next frame (both digital and conventional shooting) is affected by the time required to charge the flash, if one is used.

Maximum amount frames during continuous shooting

Caching writes to slow media sooner or later leads to the buffer being filled and performance dropping to the real level. Depending on the camera software, shooting can:

  • stay;
  • continue at low speed as images are recorded;
  • or continue at the same speed, overwriting previously captured but not recorded images in the buffer.

Therefore, for continuous shooting, in addition to the number of frames per second, the camera has a parameter maximum number of frames, which the camera can do before the recording cache overflows. This amount depends on:

  • Size of RAM and sensor resolution (factory specifications) of the camera;
  • User selected:
    • file format (if the camera allows it);
    • image size (if the format allows it);
    • image quality (if the format allows it).

Film cameras, due to their design, always work with real performance, and the maximum number of frames is limited only by the number of frames on the film.

Shooting in the infrared range

Most digital cameras allow shooting, partially, in invisible infrared range(thermal or infrared photography) because the photosensor is capable of detecting the upper part of this range. Visible light, if necessary, can be filtered with a special one.

In classical photography, infrared photography requires special film, but, unlike photosensors, it is capable of sensing most of the infrared range.

Below are the main advantages and problems of digital photography in comparison with the traditional photographic process based on photographic film.

Advantages

Get results quickly

Some cameras and printers allow you to take prints without a computer (cameras and printers with direct connection or printers that print from memory cards), but this option usually eliminates or reduces the ability to correct the image and has other limitations.

Flexible control of shooting parameters

Digital photography allows you to flexibly control some parameters that, in the traditional photographic process, are strictly tied to the photographic film material - light sensitivity and color balance (also called white balance).
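White balance is one such software-controlled parameter. As an illustration, here is a minimal sketch of the "gray world" heuristic - one of the simplest automatic white-balance approaches, shown only as a toy example and not as the algorithm any particular camera uses (the function name is hypothetical):

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so the scene's average color becomes neutral gray.

    img: float array of shape (H, W, 3), values in [0, 1].
    The 'gray world' assumption is a simple heuristic; real cameras
    use more elaborate white-balance algorithms.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gray = means.mean()                       # target neutral level
    gains = gray / means                      # per-channel correction gains
    return np.clip(img * gains, 0.0, 1.0)

# A flat patch with a bluish color cast:
patch = np.full((4, 4, 3), [0.4, 0.5, 0.6])
balanced = gray_world_white_balance(patch)
print(balanced[0, 0])   # all three channels now equal the mean, 0.5
```

In a film workflow the equivalent correction would require changing the film or adding conversion filters; here it is just arithmetic on the data.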

Digital noise

On the left of the image is a fragment of a photograph taken under unfavorable conditions (long exposure, high ISO sensitivity); the noise is clearly visible. On the right is a fragment taken under favorable conditions; the noise is almost unnoticeable

Digital photographs, to varying degrees, contain digital noise. The amount of noise depends on the technological features of the sensor (linear pixel size, CCD/CMOS technology used, etc.).

Noise is most visible in the shadows of the image. It increases with the ISO sensitivity set for the shot, and also with longer exposure times.
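The dependence of noise on amplification can be illustrated with a toy simulation (all numbers are illustrative, not taken from any real sensor): noise is commonly measured as the standard deviation of a flat gray patch, and the amplification applied for a higher ISO setting amplifies the deviations along with the signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def shoot_gray_patch(iso_gain, read_noise=2.0, signal=100.0):
    """Simulate one flat gray patch: photon (Poisson) noise plus sensor
    read noise, then amplification. Parameters are illustrative only."""
    photons = rng.poisson(signal, size=(64, 64)).astype(float)
    electrons = photons + rng.normal(0.0, read_noise, size=(64, 64))
    return electrons * iso_gain      # higher ISO = more amplification

low = shoot_gray_patch(iso_gain=1.0)
high = shoot_gray_patch(iso_gain=8.0)

# Noise measured as the standard deviation of a flat patch:
print(low.std(), high.std())   # the amplified patch is markedly noisier
```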

Digital noise is somewhat equivalent to film grain. Grain increases with film speed, just like digital noise. However, grain and digital noise are of different natures and differ in appearance:


How film grain and digital noise differ:

  • What it is: grain limits the resolution of the film - each grain follows the shape and size of a photosensitive crystal of the emulsion; digital noise consists of deviations introduced by the camera electronics and is formed by pixels (or spots of 2-3 pixels, after color-plane interpolation) of uniform size.
  • How it appears: grain - as a nonlinear texture of brightness and, to a lesser extent, color, with uneven edges at sharp transitions of brightness and color; noise - as a texture of brightness and color deviations over the whole image, reducing the visibility of details and creating inhomogeneities in monochromatic areas.
  • What it conveys: grain conveys exact brightness and colors, its deviations being positional in nature; noise conveys brightness and color with a statistical deviation toward gray, its chromatic deviations taking on colors unnatural for the subject (which irritates perception), the deviations being amplitude in nature.
  • With increased sensitivity: the maximum grain size increases; the noise level increases.
  • With increased exposure: grain does not change; the noise level (degree of deviation) increases.
  • In white areas: grain practically does not appear; noise appears weakly.
  • In black areas: grain practically does not appear; noise appears most strongly.

Unlike digital noise, which varies from camera to camera, the degree of film grain does not depend on the camera used - the most expensive professional camera and a cheap compact camera on the same film will produce an image with the same grain.

Digital noise begins to be suppressed even when reading from the sensor (by subtracting the “zero” level of each pixel from the read potential), and continues when the image is processed by the camera (or RAW file converter). If necessary, noise can also be further suppressed in image processing programs.
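The "zero level" subtraction mentioned above can be sketched as follows (a simplified model with a hypothetical fixed per-pixel offset; real cameras implement this in sensor hardware and firmware):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel "zero level" (fixed-pattern offset) of a sensor:
zero_level = rng.uniform(10.0, 20.0, size=(8, 8))

def read_sensor(light):
    """Raw readout = scene signal + the sensor's fixed per-pixel offset."""
    return light + zero_level

dark_frame = read_sensor(np.zeros((8, 8)))   # exposure with the shutter closed
raw = read_sensor(np.full((8, 8), 50.0))     # actual exposure

corrected = raw - dark_frame                 # subtract the "zero" level
print(corrected.mean())                      # ~50: fixed pattern removed
```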

When converting RAW files, the processing works with unmodified data from the camera's sensor and can therefore apply noise reduction more accurately, since the image and the noise in it have not yet been blurred by the interpolation of color planes (see section Color filter array systems).

Moire

Defect: moire when shooting a fine texture (a resolution test target)

When shooting digitally, the image is rasterized. If the subject contains another (not necessarily uniform) raster which, when focused, produces spatial frequencies close to the frequency of the sensor raster, moire may occur: beating between the rasters that forms zones of increased and decreased brightness. These zones can merge into lines and textures that were not present on the subject.
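The raster beating can be illustrated with a one-dimensional toy model (all numbers are illustrative): a grating slightly finer than one pixel is sampled by the sensor grid, and the recorded signal shows a slow spurious pattern instead of the true fine one:

```python
import numpy as np

# A fine grating (the subject's "raster") sampled by a coarser pixel grid:
sensor_pixels = 100                    # samples across the frame
grating_period_px = 0.917              # subject detail finer than one pixel

x = np.arange(sensor_pixels)
subject = np.sin(2 * np.pi * x / grating_period_px)

# Frequencies above the sensor's Nyquist limit fold back as a slow,
# large-scale beat (moire). Find the dominant recorded frequency:
spectrum = np.abs(np.fft.rfft(subject))
dominant_cycles = spectrum[1:].argmax() + 1    # cycles across the frame
true_cycles = sensor_pixels / grating_period_px
print(true_cycles, dominant_cycles)    # ~109 real cycles recorded as ~9
```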

Moire increases as the frequencies approach and the angle between the rasters decreases. The latter property means that moire can be reduced by filming the scene from a certain angle, selected experimentally. The normal orientation of the scene can be returned in a graphics editor (at the cost of losing edges and some loss of clarity).

Moire is greatly weakened by defocusing - including by "softening" filters (used in portrait photography) - or by optics whose resolution is too low to focus a point comparable to the sensor raster pitch (that is, by low-resolution optics or by a sensor with very small pixels).

Sensors, which are a rectangular matrix of light-sensitive sensors, have at least two rasters - a horizontal one, which is formed by lines of pixels, and a vertical one, perpendicular to it. Most modern cameras use a high sensor resolution, as well as special filters that slightly blur the image, so the possible moiré is quite weak.

High power consumption

In film photography, the image is produced chemically and does not require electricity. Electricity can only be used by additional electronic components (display, flash, motors, autofocus, exposure meters, etc.) if the camera is equipped with them. The process of obtaining and recording a digital image is completely electronic. Because of this, the vast majority of digital cameras consume more electricity than their electronic film counterparts (mechanical film cameras, of course, consume nothing at all). Compact cameras that use a liquid crystal screen with fluorescent backlight as a viewfinder are especially high in power consumption.

Sensors made using CMOS technology have lower power consumption than CCD sensors.

Due to power consumption, as well as the desire for compactness, most digital camera manufacturers have abandoned the AA and AAA batteries popular in film cameras in favor of more compact proprietary batteries with higher capacity. Some models can use AA batteries in optional battery packs.

Color filter array systems

The most common color film photography today uses a multilayer emulsion with layers sensitive to different ranges of the visible light spectrum.

Most modern color digital cameras use a Bayer mosaic filter or its analogues for color separation. In the Bayer filter, each sensor on the photosensor is covered by a light filter of one of the three primary colors and perceives only that color.

This approach has a number of disadvantages.

Resolution loss and color artifacts

The complete image is obtained by restoring (interpolating) the color of intermediate points in each of the color planes. Interpolation reduces the resolution (sharpness) of the image.

The loss of sharpness is partly compensated by the "unsharp mask" method - increasing contrast at brightness transitions. In camera documentation this operation is called "sharpness correction" or simply "sharpening"; excessive use of it produces halos at contrast boundaries. Cameras often apply sharpening automatically, but in many amateur models it can be disabled so that the correction can be made on a computer (in a RAW converter or graphics editor) with parameters suited to each particular image.

Interpolation can also produce incorrect color at edges and at details comparable in size to a pixel, and such color artifacts can form moire patterns. Improved interpolation algorithms that track color transitions reduce distortion at boundaries; a "low-pass filter" suppresses color artifacts in the finished image at the cost of making fine details less contrasty and sharp.

These issues are addressed by RAW file converters and photo editing programs; high-end cameras have built-in algorithms for this.
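The interpolation of a color plane can be sketched with the simplest bilinear scheme for the green plane (a toy model; real converters use edge-aware algorithms, and the function and mask names here are hypothetical):

```python
import numpy as np

def demosaic_green(bayer, pattern_green):
    """Bilinearly interpolate the green plane of a Bayer mosaic.

    bayer:         2-D array of raw sensor values.
    pattern_green: boolean mask, True where the pixel has a green filter.
    """
    green = np.where(pattern_green, bayer, 0.0)
    counts = pattern_green.astype(float)
    # Average the 4-neighborhood to fill the missing (non-green) pixels:
    neighbor_sum = np.zeros_like(green)
    neighbor_cnt = np.zeros_like(counts)
    for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        neighbor_sum += np.roll(green, shift, axis=(0, 1))
        neighbor_cnt += np.roll(counts, shift, axis=(0, 1))
    interpolated = neighbor_sum / np.maximum(neighbor_cnt, 1.0)
    return np.where(pattern_green, bayer, interpolated)

# In an RGGB tile, green sits on a checkerboard:
h = w = 6
yy, xx = np.mgrid[0:h, 0:w]
green_mask = (yy + xx) % 2 == 1
flat_scene = np.full((h, w), 80.0)        # uniform gray scene
plane = demosaic_green(flat_scene, green_mask)
print(plane.mean())                       # a flat scene stays flat: 80.0
```

On a flat scene interpolation is harmless; the artifacts described above appear where real detail is comparable in size to one pixel.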

Sensitivity

For good color rendering, each pixel must receive only part of the incident light spectrum. Thus part of the light is discarded, which reduces sensitivity. (In systems with a color-separation prism, potentially less light is lost.)

Alternative color separation schemes

The disadvantages of the Bayer filter force developers to look for alternative solutions. Here are the most popular ones.

Three-sensor circuits

These schemes use three sensors and a prism that separates the light flux into its component colors.

The main problem of a three-sensor system is combining the three resulting images into one. But this does not prevent it from being used in systems with relatively low resolution, for example in video cameras.

Multilayer sensors

The idea of ​​a multilayer sensor, similar to modern color photographic film with a multilayer emulsion, has always occupied the minds of electronics developers, but until recently there were no methods for practical implementation.

Foveon developers decided to take advantage of silicon's ability to absorb light of different wavelengths (colors) at different depths of the crystal by placing the primary color sensors below each other at different levels of the chip. Sensors implementing this technology were announced in 2005.

X3 sensors read the full gamut of colors at every pixel, so they are not prone to the problems associated with color plane interpolation. They have problems of their own - a tendency to noise, interlayer chromatic aberration, etc. - but the technology is still under active development.

Resolution, as applied to X3 sensors, has several interpretations based on different technical aspects. Thus, for the model "Foveon X3 10.2 MP":

  • The final image has a pixel resolution of 3.4 megapixels - resolution as the user usually understands it.
  • The sensor contains 10.2 million individual sensors (3.4×3). The company uses this figure for marketing purposes (it appears in the markings and specifications).
  • According to Foveon's calculations, the sensor provides image resolution (in the general sense) corresponding to a 7-megapixel sensor with a Bayer filter, since it requires no interpolation and therefore gives a sharper image.

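The first two figures are tied together by trivial arithmetic; a quick sketch using only the numbers from the text:

```python
# Figures for the "Foveon X3 10.2 MP" sensor, as given in the text:
pixels_mp = 3.4                  # pixels in the final image, megapixels
layers = 3                       # stacked sensors per pixel (R, G, B)

sensors_mp = pixels_mp * layers  # total photosensors, the marketing figure
print(sensors_mp)                # ~10.2 million sensors
```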
Dichroic division within a pixel

A prototype of a matrix with color separation inside a pixel has been created, devoid of most of the disadvantages of all the above color separation methods. However, its extremely low manufacturability prevents its widespread implementation.

Comparative Features

Performance

Digital and film cameras generally have similar performance, determined by the delays before a frame is taken in various modes, although certain types of digital cameras are inferior to film ones.

Shutter lag

Most of a camera's shutter lag comes from autofocus. Most compact and budget digital cameras use slow but accurate contrast-detection autofocus (not used in film cameras); film cameras of the same class use less accurate (relying on large depth of field) but fast focusing systems.
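Why contrast detection is slow but accurate can be shown with a toy model: the camera must actually read out and evaluate the image at many lens positions before it knows which one was sharpest (the function names and the contrast curve below are hypothetical):

```python
def contrast_autofocus(sharpness_at, positions):
    """Step a (hypothetical) lens through focus positions and keep the
    one with maximum image contrast.

    sharpness_at: function mapping lens position -> contrast metric.
    Returns the best position and how many measurements were needed.
    """
    best_pos, best_sharpness, steps = None, float("-inf"), 0
    for pos in positions:
        steps += 1                     # each step costs a sensor readout
        s = sharpness_at(pos)
        if s > best_sharpness:
            best_pos, best_sharpness = pos, s
    return best_pos, steps

# Toy contrast curve peaking at position 37 (arbitrary units):
curve = lambda p: -(p - 37) ** 2
pos, steps = contrast_autofocus(curve, range(0, 101))
print(pos, steps)   # finds 37 exactly, but needed 101 measurements
```

Phase-detection systems, by contrast, compute the required lens displacement from a single measurement, which is why SLR autofocus is so much faster.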

SLR cameras (both digital and film) use the same phase-detection focusing system, which has minimal delays.

To reduce the effect of autofocus on shutter lag (in digital and some types of film cameras), preliminary focusing (including predictive focusing for moving subjects) is used, activated by half-pressing the shutter button.

Viewfinder lag

Non-optical viewfinders used in non-SLR digital cameras - the LCD screen or an electronic viewfinder (an eyepiece with a small CRT or LCD screen) - may display the image with a delay, which, like shutter lag, can delay the shot.

Ready time

Camera ready time is a concept that applies to electronic cameras and to cameras with retractable elements. Most mechanical cameras are always ready to shoot, but none of them are digital: all digital cameras and digital backs are electronic.

The readiness time of an electronic camera is determined by its initialization time at power-on. For digital cameras initialization can take longer, but it is still short - 0.1-0.2 seconds.

Compact cameras with retractable lenses have significantly longer ready times, but such lenses are found on both digital and film models.

Continuous shooting delay

The delay during continuous shooting is due to the processing of the current frame and preparation for shooting the next one, which require some time. For a film camera, this processing would be to rewind the film to the next frame.

Before taking the next photo, the digital camera must:

  • Read data from the sensor;
  • Process the image - make a file of the desired format and size with the necessary corrections;
  • Write the file to digital media.

The slowest of the listed operations is writing to the storage media (flash card). To speed it up, caching is used: the file is first written to a buffer (a region of RAM) and then copied from the buffer to the slow media in parallel with other operations.
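The caching scheme can be sketched as a producer-consumer pair (a toy model; the buffer size and write delay are hypothetical, not taken from any real camera):

```python
import queue
import threading
import time

# The shooting thread drops frames into a RAM buffer; a worker thread
# drains it to the "slow media" in parallel.
buffer = queue.Queue(maxsize=4)          # hypothetical buffer: 4 frames
written = []

def writer():
    while True:
        frame = buffer.get()
        if frame is None:                # sentinel: shooting finished
            break
        time.sleep(0.01)                 # simulate a slow flash-card write
        written.append(frame)

worker = threading.Thread(target=writer)
worker.start()

for i in range(8):                       # a burst of 8 shots
    buffer.put(f"frame{i}")              # blocks only once the buffer is full

buffer.put(None)
worker.join()
print(written)                           # all 8 frames reach the media
```

Shooting proceeds at full speed while the buffer has room; once it fills, `put` blocks, which is exactly the slowdown described in the next section.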

Processing includes a large number of operations to restore, correct the image, reduce it to the required size and package it into a file of the required format. To increase performance, in addition to increasing the operating frequency of the camera's processor, its efficiency is increased by developing specialized processors with hardware implementation of image processing algorithms.

Sensor reading speed usually becomes a performance bottleneck only in top models of professional cameras with high-resolution sensors. Manufacturers eliminate all other types of delays in them. As a rule, the maximum operating speed of a particular sensor is limited by physical factors that lead to sharp decreases in image quality at higher speeds. New types of sensors are being developed to work with greater productivity.

The time needed to charge the flash, if one is used, also adds to the preparation time for the next frame (in both digital and film shooting).

Maximum number of frames during continuous shooting

Caching writes to slow media sooner or later fills the buffer, and performance drops to its real (sustained) level. Depending on the camera's software, shooting can then:

  • stop;
  • continue at low speed as images are recorded;
  • or continue at the same speed, overwriting previously captured but not recorded images in the buffer.

Therefore, for continuous shooting, in addition to the number of frames per second, the camera has another parameter: the maximum number of frames it can take before the write cache overflows. This number depends on:

  • Size of RAM and sensor resolution (factory specifications) of the camera;
  • User selected:
    • file format (if the camera allows it);
    • image size (if the format allows it);
    • image quality (if the format allows it).
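The dependence above comes down to simple division; a rough back-of-the-envelope estimate (all figures are hypothetical, not a real camera's specifications):

```python
def max_burst_frames(buffer_mb, frame_mb):
    """Rough burst-depth estimate: how many frames fit in the write
    buffer before the camera must slow down to the card's speed."""
    return int(buffer_mb // frame_mb)

# Hypothetical camera with a 128 MB write buffer:
print(max_burst_frames(128, frame_mb=4.0))    # ~4 MB JPEG frames  -> 32
print(max_burst_frames(128, frame_mb=15.0))   # ~15 MB RAW frames  -> 8
```

This is why switching the file format, image size, or quality setting changes the advertised burst depth.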

Film cameras, due to their design, always work with real performance, and the maximum number of frames is limited only by the number of frames on the film.

Shooting in the infrared range

Most modern (2008) digital cameras contain a filter that removes the infrared component from the light flux. However, in a number of cameras this filter can be removed; by then filtering out the visible part of the light, one can photograph in the invisible infrared range (recording thermal radiation or shooting with infrared illumination). In classical photography, infrared shooting requires special film, which, unlike photosensors, is sensitive to most of the infrared range.
