Thursday, April 11, 2019

White Balance

White balance is a feature many digital cameras and video cameras use to accurately balance color. It defines what the color white looks like in specific lighting conditions, which also affects the hue of all other colors. Therefore, when the white balance is off, digital photos and recordings may appear to have a certain hue cast over the image. For example, fluorescent lights may cause images to have a greenish hue, while pictures taken on a cloudy day may have a blue tint.
Since different types of lighting affect the way a camera's sensor captures color, most digital cameras and camcorders include an auto white balance (AWB) setting. The AWB setting automatically adjusts the white balance when capturing a photo or recording video. However, this setting may not always provide the most accurate color. Therefore, many cameras and camcorders also include preset white balance settings for different lighting conditions. Common options include fluorescent light, tungsten light (for typical indoor lighting), cloudy conditions, bright sunlight, and camera flash. By choosing the appropriate white balance preset, you may be able to capture pictures with more accurate color.

Some high-end cameras and camcorders also include a custom white balance option. This feature allows you to take a sample of a white object, such as a white wall or a piece of paper, within the current lighting conditions. By manually setting the white balance to the white in the sample image, you can set the white balance with a high degree of accuracy. Of course, if you find out you have already taken several photos with the incorrect white balance setting, you can adjust the color afterwards with an image-editing program.
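If you do correct white balance after the fact, most editors apply some form of neutral-patch scaling: sample an area that should be white or gray, then scale the color channels so that sample comes out neutral. Below is a minimal Python/NumPy sketch of that idea; the function name, array layout, and sample values are illustrative assumptions, not the workings of any particular editing program.

    import numpy as np

    def white_balance_from_patch(image, patch):
        """Scale the R, G, B channels so a sampled neutral patch comes out gray.

        image and patch are float RGB arrays (H x W x 3) in the 0..1 range;
        patch is a crop of the white/gray reference shot under the same light.
        """
        patch_mean = patch.reshape(-1, 3).mean(axis=0)   # average color of the patch
        gains = patch_mean.mean() / patch_mean           # per-channel correction factors
        return np.clip(image * gains, 0.0, 1.0)

    # Hypothetical usage: an image with a bluish cast whose reference patch
    # averages roughly (0.7, 0.8, 0.9) instead of neutral gray.
    img = np.random.rand(4, 4, 3)
    ref = np.ones((2, 2, 3)) * np.array([0.7, 0.8, 0.9])
    balanced = white_balance_from_patch(img, ref)
    print(balanced.shape)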
Some digital video cameras also include a "black balance" setting, which is used to define how black should appear in the current lighting conditions. However, this setting is used far less commonly than white balance.
Color temperature

Color temperature refers to a characterization of the spectral properties of a light source and is commonly used during the production phase in the film and photography industries. Low color temperatures correspond to warmer, more yellow-to-red light, while high color temperatures correspond to colder, more blue light. Daylight, for example, has a lower color temperature near dawn and a higher one during the day. The standard unit of measurement for color temperature is the kelvin (K). Some typical values include the following:

·         candles or oil lamps: 1000K
·         household light bulbs: 2500K
·         bright sunshine on a clear day: 6000K
·         very overcast sky: 10,000K
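For quick reference, the list above (plus a couple of common presets) can be expressed as a small lookup table. This is only an illustrative Python sketch; the fluorescent value and the mired conversion are common approximations I have added, not figures from the list above.

    # Approximate color temperatures in kelvin; values are rough guides only.
    color_temperature_k = {
        "candle or oil lamp": 1000,
        "household light bulb (tungsten preset)": 2500,
        "fluorescent preset (assumed typical value)": 4000,
        "bright sunshine on a clear day (daylight preset)": 6000,
        "very overcast sky (cloudy/shade presets)": 10000,
    }

    def mired(kelvin):
        """Mired (micro reciprocal degree), often used for white-balance shift math."""
        return 1_000_000 / kelvin

    for source, k in color_temperature_k.items():
        print(f"{source}: {k} K (~{mired(k):.0f} mired)")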

 

How to Create a Custom White Balance Setting on a Canon EOS 70D

If none of the preset White Balance options on the Canon EOS 70D produces the right amount of color correction, you can create your own, custom setting. To use this technique, you need a piece of card stock that’s either neutral gray or absolute white — not eggshell white, sand white, or any other close-but-not-perfect white. (You can buy reference cards made just for this purpose in many camera stores for less than $20.)
Position the reference card so that it receives the same lighting you’ll use for your photo. Then follow these steps:
1.    Set the camera to the P, Tv, Av, M, or C exposure mode.
You can’t create a custom setting in any of the fully automatic modes or in B (Bulb) mode.
2.    Set the White Balance setting to Auto (AWB).
3.    Set the lens to manual focusing.
This step helps because the camera may have a hard time autofocusing on the card stock.
4.    Frame the shot so that your reference card fills the center area of the viewfinder.
Make sure that at least the center autofocus point and the six surrounding points fall over the reference card.
5.    Set focus and make sure that the exposure settings are correct.
Just press the shutter button halfway to check exposure. If necessary, adjust ISO, aperture, or shutter speed to get a proper exposure.
6.    Take the picture of your reference card.

The camera will use this picture to establish your custom White Balance setting.
7.    Display Shooting Menu 3 and choose Custom White Balance, as shown in the following figure.
After you select the option, you see the screen shown on the left in the figure below. The image you just captured should appear. If it doesn’t, use the normal playback controls to scroll to it. (Note that you may see additional data on the screen depending on the current playback display mode; press the Info button to cycle through the various displays.)
8.    Tap the Set icon (or press the Set button).
You see the message shown on the right in the figure below, asking you to confirm that you want the camera to use the image to create the custom White Balance setting.
Your white image appears on the screen (left); press or tap Set and then confirm that you want to store that image as your White Balance preset (right).
9.    Tap OK or highlight it and press the Set button.
Now you see the screen shown on the right in the figure below. This message tells you that the White Balance setting is now stored. The little icon in the message area represents the custom setting.
10.  Tap OK (or highlight it and press Set).
Your custom White Balance setting remains stored until the next time you work your way through these steps. Any time you’re shooting in the same lighting conditions and want to apply the same White Balance correction, just select the Custom option as your White Balance setting. Remember, the icon for that setting looks like the one on the screen in the above figure.


What is Frames per Second (FPS)

Human vision
The temporal sensitivity and resolution of human vision vary depending on the type and characteristics of the visual stimulus, and they differ between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is above roughly 50 Hz to 90 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for very short, single-millisecond visual stimuli having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus; for example, a 10 ms green flash of light immediately followed by a 10 ms red flash is perceived as a single yellow flash.
Film and video
Silent films

Early silent films had stated frame rates anywhere from 16 to 24 frames per second (fps), but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector. Film companies often intended that theaters show their silent films at higher frame rates than they were filmed at. These frame rates were enough for the sense of motion, but the result was perceived as jerky. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye." In the mid to late 1920s, the frame rate for silent films increased to between 20 and 26 fps.
Sound films
When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive to changes in audio frequency. Many theaters had shown silent films at 22 to 26 fps—which is why the industry chose 24 fps for sound as a compromise. From 1927 to 1930, as various studios updated equipment, the rate of 24 fps became standard for 35 mm sound film.  At 24 fps, the film travels through the projector at a rate of 456 millimeters (18.0 in) per second. This allowed for simple two-blade shutters to give a projected series of images at 48 per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second—each frame is flashed on screen three times.
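Those projector figures check out with a quick calculation, assuming the standard 4-perforation 35 mm frame pitch of 19 mm (the pitch is an added assumption, not stated above):

    frame_pitch_mm = 19      # assumed 4-perf 35 mm frame pitch
    frames_per_second = 24

    film_speed_mm = frame_pitch_mm * frames_per_second
    print(film_speed_mm, "mm/s")                     # 456 mm/s
    print(round(film_speed_mm / 25.4, 1), "in/s")    # ~18.0 in/s

    # Flashes per second with multi-blade shutters:
    for blades in (2, 3):
        print(blades, "blades ->", frames_per_second * blades, "images per second")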
Modern video standards

Due to the mains frequency of electric grids, analog television broadcast was developed with frame rates of 50 Hz (most of the world) or 60 Hz (US, Japan, South Korea). Hydroelectric generators, due to their massive size, had enough rotational inertia to keep the power mains frequency extremely stable, so circuits were developed for television cameras to lock onto that frequency as their primary reference.
The introduction of color television technology made it necessary to lower that 60 fps frequency by 0.1% to avoid "dot crawl", an annoying display artifact appearing on legacy black-and-white displays on highly color-saturated surfaces. Lowering the frame rate by 0.1% was found to greatly reduce that effect.
Today, the video transmission standards of North America, Japan, and South Korea are still based on 60 ÷ 1.001 ≈ 59.94 images per second. Two image sizes are typically used: 1920x540 (1080i) and 1280x720 (720p). Confusingly, interlaced formats are customarily stated at half their image rate, 29.97 fps, and double their image height, but these statements are purely convention; in each format, roughly 60 images per second are produced. 1080i produces 59.94 1920x540 images, each squashed to half height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94 1280x720 images that are not squeezed, so no expansion or squeezing of the image is necessary. This confusion was industry-wide in the early days of digital video software, with much software written on the assumption that only 29.97 images were expected each second, which was incorrect. While it is true that each picture element was polled and sent only 29.97 times per second, the pixel location immediately below it was polled 1/60th of a second later as part of a completely separate image for the next 1/60-second frame.
Film, at its native 24 fps rate, could not be displayed without the necessary pulldown process, often leading to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice, while every even frame is tripled. This creates uneven, stroboscopic-looking motion, as shown in the sketch below. Other conversions have similar uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly multiplied for common frame rates such as 24 fps film and 30 fps video, as well as 25 and 50 fps video in the case of 300 fps displays. These standards also support video that is natively in higher frame rates, and video with interpolated frames between its native frames. Some modern films are experimenting with frame rates higher than 24 fps, such as 48 and 60 fps.
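A minimal Python sketch of that 2:3 repeat pattern (the function name and frame representation are purely illustrative):

    def pulldown_24_to_60(frames):
        """Expand 24 fps footage to 60 fps by repeating frames in a 2, 3, 2, 3... pattern."""
        output = []
        for i, frame in enumerate(frames):
            repeats = 2 if i % 2 == 0 else 3   # 1st, 3rd, ... frames shown twice; 2nd, 4th, ... three times
            output.extend([frame] * repeats)
        return output

    one_second = list(range(24))                  # 24 source frames
    print(len(pulldown_24_to_60(one_second)))     # 60 output frames, but unevenly timed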
Frame rate in electronic camera specifications may refer to the maximum possible number of frames per second, although in practice other settings (such as exposure time) may reduce the actual frequency to a lower number.


Aperture/f-stop and depth of field

The aperture of a lens is the opening that regulates the amount of light that passes through the lens. It is controlled by a diaphragm inside the lens, which is in turn controlled either manually or by the exposure circuitry in the camera body.

The relative aperture is specified as an f-number, the ratio of the lens focal length to its effective aperture diameter. A small f-number like f/2.0 indicates a large aperture (more light passes through), while a large f-number like f/22 indicates a small aperture (little light passes through). Aperture settings are usually not continuously variable; instead the diaphragm typically has 5–10 discrete settings. The normal "full-stop" f-number scale for modern lenses is as follows: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, but many lenses also allow half-stop or third-stop increments. A "slow" lens (one that cannot pass a lot of light through) might have a maximum aperture of f/5.6 to f/11, while a "fast" lens (one that can pass more light through) might have a maximum aperture of f/1 to f/4. Fast lenses are by definition larger than slow lenses of comparable focal length, and typically cost more.

The aperture affects not only the amount of light that passes through the lens, but also the depth of field of the resulting image: a larger aperture (a smaller f-number, e.g. f/2.0) gives a shallow depth of field, while a smaller aperture (a larger f-number, e.g. f/11) gives a greater depth of field.
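The relationship between f-numbers and light is easy to verify numerically: light reaching the sensor is proportional to 1/N², so each full stop multiplies the f-number by √2. A small Python sketch (the helper name is arbitrary):

    import math

    def stops_between(n1, n2):
        """Exposure stops of light lost going from f-number n1 to the larger n2."""
        return 2 * math.log2(n2 / n1)            # light is proportional to 1 / N^2

    print(round(stops_between(2.8, 4.0), 2))      # ~1 stop (half the light)
    print(round(stops_between(2.0, 8.0), 2))      # 4 stops (1/16 of the light)

    # The nominal full-stop scale is successive powers of sqrt(2); values like
    # 5.6, 11 and 22 are conventional roundings.
    print([round(math.sqrt(2) ** i, 1) for i in range(6)])   # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7]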
Focal length and angle of view
The focal length of a lens, together with the size of the image sensor in the camera (or the size of the 35 mm film frame), determines the angle of view. A lens is considered a "normal lens", in terms of its angle of view on a camera, when its focal length is approximately equal to the diagonal dimension of the film or image sensor format. The resulting diagonal angle of view of about 53 degrees is often said to approximate the angle of human vision; since the angle of view of a human eye is at least 140 degrees, more careful authors qualify the claim, describing it, for example, as "similar to the angle of crisp human vision." A wide-angle lens has a shorter focal length and includes more of the viewed scene than a normal lens; a telephoto lens has a longer focal length and images a smaller portion of the scene, making it seem closer.
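The diagonal angle of view can be computed directly from the focal length and the sensor diagonal; for a rectilinear lens focused at infinity it is 2·arctan(d / (2f)). A short Python sketch (the 43.3 mm full-frame diagonal is an assumed standard value):

    import math

    def angle_of_view(focal_length_mm, sensor_diagonal_mm):
        """Diagonal angle of view in degrees: 2 * atan(d / (2 * f))."""
        return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

    full_frame_diagonal = 43.3   # mm, for a 36 x 24 mm sensor
    print(round(angle_of_view(43, full_frame_diagonal)))    # ~53 degrees: "normal" lens
    print(round(angle_of_view(24, full_frame_diagonal)))    # ~84 degrees: wide angle
    print(round(angle_of_view(200, full_frame_diagonal)))   # ~12 degrees: telephoto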

Lenses are not labeled or sold according to their angle of view, but rather by their focal length, usually expressed in millimeters. But this specification is insufficient to compare lenses for different cameras because field of view also depends on the sensor size. For example, a 50 mm lens mounted on a Nikon D3 (a full-frame camera) provides approximately the same field of view as a 32 mm lens mounted on a Sony α100 (an APS-C camera). Conversely, the same lens can produce different fields of view when mounted on different cameras. For example, a 35 mm lens mounted on a Canon EOS 5D (full-frame) provides a slightly wide-angle view, while the same lens mounted on a Canon EOS 400D (APS-C) provides a "normal" or slightly telephoto view.

In order to make it easier to compare lens–camera pairs, it is common to talk about their 35 mm equivalent focal length. For example, when talking about a 14 mm lens for a Four Thirds System camera, one would not only indicate that it had a focal length of 14 mm, but also that its "35 mm equivalent focal length" is 28 mm. This way of talking about lenses is not just limited to SLR and DSLR lenses; it is very common to see this focal length equivalency in the specification of the lens on a digicam.
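In practice the conversion is just a multiplication by the camera's crop factor (the ratio of the full-frame diagonal to the sensor's diagonal). A small sketch using commonly quoted approximate crop factors:

    def equivalent_focal_length(focal_length_mm, crop_factor):
        """35 mm equivalent focal length = actual focal length * crop factor."""
        return focal_length_mm * crop_factor

    print(equivalent_focal_length(14, 2.0))   # Four Thirds, crop factor ~2.0 -> 28.0 mm
    print(equivalent_focal_length(50, 1.5))   # APS-C, crop factor ~1.5      -> 75.0 mm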



Interchangeable lenses

Most SLR and DSLR cameras provide the option of changing the lens. This enables the use of lenses that are best suited to the current photographic need, and allows the attachment of specialized lenses. Film SLR cameras have existed since the late 1950s, and over the years a very large number of different lenses have been produced, both by camera manufacturers (who typically make lenses only for their own camera bodies) and by third-party optics companies, who may make lenses for several different camera lines.


DSLRs came onto the market in the 1990s and have become increasingly affordable and extremely popular in recent years. Some manufacturers, for example Minolta, Canon and Nikon, chose to make their DSLRs fully compatible with their existing SLR lenses from the beginning, allowing owners of new DSLRs to continue to use their existing lenses and get a longer lifespan from their investment. Others, for example Olympus, chose to create a completely new lens mount and series of lenses for their DSLRs. The Pentax SLR camera K-mount system is backward compatible with all previous lens generations from Pentax, including the latest digital SLRs such as the K-3 and K-50. A Pentax K-mount lens from the early 1970s can be used on the newest Pentax DSLR, although it may not provide features that are included in newer lenses (e.g. autofocus). There are a few exceptions among the MZ and ZX series of Pentax film cameras that do not work with some of the older lenses.

As implied by the above, lenses are only directly interchangeable within the "mount system" for which they are built. Mixing mounting systems requires an adapter, and most often results in compromises such as loss of functionality (e.g. lack of autofocus or automatic aperture control). Further, in some cases the adapter will require an additional optical element to correct for varied registration distances (the distance from the rear of the mount to the focal plane on the image sensor or film). Adapters may not be available to bridge every combination of lens mount and camera mount.


What is ISO

ISO is one of three important settings on your camera used to take a well-exposed photo. The other two are Aperture and Shutter Speed.
If you’d like to learn about these, check out our introduction to aperture and our beginner’s guide to shutter speed.

An Introduction to ISO Settings in Photography
We regularly get questions about ISO from readers of Digital Photography School like these:
‘What is ISO and why is it important? What is the best setting to choose? Should I always choose the lowest one?’
In this short tutorial I want to answer each question in turn. Let’s start with a definition of ISO.
What is ISO?
ISO in Traditional/Film Photography
In traditional (film) photography ISO (or ASA) was the indication of how sensitive a film was to light. It was measured in numbers (you’ve probably seen them on films – 100, 200, 400, 800 etc). The lower the number the lower the sensitivity of the film and the finer the grain in the shots you’re taking.
ISO in Digital Photography
In Digital Photography ISO measures the sensitivity of the image sensor.
The same principles apply as in film photography – the lower the number the less sensitive your camera is to light and the finer the grain.
Higher numbers mean your sensor becomes more sensitive to light which allows you to use your camera in darker situations. The cost of doing so is more grain (although cameras are improving all the time and today many are able to use high ISO settings and still get very useable images).
An example of a situation where you might want to choose a higher ISO would be photographing an indoor sporting event where the light is low and your subject is moving fast. By choosing a higher ISO you can use a faster shutter speed to freeze the movement.
ISO Settings and Grain
As mentioned – the cost of choosing higher ISO settings is that you get more grain or noise in your images the higher you go.
I’ll illustrate this below with two enlargements of shots that I just took – the one on the left is taken at 100 ISO and the one on the right at 3200 ISO.


100 ISO is generally accepted as a ‘normal’ or ‘standard’ ISO and will give you lovely crisp shots (with little to no noise/grain).
Most people tend to keep their digital cameras in ‘Auto Mode’, where the camera selects the appropriate ISO setting depending upon the conditions you’re shooting in (it will try to keep it as low as possible), but most cameras also give you the opportunity to select your own ISO.
When you do override your camera and choose a specific ISO, you’ll notice that it impacts the aperture and shutter speed needed for a well-exposed shot. For example – if you bumped your ISO up from 100 to 400, you’ll notice that you can shoot at higher shutter speeds and/or smaller apertures.
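The trade-off is easy to see in stop arithmetic: each doubling of ISO buys one stop, which you can spend on a faster shutter speed or a smaller aperture. A minimal Python sketch with hypothetical metered values:

    import math

    def stops_gained(iso_old, iso_new):
        """Exposure stops gained by raising ISO (each doubling is one stop)."""
        return math.log2(iso_new / iso_old)

    # Hypothetical starting exposure: ISO 100, f/2.8, 1/60 s.
    stops = stops_gained(100, 400)              # 2.0 stops
    faster_shutter = (1 / 60) / (2 ** stops)    # same aperture -> ~1/240 s
    print(stops, round(1 / faster_shutter))     # 2.0 240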
ISO is an important aspect of digital photography to understand if you want to gain more control of your digital camera. Experiment with different settings and how they impact your images today – particularly learn more about Aperture and Shutter Speed, which together with ISO make up the Exposure Triangle.

How to use ISO sensitivity when shooting video

The Video Mode takes a look at how best to use ISO sensitivity when shooting video, and how it compares to taking stills. There are plenty of similarities in ISO settings between the two.
For example, sticking with a lower ISO sensitivity in the range of ISO 100–400 will generally produce higher-quality results than shooting at ISO 1600 and above.
That’s because the ISO control regulates how strongly the signal from the sensor’s pixels is amplified. A higher ISO creates a brighter image, but because of the increased amplification, the recorded images or video will show more digital artefacts, sometimes referred to as “grain” or “noise”.
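To see why higher ISO settings look noisier, here is a toy Python/NumPy model: at a higher ISO the camera collects less light and amplifies it more, so the relative photon (shot) noise grows. The numbers are invented for illustration and ignore read noise and in-camera noise reduction.

    import numpy as np

    rng = np.random.default_rng(0)
    pixels = 100_000
    base_photons = 4000   # photons per pixel captured at ISO 100 (hypothetical)

    for iso, gain in [(100, 1), (400, 4), (3200, 32)]:
        photons = rng.poisson(base_photons / gain, pixels)  # less light at higher ISO
        output = photons * gain                             # amplified to the same brightness
        snr = output.mean() / output.std()
        print(f"ISO {iso}: signal-to-noise ratio ~ {snr:.0f}")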

Using ISO sensitivity: Grain

For some projects, grain can be used to good effect to create a more cinematic or artistic-looking video. But as you become more experienced in video production you’ll learn that, while as photographers we aim to get everything as close to perfect as possible in-camera, in filmmaking you’ll want as clean an image as possible, with the maximum amount of data to work with in post-production.
If you’ve recorded clean footage with low noise, you can add the precise amount that you want during the editing process.

Using ISO sensitivity: Noise

It’s good to experiment with ISO settings to find what you feel is an acceptable level of noise for the videos you’re making. There’s also an interesting quirk with some cameras, mostly Canon-branded, which suggests that using ISO sensitivities in multiples of ISO 160 can reduce the impact of noise.
In short, the theory is that using these interval ISO settings decreases the impact of noise at the cost of reduced dynamic range. For more information, read this insightful discussion as to why you may get better results if you decide to use 160-multiples.
Newer cameras are much better equipped to deal with high ISO sensitivity settings, though, because they use clever algorithms to reduce the impact of noise.
Cameras with larger sensors and cameras designed for video such as the Canon Cine EOS also handle noise better because they have larger pixels. But all things being equal, the lower the ISO, the better the quality of the recorded image.



Electronic shutter

Digital image sensors (both CMOS and CCD image sensors) can be constructed to give a shutter-equivalent function by transferring the charges of many pixel cells at one time to a paired, shaded duplicate; this is called a frame-transfer shutter. If the full frame is transferred at one time, it is a global shutter. Often the shaded cells can be read out independently while the others are again collecting light. Extremely fast shutter operation is possible as there are no moving parts or serialized data transfers. A global shutter can also be used for video as a replacement for rotary disc shutters.
Image sensors without a shaded full-frame double must use serialized data transfer of illuminated pixels, called a rolling shutter. A rolling shutter scans the image in a line-by-line fashion, so that different lines are exposed at different instants, as in a mechanical focal-plane shutter; motion of either the camera or the subject will therefore cause geometric distortions, such as skew or wobble.

Today, most digital cameras use a combination of mechanical and electronic shutters, or a mechanical shutter alone. Mechanical shutters can reach speeds up to 1/16,000 second (for example, the Minolta Dynax/Maxxum/α-9 film camera had a maximum of 1/12,000, a record in its era, and the later digital Nikon D1 series was capable of 1/16,000), while electronic shutters can reach at least 1/32,000 second, as used in many superzoom cameras and currently many Fujifilm APS-C cameras (X-Pro2, X-T1, X100T and others).


Rolling shutter

A photo of a Eurocopter EC-120. Notice that the rotor blades seem to be swept back more than usual due to the rolling shutter effect.
Simulation of the rolling shutter effect on a rotating propeller and a moving car

Rolling shutter is a method of image capture in which a still picture (in a still camera) or each frame of a video (in a video camera) is captured not by taking a snapshot of the entire scene at a single instant in time but rather by scanning across the scene rapidly, either vertically or horizontally. In other words, not all parts of the image of the scene are recorded at exactly the same instant. (Though, during playback, the entire image of the scene is displayed at once, as if it represents a single instant in time.) This produces predictable distortions of fast-moving objects or rapid flashes of light. This is in contrast with "global shutter" in which the entire frame is captured at the same instant.


The "rolling shutter" can be either mechanical or electronic. The advantage of this method is that the image sensor can continue to gather photons during the acquisition process, thus effectively increasing sensitivity. It is found on many digital still and video cameras using CMOS sensors. The effect is most noticeable when imaging extreme conditions of motion or the fast flashing of light. While some CMOS sensors use a global shutter, the majority found in the consumer market use a rolling shutter.
CCDs (charge-coupled devices) are an alternative to CMOS sensors; they are generally more sensitive and more expensive. CCD-based cameras often use global shutters, which take a snapshot representing a single instant in time and therefore do not suffer from the motion artifacts caused by rolling shutters.