How Does a Smartphone Camera Work? A Detailed Analysis

Millions of photos are captured every day with smartphone cameras. But have you ever wondered how a smartphone camera actually works?

Almost all cameras work the same way: they all use light to create an image.

However, by design, smartphone cameras have to be very small compared to other digital cameras. This significantly influences how mobile cameras function, and also what quality of images they can produce.

In this article, we'll look at just how smartphone cameras work. By the end, you should have a pretty good idea of how a mobile camera works.

So, let's get right to it!

The first and most important thing a camera needs in order to work is light. So let's start by understanding light.

Light

How does light work?

To understand how a smartphone camera works, you first need to understand the basics of how light works.

Light is simply made up of different colors: the colors of the rainbow, which many of us learned about in school through the prism experiment. The "white" light we see every day from the sun is actually made up of seven different colors.

However, we can't actually see these individual colors except when the light travels through an object like a glass prism and gets split, effectively creating a rainbow.

Refraction theory

This behavior of light is called "refraction" in basic science. It happens when light bends as it travels from one medium to another, as seen with the prism.

When light travels through empty space, it moves in a straight line at a speed of around 300,000 km/s. But when light travels from air into a denser material such as water or glass, it slows down. This slowing down is what causes it to bend.

Let's understand this with an example: if you stick a pole in a pool of water, you'll notice that the pole appears to bend right where the water and air meet.

The pole itself has not changed shape, of course. It only appears bent because of how the light is refracted by water, which is denser than air.

Just as light slows down and bends when it travels from a less dense medium like air into a denser one such as water, it speeds up and bends again when moving from a dense medium into a less dense one.
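To make this bending concrete, here is a small Python sketch of Snell's law, n1*sin(theta1) = n2*sin(theta2), which governs the refraction described above. The refractive index values used are assumed typical figures, included purely for illustration.

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Return the refraction angle (degrees) using Snell's law: n1*sin(t1) = n2*sin(t2)."""
    sin_t2 = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(sin_t2) > 1:
        return None  # total internal reflection: the ray does not pass into the second medium
    return math.degrees(math.asin(sin_t2))

# Approximate refractive indices (assumed typical values)
AIR, WATER, GLASS = 1.00, 1.33, 1.52

# A ray hitting water at 45 degrees bends toward the normal (smaller angle)...
print(round(refraction_angle(45, AIR, WATER), 1))   # ~32.1 degrees
# ...and bends away from the normal again when leaving glass for air.
print(round(refraction_angle(30, GLASS, AIR), 1))   # ~49.5 degrees
```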

This plays a big part in how the lens of a camera works, which we’ll look at further on.

Now, let’s see how a smart-phone camera uses light to create an image.

On its way to creating an image on a phone camera’s sensor, light has to travel through various parts of the camera.

The following are the parts of the camera that light passes through during photography.

The lens

A lens is usually a round piece of transparent material, such as glass or plastic, that focuses light in order to form an image.

Lenses have two polished surfaces that curve inwards or outwards depending on the type of lens. The radius of curvature is almost always constant.

A simple lens, as the name suggests, is just one piece of glass, used in things like eyeglasses, magnifying glasses, contact lenses, viewfinders, etc.

A compound lens, on the other hand, is made up of a number of single lens elements combined, each serving a unique purpose: to correct optical issues and guide the light to the sensor. This is the type of lens found in smartphone cameras.

How does a lens work?

For the camera to work properly, the lens's primary job is to bend light. As we discussed earlier, light travels differently depending on the medium it's travelling through.

So when light rays go from travelling through air to passing through glass, they stop travelling in a straight line and bend. This is because, just as with water, light travels more slowly through glass than it does through air.

[Figure: thin lenses (source: OpenStax CNX)]

The direction in which the light is bent depends on the shape of the lens. Lenses that have a bulge in the centre, curving outwards, are known as convex lenses.

These are also known as converging lenses, because when light passes through them it is bent inward towards a focal point.

An example of this is a magnifying glass. If you hold it at the right distance outdoors in the sun, you can see the light passing through the lens of the magnifying glass converge to a single point.

That's the focal point, and it can burn quite badly, because all of the sun's rays are being focused on one single spot.

Another way a lens can change the direction of light is by diverging or spreading it outwards instead of inward. Concave lenses are known to bend light this way. Unlike convex lenses, concave lenses curve inward in the middle.
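To see how a converging lens forms an image, here is a minimal Python sketch of the thin-lens equation, 1/f = 1/do + 1/di, which relates a lens's focal length f to the object distance do and the image distance di. The focal length and distance used below are hypothetical values, not the specs of any real phone lens.

```python
def image_distance(focal_length_mm, object_distance_mm):
    """Solve the thin-lens equation 1/f = 1/do + 1/di for the image distance di."""
    if object_distance_mm == focal_length_mm:
        return float("inf")  # object at the focal point: rays exit parallel, image forms at infinity
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A hypothetical 4 mm lens focusing on a subject 500 mm away:
di = image_distance(4.0, 500.0)
print(round(di, 3))  # ~4.032 mm behind the lens, which is one reason phone camera modules can be so thin
```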

How a compound lens system works

An image captured using a single lens element is usually not good enough for photography. For this reason, smartphone cameras are built with several lens elements.

As we've already discussed, light is what creates the image. The lens unit holds a series of convex and concave lenses of various shapes and refractive indices that work together to direct the light through to the sensor to create an image.

The lens is designed this way so that the camera can create an image that is as accurate as possible. You want your photos to look sharp all around, even at the edges, not just in one area of the frame.

The quality and positioning of these lens elements are of utmost importance; otherwise, the resulting images may suffer from issues such as chromatic aberration, blurring, and reduced contrast.

Lens focal length and angle-of-view

Nowadays, mobile phones commonly have more than one camera. In most cases, these cameras are built with lenses of different focal lengths, which means the pictures taken by each camera are different.

Focal length is expressed in millimetres (mm). It is basically an indication of how much of a scene a particular lens can cover.

The shorter the focal length, the wider the angle-of-view. The longer the focal length, the more magnified the image is, and therefore the narrower the angle-of-view.
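This relationship can be sketched with the standard angle-of-view formula, AOV = 2 * arctan(d / (2f)), where d is the sensor dimension and f the focal length. The sensor width below is an assumed value, used only to show how the numbers move.

```python
import math

def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Angle of view (degrees) across one sensor dimension: AOV = 2*atan(d / (2*f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assumed sensor width of 6 mm, compared at two focal lengths:
print(round(angle_of_view_deg(6.0, 4.0), 1))   # ~73.7 degrees  (short focal length, wide view)
print(round(angle_of_view_deg(6.0, 12.0), 1))  # ~28.1 degrees  (longer focal length, narrower view)
```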

To better understand the relationship between focal length and angle-of-view, and how they affect your photos, I strongly suggest you read this in-depth article on focal length.

Zoom

When you zoom in on a subject using a DSLR camera, the lens elements inside the lens barrel move around in order to change the focal length of the lens and enlarge the subject.

This is known as optical zoom because the lens elements themselves actually move.

Digital zoom

Generally, single-camera smartphones could not zoom in optically. That's because they had a lens with a fixed focal length.

In other words, the lenses didn’t have movable parts that could zoom in to a subject. Instead, mobile cameras used to rely on digital zoom, which was an inferior form of zoom.

With digital zoom, the more you zoom in, the more the camera crops the image and digitally enlarges it to fill the frame. This results in very poor quality pictures.
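As an illustration of that crop-and-enlarge idea, here is a minimal sketch using the Pillow imaging library (assuming it is installed). The zoom factor and file names are hypothetical placeholders.

```python
from PIL import Image  # Pillow, assumed to be installed

def digital_zoom(image, factor):
    """Simulate digital zoom: crop the centre 1/factor of the frame, then upscale back to full size."""
    w, h = image.size
    crop_w, crop_h = int(w / factor), int(h / factor)
    left = (w - crop_w) // 2
    top = (h - crop_h) // 2
    cropped = image.crop((left, top, left + crop_w, top + crop_h))
    # Enlarging the crop back to the original size is what costs detail.
    return cropped.resize((w, h), Image.LANCZOS)

# Hypothetical usage:
# zoomed = digital_zoom(Image.open("photo.jpg"), factor=2)
# zoomed.save("photo_2x_digital_zoom.jpg")
```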

Optical zoom

When dual-camera smart-phones were launched some years back, smart-phone companies started marketing their cameras as having 2x optical zoom.

The reason behind this is the two cameras had lenses with different focal lengths. One had a wide-angle lens and the other had a telephoto lens.

Switching between the two cameras would make it seem as if you had optically zoomed to twice the focal length of the wide-angle lens, without losing quality as you would with digital zoom. However, in most if not all such cases, it's not truly optical zoom.

How this works in most cases is that when you zoom in, the camera interpolates, or mixes, the pixels from the sensors of the two cameras and creates a hybrid image. So, in essence, there aren’t any moving parts in this type of zoom just like with digital zoom.

The only difference is that this hybrid type of zoom retains better picture quality thanks to the telephoto lens of the second camera.

Periscope zoom

Periscope zoom is a game-changer because it works completely differently from the traditional way a mobile camera zooms.

The periscope camera has quite a large zoom lens that doesn't stick out of the back of the phone, because it is positioned sideways inside the phone's body.

And because the zoom lens is relatively large for a phone camera, you can actually zoom in optically with it. In other words, as you zoom in and out, the lens elements inside the periscope zoom lens physically move.

It’s worth pointing out that no matter which type of zoom you use, you will need to keep your camera steady to avoid blurry shots. The more zoomed in you are the more apparent camera shake becomes and that leads to undesirable photos.

Focus

The positioning of the lens elements also affects focus. When you zoom in and out, you need to adjust your focus if you're shooting in Manual mode; otherwise, your phone can automatically adjust the focus for you. Smartphone cameras employ different methods to get an image in focus automatically.

The most popular method at the time of writing is Dual Pixel Auto-focus, but a newer technology called 2×2 OCL is starting to gain some traction.

No matter which method of auto-focus a phone camera uses, the principles of how the lens elements work to get the focus right are pretty much the same.

Once you have selected where you want to focus in the frame, the camera’s ISP (which we’ll look at later) does some calculations and forwards the correct focus data to the focus motor. This motor then aligns the lens elements to a point where the focus is set where you want it to be.
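The details differ between auto-focus methods, but the underlying loop of "move the lens elements, measure sharpness, settle on the best position" can be sketched as follows. This is a simplified contrast-detection style illustration, not Dual Pixel AF, and the capture function and lens positions are hypothetical stand-ins.

```python
import numpy as np  # assumed available

def sharpness(gray_image):
    """A simple contrast/sharpness score: variance of horizontal pixel differences."""
    return float(np.var(np.diff(gray_image.astype(float), axis=1)))

def contrast_af_sweep(capture_at, lens_positions):
    """Sweep the lens positions, score each frame, and return the sharpest position.

    `capture_at(pos)` is a stand-in for "move the focus motor to pos and grab a frame".
    """
    scores = {pos: sharpness(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get)

# Hypothetical usage with a user-supplied capture function:
# best = contrast_af_sweep(capture_at=my_camera_grab, lens_positions=range(0, 100, 5))
```

Phase-detection methods such as Dual Pixel AF avoid this full sweep by estimating the correct lens position directly, but the end result is the same: the focus motor moves the lens elements to the position that makes your chosen area sharp.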

So, as you can see, there's quite a lot that goes on in the lens, and for good reason. Without a lens, the light coming into the camera would have no direction. Yes, a camera can take photos without a lens, but you won't get a sharp image.

Next in the process of turning light into an image is the part that controls exactly how much light can get through to the sensor.

Aperture

Aperture refers to the opening that determines how much light can reach the sensor. On a traditional DSLR lens, the aperture is adjustable. The wider the aperture, the more light goes through.

Aperture is expressed in f-stops. The higher the f-stop, the narrower the aperture and the less light comes through. The lower the f-stop, the more light comes through.

For example, setting your aperture to f/2.2 would allow more light to come through than if you set it to f/8.

This helps when you need to adjust your exposure to suit various lighting situations but it does also affect the depth-of-field.
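As a rough rule of thumb, the amount of light gathered scales with the square of the ratio of the f-numbers. The short sketch below checks the f/2.2 vs f/8 example from above; it ignores real-world factors such as lens transmission losses.

```python
def light_ratio(f_stop_a, f_stop_b):
    """Approximate ratio of light gathered at f_stop_a vs f_stop_b (transmission losses ignored)."""
    return (f_stop_b / f_stop_a) ** 2

# The f/2.2 vs f/8 example: roughly 13x more light gets through at f/2.2.
print(round(light_ratio(2.2, 8.0), 1))   # ~13.2
```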

How does aperture work on mobile phones?

On smartphones, however, things are different. Mobile cameras have a fixed aperture, so it cannot be adjusted for different lighting conditions. In the case of mobile cameras, the larger the aperture, the better.

Because smartphone cameras are so small, they need every little bit of light they can get. The apertures of mobile phone cameras have been getting larger over the years.

The widest aperture on a mobile camera at the time of writing is f/1.4, which is very wide for a phone. That's one of the things you should look out for when comparing smartphone cameras.

Starting with the Galaxy S9, Samsung introduced a variable aperture to its flagship cameras. This allowed the photographer to switch between f/1.5 and f/2.4.

Nowadays, many smartphone companies use different apertures in their cameras.

Once as much light as needed has passed through the aperture, it's well on its way to the sensor to be processed into a photo. But first, the light has to go through one more important stage.

Image stabilization

Image stabilization (IS) is a family of techniques that reduce blurring associated with the motion of a camera or other imaging device during exposure.

Generally, it compensates for pan and tilt (angular movement, equivalent to yaw and pitch) of the imaging device, though electronic image stabilization can also compensate for rotation.

It is mainly used in high-end image-stabilized binoculars, still and video cameras, astronomical telescopes, and also smartphones. With still cameras, camera shake is a particular problem at slow shutter speeds or with long focal length (telephoto or zoom) lenses.

With video cameras, camera shake causes visible frame-to-frame jumping in the recorded video. In astronomy, the problem of lens shake is compounded by variations in the atmosphere, which change the apparent positions of objects over time.

Shutter

The thing that makes optical image stabilization necessary in smart-phone cameras is the shutter and the speed at which it operates.

In bigger and dedicated cameras, before the light can reach the sensor, it has to jump through one more hoop– the shutter. This is a mechanical device that is positioned in front of the sensor and blocks light from reaching the sensor.

When the shutter button is pressed to take a photo, the mechanical shutter opens up and exposes the sensor to light for a certain period and then closes again. The amount of time the shutter remains open is known as shutter speed.

The faster the shutter opens and closes, the less blurry your shots will be. The downside is that your pictures will look considerably dark without adequate lighting.

A slow shutter speed allows the sensor to be exposed to light for an extended period. This works well for brightening up the image in low light conditions. However, the trade-off is that the slower the shutter speed, the likelier you are to have blurry images.

And this is where image stabilization helps. It allows you to shoot at a reasonably slow shutter speed without messing up your photo. However, the slower you go with the shutter speed, the more difficult it becomes for mobile camera OIS to keep up. So, again, you need to support the camera phone to avoid blur.
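To put numbers on this trade-off, the sketch below computes the standard exposure value, EV = log2(N^2 / t), for a fixed aperture at two shutter speeds. The f-number and shutter speeds are illustrative assumptions, not settings from any particular phone.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value at ISO 100: EV = log2(N^2 / t). Lower EV settings collect more light."""
    return math.log2(f_number ** 2 / shutter_seconds)

# The same fixed f/1.8 aperture at two shutter speeds (illustrative values):
print(round(exposure_value(1.8, 1 / 1000), 1))  # ~11.7  fast shutter: less light, sharper handheld shots
print(round(exposure_value(1.8, 1 / 15), 1))    # ~5.6   slow shutter: more light, but more risk of shake blur
```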

Mobile cameras don’t have mechanical shutters. They operate electronically by activating and deactivating the sensor for a certain period.

So, in smartphone cameras, as soon as light makes it through the aperture and has been stabilized, it has pretty much arrived at destination sensor-ville. However, it won’t be registered until the sensor is activated.

Electronic shutters

Just like with the mechanical shutter, the amount of time the sensor remains activated is known as shutter speed. Despite their physical difference, these two types of shutter affect the image in the same way.

So, now that our light has finally reached the sensor, let’s look at how it’s converted into an image.

The sensor

The sensor is basically the backbone of digital photography because that’s where the imaging happens.

It is made up of millions of pixels (or photosites as others call them) that make up the total number of megapixels of the camera.

If you'd like to know which smartphones have the highest-megapixel cameras, be sure to check out this list.

Photosite/Pixels

The photosite is found on the digital image sensor in a camera. The sensor array is made up of millions of individual photosites.

Each sensor has a specific number of tiny individual sensors. Each is a photosite. For example, a Canon 5D MkII camera has a 21.1 MegaPixel full-frame digital sensor. In this case that is 5616 photosites wide by 3744 photosites high.
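A quick calculation shows how that photosite grid maps to the advertised megapixel count:

```python
# Photosite grid of the Canon 5D Mk II example above:
width_photosites, height_photosites = 5616, 3744
total = width_photosites * height_photosites
print(total)                        # 21026304
print(round(total / 1_000_000, 1))  # ~21.0 million photosites, roughly the rated 21.1 MP
```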

Clearing up some confusion

A digital image is composed of pixels. Each pixel in an image gets its data for light intensity and colour from a corresponding ‘pixel’ on the digital image sensor.

Originally the term ‘pixel’ referred to the electric component that was sensitive to light on the sensor. Once light impacted the tiny component it excited a small electric potential that could then be detected. Thus, data on incoming light could be collected. An array of tiny sensors of this type (millions of them) can be used to form a digital image sensor for use in a camera.

Unfortunately, the use of the term pixel can be confusing. It applies separately to three different things which are closely associated:

  1. the individual location on a digital image sensor of one tiny light sensitive component;
  2. the corresponding display component on a screen (a tiny LED ) which emits light showing one tiny point of light in an image to the user;
  3. the smallest individual point of light in a displayed digital image.

However, recent use of the term pixel in common parlance places most of the emphasis on the pixel as something on the screen, the display side of the digital image, rather than the sensor location.

So, increasingly, other terms are used to describe the sensor location of a component that senses incoming light. These have been variously called Photosite; Photosites; Photo-site; occasionally pixelsite(s). Each tiny photosite senses a tiny part of the light coming through the photographic lens and records data on that light.

We are not aware of any official definition that clarifies these terms. However, at the time of writing there is increasing use of the term photosite on the Internet. Some manufacturers use the term, and other writers and bloggers are using it too. We include the term in this glossary in order to help readers understand the diverse terms that apply to the sensor components of a digital image sensor. We also acknowledge that the use of language is evolving and that in the future this use of the term may not be sustained in common use. This article will be updated as necessary.

Colour filter array

A colour filter is required to capture colour images. The Bayer filter array is the most common one used on sensors.

This is a colour filter that is placed over the photosites to determine the colour of the image. It acts as a gate that only allows light of a certain colour into each pixel.

The Bayer filter is made up of alternating rows of blue/green and red/green filters. The blue filter passes blue light, the green filter passes green light, and the red filter passes red light. Light that doesn't match the filter is blocked.

Because so much light is blocked by the filter (about two-thirds at each photosite), the camera has to calculate how much of the other colours is present at each pixel.

The measurement of electrical signals from neighbouring photosites is used to determine this and ultimately the colour of the entire image.
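To make the idea concrete, here is a deliberately crude Python/NumPy sketch of demosaicing for an assumed RGGB Bayer layout: each output colour at a pixel is just the average of the same-colour photosites in the surrounding 3×3 neighbourhood. Real ISPs use far more sophisticated, edge-aware algorithms; this only shows the principle of filling in missing colours from neighbouring photosites.

```python
import numpy as np  # assumed available

def bayer_mask(h, w):
    """Colour of each photosite for an assumed RGGB Bayer layout: 0=R, 1=G, 2=B."""
    mask = np.ones((h, w), dtype=int)   # green everywhere...
    mask[0::2, 0::2] = 0                # ...red on even rows / even columns
    mask[1::2, 1::2] = 2                # ...blue on odd rows / odd columns
    return mask

def naive_demosaic(raw):
    """Very crude demosaic: each output channel is the average of the same-colour
    photosites in the surrounding 3x3 neighbourhood."""
    h, w = raw.shape
    padded_raw = np.pad(raw.astype(float), 1, mode="edge")
    padded_mask = np.pad(bayer_mask(h, w), 1, mode="edge")
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            vals = padded_raw[y:y + 3, x:x + 3]
            kinds = padded_mask[y:y + 3, x:x + 3]
            for c in range(3):
                rgb[y, x, c] = vals[kinds == c].mean()
    return rgb

# Tiny synthetic 4x4 "sensor readout", just to show the shapes involved:
raw = np.arange(16).reshape(4, 4)
print(naive_demosaic(raw).shape)  # (4, 4, 3): one R, G, B value per photosite position
```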

The article on smartphone sensors also covers the inner workings of the Bayer filter. Check it out if you’re interested in the details of how a greyscale image is converted to colour.

Image signal processor

The sensor is not where the creation of an image ends. The image created in the steps above is simply latent.

This means that although the image has been captured, it's not yet fully developed. There's still some processing work to be done before the final image is created.

This is what the image signal processor (ISP) is responsible for. The ISP is the brains of a mobile camera: a special processor that takes the raw image data from the camera's sensor and transforms it into a usable image.

The image signal processor performs a number of tasks to build the final image. The first step is known as demosaicing.

Once this is done, the image signal processor continues to apply more corrections to the raw image.

Other fixes include things such as noise reduction, lens shading correction, and defective pixel correction.

The ISP also makes adjustments to parameters such as white balance, auto-focus, and exposure. And because the work of the image signal processor relies heavily on algorithms, it's also responsible for things such as HDR, night mode, EIS, image compression, etc.
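To visualise the ISP's role, here is a hedged sketch of the kind of processing pipeline described above. The stage names and their order are an illustrative assumption, not the actual pipeline of any particular phone or chipset, and the stage functions are placeholders.

```python
def identity(image):
    """Placeholder standing in for a real processing algorithm."""
    return image

def process_raw_frame(raw_frame, stages):
    """Run the raw sensor data through each processing stage in order."""
    image = raw_frame
    for name, stage in stages:
        image = stage(image)
        print(f"applied: {name}")
    return image

# An assumed, simplified ordering of ISP stages:
pipeline = [
    ("demosaic", identity),
    ("defective pixel correction", identity),
    ("lens shading correction", identity),
    ("white balance", identity),
    ("noise reduction", identity),
    ("tone mapping / HDR merge", identity),
    ("sharpening", identity),
    ("JPEG compression", identity),
]

# Hypothetical usage:
# final = process_raw_frame(raw_sensor_data, pipeline)
```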

Once the image data captured by the sensor has gone through the processing pipeline, you have a final image which you can edit, save on your phone, share online, or even print out to frame and display.

Camera software

Of course, none of the above would be of any use if you have no way to access the camera. To be able to take photos with your camera phone, you need an app that will allow you to communicate your commands to the phone’s camera module.


From the app, you can choose what resolution you want your photos to be, where you want them to be saved, and whether you want to save the photos as RAW or jpeg files (provided your camera can do this).

There are also other actions you can perform from the camera app, such as switching between cameras, applying filters, activating HDR, changing the app's settings, and more.

All camera phones come with a native camera app installed which is usually set to take photos in Auto mode by default.

This allows you to just point your camera at what you want to capture and click away. The camera automatically calculates what it thinks to be the best settings for the shot so you don’t have to worry about it.

Some native camera apps on popular smartphones allow you to switch to Manual mode. This mode gives you full control of the camera, letting you adjust settings like shutter speed, ISO, white balance, and others yourself.

If you don’t have a camera app that has a Manual mode, then do yourself a favour and download one. There are plenty available for you to choose from.

Verdict

Hopefully, you now understand what happens behind the scenes when you capture a photo with your smartphone. Of course, knowing how a smartphone camera works is one thing; knowing how to use it properly to capture great photos is another.

Questions and answers about smartphone cameras

1) What kind of smartphone camera do you need?

  It depends on what kind of photography you intend to do with it. For sports and low light, a small compact like the Sony RX100 is still a safer bet, but for day-to-day shooting, especially in good daylight, most mid-range to high-end smartphones will do. The iPhone 7 and Google's Pixel are often cited as preferred smartphones when it comes to nimble, mobile photography.


2) Why do smartphones need multiple cameras? Why not just one better-quality camera?

Multiple cameras in a mobile phone have different functions, and it is up to the manufacturers how they wish to utilise the additional cameras. Today, the average customer is well aware of the capabilities of modern digital cameras. Since owning a DSLR can be costly, many customers want the image quality of a DSLR in an affordable device such as a smartphone.

Customers also have different requirements: some take photographs only for online consumption, while traditional photographers are looking to replace their bulky DSLRs with something more portable.

Thus, the smartphone that can take on a DSLR (though not literally), at least from the perspective of an average customer, was born. The iPhone 7 Plus revolutionised the world of smartphone photography and took the world by storm with its portrait mode feature. It had dual cameras: one for normal photography, the other for optical zoom and edge detection (which helps in creating a bokeh effect).

Since then, almost all companies have incorporated this bokeh effect by adding an additional lens at the rear. The Google Pixel series managed to do it with a single lens by using complex algorithms which are not available to everyone, so it is easier to meet the demand for a bokeh effect by simply adding another lens. Many companies have gone a step further and use the additional lens for different purposes: some use it for wide-angle shots, some for low light, some for optical zoom, some for monochrome, and some simply for edge detection. All of this is to woo customers.

So, a smartphone does not need multiple cameras to produce excellent images, but having multiple cameras makes it easier for the camera to produce outstanding images in all situations.

3) Are smart-phone cameras getting better than DSLRs?

No, they are simply more idiot-proof. A person with no idea about photography will get better results with a smartphone than with a DSLR, because with a smartphone all they have to do is press the button. The software behind the camera is designed to do all the thinking for them (for example, pumping up the ISO for night shots and removing noise afterwards, so as not to allow excessive motion blur).

However, auto mode usually works quite poorly on DSLRs, because they aren't really meant to be used in auto mode. Also, the quality of JPGs produced by DSLRs often leaves a lot to be desired. Ideally, one would be shooting RAW when using a DSLR anyway.

Smartphone cameras don't feature magic sensors and magic lenses that are somehow better than the DSLR ones that cost big bucks. A tiny sensor and a tiny lens will always be inferior to their larger counterparts (when comparing recent products at the same stage of sensor technology development). And no, I'm not talking about megapixels.

Sensors used in smartphones

The Nokia 808 doesn't have a larger sensor than a DSLR. Its sensor is about 11×8 mm (quite sizeable for a smartphone, actually; the sensor of the iPhone 6 is a mere 4.89×3.67 mm). A typical crop-sensor DSLR's sensor is about 24×16 mm, and full frame is 36×24 mm. The Nokia does have a higher resolution than most DSLRs, but that's a different matter.
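A quick bit of arithmetic with the dimensions quoted above shows how large the gap in light-gathering area really is:

```python
# Approximate sensor areas from the dimensions quoted above (width x height, in mm):
sensors = {
    "Nokia 808 (approx.)": (11, 8),
    "iPhone 6": (4.89, 3.67),
    "Typical crop DSLR": (24, 16),
    "Full frame": (36, 24),
}

for name, (w, h) in sensors.items():
    print(f"{name:20s} {w * h:7.1f} mm^2")
# Full frame works out to roughly 48x the area of the iPhone 6 sensor quoted here.
```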

Don't get fooled by smartphone fanboys/girls saying DSLRs are becoming obsolete because "look what a cute photo of my cat my iPhone took!!!". If they were as obsessed with photography as they are with having access to Facebook wherever they go, they wouldn't mind the extra bulk of carrying a camera around. If you want to go beyond pressing a button and applying a filter, then you need a camera that gives you control over the picture and provides quality output that you can then process to your liking. It doesn't need to be a DSLR, and it doesn't need to be much more expensive than a smartphone either.

4) Which company launched the first camera phone?

In May 1999, Japan was the launchpad for the Kyocera VP-210, the first phone with a built-in camera that was sold commercially to the general public. However, the idea of merging a camera with a mobile phone didn't originate with Kyocera; in fact, there seems to be some confusion online as to which device was actually the first camera phone.
