The camera has become the main selling point of mobile phones, especially when we're talking about mid-range and high-end models. Each generation introduces new sensors, more megapixels (as in the Xiaomi Mi Note 10), crazy zooms, and technologies that until recently we only saw in professional settings. Among all of them, one term is starting to appear frequently in technical specifications: ToF.
You may have seen "ToF camera" or "ToF depth sensor" when looking at phones from Samsung, Huawei (such as the P40), HONOR, LG, or Oppo, or you may even own one without having noticed. It's not a passing fad or a pure marketing ploy: behind the term lies a mature depth-sensing technology developed in industry and research, used in devices like the Kinect, and now finding its place in mobile phones to improve photos, video, security, gestures, and augmented reality.
What is ToF in a mobile phone camera?
ToF stands for "Time of Flight". In a smartphone, when we talk about a Time-of-Flight (ToF) camera or sensor, we're referring to a type of depth camera that calculates the distance to each point in the scene using infrared light. These systems are also called time-of-flight cameras, 3D depth cameras, or simply ToF cameras.
The concept is very similar to sonar or radar, but using light instead of sound. The phone emits a beam of infrared light towards whatever is in front of it. This light hits objects and people, bounces off, and returns to the phone. By measuring the time this round trip takes, and knowing the speed at which light travels, the system can calculate the distance to each point the sensor sees.
That measurement is not done for a single point but for thousands or hundreds of thousands of points distributed throughout the scene. In this way, the phone obtains a 3D depth map in which each pixel stores not only color or intensity but also how far away it is. With this map, the software understands the scene in volume: it knows what is in front, what is behind, where there are gaps, edges, corners, and so on.
It is important to be clear that the ToF camera does not replace the phone's "normal" camera. It isn't designed for taking photos and videos directly, but rather to work in the background as a supporting sensor. It provides highly accurate depth data that the processor combines with the RGB image from the other cameras to improve portrait mode, focus, 3D facial recognition, augmented reality, and gesture control.
Some high-end phones integrate more than one ToF sensor: one on the back, designed primarily for photography, video, AR, and measurements; and another on the front, focused on advanced facial recognition and air gestures. Clear examples can be found in devices like the Samsung Galaxy S10 5G or in some models from LG, HONOR, Huawei, or Oppo.
Components of a ToF camera in a smartphone
A ToF camera is not just another black "dot" next to the phone's lenses; it is a small but complete system with several elements working together. Although at first glance it looks like a tiny hole, there is quite a lot more to it than meets the eye.
The heart of the module is the ToF sensor, a specialized pixel array. At first glance it resembles a classic image sensor (such as a CMOS or CCD), but its pixels are designed to record how the infrared light emitted by the phone itself arrives: they measure its intensity, but also its phase shift and time lag relative to the emitted signal. Each tiny cell acts as an ultra-fast stopwatch for the incoming light.
In front of the sensor sits a simple optical module: a dedicated lens responsible for focusing the light reflected from the scene onto the sensor. It is not usually as complex as the optics of the main camera, but it performs a similar function: defining the field of view and ensuring that the light reaches each pixel of the ToF array cleanly.
For all of this to work, the module needs its own infrared light source, usually an LED or laser emitting NIR (near-infrared) light at typical wavelengths between 850 and 940 nm. This light is usually modulated at frequencies on the order of tens of MHz (around 20 MHz in many designs) so that the sensor can easily distinguish its own "house" signal from ambient light and apply phase-shift techniques that greatly refine the distance calculation.
The last key piece of the puzzle is the depth processor. It can be a dedicated chip integrated into the camera module or a block within the ISP (Image Signal Processor) of the phone's SoC. Its job is to transform the raw data from the sensor (pixel values, phases, times) into a clean depth map, filter noise, manage an optional 2D IR image, and prepare the information so the operating system, apps, and other cameras can use it in real time.
How ToF works step by step
Although there is a lot of engineering underneath, the physical principle of ToF can be explained quite intuitively. Basically, the system measures how long it takes a pulse of light to leave the emitter, bounce off an object, and return. That is the famous "time of flight". From there, it's all multiplying and dividing.
The entire process repeats many times per second and can be divided into a series of phases that run continuously while the ToF camera is active:
- Emission: the infrared emitter sends pulses of modulated IR light towards the scene in front of the phone.
- Interaction with the scene: the pulses travel through the air, hit people, furniture, walls, plants, and so on, and part of that light bounces back towards the phone.
- Detection: the ToF sensor captures the returning infrared light; each pixel receives the signal reflected from a specific point in the environment.
- Time or phase measurement: the system calculates the difference between the emitted signal and the received one, either by measuring time directly or by measuring the phase shift of the modulated wave.
- Distance calculation: using the formula distance = (speed of light × time of flight) / 2, the processor obtains how far away each point is; it divides by two because the light makes a round trip.
- Generation of the depth map: all those distances are gathered into a matrix, producing a depth map in which each pixel has an associated distance value.
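The steps above can be sketched in a few lines of code. This is a minimal illustration, not real driver code: the round-trip times would come from the sensor hardware, and here we simply fabricate a tiny array of them.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def depth_map_from_round_trip(times_s):
    """Turn per-pixel round-trip times (seconds) into distances (meters).

    The light travels out to the object and back, so the one-way
    distance is (speed of light x time of flight) / 2.
    """
    return C * np.asarray(times_s) / 2.0

# Toy 2x2 "sensor": round-trip times of a few nanoseconds each
times = np.array([[6.67e-9, 13.34e-9],
                  [20.0e-9, 3.33e-9]])

depth = depth_map_from_round_trip(times)
# depth now holds roughly 1.0 m, 2.0 m, 3.0 m and 0.5 m
```

A real module does exactly this, just for hundreds of thousands of pixels per frame, with noise filtering on top.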
The most important thing is that the entire field of view is measured in a single "shot". There's no need to sweep the scene or focus one plane after another, as with other systems. Each frame from the ToF camera already contains the depth information for the whole scene it sees, which makes real-time work considerably easier.
If you've ever used a Kinect with an Xbox 360 or Xbox One, you have a good idea of what this approach achieves: recognizing people, objects, and gestures in 3D with considerable accuracy. Essentially, ToF phones bring that same concept to a much smaller, more portable format, adapted to the space and power limitations of a smartphone.
In some designs, moreover, the infrared light is modulated at specific frequencies to avoid confusion with background light and improve the robustness of the measurements. Knowing the modulation frequency, the system can compute distances with phase-shift techniques even when ambient lighting conditions are challenging.
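As a rough sketch of the phase-shift idea (assuming a simple continuous-wave modulation scheme, which is one common design, not necessarily what any specific phone uses): at a modulation frequency f, a measured phase shift Δφ corresponds to a distance of c·Δφ / (4π·f), and readings repeat every c / (2f), the so-called unambiguous range.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: distance from the measured phase shift.

    d = c * phase / (4 * pi * f). The factor 4*pi (instead of 2*pi)
    accounts for the round trip of the light.
    """
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Beyond c / (2f) the phase wraps around and distances repeat."""
    return C / (2 * mod_freq_hz)

f = 20e6  # 20 MHz, a typical modulation frequency mentioned above
d = phase_to_distance(math.pi, f)  # a half-turn phase shift: about 3.75 m
r = unambiguous_range(f)           # about 7.5 m at 20 MHz
```

This also shows why designers pick the frequency carefully: a higher frequency gives finer resolution but a shorter unambiguous range.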
Advantages of ToF technology

Measuring depth is not exclusive to ToF; it can also be done with stereoscopic vision, structured light, traditional lasers, or even pure algorithms on a 2D camera. However, the time-of-flight approach offers a very attractive combination of advantages for mobile devices.
One of its greatest strengths is low power consumption. The system only needs an infrared light source and a specialized sensor to obtain distance and amplitude information for each pixel directly, without heavy processing that strains the processor for long periods. Compared with techniques like structured light (which projects complex patterns) or pure stereoscopic vision (which requires significant computing power), ToF is generally easier on the battery.
Another very clear advantage is its high precision in depth measurement. A well-designed, well-calibrated ToF module can measure with very small errors, in the range of millimeters to centimeters depending on the distance to the object and the quality of the system. That makes all the difference when you want a "fine" portrait mode, good selective focus, or 3D reconstruction of objects with a certain level of detail.
Also notable is its ability to work in real time with very low latency. The ToF sensor can capture complete depth maps at high speed, frame by frame, which is vital for applications that require immediate response: tracking people or objects, air gestures, augmented reality, robotics near the user, and so on.
Furthermore, this type of camera offers a wide dynamic range in depth and tolerates a wide range of lighting conditions. Because it carries its own infrared illumination, it depends less on visible light in the scene. It can work in low light and even in complete darkness, making it ideal for nighttime facial recognition, gesture control in dimly lit environments, or indoor photography with depth assistance.
Finally, compared with other 3D systems such as long-range LiDAR or complex structured-light rigs, ToF cameras are relatively inexpensive and compact. This makes them much easier to integrate into mass-market consumer products such as phones, tablets, home cameras, cleaning robots, or augmented reality devices.
Disadvantages and limitations of ToF sensors
Like any technology, ToF also has its downsides and presents a number of technical and design limitations that must be managed well to get the most out of it.
The first is usually obvious: the resolution of the ToF sensor is low compared with that of traditional phone cameras. Depth maps have enough detail to separate subject from background, control gestures, or measure medium-sized objects, but they fall short if what we want is extremely fine 3D modeling or working with very small details.
Another common problem is artifacts caused by scattered light. Very shiny surfaces, objects very close together, or unusual geometries can reflect more light than expected towards the sensor, producing spots, halos, or measurement errors that later show up as "teeth" or strange cuts in the depth map. The software then has to step in and correct what it can.
Related to the above are multiple reflections in corners and concave areas. In these scenarios, light can bounce several times before returning to the sensor, introducing an extra delay that the system interprets as a greater distance than the real one. This kind of error adds uncertainty and requires careful filtering of the data to ensure a reliable final result.
Intense ambient light, especially direct sunlight, is another classic enemy of ToF sensors. In bright daylight, the amount of infrared light coming from the environment can quickly saturate the pixels, making it very difficult to distinguish the phone's own pulses from all the background noise. Under these conditions, the sensor's useful range can shrink and its accuracy can drop significantly.
Beyond the purely technical aspects, there is a rather prosaic physical limitation: interior space inside the phone. A ToF module requires its own sensor, optics, and infrared emitter, taking up roughly as much room as a conventional camera. In a chassis where every millimeter counts for the battery, speakers, modem, antennas, vibration motor, and other components, reserving space for another "eye" is no trivial matter for manufacturers.
ToF versus LiDAR: how they are similar and how they differ
ToF and LiDAR are often lumped together because, ultimately, both are based on measuring distances with light and on the concept of time of flight. In practice, however, they are usually implemented differently and with somewhat different goals, although the two concepts are increasingly mixed.
Classic LiDAR systems, especially those used in autonomous vehicles and advanced mapping, typically rely on more powerful lasers and specialized optics to scan the environment at long range, tens or even hundreds of meters. Their accuracy and range are very high, but so are the size and cost of the equipment.
Consumer ToF cameras, on the other hand, aim to be compact, easy to integrate, and reasonably inexpensive. They are designed for the short and medium ranges typical of the distance between a user and their phone, TV, game console, or home robot. They don't need to see 100 meters away, but rather understand what is happening a few centimeters to a few meters around the device.
Therefore, in consumer electronics, ToF is often the preferred option when you want to offer integrated 3D depth without driving up costs. LiDAR, while sharing the basic idea, is usually reserved for applications where extreme precision and long range justify investing considerably more in hardware.
That doesn't change the fact that, in marketing, some brands use the terms almost interchangeably, or talk about "LiDAR" when what they're actually using is an advanced form of ToF. The important thing is to understand that in phones the usual setup is a compact ToF sensor with modest LED or laser emitters, designed for short-range tasks such as photography, nearby AR, and biometrics.
Uses of ToF in mobile phone cameras
Beyond the theory, what's interesting is what a ToF sensor contributes to the real-world use of a smartphone. Its applications fall into four main blocks: photography, video, security and biometrics, and gesture control and augmented reality, although it also lends itself to measurements and 3D scanning.
Portrait mode and blur effects
The most visible improvement for the average user is probably in photography, especially in the famous portrait mode. In any scene where we want to cleanly separate the subject from the background to apply a bokeh effect, the 3D depth map generated by the ToF sensor makes all the difference.
With a ToF sensor, the phone knows quite accurately which parts of the image are closer and which are farther away. This allows it to outline the main person or object better, avoiding the usual mistakes when cutting out hair, glasses, fingers, or other fine details. Background blur can then be applied gradually and consistently with distance, producing a more natural effect.
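The idea of blurring "consistently with distance" can be sketched very simply. This is an illustrative toy, not how any phone's image pipeline actually works: we just map each pixel's distance from the subject plane to a blur strength.

```python
import numpy as np

def blur_strength(depth_m, subject_depth_m, falloff_m=1.0, max_strength=10.0):
    """Toy depth-dependent bokeh: pixels at the subject's depth stay
    sharp (strength 0); strength grows with distance from that plane
    and saturates at max_strength. falloff_m and max_strength are
    made-up tuning parameters for this sketch."""
    delta = np.abs(np.asarray(depth_m) - subject_depth_m)
    return np.clip(delta / falloff_m, 0.0, 1.0) * max_strength

# Depth map: subject at ~1.5 m, background wall at 4 m
depth = np.array([[1.5, 1.5, 4.0],
                  [1.5, 1.6, 4.0]])
strength = blur_strength(depth, subject_depth_m=1.5)
# subject pixels get 0 (sharp), the wall gets the maximum blur
```

A real pipeline would then feed each strength value into a lens-blur filter, but the key input, a per-pixel distance, is exactly what the ToF map supplies.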
On phones like the Huawei P30 Pro, the HONOR View 20, or some Samsung Galaxy models, manufacturers have openly touted the role of the ToF camera in improving portrait mode, supporting the high-resolution main sensors and wide-angle lenses. The result is photos with a much more "professional" look for social media and portraits.
The benefit is not limited to people: it also works for pets, objects, food, and any situation where a softly blurred background is desired. ToF gives the camera the depth information the software algorithms need to decide what to focus on, what to defocus, and how strongly.
Focus and tracking in video
If we move on to video, ToF becomes an ally of continuous autofocus: the real-time depth map allows precise tracking of a moving subject, even if it enters and leaves the frame or moves towards and away from the phone at speed.
Many smartphones that rely solely on contrast-detection or phase-detection autofocus struggle in complex or low-light scenes. With a supporting ToF sensor, the processor knows at every instant which object is at what distance and can adjust the lens with far less hesitation, reducing the typical focus "wobbles" that ruin a shot.
This is especially appreciated in action video, with children, pets, or events with lots of movement, where keeping the subject sharp and well separated from the background is key. ToF provides an extra layer of information that focus algorithms can use to prioritize subjects and anticipate scene changes.
In some cases, the depth information is also combined with AI-based face or body detection. That way, the system not only knows where something is, but also what it is, which helps decide what should stay in focus and what can be left out of focus.
3D facial recognition and security
Another major use of ToF on the front of the phone is advanced biometric authentication through 3D facial recognition. Instead of relying solely on a flat image of the face, the phone can reconstruct its volume by analyzing the depth of different points on the face.
A sensor of this type can read hundreds of thousands of points on the surface of the face in a single shot. As some brands have pointed out, this allows the phone to generate a highly detailed pattern of facial geometry, difficult to fool with photos, videos, or basic masks, which can then be compared against the template registered on the device.
The added advantage is that, because it uses infrared, this recognition works well in the dark or in low light. There's no need to dazzle the user with the screen or depend on ambient lighting. The ToF sensor sees in IR and, provided there's no significant interference, can identify the owner in milliseconds.
Some manufacturers have gone even further with biometrics: the LG G8 ThinQ, for example, uses the front-facing ToF sensor to analyze the vein pattern of the hand. The user shows their palm a few centimeters from the sensor and the system recognizes both the shape of the hand and the distribution of blood vessels, offering a rather curious alternative unlocking method.
Gesture control without touching the phone
The possibility of controlling the smartphone by moving your hand in the air is becoming a reality thanks to ToF sensors. By providing a precise depth reading in the area near the screen, the phone can detect the position of the hand, its distance, and certain basic movements.
In the aforementioned LG G8 ThinQ, for example, the Air Motion system lets you answer calls, play or pause music, change songs, or raise and lower the volume simply by gesturing above the phone. The front-facing ToF sensor interprets these changes in hand position and depth.
For it to work, the user usually has to place an open hand a few centimeters from the sensors, move it slightly away, and then sweep it in one direction or another. The available gestures are still relatively few and somewhat "specialized", but they give a very clear indication of the technology's potential.
This form of interaction becomes especially useful in situations where we don't want to or can't touch the screen: wet hands, dirty hands, gloves, kitchen, workshop… Or simply when the mobile phone is resting on the table and we prefer to handle it from a distance without having to pick it up every two seconds.
Augmented reality, measurements and 3D scanning
Augmented reality (AR) is another field where ToF makes a very noticeable difference. With a reliable depth map of the environment, the phone "understands" the geometry of the room better and knows where the floors, walls, furniture, and objects are.
Thanks to that, AR apps can place virtual objects with much greater stability, preventing them from appearing to float aimlessly or pass through solid surfaces. Phones like the Oppo RX17 Pro or the HONOR View 20 have showcased games and experiences where the user interacts with 3D elements that respect the scene's true depth.
ToF also enables measuring distances, surfaces, and volumes directly with the camera. Simply point the app at an object, piece of furniture, or person and let it calculate the size from the depth data provided by the sensor. Perfect for quick measurements without a tape measure.
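As a rough sketch of how such a measurement can work (a simple pinhole-camera approximation; real apps calibrate the optics much more carefully): once the depth to an object is known, the number of pixels it spans can be converted into a real-world size using the camera's field of view.

```python
import math

def span_to_size(depth_m, span_px, image_width_px, hfov_deg):
    """Pinhole-camera approximation: real-world width of something
    that spans span_px pixels at a known depth.

    The scene width visible at that depth is 2 * d * tan(hfov / 2);
    the object covers a proportional slice of it.
    """
    scene_width_m = 2.0 * depth_m * math.tan(math.radians(hfov_deg) / 2.0)
    return scene_width_m * span_px / image_width_px

# A sofa 2 m away spanning 240 of 640 pixels, seen through a 60-degree lens
width = span_to_size(depth_m=2.0, span_px=240, image_width_px=640, hfov_deg=60.0)
# about 0.87 m wide
```

The depth value is the hard part, and that is precisely what the ToF sensor hands the app for free.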
As for 3D scanning, the phone can capture an object from different angles and combine the depth maps to reconstruct a fairly accurate three-dimensional model. That model can then be used in 3D printing, design, architecture, video games, or virtual reality.
Other areas of use for ToF cameras
Although these days they are most often mentioned in connection with phones, ToF cameras have long been used in many other fields. The leap to the smartphone is simply the natural evolution of something already proven in industry and research.
In industrial robotics, for example, real-time 3D depth maps help robots see and understand their environment. They let them locate parts, calculate safe ranges of movement, avoid collisions with people or other machines, and grasp objects with considerable precision in three dimensions.
In the world of 3D modeling and virtual reality, ToF cameras are used to scan spaces, rooms, and objects and transform them into interactive digital environments. Architects, manufacturers, or designers can quickly capture a space and work on a virtual twin with reliable measurements.
There are also applications in non-mobile consumer devices, such as smart home cameras, robot vacuums that map the house, home-automation systems that detect presence and movement with depth, or advanced authentication systems in laptops and smart doors.
This whole ecosystem means that, although on the smartphone we only see one side of the coin, ToF technology still has a long way to go. It will probably keep shrinking physically, gain resolution, and combine with other techniques such as LiDAR or computer vision to achieve an ever richer understanding of the environment.
All things considered, the arrival of ToF sensors in phones has been an interesting leap forward: photos with more natural blurring, better-focused video, more secure 3D face unlocking, air gestures, and far more convincing augmented reality experiences. All of it rests on precisely measuring how long light takes to travel out and back. Despite the limitations in resolution, the problems with bright ambient light, and the space the module occupies inside the phone, everything suggests their presence in mobile phones will keep growing and improving in the coming years.
