The notch full of sensors on iPhone X enables Face ID to capture accurate face data by projecting and analyzing more than 30,000 invisible dots to create a depth map of your face; it also captures an infrared image. A portion of the A11 Bionic chip's neural engine, protected within the Secure Enclave, transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data.
Meanwhile, from the same notch, third-party developers can access a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user's face.
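As a sketch of what that developer-facing access looks like, the snippet below uses ARKit's face-tracking API to read the mesh and the current expression. It assumes a device with a TrueDepth camera; the class name `FaceTracker` is illustrative, while `ARFaceTrackingConfiguration`, `ARFaceAnchor`, and `blendShapes` are ARKit's public API.

```swift
import ARKit

// Illustrative wrapper (the name FaceTracker is an assumption, not an Apple API).
class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called as ARKit updates the face anchor each frame.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // The coarse mesh: a fixed-topology set of vertices that
            // deforms to match the user's current expression.
            let geometry = faceAnchor.geometry
            print("mesh vertices:", geometry.vertices.count)

            // Blend shapes describe the expression as named coefficients,
            // e.g. how open the jaw is, from 0.0 to 1.0.
            if let jawOpen = faceAnchor.blendShapes[.jawOpen] {
                print("jawOpen:", jawOpen.floatValue)
            }
        }
    }
}
```

Note that this mesh is far coarser than the 30,000-dot depth capture described above, which is the point of the distinction that follows.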
These are two different things: the detailed biometric data Face ID uses stays protected within the Secure Enclave, while the coarse mesh is what ARKit exposes to apps.
For more, see Apple's support article on Face ID and its developer documentation on ARKit face tracking.