Configuration Options

The following properties are exposed as configuration options:

Required

  • authToken (string): Please see the authorization section for details on how to use this property. Not required if the apiKey and authEndpoint properties are used.

  • apiKey (string): Your API key as provided by Sybrin. Please see the authorization section for details on how to use this property. Not required if the authToken property is used.

  • authEndpoint (string): The endpoint that will be used to authorize. Not required if the authToken property is used.

  • passiveLivenessEndpoint (string): The endpoint that will be used to execute passive liveness detection. A minimal sketch combining the required properties is shown after this list.
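
A minimal sketch of the required configuration is shown below. This is an illustrative example only: the endpoint URLs are placeholders, and the resulting object would be passed to the SDK at initialization as described elsewhere in this documentation.

    // Minimal configuration sketch using an API key and authorization endpoint.
    // The URLs below are placeholders; use the endpoints provided to you.
    const configuration = {
        apiKey: 'your-api-key',                                            // issued by Sybrin
        authEndpoint: 'https://api.example.com/authorize',                 // placeholder URL
        passiveLivenessEndpoint: 'https://api.example.com/passiveliveness' // placeholder URL
        // Alternatively, omit apiKey and authEndpoint and supply a pre-acquired token:
        // authToken: 'your-token'
    };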

Optional

  • allowMultipleFaces (boolean): Sets whether or not the SDK should allow multiple faces in the photo. If set to true, the closest face will be used for liveness detection.

  • assetHeadersCallback (function): A callback function that may be used to modify the headers of the assets HTTP request. The function receives the headers object before modification as a parameter and must return the modified headers object (see the callback sketch after this list).

  • authBodyCallback (function): A callback function that may be used to modify the body of the authorization API call. The function receives the body object before modification as a parameter and must return the modified body object.

  • authHeadersCallback (function): A callback function that may be used to modify the headers of the authorization API call. The function receives the headers object before modification as a parameter and must return the modified headers object.

  • authHttpMethod (string: GET, POST, PUT): Overrides the HTTP method type for the authorization API call.

  • blurThreshold (number): The threshold that the blur detection algorithm must pass for the image to be considered clear enough. A higher value is stricter. A lower value is less strict. Default 7.5.

  • blurThresholdMobileModifier (number): The factor by which blur threshold is adjusted on mobile devices. A higher value makes blur detection stricter on mobile. Default 1.25.

  • debugInfoEndpoint (string): The endpoint that debug information will be posted to when the upload debug info function is used.

  • eyeDetectionScale (number): Influences the thoroughness of eye detection. Lower values are more reliable but slower; larger values are faster but less reliable. May be any value from 1.05 up to 1.4. Default 1.2.

  • eyeDetectionThreshold (number): Sets the eye detection sensitivity. Lower values are more relaxed and more likely to return false positives for eye detection, while higher values are stricter. This value must be a whole number; it is recommended not to go lower than 2 or higher than 5. Default 3.

  • encryptionKey (string): The 32-character AES encryption key that will be used for encrypting network traffic from the SDK to the backend API. This value must match the key used by the decryption algorithm in the backend.

  • faceCompareEndpoint (string): The endpoint that will be used to execute face comparison.

  • faceCompareBodyCallback (function): A callback function that may be used to modify the body of the face compare API call. The function receives the body object before modification as well as a FaceCompareSnapshotData object as parameters (please see the Middleware section for more details), and must return the modified body object.

  • faceCompareHeadersCallback (function): A callback function that may be used to modify the headers of the face compare API call. The function receives the headers object before modification as well as a FaceCompareSnapshotData object as parameters (please see the Middleware section for more details), and must return the modified headers object.

  • faceCompareHttpMethod (string: GET, POST, PUT): Overrides the HTTP method type for the face compare API call.

  • faceDetectionScale (number): Influences the thoroughness of face detection. Lower values are more reliable but slower; larger values are faster but less reliable. May be any value from 1.05 up to 1.4. Default 1.1.

  • faceDetectionThreshold (number): Sets the face detection sensitivity. Lower values are more relaxed and more likely to return false positives for face detection, while higher values are stricter. This value must be a whole number; it is recommended not to go lower than 2 or higher than 5. Default 5 (see the detection tuning sketch after this list).

  • faceDistanceLandscapeThreshold (number): The distance threshold in landscape orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.35.

  • faceDistancePortraitThreshold (number): The distance threshold in portrait orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.35.

  • includeVideo (boolean | string: none, onselfie, oninit): Sets whether or not a video should be recorded with passive liveness detection, as well as the recording style. If this property is set to true, 'onselfie' or 'oninit', the passive liveness API request will include the video in the payload and return it on the result object (please see the section on running liveness using the camera for details). Default value is 'none'. See the video recording sketch after this list. The possible configurations are:

    • false or 'none': No video is recorded.

    • true or 'onselfie': A video recording starts after the selfie image is taken and runs for the number of seconds configured in the videoDuration property. Use the messageHold property to set the message that should be displayed while recording is taking place.

    • 'oninit': A video recording starts as soon as the camera opens and stops when the selfie is taken. Please note that this is a highly intensive mode and may result in the recorded video stuttering, especially on devices with low processing power.

  • integrationMode (0 - direct | 1 - middleware): Sets how the web SDK integrates with backend services to execute liveness. If you wish to use the companion API included with the SDK, or your own middleware implementation, please see the Middleware section. Default 0 (direct).

  • livenessMedia (string: image, video): Sets the media type that will be analyzed for liveness. Video may only be used if video recording has been enabled. The default value is "image".

  • maxUploadFileSize (integer): The maximum size, in bytes, that uploaded files may be. Default 5242880.

  • mediaStreamRetryCount (integer): The number of times that the SDK will retry gaining access to the camera if the first attempt fails (see the camera resilience sketch after this list).

  • mediaStreamRetryDelay (integer): The duration that the SDK will wait before retrying access to the camera if the first attempt fails.

  • mediaStreamStartTimeout (integer): The time (in milliseconds) that the SDK is given to enable and hook onto the user's camera before it times out. The default value is 6000.

  • metadataPropertyName (string): The SDK can emit metadata from the liveness remote API request as an additional field on the PassiveLivenessResult object. The value configured in metadataPropertyName determines which property on the response payload is emitted as metadata. The default value is 'metadata'.

  • modelPath (string): Path to the model used for UI-side facial detection. The default value is "assets".

  • overexposedThreshold (number): The percentage of overexposed pixels that should be present for an image to be considered overexposed. Default 50.

  • overexposedValue (number): The grayscale RGB value (0-255) that a pixel's color must exceed for the pixel to be considered overexposed. Default 220.

  • passiveLivenessBodyCallback (function): A callback function that may be used to modify the body of the passive liveness API call. The function receives the body object before modification as well as a LivenessSnapshotData object as parameters (please see the Middleware section for more details), and must return the modified body object.

  • passiveLivenessHeadersCallback (function): A callback function that may be used to modify the headers of the passive liveness API call. The function receives the headers object before modification as well as a LivenessSnapshotData object as parameters (please see the Middleware section for more details), and must return the modified headers object.

  • passiveLivenessHttpMethod (string: GET, POST, PUT): Overrides the HTTP method type for the passive liveness API call.

  • recordAudio (boolean): Sets whether or not audio should be included if video recording is enabled. Default false.

  • recordDebugInfo (string: never, always, onerror, onspoof, onsuccess): Sets whether and when debug info should be recorded for use by the download or upload functionality. The default value is "never". See the debug recording sketch after this list. The possible values are:

    • 'never': No debug info is ever recorded.

    • 'always': Debug info is recorded after every liveness attempt.

    • 'onerror': Debug info is only recorded when an error occurs.

    • 'onspoof': Debug info is recorded when a face detection comes back as a spoof.

    • 'onsuccess': Debug info is only recorded on a successful face detection that is not a spoof.

  • registerFaceEndpoint (string): The endpoint that will be used to execute face registration. See the functionality section for more information.

  • showDebugOverlay (boolean): Sets whether or not the debug overlay should be shown.

  • targetProcessingSize (number): The target value of the largest dimension that the image will be resized to during preprocessing. Lower values enhance performance but reduce accuracy. Default 640.

  • thresholdAdjustAmount (number): The amount by which face detection sensitivity will be adjusted if face detection is taking a long time. Default value is 1.

  • thresholdAdjustInterval (number): The amount of time (in milliseconds) that needs to pass before face detection sensitivity is adjusted if detection is taking a long time. Default value is 4000.

  • tokenTimeout (number): The duration (in milliseconds) that a token is valid and will be reused for before a new authorization call is made. Default 120000.

  • translations ({ [key: string]: string }): An object literal representing a dictionary lookup that will be used for translating text shown by the JavaScript API. Please see the translations section on this page for a list of all translatable text, as well as the localization page for a detailed description of how to implement localization. An example dictionary is sketched after this list.

  • underexposedThreshold (number): The percentage of underexposed pixels that should be present for an image to be considered underexposed. Default 40.

  • underexposedValue (number): The grayscale RGB value (0-255) that a pixel's color must be below for the pixel to be considered underexposed. Default 30.

  • verifyFaceEndpoint (string): The endpoint that will be used to execute face verification. See the functionality section for more information.

  • verifyIdentifierEndpoint (string): The endpoint that will be used to execute identifier verification. See the functionality section for more information.

  • videoDuration (number): Sets the length (in seconds) that the passive liveness video must be recorded for if the includeVideo property is set to true or 'onselfie'. Default value is 3.

  • videoInterval (integer): The interval (in milliseconds) at which the video stream is analyzed for faces during liveness detection. The default and minimum value is 500.
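
The sketches below show how some of the options above might fit together in a configuration object; any URLs, header names and sample values that do not appear in this reference are illustrative assumptions. This first sketch covers the headers and body callbacks: each callback receives the object before modification (plus a snapshot data object in the case of the face compare and passive liveness calls) and must return the modified object.

    // Callback sketch. The 'X-Correlation-Id' header and 'channel' body field
    // are illustrative; use whatever your backend expects.
    const configuration = {
        // ...required properties...
        authHeadersCallback: function (headers) {
            headers['X-Correlation-Id'] = 'my-correlation-id';
            return headers; // the modified headers object must be returned
        },
        passiveLivenessBodyCallback: function (body, snapshotData) {
            // snapshotData is the LivenessSnapshotData object described in the Middleware section
            body.channel = 'web'; // illustrative extra field
            return body; // the modified body object must be returned
        }
    };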
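
The detection tuning sketch below starts from the documented defaults and tightens the blur and distance checks; the non-default values are illustrative.

    // Detection tuning sketch; values that differ from the documented defaults are illustrative.
    const configuration = {
        // ...required properties...
        faceDetectionScale: 1.1,              // default; lower is more thorough but slower
        faceDetectionThreshold: 5,            // default; whole number, 2 to 5 recommended
        eyeDetectionScale: 1.2,               // default
        eyeDetectionThreshold: 3,             // default
        blurThreshold: 9,                     // stricter than the default of 7.5
        faceDistancePortraitThreshold: 0.45,  // face must be closer than the default 0.35 allows
        overexposedThreshold: 50,             // default; percentage of overexposed pixels
        underexposedThreshold: 40             // default; percentage of underexposed pixels
    };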
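
The video recording sketch below combines the includeVideo, videoDuration, recordAudio and livenessMedia options.

    // Video recording sketch: record a short clip after the selfie is taken and analyze it for liveness.
    const configuration = {
        // ...required properties...
        includeVideo: 'onselfie',  // start recording once the selfie has been taken
        videoDuration: 3,          // record for 3 seconds (the default)
        recordAudio: false,        // do not capture audio (the default)
        livenessMedia: 'video'     // analyze the recorded video instead of the image
    };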
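
The camera resilience sketch below combines the media stream and token options; the retry count and delay values are illustrative, and no unit is documented for mediaStreamRetryDelay.

    // Camera and token resilience sketch; retry values are illustrative.
    const configuration = {
        // ...required properties...
        mediaStreamStartTimeout: 6000, // default: allow 6 seconds to hook onto the camera
        mediaStreamRetryCount: 2,      // illustrative: retry camera access twice on failure
        mediaStreamRetryDelay: 1000,   // illustrative: wait before retrying camera access
        tokenTimeout: 120000           // default: reuse the authorization token for 2 minutes
    };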
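
The debug recording sketch below enables debug info capture on errors only; the endpoint URL is a placeholder.

    // Debug recording sketch: capture debug info when an error occurs so that it can be
    // downloaded or uploaded with the debug info functions.
    const configuration = {
        // ...required properties...
        recordDebugInfo: 'onerror',                             // or 'always', 'onspoof', 'onsuccess'
        debugInfoEndpoint: 'https://api.example.com/debuginfo', // placeholder URL
        showDebugOverlay: false
    };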
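
The translations sketch below shows the shape of the dictionary; the keys are illustrative placeholders rather than the SDK's actual translatable strings, which are listed in the translations section.

    // Translations sketch; the keys shown are illustrative placeholders.
    const configuration = {
        // ...required properties...
        translations: {
            'Loading': 'Laai tans',
            'Hold still': 'Bly stil'
        }
    };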
