JavaScript API
If you wish to use the functionality provided by the Sybrin Biometrics Web SDK but create your own UI entirely, you may do so by creating an instance of the JavaScript API and using the functions it exposes.
Initialization
The JavaScript API may be initialized as follows:
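A minimal initialization sketch is shown below. The `Sybrin.Biometrics.Api` constructor name is an assumption based on the SDK's script include and may differ depending on how the SDK is loaded in your project; the endpoint URLs are placeholders.

```javascript
// Minimal sketch: the `Sybrin.Biometrics.Api` namespace is assumed and the
// endpoint URLs are placeholders; substitute your own values.
function createBiometricsApi() {
    return new Sybrin.Biometrics.Api({
        apiKey: '<your-api-key>',                                    // required
        authorizationEndpoint: 'https://example.com/api/authorize',  // required
        passiveLivenessEndpoint: 'https://example.com/api/liveness', // required
        recordDebugInfo: 'onerror' // optional: keep debug info when an error occurs
    });
}
```

Only `apiKey`, `authorizationEndpoint` and `passiveLivenessEndpoint` are required; any of the optional configuration options below may be added to the same object literal.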
The options object may be used to configure the API as desired.
Configuration Options
The following properties are exposed as configuration options:
Required
apiKey (string)
: Your API key as provided by Sybrin.
authorizationEndpoint (string)
: The endpoint that will be used to authorize.
passiveLivenessEndpoint (string)
: The endpoint that will be used to execute passive liveness detection.
Optional
allowMultipleFaces (boolean)
: Sets whether or not the SDK should allow multiple faces in the photo. If set to true, the closest face will be used for liveness detection.
assetHeaders (dictionary<string, any>)
: A dictionary with string keys and values of any type, used to construct request headers when performing asset retrievals.
authHeaders (dictionary<string, any>)
: A dictionary with string keys and values of any type, used to construct request headers when performing the authorization API call.
authToken (string)
: Provides an alternative to the apiKey property. Renders the apiKey property unused, so it is no longer required. Only to be used if authentication is being handled externally.
blurThreshold (number)
: The threshold that the blur detection algorithm must pass for the image to be considered clear enough. A higher value is more strict; a lower value is less strict. Default 7.5.
blurThresholdMobileModifier (number)
: The factor by which the blur threshold is adjusted on mobile devices. A higher value makes blur detection more strict on mobile. Default 1.25.
debugInfoEndpoint (string)
: The endpoint that debug information will be posted to when the upload debug info function is used.
eyeDetectionThreshold (number)
: Sets the eye detection sensitivity. Lower values are more relaxed and more likely to return false positives for eye detection, while higher values are stricter. This value has to be a whole number. It is recommended not to go lower than 2 or higher than 5. Default 2.
faceDistanceLandscapeThreshold (number)
: The distance threshold in landscape orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.5.
faceDistancePortraitThreshold (number)
: The distance threshold in portrait orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.5.
includeVideo (boolean)
: Sets whether or not a video should be recorded with passive liveness detection. If this property is set to true, the passive liveness API call will include the video with the payload, and then also return it on the result object (please see the section on running liveness using the camera for details). Use in conjunction with the videoDuration property to set the video recording length, and the messageHold property to control what message is displayed while recording is taking place. Default false.
integrationApiBaseUrl (string)
: Sets the base URL for the companion onboarding web API. Only used when useIntegrationApi is set to true.
mediaStreamStartTimeout (integer)
: The time (in milliseconds) that the SDK is given to enable and hook onto the user's camera before it times out. Default 6000.
modelPath (string)
: Path to the model used for UI-side facial detection. Default "assets".
overexposedThreshold (number)
: The percentage of overexposed pixels that must be present for an image to be considered overexposed. Default 50.
overexposedValue (number)
: The grayscale RGB value (0-255) that a pixel's color must be larger than in order to be considered overexposed. Default 220.
passiveLivenessBodyValues (array of { name: string, value: any })
: A collection of names and values, used to construct additional request body items when performing the passive liveness API call.
passiveLivenessHeaders (dictionary<string, any>)
: A dictionary with string keys and values of any type, used to construct request headers when performing the passive liveness API call.
passiveLivenessModelNumber (string)
: The version number of the server-side model used for liveness detection. Default "7".
passiveLivenessImageParamName (string)
: The parameter name of the image to run passive liveness detection on. Default "media".
passiveLivenessVideoParamName (string)
: The parameter name of the video that gets passed down with the passive liveness API request. Default "video".
recordDebugInfo (string: never | always | onerror | onspoof | onsuccess)
: Sets whether and when debug info should be recorded for use by the download or upload functionality. Default "never".
never: No debug info is ever recorded.
always: Debug info is recorded after every liveness attempt.
onerror: Debug info is only recorded when an error occurs.
onspoof: Debug info is recorded when a face detection comes back as a spoof.
onsuccess: Debug info is only recorded on a successful face detection that is not a spoof.
showDebugOverlay (boolean)
: Sets whether or not the debug overlay should be shown.
targetProcessingSize (number)
: The target value of the largest dimension that the image will be resized to during preprocessing. Lower values improve performance but reduce accuracy. Default 640.
thresholdAdjustAmount (number)
: The amount by which face detection sensitivity is adjusted if face detection is taking a long time. Default 1.
thresholdAdjustInterval (number)
: The amount of time (in milliseconds) that must pass before face detection sensitivity is adjusted if detection is taking a long time. Default 4000.
tokenTimeout (number)
: The duration (in milliseconds) that a token remains valid and will be reused before a new authentication call is made. Default 120000.
translations ({ [key: string]: string })
: An object literal representing a dictionary lookup that will be used for translating text shown by the JavaScript API. Please see the translations section on this page for a list of all translatable text, as well as the localization page for a detailed description of how to implement localization.
underexposedThreshold (number)
: The percentage of underexposed pixels that must be present for an image to be considered underexposed. Default 40.
underexposedValue (number)
: The grayscale RGB value (0-255) that a pixel's color must be smaller than in order to be considered underexposed. Default 30.
useIntegrationApi (boolean)
: Sets whether or not the companion onboarding web API is being used with the web SDK. Renders the authorizationEndpoint and dataExtractionEndpoint properties unused, so they are no longer required. integrationApiBaseUrl should also be set when this value is true. Default false.
videoDuration (number)
: Sets the length (in seconds) that the passive liveness video is recorded for if the includeVideo property is set to true. Default 3.
videoInterval (integer)
: The interval (in milliseconds) at which the video stream is analyzed for faces during liveness detection. The default and minimum value is 500.
Functionality
The Sybrin Biometrics JavaScript API provides multiple ways of running liveness detection, as well as other functions to control the Web SDK.
These include:
Run Liveness Using Camera
Cancel
Additionally, the API provides:
Set Translations
Version Information Retrieval
Client Information Retrieval
Compatibility Check
Get Video Input Devices
Debug Information Download
Debug Information Upload
Run Liveness Using Camera
To use the camera for passive liveness detection, you may make use of the openPassiveLivenessDetection function exposed by the JavaScript API.
Signature:
openPassiveLivenessDetection(params?: { id?: string; element?: HTMLElement; correlationId?: string; deviceId?: string; faceInfo?: FaceInfo; }): Promise<PassiveLivenessResult>
Optionally, you may pass an object literal as a parameter and specify either id
or element
as a property. If no element or ID is passed, the Web SDK will temporarily inject a full screen element to display the video feed. If id
(type string) or element
(type HTMLElement) is passed, the Web SDK will create the video feed inside the passed element or element matching the passed ID.
You may also optionally pass a device ID to invoke a specific camera. Use the SDK's getVideoInputDevices
function to retrieve a list of available video input devices and their IDs.
To cater for disabilities, you may optionally pass a faceInfo
value. It is of type FaceInfo, which contains a single property (eyeCount
, of type number). This will adjust the eye count validation that is executed during passive liveness.
You may also optionally pass a correlationId
value to associate the result with a specific case.
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
Usage example:
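A sketch of a typical call, assuming `api` is an initialized JavaScript API instance; the element ID and correlation ID shown are hypothetical values for illustration:

```javascript
// Runs passive liveness inside a host element and returns the result.
// 'liveness-container' is a hypothetical element ID on your page.
async function runPassiveLiveness(api) {
    const result = await api.openPassiveLivenessDetection({
        id: 'liveness-container',
        correlationId: 'case-12345' // optional: associate the result with a case
    });
    if (result.alive) {
        console.log('Live person detected with confidence', result.confidence);
    } else {
        console.log('Liveness could not be established.');
    }
    return result;
}
```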
The result is of type PassiveLivenessResult
and has the following properties:
alive (boolean)
: Whether or not passive liveness passed. True means the selfie is of a live person. False means it is a spoof or that liveness could not be established.
confidence (number)
: Confidence level that the selfie is of a live person. This is a decimal value between 0 and 1, where 0 means not confident at all and 1 means 100% confident.
facingMode (string: user | environment | left | right)
: The direction/orientation of the camera that was used during capture.
image (string)
: The selfie image that was analyzed, in data URI format (MIME type and base64 data).
video (blob)
: Blob data of the .webm video that was recorded for analysis. This property is only populated if video recording was enabled using the includeVideo property within the API configuration options.
Cancel
A function to cancel any action that the biometrics Web SDK is currently executing. This is useful if you wish to add a cancel button to the UI so that the user may stop liveness detection while it's in progress.
Signature:
cancel(): void
Usage example:
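A minimal sketch of wiring cancellation to a button, assuming `api` is an initialized JavaScript API instance:

```javascript
// Wires a cancel button so the user can abort a liveness scan in progress.
function attachCancelButton(api, button) {
    button.addEventListener('click', () => api.cancel());
}
```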
Set Translations
This function may be used to set translations on a JavaScript API level.
Signature:
setTranslations(translations: { [key: string]: string }): void
Usage example:
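A sketch using two of the translation keys documented in the Translations section below; `api` is assumed to be an initialized JavaScript API instance and the replacement strings are examples only:

```javascript
// Overrides selected display strings at runtime.
function applyTranslations(api) {
    api.setTranslations({
        'sy-b-translation-24': 'Please position your face in the middle',
        'sy-b-translation-27': 'Looking for a face...'
    });
}
```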
Version Information Retrieval
To get all version info regarding the Web SDK and its components, the API exposes a function called getVersionInfo.
Signature:
getVersionInfo(): Promise<any>
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
Usage example:
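A minimal sketch, assuming `api` is an initialized JavaScript API instance:

```javascript
// Retrieves and logs the SDK version.
async function logVersionInfo(api) {
    const info = await api.getVersionInfo();
    console.log('Web SDK version:', info.webSdkVersion);
    return info;
}
```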
The result has the following properties:
webSdkVersion (string)
: The semantic version number of the JavaScript web SDK that is currently in use.
Client Information Retrieval
To get all version info regarding the client environment in which the application is currently running, the API exposes a function called getClientInfo.
Signature:
getClientInfo(): Promise<ClientInfo>
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
Usage example:
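A sketch that branches on the client environment, for example to adapt the layout on mobile; `api` is assumed to be an initialized JavaScript API instance:

```javascript
// Retrieves client environment details and reacts to them.
async function adaptToClient(api) {
    const client = await api.getClientInfo();
    if (client.isMobile) {
        console.log('Mobile device detected');
    }
    return client;
}
```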
The result is of type ClientInfo
and has the following properties:
isMobile (boolean)
: Whether or not the client environment is mobile (tablet or phone).
isMobileAndroid (boolean)
: Whether or not the client environment is running Android.
isMobileBlackberry (boolean)
: Whether or not the client environment is running Blackberry.
isIphone (boolean)
: Whether or not the client environment is running on an iPhone.
isIpad (boolean)
: Whether or not the client environment is running on an iPad.
isIpadPro (boolean)
: Whether or not the client environment is running on an iPad Pro.
isIpod (boolean)
: Whether or not the client environment is running on an iPod.
isMobileIos (boolean)
: Whether or not the client environment is running iOS.
isMobileOpera (boolean)
: Whether or not the client environment is mobile Opera.
isMobileWindows (boolean)
: Whether or not the client environment is Windows Mobile.
isMac (boolean)
: Whether or not the client environment is running on a Mac.
Compatibility Check
This function checks compatibility of the web SDK with the environment in which it is running (device, operating system, browser, etc.) and reports back on it.
Signature:
checkCompatibility(handleIncompatibility?: boolean): Promise<CompatibilityInfo>
Optionally, you may pass down true
to signal for the web SDK to handle incompatibility internally. This will result in a modal prompt with an appropriate message automatically being shown if the function finds incompatibility with the environment.
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
Usage example:
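A sketch that handles incompatibility manually rather than passing `true` to let the SDK show its own prompt; `api` is assumed to be an initialized JavaScript API instance:

```javascript
// Checks environment compatibility before starting liveness detection.
async function verifyCompatibility(api) {
    const compat = await api.checkCompatibility(false);
    if (!compat.compatible) {
        console.warn('Incompatible environment:', compat.message);
    }
    return compat;
}
```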
The result is of type CompatibilityInfo
and has the following properties:
compatible (boolean)
: Whether or not the web SDK is compatible with the client environment.
mediaRecorder (boolean)
: Whether or not video recording is supported.
mediaStream (boolean)
: Whether or not the client environment supports media stream access.
message (string)
: An appropriate message that describes the related incompatibility, if detected.
Get Video Input Devices
This function returns a promise with a list of all video input devices that can be used.
Signature:
getVideoInputDevices(showLoader?: boolean, loaderContainer?: HTMLElement): Promise<VideoInputDevice[]>
The optional showLoader
parameter sets whether or not the UI must be blocked with a loader. When used in conjunction with the optional loaderContainer
parameter, the specific element will be blocked with a loader.
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
Usage example:
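A sketch that picks a camera to pass as the deviceId to openPassiveLivenessDetection; `api` is assumed to be an initialized JavaScript API instance, and the front-camera preference is an example policy:

```javascript
// Lists available cameras and prefers a front-facing one if reported.
// Passing true blocks the UI with a loader while devices are queried.
async function pickFrontCamera(api) {
    const devices = await api.getVideoInputDevices(true);
    return devices.find(d => d.direction === 'Front') || devices[0];
}
```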
The result is an array of type VideoInputDevice
and each instance has the following properties:
deviceId (string)
: ID of the device.
groupId (string)
: Group that the device belongs to.
type (string: Camera | Webcam | Device)
: The type of device.
direction (string: Front | Back | Integrated)
: The direction of the device.
label (string)
: A short description of the device.
counter (number)
: Number indicator for the device. Value is 0 unless there are multiple devices of the same type and direction available.
Debug Information Download
Before using this function, please ensure that the recordDebugInfo
configuration option has been set.
Signature:
downloadDebugInfo(): void
This is for debug and diagnostic purposes only and can only be used once debug functionality has been configured. It can be used after a liveness scan to download an HTML file containing information relating to the scan attempt.
Usage example:
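A sketch that offers the recorded debug file for download after a failed scan attempt; `api` is assumed to be an initialized JavaScript API instance with recordDebugInfo configured (e.g. 'onerror'):

```javascript
// Runs liveness and, if the scan fails, downloads the recorded debug file.
async function runWithDebugDownload(api) {
    try {
        return await api.openPassiveLivenessDetection();
    } catch (error) {
        api.downloadDebugInfo(); // saves the recorded debug HTML file locally
        throw error;
    }
}
```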
Debug Information Upload
Before using this function, please ensure that the recordDebugInfo
configuration option has been set.
Signature:
uploadDebugInfo(): Promise<boolean>
This is for debug and diagnostic purposes only and can only be used once debug functionality has been configured. It can be used after a liveness scan to upload an HTML file containing information relating to the scan attempt. The file is uploaded to the endpoint configured in the debugInfoEndpoint
configuration option.
This function sends a POST request to the configured endpoint; the payload is a string field on the form body called debugInfo.
The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().
IMPORTANT: The HTML file includes the selfie taken during the scan attempt. Please keep the POPI Act in mind when making use of this feature. Sybrin accepts no responsibility for any breach of the POPI Act should this function be used to upload data to your own custom hosted service.
Usage example:
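A sketch of the upload flow, assuming `api` is an initialized JavaScript API instance with both recordDebugInfo and debugInfoEndpoint configured:

```javascript
// Uploads the recorded debug file to the configured debugInfoEndpoint.
// The resolved boolean indicates whether the upload succeeded.
async function uploadLastScanDebugInfo(api) {
    const uploaded = await api.uploadDebugInfo();
    console.log(uploaded ? 'Debug info uploaded.' : 'Debug info upload failed.');
    return uploaded;
}
```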
Translations
The JavaScript API is affected by the following translation keys:
sy-b-translation-21
Text prompt to display when a selfie has successfully been taken and the user has to wait for processing to complete
Good job! Please wait...
sy-b-translation-22
Text prompt to display with countdown when conditions are all correct and video recording has started (if enabled)
Perfect! Please hold still.
sy-b-translation-23
Text prompt to display while liveness is initializing
Preparing...
sy-b-translation-24
Text prompt to display when the user's face is not centered properly
Please center face
sy-b-translation-25
Text prompt to display when the SDK is unable to detect the user's eyes
Please open both eyes
sy-b-translation-26
Text prompt to display when the user's face is too far away from the camera
Please move closer to the camera
sy-b-translation-27
Text prompt to display when the SDK is unable to detect a face
Scanning for face...
sy-b-translation-28
Text prompt to display when more than one face is being detected
Please ensure only one face is visible in frame
sy-b-translation-29
Text prompt to display when more light is needed
Lighting conditions too dark
sy-b-translation-30
Text prompt to display when there is too much light
Lighting conditions too bright
sy-b-translation-31
Text prompt to display when the image is not clear enough
Image too blurry
sy-b-translation-32
Alert message to show if the SDK detects that the browser is not supported
Browser is not supported.
sy-b-translation-33
Alert message to show if the SDK detects that the user is using a third party browser on an Apple device that doesn't allow camera access to third party browsers.
Browser is not supported. Please open in Safari.
sy-b-translation-34
Alert message to show if video recording is enabled and the SDK detects that the browser does not support video recording
Video recording is not supported in this browser.
sy-b-translation-35
Caption of the button that dismisses the alert window that is shown when a compatibility issue is detected
Ok