JavaScript API

If you wish to use the functionality provided by the Sybrin Biometrics Web SDK but create your own UI entirely, you may achieve this by simply creating an instance of the JavaScript API and using the functions exposed by it.

Authorization

The web SDK offers two methods of authorization.

  • The first, most preferred and most secure method is to implement the authorization step in your backend solution and then provide the token to the front-end web SDK component. This is the Token Method.

  • The second method is to allow the web SDK to execute authorization from the UI. This is the API Key Method.

Warning: The API Key Method is highly insecure and is not recommended for production environments.

Token Method

Contrary to what the name implies, you will still require an API key for this approach. The only difference is that the web SDK will never directly make use of the API key and will instead only receive a token.

Sybrin will provide you with an orchestration API endpoint, along with a personalized API key (if you have not received these, please contact us).

To use this method:

  • Execute a POST call to the authorization endpoint provided to you by Sybrin, adding your API key to an apiKey header on the request. The response will include a token (AuthToken property).

  • Provide the token returned from the API request to your front-end solution.

  • Set the token on the authToken property of your Sybrin.Biometrics.Options instance.
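The token acquisition step above can be sketched as follows. The endpoint URL is a placeholder for the one Sybrin provides you; the response is assumed to be JSON with an AuthToken property, as described above:

```javascript
// Exchange your API key for a token (ideally done in your backend).
// The endpoint URL is a placeholder; use the one provided by Sybrin.
async function fetchAuthToken(authUrl, apiKey) {
    const response = await fetch(authUrl, {
        method: 'POST',
        headers: { apiKey: apiKey } // API key goes in the apiKey header
    });
    if (!response.ok) {
        throw new Error('Authorization failed with status ' + response.status);
    }
    const body = await response.json();
    return body.AuthToken; // pass this token to your front-end solution
}
```

The returned token is then set on the authToken property of your Sybrin.Biometrics.Options instance.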

API Key Method

This method is much simpler, but also much less secure.

To use this approach:

  • Set the API key provided to you by Sybrin on the apiKey property of your Sybrin.Biometrics.Options instance.

  • Set the authorization endpoint provided to you by Sybrin on the authEndpoint property of your Sybrin.Biometrics.Options instance.

Initialization

Token Method

This is the preferred approach.

Please follow the steps described in the Token Method section under Authorization above. The JavaScript API may then be initialized as follows:

var options = new Sybrin.Biometrics.Options({
    authToken: 'your-auth-token-here', 
    passiveLivenessEndpoint: 'your-liveness-endpoint-here',
    // modelPath: 'assets' // Use if your model is located elsewhere
});

var biometrics = new Sybrin.Biometrics.Api();

biometrics.initialize(options);

API Key Method

This approach is not secure and is not recommended for production environments.

Please follow the steps described in the API Key Method section under Authorization above. The JavaScript API may then be initialized as follows:

var options = new Sybrin.Biometrics.Options({
    apiKey: 'your-api-key-here',
    authEndpoint: 'your-auth-endpoint-here',
    passiveLivenessEndpoint: 'your-liveness-endpoint-here',
    // modelPath: 'assets' // Use if your model is located elsewhere
});

var biometrics = new Sybrin.Biometrics.Api();

biometrics.initialize(options);

Configuration Options

The following properties are exposed as configuration options:

Required

  • authToken (string): Please see the authorization section for details on how to use this property. Not required if the apiKey and authEndpoint properties are used.

  • apiKey (string): Your API key as provided by Sybrin. Please see the authorization section for details on how to use this property. Not required if the authToken property is used.

  • authEndpoint (string): The endpoint that will be used to authorize. Not required if the authToken property is used.

  • passiveLivenessEndpoint (string): The endpoint that will be used to execute passive liveness detection.

Optional

  • allowMultipleFaces (boolean): Sets whether or not the SDK should allow multiple faces in the photo. If set to true, the closest face will be used for liveness detection.

  • assetHeadersCallback (function): A callback function that may be used to modify the headers of the assets HTTP request. It receives the headers object before modification and is expected to return the modified headers object.

  • authBodyCallback (function): A callback function that may be used to modify the body of the authorization API call. It receives the body object before modification and is expected to return the modified body object.

  • authHeadersCallback (function): A callback function that may be used to modify the headers of the authorization API call. It receives the headers object before modification and is expected to return the modified headers object.

  • authHttpMethod (string: GET | POST | PUT): Overrides the HTTP method type for the authorization API call.

  • blurThreshold (number): The threshold that the blur detection algorithm must pass for the image to be considered clear enough. A higher value is stricter. A lower value is less strict. Default 7.5.

  • blurThresholdMobileModifier (number): The factor by which blur threshold is adjusted on mobile devices. A higher value makes blur detection stricter on mobile. Default 1.25.

  • debugInfoEndpoint (string): The endpoint that will be used to post debug information to upon using the upload debug info function.

  • eyeDetectionScale (number): Influences thoroughness of eye detection. Lower values are more reliable, but also slower. Larger values are faster, but less reliable. May be any value from 1.05 up to 1.4. Default 1.2.

  • eyeDetectionThreshold (number): Sets the eye detection sensitivity. Lower values are more relaxed and more likely to return false positives for eye detection, while higher values are stricter. This value has to be a whole number. It is recommended not to go lower than 2 or higher than 5. Default 3.

  • encryptionKey (string): The 32-character AES encryption key that will be used for encrypting network traffic from the SDK to the backend API. This value must match the key used by the decryption algorithm in the backend.

  • faceDetectionScale (number): Influences thoroughness of face detection. Lower values are more reliable, but also slower. Larger values are faster, but less reliable. May be any value from 1.05 up to 1.4. Default 1.1.

  • faceDistanceLandscapeThreshold (number): The distance threshold in landscape orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.35.

  • faceDistancePortraitThreshold (number): The distance threshold in portrait orientation for the face to be considered close enough, where 0.9 is the closest and 0.1 is the furthest. Default 0.35.

  • includeVideo (boolean): Sets whether or not a video should be recorded with passive liveness detection. If this property is set to true, the passive liveness API call will include the video with the payload, and then also return it on the result object (Please see the section on running liveness using the camera for details). Use in conjunction with the videoDuration property to also set the video recording length, and use the messageHold property to control what message is displayed while recording is taking place. Default value is false.

  • integrationMode (0 - direct | 1 - middleware): Sets how the web SDK integrates with backend services to execute liveness. If you wish to use the companion API included with the SDK, or your own middleware implementation, please see the Middleware section. Default 0 (direct).

  • mediaStreamRetryCount (integer): The number of times that the SDK will retry gaining access to the camera if the first attempt fails.

  • mediaStreamRetryDelay (integer): The delay that the SDK will wait before retrying camera access after a failed attempt.

  • mediaStreamStartTimeout (integer): The time (in milliseconds) that the SDK is given to enable and hook onto the user's camera before it times out. The default value is 6000.

  • modelPath (string): Path to the model used for UI-side facial detection. The default value is "assets".

  • overexposedThreshold (number): The percentage of overexposed pixels that should be present for an image to be considered overexposed. Default 50.

  • overexposedValue (number): The grayscale RGB value (0-255) that a pixel's color must be larger than in order to be considered overexposed. Default 220.

  • passiveLivenessBodyCallback (function): A callback function that may be used to modify the body of the passive liveness API call. It receives the body object before modification, as well as a SnapshotData object (please see the Middleware section for more details), and is expected to return the modified body object.

  • passiveLivenessHeadersCallback (function): A callback function that may be used to modify the headers of the passive liveness API call. It receives the headers object before modification, as well as a SnapshotData object (please see the Middleware section for more details), and is expected to return the modified headers object.

  • passiveLivenessHttpMethod (string: GET | POST | PUT): Overrides the HTTP method type for the passive liveness API call.

  • recordAudio (boolean): Sets whether or not audio should be included if video recording is enabled. Default false.

  • recordDebugInfo (string: never | always | onerror | onspoof | onsuccess): Sets whether and when debug info should be recorded for use by download or upload functionality. The default value is "never".

    • never: No debug info is ever recorded.

    • always: Debug info is recorded after every liveness attempt.

    • onerror: Debug info is only recorded when an error occurs.

    • onspoof: Debug info is recorded when a face detection comes back as a spoof.

    • onsuccess: Debug info is only recorded on a successful face detection that is not a spoof.

  • showDebugOverlay (boolean): Sets whether or not the debug overlay should be shown.

  • targetProcessingSize (number): The target value of the largest dimension that the image will be resized to during preprocessing. Lower values enhance performance but reduce accuracy. Default 640.

  • thresholdAdjustAmount (number): The amount by which face detection sensitivity will be adjusted if face detection is taking a long time. Default value is 1.

  • thresholdAdjustInterval (number): The amount of time (in milliseconds) that needs to pass before face detection sensitivity is adjusted if detection is taking a long time. Default value is 4000.

  • tokenTimeout (number): The duration (in milliseconds) that a token is valid and will be reused for before a new authorization call is made. Default 120000.

  • translations ({ [key: string]: string }): An object literal representing a dictionary lookup that will be used for translating text shown by the JavaScript API. Please see the translations section on this page for a list of all translatable text, as well as the localization page for a detailed description on how to implement localization.

  • underexposedThreshold (number): The percentage of underexposed pixels that should be present for an image to be considered underexposed. Default 40.

  • underexposedValue (number): The grayscale RGB value (0-255) that a pixel's color must be smaller than in order to be considered underexposed. Default 30.

  • videoDuration (number): Sets the length (in seconds) that the passive liveness video must be recorded for if the includeVideo property is set to true. Default value is 3.

  • videoInterval (integer): The interval (in milliseconds) at which the video stream is analyzed for faces during liveness detection. The default and minimum value is 500.
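As an illustration of how several optional properties fit together, the following sketch enables video recording, keeps debug info on errors, and customizes the authorization headers. All values, including the custom header name, are placeholders:

```javascript
// Configuration sketch combining several optional properties.
var optionValues = {
    authToken: 'your-auth-token-here',
    passiveLivenessEndpoint: 'your-liveness-endpoint-here',
    includeVideo: true,          // also record a video during liveness detection
    videoDuration: 3,            // record for 3 seconds
    recordAudio: false,          // video only, no audio
    recordDebugInfo: 'onerror',  // keep debug info when an error occurs
    authHeadersCallback: function (headers) {
        // Add a custom header before the authorization call is made
        headers['x-custom-header'] = 'custom-value';
        return headers; // the modified headers object must be returned
    }
};
```

The object is then passed to new Sybrin.Biometrics.Options(...) and on to biometrics.initialize(options), as shown in the initialization examples above.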

Functionality

The Sybrin Biometrics JavaScript API provides multiple ways of running liveness detection, as well as other functions for controlling the Web SDK.

These include:

  • Run Liveness Using Camera

  • Selfie Capture

  • Cancel

Additionally, the API provides:

  • Set Translations

  • Version Information Retrieval

  • Client Information Retrieval

  • Compatibility Check

  • Get Video Input Devices

  • Debug Information Download

  • Debug Information Upload

Run Liveness Using Camera

To use the camera for passive liveness detection, you may make use of the openPassiveLivenessDetection function exposed by the JavaScript API.

Signature:

openPassiveLivenessDetection(params?: { id?: string; element?: HTMLElement; correlationId?: string; deviceId?: string; faceInfo?: FaceInfo; }): Promise<PassiveLivenessResult>

Optionally, you may pass an object literal as a parameter and specify either id or element as a property. If no element or ID is passed, the Web SDK will temporarily inject a full-screen element to display the video feed. If id (type string) or element (type HTMLElement) is passed, the Web SDK will create the video feed inside the passed element or the element matching the passed ID.

You may also optionally pass a device ID to invoke a specific camera. Use the SDK's getVideoInputDevices function to retrieve a list of available video input devices and their IDs.

To cater for disabilities, you may optionally pass a faceInfo value. It is of type FaceInfo, which contains a single property (eyeCount, of type number). This will adjust the eye count validation that is executed during passive liveness.

You may also optionally pass a correlationId value to associate the result with a specific case.

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

<button onclick="biometrics.openPassiveLivenessDetection()">Start Liveness Detection</button>

The result is of type PassiveLivenessResult and has the following properties:

  • alive (boolean): Whether or not passive liveness passed. True means the selfie is of a live person. False means it is a spoof or that liveness could not be established.

  • confidence (number): Confidence level that the selfie is of a live person. This is a decimal value between 0 and 1, where 0 means not confident at all and 1 means 100% confident.

  • facingMode (string: user | environment | left | right): The direction/orientation of the camera that was used during capture.

  • image (string): The selfie image that was analyzed, in data URI format (mime type and base64 data).

  • video (blob): Blob data of the .webm video that was recorded for analysis. This property is only populated if video recording was enabled using the includeVideo property within the API configuration options.
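Putting the above together, a minimal handler might look as follows. It assumes a biometrics instance has already been initialized as shown in the Initialization section; the container ID and correlation ID are hypothetical placeholders:

```javascript
// Run passive liveness detection and handle the PassiveLivenessResult.
async function runLiveness(biometrics) {
    try {
        const result = await biometrics.openPassiveLivenessDetection({
            id: 'liveness-container',  // render the video feed inside this element
            correlationId: 'case-1234' // associate the result with a specific case
        });
        if (result.alive) {
            console.log('Live person detected, confidence: ' + result.confidence);
        } else {
            console.log('Spoof detected or liveness could not be established.');
        }
        return result;
    } catch (error) {
        console.log('Liveness detection failed: ', error);
        throw error;
    }
}
```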

Selfie Capture

This function is for taking a selfie without running liveness detection.

For this functionality, you may make use of the openSelfieCapture function exposed by the JavaScript API.

Signature:

openSelfieCapture(params?: { id?: string; element?: HTMLElement; deviceId?: string; faceInfo?: FaceInfo; }): Promise<SelfieCaptureResult>

Optionally, you may pass an object literal as a parameter and specify either id or element as a property. If no element or ID is passed, the Web SDK will temporarily inject a full-screen element to display the video feed. If id (type string) or element (type HTMLElement) is passed, the Web SDK will create the video feed inside the passed element or the element matching the passed ID.

You may also optionally pass a device ID to invoke a specific camera. Use the SDK's getVideoInputDevices function to retrieve a list of available video input devices and their IDs.

To cater for disabilities, you may optionally pass a faceInfo value. It is of type FaceInfo, which contains a single property (eyeCount, of type number). This will adjust the eye count validation that is executed during selfie capture.

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

<button onclick="biometrics.openSelfieCapture()">Open Selfie Capture</button>

The result is of type SelfieCaptureResult and has the following properties:

  • facingMode (string: user | environment | left | right): The direction/orientation of the camera that was used during capture.

  • image (string): The selfie image that was analyzed, in data URI format (mime type and base64 data).

  • video (blob): Blob data of the .webm video that was recorded for analysis. This property is only populated if video recording was enabled using the includeVideo property within the API configuration options.

  • hash (string): The hash calculated on the image that is used to ensure that the image comes from a trusted source.
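Because the image property is a data URI, it can be assigned directly to an img element. The following sketch assumes an initialized biometrics instance and a hypothetical img element with ID selfie-preview on the page:

```javascript
// Capture a selfie and display the resulting image.
function captureSelfie(biometrics) {
    return biometrics.openSelfieCapture().then(function (result) {
        // result.image is a data URI, so it can be used as an img src directly
        document.getElementById('selfie-preview').src = result.image;
        return result;
    }).catch(function (error) {
        console.log('Selfie capture failed: ', error);
        throw error;
    });
}
```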

Cancel

A function to cancel any action that the biometrics Web SDK is currently executing. This is useful if you wish to add a cancel button to the UI so that the user may stop liveness detection while it's in progress.

Signature:

cancel(): void

Usage example:

<button onclick="biometrics.cancel()">Cancel</button>

Set Translations

This function may be used to set translations on a JavaScript API level.

Signature:

setTranslations(translations: { [key: string]: string }): void

Usage example:

biometrics.setTranslations({
    'sy-b-translation-23': 'Preparing'
});

Version Information Retrieval

To get all version info regarding the Web SDK and its components, the API exposes a function called getVersionInfo.

Signature:

getVersionInfo(): Promise<any>

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

biometrics.getVersionInfo().then(function(result) {
    console.log(result);
}).catch(function(error) {
    console.log('An error occurred: ', error);
}).finally(function() {
    console.log('Done');
});

The result has the following properties:

  • webSdkVersion (string): The semantic version number of the JavaScript web SDK that is currently in use.

Client Information Retrieval

To get all version info regarding the client environment in which the application is currently running, the API exposes a function called getClientInfo.

Signature:

getClientInfo(): Promise<ClientInfo>

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

biometrics.getClientInfo().then(function(result) {
    console.log(result);
}).catch(function(error) {
    console.log('An error occurred: ', error);
}).finally(function() {
    console.log('Done');
});

The result is of type ClientInfo and has the following properties:

  • isMobile (boolean): Whether or not the client environment is mobile (tablet or phone).

  • isMobileAndroid (boolean): Whether or not the client environment is running Android.

  • isMobileBlackberry (boolean): Whether or not the client environment is running BlackBerry.

  • isIphone (boolean): Whether or not the client environment is running on an iPhone.

  • isIpad (boolean): Whether or not the client environment is running on an iPad.

  • isIpadPro (boolean): Whether or not the client environment is running on an iPad Pro.

  • isIpod (boolean): Whether or not the client environment is running on an iPod.

  • isMobileIos (boolean): Whether or not the client environment is running iOS.

  • isMobileOpera (boolean): Whether or not the client environment is mobile Opera.

  • isMobileWindows (boolean): Whether or not the client environment is Windows Mobile.

  • isMac (boolean): Whether or not the client environment is running on Mac.

Compatibility Check

This function checks compatibility of the web SDK with the environment in which it is running (device, operating system, browser etc.) and reports back on it.

Signature:

checkCompatibility(handleIncompatibility?: boolean): Promise<CompatibilityInfo>

Optionally, you may pass down true to signal for the web SDK to handle incompatibility internally. This will result in a modal prompt with an appropriate message automatically being shown if the function finds incompatibility with the environment.

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

biometrics.checkCompatibility().then(function(result) {
    console.log(result);
}).catch(function(error) {
    console.log('An error occurred: ', error);
}).finally(function() {
    console.log('Done');
});

The result is of type CompatibilityInfo and has the following properties:

  • compatible (boolean): Whether or not the web SDK is compatible with the client environment.

  • mediaRecorder (boolean): Whether or not video recording is supported.

  • mediaStream (boolean): Whether or not the client environment supports media stream access.

  • message (string): An appropriate message that describes the related incompatibility if detected.

Get Video Input Devices

This function returns a promise with a list of all video input devices that can be used.

Signature:

getVideoInputDevices(showLoader?: boolean, loaderContainer?: HTMLElement): Promise<VideoInputDevice[]>

The optional showLoader parameter sets whether or not the UI must be blocked with a loader. When used in conjunction with the optional loaderContainer parameter, the specific element will be blocked with a loader.

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

Usage example:

biometrics.getVideoInputDevices().then(function(result) {
    console.log(result);
}).catch(function(error) {
    console.log('An error occurred: ', error);
}).finally(function() {
    console.log('Done');
});

The result is an array of type VideoInputDevice and each instance has the following properties:

  • deviceId (string): ID of the device.

  • groupId (string): Group that the device belongs to.

  • type (string: Camera | Webcam | Device): The type of device.

  • direction (string: Front | Back | Integrated): The direction of the device.

  • label (string): A short description of the device.

  • counter (number): Number indicator for the device. Value is 0 unless there are multiple devices of the same type and direction available.
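For example, the device list above can be used to prefer a front-facing camera when one is available. This is a sketch that falls back to the first device in the list:

```javascript
// Pick a front-facing camera from the available video input devices,
// falling back to the first device if no front-facing one is found.
function pickFrontCamera(devices) {
    const front = devices.find(function (device) {
        return device.direction === 'Front';
    });
    return (front || devices[0]).deviceId;
}
```

The returned deviceId can then be passed to openPassiveLivenessDetection or openSelfieCapture to invoke that specific camera.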

Debug Information Download

Before using this function, please ensure that the recordDebugInfo configuration option has been set.

Signature:

downloadDebugInfo(): void

This is for debug and diagnostic purposes only and can only be used once debug functionality has been configured. It can be used after a liveness scan to download an HTML file containing information relating to the scan attempt.

Usage example:

biometrics.downloadDebugInfo();

Debug Information Upload

Before using this function, please ensure that the recordDebugInfo configuration option has been set.

Signature:

uploadDebugInfo(): Promise<boolean>

This is for debug and diagnostic purposes only and can only be used once debug functionality has been configured. It can be used after a liveness scan to upload an HTML file containing information relating to the scan attempt. The file is uploaded to the endpoint configured in the debugInfoEndpoint configuration option.

This function sends a POST request to the configured endpoint. The payload is a form body containing a single string field named debugInfo.

The function returns a promise, so you may choose to use either the asynchronous await pattern or to subscribe to the result using .then(), .catch() and .finally().

IMPORTANT: The HTML file includes the selfie taken during the scan attempt. Please keep the Protection of Personal Information (POPI) Act in mind when making use of this feature. Sybrin accepts no responsibility for any breach of the POPI Act should this function be used to upload data to your own custom hosted service.

Usage example:

biometrics.uploadDebugInfo().then(function() {
    console.log('Upload complete');
}).catch(function(error) {
    console.log('An error occurred: ', error);
}).finally(function() {
    console.log('Done');
});

Translations

The JavaScript API is affected by the following translation keys:

  • sy-b-translation-21: Text prompt to display when a selfie has successfully been taken and the user has to wait for processing to complete. Default: "Good job! Please wait..."

  • sy-b-translation-22: Text prompt to display with a countdown when conditions are all correct and video recording has started (if enabled). Default: "Perfect! Please hold still."

  • sy-b-translation-23: Text prompt to display while liveness is initializing. Default: "Preparing..."

  • sy-b-translation-24: Text prompt to display when the user's face is not centered properly. Default: "Please center face"

  • sy-b-translation-25: Text prompt to display when the SDK is unable to detect the user's eyes. Default: "Please open both eyes"

  • sy-b-translation-26: Text prompt to display when the user's face is too far away from the camera. Default: "Please move closer to the camera"

  • sy-b-translation-27: Text prompt to display when the SDK is unable to detect a face. Default: "Scanning for face..."

  • sy-b-translation-28: Text prompt to display when more than one face is detected. Default: "Please ensure only one face is visible in frame"

  • sy-b-translation-29: Text prompt to display when more light is needed. Default: "Lighting conditions too dark"

  • sy-b-translation-30: Text prompt to display when there is too much light. Default: "Lighting conditions too bright"

  • sy-b-translation-31: Text prompt to display when the image is not clear enough. Default: "Image too blurry"

  • sy-b-translation-32: Alert message to show if the SDK detects that the browser is not supported. Default: "Browser is not supported."

  • sy-b-translation-33: Alert message to show if the SDK detects a third-party browser on an Apple device that does not allow camera access to third-party browsers. Default: "Browser is not supported. Please open in Safari."

  • sy-b-translation-34: Alert message to show if video recording is enabled and the SDK detects that the browser does not support video recording. Default: "Video recording is not supported in this browser."

  • sy-b-translation-35: Caption of the button that dismisses the alert window shown when a compatibility issue is detected. Default: "Ok"
