Add to the package.json file, under the dependencies group:
"react-native-trust-vision-SDK": "git+https://github.com/tsocial/react-native-trust-vision-SDK#<version_tag_name>"
$ yarn
Add to the Podfile (host app):
...
pod 'RNTrustVisionRnsdkFramework', path: '../node_modules/react-native-trust-vision-SDK'
$ pod install
Add to the root-level build.gradle file (host app):
maven {
url("$rootDir/../node_modules/react-native-trust-vision-SDK/android/repo")
}
e.g.:
allprojects {
repositories {
mavenLocal()
...
maven {
url("$rootDir/../node_modules/react-native-trust-vision-SDK/android/repo")
}
google()
jcenter()
maven { url 'https://jitpack.io' }
}
}
Add to app/build.gradle:
android {
...
aaptOptions {
noCompress "tflite"
noCompress "lite"
}
}
import { NativeEventEmitter } from "react-native";
import RNTrustVisionRnsdkFramework, {
TVConst,
TVErrorCode,
} from "react-native-trust-vision-SDK";
Full steps:
try {
await RNTrustVisionRnsdkFramework.initialize(
clientSettingJsonString,
"vi",
true
);
const tvsdkEmitter = new NativeEventEmitter(RNTrustVisionRnsdkFramework);
// Listen to the events during the capturing
const subscription = tvsdkEmitter.addListener("TVSDKEvent", (event) => {
console.log("TVSDK - " + event.name + " - " + event.params.page_name);
});
// Listen to the frame batches recorded during the capturing
const framesRecordedSubscription = tvsdkEmitter.addListener(
"TVSDKFrameBatch",
(event) => {
console.log("TVSDK - " + "FrameBatch: ", event);
// upload frame batch to server using this api:
// https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-videoaudioframes
}
);
const cardType = {
id: "card_id",
name: "card_name",
orientation: TVConst.Orientation.LANDSCAPE,
hasBackSide: true,
};
const idConfig = {
cardType: cardType,
isEnableSound: false,
isReadBothSide: true,
cardSide: TVConst.CardSide.FRONT,
};
console.log("Id Config", idConfig);
const idResult = await RNTrustVisionRnsdkFramework.startIdCapturing(idConfig);
console.log("Id Result", idResult);
} catch (e) {
console.log("Error: ", e.code, " - ", e.message);
}
The SDK needs to be initialized first:
await RNTrustVisionRnsdkFramework.initialize(
clientSettingJsonString,
"vi", // language code
true // enable event or not
);
Options:
clientSettingJsonString: string. Optional but recommended. This is the setting specialized for each client by the TS server: the response JSON string returned by the API https://ekyc.trustingsocial.com/api-reference/customer-api/#get-client-settings. When it is null or does not match the expected type, the default setting built into the SDK is used.
language code: string. vi or en.
enable event tracking: bool. Whether event tracking is enabled.

The SDK provides some built-in functions to capture ID, selfie, liveness, and more.
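Since the SDK falls back to its defaults when the settings string is null or does not match the expected type, it can help to validate the JSON before passing it in. A minimal sketch (the helper name normalizeClientSetting is our own, not part of the SDK):

```javascript
// Hypothetical helper: pass the settings string through only if it parses
// as JSON; otherwise return null so the SDK uses its built-in defaults.
function normalizeClientSetting(jsonString) {
  if (typeof jsonString !== "string") return null;
  try {
    JSON.parse(jsonString);
    return jsonString;
  } catch (_) {
    return null;
  }
}
```

The result can then be passed as the first argument to RNTrustVisionRnsdkFramework.initialize.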
const idConfig = {
cardType: cardType,
cardSide: TVConst.CardSide.FRONT,
isEnableSound: false,
isReadBothSide: true,
skipConfirmScreen: true,
isEnablePhotoGalleryPicker: false,
};
Options:
cardType: CardType. Card type.
cardSide: TVConst.CardSide. Card side.
isEnableSound: bool. Whether sound is played.
isReadBothSide: bool. Whether to read both sides of the ID card.
skipConfirmScreen: bool. Whether to skip the confirmation screen.
isEnablePhotoGalleryPicker: bool. Whether the user is allowed to select an ID card image from the phone gallery.

const result = await RNTrustVisionRnsdkFramework.startIdCapturing(config);
If result.frontIdQr.is_required is true, then the result.frontIdQr.images array should be non-empty. Otherwise, clients should be warned to re-capture the ID card photos.
QR images will be uploaded with this API: https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-image
result.frontIdQr.images[i].raw_image_base64
result.frontIdQr.images[i].label
result.frontIdQr.images[i].metadata
The same logic applies to result.backIdQr.
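The re-capture rule above can be sketched as a small helper (needsRecapture is a hypothetical name, not an SDK function):

```javascript
// Returns true when a QR code is required for this card side but no QR
// images were captured, i.e. the user should be asked to re-capture.
function needsRecapture(cardQr) {
  if (!cardQr || cardQr.is_required !== true) return false;
  return !Array.isArray(cardQr.images) || cardQr.images.length === 0;
}
```

Apply it to both result.frontIdQr and result.backIdQr.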
const config = {
cameraOption: TVConst.SelfieCameraMode.FRONT,
isEnableSound: true,
livenessMode: TVConst.LivenessMode.PASSIVE,
skipConfirmScreen: true,
};
Options:
cameraOption: TVConst.SelfieCameraMode. Camera option.
isEnableSound: bool. Whether sound is played.
livenessMode: TVConst.LivenessMode. Liveness mode.
skipConfirmScreen: bool. Whether to skip the confirmation screen.

const selfieCapturingResult =
  await RNTrustVisionRnsdkFramework.startSelfieCapturing(config);
Note: Ignore this section if Frame recording is disabled by client settings.
var frameBatchIdsDictionary = []; // this list of { key, value } pairs will be used for liveness verification
// frameBatchIdsDictionary.push({
// key: <id_returned_from_sdk>,
// value: <id_returned_from_server>
// });
// Listen to the frame batches recorded during the capturing
const framesRecordedSubscription = tvsdkEmitter.addListener(
"TVSDKFrameBatch",
async (frameBatch) => {
console.log("TVSDK - " + "FrameBatch: ", frameBatch);
// upload frame batch to server using this api:
// https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-videoaudioframes
const uploadingResult = await uploadFrameBatch(frameBatch);
frameBatchIdsDictionary.push({
key: frameBatch.batchId,
value: uploadingResult.fileId,
});
}
);
Note: Ignore this section if Frame recording is disabled by client settings.
Only frame batches of the selfie capturing whose ids are contained in selfieCapturingResult.livenessFrameBatchIds are valid for liveness verification.
// Remove all invalid batch ids:
frameBatchIdsDictionary = frameBatchIdsDictionary.filter((entry) =>
  selfieCapturingResult.livenessFrameBatchIds.includes(entry.key)
);
Upload the selfie images with this API: https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-image
id of frontal image i = image id returned for selfieCapturingResult.selfieImages[i].frontal_image.raw_image_base64
id of gesture image i = image id returned for selfieCapturingResult.selfieImages[i].gesture_image.raw_image_base64
Then call https://ekyc.trustingsocial.com/api-reference/customer-api/#verify-face-liveness with these params:
images field, each element contains:
{
"id": "<id of frontal image i>"
}
gesture_images field, each element contains:
{
"gesture": "lower case string of <selfieCapturingResult.selfieImages[i].gesture_type>",
"images": [
{
"id": "<id of gesture image i>"
}
]
}
videos field is the list of frame batch ids returned from the server, i.e. the values of frameBatchIdsDictionary.
Note: Ignore this field if Frame recording is disabled by client settings.
{
"id": "<frameBatchIdsDictionary's values[0]>"
},
{
"id": "<frameBatchIdsDictionary's values[1]>"
}
...
metadata field is selfieCapturingResult.livenessMetadata
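Putting the fields together, the verify-face-liveness request body can be assembled as sketched below. buildLivenessPayload is our own helper name; it assumes the caller has already uploaded each image and paired every selfie with its server-side ids ({ frontalId, gestureId, gestureType } entries), and that batchPairs is the frameBatchIdsDictionary of { key, value } pairs from the previous section:

```javascript
// Sketch: assemble the verify-face-liveness request body from
// already-uploaded image ids and the frame batch id pairs.
function buildLivenessPayload(uploadedSelfies, batchPairs, livenessMetadata) {
  return {
    images: uploadedSelfies.map((s) => ({ id: s.frontalId })),
    gesture_images: uploadedSelfies.map((s) => ({
      gesture: s.gestureType.toLowerCase(), // e.g. "UP" -> "up"
      images: [{ id: s.gestureId }],
    })),
    // server-side ids are the values of frameBatchIdsDictionary
    videos: batchPairs.map((p) => ({ id: p.value })),
    metadata: livenessMetadata,
  };
}
```

Omit the videos field (pass an empty array) when Frame recording is disabled by client settings.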
result:
cardType: CardType. Card type
actionMode: TVConst.ActionMode. Action mode
selfieImages: [SelfieImage]. List of selfie image objects
livenessVideos: [Base64 String]. List of liveness video data in base64
livenessMetadata: json
livenessVideoFramesList: [json]
idFrontImage: ImageClass. ID front image object
idBackImage: ImageClass. ID back image object
frontIdQr: TVCardQr. Front ID card's QR info
backIdQr: TVCardQr. Back ID card's QR info
frontIdCapturingVideoFramesList: json
backIdCapturingVideoFramesList: json
SelfieImage:
gesture_type: String. UP | DOWN | LEFT | RIGHT | FRONTAL
frontal_image: ImageClass. Frontal image object
gesture_image: ImageClass. Gesture image object

ImageClass:
raw_image_base64: String. Base64 string of the image data
label: String. Image label
metadata: json. Image metadata

TVCardQr:
is_required: Bool. Whether this side of the card contains a QR code
images: [ImageClass]. QR images

Error:
code: String. The specific error code
message: String. The human-readable error description that can be shown to the end user
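As a quick illustration of the shapes above, here is how a caller might walk a capturing result; the field names follow this reference, but the helper name and mock data are our own:

```javascript
// Collect all frontal selfie images (base64 strings) from a capturing result,
// tolerating a result with no selfieImages field.
function frontalImagesBase64(result) {
  return (result.selfieImages || []).map(
    (s) => s.frontal_image.raw_image_base64
  );
}
```

The same pattern applies to gesture_image, idFrontImage, and the other ImageClass fields.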