Add to the package.json file under the dependencies group:
"react-native-trust-vision-SDK": "git+https://github.com/tsocial/react-native-trust-vision-SDK#<version_tag_name>"
$ yarn
...
Add to the Podfile (iOS host app):
pod 'RNTrustVisionRnsdkFramework', path: '../node_modules/react-native-trust-vision-SDK'
$ pod install
Add to root-level build.gradle file (host app):
maven {
  url("$rootDir/../node_modules/react-native-trust-vision-SDK/android/repo")
}
e.g.:
allprojects {
  repositories {
    mavenLocal()
    ...
    maven {
      url("$rootDir/../node_modules/react-native-trust-vision-SDK/android/repo")
    }
    google()
    jcenter()
    maven { url 'https://jitpack.io' }
  }
}
Add to app/build.gradle:
android {
  ...
  aaptOptions {
    noCompress "tflite"
    noCompress "lite"
  }
}
import { NativeEventEmitter } from "react-native";
import RNTrustVisionRnsdkFramework, {
  TVConst,
  TVErrorCode,
} from "react-native-trust-vision-SDK";
Full steps:
try {
  await RNTrustVisionRnsdkFramework.initialize(
    clientSettingJsonString,
    "vi",
    true
  );

  const tvsdkEmitter = new NativeEventEmitter(RNTrustVisionRnsdkFramework);
  const subscription = tvsdkEmitter.addListener("TVSDKEvent", (event) => {
    console.log("TVSDK - " + event.name + " - " + event.params.page_name);
  });

  const cardType = {
    id: "card_id",
    name: "card_name",
    orientation: TVConst.Orientation.LANDSCAPE,
    hasBackSide: true,
  };
  const idConfig = {
    cardType: cardType,
    isEnableSound: false,
    isReadBothSide: true,
    cardSide: TVConst.CardSide.FRONT,
  };
  console.log("Id Config", idConfig);

  const idResult = await RNTrustVisionRnsdkFramework.startIdCapturing(idConfig);
  console.log("Id Result", idResult);
} catch (e) {
  console.log("Error: ", e.code, " - ", e.message);
}
The SDK needs to be initialized first:
await RNTrustVisionRnsdkFramework.initialize(
  clientSettingJsonString,
  "vi", // language code
  true // enable event or not
);
Options:
- string. The jsonConfigurationByServer is optional but recommended. It is the setting specialized for each client from the TS server: the JSON response string obtained from the API https://ekyc.trustingsocial.com/api-reference/customer-api/#get-client-settings. When it is null or does not match the expected type, the default settings in the SDK will be used.
- string. Language code: vi or en.
- bool. Enable event tracking or not.

The SDK provides some built-in functions to capture ID, selfie, liveness, etc.
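Since the SDK falls back to its built-in defaults when the settings string is null or does not match the expected type, one option is to pre-validate the server response before passing it to initialize. A minimal sketch; the helper name `pickClientSettings` is hypothetical, not part of the SDK:

```javascript
// Hypothetical helper: returns the settings string if it parses as a JSON
// object, otherwise null so the SDK falls back to its default settings.
function pickClientSettings(jsonString) {
  try {
    const parsed = JSON.parse(jsonString);
    return parsed && typeof parsed === "object" ? jsonString : null;
  } catch (e) {
    return null;
  }
}
```

It could then be used as `await RNTrustVisionRnsdkFramework.initialize(pickClientSettings(response), "vi", true);`.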
const idConfig = {
  cardType: cardType,
  cardSide: TVConst.CardSide.FRONT,
  isEnableSound: false,
  isReadBothSide: true,
  skipConfirmScreen: true,
  isEnablePhotoGalleryPicker: false,
};
Options:
- CardType. Card type.
- TVConst.CardSide. Card side.
- bool. Sound is played or not.
- bool. Read both sides of the ID card or not.
- bool. Skip the confirmation screen or not.
- bool. Allow the user to select an ID card image from the phone gallery.

const result = await RNTrustVisionRnsdkFramework.startIdCapturing(config);
If result.frontIdQr.is_required
is true, then the result.frontIdQr.images
array should be non-empty. Otherwise, clients should be warned to re-capture the ID card photos.
QR images will be uploaded with this API: https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-image
result.frontIdQr.images[i].raw_image_base64
result.frontIdQr.images[i].label
result.frontIdQr.images[i].metadata
*The same logic applies to result.backIdQr
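The QR check and upload payload collection above can be sketched as a small helper. This is a sketch under assumptions: the helper name `qrImagesToUpload` is hypothetical, and the payload shape simply mirrors the `raw_image_base64` / `label` / `metadata` fields listed above (the actual upload-image request format is defined by the API documentation).

```javascript
// Hypothetical helper: gathers the QR images to upload for one card side
// (result.frontIdQr or result.backIdQr).
// Returns [] when no QR is required for this side, and null when a required
// QR is missing, so the caller can warn the user to re-capture the card.
function qrImagesToUpload(cardQr) {
  if (!cardQr || !cardQr.is_required) return [];
  if (!cardQr.images || cardQr.images.length === 0) return null;
  return cardQr.images.map((img) => ({
    raw_image_base64: img.raw_image_base64,
    label: img.label,
    metadata: img.metadata,
  }));
}
```

A caller would invoke it once per side, e.g. `qrImagesToUpload(result.frontIdQr)` and `qrImagesToUpload(result.backIdQr)`, and prompt for re-capture whenever it returns null.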
const config = {
  cameraOption: TVConst.SelfieCameraMode.FRONT,
  isEnableSound: true,
  livenessMode: TVConst.LivenessMode.PASSIVE,
  skipConfirmScreen: true,
};
Options:
- TVConst.SelfieCameraMode. Camera option.
- bool. Sound is played or not.
- TVConst.LivenessMode. Liveness mode.
- bool. Skip the confirmation screen or not.

const result = await RNTrustVisionRnsdkFramework.startSelfieCapturing(config);
Upload the selfie images with https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-image:
- id of frontal image i = image id of result.selfieImages[i].frontal_image.raw_image_base64
- id of gesture image i = image id of result.selfieImages[i].gesture_image.raw_image_base64

Upload the liveness videos with https://ekyc.trustingsocial.com/api-reference/customer-api/#upload-videoaudioframes:
- id of selfie video i = video id of result.livenessVideos[i]
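Before calling the upload APIs, the base64 payloads can be pulled out of the capturing result in the order described above. A sketch; the helper name `selfieUploadPayloads` and the `{ kind, data }` wrapper are assumptions for illustration, not part of the SDK:

```javascript
// Hypothetical helper: lists the base64 payloads from a selfie capturing
// result in upload order: frontal and gesture image per selfie entry,
// then each liveness video.
function selfieUploadPayloads(result) {
  const images = [];
  (result.selfieImages || []).forEach((s) => {
    images.push({ kind: "frontal", data: s.frontal_image.raw_image_base64 });
    if (s.gesture_image) {
      images.push({ kind: "gesture", data: s.gesture_image.raw_image_base64 });
    }
  });
  const videos = (result.livenessVideos || []).map((v) => ({
    kind: "video",
    data: v, // liveness videos are already base64 strings
  }));
  return { images, videos };
}
```

Each payload would then be posted to the corresponding upload API, keeping the returned ids aligned with the indices above.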
Call https://ekyc.trustingsocial.com/api-reference/customer-api/#verify-face-liveness with params:
- images field, each element contains:
{
  "id": "<id of frontal image i>"
}
- gesture_images field, each element contains:
{
  "gesture": "lower case string of <result.selfieImages[i].gesture_type>",
  "images": [
    {
      "id": "<id of gesture image i>"
    }
  ]
}
- videos field is a list which comprises 2 lists:
{
  "id": "<id of selfie video i>" (step 2)
}
and result.livenessVideoFramesList
- metadata field is result.livenessMetadata
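Assembling those params can be sketched as follows. The helper name `buildLivenessRequest` is hypothetical; `frontalImageIds[i]`, `gestureImageIds[i]`, and `videoIds[i]` are assumed to be the server ids returned when uploading the i-th frontal image, gesture image, and liveness video respectively.

```javascript
// Hypothetical helper: builds the verify-face-liveness request body from the
// capturing result plus the ids returned by the upload APIs.
function buildLivenessRequest(result, frontalImageIds, gestureImageIds, videoIds) {
  return {
    // images: one element per uploaded frontal image
    images: frontalImageIds.map((id) => ({ id })),
    // gesture_images: gesture name (lower case) plus the uploaded gesture image id
    gesture_images: (result.selfieImages || []).map((s, i) => ({
      gesture: String(s.gesture_type).toLowerCase(),
      images: [{ id: gestureImageIds[i] }],
    })),
    // videos: a list of 2 lists - uploaded video ids, then the frames list
    videos: [videoIds.map((id) => ({ id })), result.livenessVideoFramesList],
    metadata: result.livenessMetadata,
  };
}
```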
result:
- cardType: CardType. Card type
- actionMode: TVConst.ActionMode. Action mode
- selfieImages: [SelfieImage]. List of selfie image objects
- livenessVideos: [Base64 String]. List of liveness videos data in base64
- livenessMetadata: json
- livenessVideoFramesList: [json]
- idFrontImage: ImageClass. ID front image object
- idBackImage: ImageClass. ID back image object
- frontIdQr: TVCardQr. Front ID card's QR info
- backIdQr: TVCardQr. Back ID card's QR info
- frontIdCapturingVideoFramesList: json
- backIdCapturingVideoFramesList: json
SelfieImage:
- gesture_type: String. UP | DOWN | LEFT | RIGHT | FRONTAL
- frontal_image: ImageClass. Frontal image object
- gesture_image: ImageClass. Gesture image object

ImageClass:
- raw_image_base64: String. Base64 string of image data
- label: String. Image label
- metadata: json. Image metadata

TVCardQr:
- is_required: Bool. This side of the card contains a QR or not
- images: [ImageClass]. QR images

Error:
- code: String. The specific error code
- message: String. The human-readable error description that can be shown to the end user