MS Azure Face Recognition

Available for the Advanced UI and the Nova UI.

The MS Azure Face Recognition plugin uses the Face API, which is part of Microsoft Azure Cognitive Services. Faces have to be detected first, then they have to be learned (persisted and added to the large person group), and finally the large person group has to be trained before faces can be identified.

Detect Faces

A face detection is automatically performed in the following situations:

  • Tagging is executed manually, either through the context menu or the button on the detail view.
  • Tagging is executed automatically either by the tagging subscription or by executing the (re-)tag task.
  • Tagging is executed by the initialize task.

Except for the initialize task, a face identification is executed immediately afterwards.

Detected faces are only stored temporarily for 24 hours. The face rectangles are still available after this time, but if a face should be persisted, this has to happen within the 24 hours; otherwise the face detection has to be triggered again.

Learn Faces

Faces can either be learned manually via the detail view or by the initialize task. Only learned faces can be identified.

The initialize task looks for taggable assets within the defined scope, and if there is only one face on the image, it will be learned automatically. Already learned faces are ignored.

The first learned face of a person automatically becomes the reference face for that person. It is possible to change the reference face to another learned (persisted) face later. The reference face is useful if there is a person asset type holding all the important metadata for a person: from the detail view you can go directly to the reference asset of an identified face to learn more about the person.

Train Learned Faces

There is a system task to train the learned faces. After the training they will be available for identification. The system task can either be executed manually or automatically (cron job, only executed if there are changes). The initialize task automatically performs a training at the end.

Identify Persons

Persons are identified automatically whenever a tagging operation is performed.

Deletion of Persons or the Person Group

To delete the large person group and all persons, the root node for the persons has to be deleted.

To delete a person, the node for the person has to be deleted.

What Happens on Asset Deletion

All faces learned from the deleted asset are removed from the corresponding persons. If it was the last remaining learned face of a person, that person is deleted as well.

If the asset is the reference asset for a person and there are other learned faces, a new reference asset is set automatically. The face with the largest circumference will be chosen.

What Happens if a New Version Is Created

If the old version isn't the reference asset for a person, the procedure is the same as for asset deletion.

If the old version is the reference asset for one or more persons, the tagging of the new version is triggered automatically. If a person for which the old version was the reference asset can be identified on the new version, the asset stays the reference asset for that person and the old face is replaced by the new one. If the person cannot be found on the new version, the procedure is the same as for asset deletion.

Components

MS Azure Face Recognition Menu

Azure Face Recognition: Details (1)

This menu is used to open the detail view for an asset. It is visible if all the following conditions are met:

  • The menu is not hidden (azureFaceRec.menu.details.hide=false)
  • Only a single asset is selected
  • The asset type of the asset is taggable (has the required information fields)
  • The user is either superadmin or member of one of the groups specified under azureFaceRec.menu.details.allowedUserGroupIds or azureFaceRec.menu.admin.allowedUserGroupIds

Azure Face Recognition: Tag (2)

This menu is used to tag assets (without opening the detail view). It can be used to tag several assets at once, either by selecting several assets or by opening the context menu on nodes (all assets linked to the node or one of its children will be tagged). Assets that are not taggable (i.e. that lack the required information fields) are ignored. This menu is visible if:

  • The menu is not hidden (azureFaceRec.menu.tag.hide=false)
  • Only nodes or only assets are selected
  • The user is either superadmin or member of one of the groups specified under azureFaceRec.menu.tag.allowedUserGroupIds

A summary shows how many assets were updated, ignored or failed.
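
As an illustration, the menu-related properties could be combined as follows; the group ids are placeholders, not values from this documentation:

    # Show both context menus (false is the default)
    azureFaceRec.menu.details.hide=false
    azureFaceRec.menu.tag.hide=false
    # Placeholder group ids: group 100 may open the detail view,
    # group 200 may use the tag menu (superadmins are always allowed)
    azureFaceRec.menu.details.allowedUserGroupIds=100
    azureFaceRec.menu.tag.allowedUserGroupIds=200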

Subscriptions

The functionalities described in What Happens on Asset Deletion and What Happens if a New Version Is Created cannot be turned off and are always active.

Additionally an asset is tagged automatically if the following conditions are met:

  • Subscriptions are enabled (azureFaceRec.autoTagging.enableSubscription=true)
  • A scope for automated tagging is defined (azureFaceRec.autoTagging.scope=...)
  • The asset is within the scope, i.e. a search executed with the specified user (azureFaceRec.autoTagging.userId) will find the asset
  • The asset was changed, and there is either no previous detection or the previous detection is outdated (e.g. it was done on an older version)

If assets should be tagged automatically, it is recommended to turn on subscriptions, run the (Re-)Tag Assets task once (to tag all assets in the scope; otherwise they would only be tagged when they change) and then disable the (Re-)Tag Assets task again (because all relevant changes are now detected by the subscriptions).
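
A minimal sketch of such a subscription-based setup, with a placeholder user id and the scope left as a placeholder (see Search Util 2 for the actual scope syntax):

    # Tag changed assets automatically via subscriptions
    azureFaceRec.autoTagging.enableSubscription=true
    # Placeholder scope and user id
    azureFaceRec.autoTagging.scope=<search expression, see Search Util 2>
    azureFaceRec.autoTagging.userId=42
    # Enable the (re-)tag task only for the initial run, then set it back to false
    azureFaceRec.autoTagging.enableTask=false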

System Tasks

MS Azure Face Recognition System Tasks

Initialize Persons (1)

The initialize persons task is used to learn persons initially. It looks for all taggable assets in its scope and only considers images with exactly one face. If the person doesn't exist yet, the task creates the person, learns the face and makes it the reference asset for the person. If the person already exists but the face wasn't learned yet, the face is learned without becoming the reference asset. Faces that are already learned are ignored. Whether a person already exists is evaluated by name (case-sensitive).

It is especially useful if there is a node with person assets (a large picture of the person's face along with the name and optionally other data about them). That way, those persons can be learned automatically, and if the task is run regularly, the trained persons are updated automatically.

(Re-)Tag Assets (2)

This task tags all assets in its scope (again). This can be useful to initialize the automation if subscriptions are turned on, or to re-tag assets if new persons were trained that couldn't be identified before.

Train Person Group (3)

The training task has to be executed before persons can be identified. It is run automatically after the initialize persons task. The task only performs a training if there are changes, so it can be run quite often or automated with a cron job.

Detail View

In the detail view the whole image can be (re-)tagged, which is necessary to tag or learn a face. It is recommended to learn an unknown face right after tagging, otherwise the information is lost on the next re-tagging. Clicking the info button shows a summary.

Administrative actions such as tagging, learning a face, or making a face the reference for a person are only available to users in one of the groups specified under azureFaceRec.menu.admin.allowedUserGroupIds.

MS Azure Face Recognition Details 1

MS Azure Face Recognition Details 2

MS Azure Face Recognition Details 3

Properties

To be configured in {home}/appserver/conf/custom.properties

azureFaceRec.license

type: string, required: yes, default: -

The license key for the plugin (product: azureFaceRec), provided by brix.

azureFaceRec.menu.tag.allowedUserGroupIds

type: comma-separated list of user group ids, required: no, default: super-admins only

Usergroups allowed to use the tag context menu.

azureFaceRec.menu.details.allowedUserGroupIds

type: comma-separated list of user group ids, required: no, default: super-admins only

Usergroups allowed to open the detail view via context menu.

azureFaceRec.menu.admin.allowedUserGroupIds

type: comma-separated list of user group ids, required: no, default: super-admins only

Usergroups allowed to do admin actions in the detail view. All usergroups with this permission are allowed to open the detail view and don't have to be listed twice.

Admin actions:

  • Tag face
  • Learn face
  • Make face reference for person

azureFaceRec.menu.tag.hide

type: boolean, required: no, default: false

Hide the tag context menu.

azureFaceRec.menu.details.hide

type: boolean, required: no, default: false

Hide the detail view context menu.

azureFaceRec.threads

type: integer, required: no, default: 10

The number of threads in the thread pool used for the processing of asset and node changes, has to be >= 2.

azureFaceRec.debounceTimeInSeconds

type: integer, required: no, default: 10

The debounce time in seconds. E.g. 10 means that for 10s all changes that happen on an asset are collected and then processed at once. This has the advantage that if 10 relevant changes happen one after another within an interval of 10s, the Face API is still only asked once and not 10 times.

azureFaceRec.waitSecondsBeforeUpdate

type: integer, required: no, default: 120

Seconds to wait before updating after a version change (time needed to generate the preview).

azureFaceRec.maxNodesPerLevel

type: integer, required: no, default: 100

The maximum number of data nodes per level (intermediate nodes are not counted). Has to be between 50 and 500.
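
Putting the processing-related properties together, an illustrative configuration within the documented limits could look like this (the values shown are simply the defaults):

    # Thread pool for processing asset and node changes (has to be >= 2)
    azureFaceRec.threads=10
    # Collect changes for 10 seconds and process them at once
    azureFaceRec.debounceTimeInSeconds=10
    # Wait two minutes after a version change so the preview can be generated
    azureFaceRec.waitSecondsBeforeUpdate=120
    # Between 50 and 500 data nodes per level
    azureFaceRec.maxNodesPerLevel=100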

azureFaceRec.url

type: string, required: yes, default: https://westeurope.api.cognitive.microsoft.com/face/v1.0/

The URL to the custom Face API endpoint.

azureFaceRec.key

type: string, required: yes, default: -

The custom security key for the Face API.

azureFaceRec.detectionModel

type: string, required: no, default: detection_02 (recommended)

The detection model to use.

azureFaceRec.recognitionModel

type: string, required: no, default: recognition_03 (recommended)

The recognition model to use.

azureFaceRec.personGroupId

type: string, required: yes, default: celum

The name (id) to use for the large person group. Has to be unique and must not exist yet. The name has to match the following regex: ^[a-z0-9_-]+$.
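
For example, any lowercase id matching the regex works; the value below is only a placeholder:

    # Must match ^[a-z0-9_-]+$ and must not exist in the Face API resource yet
    azureFaceRec.personGroupId=celum_faces-prod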

azureFaceRec.initializePersons.cronExpression

type: string, required: no, default: -

The cron expression to use for the initialize persons task.
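
A sketch of a possible schedule, assuming a Quartz-style cron expression with a leading seconds field (the exact syntax depends on the CELUM scheduler):

    # Run the initialize persons task every night at 02:30
    azureFaceRec.initializePersons.cronExpression=0 30 2 * * ?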

azureFaceRec.initializePersons.scope

type: string, required: no, default: -

The scope for the initialize task (see Search Util 2). Required to run the task.

azureFaceRec.initializePersons.userId

type: integer, required: no, default: api-user

The user id used to perform the search (scope) defined for the initialize task. That way, assets or nodes can be excluded with permission settings.

azureFaceRec.initializePersons.name

type: string, required: no, default: $name

The person's name. The name has to be unique: the same person should always get the same name, and no two different persons should share a name. The name can be generated from the metadata using the following placeholders (see the example after the list):

  • $name: the asset's name
  • $<number> (e.g. $123): the value of the information field with the specified number, has to be a text, text area or number field.
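
For instance, assuming a hypothetical text information field with id 123 that holds the person's full name, the property could be set like this:

    # Placeholder field id 123 holds the person's full name
    azureFaceRec.initializePersons.name=$123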

azureFaceRec.initializePersons.enable

type: boolean, required: no, default: false

Enable the initialize persons task.

azureFaceRec.autoTagging.cronExpression

type: string, required: no, default: -

The cron expression for the (re-)tagging task. It is not recommended to execute this task regularly. It is better to execute this task once and to enable subscriptions.

azureFaceRec.autoTagging.scope

type: string, required: no, default: -

The scope of the (re-)tagging task and its subscriptions. See Search Util 2. Required to run the (re-)tagging task.

azureFaceRec.autoTagging.userId

type: integer, required: no, default: api-user

The user with which the search (scope) is performed for the (re-)tagging task and its subscriptions. That way, assets can be excluded with permission settings.

azureFaceRec.autoTagging.enableSubscription

type: boolean, required: no, default: false

Enable the subscriptions for automatic tagging. If subscriptions are enabled and an asset within the scope is changed, it is tagged automatically.

azureFaceRec.autoTagging.enableTask

type: boolean, required: no, default: false

Enable the (re-)tagging task.

azureFaceRec.training.cronExpression

type: string, required: no, default: -

The cron expression for the training task.
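
Since the task only trains when there are changes, it can be scheduled frequently; for example (again assuming Quartz-style cron syntax):

    # Check for pending training every 15 minutes
    azureFaceRec.training.cronExpression=0 0/15 * * * ?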

azureFaceRec.personNodeId

type: integer, required: yes, default: -

The node id of the node where persons are saved. It is also the root node for the personTags node referencing information field.

azureFaceRec.personNodeData

type: integer, required: yes, default: -

The text area information field id to store the node data.

azureFaceRec.personTags

type: integer, required: yes, default: -

The node referencing information field id to store the tags (the persons on the image).

azureFaceRec.personTagData

type: integer, required: yes, default: -

The text area information field id to store the detection/identification data on the asset.

Installation

  1. Get a license from brix

  2. Create a subscription for Microsoft Azure Face Recognition (see the Microsoft Azure documentation)

  3. Select Face and go to Keys and Endpoints to get the API key

  4. Create the required information fields:

    • Azure Face Recognition Data (text area field): Required for taggable asset types and keyword nodes

    • Azure Face Recognition Persons (node referencing field): Required for taggable asset types

  5. Add the configuration to the custom.properties file (see the example below), put the jar file into the lib folder and restart the CELUM app server
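
A minimal custom.properties sketch for step 5; the license key, API key and all ids are placeholders that have to be replaced with values from your environment:

    # License and Face API access (placeholders)
    azureFaceRec.license=<license key provided by brix>
    azureFaceRec.url=https://westeurope.api.cognitive.microsoft.com/face/v1.0/
    azureFaceRec.key=<Face API key from the Azure portal>
    # Large person group id (must match ^[a-z0-9_-]+$)
    azureFaceRec.personGroupId=celum
    # Placeholder node and information field ids
    azureFaceRec.personNodeId=1000
    azureFaceRec.personNodeData=2001
    azureFaceRec.personTags=2002
    azureFaceRec.personTagData=2003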

Warning
The training fails if there is no person in the person group. So face identification can only be used after at least one person has been learned (through the initialize task or manually).

Compatibility Matrix

MS Azure Face Recognition    CELUM (min. version)
1.0.0                        5.13.4 (tested with 6.8)

Nova Plugin    CELUM (min. version)    Backend Plugin (min. version)
1.0.0          6.10.0                  1.1.0

Release Notes

1.0.0

Released 2021-03-08

Initial version

1.1.0

Released 2021-10-11

Nova plugin support

1.2.0

Released 2021-12-02

  • Menu bug fix
  • Use preview instead of large preview