![No UI](https://img.shields.io/static/v1?label=UI&message=none&color=inactive)

The *MS Azure Autotagger* uses the *Computer Vision* API, which is part of the *Microsoft Azure Cognitive Services*, and provides the following functionalities: automated tagging, captions, background/foreground/accent color and object recognition. The service can be triggered via context menu or automatically.

Currently, only tagging in English is fully supported (only some of the content can be received in other languages), but with the [Auto Translator](https://docs.brix.ch/celum_extensions/auto_translator) plugin, tags can be translated automatically. If English is available, tags are added in English and in the default system language (if that is not English), because the default system language is required. If English is not available, only the default system language is set (in that case, tags cannot be translated with the Auto Translator).

**Image requirements** (therefore the preview (default) is recommended):

- max. 4 MB
- at least 50 x 50 pixels
- png, jpg, gif, bmp

**Attention: up to and including v1.1.1, the large preview was the default. In rare cases this can produce images larger than 4 MB, which causes the tagging to fail. An update to v1.1.2+ is recommended.**

[MINITOC]

## Properties

To be configured in `{home}/appserver/conf/custom.properties`.

##### azureComputerVision.license

> type: string, **required: yes**, default: -

The license key for the plugin (product: azureComputerVision), provided by *brix*.

##### azureComputerVision.allowedUserGroupIds

> type: comma-separated list of user group ids, required: no, default: -

The user group ids which are allowed to use this extension (superadmins are always entitled).

##### azureComputerVision.threads

> type: integer, required: no, default: 10

The number of threads in the thread pool that processes the asset changes and initiates the automated tagging; has to be >= 2.
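A minimal `custom.properties` fragment with the required settings might look as follows (the license key, Computer Vision key and user group ids are placeholders, not real values):

```properties
# Minimal configuration for the MS Azure Autotagger (placeholder values)
azureComputerVision.license=XXXX-XXXX-XXXX-XXXX
azureComputerVision.url=https://westeurope.api.cognitive.microsoft.com/
azureComputerVision.key=0123456789abcdef0123456789abcdef

# Optional: restrict the extension to specific user groups
azureComputerVision.allowedUserGroupIds=102,205
azureComputerVision.threads=10
```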
##### azureComputerVision.debounceTimeInSeconds

> type: integer, required: no, default: 10

The debounce time in seconds. E.g. 10 means that all changes that happen on an asset within 10 seconds are collected and then processed at once. This has the advantage that if 10 relevant changes happen one after another within that interval, Cognitive Services is still only called once and not 10 times.

##### azureComputerVision.downloadFormatId

> type: integer, required: no, default: large preview (recommended)

Specify a download format id here. If not set, the large preview is used for the service.

##### azureComputerVision.url

> type: url, **required: yes**, default: https://westeurope.api.cognitive.microsoft.com/

API URL for the Computer Vision service, e.g. `https://{resource}.cognitiveservices.azure.com/` or the default.

##### azureComputerVision.key

> type: string, **required: yes**, default: -

The Computer Vision key. See [here](https://docs.brix.ch/celum_extensions/azure_computer_vision#installation).

##### azureComputerVision.region

> type: string, required: no, default: -, since v1.1.1

The Computer Vision region; appears to be optional.

##### azureComputerVision.tagsInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the tags. Required to be able to use this feature.

##### azureComputerVision.tagsThreshold

> type: double, required: no, default: 0

Only tags with a certainty (a number between 0 and 1) >= the threshold will be added (this only works for the normal tags, not for the tags sent with the image description).

##### azureComputerVision.tagsFromDescriptionInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the tags from the description. Required to be able to use this feature.

##### azureComputerVision.captionInfoFieldIds

> type: list of integers, required: no, default: -

The ids of the (localized) text (area) information fields for the caption.
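The debounce behaviour can be illustrated with a small sketch (this is not the plugin's actual implementation; class and method names are invented for illustration):

```python
import time


class Debouncer:
    """Collect asset-change events and release each asset's batch only once
    the debounce window has elapsed, so repeated changes trigger one call."""

    def __init__(self, window_seconds, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self.pending = {}  # asset_id -> (time of first event, [events])

    def record(self, asset_id, event):
        # First event for an asset starts its debounce window.
        _, events = self.pending.setdefault(asset_id, (self.clock(), []))
        events.append(event)

    def flush_due(self):
        """Return the batches whose window has elapsed; each asset yields
        exactly one batch, however many changes were recorded."""
        now = self.clock()
        due = {a: ev for a, (t0, ev) in self.pending.items()
               if now - t0 >= self.window}
        for asset_id in due:
            del self.pending[asset_id]
        return due
```

With `azureComputerVision.debounceTimeInSeconds=10`, ten changes recorded within the window produce a single batch, i.e. a single request to Cognitive Services.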
All types of text information fields are supported. Required to be able to use this feature.

##### azureComputerVision.backgroundColorInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the dominant background color. Required to be able to use this feature.

##### azureComputerVision.foregroundColorInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the dominant foreground color. Required to be able to use this feature.

##### azureComputerVision.dominantColorsInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the dominant colors. Required to be able to use this feature.

##### azureComputerVision.accentColorInfoFieldId

> type: integer, required: no, default: -

The id of the text (area) information field for the accent color (web format). All types of text information fields are supported. Required to be able to use this feature.

##### azureComputerVision.blackAndWhiteInfoFieldId

> type: integer, required: no, default: -

The id of the checkbox information field for the black and white property. Required to be able to use this feature. Since v1.0.1, all the information from checkboxes can be collected in a node referencing field. The syntax is `:`. So instead of setting the checkbox, the specified node is added to or removed from the node referencing field.

##### azureComputerVision.adultContentInfoFieldId

> type: integer, required: no, default: -

The id of the checkbox information field for the adult content property. Required to be able to use this feature. Since v1.0.1, all the information from checkboxes can be collected in a node referencing field. The syntax is `:`. So instead of setting the checkbox, the specified node is added to or removed from the node referencing field.
##### azureComputerVision.adultScoreInfoFieldId

> type: integer, required: no, default: -

The id of the number (score shown as percentage 0 to 100) or double (actual score between 0 and 1) information field for the adult score. Required to be able to use this feature.

##### azureComputerVision.goryInfoFieldId

> type: integer, required: no, default: -

The id of the checkbox information field for the gory property. Required to be able to use this feature. Since v1.0.1, all the information from checkboxes can be collected in a node referencing field. The syntax is `:`. So instead of setting the checkbox, the specified node is added to or removed from the node referencing field.

##### azureComputerVision.goryScoreInfoFieldId

> type: integer, required: no, default: -

The id of the number (score shown as percentage 0 to 100) or double (actual score between 0 and 1) information field for the gory score. Required to be able to use this feature.

##### azureComputerVision.racyInfoFieldId

> type: integer, required: no, default: -

The id of the checkbox information field for the racy property. Required to be able to use this feature. Since v1.0.1, all the information from checkboxes can be collected in a node referencing field. The syntax is `:`. So instead of setting the checkbox, the specified node is added to or removed from the node referencing field.

##### azureComputerVision.racyScoreInfoFieldId

> type: integer, required: no, default: -

The id of the number (score shown as percentage 0 to 100) or double (actual score between 0 and 1) information field for the racy score. Required to be able to use this feature.

##### azureComputerVision.categoriesInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the categories. Required to be able to use this feature.

##### azureComputerVision.categoriesThreshold

> type: double, required: no, default: 0

A value between 0 and 1. Only categories above this threshold are accepted.
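How the thresholds and the two score field types relate can be sketched as follows (the response shape is a simplified assumption modelled on Computer Vision tag results, i.e. a name plus a confidence between 0 and 1):

```python
# Assumed, simplified shape of a tagging result: name + confidence 0..1.
tags = [
    {"name": "outdoor", "confidence": 0.99},
    {"name": "tree", "confidence": 0.72},
    {"name": "blurry", "confidence": 0.31},
]

# azureComputerVision.tagsThreshold: only tags with confidence >= the
# threshold are added (here 0.5 instead of the default 0).
threshold = 0.5
accepted = [t["name"] for t in tags if t["confidence"] >= threshold]


def as_percentage(score):
    """Number information field: a raw score 0..1 is shown as an integer
    percentage 0..100; a double field would store the raw score instead."""
    return round(score * 100)
```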
##### azureComputerVision.brandsInfoFieldId

> type: integer, required: no, default: -

The id of the node reference information field for the brands. Required to be able to use this feature.

##### azureComputerVision.objectsInfoFieldId

> type: integer, required: no, default: -

The id of the text area information field for the objects (JSON). Required to be able to use this feature.

##### azureComputerVision.facesInfoFieldId

> type: integer, required: no, default: -

The id of the text area information field for the faces (JSON). Required to be able to use this feature.

##### azureComputerVision.contextMenu

> type: boolean, required: no, default: true

Whether the context menu should be available (for the allowed user groups) or not. Restart required.

##### azureComputerVision.automate

> type: boolean, required: no, default: false

Whether images should be tagged automatically or not. Restart required.

##### azureComputerVision.search

> type: string, required: no, default: fileCategory=image

[Search expression](https://docs.brix.ch/celum_extensions/search_util_2); only images within the scope are tagged (for automation and the initialize task). Older versions (< v1.1) use [the old search util](https://docs.brix.ch/celum_extensions/search_util).

##### azureComputerVision.search.userId

> type: integer, required: no, default: api-user

User id of the user to perform the search with (only assets visible to this user will be found).

##### azureComputerVision.tags.partition.threshold

> type: integer, required: no, default: 100

Nodes are partitioned if the number of children in a node (which are not intermediate nodes) becomes greater than the threshold. That way, the GUI doesn't freeze if you try to open the tag tree. A number <= 0 prevents the partitioning.
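To enable automatic tagging, the relevant settings could be combined like this (the information field id and user id are hypothetical example values):

```properties
# Tag images automatically (restart required after changing these)
azureComputerVision.automate=true
azureComputerVision.search=fileCategory=image
azureComputerVision.search.userId=42

# Write tags into a node reference field and keep the tag tree manageable
azureComputerVision.tagsInfoFieldId=301
azureComputerVision.tagsThreshold=0.5
azureComputerVision.tags.partition.threshold=100
```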
##### azureComputerVision.tags.partition.locale

> type: string, **required: yes**, default: default locale, since v1.1.1

Set this to `en` if you have English and use a forced translation from English to other languages, but English is not the default language.

## Installation

1. Get a license from brix
2. Create a subscription for Microsoft Azure Cognitive Services > Computer Vision (see [here](https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision))
3. Select Computer Vision and go to Keys and Endpoints to get the API key
4. Create the information fields below
5. Add the information and configuration to the custom.properties file, put the jar file into the lib folder and restart the CELUM app server

The information fields below have to be added to all the asset types which should be available for the *Azure Computer Vision* extension. Each information field corresponds to one of the functionalities; it is possible to add only some of them and ignore the ones which are of no importance. The fields can be added to an existing or a new fieldset. Tags, i.e. nodes, will be created inside the root node of the corresponding node referencing information field.

## Compatibility Matrix

| MS Azure Autotagger | CELUM (min. version) |
| :----- | :----- |
| 1.0.0 | 5.13.4 (tested with 6.4) |
| 1.0.1 | 5.13.4 (tested with 6.8) |
| 1.1.0 | 6.4.0 (tested with 6.8) |

## Release Notes

##### 1.0.0

> Released 2020-10-23

Initial version

##### 1.0.1

> Released 2020-10-27

Added possibility to collect boolean fields (checkboxes) in a single node referencing field

##### 1.1.0

> Released 2021-05-27

Switched to Search Util 2