A webinar and demo session on AWS Machine Learning services was held on 19th April 2020 from 8:30 pm to 10:30 pm NPT. The session introduced participants to Amazon Rekognition, a service that lets developers working with Amazon Web Services add image analysis to their applications. With Amazon Rekognition, users can build apps that detect and recognize objects, scenes, and faces in images. It powers innovative solutions such as personal photo applications and secondary authentication for mobile devices, and it can estimate the age ranges, gender distribution, and emotions expressed by the people in an image, without identifying them.

1. Amazon Rekognition

The session covered the following features:

Object and scene detection
Image moderation
Facial analysis
Celebrity recognition
Face comparison
Text in image
Object and scene detection: This part of the session covered detecting labels in images and videos with Amazon Rekognition Image and Amazon Rekognition Video. Amazon Rekognition automatically labels objects, concepts, and scenes in your images and provides a confidence score for each. For instance, a sample advertisement image containing various objects was analyzed, and the results are shown in the picture below.

Image moderation: Image moderation plays a vital role in identifying images that contain suggestive or explicit content that may not be appropriate for your site. Rekognition automatically detects explicit or suggestive adult content, or violent content, in your images and provides confidence scores. The moderation labels include detailed sub-categories, allowing you to fine-tune the filters that determine which kinds of images you deem acceptable or objectionable. This feature can be used to improve photo-sharing sites, forums, dating apps, content platforms for children, e-commerce platforms and marketplaces, and more.

Facial analysis: Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases. A sample photo was analyzed to illustrate the facial analysis feature, and the outcome below was obtained. Not only did this feature estimate the age group of the person in the image but also their gender and emotions, which was an intriguing finding.

Celebrity recognition: This feature automatically recognizes celebrities in images and provides confidence scores. For instance, a sample image of a celebrity was run through the celebrity recognition engine, and the following result was obtained. The image was of the famous Nepali celebrity Dayahang Rai, who was detected correctly with 100% confidence.
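As a rough sketch of how the label detection demonstrated above might look in code: the response shape follows Amazon Rekognition's documented DetectLabels output, while the bucket name, image key, and confidence threshold here are illustrative assumptions. The helper filters the labels by confidence, much like the console demo's confidence scores.

```python
# Sketch: filtering an Amazon Rekognition DetectLabels-style response.
# In a real application the response would come from the boto3 SDK, e.g.:
#   import boto3
#   client = boto3.client("rekognition")
#   response = client.detect_labels(
#       Image={"S3Object": {"Bucket": "my-bucket", "Name": "ad.jpg"}},  # hypothetical
#       MaxLabels=10,
#       MinConfidence=70,
#   )

def labels_above(response, min_confidence=90.0):
    """Return (name, confidence) pairs for labels at or above the threshold."""
    return [
        (label["Name"], label["Confidence"])
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

# Hand-written sample response in the documented shape (values illustrative).
sample = {
    "Labels": [
        {"Name": "Person", "Confidence": 99.2},
        {"Name": "Car", "Confidence": 97.8},
        {"Name": "Billboard", "Confidence": 88.5},
    ]
}

print(labels_above(sample))  # keeps Person and Car, drops Billboard
```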
Face comparison: This service compares faces to see how closely they match, based on a similarity percentage. A demonstration was done by comparing two different pictures of the same person taken many years apart, and the result below was obtained. The face comparison feature is very helpful when we cannot tell whether old pictures show the same person or not.

Text in image: Amazon Rekognition text detection can detect text in images and videos and convert the detected text into machine-readable text. For the demonstration, a sample image containing a lot of text was analyzed, and the result below was obtained. The blue boxes represent information about the detected text and its location. To be detected, the text must be within +/- 90 degrees orientation of the horizontal axis.

2. Amazon Comprehend

a. Real-time analysis: Amazon Comprehend enables real-time analysis using either built-in or custom models. With built-in models, you can recognize entities, extract key phrases, detect dominant languages, analyze syntax, or determine sentiment. With custom models, you create an endpoint to classify documents using custom categories or labels. When Amazon Comprehend analyzes a document, it does the following:
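A minimal sketch of how a face-comparison result like the one above might be interpreted in code: the response shape follows Amazon Rekognition's documented CompareFaces output, while the 80% similarity threshold is an illustrative assumption, not a value from the demo.

```python
# Sketch: deciding whether two photos show the same person from an
# Amazon Rekognition CompareFaces-style response. A real call would use boto3:
#   response = boto3.client("rekognition").compare_faces(
#       SourceImage={...}, TargetImage={...}, SimilarityThreshold=80,
#   )

def best_similarity(response):
    """Return the highest similarity score among matched faces, or 0.0."""
    matches = response.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)

def is_same_person(response, threshold=80.0):
    """Treat the pair as the same person if any match clears the threshold."""
    return best_similarity(response) >= threshold

# Illustrative response for two photos of the same person taken years apart.
sample = {"FaceMatches": [{"Similarity": 93.4}], "UnmatchedFaces": []}

print(is_same_person(sample))  # True
```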
Detects entities based on built-in models
The words underlined in pink in the demo output denote entities.
Detects key phrases
The words underlined in blue denote key phrases.
Detects the dominant language
It identifies the dominant language of the document.
Determines the sentiment of the text
It determines whether the sentiment of the text is positive, negative, neutral, or mixed.
Analyzes the syntax in the document
It determines the part of speech of each word, such as noun, adjective, verb, determiner, or proper noun.

Lastly, one thing that made the webinar especially exciting and fruitful was how interactive the participants were; communication between the participants and the facilitators was smooth and effective, which had a great impact on learning for all the attendees. In a nutshell, the webinar was a complete success: the response to the event was overwhelming, the enthusiastic participants spanned varied career levels, and most attendees commended it as fruitful.
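To recap the Amazon Comprehend demo, here is a sketch of how a real-time sentiment result might be read in code: the response shape follows Comprehend's documented DetectSentiment output, while the example text and scores are illustrative assumptions.

```python
# Sketch: reading an Amazon Comprehend DetectSentiment-style response.
# A real call would use boto3, e.g.:
#   response = boto3.client("comprehend").detect_sentiment(
#       Text="The webinar was fantastic!", LanguageCode="en",
#   )

def dominant_sentiment(response):
    """Return the overall sentiment label and its score."""
    label = response["Sentiment"]  # POSITIVE / NEGATIVE / NEUTRAL / MIXED
    score = response["SentimentScore"][label.capitalize()]
    return label, score

# Illustrative response for a clearly positive sentence.
sample = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.97, "Negative": 0.01, "Neutral": 0.015, "Mixed": 0.005,
    },
}

print(dominant_sentiment(sample))  # ('POSITIVE', 0.97)
```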