To recognize faces, we will use Amazon Rekognition collections. A collection is a container for persisting faces detected by the IndexFaces API action. Amazon Rekognition doesn’t store copies of the analyzed images. Instead, it stores face feature vectors as the mathematic representation of a face within the collection.
You can use the facial information in a collection to search for known faces in images, stored videos, and streaming videos.
To create a collection, first configure the AWS CLI and then run:
aws rekognition create-collection --collection-id "Faces" --region us-east-1
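If you prefer to script this step, the same call can be made with boto3. A minimal sketch, assuming the `client` argument is a boto3 Rekognition client and using the same collection name as the CLI example above:

```python
def create_face_collection(client, collection_id="Faces"):
    """Create a Rekognition collection and return the HTTP status code."""
    response = client.create_collection(CollectionId=collection_id)
    return response.get("StatusCode")

# Usage (requires AWS credentials):
# import boto3
# create_face_collection(boto3.client("rekognition", region_name="us-east-1"))
```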
DynamoDB works well for this use case. As a fully managed service, you don’t need to worry about the elasticity or scalability of the database: there is no limit to the amount of data that can be stored in a DynamoDB table, and as the data set grows, DynamoDB automatically spreads it over sufficient machine resources to meet storage requirements. With a self-managed database, the growing number of records would force you to scale it yourself. As for pricing, with DynamoDB you pay only for the resources you provision; for this use case it is quite possible to stay within the AWS Free Tier or run the project at a low DynamoDB price point.
To create the table, navigate to the DynamoDB console in the AWS Management Console and create a table. Use Faces as the table name and faceID as the primary key, and leave the other settings at their defaults.
We’ll also create a table named logs for storing the logs of your Lambda function. For this table use unixtime as the primary key.
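The two tables can also be created programmatically. A hedged sketch with boto3, in which the key attribute types (string for faceID, number for unixtime) and on-demand billing are assumptions, since the walkthrough leaves those settings at their defaults:

```python
# Attribute types and PAY_PER_REQUEST billing are assumptions;
# adjust them to match your console settings.
FACES_TABLE = {
    "TableName": "Faces",
    "KeySchema": [{"AttributeName": "faceID", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "faceID", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",
}
LOGS_TABLE = {
    "TableName": "logs",
    "KeySchema": [{"AttributeName": "unixtime", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "unixtime", "AttributeType": "N"}],
    "BillingMode": "PAY_PER_REQUEST",
}

def create_tables(client):
    """Create both tables; client is a boto3 DynamoDB client."""
    for spec in (FACES_TABLE, LOGS_TABLE):
        client.create_table(**spec)

# Usage (requires AWS credentials):
# import boto3
# create_tables(boto3.client("dynamodb", region_name="us-east-1"))
```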
You should follow AWS IAM best practices for production implementations.
Replace the template Lambda code with the code you downloaded from GitHub. Modify the Lambda timeout to 1 minute.
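The timeout change can be scripted as well, via the Lambda UpdateFunctionConfiguration API. A sketch, in which the function name is a placeholder you would replace with your own:

```python
def set_lambda_timeout(client, function_name, timeout_seconds=60):
    """Set a Lambda function's timeout; client is a boto3 Lambda client.

    The 60-second default matches the 1-minute timeout used here.
    """
    client.update_function_configuration(
        FunctionName=function_name, Timeout=timeout_seconds
    )

# Usage (requires AWS credentials):
# import boto3
# set_lambda_timeout(boto3.client("lambda"), "my-face-function")
```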
Copy the code from the GitHub repository and paste it into the code box. Let’s inspect the Lambda code to understand what it’s doing:
response = rekognition.detect_labels(Image=image, MaxLabels=123, MinConfidence=50)

coffee_cup_detected = False
for label in response["Labels"]:
    if label["Name"] == "Coffee Cup" or label["Name"] == "Cup":
        coffee_cup_detected = True
        break

if coffee_cup_detected:
    message = detect_faces(image, bucket, key)
This part of the code uses Amazon Rekognition to detect the labels in the image and checks whether “Cup” or “Coffee Cup” appears in the response. If it finds either of these labels, it calls a face detection function, which searches the face collection to find whether there is a matching face:
faces = rekognition.search_faces_by_image(CollectionId=face_collection, Image=image, FaceMatchThreshold=face_match_threshold, MaxFaces=1)
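The response can then be inspected for a match. A sketch of that check, where the field names follow the documented SearchFacesByImage response shape and the threshold default is an assumption:

```python
def best_match_face_id(response, threshold=70.0):
    """Return the FaceId of a match at or above threshold, else None.

    response is a SearchFacesByImage result; Rekognition returns
    FaceMatches ordered by similarity, highest first.
    """
    for match in response.get("FaceMatches", []):
        if match["Similarity"] >= threshold:
            return match["Face"]["FaceId"]
    return None
```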
If no matching faces are found in the collection, the face is indexed and added to the collection:
faces = rekognition.index_faces(Image=image, CollectionId=face_collection)
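The IndexFaces response carries the identifier of each newly indexed face, which can then be recorded in the Faces table. A hedged sketch, in which any item attributes beyond the faceID key are assumptions:

```python
def record_indexed_faces(response, table):
    """Extract FaceIds from an IndexFaces response and store each in DynamoDB.

    table is a boto3 DynamoDB Table resource for the Faces table.
    """
    face_ids = [rec["Face"]["FaceId"] for rec in response.get("FaceRecords", [])]
    for face_id in face_ids:
        table.put_item(Item={"faceID": face_id})
    return face_ids
```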
To test the function, you can upload an image to your S3 bucket and check your DynamoDB table to see the result.
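This end-to-end check can be scripted too. A sketch, assuming placeholder bucket and file names and that the boto3 clients are passed in:

```python
import time

def smoke_test(s3, table, bucket, local_path, key, wait_seconds=10):
    """Upload a test image, then read back whatever the Lambda wrote.

    s3 is a boto3 S3 client; table is the Faces DynamoDB Table resource.
    """
    s3.upload_file(local_path, bucket, key)
    time.sleep(wait_seconds)  # give the Lambda time to run
    return table.scan().get("Items", [])

# Usage (requires AWS credentials; names are placeholders):
# import boto3
# smoke_test(boto3.client("s3"),
#            boto3.resource("dynamodb").Table("Faces"),
#            "my-bucket", "test.jpg", "test.jpg")
```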