In this section, you set up your AWS DeepLens device, import a pre-trained model, and deploy the model to AWS DeepLens.
You first need to register your AWS DeepLens device, if you haven’t already.
After you register your device, you need to install the latest OpenCV (version 4.x) packages and Pillow libraries to enable the preprocessing algorithm in the DeepLens inference Lambda function. To do so, you need the IP address of AWS DeepLens on the local network, which is listed in the Device details section. You also need to ensure that Secure Shell (SSH) is enabled for your device. For more information about enabling SSH on your device, see View or Update Your AWS DeepLens 2019 Edition Device Settings.
Open a terminal application on your computer and connect to your AWS DeepLens device over SSH.
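The connection command takes the device's IP address from the Device details section. The following is a sketch; it assumes the device's default aws_cam user name, and the IP address placeholder must be replaced with your own device's address:

```shell
# Connect to the DeepLens device (replace the placeholder with your device's IP)
ssh aws_cam@<your-deeplens-ip-address>
```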
Then enter the following commands in the SSH terminal:
sudo su
pip install --upgrade pip
pip install opencv-python
pip install pillow
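To confirm that the libraries installed correctly before moving on, you can run a quick import check in the same SSH session. This is a minimal sketch; it assumes python3 is on the device's PATH:

```shell
# Print the installed versions of the two preprocessing dependencies
python3 -c "import cv2, PIL; print('OpenCV', cv2.__version__); print('Pillow', PIL.__version__)"
```

If either import fails, rerun the corresponding pip install command above.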
For this post, you use a pre-trained model. We trained the model on 36 objects from The Quick Draw Dataset, made available by Google, Inc., under the CC BY 4.0 license. For each object, we took 1,600 images from the dataset for training and 400 images for testing. Holding back 400 images for testing allows us to measure the accuracy of our model against images that it has never seen.
For instructions on training a model using Amazon SageMaker as the development environment, see AWS DeepLens Recipes and Amazon SageMaker: Build an Object Detection Model Using Images Labeled with Ground Truth.
To import your model, complete the following steps:
Download the model aws-deeplens-pictionary-game.tar.gz.
Create an Amazon Simple Storage Service (Amazon S3) bucket to store this model. For instructions, see How do I create an S3 Bucket?. The S3 bucket name must contain the term deeplens; the AWS DeepLens default role only has permission to access buckets whose names contain deeplens.
After the bucket is created, upload aws-deeplens-pictionary-game.tar.gz to the bucket and copy the model artifact path.
On the AWS DeepLens console, under Resources, choose Models.
Choose Import model.
On the Import model to AWS DeepLens page, choose Externally trained model.
For Model artifact path, enter the Amazon S3 location for the model you uploaded earlier.
For Model name, enter a name.
For Model framework, choose MXNet.
Choose Import model.
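If you prefer the command line, the bucket creation and model upload described above can also be done with the AWS CLI. The following is a sketch; the bucket name is a hypothetical example, and yours must be globally unique and contain the term deeplens:

```shell
# Create the S3 bucket (example name; must contain "deeplens")
aws s3 mb s3://deeplens-pictionary-game-model

# Upload the model artifact; note the resulting S3 path for the import step
aws s3 cp aws-deeplens-pictionary-game.tar.gz s3://deeplens-pictionary-game-model/
```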
To deploy your model, complete the following steps:
On the AWS DeepLens console, under Resources, choose Projects.
Choose Create new project.
Choose Create a new blank project.
For Project name, enter a name.
Choose Add model and choose the model you imported earlier.
Choose Add function and choose the Lambda function you created earlier.
Select your newly created project and choose Deploy to device.
On the Target device page, select your device from the list.
On the Review and deploy page, choose Deploy.
The deployment can take up to 5 minutes to complete, depending on the speed of the network that your AWS DeepLens device is connected to. When the deployment is complete, you should see a green banner message indicating that the deployment succeeded.
To verify that the project was deployed successfully, you can check the text prediction results sent to the cloud via AWS IoT Greengrass. For instructions, see Using the AWS IoT Greengrass Console to View the Output of Your Custom Trained Model (Text Output).
In addition to the text results, you can view the detection results overlaid on top of your AWS DeepLens live video stream. For instructions, see Viewing AWS DeepLens Output Streams.