Sakinmaz S. Python Essentials For AWS Cloud Developers 2023
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80461-006-0
www.packtpub.com
To my mother, Reyhan, and my father, Sami, for always
supporting and loving me. To my sons, Batu and Arman,
for recharging my energy. To my wife, Yonca, for giving
me support and love.
– Serkan Sakinmaz
Contributors
Preface
Part 1: Python Installation and the Cloud
4
Running Python Applications on EC2
What is EC2?
EC2 purchasing options
On-Demand
Reserved
Spot
Dedicated
EC2 instance types
Auto-scaling
Provisioning an EC2 server
Connecting to an EC2 server
Running a simple Python application on an EC2 server
Processing a CSV file with a Python application on an EC2 server
The AWS CLI
Summary
5
Running Python Applications with PyCharm
Installing the AWS Toolkit
Configuring the AWS Toolkit
Creating a sample Lambda function in AWS
Running an AWS Lambda function using the AWS Toolkit
Summary
10
12
Index
Conventions used
There are a number of text conventions used throughout
this book.
wget https://raw.githubusercontent.com/PacktPublishing/Python-Essentials-for-AWS-Cloud-Developers/main/fileprocessor.py
Bold: Indicates a new term, an important word, or words
that you see onscreen. For instance, words in menus or
dialog boxes appear in bold. Here is an example: “Click
Instances on the left side, and then click Launch
Instances.”
Get in touch
Feedback from our readers is always welcome.
Don’t worry: with every Packt book, you get a DRM-free PDF version of that book at no cost.
The perks don’t stop there: you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
https://packt.link/free-ebook/9781804610060
3. That’s it! We’ll send your free PDF and other benefits directly to your email.
Part 1: Python Installation and the
Cloud
In this part, you will learn to install and use the Python IDE and understand cloud basics. To get started with cloud computing via Python programming on AWS, we will also open an AWS account.
Installing Python
Installing PyCharm
Installing Python
To install Python, carry out the following steps:
After the installation, you will have a Python 3.X folder. The
Python folder has the following contents:
Figure 1.2 – Installation folder content
Installing PyCharm
PyCharm is one of the most powerful IDEs used to develop
Python applications. For the examples, we will use
PyCharm; you can also use another IDE if you prefer. You
have to carry out the following steps:
Summary
In this chapter, we explored the cloud basics and
advantages. After that, we installed Python and one of the
most popular and useful IDEs, PyCharm. PyCharm will be
our main tool in order to implement the applications for
AWS.
4. Once you fill out the Root user email address and AWS account name fields, you will receive a verification code via email. Enter this code in the Verification code field and click Verify.
Figure 2.3 – Add the verification code
IMPORTANT NOTE
I recommend using a card with a spending limit: if you mistakenly start an AWS service that is expensive or left constantly running, the limit can prevent you from overspending.
Figure 2.6 – Credit card info
Once you enter the credit card info, you might be asked for
confirmation depending on your banking account.
Summary
In this chapter, we looked into AWS account creation. The
AWS account will help you to carry out Python exercises in
the cloud environment. The point to note is that AWS is a
paid service and you have to consider the cost of what you
are going to use. In the next chapter, we will take a look at
popular services such as Lambda.
Part 2: A Deep Dive into AWS with
Python
In this part, you will deep-dive into the AWS services most used for Python programming, such as Lambda, EC2, and Elastic Beanstalk. Other AWS services, such as S3, will also be mentioned to give you broader knowledge.
Cloud computing
What is Lambda?
A Lambda skeleton
Logging in Lambda
Cloud computing
Cloud computing allows you to use computing resources such as disk and memory without managing the underlying infrastructure. The concept is important because it frees you up to focus on your application. When you run your own infrastructure, you need to buy or rent a computer, install all the necessary software, wire the cables, and keep the machine safe from both physical and software attacks. This clearly takes a significant amount of time that could otherwise go into your application. With cloud computing, you don’t have this kind of headache. The cloud provider takes on most of the responsibility and sets up and maintains the data center for you; what you need to do is carry out some configuration and deploy your application to the data center. This makes your life easier: the provider focuses on the infrastructure and you focus on the application. That is the biggest advantage of cloud computing.
What is Lambda?
Lambda is a compute service that allows you to run Python, Java, Node.js, Ruby, .NET, and Go code without provisioning or managing any servers; it is one of the most used services in the AWS stack. The only thing you need to do is develop and run your code. Lambda also has cost advantages: it is a pay-as-you-go model.
Once you create the Lambda function, you will have basic
Python code to be tested:
After running the test, Lambda will run, and you will be
able to see the results:
A Lambda skeleton
When you implement a Lambda function via Python, you
need to follow some rules in order to execute the
application. When a Lambda function is run, it calls the
handler method, which is shown with the following syntax:
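The handler code itself is not reproduced in this extract, so here is a minimal sketch of the expected shape; the function name and the event fields are illustrative, mirroring the example payload below:

```python
# A minimal Lambda handler sketch. AWS invokes the function named in the
# "Handler" setting, passing the deserialized JSON event and a context
# object with runtime metadata (request ID, remaining time, and so on).
def handler_name(event, context):
    # Whatever the handler returns becomes the invocation result.
    return {
        "Temperature": event.get("Temperature", 0),
        "Wind": event.get("Wind", 0),
    }
```

The return value is serialized back to JSON, such as the following: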
{
    "Temperature": 10,
    "Wind": -5
}
Logging in Lambda
It is important to use logging in order to trace your application. In some cases, you need information about how an application behaved; for example, you may be processing data via Lambda and run into an exception. Logging helps you check this information and understand the real problem in the application.
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler_name(event, context):
    logger.info('Process has finished and result will be returned')
    return {
        "statusCode": 200,
        "Temperature": 10,
        "Wind": -5
    }
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
You can also find the original code block from AWS: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.xhtml.
You can also see the latest code snippet within Lambda:
Figure 3.12 – A Lambda function with code
9. Go to the S3 service.
Now, you should see a list of buckets and the bucket that
you created:
Figure 3.16 – A bucket list
5. In the form, fill out the event name and select the event
type in the Event types section. For this example, we
are going to select the All object create events option.
Hence, when an object is created, the Lambda function
will be triggered:
Figure 3.20 – Event configuration
6. At the bottom of the page, select the Lambda function
that will be triggered, under the Destination section,
and click the Save changes button:
Figure 3.21 – The event destination
You should see a success message in the AWS console:
2. Click the Add files button, which allows you to add any
kind of file from your computer. For this example, we
have uploaded one RTF file. You can also upload an
image, PDF, or whatever you want:
Figure 3.25 – The S3 Upload page
5. When you click the link under Log stream, you will be able to see the log output produced by the Lambda function:
Summary
In this chapter, we dived into Lambda, which is one of the most important services in AWS. Lambda helps you to deploy and run your application without provisioning a server, which shortens deployment time. We also touched upon the S3 service, which is used for object storage and integrates well with Lambda. In the following chapter, we will take a look at how to provision a server and run a Python application on an AWS-based server.
4
What is EC2?
What is EC2?
AWS EC2 is a service that provides secure and scalable server machines in the cloud. The main advantage of EC2 is that server management is very easy from the AWS Management Console. When you provision an on-premises server, it is not easy to configure security policies, disk management, backup management, and so on; AWS accelerates all of this. When you provision EC2, AWS offers different purchasing options that you need to select from, and each option impacts the cost.
On-Demand
With this option, you don’t commit to a specific time period. AWS charges according to the time you use the server: you can provision a server, shut it down, and release it whenever you want. It is a pay-as-you-go model.
Reserved
You need to sign a contract with AWS for 1–3 years. The
key thing to note is that AWS offers a discount for a
Reserved commitment.
Spot
Let’s imagine you have an application with flexible start and end times, such as a data processing job that runs for five hours where the exact running time is not important; it can run at the beginning or the end of the month. You define a bid price for whatever you are willing to pay for the server and provision a Spot instance, which significantly reduces your cost.
Dedicated
This is useful when your organization holds software licenses and is moving to AWS. These servers are used only by your organization; hence, you can keep using the licenses that belong to your company.
Auto-scaling
If you need a clustered environment, it would be better to
define an auto-scaling policy in order to manage resources
efficiently.
5. You can now see the Key pair (login) panel. A key pair
is used to connect to the server via the SSH key in a
secure way. In order to create a new SSH key, click
Create new key pair:
Figure 4.6 – Creating a new key pair
Once you click Create key pair, it will download the file.
Please keep this file; it will be used to connect to the
machine. The Key pair name dropdown will also be
selected with your creation. When you create a new key
pair in the upper section, the new key pair name will be
visible, which you can see in the following screenshot. For
this example, our key pair is key_for_test_python:
As you can see, once you add a server to the VPC subnet in AZ 2, the EC2 instance is logically isolated from the others. Hence, you can add access controls to keep the server secure.
3. Once you click the button, under the VPC settings, VPC
and more is selected by default. This option allows you
to create a VPC with subnets, which you see on the right
side of the following screenshot. With this option, you
can create a VPC and subnet together:
When you click Create VPC, the VPC begins creation and
you can see the status of the progress:
Figure 4.16 – The VPC creation process
After it has been created, you are able to see the VPC and
subnet in the VPC console:
1. Open the EC2 launch page again. In this case, the VPC
and subnet are selected by default. Click Edit:
Figure 4.18 – Network settings
2. Under the SSH client tab, you can see the steps to
connect to the EC2 machine:
Figure 4.23 – Steps to connect
After running the mkdir command, you can run the ls command to list the contents of your directory. As you can see, the csv folder has been created.
wget https://raw.githubusercontent.com/PacktPublishing/Python-Essentials-for-AWS-Cloud-Developers/main/sample.csv
wget https://raw.githubusercontent.com/PacktPublishing/Python-Essentials-for-AWS-Cloud-Developers/main/fileprocessor.py
The following code is very simple; the code imports the csv
library and prints the first five lines within the CSV:
Figure 4.34 – Python code
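The code shown in the figure is not reproduced in this extract; a sketch of what such a script might look like (the function name is my own, and the downloaded fileprocessor.py may differ in detail) is:

```python
import csv

def print_first_lines(path, limit=5):
    """Print (and return) the first `limit` rows of a CSV file."""
    rows = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        for i, row in enumerate(reader):
            if i >= limit:
                break
            print(row)
            rows.append(row)
    return rows
```

Running it against the downloaded sample.csv would print the first five rows of the file.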
2. Click Roles on the left panel and then click Create role:
Figure 4.37 – Create role
5. Give a name to the role and click the Create role button
to create a role:
Summary
In this chapter, we learned about the AWS EC2 service,
which is used to create a server on the cloud. You can
create your server in an efficient way and use it for
different purposes, such as an application server, web
server, or database server. We also created an EC2 server
as an example and ran our Python application on EC2. In
the following chapter, we will take a look at how to debug
our Python application via PyCharm.
5
4. After installation, the IDE will ask you to restart it. Click
the Restart IDE button:
Figure 5.3 – Restart the IDE
1. After restarting the IDE, you will see the text AWS: No
credentials selected at the bottom-right of the page.
Click this text:
Figure 5.4 – AWS: No credentials selected
The following GitHub link contains the code block for the S3 Reader application: https://github.com/PacktPublishing/Python-Essentials-for-AWS-Cloud-Developers/blob/main/S3Reader.py.
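The linked S3Reader.py is the authoritative version; as a rough sketch of what such a reader involves (the function name and the injectable client parameter are my own, not from the book):

```python
def read_s3_object(bucket, key, s3=None):
    """Return the body of an S3 object decoded as UTF-8.

    A client can be injected for testing; otherwise a real boto3 client
    is created, which requires AWS credentials to be configured.
    """
    if s3 is None:
        import boto3  # only needed when no client is injected
        s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")
```

For example, read_s3_object("my-bucket", "sample.csv") would return the file contents as a string.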
When you click the link, the AWS Toolkit will run the
Lambda function via PyCharm:
Summary
In this chapter, we learned how to install and use the AWS Toolkit within PyCharm. It is helpful for implementing and deploying AWS services from PyCharm in a practical way. The AWS Toolkit integrates AWS services, so instead of using the AWS Management Console, you can work from PyCharm on your local machine. In the following chapter, we will take a look at how to deploy a Python application to Elastic Beanstalk.
6
The code imports the Flask library and runs the application
on localhost port 5000. When you run it, you will see "Hello
World!" in the browser.
You can also check the Flask framework at the following
website: https://flask.palletsprojects.com/en/2.2.x/.
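The application code itself is not reproduced in this extract; a minimal Flask app matching the description above (localhost, port 5000, "Hello World!") might look like this:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # The response shown in the browser
    return "Hello World!"

if __name__ == "__main__":
    # Serve on localhost port 5000, as described above
    app.run(host="127.0.0.1", port=5000)
```

Opening http://127.0.0.1:5000/ in a browser then shows the greeting.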
4. I have named the file Python Web app. You can name it
whatever you want:
Figure 6.5 – Naming the application
8. Scroll down to the last panel on the page. In this example, we will proceed with Sample application; click Create environment:
Figure 6.9 – Finalizing the platform
Once you click the Choose file button, your Python web
application will be deployed to Elastic Beanstalk.
Summary
In this chapter, we learned about the AWS Elastic
Beanstalk service and how to create a Python web
environment in the cloud. Elastic Beanstalk is useful when
you deploy web applications in the cloud. It comes with
scalability, logging, and monitoring advantages. In the
following chapter, we will take a look at how to monitor our
applications via CloudWatch.
Part 3: Useful AWS Services to
Implement Python
In this part, you will deep-dive into other AWS services for
Python programming, such as monitoring, creating an API,
database operations, and NoSQL with DynamoDB.
What is CloudWatch?
CloudWatch alarms
What is CloudWatch?
When you deploy an application, it is important to track whether it meets expectations regarding availability, performance, and stability. Issues can occur in the application, and some of the AWS services it relies on could be down or running incorrectly. This is a very bad experience from a customer’s point of view, and it is better to spot these issues before the customer does. If you serve an application via AWS, you need to use CloudWatch to monitor your applications and observe how they behave.
import json
import os

def lambda_handler(event, context):
    print('ENVIRONMENT VARIABLES')
    print(os.environ)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
3. Once you click Log groups, you will see a list representing the running AWS services that produce logs. In this list, find the Lambda function that you ran:
Figure 7.7 – Log list
After clicking the link, you can see the detailed logs that
Lambda creates:
Figure 7.10 – Lambda logs
This list shows a summary view of the log. When you click the down arrow on the left, the panel opens and you can investigate the detailed logs. In the Lambda function, we logged the operating system environment variables; hence, you will see details such as region, memory size, and language:
Figure 7.11 – Log details
3. Once you select it, you can see the default query:
Figure 7.14 – The Log Insights filter
Figure 7.15 – Logs
Let’s add one more filter to search for a keyword within the
message. You can use the following query format:
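The exact query from the book is not reproduced in this extract; a Logs Insights query of the kind described, filtering messages for a keyword (here the AWS_DEFAULT_REGION keyword used in the example that follows), might look like:

```
fields @timestamp, @message
| filter @message like /AWS_DEFAULT_REGION/
| sort @timestamp desc
| limit 20
```

The filter clause keeps only log events whose message matches the pattern.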
When you expand the message, you will find what you
searched for – in this case, AWS_DEFAULT_REGION:
Figure 7.17 – Detailed logs
CloudWatch alarms
AWS has more than 100 services, and it is not easy to keep track of the behavior of all of them. You need to be informed when an AWS service reaches a specific metric threshold. In Chapter 4, we covered how to create a server with the EC2 service. For example, an EC2 server whose CPU usage exceeds 90% may run into performance problems. Another example would be a notification when you exceed a specific cost in AWS. For these kinds of scenarios, you can define a metric, and if the threshold is reached, you will be notified via email.
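The CPU scenario above can also be scripted. As a sketch (the alarm name, period, and threshold here are illustrative assumptions, not values from the book), the following builds the parameters for CloudWatch's put_metric_alarm call:

```python
def build_cpu_alarm_params(instance_id, threshold=90.0):
    """Parameter dict for a CPU-utilization alarm on one EC2 instance."""
    return {
        "AlarmName": "high-cpu-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate over 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,     # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
    }

# To create the alarm for real (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**build_cpu_alarm_params("i-0123456789abcdef0"))
```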
3. From the list, select USD and click Select metric. The
currency type may vary, depending on your AWS
account:
Summary
In this chapter, we learned about the AWS CloudWatch
service and how to investigate service logs in AWS.
CloudWatch is very useful for logging; it also allows you to
define some metrics and alarms to monitor services. In the
following chapter, we will take a look at database
operations within AWS.
8
Features of RDS
Provisioning RDS
Secrets Manager
Features of RDS
RDS comes with different features that facilitate the
creation and maintenance of the database. Let’s look at the
most important features:
Provisioning RDS
In this section, we are going to create a sample relational
database on the cloud. To provision the RDS on AWS, carry
out the following steps:
1. Open the AWS console and type rds in the search box:
11. As a final step, you can keep other values as is. Click
Create database and proceed with the database
creation:
Figure 8.11 – Database creation
After some time, you can see the database is ready to use:
12. Click the Connectivity & security tab. You will see
VPC security groups; click the link:
Figure 8.14 – Security groups
13. In the new panel, click Edit inbound rules. This will
allow us to define the inbound connections:
Figure 8.15 – Inbound rules
14. Add a rule for the MySQL/Aurora type and click Save (the button isn’t shown in the following figure, but it is at the bottom of the page):
USE address;
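The script that created and populated the table is not reproduced in this extract; a hypothetical version with two rows might look like the following (the column definitions are assumptions for illustration only):

```sql
-- Hypothetical schema; the book's actual table definition is not shown here.
CREATE TABLE address (
    id INT PRIMARY KEY,
    city VARCHAR(100)
);
INSERT INTO address VALUES (1, 'Berlin'), (2, 'Istanbul');
```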
The table has two rows, and we are going to read these
values from the Lambda function:
Figure 8.27 – Select script
7. Copy and paste the following code to read data from the
database:
import mysql.connector

# rds settings
rds_host = "database-1.********.us-east-1.rds.amazonaws.com"
name = "**min"
password = "*****234"
db_name = "address"

if __name__ == '__main__':
    conn = mysql.connector.connect(host=rds_host, user=name, passwd=password, database=db_name, port=3306)
    cursor = conn.cursor()
    cursor.execute("select * from address")
    data = cursor.fetchall()
    print(data)
The preceding code block connects to the RDS database
and reads from the address table by executing the select *
from address query. For rds_host, name, and password, please fill
out your database host and credentials:
8. When you click Run, you can see the results from the
database:
Figure 8.34 – Results from the database
Congrats! You are able to read data from the AWS database via Python. You can also extend your query by implementing insert and update statements. In this section, we learned how to perform a database operation via Python.
Secrets Manager
Secrets Manager is an AWS service that allows you to
manage and retrieve database credentials, which can be
helpful when using a database. Let’s learn how to use
Secrets Manager:
3. Select the secret type that you want to store a secret for,
and fill out the username and password. In this case, we
will select the database-1 instance. After filling out the
details, click Next:
Figure 8.37 – Filling out the details
5. On the next screen, you will see the options for using
this secret with different programming languages. Click
Store to finalize it:
Figure 8.39 – Store secret
6. As the final step, you will see the secret on the list:
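Retrieving the stored secret from code can be sketched as follows; the function name and the injectable client parameter are my own, and a real call requires AWS credentials:

```python
import json

def get_db_credentials(secret_name, client=None):
    """Fetch a secret from Secrets Manager and parse its JSON payload."""
    if client is None:
        import boto3  # a real call needs AWS credentials configured
        client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])
```

The returned dict can then feed the username and password arguments of the mysql.connector connection instead of hardcoding them.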
Summary
In this chapter, we learned about AWS RDS, which is used to create a relational database in the cloud in an efficient way. The point to note is that you can create different database engines, including MySQL, Microsoft SQL Server, and PostgreSQL. In this chapter, we created an RDS instance in the cloud and ran a Python application to perform a read operation. In the following chapter, we will take a look at creating an API in AWS.
9
Now that we have a good idea of what API Gateway is, let’s
have a look at its features.
A few seconds later, you will see the Lambda function has
been created with the template code:
Figure 9.6 – Lambda template
import json

def lambda_handler(event, context):
    number1 = event['Number1']
    number2 = event['Number2']
    sum = number1 + number2
    return {
        'statusCode': 200,
        'Sum': sum
    }
{
    "Number1": 10,
    "Number2": 15
}
When you implement an API, you can select from several API types. The following are the most used:
After setting the permissions, you can see the data flow for
the API:
Figure 9.20 – The API flow
2. Fill out the form and click Enable CORS and replace
existing CORS headers. You can retain the form
details as is. The form defines the following:
{
    "Number1": 10,
    "Number2": 15
}
When you check the logs, you can see the results of the API
response. As you can see, the sum of the values is 25.
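The deployed API can also be called from a script. The sketch below builds the request body from the example above and posts it; the invoke URL is account-specific, so the URL and function names here are assumptions:

```python
import json
import urllib.request

def build_sum_request(number1, number2):
    """Build the JSON body the API expects, matching the example payload."""
    return json.dumps({"Number1": number1, "Number2": number2}).encode("utf-8")

def call_sum_api(url, number1, number2):
    """POST the two numbers to a deployed endpoint and return the Sum field."""
    req = urllib.request.Request(
        url,
        data=build_sum_request(number1, number2),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["Sum"]
```

For example, call_sum_api("https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/sum", 10, 15) would return 25 against the API built in this chapter.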
Summary
In this chapter, we learned how to use the AWS API
Gateway service and how to create an API gateway that has
a backend service with Python Lambda. API Gateway is
useful when you need to implement an API service with
backend support via Python. It comes with scalability,
logging, and monitoring advantages. In the next chapter,
we will take a look at the basics of DynamoDB and NoSQL.
10
Key-value database
In this NoSQL database type, you can access data based on
keys. For example, you have customer ID as a key, and
address, age, and family information as values. When you
need to access the value, you just provide the key as a
query parameter:
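The access pattern can be illustrated with a plain Python dict (the customer data here is invented for illustration):

```python
# A key-value lookup: the key alone is enough to retrieve the whole record.
customers = {
    "123": {"address": "Germany", "age": 40, "family": "married, two kids"},
}

record = customers["123"]   # query by key; no scanning or joins involved
print(record["address"])    # -> Germany
```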
Document database
A document database is another type of NoSQL database
that can store unstructured data such as JSON. It is useful
if you need to store unstructured big data and retrieve data
with different parameters:
Figure 10.3 – Document database
{
    "employee": {
        "name": "Jack",
        "age": 25
    }
}
2. Click Tables on the left side, and then click the Create
table button:
{
    "customer_id": {"S": "123"},
    "customer_mail": {"S": "[email protected]"},
    "name": {"S": "Serkan"},
    "address": {"S": "Germany"}
}
Since you are using NoSQL, you can also insert JSON with a different structure from the item that we inserted previously. The following JSON is also valid for the customer table:
{
    "customer_id": {"S": "1234"},
    "customer_mail": {"S": "[email protected]"},
    "name": {"S": "Jane"},
    "profession": {"S": "Data Engineer"}
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGetItem",
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWriteItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:961487522622:table/customer"
        }
    ]
}
4. You can add the policy name and finish creating the
policy. In this example, I am using
DynamoDBCustomerTableOperations as a policy
name:
Figure 10.18 – Policy creation
8. Fill in Role name and create the role. As you can see, the name we have given to the role is DynamoDBCustomerTableRole. Scroll down and click the Create role button:
Figure 10.22 – Creating a role
import json
import boto3

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb', region_name="us-east-1")
    table = dynamodb.Table('customer')
    response = table.get_item(Key={'customer_id': "123", 'customer_mail': "[email protected]"})
    item = response['Item']
    print(item)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The code imports the boto3 library, which provides useful functions for DynamoDB operations. boto3 is a library of AWS service-specific features that facilitates the implementation of cloud applications when working with Python on AWS. You can get more details from the following link: https://boto3.amazonaws.com/v1/documentation/api/latest/index.xhtml.
Once you run the Lambda function, you are able to see the
result:
Figure 10.26 – Execution results
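Extending the read example to an insert can be sketched as follows; the function name and the injectable table parameter are my own, and a real call needs AWS credentials plus the live customer table:

```python
def put_customer(item, table=None):
    """Insert one item into the customer table."""
    if table is None:
        import boto3  # a real call needs AWS credentials configured
        table = boto3.resource("dynamodb", region_name="us-east-1").Table("customer")
    table.put_item(Item=item)
    return item
```

For example, put_customer({"customer_id": "125", "customer_mail": "...", "name": "Ada"}) would add a third item alongside those inserted earlier in the chapter.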
Summary
In this chapter, we learned about the AWS DynamoDB
service and how to create a DynamoDB database in AWS.
After creating the database, we implemented a Lambda
Python code snippet that read items from DynamoDB. You
now also know how to extend the Lambda code to insert
data into a DynamoDB table. DynamoDB is useful when you
need to implement a key-value database that is managed by
AWS. It comes with scalability, logging, and monitoring
advantages. In the following chapter, we will take a look at
the Glue service.
11
id,location_id,address_1,city,state_province
1,1,2600 Middlefield Road,Redwood City,CA
2,2,24 Second Avenue,San Mateo,CA
3,3,24 Second Avenue,San Mateo,CA
4,4,24 Second Avenue,San Mateo,CA
5,5,24 Second Avenue,San Mateo,CA
6,6,800 Middle Avenue,Menlo Park,CA
7,7,500 Arbor Road,Menlo Park,CA
8,8,800 Middle Avenue,Menlo Park,CA
9,9,2510 Middlefield Road,Redwood City,CA
10,10,1044 Middlefield Road,Redwood City,CA
After the upload, the bucket will include the CSV file:
6. Give a name to the role that we are creating, then click Create role to finish the role creation:
6. Select the Transform tab from the panel and you will
see the following data mapping. This mapping is
generated by Glue:
Figure 11.17 – Mapping
7. Select the Data target properties - S3 tab from the
panel and fill out the panel with target details. Since we
are converting to JSON, the format will be JSON. The
target location could also be another S3 bucket; in this
example, I will give the same S3 location for input and
output:
Figure 11.18 – Data target
11. Click Save. As you can see, Glue has created a Python
Spark script that is going to convert CSV to JSON.
PySpark is a data processing library that can also be
used in the AWS Glue job:
Figure 11.22 – Code generation
After some time, you can check the job status from the
Runs tab:
{"id":"1","location_id":"1","address_1":"2600 Middlefield Road","city":"Redwood City","state_province":"CA"}
{"id":"2","location_id":"2","address_1":"24 Second Avenue","city":"San Mateo","state_province":"CA"}
{"id":"3","location_id":"3","address_1":"24 Second Avenue","city":"San Mateo","state_province":"CA"}
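Outside Glue, the same CSV-to-JSON-lines conversion can be sketched in a few lines of plain Python (the function name is my own; the Glue-generated PySpark script does the equivalent at scale):

```python
import csv
import json

def csv_to_json_lines(csv_path, json_path):
    """Rewrite a CSV file as newline-delimited JSON, one object per row."""
    with open(csv_path, newline="") as src, open(json_path, "w") as dst:
        for row in csv.DictReader(src):
            dst.write(json.dumps(row) + "\n")
```

Running it on the sample addresses CSV produces output shaped like the records shown above.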
Summary
In this chapter, we learned about the AWS Glue service and how to create an ETL pipeline with AWS Glue. Glue is very efficient when you need to create data pipelines. One cool feature of Glue is the visual flow generator, which allows you to create a flow with drag and drop. This makes flows easy to create and saves lots of time; it also helps people who don’t have much coding experience. Hence, if you work with data, Glue is one of the best services within AWS. In the next chapter, we will create a sample project within AWS using the Python programming language.
12
import boto3
import base64
import json

def lambda_handler(event, context):
    try:
        s3 = boto3.resource('s3')
        s1 = json.dumps(event)
        data = json.loads(s1)
        image = data['image_base64']
        file_content = base64.b64decode(image)
        bucket = data['bucket']
        s3_file_name = data['s3_file_name']
        obj = s3.Object(bucket, s3_file_name)
        obj.put(Body=file_content)
        return 'Image is uploaded to ' + bucket
    except BaseException as exc:
        return exc
s1 = json.dumps(event)
data = json.loads(s1)
image = data['image_base64']
file_content = base64.b64decode(image)
bucket = data['bucket']
s3_file_name = data['s3_file_name']
obj = s3.Object(bucket,s3_file_name)
obj.put(Body=file_content)
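The client side of this flow mirrors the decode step: the caller base64-encodes the image and wraps it in the JSON body the function expects. A sketch (the helper name is my own):

```python
import base64
import json

def build_upload_payload(image_bytes, bucket, s3_file_name):
    """Build the JSON request body that the Lambda function above expects."""
    return json.dumps({
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
        "bucket": bucket,
        "s3_file_name": s3_file_name,
    })
```

This produces a body of the same shape as the Postman example later in the chapter.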
1. Open the IAM role and create a new role for Lambda:
Figure 12.6 – Creating a role
6. After creating the role, you will see the role on the list:
Figure 12.11 – The role on the list
2. Provide a name for the REST API. We will use the name
UploadImageToS3 in this subsection:
5. Once you have imported the API, you are ready to call
the API. In the POST section, select the raw request
type with JSON as follows:
Figure 12.25 – The raw parameter
{
    "image_base64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNk+A8AAQUBAScY42YAAAAASUVORK5CYII=",
    "bucket": "python-book-image",
    "s3_file_name": "image.jpeg"
}
7. Click the Send button in order to call the API. Once you
click it, you can see the response of the API:
Figure 12.27 – JSON response
Summary
In this chapter, we have created an application to upload
an image using API Gateway, Lambda, and S3. The image is
converted to base64 to be stored in S3. One of the best
aspects of using Lambda, S3, and API Gateway is that we
haven’t provisioned any server. Lambda, S3, and API
Gateway are serverless and we don’t need to manage the
infrastructure. AWS manages and handles it for you.
We have finished all the chapters and learned how to use
the most common AWS services with Python. I hope all the
chapters have provided you with good knowledge about
AWS. Following this, you can implement more complex
Python projects with these services as well as use more
services within AWS.
Index
A
Amazon Web Services (AWS) 3
architecture 133
features 134
AWS account
creating 11-14
awscli 57
AWS CLI 57
creating 175-181
features 170
AWS Toolkit
configuring 67-69
B
Boto3 30
C
cloud 3
advantages 4
cloud computing 19
cloud services
considerations 4
cost management 4
security 4
features 87, 88
Log Insights 94
configurations, Lambda 24
destinations 25
environment variable 25
ephemeral storage 25
memory 25
permissions 25
tags 25
timeout 25
triggers 25
CSV file
D
database operations
destinations 25
creating 154-161
features 153
E
EC2 41, 42, 183
purchasing options 42
EC2 server
connecting to 53, 54
provisioning 44-53
features 75, 76
environment variable 25
ephemeral storage 25
F
Flask 76
reference link 76
G
global secondary indexes 157
Glue job
L
Lambda 20, 41, 183
advantages 20
configurations 24-26
limitations 20
logging functionality 27
memory limit 20
returning value 26
skeleton 26
timeout limit 20
Lambda function
Lambda logs
reference link 27
Log Insights
M
memory 25
MySQL WorkBench
N
NoSQL database 151, 184
P
permissions 25
Postman 193
URL 194
dedicated 42
on-demand 42
reserved 42
spot 42
PyCharm 7
download link 7
installing 7, 8
project, creating 8, 9
PySpark 180
Python 4
download link 5
installing 5, 6
creating 76
R
Relational Database Management System (RDBMS) 106
connecting to 117-122
features 106
provisioning 107-117
RESTful 133
S
S3 28
creating 170-172
using 128-131
Spark 43
subnet 48
Swagger 193
T
table
tags 25
timeout 25
triggers 25
U
up-to-date limits, AWS Lambda quotas page
reference link 20
V
Virtual Private Cloud (VPC) 25
W
WebSocket 133
Packtpub.com
Why subscribe?
Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals.
https://packt.link/9781804614341
Spyridon Maniotis
ISBN: 978-1-80461-434-1
https://packt.link/9781801812078
Rajnish Harjika
ISBN: 978-1-80181-207-8