Docker

Docker is a popular platform for developing, shipping, and running applications inside
containers. Containers are lightweight, isolated environments that package an application and
its dependencies, making it easier to ensure consistency between different environments,
from development to production. In this detailed explanation, I'll cover the key concepts of
Docker and provide examples to illustrate these concepts.

Key Docker Concepts:

1. Images: Docker images are read-only templates that define how a container should
run. Images contain the application code, libraries, and dependencies needed to
execute an application. Images are often created from a Dockerfile, which is a text file
that specifies the instructions for building the image.

2. Containers: Containers are instances of Docker images. They are lightweight,
isolated, and run in their own environment. Containers can be started, stopped, and
deleted quickly. They provide a consistent runtime environment, regardless of the
host system.

3. Dockerfile: A Dockerfile is a text file that contains a set of instructions for building a
Docker image. These instructions include things like specifying the base image,
copying files into the image, setting environment variables, and running commands.
Here's a simple example:

# Use a base image
FROM ubuntu:20.04

# Set an environment variable
ENV MY_VAR=HelloDocker

# Copy files into the image
COPY ./app /app

# Run a command when the container starts
CMD ["./app/start.sh"]

4. Docker Hub: Docker Hub is a public registry of Docker images. It allows developers
to share and distribute Docker images. You can find official images for various
software and create your own images to publish.

5. Docker Compose: Docker Compose is a tool for defining and running multi-
container Docker applications. It uses a YAML file (docker-compose.yml) to define
services, networks, and volumes for your application. It simplifies the management of
complex applications consisting of multiple containers.

Brief summary of Docker's key components:

1. Docker Daemon:
o The Docker daemon (also known as dockerd) is a background service that
manages Docker containers on a host system.
o It is responsible for building, running, and managing containers.
o The Docker daemon listens for Docker API requests and communicates with
the container runtime to execute those requests.
o It typically runs as a system service and handles the low-level container
operations.

2. Docker Client:
o The Docker client (usually invoked using the docker command) is a
command-line tool that allows users to interact with the Docker daemon.
o Users issue commands to the Docker client to perform various tasks like
creating containers, building images, managing volumes, and more.
o The Docker client communicates with the Docker daemon via the Docker API
to carry out these actions.
o It acts as the primary interface for users to control and manage Docker
containers and resources.

3. Docker Socket:
o The Docker socket (typically /var/run/docker.sock on Unix-based systems) is a
Unix socket that serves as a communication channel between the Docker
client and the Docker daemon.
o When a Docker client issues a command, it sends a request to the Docker
socket.
o The Docker daemon, in turn, listens to this socket and processes the client's
request, executing the requested Docker operation.
o This socket allows secure communication between the client and the daemon
without exposing the Docker API over a network port.

In summary, the Docker daemon is responsible for managing containers, the Docker client is
the user interface for interacting with Docker, and the Docker socket serves as the
communication bridge between the client and the daemon, enabling users to control and
manage containers and resources on a host system.
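As a quick illustration of that client-daemon channel, you can talk to the Docker API
directly over the socket (a minimal sketch; assumes curl is installed and your user can
read /var/run/docker.sock):

curl --unix-socket /var/run/docker.sock http://localhost/version

This returns the same version information that docker version displays, confirming the
daemon is listening on the socket.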

Examples:

1. Installing Docker:

To install Docker on Ubuntu using the docker.io package, you can follow these steps:

1. Update Package List:

Open a terminal and update the local package index to ensure you have the latest
information about available packages:

sudo apt update


2. Install Docker:

You can install Docker using the docker.io package as follows:


sudo apt install docker.io

3. Start and Enable Docker Service:

After the installation is complete, start the Docker service and enable it to start on
boot:

sudo systemctl start docker


sudo systemctl enable docker

4. Verify Docker Installation:

To verify that Docker has been installed correctly, run the following command:

docker --version

You should see the Docker version information displayed in the terminal.

5. Manage Docker Without Sudo (Optional):

By default, the Docker command requires sudo privileges. If you want to use Docker
without sudo, you can add your user to the "docker" group:

sudo usermod -aG docker $USER

After adding your user to the "docker" group, log out and log back in or run the
following command to apply the group changes without logging out:

newgrp docker

You should now be able to run Docker commands without sudo.

That's it! Docker is now installed on your Ubuntu system using the docker.io package, and
you can start using it to manage containers.
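As a final sanity check, you can run the hello-world test image, which pulls a tiny image
from Docker Hub and prints a confirmation message if the daemon, client, and registry
access are all working:

docker run hello-world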

2. Building a Docker Image:

The following Dockerfile builds an image for a Java application based on the Alpine Linux
image with OpenJDK 17. It copies the application's JAR file into the image and specifies
how to run it as a container. Note that the ENTRYPOINT must reference app.jar, the name
the JAR is given by the COPY instruction, not the original file name:
# Use the OpenJDK 17 Alpine Linux image as the base image
FROM openjdk:17-alpine

# Copy the JAR file from your local system to the image
COPY target/database_service_project-0.0.1.jar app.jar

# Expose port 8080 to the outside world (for networking)
EXPOSE 8080

# Set the entry point for running the application
ENTRYPOINT ["java", "-jar", "app.jar"]
With this Dockerfile, you can build an image for your Java application using the docker
build command, and then run containers based on that image to host your application. Make
sure that the database_service_project-0.0.1.jar file is in the target directory of your project
before building the Docker image.
Here's how you can build the Docker image:

docker build -t my-java-app .


And then you can run a container from the image:

docker run -p 8080:8080 my-java-app


This will start your Java application inside a Docker container, and it will be accessible on
port 8080 of your host machine.
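If the application exposes an HTTP endpoint (an assumption; adjust the path to match your
service), you can verify it responds:

curl http://localhost:8080/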

3. Running a Docker Container:

Suppose you have also built an image for a Python web app (a hypothetical my-python-app
image whose server listens on port 80 inside the container). You can run a container from it:

docker run -d -p 8080:80 my-python-app


This command starts a container in detached mode (-d), mapping port 8080 on your host to
port 80 in the container. Your Python web app should be accessible at http://localhost:8080.

4. Docker Compose Example:

Suppose you have a microservices application with multiple containers. You can use Docker
Compose to manage them together. Here's a simple example with a web app and a database:

docker-compose.yml:

version: '3'

services:
  web:
    image: my-python-app
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: mysecretpassword

Start the application stack using Docker Compose:

docker-compose up -d

This starts both the web and database containers in detached mode.

5. Docker Hub and Pulling Images:

You can find and use existing Docker images from Docker Hub. For example, to pull an
official Nginx image:

docker pull nginx:latest


This command downloads the Nginx image from Docker Hub, making it available for you to
run as a container.
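You can then start a container from the pulled image and check that it serves the default
welcome page:

docker run -d --name web -p 8080:80 nginx:latest
curl http://localhost:8080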

These examples cover the basics of Docker. Docker is a powerful tool that simplifies
application deployment and management, especially in a containerized and microservices
architecture. It allows you to package applications and their dependencies, ensuring
consistency and ease of deployment across different environments.

Docker commands, grouped by their primary functions:

Managing Containers:

1. Run a Container:

docker run [OPTIONS] IMAGE [COMMAND] [ARGS]

2. List Running Containers:

docker ps

3. List All Containers (including stopped ones):

docker ps -a

4. Start a Stopped Container:

docker start CONTAINER_ID

5. Stop a Running Container:

docker stop CONTAINER_ID

6. Restart a Container:

docker restart CONTAINER_ID

7. Remove a Container (stop and delete):

docker rm CONTAINER_ID

8. Execute a Command in a Running Container:

docker exec [OPTIONS] CONTAINER_ID|NAME [COMMAND] [ARGS]

9. Inspect Container Details:


docker inspect CONTAINER_ID

10. Attach to a Running Container's STDIN, STDOUT, and STDERR:

docker attach CONTAINER_ID
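
Tying several of these together, here is a short worked lifecycle (a sketch using the public
nginx image):

docker run -d --name demo nginx
docker ps
docker stop demo
docker ps -a
docker start demo
docker exec -it demo sh
docker stop demo
docker rm demo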

Managing Images:

11. List Docker Images:

docker images

12. Pull an Image from a Registry:

docker pull IMAGE_NAME[:TAG]

13. Build an Image from a Dockerfile (PATH is the build context directory; Docker looks
for a Dockerfile there, or you can point to one with -f):

docker build [OPTIONS] PATH

14. Tag an Image:

docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

15. Remove an Image:

docker rmi IMAGE_ID

16. Search for Images on Docker Hub:

docker search IMAGE_NAME

17. Save an Image to a Tarball File:

docker save -o OUTPUT_FILE.tar IMAGE_NAME[:TAG]

18. Load an Image from a Tarball File:

docker load -i INPUT_FILE.tar
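
For example, a save/load round trip lets you move an image between hosts without a
registry (a sketch; myrepo is a placeholder repository name):

docker pull alpine:latest
docker tag alpine:latest myrepo/alpine:backup
docker save -o alpine-backup.tar myrepo/alpine:backup
docker load -i alpine-backup.tar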

Managing Docker Volumes:

19. List Docker Volumes:

docker volume ls

20. Create a Docker Volume:

docker volume create VOLUME_NAME

21. Inspect a Docker Volume:

docker volume inspect VOLUME_NAME


22. Remove a Docker Volume:

docker volume rm VOLUME_NAME

Managing Networks:

23. List Docker Networks:

docker network ls

24. Create a Docker Network:

docker network create NETWORK_NAME

25. Inspect a Docker Network:

docker network inspect NETWORK_NAME

26. Remove a Docker Network:

docker network rm NETWORK_NAME

Managing Docker Compose:

27. Start Docker Compose Services:

docker-compose up [OPTIONS] [SERVICE...]

28. Stop and Remove Docker Compose Services (containers and networks):

docker-compose down [OPTIONS]

29. Build or Rebuild Docker Compose Services:

docker-compose build [SERVICE...]

30. View Docker Compose Logs:

docker-compose logs [SERVICE...]

Docker Registry and Authentication:

31. Login to a Docker Registry:

docker login [OPTIONS] [SERVER]

32. Logout from a Docker Registry:

docker logout [SERVER]

Miscellaneous Commands:

33. View Docker Version Info:

docker version

34. Check Docker System Information:

docker info

35. Display Docker Disk Usage:

docker system df

36. Monitor Docker Events:

docker events [OPTIONS]

37. Pull and Apply Updates to Docker Swarm Services:

docker service update [OPTIONS] SERVICE

38. Clean Up Unused Resources (Containers, Images, Volumes, Networks):

docker system prune

39. Pause a Running Container:

docker pause CONTAINER_ID

40. Unpause a Paused Container:

docker unpause CONTAINER_ID

41. Inspect Docker Daemon Logs (on systemd-based hosts; docker logs only works for
containers, not the daemon itself):

journalctl -u docker.service

Docker Swarm (Container Orchestration):


42. Initialize a Docker Swarm:

docker swarm init [OPTIONS]

43. Join a Node to a Docker Swarm:

docker swarm join [OPTIONS] HOST:PORT

44. List Nodes in a Docker Swarm:

docker node ls

45. Create a Docker Service:


docker service create [OPTIONS] IMAGE [COMMAND] [ARGS]

46. List Docker Services:

docker service ls

47. Scale a Docker Service:

docker service scale SERVICE=REPLICAS

48. Inspect a Docker Service:

docker service inspect SERVICE

49. Remove a Docker Service:

docker service rm SERVICE

50. Leave a Docker Swarm (Node):

docker swarm leave [OPTIONS]

These are some of the most commonly used Docker commands for managing containers,
images, volumes, networks, and Docker Swarm. Depending on your specific use case, you
may need to use additional commands and options to tailor Docker to your needs.

Docker provides a flexible networking system that allows containers to communicate with
each other and with the outside world. You can create and manage Docker networks using the
Docker CLI. Here are some basic Docker network commands with examples:

1. List Docker Networks: To see a list of all available Docker networks, use the docker
network ls command.

docker network ls

2. Create a Custom Bridge Network: You can create a custom bridge network to
isolate containers from the host network. This is useful when you want containers to
communicate with each other privately.

docker network create my_custom_network

3. Create a Container on a Specific Network: When running a container, you can
specify the network it should connect to using the --network flag.

docker run --name container1 --network my_custom_network -d nginx

4. Inspect Network Details: To view details about a specific network, use the docker
network inspect command.

docker network inspect my_custom_network

5. Create a Container with a Specific IP Address: You can specify a static IP address
for a container within a custom bridge network using the --ip flag. Note that Docker
only honors --ip on networks created with a user-configured subnet (see the sketch
after this section).

docker run --name container2 --network my_custom_network --ip 172.18.0.10 -d nginx

6. Connect an Existing Container to a Network: You can also connect an existing
container to a network using the docker network connect command.

docker network connect my_custom_network container1

7. Disconnect a Container from a Network: To disconnect a container from a
network, use the docker network disconnect command.

docker network disconnect my_custom_network container1

8. Remove a Custom Network: To remove a custom network, use the docker network
rm command. Make sure no containers are using the network before removing it.

docker network rm my_custom_network
These are some common Docker networking commands and examples. Docker provides
various network drivers, such as bridge, host, overlay, and macvlan, which offer different
networking capabilities. Choose the appropriate network driver based on your use case and
requirements.
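
For the static IP example above to work, create the network with an explicit subnet
(a minimal sketch; the address range is an arbitrary choice):

docker network create --subnet 172.18.0.0/16 my_custom_network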

Docker provides various types of networks and network drivers to enable different network
configurations and communication patterns for containers. Here are some of the most
commonly used Docker network types and their associated network drivers:

1. Bridge Network (bridge):
o Description: The default network mode for Docker containers when no
network is specified. It creates an internal private network on the host, and
containers can communicate with each other using container names.
o Use Cases: Suitable for most containerized applications where containers
need to communicate on the same host.

2. Host Network (host):
o Description: Containers share the host network stack, making them directly
accessible from the host and other containers without any network address
translation (NAT).
o Use Cases: High-performance scenarios where containers need to bind to
specific host ports, but it lacks network isolation.

3. Overlay Network (overlay):
o Description: Used in Docker Swarm mode to facilitate communication
between containers running on different nodes in a swarm cluster. It uses
VXLAN encapsulation for inter-node communication.
o Use Cases: Multi-host, multi-container applications orchestrated with Docker
Swarm.

4. Macvlan Network (macvlan):
o Description: Allows containers to have their own MAC addresses and appear
as separate devices on the host network. Each container has a unique network
identity.
o Use Cases: When containers need to be directly on an external network, e.g.,
connecting containers to physical networks or VLANs.

5. None Network (none):
o Description: Containers on this network have no network connectivity. It's
often used for isolated testing or when the container only needs loopback
connectivity.
o Use Cases: Limited use cases, primarily for debugging or security purposes.

6. Custom Bridge Network (user-defined bridge):
o Description: Users can create their own bridge networks to have better
control over container connectivity, DNS resolution, and isolation.
o Use Cases: Isolating containers, customizing DNS settings, or when you need
multiple bridge networks on the same host.

7. A note on overlay2:
o Despite the similar name, overlay2 is not a network driver. It is Docker's
default storage driver, which manages image and container layers on disk,
and it is unrelated to the overlay network driver described above.

8. Cilium Network (cilium):
o Description: Cilium is an open-source networking and security project,
usable as a third-party network plugin, that offers advanced networking
features, including API-aware network security and load balancing.
o Use Cases: Advanced networking and security requirements, often in
Kubernetes environments.

9. A note on "gossip":
o There is no gossip network driver you can select. Docker Swarm uses a
gossip protocol internally to propagate cluster state and overlay network
information between nodes; it is the mechanism behind Swarm coordination
and service discovery rather than a network type.

These network types and drivers provide flexibility and cater to different use cases and
requirements in containerized applications. Choosing the right network type and driver
depends on your application's architecture, networking needs, and deployment environment.
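
As a concrete illustration of the overlay driver (a sketch; it requires initializing Swarm
mode first, and the --attachable flag lets standalone containers join the network):

docker swarm init
docker network create -d overlay --attachable my_overlay
docker run -d --name overlay-test --network my_overlay nginx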

Docker-Compose

Docker Compose is a tool for defining and running multi-container Docker applications. It
allows you to define your application's services, networks, and volumes in a single docker-
compose.yml file, making it easier to manage complex Docker setups. Here's a guide on how
to use Docker Compose with examples:

Install Docker Compose

Before you begin, make sure you have Docker Compose installed. You can download it from
the official Docker Compose website.

Creating a Docker Compose File

Create a docker-compose.yml file in your project directory. This file will define your Docker
services and their configurations.
Here's a simple example that defines two services, a web application using Nginx and a
backend using Node.js:

version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  backend:
    image: node:14
    working_dir: /app
    volumes:
      - ./backend:/app
    command: npm start

In this example:

 version: '3' specifies the Docker Compose file version.
 services section defines two services: web and backend.
 web uses the official Nginx image and maps port 80 of the host to port 80 of the
container.
 backend uses the official Node.js image, sets a working directory, mounts a local
directory as a volume, and specifies a command to run when the container starts.

Docker Compose Commands

Here are some common Docker Compose commands you can use:

1. Start Containers: Start your services defined in the docker-compose.yml file.

docker-compose up

Add the -d flag to run in detached mode (in the background).

docker-compose up -d

2. Stop Containers: Stop the containers defined in the docker-compose.yml file.

docker-compose down

3. View Logs: View the logs of your running containers.

docker-compose logs

4. Build Services: Build or rebuild services (useful when you make changes to your
Dockerfile or source code).

docker-compose build

5. Scale Services: You can scale services by specifying the desired number of replicas.
For example, to run two instances of the backend service:

docker-compose up -d --scale backend=2

6. Execute a Command in a Service: You can execute commands within a specific
service using docker-compose exec. For example, to run a shell in
the backend service:

docker-compose exec backend sh

Cleaning Up

To remove all containers and networks created by Docker Compose, use:

docker-compose down --volumes


This will also remove the volumes associated with your services.

These are some of the basic Docker Compose commands and examples to get you started.
Docker Compose is a powerful tool for managing containerized applications, and you can
define more complex configurations and dependencies in your docker-compose.yml file as
your project evolves.

SAMPLE

# Create the network used by both containers (required before the run commands below)
docker network create mongo-network

# Start MongoDB container
docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=rootuser \
  -e MONGO_INITDB_ROOT_PASSWORD=rootpass \
  --name mongodb \
  --net mongo-network \
  mongo

# Start Mongo Express container
docker run -d \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=rootuser \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=rootpass \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  --name mongo-express \
  --net mongo-network \
  mongo-express

Here's the equivalent Docker Compose file for this setup:

version: '3.5'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=rootuser
      - MONGO_INITDB_ROOT_PASSWORD=rootpass
    networks:
      - mongo-network

  mongo-express:
    image: mongo-express
    container_name: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=rootuser
      - ME_CONFIG_MONGODB_ADMINPASSWORD=rootpass
      - ME_CONFIG_MONGODB_SERVER=mongodb
    restart: unless-stopped
    depends_on:
      - mongodb
    networks:
      - mongo-network

networks:
  mongo-network:
    name: mongo-network

This Docker Compose file defines two services, mongodb and mongo-express, just like the
docker run commands above. It also specifies the necessary environment variables, ports, and
network configurations. To use it, create a docker-compose.yml file in your project directory
and run docker-compose up -d to start the services.
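
Once the stack is up, you can confirm both services are running and watch their output:

docker-compose ps
docker-compose logs mongodb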

Practice Repo

https://github.com/DanielMichalski/responsive-personal-website

DOCKER VOLUMES

Docker volumes are a way to persist data generated or used by Docker containers. They
provide a means to store and manage data separately from the container itself, ensuring that
data persists even if the container is stopped or removed. Docker volumes are commonly
used for scenarios where you need to share data between containers or when you want to
keep data separate from the container's file system.

Here are some key aspects of Docker volumes:

1. Persistent Data: Docker containers are typically ephemeral, meaning their file
systems are isolated and any data generated within a container is lost when the
container is removed. Volumes provide a way to store data outside of containers,
ensuring that it persists across container lifecycle events.
2. Types of Volumes: Docker supports several types of volumes, including named
volumes, host-mounted volumes, and anonymous volumes.
o Named Volumes
o Host-Mounted Volumes
o Anonymous Volumes
3. Volume Management: You can create, list, inspect, and remove volumes using
Docker CLI commands like docker volume create, docker volume ls, docker volume
inspect, and docker volume rm.
4. Using Volumes: To use a volume in a Docker container, you specify the volume's
name or mount point in the container's configuration, typically in a Docker Compose
file or when running docker run with the -v or --volume option.

Here's an example of how to create and use a named volume in Docker:

# Create a named volume
docker volume create mydata

# Run a container and mount the volume
docker run -d --name mycontainer -v mydata:/app/data myimage

# Data in /app/data inside the container is stored in the 'mydata' volume

Using Docker volumes is a common practice for managing data in Dockerized applications,
especially in scenarios where you need to ensure data persistence and share data between
containers.
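
For instance, sharing data between two containers through a named volume (a minimal
sketch using the public alpine image):

docker volume create shared-data

# First container writes a file into the volume
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/hello.txt'

# A second container sees the same file
docker run --rm -v shared-data:/data alpine cat /data/hello.txt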

Docker volumes are used to persist data when containers are created, removed, or stopped.
Here's when data persists when using Docker volumes:

1. Container Restart: If a container is stopped and then restarted, the data stored in
volumes associated with that container will persist. This is useful for ensuring that
your application's data survives container restarts.
2. Container Removal: When you remove a container using docker rm, the data within
the container itself is lost. However, if you have mapped a Docker volume to store
data, that data will persist even after the container is removed. Volumes are separate
from containers, so they can outlive the containers that use them.
3. Container Replacement: If you replace a container with a new one (e.g., updating to
a new version of your application), you can attach the same volume to the new
container, allowing it to access and manipulate the same data.
4. Host System Reboot: Even if the host machine running Docker is rebooted, the data
stored in Docker volumes should remain intact. Docker manages volumes
independently from the host's filesystem.
5. Scaling Containers: When you use Docker Compose or orchestration tools like
Docker Swarm or Kubernetes to scale your application by creating multiple
containers, each container can use the same volume to access and share data.

Docker volume types

Docker supports three main types of volumes for managing persistent data in
containers: host-mounted volumes, anonymous volumes, and named volumes. Here are
examples of each:

1. Host-Mounted Volumes:
o Host-mounted volumes allow you to specify a directory from the host machine
that is mounted into the container. This can be useful when you want to share
data between the host and container.

docker run -v /path/on/host:/path/in/container myapp

Example: Mount the /var/data directory on the host machine to the /data directory in
the container.

docker run -v /var/data:/data myapp

2. Anonymous Volumes:
o Anonymous volumes are created automatically by Docker and are managed
for you. They are typically used when you don't need to manage the volume
explicitly, such as for temporary or cache data.

docker run -v /path/in/container myapp

Example: Create an anonymous volume for a PostgreSQL database container.

docker run -v /var/lib/postgresql/data postgres

3. Named Volumes:
o Named volumes are explicitly created and given a name, making it easier to
manage and share data between containers. They are useful for maintaining
data between container restarts and for sharing data between multiple
containers.

docker volume create mydata
docker run -v mydata:/path/in/container myapp

Example: Create a named volume called mydata and use it to persist data for a web
application container.

docker volume create mydata
docker run -v mydata:/app/data myapp
These are the three main types of Docker volumes, each with its own use cases. You can
choose the one that best fits your requirements based on whether you need to manage the
volume explicitly, share data with the host, or share data between containers.
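
To see where Docker actually stores a named volume on the host, you can query its mount
point:

docker volume inspect --format '{{ .Mountpoint }}' mydata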

EXAMPLE

You can use Docker Compose to set up a MongoDB container and a Mongo Express
container. This example assumes you already have Docker and Docker Compose installed.

Create a directory for your project and create a docker-compose.yml file inside it with the
following content:
version: '3'

services:
  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - mongo-network
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=123
    volumes:
      - mongodb_data:/data/db

  mongo-express:
    image: mongo-express
    container_name: mongo-express
    networks:
      - mongo-network
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=123
      - ME_CONFIG_BASICAUTH_USERNAME=admin
      - ME_CONFIG_BASICAUTH_PASSWORD=123

networks:
  mongo-network:
    driver: bridge

volumes:
  mongodb_data:

In this docker-compose.yml file:

 We define two services: mongodb and mongo-express.
 The mongodb service uses the official MongoDB image and specifies a named
volume mongodb_data for persisting MongoDB data.
 We set environment variables for the MongoDB container to create an initial admin
user with a username and password.
 The mongo-express service uses the official Mongo Express image and connects to
the mongodb service using the ME_CONFIG_MONGODB_SERVER environment
variable.
 We also set environment variables for the Mongo Express container to configure it.

Now, navigate to the directory containing the docker-compose.yml file in your terminal and
run:

docker-compose up

Docker Compose will download the necessary images (if not already downloaded) and start
the MongoDB and Mongo Express containers. You can access the Mongo Express web
interface at http://localhost:8081 and log in using the MongoDB admin credentials you
specified in the docker-compose.yml file.

The data for MongoDB will be stored in a Docker named volume named mongodb_data,
ensuring that it persists even if you stop and remove the containers.

To stop the containers, press Ctrl+C in the terminal where they are running, and then run:

docker-compose down

This will stop and remove the containers, but the data will remain in the named volume for
future use.

Dockerfile
In a Dockerfile, both CMD and ENTRYPOINT are instructions used to specify the command
that should be run when a container is started. However, they serve slightly different
purposes.

1. CMD Instruction:

o The CMD instruction sets the default command and/or parameters for the
container.
o If the Dockerfile contains multiple CMD instructions, only the last one is
effective.
o If a command is specified when running the container (using docker run), it
overrides the CMD instruction.
o The syntax is CMD ["executable","param1","param2"] or CMD command
param1 param2.

Example:

FROM ubuntu
CMD ["echo", "Hello, World!"]

When you run the container without specifying a command:

docker run my-image


It will execute echo Hello, World! by default.

2. ENTRYPOINT Instruction:

o The ENTRYPOINT instruction allows you to configure a container that will
run as an executable.
o If the Dockerfile contains multiple ENTRYPOINT instructions, only the last
one is effective.
o The primary purpose of ENTRYPOINT is to provide the default executable
for the container. Arguments passed to docker run are appended to it, and the
executable itself can only be replaced with the --entrypoint flag.
o The syntax is similar to CMD: ENTRYPOINT ["executable", "param1",
"param2"] or ENTRYPOINT command param1 param2.

Example:

FROM ubuntu
ENTRYPOINT ["echo", "Hello"]

When you run the container without specifying a command:

docker run my-image World!


It will execute echo Hello World!.
Arguments on the command line are appended to the ENTRYPOINT rather than replacing
it; to override the ENTRYPOINT itself you must use the --entrypoint flag:

docker run --entrypoint echo my-image Goodbye

It will execute echo Goodbye.
In summary, CMD is used to provide defaults for an executing container, and it can be
overridden by specifying a command at runtime. On the other hand, ENTRYPOINT is used
to set the default executable for the container, and it can also be overridden at runtime. In
some cases, both CMD and ENTRYPOINT can be used together to provide a default
command with optional arguments.
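
A minimal sketch of that combination: ENTRYPOINT fixes the executable, while CMD
supplies default arguments that runtime arguments replace.

FROM ubuntu
ENTRYPOINT ["echo"]
CMD ["Hello, default"]

Running docker run my-image prints Hello, default, while docker run my-image Goodbye
prints Goodbye.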

Docker issue (permission denied on /var/run/docker.sock)

sudo chmod 666 /var/run/docker.sock

sudo systemctl restart docker

Note that chmod 666 makes the socket writable by every user on the host; adding your user
to the docker group (shown earlier) is the safer long-term fix.

Trivy install steps

sudo apt-get install wget apt-transport-https gnupg lsb-release

wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null

echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list

sudo apt-get update

sudo apt-get install trivy -y

Pipeline

pipeline {
    agent any

    tools {
        jdk 'jdk17'
        maven 'maven3'
    }

    environment {
        SONARQUBE_HOME = tool 'sonar-scanner'
    }

    stages {
        stage('Git CheckOut') {
            steps {
                git 'https://github.com/jaiswaladi2468/BoardgameListingWebApp.git'
            }
        }

        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }

        stage('Unit Tests') {
            steps {
                sh "mvn test"
            }
        }

        stage('Package') {
            steps {
                sh "mvn package"
            }
        }

        stage('OWASP Dependency Check') {
            steps {
                dependencyCheck additionalArguments: '--scan ./', odcInstallation: 'DC'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }

        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh '''$SONARQUBE_HOME/bin/sonar-scanner \
                        -Dsonar.projectName=Boardgame \
                        -Dsonar.projectKey=Boardgame \
                        -Dsonar.java.binaries=.'''
                }
            }
        }

        stage('Quality Gate') {
            steps {
                waitForQualityGate abortPipeline: false
            }
        }

        stage('Deploy Artifacts To Nexus') {
            steps {
                withMaven(globalMavenSettingsConfig: 'global-maven-settings', jdk: 'jdk17', maven: 'maven3', mavenSettingsConfig: '', traceability: false) {
                    nexusArtifactUploader artifacts: [[artifactId: 'database_service_project', classifier: '', file: '/var/lib/jenkins/workspace/Full-stack-CICD/target/database_service_project-0.0.1.jar', type: 'jar']], credentialsId: 'nx', groupId: 'com.javaproject', nexusUrl: '43.204.25.115:8081/', nexusVersion: 'nexus3', protocol: 'http', repository: 'maven-releases', version: '0.0.1'
                }
            }
        }

        stage('Deploy Artifacts') {
            steps {
                withMaven(globalMavenSettingsConfig: 'global-maven-settings', jdk: 'jdk17', maven: 'maven3', mavenSettingsConfig: '', traceability: false) {
                    sh "mvn deploy"
                }
            }
        }

        stage('Docker Build Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        sh "docker build -t boardwebapp:latest ."
                        sh "docker tag boardwebapp:latest adijaiswal/boardwebapp:latest"
                    }
                }
            }
        }

        stage('Trivy Image Scan') {
            steps {
                sh "trivy image adijaiswal/boardwebapp:latest"
            }
        }

        stage('Docker Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        sh "docker push adijaiswal/boardwebapp:latest"
                    }
                }
            }
        }

        stage('Deploy application to container') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        sh "docker run -d -p 8085:8080 adijaiswal/boardwebapp:latest"
                    }
                }
            }
        }
    }
}
