
How to get file paths in a Docker container on Linux

1. How to get file paths in a Docker container

1.1 Use the docker cp command

The docker cp command is used to copy files or directories between the host and a container. We can use it to copy files from the container to the host, or to copy files from the host into the container.

Copy files from container to host

docker cp <container ID or name>:/path/to/file /path/on/host

For example, copy /app/ from the container named mycontainer to the host's /tmp/ directory:

docker cp mycontainer:/app/ /tmp/

Copy files from host into container

docker cp /path/on/host <container ID or name>:/path/to/destination

For example, copy the host's /tmp/ into the /app/ directory of the container named mycontainer:

docker cp /tmp/ mycontainer:/app/

1.2 Use the docker exec command combined with shell commands

If we want to execute commands inside the container to view or manipulate files, we can use the docker exec command. For example, we can use the ls command to list files in the container.

docker exec -it <container ID or name> ls /path/to/directory

For example, list the contents of the /app/ directory in the container named mycontainer:

docker exec -it mycontainer ls /app/

If we want to use the cat command inside the container to view a file's contents:

docker exec -it <container ID or name> cat /path/to/file

For example, view the contents of /app/ in the container named mycontainer:

docker exec -it mycontainer cat /app/

1.3 Using Docker Volumes

If we often need to share files or directories between the host and the container, we might consider using Docker volumes. Docker volumes are special directories that can be mounted by containers and can be shared among multiple containers. By mounting a volume into a container, we can easily share files between the host and the container.

To create a volume and mount it into a container, we can use the -v or --volume option of the docker run command. The exact usage depends on our needs.
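
As a hedged illustration (the paths, the volume name mydata, and the ubuntu base image are assumptions used only for the example), mounting might look like this:

# Bind-mount a host directory into the container at /app
docker run -it -v /path/on/host:/app ubuntu bash

# Or create a named volume managed by Docker and mount it
docker volume create mydata
docker run -it -v mydata:/app ubuntu bash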

1.4 Using other tools or methods

In addition to the above methods, there are other tools that can help us access or manipulate files in Docker containers, such as using nsenter, nsinit, or similar tools to enter the container's namespaces. However, these methods are generally more complex than the ones above and require more in-depth knowledge of Docker and Linux. In most cases, the docker cp and docker exec commands should be sufficient for our needs.
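
For instance, a minimal nsenter sketch (reusing the mycontainer name and /app path from the earlier examples; the container must be running and root access on the host is required) could look like this:

# Find the PID of the container's main process, then enter its namespaces and list /app
PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid ls /app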

2. What is Docker

Docker is an open-source containerization platform that allows developers to package an application and its dependencies into a portable container, publish it to any Linux machine, and ensure that the application runs consistently across environments. Containers use Linux kernel features such as cgroups and namespaces to isolate system resources such as processes, file systems, and networks, giving each application an independent running environment.

2.1 The main components of Docker

(1) Docker Engine:

Docker Engine is a client-server application that includes a daemon (dockerd), a REST API, and a command-line interface (CLI). The CLI interacts with Docker containers through the Docker daemon, which is responsible for managing (starting, stopping, building, etc.) containers.

(2) Docker image:

A Docker image is a read-only template used to create Docker containers, including all the code, libraries, configuration files, and so on needed to run the application. An image can be regarded as a static, immutable file that can be defined and built through a Dockerfile.

(3) Docker Container:

A Docker container is created from a Docker image and is a runnable instance of that image. The container contains all the dependencies the application needs to run and is isolated from the host and from other containers. Containers can be started, stopped, and deleted without affecting the host or other containers.

(4) Dockerfile:

A Dockerfile is a text file that defines how to build a Docker image automatically. By specifying a series of instructions and parameters, a Dockerfile lets the image build process run automatically, simplifying the creation and management of images (a short example follows this component list).

(5) Docker Repository:

A Docker repository is a repository used to store Docker images, similar to a code repository. Docker officially provides a public Docker Hub repository where developers can publish their own images or download images published by other developers from the repository. In addition, enterprises can also build private Docker repositories to store and manage their own images.
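
To make the Dockerfile and image concepts above concrete, here is a minimal, illustrative sketch for a hypothetical Python application; the base image, file names (app.py, requirements.txt), and the myapp:1.0 tag are assumptions, not part of the original article:

# Write a minimal Dockerfile for a hypothetical Python application
cat > Dockerfile <<'EOF'
# Start from a small official base image
FROM python:3.12-slim
# Set the working directory inside the image
WORKDIR /app
# Copy the application source and install its dependencies
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
# Default command when a container starts from this image
CMD ["python", "app.py"]
EOF

# Build an image from the Dockerfile and tag it
docker build -t myapp:1.0 .

Running docker run --rm myapp:1.0 would then start a container from this image.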

2.2 Docker's main advantages

(1) Portability: Docker containers can run on any Docker-enabled Linux machine without worrying about environment differences and dependencies.

(2) Isolation: Docker containers use Linux kernel features to achieve resource isolation, ensuring that applications run independently in their containers without interfering with each other.

(3) Lightweight: Compared with virtual machines, Docker containers share the host's operating system and kernel, making them lighter and faster to start.

(4) Automation: Through tools such as Dockerfile and Docker Compose, image building, deployment, and management can be automated to improve development efficiency.

(5) Security: Docker provides a variety of security features, such as image signing and container access control, to help applications run safely in containers.

3. Which scenarios is Docker suitable for

Docker is suitable for a variety of scenarios, especially those that require rapid deployment, isolation, portability, and version control. The following are some common application scenarios for Docker:

(1) Microservice architecture:

In a microservice architecture, Docker can be used to deploy and manage a large number of standalone services. Each service can be packaged into a Docker container and can be managed and extended through container orchestration tools such as Kubernetes.

(2) Development environment:

Docker can provide developers with a consistent development environment. By using Docker images, developers can make sure that the environment on their local machines is exactly the same as the production environment, which helps reduce the “it works on my machine” problem.

(3) Test environment:

Docker can quickly create and destroy test environments. Testers can create a Docker container for each test scenario and test it in the container. Once the test is completed, the container can be destroyed to create a clean environment for the next test scenario.

(4) Continuous Integration/Continuous Deployment (CI/CD):

Docker containers can be easily integrated into CI/CD processes. When the code is submitted to the code base, the CI system can automatically build the Docker image and deploy the image to the production environment through the CD system. This ensures that code changes can be deployed quickly and reliably to production environments.

(5) Application packaging and distribution:

Docker containers can be used as the packaging and distribution format for applications. By packing the application and its dependencies into a Docker image, developers can ensure that the application runs consistently across all environments without worrying about dependency conflicts or environment differences.

(6) Multi-tenant environment:

In a multi-tenant environment, Docker can be used to isolate different tenants. By using Docker containers, each tenant can be given an independent, isolated runtime environment, ensuring that tenants' data and resources do not interfere with one another.

(7) Data Science and Machine Learning:

Docker can provide a consistent environment for data scientists and machine learning engineers to train and deploy models. By packaging data science tools and libraries into Docker containers, you can ensure that models run consistently across different environments.

(8) Cloud native applications:

Docker is an important part of cloud-native applications. Cloud-native applications are applications designed and built specifically for cloud environments that take advantage of the elasticity and scalability provided by the cloud. Cloud-native applications can be easily built, deployed, and managed by using Docker and container orchestration tools.

(9) Hybrid and multi-cloud environments:

In hybrid and multi-cloud environments, Docker can be used to ensure consistency and portability of applications across different cloud providers. By using Docker images, you can ensure that applications run consistently on any Docker-enabled cloud platform.

4. How to install Docker

When installing Docker, Windows and Ubuntu have different steps and requirements. Here is a tutorial on installing Docker for these two operating systems:

4.1 Windows installation Docker tutorial

4.1.1 Preparation phase

(1) Check system requirements:

Make sure our Windows version is Professional or Enterprise and has the Anniversary Update (version 1607) or later.

Docker supports Windows 10, Windows Server 2016, and Windows Server 2019.

(2) Enable Hyper-V (if not enabled):

Open Control Panel > Programs > Turn Windows features on or off.

Check "Hyper-V" and confirm the changes.

4.1.2 Install Docker

(1) Download Docker Desktop:

Visit the official Docker website and download the Docker Desktop installer for Windows.

(2) Run the installation package:

Double-click the downloaded Docker Desktop installer.

Follow the installation wizard, clicking "Next" until the installation completes.

(3) Start Docker Desktop:

After the installation is complete, double-click the Docker Desktop icon on the desktop to launch it.

(4) Verify the installation:

Open a command prompt (cmd) or PowerShell.

Enter docker -v and press Enter. If you see Docker's version number, the installation was successful.

4.1.3 Settings (optional)

Configure image acceleration: You can consider configuring Docker's image acceleration to improve image download speed. In the settings of Docker Desktop, select Docker Engine and add the Alibaba Cloud Accelerator address in the JSON configuration file.
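
For example, the Docker Engine configuration accepts a registry-mirrors entry; the address below is only a placeholder and should be replaced with your own accelerator URL:

{
  "registry-mirrors": ["https://your-accelerator-id.mirror.aliyuncs.com"]
}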

4.2 Ubuntu installation Docker tutorial

4.2.1 Preparation phase

Update packages: Open a terminal and enter sudo apt update and sudo apt upgrade to refresh the Ubuntu package list and upgrade installed software.

4.2.2 Uninstall the old version of Docker (if installed)

Check and uninstall old versions: Enter sudo apt-get remove docker docker-engine containerd runc to uninstall any old versions of Docker and related components that may exist.

4.2.3 Install Docker

(1) Installation dependencies:

Enter sudo apt-get install ca-certificates curl gnupg lsb-release to install Docker's required dependencies.

(2) Add Docker official GPG key:

Enter curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - to add Docker's official GPG key.

(3) Add Docker software source:

Enter sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" to add Docker's package repository.

(4) Install Docker:

Enter sudo apt-get update to refresh the package list.

Enter sudo apt-get install docker-ce docker-ce-cli to install Docker.

(5) Start Docker:

Enter sudo systemctl start docker to start the Docker service.
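
Optionally, also enter sudo systemctl enable docker so that the Docker service starts automatically at boot.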

(6) Verify installation:

Enter sudo docker run hello-world. If you see "Hello from Docker!" in the output, Docker has been installed and is running successfully.

The above is a detailed tutorial on installing Docker on Windows and Ubuntu systems. Please note that since Docker and operating system versions may be updated, it is best to refer to the official Docker documentation or the latest information provided by the relevant community for installation.

4.3 Overview of commonly used Docker commands

Docker provides many command-line tools to manage containers and images. Here are some commonly used Docker commands:

  • docker run: Run a container.

  • docker stop: Stop one or more running containers.

  • docker start: Start one or more containers that have been stopped.

  • docker rm: Delete one or more containers.

  • docker ps: List running containers.

  • docker images: List all images.

  • docker pull: Pull an image from a Docker registry.

  • docker push: Push an image to a Docker registry.

  • docker build: Build a new image based on Dockerfile.
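
To tie these commands together, here is a short, illustrative workflow; the nginx image is used only as a convenient example, and the container name web and the port mapping are assumptions:

# Pull the official nginx image and run it in the background, mapping host port 8080 to container port 80
docker pull nginx
docker run -d --name web -p 8080:80 nginx

# List running containers, then stop and remove the one we started
docker ps
docker stop web
docker rm web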

5. How to apply Docker in actual projects

When applying Docker in a real-life project, here are some practical suggestions to help you understand and implement Docker technology more clearly:

5.1 Understand project requirements

(1) Clarify the goal: First, determine why Docker is being introduced into the project: to improve deployment efficiency, achieve environment consistency, support a microservice architecture, and so on.

(2) Assess the impact: Analyze the possible impact of the introduction of Docker on the project, including the adjustment of the technology stack, changes in the development process, etc.

5.2 Select the right Docker image

(1) Lightweight base images: Use lightweight base images such as Alpine Linux to reduce image size and improve startup speed.

(2) Official images: Prefer officially provided images, as they are usually subject to rigorous testing and security review.

5.3 Build and deploy Docker containers

(1) Dockerfile writing:

  • Follow the principle of least privilege and avoid running containers as the root user.

  • Reduce the number of image layers by merging RUN commands or using multi-stage builds (see the sketch after this list).

(2)Docker Compose: For scenarios where multiple containers need to work together, use Docker Compose for container orchestration.

(3) Automated deployment: Use automation tools such as Jenkins to implement CI/CD pipelines for building and deploying images.
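
As a sketch of the Dockerfile advice in point (1) above, here is a multi-stage build that runs as a non-root user, assuming a Go application with a single main package at the project root; the image tags, paths, and the appuser name are illustrative, not prescriptive:

cat > Dockerfile <<'EOF'
# Build stage: compile the application with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Runtime stage: copy only the compiled binary into a minimal Alpine image
FROM alpine:3.20
COPY --from=build /bin/app /usr/local/bin/app
# Create and switch to a non-root user, following the least-privilege principle
RUN adduser -D appuser
USER appuser
CMD ["/usr/local/bin/app"]
EOF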

5.4 Optimize Docker container performance

(1) Resource limits: Limit container resources such as CPU and memory so that the application still runs properly when resources are constrained (see the example after this list).

(2) Network optimization: Select the appropriate network driver and configuration, such as bridge mode or overlay network, to optimize the network performance of the container.

(3) Cache and volumes: Use Docker's build cache and volumes sensibly to avoid repeated downloads and rebuilds and to improve data read/write efficiency.
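
A hedged example of the resource and network settings above; the values, the web/api container names, and the myapp:1.0 tag are illustrative:

# Limit a container to 1.5 CPUs and 512 MB of memory
docker run -d --name web --cpus="1.5" --memory="512m" nginx

# Create a user-defined bridge network and attach a container to it
docker network create mynet
docker run -d --name api --network mynet myapp:1.0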

5.5 Security considerations

(1) Principle of least privilege: Specify a non-root user in the Dockerfile to run the container, limiting the impact of a potential attack (see the example after this list).

(2) Security options: Set the security_opt option in the Docker Compose file to apply additional security settings to containers.

(3) Regular cleanup: Regularly clean up unused containers and images to free up storage space and avoid potential security risks.
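
For illustration (the UID/GID and the alpine image are arbitrary), the least-privilege and cleanup points above might translate into commands like these:

# Run a container as a non-root user with the no-new-privileges option
docker run --rm --user 1000:1000 --security-opt no-new-privileges:true alpine id

# Periodically remove stopped containers, unused images, networks, and build cache (prompts for confirmation)
docker system prune -a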

5.6 Monitoring and logging

(1) Monitoring tools: Use monitoring tools such as cAdvisor and Prometheus to monitor the running status and performance indicators of Docker containers.

(2) Log collection: Collect container logs into a centralized storage and analysis system through Docker's log drivers (such as json-file, syslog, etc.); see the example below.
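
For example (the container names web and web2 and the size limit are assumptions), resource usage and logs can be inspected, and log output capped, like this:

# One-off snapshot of CPU and memory usage for running containers
docker stats --no-stream

# Follow the last 100 lines of a container's logs
docker logs -f --tail 100 web

# Start a container whose json-file logs are capped at 10 MB per file
docker run -d --name web2 --log-driver json-file --log-opt max-size=10m nginx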

5.7 Backup and Restore

(1) Backup Strategy: Develop backup policies for Docker images, containers and data to ensure data security and recoverability.

(2) Recovery process: Define the recovery process and steps in the event of a failure or data loss, including restoring images and data from backups, etc.
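
A minimal sketch of such backup and restore operations, reusing the illustrative myapp:1.0 image and mydata volume names from earlier examples:

# Save an image to a tar archive and load it back later
docker save -o myapp.tar myapp:1.0
docker load -i myapp.tar

# Back up the contents of a named volume into the current directory via a throwaway container
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine tar czf /backup/mydata.tar.gz -C /data .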

5.8 Training and documentation

(1) Training developers: Provide developers with Docker-related training and documentation support to ensure that they can use Docker to develop and deploy effectively.

(2) Maintain documentation: Write and maintain detailed documentation and best-practice guides on Docker usage for the team to reference.

By following the above suggestions and adjusting and optimizing in combination with the actual situation of the project, you can better apply Docker technology in actual projects.

6. What is the difference between Docker and a virtual machine

There are significant differences between Docker and virtual machines in many respects. The following summarizes their differences point by point:

(1) Startup speed:

  • Docker containers start quickly, typically within seconds.

  • Virtual machines usually take several minutes to start because they require the entire operating system to be started.

(2) Performance loss:

  • Docker requires fewer resources because it virtualizes at the operating-system level; containers interact with the kernel directly, with little performance loss.

  • A virtual machine runs an additional complete operating system and therefore consumes more system resources, including CPU, memory, and disk space.

(3) Isolation:

  • Docker provides process-level isolation, which is relatively weak. Docker containers share the host's operating system, so containers can potentially access and affect one another.

  • Virtual machines provide system-level isolation; each virtual machine runs in an independent environment and does not affect the others.

(4) Resource utilization:

  • Docker is lighter: containers share the host kernel and can share application libraries, so they occupy very little memory.

  • Virtual machines need to run a complete operating system, so their use of resources is relatively inefficient.

(5) Portability:

  • Docker containers can run on almost any platform, including virtual machines, physical machines, public clouds, private clouds, personal computers, servers, etc.

  • Virtual machines can also run on different platforms, but compatibility and performance issues often need to be considered.

(6) Deployment and extension:

  • Docker supports efficient deployment and scaling: containers can be started quickly from images, improving application deployment efficiency.

  • Virtual machines are relatively slow to deploy and scale because they require the entire operating system to be started.

(7) Security:

  • Docker is relatively weak here because containers share the kernel, file system, and other resources with the host, so a problem in one container is more likely to affect other containers and the host.

  • Virtual machines provide better isolation, so they are relatively safe.

To sum up, Docker and virtual machines each have advantages and disadvantages. Docker is favored for being lightweight, fast to start, resource-efficient, and highly portable, and is especially suitable for elastic cloud platforms and automated operations. Virtual machines are better suited to scenarios that require strong isolation and full system-level compatibility. Which technology to choose depends on the specific business needs and scenario.

The above is a detailed look at the methods for getting file paths in a Docker container on Linux. For more information about accessing Docker container files on Linux, please see my other related articles!