
update Docker section of README (#1231)

- Re-ordered the steps so it doesn't read "Do this, but first, do this other step"
- Added a note about keeping the REDIS_* and DB_* settings as they are for Docker use
- Added which variables you will NEED to set to make Mastodon work
- Added how to generate the secrets
- Added how to connect to your Mastodon instance
- Added a note to read the production guide
Eric Blade 2017-04-11 19:14:56 -04:00 committed by Eugen
parent 40bdf43297
commit 3442bc0ea3


@@ -67,23 +67,53 @@ Consult the example configuration file, `.env.production.sample`, for the full list
[![](https://images.microbadger.com/badges/version/gargron/mastodon.svg)](https://microbadger.com/images/gargron/mastodon "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/gargron/mastodon.svg)](https://microbadger.com/images/gargron/mastodon "Get your own image badge on microbadger.com")

The project now includes a `Dockerfile` and a `docker-compose.yml` file (which requires at least docker-compose version `1.10.0`).

Review the settings in `docker-compose.yml`. Note that, by default, the PostgreSQL and Redis databases are not stored in a persistent storage location, so you may need or want to adjust the settings there.
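
For example, persistence is usually enabled by mapping each database service's data directory to a host path. A minimal sketch, assuming the stock `db` and `redis` services (the `./postgres` and `./redis` host paths are illustrative; check your own `docker-compose.yml`):

    db:
      restart: always
      image: postgres:alpine
      volumes:
        - ./postgres:/var/lib/postgresql/data  # persist PostgreSQL data on the host

    redis:
      restart: always
      image: redis:alpine
      volumes:
        - ./redis:/data  # persist Redis data on the host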

Before running Mastodon for the first time, you need to build the images:

    docker-compose build

Then, you need to fill in the `.env.production` file:

    cp .env.production.sample .env.production
    vi .env.production

Do NOT change the `REDIS_*` or `DB_*` settings when running with the default Docker configuration.

You will need to fill in, at least: `LOCAL_DOMAIN`, `LOCAL_HTTPS`, `PAPERCLIP_SECRET`, `SECRET_KEY_BASE`, `OTP_SECRET`, and the `SMTP_*` settings. To generate `PAPERCLIP_SECRET`, `SECRET_KEY_BASE`, and `OTP_SECRET`, you may use:

    docker-compose run --rm web rake secret

Do this once for each of those keys, and copy the result into the appropriate field in the `.env.production` file.
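
For illustration, a filled-in `.env.production` fragment might look like the following. The variable names come from `.env.production.sample`; the domain and credential values are placeholders, and each secret should be a separate output of `rake secret`:

    LOCAL_DOMAIN=example.com
    LOCAL_HTTPS=true
    # paste a separate `rake secret` output into each of these
    PAPERCLIP_SECRET=<output of rake secret>
    SECRET_KEY_BASE=<output of rake secret>
    OTP_SECRET=<output of rake secret>
    # SMTP settings for outgoing mail (values are examples)
    SMTP_SERVER=smtp.example.com
    SMTP_PORT=587
    SMTP_LOGIN=mastodon@example.com
    SMTP_PASSWORD=<your SMTP password>
    SMTP_FROM_ADDRESS=notifications@example.com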

Then you should run the `db:migrate` command to create the database, or migrate it from an older release:

    docker-compose run --rm web rails db:migrate

Then, you will also need to precompile the assets:

    docker-compose run --rm web rails assets:precompile

before you can launch the docker image with:

    docker-compose up

If you wish to run this as a daemon process instead of monitoring it on the console, use instead:

    docker-compose up -d
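
When running detached, you can still check on the instance with standard docker-compose commands (nothing Mastodon-specific is assumed here):

    docker-compose ps           # show the status of each container
    docker-compose logs -f web  # follow the web container's logs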

Then you can log in to your new Mastodon instance by browsing to http(s)://(yourhost):3000/

Following that, make sure that you read the [production guide](docs/Running-Mastodon/Production-guide.md). You are probably going to want to understand how to configure NGINX to make your Mastodon instance available to the rest of the world.
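
As a rough illustration of what that involves, a reverse proxy in front of the web container might look like the NGINX sketch below. The server name, certificate paths, and upstream port are assumptions (the default `docker-compose.yml` publishes the web service on port 3000), and a real setup also needs to proxy the streaming API; the production guide has the authoritative configuration:

    server {
      listen 443 ssl;
      server_name example.com;

      # certificate paths are illustrative
      ssl_certificate     /etc/ssl/certs/example.com.pem;
      ssl_certificate_key /etc/ssl/private/example.com.key;

      location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # forward to the web container published on localhost:3000
        proxy_pass http://localhost:3000;
      }
    }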

The container has two volumes, for the assets and for user uploads, and optionally two more, for the PostgreSQL and Redis databases. The default `docker-compose.yml` maps the first two to the repository's `public/assets` and `public/system` directories; you may wish to put them somewhere else. Likewise, the PostgreSQL and Redis images have data volumes that you may wish to map to a location where you know how to find them and back them up.
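
For example, remapping the web container's volumes to a dedicated host directory might look like this sketch (the `/var/mastodon/...` host paths are illustrative; the `/mastodon/public/...` container paths assume the stock image layout):

    web:
      # other settings unchanged
      volumes:
        - /var/mastodon/assets:/mastodon/public/assets  # compiled assets
        - /var/mastodon/system:/mastodon/public/system  # user uploads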

**Note**: The `--rm` option for docker-compose will remove the container that is created to run a one-off command after it completes. As data is stored in volumes, it is not affected by that container clean-up.