Commit 4bec613897 by Eugen Rochko (2016-09-21): Fix #24 - Thread resolving for remote statuses
This is a big one, so let me enumerate:

Accounts as well as stream entry pages now contain Link headers that
reference the Atom feed and Webfinger URL for the former, and the Atom entry
for the latter. So you only need to HEAD those resources to get that
information; there is no need to download and parse HTML <link>s.
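
As a rough illustration of that discovery step (not the actual implementation; the function name and structure are made up for this sketch), pulling the Atom URL out of a Link header with plain Net::HTTP could look like this:

```ruby
require 'net/http'
require 'uri'

# Hypothetical sketch: HEAD a profile or status page and read the Atom URL
# out of its Link header instead of downloading and parsing the HTML.
def discover_atom_url(page_url)
  uri = URI.parse(page_url)

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.head(uri.request_uri)
  end

  link_header = response['Link']
  return nil if link_header.nil?

  # A Link header looks like:
  # <https://example.com/users/alice.atom>; rel="alternate"; type="application/atom+xml"
  link_header.split(',').each do |link|
    target, *params = link.split(';').map(&:strip)
    return target[/<(.+)>/, 1] if params.any? { |p| p.include?('application/atom+xml') }
  end

  nil
end
```

A real implementation would also have to handle multiple Link values, relative URLs and redirects; the point here is only that one HEAD request is enough.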

ProcessFeedService will now queue ThreadResolveWorker for each remote
status that it cannot find otherwise. Furthermore, entries are now
processed in reverse order (from bottom to top) in case a newer entry
references a chronologically previous one.

ThreadResolveWorker uses FetchRemoteStatusService to obtain a status
and attach the child status it was queued for to it.
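
A minimal sketch of how such a worker could be wired up with Sidekiq, which the project already uses for background jobs (the class names come from the description above, but the method body and the association name are assumptions):

```ruby
# Illustrative sketch only, not the exact worker code.
class ThreadResolveWorker
  include Sidekiq::Worker

  def perform(child_status_id, parent_url)
    child  = Status.find(child_status_id)
    parent = FetchRemoteStatusService.new.call(parent_url)

    # Attach the child to the freshly fetched parent; the reply association
    # is assumed to be called `thread` in this sketch.
    child.update(thread: parent) unless parent.nil?
  end
end
```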

FetchRemoteStatusService looks up the URL, first with a HEAD request, and
tests whether the resource is an Atom feed, in which case it is processed
directly. Next it looks for Link headers pointing to the Atom feed, in which
case that is fetched and processed. Lastly, if the resource is HTML, it is
checked for <link>s to the Atom feed, and if one is found, that is fetched
and processed. The account for the status is derived from the author/name
element in the XML and the hostname in the URL (domain).
FollowRemoteAccountService and ProcessFeedService are used.
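
The lookup order described above could be sketched roughly as follows. This is a simplified, self-contained illustration using Net::HTTP and Nokogiri, not the actual service:

```ruby
require 'net/http'
require 'uri'
require 'nokogiri'

# Simplified sketch of the lookup order described above; not the real service.
class FetchRemoteStatusSketch
  ATOM_TYPE = 'application/atom+xml'.freeze

  def call(url)
    head = request(url, :head)

    if atom?(head)
      process_atom(fetch_body(url))          # 1. the URL itself is an Atom document
    elsif (atom_url = atom_url_from_link_header(head))
      process_atom(fetch_body(atom_url))     # 2. a Link header points to the feed
    else
      atom_url = atom_url_from_html(fetch_body(url))
      process_atom(fetch_body(atom_url)) if atom_url # 3. fall back to HTML <link>s
    end
  end

  private

  def request(url, verb)
    uri = URI.parse(url)
    Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
      verb == :head ? http.head(uri.request_uri) : http.get(uri.request_uri)
    end
  end

  def fetch_body(url)
    request(url, :get).body
  end

  def atom?(response)
    response['Content-Type'].to_s.include?(ATOM_TYPE)
  end

  def atom_url_from_link_header(response)
    link = response['Link'].to_s.split(',').find { |l| l.include?(ATOM_TYPE) }
    link && link[/<([^>]+)>/, 1]
  end

  def atom_url_from_html(html)
    node = Nokogiri::HTML(html).at_css(%(link[rel="alternate"][type="#{ATOM_TYPE}"]))
    node && node['href']
  end

  def process_atom(xml)
    # The real code would hand the XML to ProcessFeedService together with the
    # account derived from the author/name element and the URL's hostname.
    xml
  end
end
```

The real FetchRemoteStatusService additionally handles errors and redirects, and performs the account lookup via FollowRemoteAccountService as described above.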

This means that threads are potentially resolved recursively until a dead end
is encountered; however, this is performed asynchronously over background
jobs, so it should be ok.

Mastodon


Mastodon is a federated microblogging engine and an alternative implementation of the GNU Social project, based on ActivityStreams, Webfinger, PubSubHubbub and Salmon.

The focus of the project is on a clean REST API and a good user interface. Ruby on Rails is used for the back-end, while React.js and Redux are used for the dynamic front-end. A static front-end for public resources (profiles and statuses) is also provided.

If you would like, you can support the development of this project on Patreon.

The current status of the project is early development.


Status

  • GNU Social users can follow Mastodon users
  • Mastodon users can follow GNU Social users
  • Retweets, favourites, mentions, replies work in both directions
  • Public pages for profiles and single statuses
  • Sign up, login, forgotten passwords and changing password
  • Mentions and URLs converted to links in statuses
  • REST API, including home and mention timelines
  • OAuth2 provider system for the API
  • Upload header image for profile page
  • Deleting statuses, deletion propagation
  • Real-time timelines via Websockets

Configuration

  • LOCAL_DOMAIN should be the domain/hostname of your instance. This is absolutely required, as it is used to generate unique IDs for everything federation-related.
  • LOCAL_HTTPS should be set to true if HTTPS works on your website. This is used to generate canonical URLs, which is also important when generating and parsing federation-related IDs.
  • HUB_URL should be the URL of the PubSubHubbub service that your instance is going to use. By default it is the open service of Superfeedr.

Consult the example configuration file, .env.production.sample, for the full list.
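
For instance, a minimal set of values might look like this (example values; the hub shown is Superfeedr's open service mentioned above):

LOCAL_DOMAIN=social.example.com
LOCAL_HTTPS=true
HUB_URL=https://pubsubhubbub.superfeedr.com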

Requirements

  • PostgreSQL
  • Redis

Running with Docker and Docker-Compose

The project now includes a Dockerfile and a docker-compose.yml. You need to turn .env.production.sample into .env.production with all the variables set before you can run:

docker-compose build

And then start the containers:

docker-compose up -d

As usual, the first thing you need to do is run the migrations:

docker-compose run web rake db:migrate

And since the instance in the container runs in production mode, you also need to pre-compile the assets:

docker-compose run web rake assets:precompile

The container has two volumes, for the assets and for user uploads. The default docker-compose.yml maps them to the repository's public/assets and public/system directories; you may wish to put them somewhere else. Likewise, the PostgreSQL and Redis images have data containers that you may wish to map to locations you know how to find and back up.
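
For example, a hypothetical override of the web container's volume mappings might look like this (the host paths and the in-container path are assumptions; check the shipped docker-compose.yml before copying it):

web:
  volumes:
    - /var/lib/mastodon/assets:/mastodon/public/assets
    - /var/lib/mastodon/system:/mastodon/public/system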

Updating

This approach makes updating to the latest version a real breeze. First, pull down the updates:

git pull

Then re-run

docker-compose build

And finally,

docker-compose up -d

This will re-create the updated containers, leaving databases and data as they are. Depending on which files have been updated, you might need to re-run migrations and asset compilation.